Credit: IMS Luxembourg

The latest advancements in artificial intelligence are a source of both fascination and boundless hope, yet they also provoke the greatest fears. The implications are vast, spanning democratic, social, environmental, and geopolitical realms. Under what conditions can this innovation deliver on its promises and meet the major challenges of our time?

Who hasn’t been immediately struck by the performance of generative AI the first time they entered a prompt into ChatGPT? It took OpenAI’s chatbot just five days to attract its first million users, and only two months to reach the 100 million mark. Two years after the launch of this tool, and despite the many errors and ‘hallucinations’ it still generates, the astonishing speed of its adoption is about more than mere curiosity: it is a clear public acknowledgment of a tool that is unprecedented in its versatility – from text generation and synthesis to brainstorming and translation, the possibilities seem endless. And that, of course, is only the tip of the iceberg. Generative artificial intelligence is already delivering breakthroughs in fields as diverse as pharmaceutical research, precision agriculture, personalised education, and energy consumption optimisation.

The downgraded social animal?

While technology promises to enhance our abilities in the vision of an augmented humanity, Tristan Harris, a computer scientist and ethicist well known in Silicon Valley and co-founder of the Center for Humane Technology, invites us to consider the opposite: that it may, in fact, degrade human beings, both in their individuality and in their social functioning. In his presentation How Technology is Downgrading Humans, he reminds us of the dangers we encountered during our first significant interaction with AI, which began through social media platforms: diminished attention spans, social isolation, the culture of post-truth, political polarisation, and extremism. The impacts on society are numerous and interconnected, stemming from a business model driven by the relentless pursuit of attention yet devoid of proper governance.

Digital Naives vs. Prophets of the Apocalypse

In the face of this major technological disruption, media debates fall into stark polarisation. The 'digital naives' are set against the prophets of the apocalypse. The former place blind faith in AI, nurturing grand hopes – even fantasies of immortality – through a near-magical belief in techno-solutionism. They dismiss any criticism of these technological developments by labelling sceptics as anti-progress dinosaurs. At the opposite extreme, the doomsayers raise the spectre of an apocalyptic future, demonising AI and attributing malicious intent to it. They foresee a takeover of humanity by this 'intelligence' – often anthropomorphised – believing such a fate to be inevitable.

Undoubtedly, both of these extremes are no more than fanciful narratives that foreclose a more productive debate on this highly significant issue. The first myth to debunk: we are not, in fact, talking about 'intelligence' at all. As Aurélie Jean, a specialist in computational sciences, points out, it is far more accurate to speak of algorithms, and it is crucial to understand both their potential and their limitations. "The machine only masters analytical intelligence," she explains, whereas humans possess emotional, creative, and practical intelligence. The belief that algorithmic models could somehow create, or even gain consciousness, as theorised in the singularity hypothesis, is therefore misguided. The so-called 'great replacement' of humans by machines is not on the horizon.

Yet, these algorithmic models are already embedded in our daily lives, often unnoticed: they are used in weather forecasting, fraud detection in banking, and real-time navigation apps. And the pace of innovation is accelerating. This is an entirely new technological wave, and its development is far outpacing experts’ predictions. Recent research on the transformative potential of AI is converging around a clear consensus: it is poised to revolutionise vast sectors of both our economy and society. A recent study by J.P. Morgan suggests that generative AI could add between 7 and 10 trillion dollars – up to 10% – to global GDP. According to a report by Implement Consulting Group, this technology, if rapidly adopted, could boost Luxembourg’s GDP by 9% within a decade. Amid these projections, the critical question remains: what are the tangible benefits? Purpose is central here. While the growth projections are impressive, what real impact will this technology have? It is not about being anti-tech but rather about adopting and deploying these technologies with discernment – in other words, being techno-lucid – because, without careful consideration, there are many potential pitfalls ahead.

Credit: datacenters.com

Irish data centers could account for 35% of the country’s energy consumption by 2026.

Intelligence that is anything but artificial

The full environmental impact of AI is still being assessed, as the sector continues to evolve, but the figures available are staggering. They are intrinsically tied to the very nature of the industry: the race for ever-greater computing power.

The corollary of this relentless competition is the dramatic rise in the number of data centres worldwide, from 500,000 in 2012 to 8 million today. According to Synergy Research, the number of new-generation, high-capacity centres is expected to triple within the next six years. The International Energy Agency has even warned that by 2026, data centres in Ireland could consume up to 35% of the country’s energy supply – a nation that has positioned itself at the forefront of this industry, raising concerns about the risk of overloading the electricity grid during harsh winters. This is hardly surprising when one considers that a single query on an AI assistant tool like ChatGPT consumes between 10 and 30 times more electricity than a traditional search-engine query.

As Lou Welgryn, co-president of Data for Good, notes, "algorithms are putting an already carbon-intensive economy on steroids." Her organisation estimates that the widespread deployment of AI could result in an additional 2% of emissions per year, whereas an annual reduction of 7% is needed to limit global warming. The current AI boom is already undermining the ambitious carbon-neutrality goals set by major Big Tech companies. But it is not just the volume of emissions that is concerning; it is the speed of growth. AI development is soaring, while the expansion of renewable energy sources is struggling to keep pace.

Another significant concern is water use. Water is essential not only for cooling data centres, but also in the manufacturing of hardware. According to researchers at the University of California, Riverside, the global demand for AI could result in water withdrawal of between 4.2 and 6.6 billion cubic metres by 2027 – more than the total annual water consumption of 4 to 6 countries the size of Denmark. In a world where water scarcity is already a critical issue in many regions, this is becoming a major challenge, highlighting the potential for resource conflict.

Despite its name, AI is anything but "artificial" in its impact – it places numerous planetary boundaries under considerable strain. Alongside the environmental costs of energy consumption and water use, there is also the issue of rare minerals and electronic waste, with just 22% of such waste currently being recycled. The United Nations Environment Programme (UNEP) calls for a thorough assessment of AI’s environmental impact throughout its entire life cycle. As UNEP’s Chief Digital Officer, Golestan Sally Radwan, points out, “There is still much we don’t know about the environmental impact of AI but some of the data we do have is concerning. We need to make sure the net effect of AI on the planet is positive before we deploy the technology on a large scale.”

Algorithmic injustices

"I am a doctor," appears on the screen. A moment later, a bearded white man’s face is displayed. "No, I’m a woman," the text reads. The system corrects itself, and a young white woman’s face appears. The text then continues, "I’m black." This sequence comes from an audiovisual project built on text-to-image latent diffusion model (LDM) technology. The experimental film "My Word", created in 2023 by Barcelona-based director Carme Puche Moré, explores the implicit biases within technology. And there are many: biases related to gender, race, age, language, and more. These biases stem largely from the training data, as well as from the models themselves and their validation processes. Algorithms reflect the human prejudices present in the data – and, more troublingly, they often amplify them significantly.

This phenomenon is known as algorithmic injustice, as these 'artificial' disparities in representation not only shape perceptions but also give rise to tangible inequalities in the real world. Take, for example, age bias, which can exclude certain demographics from AI-based recruitment processes. Similarly, facial recognition technology used in some countries has demonstrated higher error rates in identifying black or Asian individuals. In 2020, American Robert Williams was wrongfully arrested for watch theft after the software mistakenly matched his driving licence photo with that of the actual offender.

While Big Tech companies are making efforts to correct biases in their models by promoting more inclusive, non-stereotypical representations of underrepresented populations, the challenge remains immense. Ultimately, the code will never be fully neutral; algorithms will always reflect the biases of those who design them.

Credit: McKinsey & Company

Future Skills Need: Skills of today vs. skills of tomorrow in Europe and the US. Surveyed executives report rising demand for technological and advanced cognitive skills.

What kind of work tomorrow?

The impact of artificial intelligence on the workplace is substantial. At the frontline, of course, are click workers – the hidden, unacknowledged segment of an industry generating staggering profits. According to researchers at Oxford University, their number is estimated at 165 million worldwide. They train the models, clean the databases, or moderate the content. A report published by the United Nations' International Labour Organization condemns the precarious conditions faced by this invisible workforce: staggered or night-time shifts, piecework pay that often falls below the legal minimum wage in their country, and exposure to psychological distress linked to the violent content they have to manage. These "poor workers 2.0" are found across the globe, and contrary to popular belief, they are also present in developed economies, where they combine these hours with other low-paying jobs.

Beyond the direct effects of the algorithm-driven industry, AI is also ushering in a profound transformation of the labour market. According to a 2024 international study by BCG, 43% of respondents already use generative AI regularly at work, and overall confidence in the tool is steadily increasing. By automating repetitive tasks, assisting with decision-making, and enhancing operational efficiency, AI enables employees to focus on more engaging and strategic responsibilities. For 58% of users surveyed, generative AI saves at least five hours each week. In practice, the rapid adoption of this technology has been primarily driven by employees themselves, while companies have been slower to implement it in a structured manner. In Luxembourg, only 14% of organisations have adopted these systems, compared to 35% of employees. A sharp acceleration in adoption is therefore anticipated in the near future, bringing with it a fundamental shift in the structure of work.

This raises the pressing question: which jobs will be altered or even disappear entirely? Half of AI users believe their jobs could become obsolete within the next decade. A recent McKinsey report suggests that, by 2030, up to 30% of current working hours could be automated or accelerated by generative AI. As a result, Europe could experience up to 12 million job transitions, representing 6.5% of the current workforce. This estimate aligns closely with that of Implement Consulting Group, which predicts that 6% of jobs in Luxembourg are at risk, while 72% of jobs will be enhanced by AI without becoming obsolete. These forecasts raise profound questions about the future of work and the skills required to thrive in this rapidly evolving landscape. Technological acceleration demands a strategy for talent redefinition and highlights an unprecedented need for training, especially through upskilling and reskilling programmes. According to McKinsey, companies are planning to retrain a third of their current workforce to address skills mismatches.

Credit: Implement Consulting Group

This includes cross-border workers residing in Germany, Belgium and France, who make up almost half of the total of 515,000 jobs.

Human resources departments are at the heart of this technological revolution, not only because they must anticipate the vast transformation of talent, but also because they are themselves major users of AI. A study by Gartner reveals that 80% of companies use AI in at least one HR process: performance analysis (such as continuous feedback and productivity tracking), talent management (including career development plans), or recruitment (such as CV screening and pre-interviews). Vigilance is crucial, of course, as AI carries numerous risks, including algorithmic biases, data protection issues, excessive surveillance, and the potential disengagement of employees. Companies must identify the specific pitfalls of AI to ensure best practice in its deployment and establish a solid governance framework for its use.

Perhaps most importantly, the application of AI in HR compels us to reflect on the very notion of conformity: do we really want to conform to the rules set by algorithms when writing our CVs? Do we really want to conform when we record video responses during a simulated interview with an employer-bot that will assess our interpersonal skills based on the algorithmic score assigned to our smile? What happens when we entrust a machine with the selection of candidates, asking AI to differentiate between human applicants? Are we sacrificing the element of surprise, creativity, unforeseen skills, relational chemistry, or, more simply, the uniqueness of a potential future employee?

Credit: X, @skyferrori / Midjourney

This viral image of the Pope wearing a Balenciaga jacket was generated using the Midjourney software.

The need for appropriate governance

The Center for Humane Technology reflects on the approach that has dominated to date: “For years, Silicon Valley has operated with a ‘move fast and break things’ mentality. But as we’ve seen, it’s not just technology that breaks. By the time people understand the negative externalities of a new platform, product, or service, the harms can be difficult to reverse.” This highlights the need for a governance framework that includes a broad range of stakeholders (see our interview with Marc Faddoul), but it also points to the temporal challenge posed by the accelerating pace of technological advancement. Institutions are attempting to regulate after the fact, struggling to keep up with the speed of IT innovation. Yet this remains a crucial issue, as the algorithmic industry is exploiting an unprecedented playing field across numerous sectors.

Europe, with the AI Act, is taking a pioneering role: it is the first regulatory body to establish a comprehensive legal framework for artificial intelligence. The aim is to ensure that AI systems are reliable, ethical, transparent, and secure. Special attention has been given to safeguarding fundamental rights, particularly with respect to surveillance and privacy concerns, and a risk-based approach has been adopted. The law prohibits applications and systems deemed to carry unacceptable risks, such as government social scoring of the kind implemented in China. Other applications considered high-risk – such as those in the justice, recruitment, and transport sectors – are subject to specific legal requirements. As a result, the European regulation will impose new compliance obligations on companies, including start-ups. The challenge lies in finding the right balance between regulation and fostering innovation, because, while AI presents certain dangers, it also holds the potential for groundbreaking advances and can provide solutions to some of the most pressing challenges of our time.

We are thus entering our second major encounter with AI without having resolved the initial issues, which are now set to be amplified. This is because the driving force behind the rapid deployment of AI is the race to market. Both businesses and governments are reluctant to risk falling behind by failing to adopt this technology, fearing the loss of a significant competitive or geopolitical edge. As Asma Mhalla points out in her book Technopolitique (Technopolitics), there is a convergence of interests between Big Tech and Big States on this matter. Yet, the risks associated with this technology are unprecedented. The list of negative consequences is indeed staggering: a decline in certain cognitive abilities, rampant misinformation, manipulated identities, fabricated child pornography, exponential fraud and crime, neglected languages and exclusion from diversity, destabilised nations, and the rise of cyber and automated biological weapons. The challenge is that we are already confronted with the cognitive and institutional limits of both our human brains and our organisations – the capacity to process this overwhelming flood of data and to mitigate the substantial risks it poses. Given the pace of technological advances, an increasing number of voices are now joining Harris in urging us to accelerate our efforts to establish governance that is fit for purpose.

Towards positive-impact AI

Artificial intelligence holds immense potential as a powerful tool for addressing some of society’s most pressing challenges. The examples are multiplying. In the healthcare sector, notable progress is being made in diagnostics and early detection. In the humanitarian field, the needs arising from natural disasters can now be anticipated, enabling aid to be better calibrated and delivered more swiftly. AI is also transforming personalised education. In the environmental domain, new systems are helping to reduce food waste in shops, hotels, and restaurants. Precision agriculture allows for monitoring of crops and soil moisture levels, thereby reducing the need for water and other resources. Algorithms are also being used to detect suspicious movements, aiding in the fight against poaching and illegal fishing. Carbon emissions in buildings can also be reduced through algorithms that analyse energy consumption in real time. The ecosystem surrounding these areas is highly vibrant, ranging from start-ups to tech giants like Microsoft’s AI for Good Lab.

The question, then, is not 'For or against AI?' but rather 'For what purpose?' What positive impact can AI have on our lives and on the planet? It is essential to focus AI innovation on addressing the major challenges of our time. In this regard, the approach taken by the European Investment Bank (EIB), which is investing in AI for positive impact, is particularly inspiring. The institution is using AI to analyse innovative data sources, including text and satellite images, to guide its financing decisions, improve its decision-making processes, and assess the impact of its operations. For example, the technology enables the bank to identify areas where projects aimed at reducing methane emissions would have the greatest effect, thus determining the most effective investment strategies. It also enables in-depth analysis of operations and of the measures needed to protect marine biodiversity. Additionally, AI plays a decisive role in assessing drought and flood risks, which shapes the bank's mitigation strategies. These are just a few examples of how AI is being harnessed for the common good.

AI technology is undoubtedly powerful, but before fully embracing it, various stakeholders must confront the question of its purpose. On an individual level, the question might be, “Is generating virtual images of cats really worth the immense energy and water consumption?” But more importantly, on a collective level, we must ask, “What innovation strategy are we defining, and what impact do we aim to achieve?” This is an eminently political question, but companies are equally concerned. Businesses are at a crossroads, and the complexity of the risk-opportunity balance underscores the urgent need for AI-specific governance, with a multi-stakeholder board dedicated to these issues. Yet, too few companies have such structures in place today, despite the indisputable need for safeguards.

Credit: IMS Luxembourg

"The machine excels only in analytical intelligence," notes AI researcher Aurélie Jean. This is a call to nurture our uniquely human intelligence.

Human Intelligence

This is a time for discernment. The debate must now move into the public arena, and it is incumbent upon everyone to engage with the subject. Young people from across the globe are calling for this in a manifesto entitled "The Future We Want: Perspectives of Over 5,000 Young People on A.I. for our Society" (Youth Talks on AI, 2024). They advocate for a balanced and ethical approach to technology. Above all, they emphasise the importance of guarding against overconfidence in, and consequently overdependence on, AI. In doing so, they highlight the need to preserve our capacity for collaboration and critical thinking. At a time when artificial intelligence is receiving significant investment and attention, nurturing human intelligence becomes a top priority. It is the only way to ensure the thoughtful and responsible deployment of technology.