Marc Faddoul is an AI researcher and an expert in recommendation systems and algorithmic auditing. As director and co-founder of AI Forensics, a digital rights organisation, he frequently advises regulators, including the European Commission, on AI ethics and platform accountability.

What made you aware of the issue of algorithmic opacity?

The first project I worked on involved YouTube's recommendation algorithm. At the time, it was facing considerable criticism for promoting conspiracy theories. After a long period of denial, YouTube eventually acknowledged the issue and promised to reduce the spread of such content. But its word could not simply be taken at face value, and the algorithm itself was opaque. It therefore became essential to actually measure the proportion of conspiracy theories being recommended in order to verify that commitment.

I initially conducted this study in an academic framework at the University of California, Berkeley. This experience also exposed me to the practical and operational challenges of carrying out such audits, which led me to the idea of founding AI Forensics. This non-profit organisation is dedicated to investigating algorithms and holding platforms accountable for the societal impact of their systems.

AI FORENSICS IS DEDICATED TO INVESTIGATING ALGORITHMS TO HOLD PLATFORMS ACCOUNTABLE FOR THE IMPACT OF THEIR SYSTEMS ON SOCIETY

Isn’t the lack of transparency in algorithms inherent to artificial intelligence, especially when engineers themselves admit their inability to explain certain models that operate like a 'black box'?

Yes, indeed. Part of the system is fundamentally uninterpretable, a characteristic of deep learning – the primary paradigm behind many artificial intelligence systems, particularly recommendation. However, other aspects, such as the overall design of the algorithm, could be made more transparent: what data is used to train it, which optimisation metrics are chosen, and how the various objectives are weighted against one another. None of this opacity is necessary, and the platforms could and should make this information accessible to the public.

Are major technology companies then deliberately opaque in their algorithmic practices?

Quite so. Generally, they justify this opacity on the grounds of business confidentiality, but in reality, it also serves to limit public scrutiny. For researchers to carry out studies like the ones I conducted on YouTube, access to the data is essential. Some platforms have implemented data-access mechanisms of varying generosity and functionality. For a long time, X (formerly Twitter) was the best in this regard, offering researchers relatively broad access to its data, which is why, for a considerable period, more research was conducted on X than on other platforms. Since its acquisition by Elon Musk, however, this dynamic has been completely reversed. Today, X has become one of the most opaque platforms, having chosen to monetise access to its data as a core component of its business model. As a result, many researchers who previously used Twitter for sociological studies can no longer do so.

How do you conduct investigations, and how do you manage to analyse algorithmic mechanisms from the outside? 

One of our areas of expertise is obtaining ‘adversarial’ data, especially when access to official data provided by platforms is restricted. In such cases, we develop alternative methods to conduct quantitative audits.

These methods include platform scraping – retrieving the content displayed on a webpage while simulating real user behaviour – and querying other publicly accessible APIs (application programming interfaces). These techniques allow us to perform behavioural audits of recommendation systems.
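In essence, a quantitative behavioural audit of this kind boils down to repeatedly collecting the recommendations shown to simulated users and measuring how often each category of content appears. The sketch below illustrates that tallying logic only; the fetch function and data are hypothetical stand-ins, not any platform's actual API.

```python
# Minimal sketch of a behavioural audit's measurement step, assuming we
# already have some way (scraping or a public API) to obtain the list of
# items recommended for a given query. All names and data are hypothetical.
from collections import Counter

def audit_recommendations(fetch_recommendations, queries, classify):
    """Return the share of each content category among all observed
    recommendations.

    fetch_recommendations: callable mapping a query to a list of item ids
    classify: callable mapping an item id to a category label
    """
    counts = Counter()
    total = 0
    for query in queries:
        for item in fetch_recommendations(query):
            counts[classify(item)] += 1
            total += 1
    return {label: n / total for label, n in counts.items()}

# Mock responses standing in for what a scraper would collect
mock_feed = {
    "moon landing": ["doc1", "conspiracy1", "doc2"],
    "climate": ["doc3", "doc4", "conspiracy2"],
}
flagged = {"conspiracy1", "conspiracy2"}  # items a classifier would flag

shares = audit_recommendations(
    lambda q: mock_feed[q],
    ["moon landing", "climate"],
    lambda item: "conspiratorial" if item in flagged else "other",
)
print(shares)  # proportions of flagged vs. other recommendations
```

Running such a collection before and after a platform's announced policy change is what allows a commitment – like YouTube's pledge to reduce conspiracy-theory recommendations – to be checked empirically rather than taken on trust.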

What investigation has particularly stood out to you as part of your research?

Last year, we carried out an investigation into the moderation of political advertising on Meta (Facebook and Instagram). We discovered a systematic failure in moderation, which enabled influence campaigns, as political ads were not properly flagged as such on the platform. This loophole was exploited by a pro-Russian propaganda network, which used it extensively to spread disinformation among European users, particularly during the campaigns for the European Parliament elections last June. Thousands of fake accounts, mostly orchestrated from Russia, disseminated political messages, such as attempts to discredit aid to Ukraine. As a result of our study, the European Commission initiated an official investigation into Meta, based on the new European legislation, the Digital Services Act.

You mention that social network algorithms are becoming increasingly paternalistic. What do you mean by this?

By ‘paternalistic’, I mean that modern applications are progressively limiting the user's freedom of choice. In the past, social networks allowed users to select the content they wished to consume, which was largely based on their explicit preferences. However, with the rise of platforms like TikTok, social networks have shifted towards purely algorithmic recommendations, where users have less and less control, and the algorithm now dictates the content it believes is most likely to generate user engagement.

MODERN APPLICATIONS ARE INCREASINGLY REDUCING USERS' FREEDOM OF CHOICE
Credit: Mohamed Nohassi, Unsplash

“Generative AI can be used to manage fake accounts that interact directly with users based on psychological profiles: this is the massification of personalisation.”

How does generative AI exacerbate issues such as misinformation?

Generative AI introduces and amplifies several issues beyond those posed by content distribution algorithms through recommendation systems. In particular, it enables the creation of misleading and illicit content, such as visual or audio deepfakes. Moreover, generative AI can be used to manage fake accounts and bots that interact directly with users, engaging in personalised conversations or sending targeted content, all designed to influence opinions based on the psychological profiles and interests revealed by users. This represents the massification of personalisation.

Aren't we witnessing a genuine delegation of power, with AI’s growing influence over our choices and, more broadly, our democratic processes?

Yes, absolutely. I would say we are indeed seeing a real delegation of power to the recommendation systems and artificial intelligence that distribute information online, acting as gatekeepers in the age of social media. This power, once held by editorial teams at major newspapers, has now shifted to algorithms, like those used by YouTube, which can limit or amplify the content users consume. And this influence over the distribution and prioritisation of information is often even greater than before.

WE CAN TALK ABOUT A TRUE DELEGATION OF POWER TO RECOMMENDATION SYSTEMS AND ARTIFICIAL INTELLIGENCES THAT DISTRIBUTE INFORMATION ONLINE

Do you see yourself as a counterpower in this context?

The true counterpower today, at least in Europe, is the European Commission, which, through some of the world’s most ambitious legislation, has established a particularly robust regulatory framework to tightly control platform practices. This approach is one that should be emulated. However, once these regulations are in place, the challenge becomes ensuring their effective enforcement. For our part, we play a supporting role to the European Commission by acting as a monitoring entity, identifying and reporting any shortcomings or failures of platforms. In some cases, these shortcomings can lead to significant fines and requirements to change practices, as we saw in the case of electoral interference I mentioned, where Meta faced legal action following our report.

What would you recommend as a positive way forward?

Personally, I would advocate an obligation for platforms to ensure some degree of interoperability in their systems. This would mean moving away from the current opaque models, where users are confined to ecosystems with limited choices, especially when it comes to algorithms. We are championing the idea of algorithmic pluralism, in which platforms would offer alternative recommendation systems. To make this a reality, regulations promoting interoperability must be put in place. Platforms like Bluesky have already adopted such an approach, setting a positive example for a more open ecosystem. This is a promising prospect.

Credit: IMS Luxembourg

“We advocate for the idea of algorithmic pluralism.” Marc Faddoul at the Luxembourg Sustainability Forum 2024. Watch the replay of his talk (https://www.youtube.com/watch?v=QYAVknIzXnE).