
[ARTICLE] The Bronner report makes new proposals about social media

By Rachel Griffin

On 11 January 2022, Gérald Bronner – a sociology professor at the Université Paris-Diderot – submitted an important report entitled “Les Lumières à l’ère numérique” (“Enlightenment in the Digital Age”) to President Emmanuel Macron. Commissioned by Macron in September 2021, the report primarily addresses issues around mis- and disinformation on social media; it also discusses other types of harmful online content, such as hate speech, and more abstract issues such as the promotion of healthy democratic debate. The report proposes reforms which would be quite far-reaching if implemented, and could influence France’s positions in the ongoing negotiations over the EU’s major legislative project in this area, the Digital Services Act.

Bronner led a 14-member commission made up of academics, journalists, historians and representatives of civil society organisations. They interviewed representatives of platforms as well as other academic experts and high-profile critics of social media companies, such as the Facebook whistleblower Frances Haugen. As set out by Macron, the report’s objectives were to provide a summary of current issues around social media that could serve to educate the public; to offer policy recommendations; to propose ideas for new, pro-democratic online public spaces; and to develop a historical and geopolitical analysis of France’s exposure to global threats relating to the online informational environment, as well as potential countermeasures.

The report’s key motif is that of Enlightenment (Lumières). It opens with a quote from philosopher Immanuel Kant’s 1784 essay ‘What is Enlightenment?’ on the need for individuals to ‘dare to know’, question established wisdom, and have the courage to develop their own understanding of the world. This choice may seem ironic, since the report then dedicates more than 100 pages to the dangers that arise when people refuse to take experts at their word and instead make up their own minds about things like the benefits of Covid-19 vaccines. However, the report’s overall vision of the public sphere seems rather consistent with that of Kant’s essay, which praises the supposedly enlightened despotism of Prussia’s King Frederick II and recommends that authoritarian leaders should permit a degree of free debate while maintaining tight political control. The report’s vision of a flourishing online public sphere – one which brings individuals together in democratic debate, yet is highly regulated by a state which exercises tight control over the publication of ‘false news’ and guards against foreign security threats – seems to resonate quite well with such a philosophy.

What topics does the report address?

The report is wide-ranging, addressing seven key sub-themes. First, the psychosocial mechanisms that make people vulnerable to believing false information. This section emphasises in particular people’s (lack of) abilities to critically assess the reliability of information, and the tendency to believe information more strongly after repeated exposure. The latter represents a particular problem in the context of social media, since many social media recommendation systems work on the principle that if someone enjoys content on a particular topic, they should see more of it. The effects of algorithmic curation on information flows represent the second sub-theme. While the report notes that evidence on this point is uncertain and contradictory, it in particular criticises the ‘popularity bias’ – the principle that content which users are already engaging with should be further promoted to other users – which often enables sensationalist, extreme or controversial content to go viral. 

Third, the report examines the economic logics shaping how hate and disinformation spread online: the actors spreading them are often supported by, and even partly or wholly motivated by, the advertising revenue that such content can bring in. 

Fourth, it discusses the strategic dissemination of disinformation by foreign actors. Security threats arising from such activities in contexts such as elections are a major point of emphasis in the report – which even has a section titled ‘The Militarisation of Informational Space’ – and were picked up by Macron in his official response, discussed below. 

Fifth, the report evaluates the legal regulation of digital information markets. Here, it positively assesses the currently applicable French law – and does not propose major changes, presumably because such issues are already in the process of being regulated by the Digital Services Act, which aims to harmonise regulation across EU member states. 

Sixth, it investigates possible policies to enable individuals to make sense of information online. Indeed, it argues that stronger individual capabilities to evaluate the credibility and reliability of information are ‘without a doubt the best response’ to disinformation. In particular, it calls for better promotion of critical thinking and media literacy in the education system, which should train students in the values of the Enlightenment. It also highlights the role of traditional media in promoting reliable information.

Finally, in its conclusion the report suggests that the disruptive trends already discussed are just the beginning; the information environment will continue to change rapidly and continuing vigilance, research and regulatory responses will be needed. Ultimately, the report strikes an optimistic tone, suggesting that the digital information environment can offer a ‘new form of digital citizenship’, new possibilities for collaboration and collective intelligence, and positive spaces for democratic debate.

What does the report recommend?

The report makes 30 key recommendations, to which President Macron has given his general endorsement, though it remains to be seen which will materialise as policies and in what form. A few of the most notable are discussed here.

First, the report strongly emphasises promoting critical thinking and media literacy in the education system: it recommends creating a new dedicated body to put together standardised protocols, materials etc. for classes in these areas at all levels of the school system. 

Second, noting that empirical research in many of the areas discussed is lacking, ambiguous or contradictory and that much of it skews towards the US context, it calls for more research on how platforms affect informational environments. While this should soon be facilitated by the requirements in the Digital Services Act to share more data with academic researchers, the report also calls for more structured dialogue and cooperation between platforms and researchers.

Third, the report calls for private actors to take stronger measures to prevent actors spreading disinformation from benefiting from algorithmic promotion or advertising revenue. For example, state policies on corporate social responsibility should encourage advertisers and adtech companies to conduct more due diligence on where their ads appear (and thus, which actors they are funding).

Fourth, in the context of geopolitical and security issues, it proposes forming a co-regulatory group within the OECD for representatives of member states to work directly with platforms on security threats relating to disinformation. 

Fifth, it calls for more intense public pressure – through both legal and informal channels – on the major platforms to be more socially responsible, which it seems to define as giving more weight to public interest considerations in areas such as the design of their algorithms, the design of user interfaces and the active promotion of socially beneficial content. For example, users should easily be able to opt out of algorithms which favour the most popular content. UX design should also be regulated to prevent ‘dark patterns’ which manipulate or mislead users into making certain choices (something we are all familiar with from website cookie banners which let you accept commercial tracking by clicking on a single brightly-coloured button, but require you to navigate through multiple complex screens to reject it). The report is vague on the details of such regulation, but suggests that regulatory bodies need more industry-specific and interdisciplinary expertise to adequately oversee platform design.

Finally, the report recommends retaining – and indeed, says that France should congratulate itself for – Article 27 of the 1881 press regulation law, which criminalises the publication of false news if it is published in bad faith and is of a nature to disturb public order. This provision has been heavily criticised and is highly dubious from the perspective of international human rights law. As the report acknowledges, Article 27 does not even require that the information concerned actually disturbs public order, only that it in principle could – making the provision’s scope very flexible, and potentially broad enough to target large amounts of legitimate journalism and commentary. Notably, although the report claims that ‘criminal sanctions are an essential instrument in the fight against disinformation phenomena’, it does not back this up with any examples or evidence as to how such sanctions have been used or had positive effects so far.

The report also suggests adding corresponding civil liability provisions, specifying that anyone who digitally distributes news that they know to be false and harmful to the interests of others could be civilly liable – which would also apply to platforms, once such content is brought to their attention. It even suggests that France should push to amend the Digital Services Act to require platforms to delete content meeting the conditions of Article 27 across the EU. 

What are the report’s implications?

Given the vague and malleable definition of ‘false content that disturbs public order’, the retention and potentially EU-wide extension of the criminalisation of false news raises serious human rights concerns. Such provisions could become a powerful instrument of state censorship. The same is true of the proposal for extremely broad civil liability for distributing any kind of false information deemed harmful to others. Under human rights treaties including the European Convention on Human Rights, the right to freedom of expression unambiguously includes the right to make false statements; while this right can be proportionately restricted to serve other legitimate aims, international human rights law generally does not accept prohibitions that are as broadly and vaguely defined as Article 27, or restrictions that are not justified by a clear social harm. Despite the report’s rhetorical emphasis on democracy, its recommendations in this area display a markedly authoritarian character and are perhaps the most concerning part of the report.

Interestingly, the report also makes heavy use of the term infox – a rough French equivalent of the term ‘fake news’, which has been largely discredited in English-language research and commentary on disinformation, because its vagueness and accusatory tone make it such a useful rhetorical tool for authoritarian and far-right politicians. Connoting a blend of information and intoxication, the term infox presents false information as something sinister, poisonous and powerful. Like the report’s securitised and militarised language, it plays into the often hyperbolic tone of public discussions around disinformation and the broader rise of ‘internet threat’ discourse used to justify greater state regulation of online speech.

Although the report’s introduction notes that disinformation and conspiracy theories thrive in a broader socioeconomic context of precarity and destabilisation – for example, in areas with higher unemployment rates – this is never mentioned again and is not addressed in the report’s recommendations. In general, there is a disconnect between the report’s discussion of the ‘information environment’ people encounter online and the broader political and social environment of which digital media are just one aspect. For example, the report emphasises the government’s desire to crack down on online hate speech, but does not consider how the incidence of online hate speech might be impacted by the broader political discourse. Indeed, politicians can themselves be the source of hate speech, as is frequently the case in France concerning Muslims, particularly in the context of the current presidential campaign. Evidence suggests that the use of such rhetoric by politicians encourages hate speech by citizens, both on- and offline. An approach like the Bronner report’s, which appears to consider harmful speech on social media as a technical problem isolated from broader social and political divisions, is unlikely to adequately address the issue.

More generally, the report arguably betrays what public policy scholar Ben Green recently conceptualised as a solutionist approach to tech regulation. As famously described by Evgeny Morozov, technosolutionist attitudes conceive social problems in reductively technical terms, insisting that they can be solved through technological innovation without addressing their underlying social and political causes. Green suggests that much of the debate around tech ethics and regulation follows a similar pattern, in which not technology itself, but superficial tech ethics initiatives like consultations and codes of conduct play the role of providing too-easy solutions that avoid engaging with difficult social and political problems. 

In this case, except for the parts dealing with media literacy, the report mostly focuses on solutions involving the platforms themselves, or how they are regulated, rather than the broader social context. Even media literacy education is to some extent treated as a silver bullet. There is little discussion of the concrete mechanisms by which it would address disinformation, and no engagement with arguments that people are motivated to share information not only by their understanding of truth or falsity, but also by powerful emotional dynamics that are unlikely to be affected by better information-comprehension skills. Many of the report’s recommendations also follow a ‘create a new expert body to oversee X’ template, with few concrete ideas for what such bodies could actually do – arguably simply kicking the can down the road and making it someone else’s problem.

On the day of the report’s publication, Macron responded with a speech which in particular picked up on its securitisation discourse, blaming ‘foreign propaganda’ for spreading disinformation. His speech was delivered before France’s press associations, and placed particular emphasis on the role of the press ecosystem in combating disinformation and foreign influence operations. Macron called for a stronger system of press self-regulation, in which the press industry would identify ‘reliable’ media to be promoted. According to Charis Papaevangelou, who researches the links between platforms, politics and the media at the Université Paul Sabatier, relying on press self-regulation to address disinformation could be promising, but raises many questions. ‘Doesn’t this imply that existing self-regulatory frameworks, like codes of journalistic ethics, have already failed? I believe we need more robust cooperative regulatory frameworks.’ These should not only include voices from within the press industry – which are often dominated by the biggest media companies – but also academics and civil society experts, and should be transparent to the public. ‘We can’t demand that platforms open up their data, share them with researchers etc., but then not at least try to apply some of those same standards to the media industry,’ Papaevangelou suggests.

As well as calling for stronger press self-regulation, Macron positioned himself as a defender of the press more generally, for example by promising strict enforcement of the new ‘neighbouring rights’ for press publishers to demand more revenue from platforms introduced in the EU’s 2019 Copyright Directive. With a tense election campaign underway, Macron’s response to the report and its proposals will likely be influenced by the need to stay on the good side of the press and to position himself as a strong leader who can stand up to American ‘big tech’. He has welcomed the report’s recommendations in general terms, but has not yet indicated concretely which might be prioritised and implemented. Which of the report’s proposals will have a lasting impact beyond the election remains to be seen. 

Note: the Bronner report is currently only available in French. Direct quotes from the report are approximate translations by the author. 

Rachel Griffin is a PhD candidate at the Sciences Po School of Law and a research assistant at the Digital Governance & Sovereignty Chair. Her research focuses on social media regulation as it relates to social inequalities.