
[REVIEW] A look back at the Webinar “Content Policy in the Age of Transparency”

by Rachel Griffin

On 21 February 2022, the Digital, Governance and Sovereignty Chair organised its first joint event with the Stanford Content Policy and Society Lab – the first, we hope, of many collaborations. The webinar ‘Content Policy in the Age of Transparency’ brought together leading experts from France and the US to discuss how content policy could be better informed by academic research, and was streamed live on YouTube to an international audience.

Mathias Vicherat, Director of Sciences Po, opened the webinar with a brief presentation which emphasised that the development of large-scale online platforms brings both opportunities and political challenges. At a time when both European and US regulators are moving forward with new platform regulation initiatives, Director Vicherat called for stronger transatlantic exchange of knowledge and expressed his hope for deeper collaboration between Stanford and Sciences Po in future.

The panel presentations were opened by Julie Owono, who is executive director of the Stanford Content Policy and Society Lab, as well as executive director of the NGO Internet Without Borders and an inaugural member of the Facebook (now Meta) Oversight Board. Ms Owono began by defining the term ‘content policy’ expansively, arguing that it should encompass not only the content and application of content moderation rules but also the procedures by which they are determined, written and enforced – as well as how state regulation shapes these processes. She suggested that understanding the impacts of regulatory interventions, and the unforeseen challenges that can arise, requires international dialogue and multistakeholder participation as well as platform transparency. The hope is that lessons learned in one jurisdiction can inform future regulation around the world.

Professor Nate Persily of Stanford University then spoke about how regulation can enforce transparency around major platforms’ content policy. In particular, he presented the key features of the Platform Accountability and Transparency Act, a bill currently under consideration by the US Congress which he co-authored. Professor Persily noted the tension between making platform data available to researchers and ensuring user privacy – a tension which the bill aims to resolve by making data available only to researchers from academic institutions, in secure internal facilities.

Finally, Professor Persily emphasised that transparency is not only about enabling academic research and regulatory scrutiny, important though that is – it should also substantively change platforms’ behaviour, by opening them up to more public criticism and strengthening their incentives to respond to public concerns. Such policies to strengthen the influence of civil society are particularly important in the US context, where First Amendment speech protections make many direct state interventions in content policy constitutionally impermissible.

Professor Florence G’sell, who leads the Digital, Governance and Sovereignty Chair, followed up by analysing in more detail the legal and institutional differences between the US and EU member states. In France, for example, there are various long-established civil and criminal provisions regulating on- and offline speech, including the very broad criminalisation of ‘false news’ in Article 27 of the 1881 Law on the Freedom of the Press. Such provisions would certainly not be permissible under US First Amendment law. Beyond this, though, there are also different media and political cultures, with different expectations about whose role it is to regulate speech.

Nonetheless, despite the greater acceptance of state speech regulation in European countries, Professor G’sell noted that speech regulation at the scale of contemporary online platforms cannot feasibly be carried out by the state alone – which would in any case raise its own normative and political concerns, for example about media freedom. Platforms themselves must at least make first-instance decisions, even if these are then subject to oversight. Precisely for this reason, strengthening accountability and transparency in platforms’ decision-making processes is vital. In this context, transparency should not be treated as a single, unitary thing but as comprising different aspects: transparency towards researchers, towards regulators and towards the public are all equally necessary, but each will require different standards and procedures.

This was followed by a presentation by Professor Dominique Cardon, director of Sciences Po’s Médialab, and his colleague Emmanuel Vincent. They presented a recent study which drew on empirical data from Facebook’s CrowdTangle analytics service and from Social Science One, the research data-sharing partnership which Professor Persily was also instrumental in setting up.

Their research showed that when Facebook introduced new policies to reduce the reach of accounts which had accumulated multiple ‘strikes’ for repeatedly sharing misinformation, the engagement (and presumably the visibility) of their posts dropped significantly. However, this effect was seen only for Facebook Pages, whereas Groups – also known to be an important channel for misinformation – were not affected in the same way. Moreover, many affected users responded to the drop in engagement by increasing their activity and posting more, with the result that their engagement steadily recovered. Overall, the research provided important insights into the effectiveness of platform policies against Covid-19 misinformation, a key policy concern in both the US and the EU, and illustrated the value of providing detailed platform data to independent researchers.

Professor Cardon ended with some observations on the different transparency policies and programmes that have been developed since the EU introduced its voluntary Code of Practice on Disinformation, noting that the types of data his team relied on from Facebook would not have been available on some other major platforms.

Finally, Leila Morch, research programme coordinator of the Content Policy and Society Lab, concluded the panel presentations by stressing the need for public policy to be informed by academic research. This requires not only short-term exchanges of information on specific topics, but also deeper institutional and cultural connections, for example by increasing the number of PhD graduates entering the public sector. She also emphasised the importance of transatlantic exchange and collaboration, as well as broader international dialogue and greater awareness of the interests and policy concerns of other countries, for example in the Global South.

The panel discussion was followed by an open Q&A and further lively exchanges between the panellists. Audience members pressed Professor Persily for more detail about how transparency legislation would work and how it could be reliably enforced, and questioned Ms Owono about the working methods and possible future directions of the Meta Oversight Board. In response to audience questions, panellists also emphasised that content policy is not just about content moderation: it must also address wider questions about how content is organised, curated and promoted on digital platforms.

The panel and Q&A were moderated by Rachel Griffin, PhD candidate and lecturer at Sciences Po Law School and research assistant at the Digital, Governance & Sovereignty Chair.

The webinar provided invaluable insights into current trends and regulatory developments in social media policy, and we hope that it will soon be followed by further collaboration between Sciences Po and Stanford.