
25.02.2022

“Content Policy in the Age of Transparency”: A Cross-Cultural Conversation

(credits: ©Pixabay/Gerd Altmann)

In an ever-shifting digital landscape, governments are increasingly seeking to regulate online content, most notably that of the Tech Giants (Facebook, Twitter, Google, etc.). These efforts have raised a number of complex questions for researchers, policy makers, and citizens alike, highlighting the complexity of regulation in the post-internet era. On Monday, 21 February 2022, Sciences Po and Stanford University discussed some of the most pressing of these questions on digital regulation in the context of both the United States and the European Union in “Content Policy in the Age of Transparency, A Transatlantic Discussion”.

Organised by Sciences Po’s Digital Governance and Sovereignty Chair and Stanford University’s Content Policy and Society Lab, this cross-cultural, far-reaching exploration covered company accountability, the role of academia in data research, democratic regulation, and questions of government involvement. Sciences Po’s Dominique Cardon, Florence G’sell, and Emmanuel Vincent and Stanford University’s Nate Persily, Julie Owono, and Leila Morch shared their expertise on the subject, touching upon ideas of transparency, freedom of speech, legislation, and the role of academia.

Maintaining Democracy in the Digital Era

As Sciences Po President Mathias Vicherat noted in his introductory statement, “the deployment of the internet has led to an incredible democratization of information and knowledge”. Although it has “offered opportunities to individuals”, it has also raised some key difficulties, notably the propagation of fake news and hate speech. Indeed, the influence of social media on the 2020 election in the United States was brought up multiple times throughout the discussion to highlight the tenuous nature of democracy in our current digital landscape.

In the example of the U.S. election, a lack of regulation on social media platforms posed a potential threat to American democracy. Yet Julie Owono, executive director of Internet Without Borders and of the Stanford Content Policy and Society Lab and a member of the Facebook Oversight Board, drew attention to the flip side of this issue. In her view, regulation itself can also erode the freedoms inherent in a democratic society—notably freedom of speech and expression. According to her, it is important to “use common sense about what it means to fight the harms and misuses of social media and content in general”, asking the question: “What does it mean to fight it without affecting, unnecessarily and disproportionately, freedoms—of expression, of course, but other freedoms as well?”

A Move Towards Transparency

To address this question and the complex ethical issues it raises, the assembled experts proposed transparency as the most viable antidote to the threats to freedom of expression that regulating content would pose. In their conception, this form of regulation—based on the controlled release of data to those outside the company—could be key to ensuring and maintaining the rights of individuals and companies while promoting company accountability through government enforcement of legislation.

According to Florence G’sell, Director of the Digital Governance and Sovereignty Chair at Sciences Po and Professor of Private Law at the University of Lorraine, “transparency is essential for everyone”. In her conception, “we need, to a certain extent, to trust the platforms” because, when it comes to content moderation, “they are in the best place to do it”. Yet this inclination to trust platforms is not without a caveat: “It must be done in a totally transparent manner”, which, until now, has not been the case.

On the other hand, as Emmanuel Vincent, a researcher at the Sciences Po medialab and co-author of a recent study on Facebook’s interventions against accounts that repeatedly share misinformation, noted, there are limits to the trust that can be placed in companies. In his words, “If the incentives of the platform are to make money and maximize engagement, then whatever policy that goes against that is not really going to be enforced, because it goes against the core business model of the platform.” It is thus up to governments and legislation to incentivize companies to comply with data sharing and other proposed measures.

Nevertheless, the path forward is not always a clear one. During the discussion, Dominique Cardon, Professor of Sociology and Director of the Sciences Po medialab, articulated the question ambient throughout the conversation: “Is it possible to create this kind of space where regulators and researchers can access data because they’re linked to a constraint on privacy law? Could it be the best solution to have a democratic assessment of transparency for the public?”

Transparency Legislation: The Ideal Solution?

For Leila Morch, Research Project Coordinator at the Stanford Content Policy and Society Lab, this is a key question, highlighting the current divide between companies on one hand and researchers and public opinion on the other. In her conception, “this divide could be avoided if we encouraged discussion between public policy agencies and the high civil society and the research and academic field.” Bridging this divide by encouraging discussion is indeed an important step. Yet beyond discussion, as the participants noted, regulation of company transparency is essential to closing this gap and creating constructive, positive change.

In Nate Persily’s view, it is precisely this that must be worked toward. His suggestion for encouraging transparency, and for offsetting the issues created by the core company business model that Emmanuel Vincent evoked, lies in legislation that allows researchers to access key company data and ensures government enforcement. As he notes, “The government has to be involved in creating a safe, privacy-protected pathway for researchers to access platform data.”

In his conception, this change would be useful for governments, as they would have “better information on which to base public policy”; beneficial for the public, as they would “understand what is happening on these platforms and keep platforms accountable”; and better for researchers, because it would open up “a new avenue of research on everything that is happening in human society”—from Covid disinformation to radicalism online and beyond.

Ultimately, discussions surrounding regulation and government involvement in private company practices are far-reaching, folding in issues of democracy and freedom. Nevertheless, as Nate Persily notes, it is possible to propose policy that circumvents issues of freedom of speech. Indeed, when it comes to transparency regulation, “You’re not regulating content, you are regulating the technology; you’re regulating the time, place, and manner of speech.” Moving forward, as decisions about regulation continue to be made at both the governmental and company level, this distinction may serve as food for thought, helping experts and researchers ensure the implementation of policies that promote healthy and functioning democracies.

The Sciences Po Editorial Team
