[INTERVIEW] How can social media platforms better prevent fake news? 3 questions to evelyn douek

evelyn douek is a lecturer on law and S.J.D. candidate at Harvard Law School, and an Affiliate at the Berkman Klein Center for Internet & Society. She studies the global regulation of online speech, the institutional design of private content moderation, and comparative free speech law and theory.
Interview by Rachel Griffin.

What do you think social media platforms should do to better tackle fake news?

This is a huge question! Thousands of trees have been felled by regulators and academics around the globe writing reports and articles trying to answer this question, so in my small space here let me do a classic academic move and break down the question into smaller questions instead.

First, ‘fake news’ is a nebulous term that means different things to different people, and has been co-opted by a number of actors to simply mean ‘news I don’t like’. So it’s useful to be a little more precise about the different types of content that may require intervention by platforms. Wardle and Derakhshan developed a helpful and widely-cited taxonomy breaking it down into three categories: disinformation (false information spread with the intention to harm), misinformation (false but not intended to cause harm), and mal-information (information based on reality and intended to cause harm). Even within these categories there will be subcategories depending on the actor behind the content and the methods they use to try to spread it. For example, you might deal with false clickbait articles spread by people trying to make a profit differently to a well-resourced state-backed information operation targeted at a specific electoral process. Sorry to take your question about a single hard problem and turn it into multiple hard problems, but it’s no wonder platforms and regulators are struggling to find the right approach! It’s hard!

Even though there’s no single solution, as an overarching observation I would just say that we need to avoid overly simplistic or knee-jerk responses. Requiring platforms to adjudicate whether all the content on their platforms is true or not and take down anything deemed ‘false’ is not the right path: it’s not practically possible, it will endanger legitimate freedom of expression, and it’s not clear that simply removing content is necessarily the best way of correcting people’s false beliefs.

A better approach is to get out of the take-down/leave-up paradigm of content moderation and leverage the full variety of options that platforms have available to them. For example, they can add labels to content showing if it’s been fact-checked or manipulated, provide additional context or transparency about particular posts or accounts, add friction so that content doesn’t spread as far or as fast (such as by downranking content in the feed or changing platform affordances like how easy it is to share content), highlight authoritative information, help make counter-messaging easier or more effective, and many more interventions along these lines. All of this will require imagination, experimentation and independent empirical studies to assess the effectiveness of various interventions, but I believe they will ultimately be more fruitful than relying on blunt tools.

What institutional structures do you think are needed for content moderation policies to be made in a legitimate way?

What I love about this question is its focus on process: focusing on the way policies are made and not the substance of them. I think that’s really important because substantive disputes around the proper limits of free speech are intractable, have been debated for centuries, will continue to be debated for many more, and the ‘right’ answer will be different for every society. So if we can’t expect everyone to just come together and agree on the ‘ideal’ content moderation policies, the next best thing we can do is focus on making sure that rules are made in a way that accords with ordinary rule of law values, such as transparency and due process.

What exactly that looks like is the key question for the next decade for platform governance. At the very least it means far more detail from platforms about what their rules actually are and the justifications behind them. This could be self-regulatory (see e.g. the next answer!) but I think will also require transparency mandates in hard law to ensure uniformity and consistency in disclosures. Next, it requires some sort of auditing and accountability for how these rules actually end up being enforced in practice: a good rule on paper means nothing if it isn’t operationalised across the platform. Facebook can say it’s banning hate speech, for example, but we need a mechanism (beyond journalists clicking around and writing news articles) for assessing how Facebook is interpreting those rules in practice, the extent to which there are false positives or false negatives in enforcement (and at scale there will always be false positives and negatives!), and whether this reveals any biases in the way hate speech is identified and removed.

The precise institutions needed to do all this, and the role of government and regulation, are still up for grabs. Social media companies are experimenting, governments are writing reports and passing new laws, and civil society are working on recommendations (see, for example, the Santa Clara Principles, which are a great place to start, and are in the process of being updated to keep moving the ball forward). It will certainly be interesting to see where it all heads! I think I’ll have plenty to keep me busy.

What do you think will be the impact of the new Oversight Board on Facebook’s approach to content moderation? What about its impact on the broader social media industry?

If the Oversight Board works (and I’m cautiously optimistic about it, although there are still a lot of open questions, as I’ve written about here) I think it promises two main improvements to Facebook’s content moderation ecosystem.

First, it introduces an independent check into Facebook’s policy formation and enforcement processes that will help make sure that really consequential decisions about the rules of one of the most important forums for modern public discourse are not guided by Facebook’s business interests alone, but are more grounded in the public interest and respect for rights. This can also help improve Facebook’s policy process by forcing the company to think about how to justify its rules in advance of a challenge, and by providing an issue-forcing mechanism that isn’t just trying to get media and public attention (which, to date, has been the main way to get Facebook to change its rules).

Second, even aside from any substantive benefits or changes in Facebook’s content moderation rules, the process of having a more public and transparent conversation about those rules and the reasons behind them is important. As I said above, people are always going to disagree with the substance of individual rules. But having a process for people to challenge decisions and have their point of view heard is an important step forward and can help people accept rules even if they continue to disagree with them.

And what about the broader social media industry? Another big open question! I am sure other companies are watching the Facebook Oversight Board experiment with great interest. The Board is a wager by Facebook: it’s a bet that, in the long term, the potential legitimacy gains it gets from something like the Board will outweigh having to abide by individual rulings it disagrees with. But other companies don’t seem to think that legitimacy is as important. Whether they change their minds, or whether regulators force them to, will depend a lot on how Facebook’s experiment plays out.

Just one final note: there’s some discussion about whether Facebook’s Oversight Board itself could hear cases about other social media companies’ rules. Now, I think we should walk before we run and have the Board hear at least one case before we decide it should oversee the entire internet. But, and I’ll leave it here, I think the question of how much we want centralisation of content moderation rules and how much we want multiple competing laboratories of online governance is one of the most fascinating questions in platform governance today. There are arguments for and against, and I think it ultimately needs different answers for different types of content, so I’m going to end here with a teaser for my paper on this – The Rise of Content Cartels – for any readers who want more!

You can read more of evelyn’s work here or by following her on Twitter.