[ARTICLE] Social media and content moderation in times of Covid-19

by Rachel Griffin

As countries around the world went into lockdown to control the spread of Covid-19, the importance of social media platforms became clearer than ever. They enabled people to maintain their social lives during a highly difficult time, connect with their communities, organise mutual aid networks, and receive important news and health advice about the pandemic. Clearly, the services they offer are very valuable.
However, this period of social and political unrest has also highlighted the immense power of large platforms to shape public debate, and the lack of transparency and accountability around how this power is exercised. This blog post by Rachel Griffin opens the first thematic focus of the Digital, Governance and Sovereignty Chair: a series of posts and interviews offering a deep dive into some of the major issues in social media governance that the Covid-19 pandemic and its political consequences have brought to light.

One issue that has existed for a long time, but received much more attention from policymakers and the public in the context of the global health crisis, is the spread of misinformation on social media. A series of bad news stories about surveillance and misinformation (most prominently the scandal around Cambridge Analytica’s use of voter data in the Brexit and Trump campaigns) has already damaged public trust in big tech companies. Misinformation about science and health poses an even more immediate danger, and demands intervention from platforms. However, as this post will discuss, the measures taken to combat fake news about Covid-19 also pose a threat to freedom of expression, and highlight the lack of accountability and due process in social media moderation. The problem of fake news around Covid-19 is discussed in more detail by Professor Dominique Boullier of Sciences Po in his interview for the Chair.

The unaccountability and arbitrariness of platforms’ content moderation processes were thrown into sharper relief by recent events in the US, where the growing Black Lives Matter movement has placed intense pressure on platforms to crack down harder on hate speech. The biggest flashpoint has been content from President Trump, who has repeatedly posted threats and false information about the Black Lives Matter protests. In late May, Twitter started adding fact-checking and warning labels to Trump’s posts, which led to intense pressure on other social networks to do the same – particularly Facebook, which had initially tried to defend its hands-off approach to moderating political content, but was ultimately forced into a series of policy changes as it attempted to fend off criticism from civil rights leaders, employees, and advertisers.

This series of events could be viewed in a positive light, proving that users and activists can successfully demand more responsibility and accountability from powerful platforms. Yet it also highlights the ambiguity and arbitrariness of Facebook and Twitter’s decision-making procedures, which have huge ramifications for free speech around the world, and can be changed on a whim whenever it seems to suit the companies’ political and commercial interests. Two further blog posts provide a detailed analysis of Twitter’s decision, the Trump administration’s reaction, and its legal consequences, and of Facebook’s changing policies.

Overall, the responses of major platforms to the Covid-19 pandemic and the Black Lives Matter movement have highlighted the lack of institutional accountability and due process governing their power to influence and restrict online speech, and there is increasing interest in new regulatory models. The EU has so far favoured self-regulation by platforms, but this may soon change as it updates its legislation on social media and other internet platforms in the forthcoming Digital Services Act. The possibilities for future EU regulation are discussed in our interviews with Amélie Heldt of the Leibniz Institute for Media Research and with Professors Serge Abiteboul and Jean Cattan of ARCEP, the French telecommunications regulator.

Finally, another possible way forward is the development of more independent and accountable structures for self-regulation. We may be seeing a first step in this direction with the creation of Facebook’s independent Oversight Board, whose first members were announced earlier this month. However, many questions remain about how effective it will really be, and whether its limited jurisdiction will allow it to address the most important issues. These questions are addressed in more detail in our final blog post, and in an interview with evelyn douek of the Berkman Klein Center for Internet and Society.

The Covid-19 ‘infodemic’

We have probably all seen some fake news stories about Covid-19: maybe the disease is caused by 5G networks, or maybe the whole pandemic was faked and France’s hospitals are empty. They might be easy to laugh about, but such claims have serious consequences, as the WHO and EU have both warned. They decrease compliance with quarantine and public health measures, which can increase the spread of Covid-19. A recent study found that one in six people in the UK would refuse a potential coronavirus vaccine; the percentage was markedly higher among those who get their news primarily from social media, suggesting that anti-vaccine misinformation online has an influence. Rumours that 5G masts cause Covid-19 have led to repeated arson attacks in the UK, the Netherlands and France; in the UK, telecoms engineers have also been physically attacked. In the US, far-right groups have used the pandemic as an opportunity to spread racist and antisemitic conspiracy theories.

Fake news is not only found online: particularly in the US, traditional media outlets like Fox News also played a major role in spreading misinformation about Covid-19. Globally, however, social media have emerged as a particularly important channel. Indeed, they have been a useful instrument for established far-right media and state actors to deliberately spread misinformation. The UK’s media regulator has found in repeated studies that around half of UK adults are seeing fake news about Covid-19; over a third say they find it hard to know what is true about the pandemic. In France, a study by independent polling company IFOP found that over a quarter of people believed the novel coronavirus had been deliberately created in a laboratory. In April, the EU accused Russia and China of targeting its citizens with fake news and conspiracy theories on social media, in order to create confusion about Covid-19 and undermine trust in public health authorities. A recent study from the Oxford Internet Institute found that some Russian and Chinese outlets spreading conspiracy theories were getting more engagement than major newspapers like Der Spiegel and Le Monde.

Social media provide a uniquely effective channel for misinformation to rapidly spread between users, and for malicious actors to target users in other countries. The spread of misinformation on social media poses a direct threat to public health. As the OII study found, it also undermines social cohesion and trust in democratic governments just as they are taking drastic measures in the middle of a major social and economic crisis. So what are the platforms doing about it?

Are platforms taking more action on fake news?

Major platforms have acknowledged the problem of misinformation around Covid-19, and introduced some policy changes. These include stricter limits on what users can post about coronavirus and scaled-up fact-checking operations. Under its new misinformation policy, introduced to deal with the pandemic in March, Twitter recently added fact-checking labels to tweets from US President Donald Trump for the first time, as discussed in detail in another post.

This is a noticeable shift from the large platforms’ reluctance to tackle misinformation in the past. In 2012 Twitter summed up its hands-off approach by describing itself as ‘the free speech wing of the free speech party’, while Mark Zuckerberg said in a 2016 statement, ‘We do not want to be arbiters of truth’. One reason for this is that moderating fake news is costly and difficult. For example, Facebook’s well-documented inaction on fake news and incitement of violence in Myanmar seems to be largely down to inadequate investment in staff with the necessary linguistic and cultural expertise. Combating misinformation also requires platforms to make difficult and contestable judgments about truth and accuracy, which brings them plenty of controversy and criticism for little economic reward. Disclaiming responsibility has therefore been a good strategy so far.

However, in a global public health emergency where misinformation is highly visible and poses a clear risk – not only in the developing world but also in wealthy Western nations – this approach is no longer viable. The reputational risk of failing to address fake news now seems greater than the risk of addressing it badly. Accordingly, we are seeing not only changes in policy details, but an underlying shift in how platforms define their own responsibilities. Arguably, this should be welcomed. The consequences of misinformation about Covid-19 are real and serious, as the WHO has made clear: fake treatments and mistrust of public health advice can cost lives. However, the unaccountable way that social media platforms are addressing it is also concerning, and has broader consequences for freedom of expression online. evelyn douek characterises their restrictive new policies as an ‘emergency constitution’ allowing drastic restrictions of free speech. In her interview, she highlights the need for more transparency and due process around how these sweeping powers are exercised.

Dominique Boullier points out in his interview that platforms currently regulate these sensitive and contestable issues in a very ad hoc way. As Facebook’s recent series of policy changes and U-turns on hate speech illustrates, they are often guided more by what they think will be popular with consumers and advertisers than by a strategic, long-term consideration of how their policies and platform architecture could prevent misinformation and promote a more constructive public debate.

For example, Facebook has removed events for anti-lockdown protests in the US which violate social distancing measures. A spokesperson stated, ‘We remove the posts when gatherings do not follow the health parameters established by the government and are therefore unlawful.’ If the company now plans to remove all illegal events, this would be very concerning: it would seriously impact the right to protest around the world. However, Facebook has since left up events for Black Lives Matter protests which also violate social distancing measures. This may seem like the right decision, but there is no transparency around how it was made and how the company will decide similar cases in future. Will they only remove events which they see as not only unlawful, but illegitimate in the court of public opinion? Protests by marginalised groups which attract less public sympathy and commercial support than the Black Lives Matter movement might find themselves silenced. Facebook’s decisions significantly affect the right to freedom of expression and association, but there are no binding rules or institutions requiring it to respect these rights, or to make decisions in a fair and consistent way.

The problems with the platforms’ new approach

Meanwhile, the pandemic and associated lockdown measures are having another concerning and probably long-lasting impact on how social networks regulate content. In mid-March, both Facebook and Google announced that they would let most of their content moderators stop work (for data security reasons, remote working is generally impossible). A far greater proportion of moderation on these and other social networks is now performed by AI. As in many other contexts, the crisis is accelerating an existing trend: due to the enormous volume of content they host, the long-term goal for social networks has been to do as much moderation as possible automatically.

Towards full automation?

Currently, the online lives we take for granted depend on an invisible army of low-wage workers removing the violent and abusive content which would otherwise flood our screens. Often they are based in countries with cheaper labour: much English-language moderation, for example, is done by contractors in the Philippines. Facebook’s shift to automated moderation seems to have been triggered by the lockdown of Manila.

In many ways, this shift is a good thing. Obviously requiring moderators to endanger themselves by coming to work during the pandemic would be highly problematic. Moreover, as detailed by journalist Casey Newton and academic Sarah Roberts, content moderation is a deeply unpleasant and stressful job. Moderators spend long periods watching extremely graphic content, which often does lasting damage to their mental health. In May, Facebook agreed in a US court settlement to pay all its moderators at least $1,000 in compensation for the severe psychological impacts of their work. If AI moderation means people no longer need to spend their days viewing content that can cause PTSD, arguably it should be welcomed.

As Roberts points out, however, automation leads to more heavy-handed and mistaken removal of content. AI moderation tools must process complex language and visual cues, but don’t yet have the sophistication and contextual understanding that allow human moderators to distinguish between, for example, content that promotes violence and violent footage from journalists reporting in a war zone. Automation also perpetuates familiar biases: a recent study by AlgorithmWatch found that Google’s Perspective moderation software is more likely to flag otherwise identical comments as ‘toxic’ if the writer identifies themselves as black or gay. Social networks have already admitted that more mistakes will be made due to increased automation: YouTube recently blamed the removal of comments criticising the Chinese Communist Party on an error in its automated moderation.
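
To make the AlgorithmWatch finding more concrete, the sketch below scores otherwise identical English sentences that differ only in an identity term against the Perspective API’s TOXICITY attribute, which is the kind of comparison such audits rely on. It is a minimal illustration under stated assumptions, not the study’s methodology: the API key handling and the sentence template are placeholders for the example.

```python
# Minimal sketch of an identity-term bias check against the Perspective API.
# Assumptions: a valid API key is available in the PERSPECTIVE_API_KEY environment
# variable, and the sentence template below is purely illustrative (it is not the
# wording used in the AlgorithmWatch study).
import os
import requests

API_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = os.environ["PERSPECTIVE_API_KEY"]

def toxicity_score(text: str) -> float:
    """Return Perspective's TOXICITY summary score (a 0-1 probability) for a comment."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(API_URL, params={"key": API_KEY}, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Score sentences that differ only in the identity term mentioned.
TEMPLATE = "As a {identity} person, I think we should discuss this calmly."
for identity in ("white", "black", "straight", "gay"):
    print(f"{identity:>9}: {toxicity_score(TEMPLATE.format(identity=identity)):.3f}")
```

A platform relying on such scores typically hides, or queues for human review, anything above a fixed threshold, so systematic score differences on identical content translate directly into unequal removal rates for different groups.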

Despite having documented the toll this work takes on moderators, Roberts argues that the risk fully automated moderation poses to legitimate speech and activism is unacceptable: it is too opaque and unaccountable, ‘the nightmare of many human rights and free expression organizations’. The quality of moderation and fact-checking is already markedly worse even in major non-English-speaking markets like Spain and Italy, let alone developing countries like Myanmar. Moving towards less reliable automated moderation in these countries could pose a serious danger not only to free speech, but to safety and public health.

How effectively are platforms tackling misinformation?

The effectiveness of social media platforms’ fact-checking operations can also be questioned. Making highly publicised changes to their policies on fake news is much quicker and easier than actually implementing these policies, which requires substantial time and resources. The UK-based Center for Countering Digital Hate recently reported that of 649 posts it had flagged for spreading misinformation about Covid-19, fewer than 10% were removed.

To combat misinformation, Facebook and Google both rely on independent fact-checkers, typically journalists and researchers. In a recent Politico article, fact-checkers around the world claimed that they don’t have enough data or funding to do their job properly, and feel they are barely making a dent in the huge volume of fake Covid-19 stories. Fact-checkers in the UK and Germany have also highlighted that much fake news spreads through end-to-end encrypted platforms like Facebook’s WhatsApp, where messages can’t be read by the platform and therefore can’t be fact-checked or moderated.

A recent review of relevant academic literature in Wired concluded that warnings are generally an effective way to stop people believing fake news, and should be used more widely (although Facebook itself has often questioned this evidence). However, a study by analytics firm NewsGuard identified 36 ‘super-spreader’ Facebook pages promoting misinformation about Covid-19 to large audiences. Their English-language posts were flagged with fact-checking warnings 63% of the time, but the rates were far lower in other European languages: only three of 20 French posts were flagged, and none of the Italian posts were. This was the case even though most of the pages had been singled out for posting misinformation in the past, and this information could have been used to flag them again. The company found a similar pattern on Twitter, where large accounts with a history of spreading fake news continued to post Covid-19 misinformation freely.

Fake news and content moderation: the future

In the immediate context of the Covid-19 crisis, with a huge volume of fake news to deal with and business operations significantly disrupted, it’s understandable that mistakes have been made in content moderation. But there is a lot big tech companies could do to make it more effective. They could easily afford to hire more moderators and fact-checkers, and give them as much funding and data access as they need – not only in core English-speaking markets, but worldwide.

In its recent Joint Communication on disinformation, the European Commission announced that it would require social media platforms which have signed up to its Code of Practice on Disinformation (and strongly encourage those which haven’t) to provide monthly reports on how they are dealing with misinformation and to make more data available to researchers. In addition, 75 free speech NGOs and research institutions have called on social media platforms to preserve their data on content moderation during the pandemic and make it available to researchers. This would be a good start towards improving accountability, but research alone is not enough. As douek emphasises, more structured and consistent accountability mechanisms are needed.

Big tech companies entered 2020 facing a ‘techlash’. Governments around the world (including in the EU, Brazil and California) had been passing new regulations on privacy and data protection, and threatening further regulatory action, including a series of antitrust investigations in the US. France and Germany have both passed laws targeting online fake news and hate speech, with significant penalties for platforms which fail to remove illegal content. Facebook has been working closely with governments, including in France, to try to prove that it is taking a more responsible approach to content moderation. The Covid-19 pandemic has highlighted both the value of the communications infrastructure social media provide, and the need for more accountability around how they exercise their enormous power over online speech. It therefore seems likely to strengthen the movement towards more regulation of big tech companies.

In this context, the EU’s forthcoming Digital Services Act is an important opportunity to strengthen social media platforms’ accountability for how they regulate online speech. The DSA will update existing regulation of internet platforms and is expected to introduce new obligations with regard to content moderation. As well as transparency and reporting obligations, the EU could set procedural regulations, and require platforms to invest more resources in moderation and fact-checking. The regulatory possibilities are discussed by Amélie Heldt and by Serge Abiteboul and Jean Cattan in their interviews.

The platforms’ responses to the Covid-19 pandemic, and to other recent events including the controversy over Trump’s posts, have also shown how much their moderation policies are influenced by reputational concerns and public opinion. Dangerous misinformation about health has always been a problem, but the pandemic made it so widely apparent that the platforms had to take action to protect their reputations. More recently, the Black Lives Matter movement has successfully pressured a number of social networks into cracking down on hate speech. Just weeks after firmly defending Facebook’s existing policies, Mark Zuckerberg was pressured by civil rights groups, employees and advertisers into announcing a series of changes. The last few months have highlighted many of the long-running problems in social media governance, but they have also shown that pressure from users, activists and regulators can bring about positive change.

Rachel Griffin is a master’s student in public policy at Sciences Po Paris and the Hertie School of Governance in Berlin, and a research assistant at the Digital, Governance and Sovereignty Chair.