
How public pressure forced Facebook to change its policies on hate speech

by Rachel Griffin

The growing Black Lives Matter protests have put pressure on companies of all kinds to take public action against racism and show they are in tune with the strong public sympathy for the movement.
Social media companies that provide a platform for racist and far-right content have become a particular target, with the biggest point of contention being US President Trump’s repeated posts threatening violence and spreading false information about protestors.
In this post, Rachel Griffin analyses how that pressure affected Facebook and forced the company to react.

Led by Twitter, many platforms have started to take more aggressive action on hate speech, including by the president and his supporters. This has triggered heavy criticism of Facebook, which initially stood by its hands-off approach to moderating content from politicians. Facing intense pressure to take action against Trump specifically and hate speech in general – including an employee walkout and an advertiser boycott coordinated by a coalition of civil rights groups – the company has been forced into a series of U-turns and policy changes. So far, however, these have failed to satisfy its critics. This post will analyse Facebook’s changing approach to hate speech, in particular from public figures such as Trump. As will be discussed at the end of the post, this latest controversy has also highlighted long-running problems with the company’s governance and content moderation policies, which remain unresolved.

How did Facebook’s moderation policies develop?

As detailed in a 2016 investigation by The Verge, Facebook – like most social networks – developed its content moderation rules in an ad hoc, improvised way as it grew. The company’s first comprehensive internal moderation guidelines were compiled in 2009, though not made public. In 2018, Facebook published its guidelines, suggesting this would provide more transparency for users and invite more expert feedback. However, these public community standards are phrased in general terms and leave many questions unanswered – as the latest dispute over how they should be applied to Trump’s posts illustrates. In 2017, the Guardian gained access to thousands of pages of leaked internal moderation guidelines, making clear that the brief, high-level public guidelines do not give the full picture of Facebook’s moderation policies.

As detailed in our previous blog post, moderation is primarily undertaken by third-party contractors (many of them abroad and working in extremely poor conditions) and, increasingly, by AI. Researchers such as Sarah Roberts have documented that relying on automated systems and low-wage contractors working under intense time pressure entails significant quality and accuracy issues.  

In recent years, the spread of harmful content online has become an increasingly salient political theme. The scandal around the involvement of Cambridge Analytica’s data-gathering operation in the 2016 Brexit referendum and US election brought new attention to Facebook’s political influence. In the following years, the spread of fake news and hate speech on the platform became a major issue in the US, Brazil, India, Myanmar and many other countries around the world, with serious and sometimes lethal consequences. This has led to increased regulatory pressure, as analysed in detail in another of our posts. France and Germany have both passed online hate speech laws obliging social media platforms to remove hate speech within strict time limits or face heavy fines; the EU introduced a voluntary code of conduct on hate speech for the major platforms in 2016, although it’s possible that its forthcoming Digital Services Act will move further towards a hard law approach.

Alex Hern has suggested that Facebook historically did less to moderate hate speech than other sensitive content such as nudity, in part because moderating hate speech requires nuanced and context-sensitive decisions that are difficult for human moderators and even more so for automated systems. In recent years, pressure from regulators has forced Facebook to take more action: it now removes a far higher proportion of hate speech proactively, before it is flagged by users, and has introduced new tools to help moderators assess hate speech following an independent audit in 2019. The company has also tried to build cooperative relationships with regulators. It has worked especially closely with the French government, forming a joint working group of government officials and Facebook employees to examine hate speech policies in 2018. In anticipation of the EU’s Digital Services Act, it also presented its own proposals for regulation of content moderation in February. While arguing against the most onerous obligations, like strict time limits for removal of hate content, the proposals acknowledged the need for stronger regulation, calling in particular for more procedural obligations. Zuckerberg also publicly called for more regulation in the Financial Times, highlighting how much the company already works with regulators and public bodies. Nevertheless, the European Commission rejected Facebook’s proposals as inadequate, and is likely to include stricter obligations in the DSA.

Indeed, Facebook’s efforts to deal with hate speech so far leave a lot to be desired. The EU’s latest report on the implementation of its code of conduct found that Facebook and Instagram respond faster to hate speech complaints than most other platforms. Facebook also releases its own quarterly reports on moderation, in which it typically congratulates itself for improving the accuracy of its automated systems and finding more content before it is reported. Yet significant problems remain. Reports on content that is ultimately removed obviously fail to capture how much hate speech remains online. However, a 2019 study found that only around half of a sample of posts containing hate speech were removed after being reported. The Wall Street Journal recently reported on a 2016 internal Facebook study which revealed that 64% of joins to extremist groups were driven by Facebook’s own automatic recommendations; senior executives suppressed the study and took no action. Civil rights groups have repeatedly highlighted the continuing prevalence of hate speech on Facebook, particularly in emerging markets where it is under less political and economic pressure to take action. As the Black Lives Matter protests started to spread across the US and then the world in June, hate speech on Facebook remained a major problem and became a key target for activists.

Hate speech from public figures: Facebook’s trickiest issue

A recurring problem for Facebook as it sets and enforces policies on hate speech and other types of banned content has been how to deal with content from politicians – and especially from President Trump, who has consistently posted misleading, abusive and racist content on Facebook, Twitter and other social networks since the start of his election campaign. This has also become a key point of contention in the recent campaign against hate speech on Facebook, which was initially sparked by the company’s refusal to follow Twitter’s lead by hiding or removing a post by the president using a phrase associated with segregationist politicians of the 1960s to suggest that Black Lives Matter protestors would be shot.

As clarified in a 2019 blog post by Facebook’s VP of global affairs and communications, Nick Clegg, the company has two policies which give special treatment to objectionable content from politicians. First, it may make discretionary exceptions from its community standards for organic content which violates the standards but is considered ‘newsworthy’. Second, political advertising in general is exempt from the fact-checking applied to other ads. The second policy has met with particularly harsh criticism, especially from US Democratic politicians, who have highlighted how easily it can be abused by the Trump campaign. Twitter added to the pressure on Facebook last year by banning political ads altogether. Yet the first policy also raises many problems, particularly since it is so poorly defined: the company is free to apply it in an opaque and arbitrary way.

Although Facebook publicly introduced the newsworthiness exception in a 2016 blog post (sparked by a public controversy over its removal of a newspaper article featuring the iconic ‘napalm girl’ Vietnam War photograph for nudity), it has never clearly defined the policy. It is briefly mentioned in the introduction to the community standards released in 2018, which states that, ‘In some cases, we allow content that would otherwise go against our Community Standards – if it is newsworthy and in the public interest. We only do this after weighing the public interest value against the risk of harm and we look to international human rights standards to make these judgments.’ There is no indication of how the company defines newsworthiness, or how it assesses the public interest and risk of harm. Posts by politicians are not mentioned anywhere as a specific category. It is clear, however, that in practice prominent politicians are exempt from the often unreliable and heavy-handed moderation processes that apply to ordinary users, including journalists and political activists whose posts and accounts have been mistakenly removed. High-profile political cases are decided by the company’s top policy team and, in Trump’s case, by Mark Zuckerberg personally. Yet there are still no consistent or transparent rules setting out how the public interest in seeing what politicians are saying is weighed against risks to public safety.

Facebook has defended its policies on both newsworthiness and fact-checking of political ads on principled grounds, arguing that political speech should not be regulated by a private company and that citizens should be able to see and judge it for themselves. However, there is plenty of evidence that its approach to moderation of public figures is also influenced by political and reputational considerations. The US Republican party has repeatedly used unsupported accusations of anti-conservative bias to try to pressure social networks into relaxing their moderation policies. Seemingly, in Facebook’s case, this has been quite successful. A recent report by the Washington Post revealed that since 2015 the company repeatedly adjusted its policies on hate speech and public figures, and even its News Feed algorithm, to avoid antagonising Trump and his party; it has also held off on removing violating content where it thought this could disproportionately impact conservatives. Facebook’s policy chief, Joel Kaplan, is a prominent Republican who has made concerted efforts to improve Facebook’s relationship with the party; in October, Zuckerberg and his wife had a private dinner with Trump at the White House.

The Black Lives Matter movement and Facebook’s latest crisis

In this context, the Black Lives Matter movement has brought renewed attention to the problem of hate speech on social media, and to Facebook’s willingness to accommodate Trump and his far-right supporters. Twitter was the first social network to change its policies in response to the protests: on May 26th and 29th, it added warning labels to two Trump tweets which made false claims about voter fraud and threatened violence against protestors (an option introduced in a policy change last year, but now implemented for the first time). Since then, Twitter has stuck to its new approach by continuing to flag abusive content from Trump, and other platforms have taken similar actions. On June 3rd Snapchat stopped promoting Trump’s account in its Discover page. On June 29th, Reddit updated its hate speech policy and banned its largest pro-Trump forum along with over 2,000 other subreddits, while Twitch suspended Trump’s own account for reposting a 2016 video featuring racist comments about Mexicans. Facebook was initially the outlier in refusing to change its policies, and quickly became the key target for activists campaigning against hate speech online. Subsequent concessions have failed to satisfy its critics; pressure on the company has mounted quickly, and come from unfamiliar sources.

After Twitter first labelled Trump’s tweets, which had also been posted to Facebook, the company immediately faced demands to do the same, or remove them altogether. Zuckerberg personally decided to leave the posts up, defending his position in a public Facebook post on May 30th. This met with heavy criticism from politicians, journalists and civil rights leaders, and on the 1st June, hundreds of Facebook employees staged a walkout to protest this decision. They were quickly backed up by open letters from Facebook-funded scientists and from an anonymous group of content moderators, calling for action against Trump. As Facebook expert Siva Vaidhyanathan has pointed out, finding and retaining talented staff is a key challenge for big tech companies; the company is used to brushing off criticism from politicians and the press, but angry employees are a much bigger problem.

Unable to ignore them, Facebook tried making some small concessions. On the 5th June, Zuckerberg announced that the company would review its policies on public figures and consider introducing warning labels, although at that point he was rather circumspect about the idea, arguing that he didn’t want Facebook to start editorialising on content it didn’t like. He also promised to overhaul the company’s decision-making processes and make sure they included a more diverse range of voices. In the following days, the company also announced small policy changes in other areas, presumably hoping to gain some countervailing positive publicity. On the 6th June, it rolled out a previously announced policy to label posts from state-owned media, and on the 16th it introduced a new option for users to opt out of seeing political ads and a ‘voter information center’ for the US elections. Perhaps trying to signal a less accommodating attitude towards Trump, it also banned a set of ads from his campaign on the 18th for featuring symbols used by the Nazis. These changes failed to satisfy its critics, and pressure continued to mount.

The turning point came on the 17th June, when a coalition of US civil rights groups launched the Stop Hate for Profit campaign, calling on companies to stop buying ads on Facebook until it changed its hate speech policies. This was a significant shift for two reasons. First, it identified a new and highly effective way to put pressure on Facebook. The company makes 98% of its revenue from advertising; executives started holding daily meetings with advertisers to try to persuade them not to join the boycott, indicating how much the campaign concerned the company. Second, the campaign significantly broadened the scope of the discussion, calling not only for action against Trump but for a much more specific and comprehensive set of policy changes to combat hate speech across the platform. Over the next two weeks, the campaign quickly gathered support from increasingly large companies: Verizon joined the boycott on the 25th June, and Unilever – one of the world’s biggest advertisers – on the 26th.

Two hours after Unilever’s announcement, Facebook caved: Zuckerberg made another public statement, announcing a series of policy concessions. Notably, the company reversed its position on Twitter-style warning labels: objectionable content left up on the basis of the newsworthiness exception would now be hidden behind a warning. It would also now remove threats of violence from politicians, ban hate speech in political ads (using a broader definition of hate speech than for organic content), tighten restrictions on voting misinformation, and add links to accurate information on all posts mentioning voting. Bloomberg and the New York Times have also reported that the company is internally discussing temporarily banning political ads ahead of the US election.

For many commentators and activists, however, this is too little, too late. The boycott has continued to grow, attracting more big brands and New Zealand’s biggest news website, and triggering an 8% drop in Facebook’s stock price. Facebook has made further concessions to try and stem the tide of criticism, including announcing an independent audit of its hate speech policies, but most of Stop Hate for Profit’s demands have still not been met. Following a disappointing meeting with Zuckerberg and Sheryl Sandberg, the campaign’s organisers dismissed Facebook’s efforts so far as ‘spin’, arguing that – as ever – the company is willing to engage in dialogue with critics but not to take concrete action. Pressure on the company will likely continue to build.

Facebook’s deeper governance problems

Facebook’s appeals to principles of free speech and democracy may not give the full picture of why it refused to take action against Trump for so long, but they are not entirely disingenuous. How a private company exercises its substantial power to restrict the communications of an elected leader raises genuinely difficult political and ethical questions, and has a major impact on political debate. The public should be able to have confidence not merely that a given decision is correct, but that the decision-making process ensures it is made in good faith, with the public interest in mind, and is not unduly influenced by Facebook’s political and commercial interests. The process illuminated by the Trump episode shows how far the company currently falls short of this standard.

Facebook’s decisions on these types of cases are not subject to any kind of due process or independent oversight: Zuckerberg took personal responsibility for the decision. As discussed earlier, he and his top team have consistently striven to build a close relationship with the Republican party and avoid antagonising it; critics have also highlighted that he spoke to Trump on the phone the same day he decided not to take action against the initial controversial posts. On a call with employees, he emphasised that the conversation with Trump took place after his decision had been made. But regardless of whether he was actually influenced by the call or by his desire to maintain a good relationship with the president, he is not an impartial decision-maker, and he is not subject to any oversight or accountability.

In this context, while it may have brought about some positive changes in Facebook’s policies, the advertiser boycott has done nothing to address the underlying problem: that Facebook wields enormous power over public debate in a highly opaque, unaccountable and arbitrary way. The need for better institutions and procedures to guarantee fair application of content policies was highlighted by legal academic evelyn douek in her interview for the Chair. Elsewhere, she has pointed out that using commercial pressure to force Facebook into policy changes will not always be as good for minority rights as it seems to have been in this case, and that it will incentivise short-term responses to immediate public pressure, rather than the deeper structural changes needed to ensure Facebook exercises its power responsibly. The mooted political ad blackout is a case in point. If it were a serious, considered attempt to improve the quality of political debate, it would probably not have been introduced so shortly before the election and in the midst of another major crisis. Commentators were quick to point out that it doesn’t address the key criticisms being levelled at Facebook and that it would raise a host of new problems.

It’s also notable how entirely focused Facebook seems to be on the US, both in the historical development of its policies on hate speech and public figures since Trump’s rise to prominence, and in its latest series of policy changes. This is understandable, since the pressure on the company is coming primarily from its US employees and from the US-led Stop Hate for Profit campaign. But Facebook’s policies on moderating speech from political leaders have major implications for political discourse around the world. For example, Indian prime minister Narendra Modi and Philippine president Rodrigo Duterte both – like Trump – rely heavily on social media rather than traditional press channels to communicate with their electorates, and both are controversial figures who have been accused of encouraging violence. India has more Facebook users than any other country. Activists from developing nations including Myanmar, Kenya and Sri Lanka have long been raising concerns about how little attention Facebook pays to ‘rest of world’ countries and the impact of its policies in different cultural and political contexts, with sometimes lethal consequences. The series of policy changes outlined in the previous sections shows that Facebook is responding in a rapid, ad hoc way to the political demands it faces in the US, without considering how its policies will affect the majority of its users worldwide.

Institutional accountability: what happened to the Oversight Board?

At this point, we might recall that just months ago Facebook was celebrating its new initiative to create a more transparent and accountable governance structure which would fairly represent users around the world: the Oversight Board. As detailed in our previous post, which analyses its powers and potential future role, the Board will be an independent body that hears appeals on difficult moderation issues and recommends policy changes. It is not yet operational; moreover, its current jurisdiction only allows it to review content removals, so it could not review Zuckerberg’s decision to leave Trump’s posts up. However, the Board’s bylaws suggest that its jurisdiction could expand in future as it settles into its role. Indeed, on the 3rd June, just after the employee protest, the Board released its own public statement about the crisis. It emphasised the possibility of expanding its jurisdiction and suggested that once it begins hearing cases later this year, ‘how Facebook treats posts from public figures…are the type of highly challenging cases that the Board expects to consider’.

It’s therefore quite surprising that none of Facebook’s recent announcements mention the Board. Even in Zuckerberg’s statement on the 5th June, which specifically says he wants to establish a more transparent and inclusive decision-making process, he seems to forget entirely about the institution he has already created to do exactly that. Unfortunately, this may be an early indication that Facebook doesn’t plan to expand the Board’s powers in future. However, recent events have shown that the company can be forced to change its approach by coordinated public pressure. Notably, at the end of June, the NGO Accountable Tech launched a campaign aiming to put pressure on the Oversight Board’s members to demand stronger powers from Facebook. If activists, users and advertisers were to shift their focus towards demanding changes in Facebook’s governance institutions as well as its substantive policies, it’s possible they could pressure the company into strengthening the Board’s role.

That said, Facebook’s actions so far and its sidelining of the Board seem to suggest that it prefers to respond to commercial pressure with short-term, superficial solutions, rather than systemic, institutional change. Although the boycott has brought about some small concessions, Zuckerberg seems unwilling to budge on Facebook’s core policies, reportedly telling employees that the company won’t change its approach to hate speech due to financial pressure and that ‘all these advertisers will be back on the platform soon enough’.

In this context, it should be remembered that the EU has just opened the initial consultation period for its forthcoming Digital Services Act, which will update the regulation of online platforms and is expected to set out new requirements for transparent and rule-based social media moderation. Regulation would be a more democratic – and likely more effective – way to bring about institutional change than commercial pressure. Hopefully EU legislators will take into account the long-running problems illustrated by this episode, and the clear need for online spaces to be regulated by fair and transparent institutions and procedures rather than the arbitrary decisions of company executives.

Rachel Griffin is a master’s student in public policy at Sciences Po Paris and the Hertie School of Governance in Berlin, and a research assistant at the Digital, Governance and Sovereignty Chair.