What could Facebook’s Oversight Board mean for online expression?

by Rachel Griffin

On 6 May 2020, Facebook announced the first 20 members of its new Oversight Board. This body, first proposed by the company in 2018, will take responsibility for overseeing many of Facebook’s decisions on content moderation and community standards. In particular, it will be able to hear appeals from users against the removal of their content, and to overrule the company’s management, which has committed to abide by all the board’s decisions except where doing so would be illegal.
In this contribution, Rachel Griffin explains why the creation of an independent oversight body to regulate free speech online, controlled neither by the state nor by private interests, is an important milestone.

The board’s bylaws envisage that its jurisdiction could be expanded to cover more content and decision types in the future; its director, British human rights campaigner Thomas Hughes, has also suggested that other social networks might ask the Oversight Board for rulings. It’s possible that we will look back on this first experiment in independent regulation of online content as a watershed moment in the governance of the digital public sphere. Then again, it’s also possible that the critics pointing out the board’s narrow jurisdiction and limited influence over Facebook are right: its prestigious membership and much-publicised independence will just serve to launder the company’s reputation, without making any meaningful difference to its activities. As detailed in another of our blog posts, Facebook’s content moderation policies – particularly on hate speech – have become a major topic of public debate in recent weeks; a public campaign and advertiser boycott have successfully pressured it into some policy changes. So far, however, it’s still unclear how much the company is really changing its approach, and what role the Oversight Board will ultimately play.

A first step towards better governance?

Legal scholar evelyn douek has described the Board as ‘the most ambitious attempt yet to cut the Gordian knot of platform content moderation’. On the one hand, there are obvious flaws with leaving the regulation of online speech entirely up to private companies. Their dependence on advertising revenue simultaneously incentivises a conservative and prudish approach to some content, such as nudity, and a dangerously laissez-faire approach to other content, such as attention-grabbing conspiracy theories. But increasing state control of these important communication channels seems equally undesirable. Douek suggests the Oversight Board is a reasonable attempt to find a ‘third, least-worst option’. Although she recognises the limitations on the Board’s powers, she also suggests that just requiring content moderation decisions to be publicly discussed, challenged and justified, rather than being made in the closed offices of a private company, is a big step forward.

Many aspects of this new institutional setup suggest a genuine willingness from Facebook to engage constructively with its responsibilities and with affected communities. The integrity and expertise of the Oversight Board’s first members are widely respected. They include former judges of the European Court of Human Rights and of US federal courts, a former Danish prime minister, a Nobel Peace Prize-winning activist for press freedom, and the former editor of the Guardian. The Board’s legal structure gives it significant independence: it will be funded by a $130 million trust, and members will be employed by the Board, a separate LLC, not by Facebook. Its membership will be self-selecting, with no further involvement from Facebook once there are 40 members. In a recent paper informed by extensive empirical research into the Board’s creation and institutional structure, legal academic Kate Klonick concludes that it has meaningful intellectual and financial independence from the company it regulates.

As discussed in our other post about Facebook’s controversial policies on hate speech, the status quo of content moderation – where company executives who are easily swayed by commercial considerations exercise unrestricted power – is deeply problematic. There is a genuine and pressing need for more independent regulation of social media, and for a willingness to experiment with new governance structures, based on transparency and dialogue with all stakeholders. Instead of looking only at what the Oversight Board will achieve immediately, we may regard its creation as the beginning of a process which will hopefully lead to more such experiments, and to a gradual shift towards more transparent and accountable regulation of online speech. 

Is Facebook avoiding the real issues?

Yet one could also take a more pessimistic view. Critics are already highlighting major concerns about how the Board will exercise its new powers, and in particular about deeper underlying problems with its institutional structure and jurisdiction. If these concerns are right, the Board’s creation may not be the first step in a journey towards better digital governance, but merely a convenient distraction from the real, structural problems with Facebook and other social media.

Siva Vaidhyanathan, a media studies expert who has written a book on how Facebook undermines democracy, argues that the board is disproportionately American (it has only one member from India, the country with the most Facebook users). He suggests it has deliberately been designed to skew towards the classically American value of free speech, at the expense of other values like safety and human dignity which are more prominent in other countries’ legal traditions. Indeed, the first line of its bylaws is, ‘The purpose of the Oversight Board is to protect freedom of expression’. This is also reflected in its membership, which is weighted towards journalists and free speech advocates, as opposed to members with more expertise in areas like algorithmic bias and the spread of disinformation.

The Board’s narrow jurisdiction has also been a key focus for criticism, and shows a similar bias towards maximising free speech. As douek points out, the Board will have the power to review and overturn removals of content, but not decisions to leave content up, even though these decisions have also led to major controversies – most recently, Facebook’s much-criticised decision not to moderate posts by Donald Trump inciting violence against black protestors. Although the Oversight Board is not yet operating, it released a statement referring indirectly to the controversy (‘how Facebook treats posts from public figures that may violate their community standards…are the type of highly challenging cases that the Board expects to consider when we begin operating’). It argued that the current political crisis in the US highlights the need for independent oversight of Facebook’s content moderation decisions. Yet as things stand, the Board would not have the power to review the decision to leave the posts up unless Facebook chose to refer the case to it. This is not a very powerful form of oversight.

Vaidhyanathan also argues that the Board’s limited jurisdiction to decide whether individual pieces of content should have been removed gives it ‘no influence over anything that really matters in the world’. It will not even be able to create general codes or policies beyond the individual cases it hears: it will be up to Facebook to decide whether to apply rulings to other, similar pieces of content. As more and more content moderation decisions are made automatically, at huge scale (a trend which has been accelerated by the Covid-19 pandemic, as discussed in our first blog post), individual appeals in a tiny fraction of moderation cases will have very little impact. 

More importantly, Facebook exercises huge influence over online expression through means other than content moderation. Its News Feed algorithms select which content to display; it may mean little that a post is left up if no one sees it. Its site architecture also deliberately encourages certain forms of interaction and behaviour over others. The Wall Street Journal recently revealed that Facebook’s executives knew in 2016 that their recommendation algorithms were driving the majority of new memberships of extremist groups, and chose to take no action. Of course, the company’s pioneering systems of dataveillance and targeted advertising can also be used to shape political debate; it has controversially refused to ban or even fact-check political ads. These topics are all much more economically significant for Facebook than the occasional controversy about whether a single piece of content should be removed, since they directly affect user engagement and time on site, and therefore ad revenue. They will be completely outside the Board’s influence.

We should remember that Facebook’s core business is manipulating and monetising attention. Its long series of announcements about the Oversight Board, culminating in the much-publicised naming of the first members, could be regarded as an effective act of misdirection: one which shifts everyone’s attention towards the sometimes difficult but fundamentally minor issue of when content should be taken down, and away from more inconvenient questions about how the platform’s structure and design influence public debate. As the Washington Post recently revealed, at the same time late last year that it was drawing up the Board’s bylaws and recruiting its first members, Facebook was also setting up a secretive new lobbying group to oppose further tech regulation in the US. Its commitment to transparency and independent oversight appears somewhat superficial.

A lack of industry-wide coordination

The ‘Gordian knot’ douek identifies is not a new problem. Older media – most obviously newspapers – also demanded regulatory systems which could balance free expression against other important values, without handing all the power to governments or company boards. Reasonably successful systems of self-regulation, such as independent press councils, have been devised to achieve this. In theory, the same should be possible for social media.

Notably, however, the Oversight Board does not represent an industry group, but a single enormous platform which counts one-third of the world’s population as users. Traditional models of collective media self-regulation do not easily map onto this unprecedented situation. As Vaidhyanathan points out, ‘When self-regulation succeeds at improving conditions for consumers, citizens, or workers, it does so by establishing deliberative bodies that can act swiftly and firmly, and generate clear, enforceable codes of conduct.’ Bodies which represent a number of competing companies can do this: if one company breaks the rules, the others can apply meaningful pressure. In contrast, the Oversight Board will have only the power that Facebook gives it.

The lack of coordination on standards between social media companies and other tech firms is also problematic. Facebook may be an industry unto itself, but it is not the only company with the power to significantly restrict online speech. For example, Cloudflare is a medium-sized web security company with 500 employees. Following public pressure to withdraw services from far-right websites, its CEO has repeatedly highlighted how much power his company has to arbitrarily remove content from the internet, calling for more general and transparent regulation. Meanwhile, Chinese-owned TikTok (now the world’s seventh largest social network, with 800 million active users, and growing fast) has been criticised for draconian moderation policies which remove not only politically sensitive content but posts from users deemed too poor, too sexually provocative or too ugly. Even taking the most optimistic view of the Oversight Board’s influence on Facebook, without industry-wide coordination on content standards its impact will be limited.

Where will social media governance go from here?

In the short term, the Oversight Board is unlikely to have a major impact on how Facebook moderates content and shapes public debate. As Vaidhyanathan says, ‘The power of Facebook is its ability to choose what everyone sees. It’s not that it gets to choose what you post.’ For as long as the Board can only make decisions in individual removal cases, and has no influence on Facebook’s algorithms and advertising systems – or even on the general moderation policies which are applied at vast scale, and increasingly by AI – its impact will be minor. Self-regulation by a single company is also inherently weak. It means the Oversight Board has limited leverage over Facebook, and none at all over the many other companies which exercise significant power over online speech.

As Klonick points out, the world of social media changes extremely fast, and the future landscape is very difficult to predict. The real significance of the Board’s creation may not be its immediate effects, but the cultural shift it could ultimately come to represent. Maybe we will look back on it as a one-off publicity manoeuvre, with little lasting impact. Yet it’s also possible to see the Board’s creation as the start of a process in which more participatory and accountable governance structures will be created across the social media industry, and more content regulation decisions will be subjected to independent and transparent oversight. In this way, its impact on public debate could be lasting and highly positive.

It’s too early to say which will be the case, but there are a few encouraging signs. The Board’s bylaws allow for the possibility that its jurisdiction could be expanded in future; some consultation documents from 2019 even suggested that it could advise on algorithmic ranking. Facebook and Mark Zuckerberg personally have recently faced intense criticism over their attitude to hate speech, including employee protests and a boycott by major advertisers. As our other blog post outlines, although the company has announced several policy changes and a planned independent audit, the Oversight Board has barely been mentioned; so far the company seems more interested in attempting to deflect criticism with quick-fix policy changes than making serious reforms to its governance structure. But if public pressure is maintained, the company might ultimately see the commercial advantage in handing more power to a trusted independent body and making fewer controversial decisions itself.

It’s even possible that the Board could morph over time into a regulatory body for the entire industry, or that, if it is seen as a success, it could lead to the creation of such a body. Facebook advisor Noah Feldman and Oversight Board director Thomas Hughes have both suggested that other social media companies might voluntarily submit cases to the Oversight Board for rulings in future. Klonick has hypothesised various ways in which other platforms could either participate in the Oversight Board or replicate aspects of its legal and institutional structure. Already, earlier this year, TikTok created its own Content Advisory Council of independent experts on law, technology ethics, and child safety. The Council has even less immediate power than the Facebook Oversight Board, but its creation may indicate a broader, positive trend: tech companies recognising and responding to public pressure to make their moderation decisions in a more transparent and accountable way. The crucial question is not what the Oversight Board will achieve immediately with its current powers, but whether this emerging trend will last, and how the Board’s creation will affect social media governance over the long term.

Rachel Griffin is a master’s student in public policy at Sciences Po Paris and the Hertie School of Governance in Berlin, and a research assistant at the Digital, Governance and Sovereignty Chair.