
[ARTICLE] Facebook’s suspension of Trump’s account: what to make of the Oversight Board’s decision?

By Florence G’sell

May 6, 2021

On May 5, 2021, Facebook’s Oversight Board (FOB) issued its decision on the merits of Facebook’s suspension of Donald Trump’s account on January 7, in the wake of the events on Capitol Hill. This long-awaited decision provides valuable information on how this new body intends to support Facebook in its moderation practices and monitor its decisions. However, the decision may be disappointing in that it does not settle the issue, but merely gives Facebook six months to decide on a final sanction for Trump’s actions. In fact, the Board has left it up to Facebook to choose between the permanent deactivation of Trump’s account and a temporary suspension that would allow Trump to eventually return to the platform. The decision also illustrates the difficulties raised by the status of the Oversight Board and by the standards applied by this new body, which still need to be clarified. Finally, it shows that the Board intends to act less as a countervailing power to Facebook than as a body that guarantees the proper application of the platform’s own terms and conditions, all in compliance with very general international standards relating to freedom of expression.

First, let’s review the facts. On January 6, 2021, activists seeking to prevent the certification of the 2020 presidential election by Congress entered the U.S. Capitol by force, causing five deaths and many injuries. On the same day, Facebook decided – after having deleted several posts by Donald Trump – to prevent him from publishing on Facebook and Instagram for 24 hours. The next day, January 7, Mark Zuckerberg announced the decision to suspend Donald Trump’s accounts on Facebook and Instagram for an indefinite period, on the grounds that the President was trying to prevent the peaceful and lawful transition of power to his elected successor, Joe Biden, and that, in this context, the risks generated by his presence on the network were too great. Facebook’s decision was followed, on January 8, by that of Twitter, which chose to permanently suspend Donald Trump’s account, on the basis that his tweets were likely to incite others to reproduce the violent acts committed on Capitol Hill.

At the time, Trump’s “deplatforming” provoked many reactions, ranging from relief to unease. “Facebook officially silences the President of the United States. For better or worse, this will be remembered as a turning point in the battle for control over digital speech,” tweeted Edward Snowden, while Alexey Navalny denounced “an unacceptable act of censorship” and German Chancellor Angela Merkel saw Trump’s ban as “problematic”. A few days later, Jack Dorsey, while regretting Twitter’s inability to promote a healthy conversation, simply concluded: “If folks do not agree with our rules and enforcement, they can simply go to another Internet service.” Two months later, when the CEOs of leading tech companies were called to testify before Congress, Dorsey was the only one who clearly acknowledged the role played by social networks in the events on Capitol Hill. For its part, Facebook had decided in January to submit its suspension decision to its Oversight Board.

What is the Facebook Oversight Board or FOB? It’s a body dreamed up in 2018 by Harvard law professor Noah Feldman, who had suggested the idea of a “supreme court” for Facebook. Designed to settle disputes between users and the platform over the deletion of user content, Facebook’s Oversight Board is reminiscent of an arbitration body – except that only one of the parties, Facebook, has set its composition, the conditions of referral and the procedure to be followed. The Board has a quasi-judicial role insofar as it examines appeals and makes decisions that are binding on the platform, but it also has a (small) political role, as it can make general recommendations in an advisory capacity. 

Launched in May 2020, the FOB is composed of twenty members, including renowned figures such as Helle Thorning-Schmidt (former Prime Minister of Denmark), Catalina Botero Marino (former Special Rapporteur for Freedom of Expression at the Inter-American Commission on Human Rights), Alan Rusbridger (former editor of the Guardian), and Tawakkol Karman (an activist who received the Nobel Peace Prize in 2011 for her role in the Arab Spring protests in Yemen). Five of the twenty members are American, three are European, and the rest come from all over the world. Since last fall, the FOB has been reviewing appeals against Facebook’s moderation decisions and issuing decisions that Facebook has committed in advance to follow.

It is in this context that Facebook’s decision regarding Donald Trump was submitted to the FOB. Specifically, Facebook asked the Board two questions. The first related to the suspension decision adopted on January 7. The second asked about the appropriate strategy to apply to the accounts of political leaders in general. While the FOB gave a clear, though not definitive, answer to the first question, it gave only an imperfect answer to the second. In this blog post, we will limit ourselves to a few general remarks on this very instructive decision (see also this well-informed commentary by evelyn douek).

1. First: the Oversight Board does not make a final decision on whether to “deplatform” Trump and refers the problem to Facebook

The FOB has unambiguously confirmed Facebook’s decision to suspend Trump’s account, and recalled the risks of violence caused by the former President’s publications. On the other hand, the Board strongly criticizes Facebook for imposing an indefinite sanction. Indeed, this possibility is not provided for in the platform’s general terms and conditions, which provide for only three sanctions: the deletion of publications that violate Facebook’s terms and conditions, a time-limited suspension of the account, or the permanent deactivation of the page and the account. Consequently, the Board gives Facebook six months to make a final decision regarding Donald Trump’s account. This final decision will, in any case, have to be justified.

The FOB is therefore passing on to Facebook the responsibility for deciding the thorniest issue: what to do with Donald Trump’s account? And this even though the very purpose of Facebook’s referral to the Board was to have it decide on this point! One gets the impression that the Board did not want to risk taking a position on such a controversial subject. Paradoxically or not, the FOB states in its decision that, by deciding to refer the matter to the Board after having made a decision that is not listed in its terms and conditions, Facebook sought to evade its responsibilities… We can undoubtedly applaud the FOB’s desire to reason rigorously by strictly controlling the enforcement of Facebook’s terms and conditions. But for the rest, there is little indication of its position on the possible return of Donald Trump to the platform.

Admittedly, the decision indicates some circumstances that should be taken into consideration to justify a possible permanent deactivation of Trump’s account. For example, it specifies that Facebook will have to analyze the context within Facebook and Instagram, but also outside both platforms, which it has done in the past by utilizing information published by the Department of Homeland Security (DHS) regarding terrorist threats. Several criteria were mentioned by a minority of the Board members, such as Trump ceasing to make unsubstantiated claims about alleged voter fraud (which he has not stopped doing); acknowledging his policy violations and committing to the platform’s rules in the future; or ceasing to support those involved in the Capitol Hill riots. These scattered considerations seem largely indicative, however. The responsibility for deciding the final fate of Donald Trump’s account is clearly returned to Facebook, even if it is asked to justify its position.  

2. Second: the FOB applies the rules set by Facebook

One thing is certain: the Oversight Board intends to strictly monitor Facebook’s application of its own terms and conditions, within the framework set by the platform. The “indefinite” suspension, which is not provided for in the terms and conditions, is therefore described as a “vague and non-standardized sanction” imposed by Facebook in an “arbitrary” manner. By criticizing Facebook on this point, the Board is acting as a court of law that carries out a form of “legality control” and, in so doing, imposes a “principle of legality” according to which the sanctions imposed must be provided for in advance by the platform’s rules. The control carried out is also a control of proportionality: the Board verifies that Facebook’s decision constitutes a proportionate reaction in light of the circumstances and the risks incurred. The parallel with the control carried out by a court is all the more striking in that the Board is “referring” the decision on the merits to Facebook, just as the French Court of Cassation refers a dispute to a court of appeal on the merits after having carried out its control. 

However, the parallel with a traditional court must be very strongly qualified. As for “legality”, the “norms” that the Board is charged with enforcing here consist, first and foremost, of the terms and conditions of a private company (see Rachel Griffin, “Metaphors matter: why we shouldn’t call the Facebook Oversight Board a court”). The Board expressly states that the control of “legality” is carried out with regard to “Facebook’s community standards,” which is enough to make any French jurist, accustomed to treating statutes passed by Parliament as the gold standard, shudder. No statutory standard is intended to apply here, even though the decision mentions the First Amendment to the American Constitution, presenting it as equivalent to article 19 of the International Covenant on Civil and Political Rights. But this is hardly a departure from the environment designed by the platform. Facebook has, in fact, decided from the outset (and quite logically) to exclude national state laws from the standards applied by the Board. However, the platform has also chosen to submit to international standards protecting human rights and freedom of expression, such as the UN Guiding Principles on Business and Human Rights. This is the only circumstance that authorizes the Board to look beyond the platform’s own quasi-legal order and mobilize external standards to verify that Facebook’s decision complies with applicable international standards.

Facebook’s choice to submit to norms of international law, which are in principle addressed to States, raises many questions. These international principles certainly constitute a common base of standards shared by a large number of countries, but are they really appropriate for this context? Are they not too vague, too general? Do they correspond to the reality of the contemporary world, and to the context of immense transnational platforms within which hate speech can spread virally? Moreover, we do not know what place and scope are really given to these principles of international law and, in particular, whether they will always prevail over Facebook’s terms and conditions. It is true that, in the decision, the Board criticizes the platform’s rules on the grounds that they lack the clarity required by international standards. In another recent decision, the Board found that a Facebook standard leads to unnecessary and disproportionate restrictions on freedom of expression from the perspective of international law (while also conveniently stating that the standard is not in line with Facebook’s “values” either). However, the Board has no power other than to decide the dispute submitted to it and to rule on the possible restoration of the disputed post. For the rest, it can only recommend that the platform modify the standards that it deems incompatible with international law, a recommendation that Facebook is not required to follow.

In this case, the Board recalls that, while “political speech enjoys a high level of protection under human rights law because of its importance for democratic debate”, international human rights standards allow for restrictions on freedom of expression where there is a serious risk of incitement to discrimination, violence or other unlawful action. From the perspective of international law, the Board states, rules restricting freedom of expression must meet three requirements: they must be clear and accessible, have a legitimate aim, and be necessary and proportionate to the risk of harm. The Board therefore reviews Facebook’s terms and conditions and their implementation according to these three criteria. On the first point, it considers that Facebook’s rules are sufficiently clear, but criticizes the vague and uncertain nature of the “indefinite” sanction imposed on Trump. On the second, it believes that Facebook’s rules pursue a legitimate goal. On the third (necessity and proportionality), it relies on the criteria adopted by the Rabat Plan of Action, and in particular on a six-point evaluation grid developed under that plan, to assess the potential of speech to create a serious risk of incitement to discrimination, violence or any other unlawful action. It concludes that Facebook’s decision was necessary and proportionate in light of the events of January 6, while noting that a minority of Board members felt it was appropriate to also draw on Donald Trump’s behavior in the months leading up to his suspension (such as the sentence uttered during the events following the death of George Floyd that had so shocked the public and that Facebook had refused to remove: “When the looting starts, the shooting starts.”).

We can certainly welcome the fact that the Board is reviewing, on the basis of international standards, the implementation of Facebook’s terms and conditions. However, this review creates a strange, unprecedented and somewhat baroque legal situation, in which Facebook’s terms and conditions are assessed as if they defined a state legal order. Legality on the platform is determined by “Facebook’s community standards and Instagram’s community guidelines.” In this respect, the decision seems not only to apply the guarantees provided by international law, but also to enshrine this somewhat strange arrangement constituted by the platform’s rules. Moreover, it is only because Facebook decided to incorporate international law norms that they were applied here, as if the platform would not otherwise be subject to any external control.

3. Third: the Board does not consider it useful to grant a special status to political leaders

The Board was quick to dismiss the special status of Donald Trump, who was still a prominent political leader at the time of his suspension. Little attention is paid here to the overwhelming power of large technology companies, which have the ability to silence the incumbent President of the United States, who had just received 74 million votes, based solely on their terms of use. This is understandable: created and appointed by Facebook, the Oversight Board accepts from the outset the idea that the platform becomes, through its moderation policy, an arbiter of democratic debate which controls what can be said publicly by elected politicians.

Facebook had specifically asked the Board to make recommendations on policy towards political leaders, whether they be senior officials, elected officials or candidates for election. Here the Board settles for very general recommendations. Above all, it does not believe it is useful to make a real distinction between political leaders, on the one hand, and, on the other, people who are particularly influential on social networks but who hold no political mandate. The FOB’s reasoning for this conclusion is exclusively risk-based. People with a large audience can indeed, the Board says, generate serious risks, regardless of whether they have official functions: what matters is the degree of influence. The Board therefore insists on the risks and nuisances generated by the speech of political leaders, dismissing out of hand their legitimacy or the need for citizens to be aware of what they say. If there is a risk of harm under international human rights standards, the Board insists, Facebook must suspend the account. But it is up to the platform alone to assess the merits of the suspension! The political leader in question can only turn to the Oversight Board afterwards…

It is regrettable that the Board refuses to draw a distinction between users officially entitled to represent the State and others. Its position is all the more surprising given that Facebook, like other platforms, has adopted special conditions for influential people and politicians, towards whom there is increased tolerance on the grounds that their speech is of “media interest” (“newsworthiness”). The decision also states that high-profile accounts are “cross-checked” to minimize moderation errors (update on October 6, 2021: as recent leaks by whistleblower Frances Haugen revealed, in practice this cross-check program often means that high-profile figures are exempted entirely from any kind of moderation control). This does not prevent Facebook from acting at its own discretion in the presence of repeated violations of its terms and conditions: at the end of March 2021, it blocked Venezuelan President Nicolás Maduro from publishing or commenting for 30 days, on the grounds that he had repeatedly violated Facebook’s terms and conditions by praising an unproven treatment for Covid-19. In such circumstances, one can only deplore the Board’s refusal to rule more specifically on the conduct of political leaders.

Some of the Board’s recommendations can nonetheless be approved: it suggests that the moderation of political content be entrusted to specialized personnel capable of evaluating the political context, and it encourages greater transparency regarding the rules applicable to influential users.

4. Fourth: Facebook can still make progress in terms of transparency

The Board has repeatedly emphasized that Facebook’s rules lack clarity and that their implementation is not sufficiently transparent, particularly with regard to sanctions. It should also be noted that Facebook refused to answer some of the Board’s questions, such as those concerning the design of the platform and its impact on the visibility of Donald Trump’s publications, or general policies towards political leaders. Once again, the Board’s recommendations for increased transparency are welcome, even if they remain very general. It is also worth noting that the Board calls for a proper investigation of Facebook’s potential contribution to the accusations of election fraud and the tensions they have generated. 

5. Fifth: the Board appears less as a countervailing power than as a body in charge of guaranteeing the satisfactory implementation of the platform’s policy

In its decision, the Board adheres to its mandate, as set out by Facebook. It reviews decisions in light of Facebook’s own terms and conditions and the international standards to which the platform has chosen to adhere. But this control remains very limited, restricted to a “legal order” whose contours are defined by Facebook and into which no state law can penetrate. No constitutional or legislative norm is intended to be taken into account here alongside Facebook’s own rules and very general international principles. Is all this sufficient to guarantee the rights of users on the network? While Facebook’s initiative and the way in which the Board intends, as best it can, to carry out its mission are to be welcomed, the fact remains that the platform is not a State. The quasi-judicial control mechanism put in place here is exercised with respect to, and within the framework of, a private company that is not subject to any democratic control and is not accountable to the people. From this point of view, it is difficult for the French observer to see in the Board’s decision an equivalent of the US Supreme Court’s seminal Marbury v. Madison decision, as some commentators have written. It remains to be seen whether, in the long term and within its limited scope, the Board will succeed in ensuring that fundamental guarantees of transparency, integrity and freedom prevail. This is the condition that must be met if the Board is to assert its authority and one day deal, as it aspires to, with appeals against other social networks’ moderation decisions.

Finally, we must not forget that the legitimacy of the Board has been challenged. It is composed of members exclusively appointed by Facebook, according to Facebook’s criteria, and remunerated, albeit indirectly, by the platform (through a trust in which Facebook has invested $130 million). In addition to this problem of independence, the representativeness of the members of the Board, which includes an arguably excessive proportion of Americans (five, or one-quarter), has been questioned. Under these conditions, can we hope that global principles relating to online speech could be developed? Or should we simply expect the consecration of Western, and particularly American, conceptions of freedom of expression? One could just as easily see this Oversight Board as a body in charge of reinforcing Facebook’s original strategy, which is particularly favorable to freedom of expression. It is this fear that led a group of experts to create an informal and competing body, the “Real Facebook Oversight Board”, whose members include the former Estonian president Toomas Hendrik Ilves, Professor Shoshana Zuboff, and Derrick Johnson, the president of the NAACP.

In reality, the importance attached to the Oversight Board and its quasi-judicial function in the U.S. stems largely from the legal vacuum created by the impossibility, for American users, of invoking the First Amendment against moderation decisions or otherwise holding platforms accountable due to the immunity guaranteed by section 230 of the Communications Act. This void, which explains Facebook’s stated objective of building a body of precedents (FOB Charter, Article 2 section 2), is not felt in the same way in Europe and in France, where statutes are currently being drafted in order to grant more legal guarantees to users. The proposed European Digital Services Act (articles 17 and 18) and the French Law reinforcing respect for the principles of the Republic and the fight against separatism (article 19 bis) strengthen the possibilities for internal and external recourse against moderation decisions. In this respect, it will undoubtedly be possible for the FOB’s decisions to be challenged one day before the competent national courts.

In any case, the ball is now in Facebook’s court: the company will have to disclose its position on the Board’s non-binding policy recommendations by June 4, 2021, and then, within six months, make its final decision regarding Trump’s account.

Update on June 4, 2021

By Guillaume Guinard

On June 4, 2021, Facebook announced its decision to suspend Trump for two years, effective from the date of the initial suspension on January 7. It justified this decision through new enforcement protocols which, it claims, address the Oversight Board’s criticisms regarding the open-ended nature of the suspension. In doing so, it asserted that the Board is a “significant check on Facebook’s power”, while acknowledging that it is only so “in the absence of frameworks agreed upon by democratically accountable lawmakers”. Nonetheless, it is clear that the shape of the new enforcement protocols can only be attributed to Facebook. Furthermore, Facebook reserved the right to follow the Oversight Board’s recommendations or not, based on feasibility as well as “disagreement about how to reach the desired outcome”, while committing to justify its decisions in the future. It also committed to making its assessment of newsworthiness more transparent, while concurring with the Board’s opinion that content from politicians should be treated in the same way as anyone else’s. All in all, this response depicts the Oversight Board more as an advisory body in favor of transparency than as the independent body making binding judgments on content decisions that it was originally claimed to be. This might explain why the platform still calls for “thoughtful regulation” in the area, even though such a call is widely regarded as a business strategy: Facebook wants light-touch regulation that would not require significant change for itself while imposing higher compliance costs on smaller competitors.


Florence G’sell is a Professor of Law at the Université de Lorraine and a lecturer at the Sciences Po School of Public Affairs. She holds the Digital, Governance and Sovereignty Chair.

Guillaume Guinard is a research assistant at the Digital, Governance and Sovereignty Chair, a master’s student in Public Policy at Sciences Po Paris and a Philosophy graduate of Glasgow University.