[INTERVIEW] Rethinking social media regulation: 3 questions to Serge Abiteboul and Jean Cattan

Serge Abiteboul is a member of the board of the French telecommunications regulator ARCEP. He is also a senior researcher at the Institut National de Recherche en Informatique et en Automatique (Inria). He holds a PhD from the University of Southern California.
Jean Cattan is an advisor to the Chairman of ARCEP and adjunct faculty at Sciences Po Law School. He holds a PhD in public law and is a graduate of the College of Europe.
Interview by Rachel Griffin.

You have written about a potential ‘third way’ for social media regulation (advocated by French president Emmanuel Macron), in which content rules are not made only by the state or by private interests but in a more participatory and flexible way. What concrete steps do you think European regulators could take to make this a reality?

This third way we are discussing has three dimensions.

The first one consists of a European framework setting the objectives to be achieved (non-propagation of hate content and fake news, promotion of media literacy, fight against addiction, respect for the individual, etc.), the overall institutional architecture and the different means at our disposal. Not all social media platforms are subject to the same regime: simply put, only ‘systemic’ platforms are targeted, that is, those which, because of their very large number of users, pose concrete threats to the entire system, to society as a whole.

To ensure the most appropriate implementation of the European objectives and an effective dialogue between all stakeholders, regulators are established at the national level and endowed with the appropriate powers. This is the second dimension.

The primary mission of national regulators is to coordinate the effective co-design of the rules governing social media, in line with the European objectives. All interested stakeholders must be part of this design: social media platforms of course, but also associations, judges, academics, public administrations, Parliament, etc. The success of this collaborative work is a prerequisite for wide acceptance of moderation decisions. In agreement with the different stakeholders, the national regulator also sets objectives for the social media platforms. It is then the responsibility of the platforms to realise those objectives, and in particular to adjust their terms of use to the rules that have been agreed upon.

To make this process effective, national regulators have the means to supervise social media platforms and to verify that they are dedicating appropriate means to implementing the objectives that have been specified. For that, the regulator has the power to collect information on material and human resources, algorithmic procedures, internal processes, statistics relating to moderation, the motivations behind certain decisions, etc.

The regulator also has a central role in sharing the information that feeds the functioning of social media services with the different stakeholders, to enable permanent improvement of the rules, or for social purposes such as the detection and support of users at risk. In particular, the data used to train the machine learning algorithms, and the explanations of algorithmic choices, should be shared, with due regard for the protection of companies’ assets. To set up a new public space that is as open as possible, the regulator therefore itself functions as a social network.

As the third and last dimension, the regulator has strong sanctioning powers. But contrary to what is sometimes envisioned, it is not for the regulator to sanction a social media platform because a particular piece of content was published, or not withdrawn promptly enough; if offences are committed, judges are in charge. On the other hand, the regulator has the means to sanction social media platforms for non-compliance with obligations of means (i.e. when processes, resources or decisions are inadequate to fulfil the goals) and for systematic dysfunctions (i.e. when some of their actions structurally violate the general objectives and obligations, such as information sharing).

How do you think the EU’s forthcoming Digital Services Act (DSA) will be affected by the Covid-19 crisis, if at all?

During this period, the importance of digital technology in society and the economy became even more obvious. We also had confirmation of the enormous power of Big Tech and their systemic platforms. It is one thing to know that a few companies dominate digital markets; it is another to realise that they are starting to dictate their choices to governments. A paradigmatic demonstration was given by Google and Apple which, by controlling access to Bluetooth on smartphones, tried to impose their conditions on states for the management of public health. Whether that was legitimate or not – states can also be liberticidal, of course – it raises questions about the place private companies occupy in democracies.

More and more, they encroach on states’ responsibilities and distort competition between businesses. In doing so, they call into question the relationship between economic power and democracy. It was primarily to preserve democracy from the power of large companies that antitrust laws were first deployed. Today, however, we must face a reality: the tools we have in competition law and elsewhere in ordinary law are not enough to counter Big Tech’s ability to impose their choices on society. From this observation, it is logical for the DSA to provide new tools to preserve competition and reaffirm the authority of governments.

According to the proposals put on the table by the Commission, these tools would first come from a strengthening of competition law itself. From that perspective, the avenues under consideration are now well known: changes to merger law to preserve the capacity to innovate, emergency measures, etc. We believe this will not be enough, perhaps because we are biased by our experience in a regulated sector, telecommunications.

The Commission is also exploring alternative avenues, investigating a second batch of proposals revolving around ex ante regulation. This ex ante regulation would impose remedies on a few targeted companies that play a structuring role in our economy, and in particular in the digital economy. The remedies may be of various kinds: transparency requirements, accounting or even structural separation of activities, data sharing with governments or competitors, non-discrimination obligations, etc. Such remedies are standard in telecommunications law and have shown their full potential to open up monopolistic markets to competition.

The crisis seems to have contributed to changing the focus. Before the crisis, the evolution of competition law was put forward. Now, ex ante regulation is rightly getting a prominent place.

As more and more content moderation on social media is done by AI systems rather than human moderators, how do you think this shift can be managed in a way that best protects the interests of users?

Social media are primarily software services. In particular, these services decide what content is pushed to which users.

Originally, moderation relied primarily on human moderators, working behind closed doors inside private companies. The psychological disorders they suffer have been widely documented. Algorithmic moderation can also keep pace with the massive flow of content and react more rapidly than human moderation. Furthermore, during the Covid-19 crisis, a large part of the army of human moderators was unable to work. Finally, software moderation turns out to be better than human moderation (or at least, not as bad) and is improving. For all these reasons, software is in fact increasingly used for content moderation.

However, it is essential that we – humans – exercise control over the entire process. Typically, human moderators are used to validate the choices made by the AI. In cases of appeal, humans of course have the last word. It is also important to keep in mind that software moderation is mostly based on machine learning algorithms, trained on data corpora constructed by humans. There is no magic; everything is, and should remain, driven by human choices.
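To make this concrete, the sketch below illustrates one way such a human-in-the-loop pipeline could be organised. It is a minimal, hypothetical Python example: the thresholds, function names and data structures are assumptions chosen for illustration, not a description of any platform’s actual system.

```python
# Hypothetical human-in-the-loop moderation sketch (illustrative only).
from dataclasses import dataclass, field
from typing import List

AUTO_REMOVE_THRESHOLD = 0.95   # only very confident predictions act automatically
HUMAN_REVIEW_THRESHOLD = 0.60  # ambiguous cases go to human moderators

@dataclass
class Decision:
    post_id: str
    action: str          # "remove", "keep" or "escalate"
    decided_by: str      # "model" or "human"

@dataclass
class ReviewQueue:
    pending: List[str] = field(default_factory=list)

    def enqueue(self, post_id: str) -> None:
        self.pending.append(post_id)

def moderate(post_id: str, model_score: float, queue: ReviewQueue) -> Decision:
    """Route a post according to the model's confidence that it violates the rules."""
    if model_score >= AUTO_REMOVE_THRESHOLD:
        # Even an automatic removal remains appealable to a human (handle_appeal below).
        return Decision(post_id, "remove", "model")
    if model_score >= HUMAN_REVIEW_THRESHOLD:
        queue.enqueue(post_id)  # uncertain cases are validated by human moderators
        return Decision(post_id, "escalate", "model")
    return Decision(post_id, "keep", "model")

def handle_appeal(post_id: str, human_verdict: str) -> Decision:
    """On appeal, a human moderator always has the last word."""
    return Decision(post_id, human_verdict, "human")

# Example usage
queue = ReviewQueue()
print(moderate("post-1", 0.97, queue))   # removed automatically, still appealable
print(moderate("post-2", 0.70, queue))   # escalated to the human review queue
print(handle_appeal("post-1", "keep"))   # a human overturns the model's decision
```

The point of such an architecture is that the software only acts alone where it is highly confident, while uncertain cases and all appeals are decided by people.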

It is essential to realise the importance of training data for moderation. It must be seen as a public resource and made available to smaller players that don’t have the means to build it. Its construction must involve the whole of society, and its quality and absence of bias must be guaranteed. In moderation, such biases can be particularly harmful because, by deciding what can or cannot be published, this data contributes to defining society itself. The impact on the nature of public space is far too great to be ignored. This implies that the training data used for moderation, as well as the results of moderation, should be carefully analysed and continuously monitored.
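As an illustration of what continuous monitoring of moderation results could look like, here is a minimal, hypothetical Python sketch that compares wrongful-removal rates across groups in a human-audited sample. The sample format, group labels and alert threshold are assumptions made for the example, not a prescribed methodology.

```python
# Hypothetical monitoring sketch: compare wrongful-removal rates across groups.
from collections import defaultdict
from typing import Dict, Iterable, Tuple

def false_positive_rates(
    audit_sample: Iterable[Tuple[str, bool, bool]],
) -> Dict[str, float]:
    """For each group, the share of non-violating posts that were wrongly removed.

    audit_sample yields (group, was_removed, truly_violating) for posts
    re-checked by human auditors.
    """
    wrongly_removed = defaultdict(int)
    benign = defaultdict(int)
    for group, was_removed, truly_violating in audit_sample:
        if not truly_violating:
            benign[group] += 1
            if was_removed:
                wrongly_removed[group] += 1
    return {g: wrongly_removed[g] / benign[g] for g in benign}

# Example: flag groups whose error rate is well above the average.
sample = [
    ("language_a", True, False), ("language_a", False, False),
    ("language_b", False, False), ("language_b", False, False),
]
rates = false_positive_rates(sample)
average = sum(rates.values()) / len(rates)
flagged = [g for g, r in rates.items() if r > 1.5 * average]
print(rates)    # {'language_a': 0.5, 'language_b': 0.0}
print(flagged)  # ['language_a']
```

A regulator or researcher with access to audit data could run this kind of check regularly, which is precisely why sharing moderation data and results matters.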

If you want to dive deeper into Serge and Jean’s proposals for a new social media regulatory framework, you can find their article on ‘The Third Way’ here (in French).