
[INTERVIEW] Platforms and their content moderation responsibility: 3 questions to Amélie HELDT

Amélie P. Heldt is a legal scholar and PhD candidate at the University of Hamburg. She is a Junior Researcher at the Leibniz Institute for Media Research, an associated researcher at the Humboldt-Institute for Internet and Society, and was a visiting fellow with the Information Society Project at Yale Law School in 2019. Her research and PhD project focus on freedom of expression in the digital public sphere.
Interview by Rachel Griffin.

How do you think content moderation policies can be made in a legitimate way? Should social media platforms take more responsibility, or should these policies be made through other institutional structures?

There are several parameters which, in my opinion, should be considered when making content moderation policies (CMPs). First, some types of speech are restricted by law – not in all countries (the US being a notable exception), but in many. Not all speech-restricting laws meet the standards developed in deliberative democracies; those that do not can be used as smokescreens for censorship. But when they do meet the requirements of proportionality under the most careful scrutiny, they should, in principle, be included in CMPs. Regarding CMPs that go beyond the categories of unlawful content, it is crucial to question their legitimacy, because legitimacy is a notion we are less conscious of in the analogue context. “In real life”, we act according to social norms which differ from one context to another and are often implicit – something that is quite different when we communicate on social media platforms. But maybe it is a question of generations? Now that we communicate even more via digital tools due to the pandemic, we might develop equivalent social norms for digital communication.

On social media platforms, CMPs can be legitimate because they are part of the agreement between users and platforms, that is, the terms of use. Users need to know what type of behavior is not permitted and what the consequences can be – in other words, basic transparency standards that some platforms have neglected for a long time (for more details, see the annual index by Ranking Digital Rights). However, it is not enough to make CMPs without considering the bigger picture: some social media platforms have very high user numbers. They are no longer simple third-party hosting services; they have become part of the public sphere and, therefore, a space for public discourse. This raises the question of whether they should be making the rules unilaterally or rather in a concerted manner, if we want CMPs to be legitimate.

The latter is a matter of responsibility, not liability: social media platforms benefit from comprehensive immunity from liability under provisions such as section 230 of the Communications Decency Act (CDA) or article 14 of the e-Commerce directive. Nonetheless, they bear a huge responsibility when it comes to our communication online, which is why they ought to consult with experts from academia and civil society when making their CMPs. Good examples are, inter alia, the Santa Clara Principles, which propose standards for transparency in content moderation, the work done by many NGOs in this field, the reports by the UN Special Rapporteur on Freedom of Expression, and the extensive literature by scholars.

In my research, I focus on the horizontal effect of fundamental rights and how it affects the relationship between platforms and users. This doctrine, developed by the German Federal Constitutional Court since the 1950s, has gained new importance first through the privatization of public space and, nowadays, through the emergence of digital public spheres on social media platforms. Jurisprudence allows us to evaluate the situation according to the specific case brought to court and, at the same time, to develop principles that can be adopted by others.

As more and more content moderation on social media is done by AI systems rather than human moderators, how do you think this shift can be managed in a way that best protects the interests of users?

In terms of definitions, there is a difference between moderating and enforcing, and I think this difference is very relevant here. Moderating speech requires more than just identifying matching hashes; it requires seeing content in context and balancing freedom of expression against other important rights. We cannot keep technology out of the equation and, although the systems are not yet fit for purpose, algorithmic enforcement is probably part of the solution when it comes to removing harmful content at scale. But AI cannot fix the underlying social problems, and it cannot help us develop the CMPs discussed in the previous question. There is a very high risk of deploying systems that remove or downgrade lawful and legitimate speech. As a society, we should not allow social media platforms to “move fast and break things” when it comes to human rights, and as lawyers, we have a particular responsibility to keep an eye on these matters and to safeguard the principles of democracy and the rule of law.
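To make the distinction concrete, here is a minimal, hypothetical sketch in Python of the kind of hash-matching enforcement mentioned above; the function names and the tiny “database” of flagged fingerprints are invented for illustration. Exact-duplicate matching like this can be automated at scale, but it cannot see who posted something, why, or in what context – which is exactly the gap that contextual moderation has to fill.

```python
# Hypothetical sketch of hash-based enforcement: content is fingerprinted
# and compared against a set of hashes of previously flagged material.
# The hash set below is invented for this example.
import hashlib

KNOWN_FLAGGED_HASHES = {
    # SHA-256 fingerprint of the example byte string b"foo"
    "2c26b46b68ffc68ff99b453c1d30413413422d706483bfa0f98a5e886266e7ae",
}

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 hex digest identifying this exact byte sequence."""
    return hashlib.sha256(content).hexdigest()

def matches_flagged_content(content: bytes) -> bool:
    """Exact-duplicate detection only: the check knows nothing about who
    posted the content, why, or in what context (news reporting, satire,
    counter-speech) – that is what human or contextual review adds."""
    return fingerprint(content) in KNOWN_FLAGGED_HASHES

if __name__ == "__main__":
    print(matches_flagged_content(b"foo"))        # True: exact re-upload of known material
    print(matches_flagged_content(b"foo again"))  # False: any variation slips through
```

Production systems typically rely on perceptual rather than cryptographic hashes, so that slightly altered copies still match, but the underlying limitation is the same: matching identifies known material, it does not interpret it.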

How do you think the EU’s forthcoming Digital Services Act (DSA) will be affected by the Covid-19 crisis, if at all?

I think the Covid-19 crisis shows, once again, how much we depend on digital communication tools and how crucial they are in times of spatial distancing: they are the window to our social life. That is why they should no longer be thought of as mere “services” but rather as parts of the public sphere. Legally speaking, this does not mean we should treat them like state actors, but when conceiving the DSA, lawmakers have to be aware of the nature of the “digital service” provided. Information and data should no longer be perceived only as goods. Moreover, large platforms do not provide one single service; they tend to offer a multitude of services and solutions in an ecosystem, leading to lock-in effects and increasing customers’ dependency. Given all this, an important task will be to redress the competitive disparities between dominant digital platforms and new market entrants. I hope the lawmakers behind the DSA will live up to the expectations expressed by public opinion since the Cambridge Analytica scandal (and other events) and adopt a human-rights-infused regulation.

You can read more of Amélie’s work on her researcher page or by following her on Twitter.