There are several parameters which, in my opinion, should be considered when making content moderation policies (CMPs). First, some types of speech are restricted by law – not in all countries, the US being a notable exception, but in many. Not all speech-restricting laws meet the standards developed in deliberative democracies, and those that do not can serve as smokescreens for censorship. But when they do meet the requirements of proportionality under the most careful scrutiny, they should, in principle, be included in CMPs. Regarding CMPs that go beyond the categories of unlawful content, it is crucial to question their legitimacy, because legitimacy is a notion we are less aware of in the analogue context. “In real life”, we act according to social norms which may differ from one context to another and are often implicit – something that is quite different when we communicate on social media platforms. But maybe it is a question of generations? Now that the pandemic has pushed even more of our communication into digital tools, we might develop equivalent social norms for digital communication.
The latter is a matter of responsibility, not liability: social media platforms benefit from comprehensive immunity from liability under provisions such as Section 230 of the Communications Decency Act (CDA) or Article 14 of the e-Commerce Directive. Nonetheless, they bear a huge responsibility when it comes to our communication online, which is why they ought to consult with experts from academia and civil society when making their CMPs. Good examples are, inter alia, the Santa Clara Principles, which propose standards for transparency in content moderation, the work done by many NGOs in this field, the reports by the UN Special Rapporteur on Freedom of Expression, and the extensive scholarly literature.
In my research, I focus on the horizontal effect of fundamental rights and how it affects the relationship between platforms and users. This doctrine has been developed by the German Federal Constitutional Court since the 1950s; it gained new importance first through the privatization of public space and, nowadays, through the emergence of digital public spheres on social media platforms. Jurisprudence allows us to evaluate the situation according to the specific case brought to court and, at the same time, to develop principles that can be adopted by others.
In terms of definition, there is a difference between moderating and enforcing, and I think this difference is very relevant here. Moderating speech requires more than just identifying matching hashes: it requires seeing content in context and balancing freedom of expression against other important rights. We cannot keep technology out of the equation and, although the systems are not yet fit for purpose, algorithmic enforcement is probably part of the solution when it comes to removing harmful content at scale. But AI cannot fix the underlying social problems, and it cannot help us develop the CMPs discussed in the previous question. There is a very high risk of deploying systems that would remove or downgrade lawful and legitimate speech. As a society, we should not allow social media platforms to “move fast and break things” when it comes to human rights, and as lawyers, we have a particular responsibility to keep an eye on these matters and to safeguard principles of democracy and the rule of law.
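The limits of pure hash matching can be illustrated with a minimal Python sketch (the “blocklist” content here is hypothetical): an exact cryptographic hash flags only byte-identical copies, so a trivially altered file slips through, and a match by itself carries no information about the context in which the content appears (e.g., journalistic reporting on the same material).

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the hex SHA-256 digest of a piece of content."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical database of hashes of known unlawful content.
blocked_hashes = {sha256_hex(b"known unlawful image bytes")}

def matches_blocklist(content: bytes) -> bool:
    """Exact hash matching: flags only byte-identical copies,
    with no awareness of the context the content appears in."""
    return sha256_hex(content) in blocked_hashes

# A byte-identical copy is caught...
print(matches_blocklist(b"known unlawful image bytes"))   # True
# ...but a trivially altered copy is not.
print(matches_blocklist(b"known unlawful image bytes!"))  # False
```

Perceptual hashes tolerate small alterations better than cryptographic ones, but the deeper point stands either way: a hash lookup decides nothing about lawfulness or legitimacy in context.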
I think that the Covid-19 crisis shows – once again! – how much we depend on digital communication tools and how crucial they are in times of spatial distancing: they are the window to our social life. That is why they should no longer be thought of as mere “services” but rather as parts of the public sphere. Legally speaking, this does not mean we should treat them like state actors, but when drafting the DSA, lawmakers have to be aware of the nature of the “digital service” provided. Information and data should no longer be perceived only as goods. Moreover, large platforms do not provide one single service; they tend to offer a multitude of services and solutions in an ecosystem, leading to a lock-in effect and increasing customers’ dependency. Given all this, an important task will be to redress the competitive disparities between dominant digital platforms and new market entrants. I hope the lawmakers of the DSA will live up to the expectations voiced by public opinion since the Cambridge Analytica scandal (and other events) and adopt a human-rights-infused regulation.