
[STUDENT ESSAY] The role of social media platforms in the current Russo-Ukrainian conflict

By Pauline Fau

The Digital, Governance and Sovereignty Chair will now publish, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.

This blog post features an essay written by a second-year student at Sciences Po (Reims campus) as part of the course taught by Rachel Griffin and entitled ‘The Law & Politics of Social Media’.

The role of social media platforms in the current Russo-Ukrainian conflict (and its implications for our access to information)

On February 24th, 2022, Russia invaded Ukraine. Since then, the violence has escalated into a regional war. This war, however, has opened a new front: social media and the Internet. In what scholars describe as hybrid warfare, information and its manipulation serve as weapons, and with social media increasingly used as a news source, these platforms have become both a tool and a weapon in this war.

Both parties, Russia and Ukraine, have used social media to advance their agendas. Russia has mainly used social media to spread disinformation about the conflict, while Ukraine has used it to seek support from other countries, mainly Western ones.

To analyse the issue of access to information, it makes sense to focus on the most widely used social media platforms: the more widespread a platform is, the more relevant it is as an information source. This blog post will concentrate on Meta (Facebook, Instagram and WhatsApp), YouTube, Twitter and TikTok. Google will also be considered because, even though it is not a social media platform, it is a tech company that shapes our access to information and social media today, and it has taken certain measures relevant to the Russo-Ukrainian conflict.

In this conflict, social media platforms have taken many measures to restrict Russian disinformation, departing from their traditional non-interference stance on the issue. By blocking access to Russian media sources, the platforms have demonstrated their capacity to restrict information, a capacity that could become a threat if exercised to serve political agendas or to conceal necessary information rather than disinformation alone.

This situation raises the question of whether the role of social media platforms in the current Russo-Ukrainian conflict highlights potential broader issues with these platforms as regards global access to information.

Addressing this question requires analysing in more detail the measures social media platforms have taken against Russia, and the threat such measures can pose. Potential solutions to these issues will then be put forward.

I. The measures social media platforms have taken against Russia


Global platforms such as Meta, YouTube, Google, Twitter and TikTok have taken significant measures to restrict Russian use of social media and the spread of disinformation. As the platforms with the largest reach, their measures have affected the greatest number of people.

It is important to point out that the Western platforms (Meta, Twitter, YouTube and Google) and TikTok have taken different stances on the matter. Measures common to the Western platforms have included downranking posts from Russian state-affiliated media: removing them from recommendations on YouTube and Twitter, placing them lower in the Stories feed on Instagram, and so on. They have also banned Russian state-controlled media such as Russia Today and Sputnik from their platforms, thereby removing their accounts and cutting them off from their audiences.

Another significant measure has been banning ads from, and demonetising, Russian state-affiliated accounts. The scope of this measure varies: Twitter has banned all ads in Russia and Ukraine, YouTube and Meta have demonetised Russian state media outlets, and Google has fully stopped selling ads in Russia and prohibited these media outlets from buying and selling ads via its platforms.

Turning to more specific measures, Meta has adapted its platforms’ content moderation policies to avoid “removing content from ordinary Ukrainians expressing their resistance and fury at the invading military forces, which would rightly be viewed as unacceptable”. This has raised controversy, as Russia has accused Meta of Russophobia, claiming the change allowed hate speech against Russians and Russian soldiers to go uncensored.

Google has also banned Russia Today and Sputnik from its apps and search engine, and Apple has done the same for mobile apps. While Google and Apple are not social media platforms, their measures are highly relevant to the concerns of this blog post, as they considerably reduce access to information: they remove news sources from the public, whatever the quality of those sources.

Twitter has also decided that it will no longer recommend government accounts when the regime in question limits access to free information and is taking part in an armed interstate conflict, a measure whose scope seems likely to extend beyond the Russo-Ukrainian conflict.

Finally, TikTok has reacted differently. Unlike the other platforms, the Chinese-owned platform has taken no official stance, and has even been accused of spreading disinformation on the conflict. It has nonetheless blocked Russian state-controlled media for users in the EU and has said it will label content from state-affiliated media. According to Vice News, however, the Russian government has been paying TikTok influencers to post pro-Russian content. On the other side, the US government has also been meeting with TikTok influencers to brief them on spreading accurate information about the conflict.

Therefore, despite the controversies around TikTok, all the platforms mentioned have taken measures to fight disinformation about the conflict, most of them directed at Russian state-affiliated media and restricting their access to, or visibility on, the platforms. While this is being done today for a widely supported and legitimate cause, the same measures could become threatening if used for other purposes.

II. The potential threat of these types of measures on access to information


These measures can indeed be threatening if misused, which is worth bearing in mind when looking at this conflict. Before the war, legislation on misinformation and disinformation was much looser, as was the platforms’ own position. This raises the question of why they suddenly reacted in this way, and whether it could lay the ground for future problems.

It is important to understand the legal context around moderation and false information on social media platforms before the conflict. To do so, we should distinguish misinformation from disinformation. The difference between the two is intent: disinformation is the intentional circulation of false information, whereas misinformation is the circulation of false information by people who do not know it is false. The current conflict mainly concerns disinformation, since Russian media are knowingly circulating false information, such as claims that bombing victims were staged.

Regarding legislation, very little regulates how social media platforms deal with either misinformation or disinformation. The 2018 EU Code of Practice on Disinformation asks platforms to make voluntary efforts against false information, but it is not legally binding, and “voluntary efforts” is very vague. Certain states, such as France, ban some forms of false information, but this is very difficult to regulate, because banning misinformation clashes with freedom of expression, for example when someone expresses a mistakenly held opinion. Because narrowly targeting misinformation is so tricky, binding and constraining legislation is hard to adopt, which explains the lack of laws on the topic.

The same can be said of the platforms themselves, which historically have not made considerable efforts to moderate misinformation. In general, platforms have resisted moderation, meaning the evaluation of posted content and its removal if it does not comply with laws or terms and conditions. Moderation is complicated given the sheer amount of content on the platforms and the need to analyse content in context to evaluate its compliance with the rules.

This is even harder with misinformation and disinformation: it is difficult to establish whether the spread of false information is deliberate, and false information that is initially spread deliberately can then be shared by people who mistakenly believe it is true. Breaking this vicious circle of false information is extremely complicated for social media platforms. Beyond the technical challenge, platforms also tend to favour as little moderation as possible, partly in order to retain as many users as possible.

The current measures, whether ordered by governments or taken by the platforms themselves, therefore stand out from the traditional approach of both to regulating social media. Why they acted now, and what this means, are key factors in assessing the threat such measures may pose. The specificity of the conflict spurred these drastic steps; applied in other contexts, however, they could be very threatening to our societies and our access to information.

This shift in position from governments and platforms can most likely be attributed to the violence of the war, and can be read as a mobilisation against Russia’s “hybrid warfare”. Social media platforms may also see a business incentive, in that they would lose users and public support if they allowed such disinformation to circulate, which would explain why they are acting now when they have not necessarily complied with governmental demands in this way before.

But the same actions, taken in another context or in service of a political agenda, could be extremely harmful. Consider the US government’s meetings with TikTok influencers to “spread accurate information”: if this were done to advance the government’s own political agenda, it would be quite problematic, given these influencers’ audiences – usually young, easily influenced people who would not necessarily recognise the scripted dimension of the content.

The same can be said of information in general. While platforms are currently blocking and downranking disinformation from Russian state-affiliated media, what if they started doing the same for information or media outlets that simply contradict a government or other political figures? Such a restriction of information would not necessarily be noticed, and would therefore influence users’ minds without their knowledge, making it very threatening for society. Similarly, if Google started removing sources for political reasons, as it has done for Russia Today and Sputnik, this would become quite problematic, since it restricts our access to information and Google is the dominant search engine today.

Social media platforms have restricted content at governments’ request in the past, and have faced backlash for it: Facebook was criticised in 2020 for censoring posts critical of the Vietnamese government, and Facebook and Twitter took down posts criticising the Indian government’s coronavirus policies. While such censorship is concerning, it can remain limited as long as users are aware of it, since platforms that start censoring protests against human rights abuses risk boycotts from their users. But as the Guardian rightly points out, complying with certain authoritarian countries’ requests is also a way for platforms to avoid being banned in those countries, and therefore to avoid losing profits.


III. What can be done to protect access to information from abuses by social media platforms?


As the examples above show, these issues of censorship and access to information are not new, but the Russo-Ukrainian conflict has made the threat more visible. Certain principles may nevertheless help in moving away from it.

An essential principle that needs to be developed further is transparency. It is important for two reasons: it enables users to exercise a certain check on tech companies, and it can help users avoid being manipulated by algorithms. By being aware of the actions of tech companies and their platforms, users can “control” these companies, for instance by boycotting platforms whose methods they disagree with. And by being aware of a platform’s practices, such as how its algorithms work, users can avoid being manipulated or steered into thinking a certain way, and can distance themselves from the content they see. For example, more and more information is surfacing about social media algorithms designed to be divisive and to spread misinformation. The more users are aware of this, the more critical distance they can keep from the content they see, and they can even seek outside content to corroborate or nuance what they have read.

In addition to transparency, reducing platform monopolies is essential to limiting the threat posed by tech companies and their actions. The more people rely on a single platform, the more they are passive victims of its actions, with few resources to counter them. A multiplicity of platforms can create more diverse content adapted to users’ different needs, and it gives users the opportunity to switch platforms if they are dissatisfied with one. By being less dependent on one platform, users can seek diversity of content and sources elsewhere, and are therefore less subject to the decisions of tech companies.

To conclude, the measures taken by social media platforms in the Russo-Ukrainian conflict highlight problems of access to information and censorship that would arise if platforms began responding to governmental requests and concealing information without users being made aware of it. Two essential tools for fighting this are transparency and the reduction of tech company monopolies, which would enable users to switch platforms if they are dissatisfied, or to seek greater diversity of sources and content.

It is, however, important to highlight that the concerns put forward here come from a Western, liberal context. In authoritarian countries, many social media platforms are banned, content is censored and access to information is limited, in contrast with the focus here on countries where platforms respond relatively little to governmental influence and where individual freedoms such as freedom of expression are prioritised. Moreover, scandals such as Cambridge Analytica suggest these concerns are already partly a reality, though hopefully not yet a widespread one.

Pauline Fau is a Bachelor student at Sciences Po (Reims campus). Pauline is interested in politics and business, and in the interaction between corporations and political governance, which led her to focus on social media companies and political decision-making.

Established in 2010, the Reims campus offers two distinct programmes: one focussing on transatlantic relations between North America and Europe, and the other oriented towards Africa, examining the continent’s relations with Europe.