by Dominique Boullier, CEE
What has happened to our media system over the past decade? How have we reached the current level of ‘media warming’, a general inflammation of our minds driven by high-frequency stimulation and reaction? The ‘cool’ social media of the 2000s and web 2.0 have undergone a radical transformation and now monopolise our attention, both for receiving and for producing messages, particularly pictures and videos.
Dominique Boullier, sociology professor at the Centre for European Studies and Comparative Politics (CEE), adopts a pluridisciplinary approach in his book ‘Comment sortir de l’emprise des réseaux sociaux’ (How to escape the grip of social networks; Le Passeur, 2020), a useful antidote to the toxic effects of the way digital networks and smartphones currently operate.
Shortly before 2010, Facebook, Twitter and YouTube began monetising themselves by selling advertising space to brands, offering ever more specific targeting of audiences whose engagement rate (responsiveness), and not merely exposure, they claimed to measure. Every interface was modified to this end. The Retweet button, invented in 2009, became the primary tool of virality (no need to copy the tweet, or even read it), thereby establishing a hierarchy between tweets. Other examples include the hashtag, which condenses a theme into a single word; messages that, even with a 280-character limit, remain very brief; and trending topics, which are defined by the speed with which Twitter accounts react to a tweet, not by its popularity. I use the term ‘the atomic clock of public space’ to describe this mechanism specific to Twitter, insofar as even journalists end up selecting and prioritising their subjects according to Twitter indicators. In the same pre-2010 period, the Like button appeared on Facebook and other social platforms, and the display of view, like and share counts, known as ‘vanity metrics’, became widespread. Everyone chases these reputation indicators, from companies to ordinary users, politicians, celebrities and even researchers!
Our attention has gradually become captive to these platforms, which have become quasi-obligatory and which use methods formalised in what is called captology. All these mechanisms were already known to cognitive science, which has studied the diversity of attention regimes according to their duration and intensity. I propose to distinguish four main attention regimes.
While these attention regimes have been amplified by digital technology, the alert regime has gained importance with social media. Social networks capture our attention in two ways:
1/ the filter-bubble effect, which puts forward posts that confirm our vision of the world, our habits and our loyalties, exploiting what is known as confirmation bias. YouTube’s algorithms thus invite us to watch similar videos for hours;
2/ the alert effect, which promotes new things (‘novelty score’), even if this means proposing something shocking, disgusting or scandalous. This operating mode notably explains why fake news spreads faster and further than other messages, not so much because of its intrinsic deceit, but because of the shock factor that attracts our attention, encouraging us to react and respond, even with criticism.
These alert effects, produced by so-called artificial ‘intelligence’ algorithms, favour whatever makes us react, in order to increase our engagement rate.
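The interplay of the two effects can be caricatured in a few lines of code. The ranker below is purely illustrative — the weights, the `novelty` signal and the affinity scores are invented for the example and do not describe any platform's actual algorithm: each post is scored by its similarity to the user's past behaviour (the bubble) plus a bonus for novelty or shock value (the alert).

```python
# Toy sketch of bubble + alert feed ranking (hypothetical, not any real platform's code).
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    novelty: float  # 0..1: assumed "novelty score" for new/shocking content

def rank_feed(posts, user_topic_affinity, bubble_weight=0.7, alert_weight=0.3):
    """Score = confirmation (affinity with past behaviour) + novelty bonus."""
    def score(post):
        confirmation = user_topic_affinity.get(post.topic, 0.0)  # filter-bubble term
        return bubble_weight * confirmation + alert_weight * post.novelty  # alert term
    return sorted(posts, key=score, reverse=True)

posts = [Post("cats", 0.1), Post("scandal", 0.95), Post("gardening", 0.2)]
affinity = {"cats": 0.9, "gardening": 0.3}

feed = rank_feed(posts, affinity)
# With the bubble dominant, familiar content comes first: cats, scandal, gardening.
alert_feed = rank_feed(posts, affinity, bubble_weight=0.0, alert_weight=1.0)
# With only the alert term, the shocking post tops the feed.
```

Turning up `alert_weight` is, in miniature, what favours shocking or scandalous content over familiar content; the real systems tune such trade-offs on engagement signals rather than fixed weights.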
After calculating profiles and ‘patterns’, these two effects, bubble and alert, are sold to brands as attractive advertising slots. What is perhaps most surprising is that the commercial performance of these investments is impossible to demonstrate. The data collected, known as analytics, are increasingly opaque and inaccessible even to the brands themselves, and yet brands continue to pay improbable sums to the two dominant platforms (Google and Facebook account for 75% of online advertising) through an equally opaque auction system. One day this online advertising bubble will burst, but for now it still serves to send signals to investors, because brands have themselves become financial assets whose value now exceeds their purely commercial role. As with all bubbles, no brand wants to be the first to break away.
These choices and excesses have had serious consequences for our public space, at times making public debate impossible. Beyond the volume and visibility given to fake news and hate-fuelled content, the pace of permanent alert imposed by these platforms encourages immediate reaction rather than considered response, and this ultimately contaminates public space. It is true, however, that these platforms were never designed for democratic discussion and deliberation.
For example, when the Arab Spring or the social movements of 2019 swept various countries, the platforms proved useful for raising the alarm and enabling coordination, but they were of no help in elaborating a programme or organising debates.
It is therefore time for a radical reduction of our dependency on these platforms.
To counter the domination of these media, some free-software communities have organised their own networks, such as Mastodon.
Jimmy Wales, co-founder of Wikipedia, is also testing WikiTribune Social. Other similar initiatives should be encouraged by governments if they want to avoid the collapse of the media system that supports the public space, which has been developing for more than 200 years.
However, national authorities have been slow to react, having themselves been seduced by these vanity platforms, which allow political figures to believe in their growing popularity through followers (who can be bought) and likes and retweets (which bots can provide), rather than winning elections on their political agendas! Furthermore, the 2010s were a period of dogmatic liberal exaltation of the platform model, including platforms, like Uber, that openly professed their intention to sabotage established urban rules (drivers’ social status, company liability in the event of a dispute, etc.). Nothing was to stand in the way of such supposedly innovative businesses, even when the platforms closed off markets through dumping or acquisitions. It was not until the Cambridge Analytica scandal in 2018, followed by the Capitol attack and the suspension of Trump’s accounts, that the world’s politicians began to see a threat to their institutions and their own positions. They now claim to regulate: rather vaguely in Europe with the Digital Services Act, and more radically in the USA with the recent SAFE TECH Act, which Biden’s team is trying to get through Congress. It is therefore a good time to break up the ‘Big Five’.
However, a purely industrial (anti-trust) vision of the break-up, on the model of the break-up of the telephone operator AT&T settled in 1982, is no longer sufficient. Separating Facebook from Instagram and WhatsApp, for example, will not be enough to reduce its power, notably in advertising. The opaque advertising-remuneration model must also be dismantled: by implementing independent measurement involving all stakeholders (brands, agencies, measurement organisations, user representatives), and by separating the advertising sales business from the functions it funds, such as search engines or social networks. Social networks must also be forced to become editors and regulators of their content, like all other media channels, and no longer be permitted to hide behind hosting-provider status (see the debate on Section 230 in the USA), which must be reserved for those with a purely technical function. This is likely to be very costly for them, but it would be far more effective than asking them to self-regulate outside any legal framework and without any possibility of sanction, which is what governments are currently attempting.
These platforms were created in a libertarian world with no legal culture. They self-regulate according to the principle of ‘rough consensus and running code’: the code is tested by actually running it, and when things go wrong, apologies are made, as Mark Zuckerberg did throughout 2018 (notably in the wake of the Cambridge Analytica scandal). Bringing these platforms back into a legal framework is a fundamental challenge if citizens are to regain some control over these all-powerful entities. Their trace-capture methods must also be broken up, because these traces are not personal data in the usual sense (time spent on a page, correlations between basic actions such as a click, a like or moving from one page to another), which makes them difficult to govern under the General Data Protection Regulation (GDPR). All of their algorithms must be made auditable, because their sheer size should bring these platforms under public-service obligations.
Finally, breaking up the attention-capturing systems built into interaction design would help to slow the spread of content. It is not necessary, often not possible, and not even desirable to restrict freedom of expression (which can be regulated by law), but there is no reason to accept the frantic pace imposed by a freedom of propagation that the laws of democratic governments were never intended to guarantee. We can attack ‘Free Reach’, breaking the chains of mental contamination, without restricting ‘Free Speech’. Checks on mental speed have become a democratic priority as well as a public-health issue, just as traffic speed did in the past. ‘Code is law,’ said the American lawyer and academic Lawrence Lessig. In other words, the design of interfaces and algorithms should enable propagation to be slowed down.
By implementing a kind of graduated response, the platforms should first be required to provide interfaces that display each user’s number of publications, retweets and reactions (rather than screen time, which is not an indicator of pace); then to do away with vanity metrics; then to display speed regulators that enable individuals to pace themselves; and finally, if no change is observed, to allocate drawing rights. These rights would allow each user to post X contributions, shares, likes, etc., per platform per 24-hour period. Once the quota is reached, the person would have to wait until the next day to post again. This mechanism forces us to select the information we publish and share, restoring the filter role normally played by the media. Since individuals appear to have become media channels in their own right, they must accept full responsibility and become responsible publishers. To achieve this, the platforms must be bound by strict specifications, on the model of the General Data Protection Regulation (GDPR), making access to the market conditional on supplying these regulation instruments to users.
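The drawing-rights quota can be sketched as a simple rate limiter. This is a minimal illustration under assumed parameters (the quota size and the sliding 24-hour window are choices made for the example, not part of the proposal):

```python
# Sketch of per-user daily "drawing rights": X actions per 24-hour window (hypothetical).
import time

class DailyQuota:
    def __init__(self, max_actions_per_day=20, window_seconds=24 * 3600):
        self.max_actions = max_actions_per_day
        self.window = window_seconds
        self._events = {}  # user_id -> timestamps of recent actions

    def try_act(self, user_id, now=None):
        """Record the action and return True if the user still has quota left."""
        now = time.time() if now is None else now
        # Keep only actions still inside the sliding 24-hour window.
        recent = [t for t in self._events.get(user_id, []) if now - t < self.window]
        if len(recent) >= self.max_actions:
            self._events[user_id] = recent
            return False  # quota exhausted: wait for older actions to age out
        recent.append(now)
        self._events[user_id] = recent
        return True
```

A real deployment would persist the counters and might reset at a fixed local time rather than use a sliding window; the point is simply that once the quota is spent, further posts, shares or likes are refused until the next day, forcing the selection the text describes.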
Although the excesses that we have observed might have come as a surprise, our response must be determined according to the cognitive and political processes involved and not merely copy rules suited to the former media regime. We are currently facing a climate crisis that is forcing us to look to the long term and our survival. It would be disastrous to continue to use media that paradoxically encourage us to react in the increasingly short term. Media warming can only aggravate global warming.
Dominique Boullier, sociologist and linguist, is a professor of sociology at Sciences Po and conducts research in the Centre for European Studies and Comparative Politics (CEE). He teaches at the School of Management and Innovation (‘innovation and digital technology: concepts and strategies’), at the School of Public Affairs (‘Pluralism of Digital Policies’ and ‘Network propagation and events management’) and at the Executive School, where he heads the social sciences module of the Digital Humanities master’s programme.
REPUBLICATION CONDITIONS
You may republish this article online or in print under our Creative Commons licence. You may not edit or shorten the text; you must attribute the article to Cogito and include the author's name in your republication.
If you have any questions, please email email@example.com