By Morgan Martel
The Digital, Governance and Sovereignty Chair will now publish, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.
This blogpost features the essay written by a fourth-year student at Sciences Po (Reims campus), as part of the course taught by Rachel Griffin and entitled ‘The Law & Politics of Social Media’.
In the United States, 36% of adults aged 18 to 29 consider social media their primary source of news. Such staggering figures on the prominence of social media as a news source raise important questions about the accuracy and diversity of that information. Nowhere are these concerns better exemplified than in the issue of “filter bubbles”.
As conceptualized by Eli Pariser, a filter bubble is the unique universe of information that each individual lives in online, assembled by algorithms rather than by the user, who does not decide what information they receive or what is filtered out.
Filter bubbles raise concerns about polarization, an increased tendency toward extremist viewpoints, the proliferation of fake news, and the impact these changes may have on democracy. In democracies, the ability to work towards compromise and engage in constructive discourse is essential for effective governance, and it is undermined when political discourse becomes uncivil and polarized.
To address these issues, this blog will explore Eli Pariser’s question of whether “the most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument”. I will argue in the affirmative, given the threat that hindered public discourse poses to democracy.
To understand how filter bubbles exacerbate confirmation bias, it is first important to understand how algorithms create filter bubbles. On social media platforms, recommender systems suggest, display, or rank content for individuals or groups according to criteria set by the service provider, such as relevance, interest, importance, or popularity. These recommender systems are built to serve goals such as personalizing content, encouraging platform engagement, or delivering targeted advertising. In pursuing these goals, the algorithms create filter bubbles when they narrow the range of content recommended to users.
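To make this mechanism concrete, the toy Python sketch below is purely illustrative and does not describe any platform’s actual algorithm; the topic labels, popularity scores, and scoring heuristic are hypothetical assumptions. It shows how ranking candidate posts solely by a user’s past engagement pushes unfamiliar topics and viewpoints down the feed.

```python
# Illustrative sketch only: a toy engagement-driven recommender,
# not any platform's actual algorithm. Topics, popularity scores,
# and the similarity heuristic are hypothetical.

from collections import Counter

def recommend(posts, history, top_n=3):
    """Rank candidate posts by how closely their topics match what the
    user has already engaged with, mimicking how personalization can
    narrow the range of content a user sees."""
    topic_counts = Counter(post["topic"] for post in history)
    total = sum(topic_counts.values()) or 1

    def score(post):
        # Posts on topics the user already consumes score higher;
        # unfamiliar topics are pushed down the feed.
        familiarity = topic_counts[post["topic"]] / total
        return familiarity * post["popularity"]

    return sorted(posts, key=score, reverse=True)[:top_n]

if __name__ == "__main__":
    history = [{"topic": "party_A"}, {"topic": "party_A"}, {"topic": "sports"}]
    candidates = [
        {"id": 1, "topic": "party_A", "popularity": 0.7},
        {"id": 2, "topic": "party_B", "popularity": 0.9},
        {"id": 3, "topic": "party_A", "popularity": 0.6},
        {"id": 4, "topic": "sports", "popularity": 0.5},
    ]
    # The opposing-party post (id 2) is outranked despite being the most
    # popular candidate, because the user never engaged with that topic.
    print(recommend(candidates, history))
```

In this toy example, the post from the opposing party falls out of the feed even though it is the most popular candidate, simply because the user has never engaged with that topic; repeated over time, this is the narrowing effect that produces a filter bubble.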
Even before the influence of algorithms, users are prone to confirmation bias: the tendency to seek out or interpret information in a way that is consistent with their existing beliefs, while ignoring information that contradicts them. By coupling this human tendency with a narrowing of information sources, filter bubbles amplify the effects of confirmation bias, removing opposing sources and viewpoints.
As Kuehn notes, political content especially tends to exploit this vulnerability and reinforce confirmation bias. This becomes more troubling when considering that many social media accounts are not subject to professional editorial policies and are often ideologically biased, meaning people may be caught in a bubble of biased and unreviewed information.
AlgorithmWatch has argued that the increased fragmentation of social groups, intensified by filter bubbles, prevents people from sharing a common experience and understanding each other. This may dangerously reduce the common ground of public discourse that is necessary for a functioning democracy. This blog will explore this threat to democracy through three main issues that are caused or worsened by filter bubbles: polarization, fake news, and increased exposure to extremist content.
Polarization
Amrollahi argues that the self-confirming feedback loop created by filter bubbles will, in the long term, create increasingly polarized and fragmented communities. Normatively, this has grave consequences for democracies, because it makes compromise, an essential feature of effective democratic governance, more difficult and in some cases impossible.
In the US, studies have found correlations between viewing like-minded political content and increased polarization. An Oxford University study found that in America, viewing like-minded content on social media not only has the potential to polarize people or strengthen existing partisan attitudes, but may also increase anger toward the “other side”.
These findings are corroborated by AlgorithmWatch’s study, which finds the societal consequences of polarization and fragmentation “strongly accentuated” by social media filter bubbles. These increases in polarization should be taken seriously, given the hindrance that polarization poses to public discussions and debates.
Skeptics of the dangers of filter bubbles have argued that the data on this topic are too heavily focused on American populations and do not apply to other, less polarized countries. However, these data should be regarded as a cautionary tale for the rest of the world rather than as a uniquely American exception.
First, studies about the damaging effects of filter bubbles on public discourse are extremely relevant to the US given its already polarized political climate. In a world where America’s political and economic well-being has ripple effects across the world, other countries should be concerned about how filter bubbles might be harming the political future of the US.
In addition, the amplification of American polarization by social media algorithms should be taken as a warning to the rest of the world. The US is not the only highly polarized country: a diverse set of countries, including Brazil, India, Poland, and Turkey, have all seen a rise in polarization. Since polarization is not unique to the US and can feasibly happen anywhere, countries should be concerned about the effects of filter bubbles in the event that they too become politically divided.
Fake News
Filter bubbles can also lead to the increased dissemination of fake news. Since filter bubbles reproduce and recommend content similar to what a user has already viewed, they can cut users off from factual news. In turn, this leads to the spread of emotionally charged and biased news that is unlikely to be verified against sources outside the filter bubble.
With the rise of social media as a primary news source, this can shift news consumption away from factual sources and toward emotionally charged, politically biased news. Public discourse that is increasingly based on false information could have damaging effects on policymaking and democracy when fake news infiltrates political systems.
Extremist Content
Any platform that provides a means of communication will inevitably carry some extremist content. What raises concern is the increased amplification of this content through recommender systems and filter bubbles.
Open recommender systems that prioritize engagement are likely to favour content that evokes emotion, and they have been found to play a significant role in the dissemination of extremism alongside other problematic content. Although research in this field is limited, the evidence that does exist suggests that filtering may increase the likelihood of these systems promoting extremist material.
This trend is made abundantly clear when examining the case of YouTube. The YouTube recommender system has been found to prioritize extreme content following a user’s engagement with said content and to actively push extreme content above moderate content in its video suggestions. These findings are even more pertinent when considering that 70% of YouTube views come from recommendations.
The promotion of extremist content is especially alarming in the context of filter bubbles. Homogeneous groups, which filter bubbles increasingly create, are more likely to become extreme in their thinking. As filter bubbles create ever more homogeneous groups, the mere creation of these groups can therefore lead to more extremism.
A marked increase in extremism in public discourse due to filter bubbles can create the perception that some views are more widely held than they are, which legitimizes the systematic harassment of marginalized communities. An increase in harassment based on extremist viewpoints can in no way be said to be a positive for public discourse or democracy.
The impacts of filter bubbles on polarization may be more relevant in already polarized countries. Although empirical evidence regarding the effects of filter bubbles is unclear, it does suggest that customizability increases political polarization.
Given the importance of user-driven choice, and since filter bubbles amplify a user’s pre-existing beliefs, filter bubbles may be more damaging in already polarized countries, where they perpetuate and amplify polarized opinions. Their effects may be more muted in non-polarized countries, where there are fewer polarized beliefs to perpetuate.
Opponents of the dangers of filter bubbles point to the lack of empirical evidence that filter bubbles do in fact drive polarization, fake news, and extremism. An early study of Facebook in 2015 found that algorithms do partially shape a user’s feed, but that the political filtering caused by algorithms is weaker than expected. Nonetheless, the fact that some influence is exerted could be increasingly relevant in already polarized countries, such as the US, where the absence of opposing views and objective news, filtered out by the bubble, will only serve to worsen polarization.
Some also argue that polarization, fake news, and extremism are not caused by filter bubbles but are social issues that would exist with or without them. However, the fact that these issues exist outside social media does not mean that algorithms are not exacerbating them online.
To minimize the negative effects of filter bubbles, three approaches have been suggested: 1) alerting people about filter bubbles, 2) bursting the filter bubble, or 3) a hybrid architecture. Alerting people about the bubble focuses on using tools that identify and alert users when they are in a filter bubble. However, none of the current models have suggested frameworks for showing users the existence or significance of their bubble in real-time. Techniques for bursting the filter bubble include bypassing or changing algorithms, extending users’ awareness of the bubble, and encouraging users to explore different ideas that will lead them to break their own filter bubble. Hybrid architectures propose an integrated tool that acts to both alert users about the potential of filter bubbles and to break those filter bubbles.
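As an illustration of what such a hybrid tool might look like, the Python sketch below is hypothetical: the diversity measure, alert threshold, and mixing ratio are assumptions rather than features of any existing system. It flags a narrow feed to the user and, at the same time, mixes a share of out-of-bubble items into the feed.

```python
# Illustrative sketch only: one way a hybrid "alert and burst" tool could
# work. The diversity measure, threshold, and mixing ratio are hypothetical
# assumptions, not a description of any existing system.

from collections import Counter
from math import log

def topic_diversity(feed):
    """Shannon entropy of the topics in a feed: low values indicate a
    narrow, bubble-like feed."""
    counts = Counter(item["topic"] for item in feed)
    total = sum(counts.values())
    return -sum((c / total) * log(c / total) for c in counts.values())

def burst(feed, out_of_bubble, threshold=0.5, mix_ratio=0.25):
    """Alert the user when diversity falls below a threshold and mix
    a share of out-of-bubble items into the feed."""
    if topic_diversity(feed) >= threshold:
        return feed, None  # feed is already diverse enough; no alert

    alert = "Your feed looks narrow: adding some different perspectives."
    n_extra = max(1, int(len(feed) * mix_ratio))
    return feed + out_of_bubble[:n_extra], alert

if __name__ == "__main__":
    feed = [{"topic": "party_A"}] * 8          # a maximally narrow feed
    alternatives = [{"topic": "party_B"}, {"topic": "local_news"}]
    new_feed, alert = burst(feed, alternatives)
    print(alert, len(new_feed))
```

The design choice here is simply to combine the first two approaches in a single step, in the spirit of the hybrid architecture described above: the user is told their feed is narrow at the same moment that more diverse content is introduced.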
Although it is outside the scope of this blog to determine which solution would be best, it is within its scope to say that implementing a solution to filter bubbles that functions across platforms and algorithms is essential for protecting public discourse and democracy. Filter bubbles created by algorithms have the potential to limit a society’s ability to engage in public discourse.
Filter bubbles draw on the human tendency toward confirmation bias and amplify it by creating an online world devoid of opposing opinions. By amplifying confirmation bias and removing opposing viewpoints, filter bubbles can increase polarization, fake news, and extremist rhetoric. These effects may be more significant the more polarized a country already is, and less relevant in relatively non-polarized countries. Although the harms of filter bubbles may be most acute in polarized countries such as the US, safeguards against them should be broadly adopted, since any country could become polarized.
To do this, innovative solutions need to be adopted that inform people when they are in filter bubbles and that break those bubbles, either algorithmically or by leading users to break them themselves. However, this topic requires significantly more empirical research analyzing the effects of filter bubbles on public discourse and democracy across different social media platforms and in countries outside the US.
Morgan Martel is a 4th year student studying law with a minor in business at Carleton University in Ottawa, Canada. She has also earned a Certificate of Social Sciences and Humanities from the Sciences Po Paris Campus de Reims. She aspires to continue her studies in law and to specialize in environmental law issues.