
[STUDENT ESSAY] The filter bubble and me: how our voices are restricted by what we see

By Matthieu Lê

The Digital, Governance and Sovereignty Chair will now publish, on a regular basis, the finest essays and papers written by Sciences Po students in the course of their studies.

This blogpost features the essay written by a second-year student at Sciences Po (Reims campus), as part of the course taught by Rachel Griffin and entitled ‘The Law & Politics of Social Media’.

The filter bubble and me: how our voices are restricted by what we see

In 2011, Eli Pariser proclaimed: “the most serious political problem posed by filter bubbles is that they make it increasingly difficult to have a public argument.” Pariser, an author and activist who campaigns for a more democratic use of the 21st century’s technological and digital tools, coined the term “filter bubble” and has actively promoted it. He explained the concept as one’s “own personal unique universe of information that you live in online,” whose content “depends on who you are and depends on what you do.” Such bubbles arise as content deemed especially relevant to the user is highlighted, while less pertinent information is edited out. In December 2009, Google pioneered the practice of using various types of signals – as many as 57 – based on user activity to create a personalized search experience for everyone, a practice that has since become common among most major social media and online platforms. At the time, Pariser’s analysis of the potential effects of data personalization was quite visionary, and many of his predictions have since proved to be at least partially true.
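
To make the mechanism concrete, the sketch below shows – in deliberately simplified form – how a handful of hypothetical user-activity signals might be combined into a relevance score that decides which items a user sees first. The signal names, weights, and data are illustrative assumptions, not Google’s actual ranking system.

```python
# A minimal, hypothetical sketch of signal-based personalization.
# The signal names and weights are illustrative assumptions, not Google's.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    source: str

def personalization_score(item: Item, user_signals: dict) -> float:
    """Combine a few user-activity signals into a relevance score."""
    score = 0.0
    # Topics the user has clicked on before count heavily.
    score += 2.0 * user_signals.get("topic_clicks", {}).get(item.topic, 0)
    # Sources the user follows count a little.
    score += 1.0 * user_signals.get("source_follows", {}).get(item.source, 0)
    return score

items = [
    Item("Tax reform explained", "politics", "outlet_a"),
    Item("New stadium opens", "sports", "outlet_b"),
]
signals = {"topic_clicks": {"politics": 5}, "source_follows": {"outlet_a": 1}}

# Items the user already engages with float to the top; the rest are "edited out".
ranked = sorted(items, key=lambda i: personalization_score(i, signals), reverse=True)
print([i.title for i in ranked])
```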

Why did Pariser conclude that filter bubbles make it increasingly difficult to have a public argument? One notable concern is the inability of users to get a “full picture,” a variety of information from different sides of such arguments. As Pariser put it, users need to be exposed to content that is “uncomfortable, challenging, or important” in order to remain aware of other individuals’ perspectives and see issues from someone else’s point of view. Individuals with contrasting opinions are also often filtered out of each other’s feeds, making it harder for them to come into contact in the first place. This raises concerns from a political standpoint, since political discourse among individuals is often considered a necessary step in decision-making. As political historian Pierre Rosanvallon famously argued, political participation goes well beyond voting and “blends three dimensions of interaction between the people and the political sphere: expression, implication and intervention.” Filter bubbles would inhibit this process by pushing individuals into separate ideological spheres and preventing them from expressing their views to other spheres.

However, Pariser’s theory has faced controversy among experts, some of whom assert that the effects of such filters are exaggerated. For example, a research study on Google News conducted by three German professors argued that while personalization was quite prominent in the app’s functionality, it had little effect on the diversity of content presented to users. Should we therefore agree with Pariser’s statement, or should we view his concerns about filter bubbles’ impact on public arguments as alarmist and exaggerated?

The existing literature on the subject makes it evident that filter bubbles do in fact pose a serious threat to political participation through the restriction of public arguments, even if their effects may not be as severe as Pariser warns. We must then emphasize that the policies and legislation adopted so far concerning filter bubbles have been insufficient to regulate their political and societal impacts.

Tangible Impacts of Filtering Methods on User Discourse

There is significant evidence that algorithmic curation of platform content for individual users has had adverse effects on the diversity of content and thus on the ability of users to hold public arguments. One visible phenomenon of algorithmic curation is the risk of being caught in “echo chambers.” This term refers to being stuck in a metaphorical environment that repeatedly presents similar information, which “is thought to lead to a reinforcement of one’s attitudes when attitude-fitting information is amplified while counter-attitudinal information is missing”. In other words, users are misled into believing that their viewpoint is not only justified but also shared by an overwhelming number of other individuals, since they are constantly fed content that algorithms deem most aligned with the views they already hold. From an epistemological standpoint, we can view this as a form of selection bias, since an individual’s feed on social media platforms essentially consists of a sample of bits of information from around the world. Because they do not provide a randomized sample free of individual biases, these feeds fail to create an accurate representation of society as it exists in reality.
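
A toy illustration of this selection-bias argument, using made-up numbers: when a feed weights posts by the user’s past engagement, the resulting sample can diverge sharply from the underlying population of viewpoints, even when that population is evenly split.

```python
# A minimal sketch of the selection-bias argument, with made-up numbers:
# an affinity-weighted feed over-represents the viewpoint a user already
# holds, compared with a random sample of the same population of posts.

import random
from collections import Counter

random.seed(0)

# Hypothetical population: 50% viewpoint A, 50% viewpoint B.
posts = ["A"] * 500 + ["B"] * 500

# The user's past engagement makes A-posts far more likely to be selected.
affinity = {"A": 0.9, "B": 0.1}

def weighted_feed(posts, affinity, k=100):
    weights = [affinity[p] for p in posts]
    return random.choices(posts, weights=weights, k=k)

print("random sample:    ", Counter(random.sample(posts, 100)))
print("personalized feed:", Counter(weighted_feed(posts, affinity)))
# The personalized feed is dominated by viewpoint A, even though
# the underlying population is evenly split.
```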

This poses a fundamental issue when we consider the political decisions that citizens make, since individuals typically shape their own political views through interactions with other members of their society, accepting some of their beliefs while rejecting others. This is the purpose of public argumentation, but by putting people into separate spheres where they cannot (easily) see what goes on elsewhere, filter bubbles undermine this process.

However, this may appear counterintuitive to some, since social media platforms were initially seen as a way to facilitate public discourse by allowing strangers from different geographic regions or socio-economic groups to interact freely in a virtual space. In fact, some scholars continue to argue that the lack of diversity in user content has alternative causes.

For example, some studies (such as the aforementioned Google News study conducted by Haim, Graefe, and Brosius) argued that online news aggregators tended to over-represent some outlets and conceal others (possibly as a result of search-engine optimization), but that at the individual level, diversity of sources was not necessarily harmed by preference-based filtering; the implication is that policies should target these wider aspects of platforms’ mechanisms rather than individual filtering. These arguments are weak, however. While ensuring that the framework of the platform itself does not produce biased news is of course essential, unchecked user personalization will continue to reduce the diversity of information, because entire topics and areas of interest will be omitted from an individual’s news feed even if articles from those sources are included. Haim, Graefe, and Brosius themselves acknowledged that algorithmic curation could have narrowed news diversity more than individuals’ self-selection of their news exposure, regardless of how the platform selected which outlets to feature.

Moreover, some research has revealed that the role of users’ own actions in creating this separation is often neglected. Many users of sites like Facebook have adopted conflict-averse behavior, leading them to seek out familiarity and reject content or users they perceive as hostile to their lifestyle. After all, a large part of the user experience provided by online platforms is created, whether directly or indirectly, by the users themselves (for example, by joining certain groups or following certain personalities), so their responsibility in this polarization of information cannot be ignored. However, this cannot be a reason to absolve algorithmic curation of blame altogether. Users can choose of their own free will to associate with specific groups while still seeking out different content when they wish; filtering mechanisms take that choice away from them, reinforcing this self-selection without any approval from the user.

It is also evident that filter bubbles can pose a threat to the notion of modern democracy itself. In the Western liberal view of democracy, free and universal elections are not enough to qualify a system as democratic; certain fundamental values must also be upheld and protected. Filter bubbles threaten these values: they allegedly limit individuals’ freedom of choice by imposing filters that violate users’ autonomy, and they threaten the separation of powers and the freedom of the media by enabling platforms to serve the agenda of certain individuals or groups. Modern thinkers like Rawls and Habermas have also espoused a form of modern democracy known as “deliberative democracy,” in which citizens addressing political concerns publicly is a key mechanism of decision-making (and filter bubbles therefore threaten the very core of this model). In this sense, scholars may argue that Pariser’s statement is erroneous because the difficulty of having public arguments is not the most serious political problem per se. However, public argumentation is central to all of these core values in a democratic system, and the digital world is proving to be the main venue for any kind of public discourse, so it would be naïve to ignore just how essential public expression on social media platforms has become.

The Lack of Proper Regulation Concerning Algorithmic Curation

Having emphasized the dangers of filter bubbles for public argumentation, we must now ask whether the laws and policies implemented by our governments thus far have been sufficient to counter the negative effects of user personalization. Unsurprisingly, they have not. Algorithmic curation remains much less regulated than other aspects of social media in both the EU and the US (two regions I have chosen to focus on due to personal familiarity). The EU E-Commerce Directive and the Digital Services Act have addressed concerns about algorithmic recommendations and indicated that liability protections do not apply to recommended content, but they do not yet impose explicit requirements on platforms in this respect.

Therefore, we must look to expand the current legislation by proposing new laws and policies that would create a better legal framework for algorithmic curation. Regulating such features is difficult, however, because it raises the issue of governments interfering too much in the activities of private companies. Additionally, the ability to customize the user experience and filter the enormous volume of online content by personal relevance has been a key selling point for platforms like Google and Facebook. Restricting personalization completely would therefore set these platforms back and forfeit the benefits of the technological innovations of the past decade. After all, without any form of relevance optimization, users would be completely lost in the sea of information uploaded and downloaded every second on the Internet.

We must thus search for a solution that preserves content filtering while ensuring that users are exposed to diverse content, not just content aligned with what they are used to seeing and comfortable with. Naturally, this is rather paradoxical. Some researchers have proposed that, in the same way software has sparked the problem, software could be the answer. In 2013, developers created a tool named Balancer, which tracks users’ reading activity and provides a periodic report pointing out the reader’s tendencies and biases; since then, a number of other tools have been designed with a similar aim. A new law could therefore be proposed requiring major platforms to provide transparent information on biases in user activity and in the content proposed by different apps. However, this could also raise concerns about user privacy and consent, as it would require user search histories to be analyzed by software, which may spark controversy.
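
As a rough illustration of what such a bias report might look like, the following sketch tallies a week of hypothetical reading by viewpoint category and flags over-represented categories. The categories, threshold, and data are assumptions made for the example; this is not how Balancer itself is implemented.

```python
# A minimal, hypothetical sketch of a Balancer-style reading report.
# The categories and the 60% threshold are illustrative assumptions.

from collections import Counter

def reading_report(history: list[str]) -> str:
    """Summarize which viewpoint categories dominate a user's reading history."""
    counts = Counter(history)
    total = sum(counts.values())
    lines = [f"Articles read: {total}"]
    for category, n in counts.most_common():
        share = n / total
        flag = "  <- over-represented" if share > 0.6 else ""
        lines.append(f"  {category}: {share:.0%}{flag}")
    return "\n".join(lines)

# Hypothetical week of reading, tagged by viewpoint category.
history = ["left-leaning"] * 14 + ["right-leaning"] * 3 + ["centrist"] * 3
print(reading_report(history))
```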

When it comes to public deliberation, apps like ConsiderIt and OpinionSpace have offered forums and spaces for users to learn about the trade-offs of different opinions and engage in debates with people whom the software identifies as holding “drastically different” views, and Google Chrome add-ons have even offered to inform users when a webpage has been refuted or contradicted elsewhere on the Internet. While it would be difficult to make such software a legal requirement, building these features into major platforms could be a powerful way to fight by-design information biases.
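
One way such matching could work, sketched below under simple assumptions: each user’s stated positions on a few issues are treated as a vector, and the tool pairs a user with whoever sits farthest away. The opinion scores and the distance metric are illustrative; they are not how ConsiderIt or OpinionSpace actually operate.

```python
# A minimal sketch of how a deliberation tool might pair users with
# "drastically different" views. Opinion vectors and the distance metric
# are illustrative assumptions, not the actual workings of these apps.

def distance(a: list[float], b: list[float]) -> float:
    """Euclidean distance between two opinion vectors (one value per issue)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def most_different(user: str, opinions: dict[str, list[float]]) -> str:
    """Return the user whose stated opinions are farthest from `user`'s."""
    return max(
        (other for other in opinions if other != user),
        key=lambda other: distance(opinions[user], opinions[other]),
    )

# Hypothetical opinion scores on three issues, each from -1 to 1.
opinions = {
    "alice": [0.9, 0.8, -0.2],
    "bob": [0.7, 0.6, 0.1],
    "carol": [-0.8, -0.9, 0.5],
}
print(most_different("alice", opinions))  # -> "carol"
```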

Conclusion

Filter bubbles have provoked much polarized debate over the last ten years, and while their effects on the online user experience have yet to be fully understood, it is undeniable that they have shifted the way we as platform users interact with the world’s information and process it. Through algorithmic curation, these “bubbles” have trapped many users in self-perpetuating spheres that confirm what they already think while preventing them from developing alternative viewpoints. Although there is ongoing debate over how serious an issue this is, we must remember that algorithmic curation was principally created to optimize the monetization of social media platforms and search engines, by boosting engagement and making commercial content more targeted. It must therefore still be restricted by law, as filtering can easily become a tool for corporations to entrench their power over the flow of information.

There are inevitably limitations to the solutions proposed here, but software that makes users aware of their biases is certainly a promising field to explore. Another limitation that must be underscored is that most of the sources addressing filter bubbles are principally focused on online platforms in a Western, liberal democratic context. In other socio-political contexts, however, especially those with “less free media systems,” the effects of filtering take on new dimensions. Filter bubbles can be used by authoritarian governments to extend their grip on their populations, but they can also serve to protect information freedom by facilitating access to more independent views. More research therefore needs to be conducted on the political, economic, and social contexts of algorithmic curation in order to gain a more holistic understanding of this technology’s impacts.

Matthieu Lê is a Bachelor of Arts student in the Sciences Po – Columbia University dual degree program (Reims campus). Matthieu is interested in management and financial services, and he aspires to help find innovative solutions to modern societal and corporate issues. His interest in social media’s omnipresent role in both the public and private spheres inspired him to research filter bubbles and their influence on the uses of these modern technologies.