Project Liberty: 5 New Research Projects


Project Liberty: 5 New Research Projects Selected In The Framework Of The Partnership Between Sciences Po And Project Liberty

In response to the call for projects themed "Shaping Our Digital Future: How to Build, Invest In, Deploy, and Regulate New Technologies for the Common Good", Sciences Po and Project Liberty are pleased to announce the recipients of the 2024 research grants. Sciences Po academics were awarded funding to support research that develops evidence towards building technology for the common good:

  • Delimiting Systemic Risks in Social Media Governance: Putting the DSA Into Practice
  • Leveraging Web3 and AI technologies for Governance Innovation in the Public Interest
  • Quantifying the Contribution of Online Platforms to Affective Polarisation
  • New Legal Theories for the Digital: Digital Governance and the Governance of the Digital
  • From Fact to Opinion: Opinions and Intentions in Sharing Misinformation

Detailed presentations

Delimiting Systemic Risks in Social Media Governance: Putting the DSA Into Practice, project led by Beatriz Botero Arcila & Rachel Griffin, Sciences Po Law School

The 2022 Digital Services Act (DSA) establishes a comprehensive new regulatory framework for online platforms in the EU. It reserves the most extensive obligations for ‘very large online platforms’ (VLOPs), defined as those with over 45 million users. These platforms are considered particularly important for the broader media system and for political and democratic discourse. VLOPs are obliged to assess and evaluate ‘systemic risks’ in specified areas (e.g. fundamental rights, polarisation, public health and security); reasonably and proportionately mitigate these risks; and report to the Commission on their mitigation measures.
Importantly, the DSA also introduces mechanisms for vetted researchers to access platforms’ internal data “for the sole purpose of conducting research that contributes to the detection, identification and understanding of systemic risks in the Union”. Policymakers envisage that independent scrutiny from academic researchers and civil society organisations will play an essential role in identifying and defining systemic risks associated with social media, as well as holding VLOPs accountable for effectively mitigating them.
This project will bring together a team of researchers from Sciences Po’s Law School and Médialab to be a central part of that effort. The project aims to monitor and critically evaluate how the systemic risk framework is being implemented, and to position itself as a key resource for regulators implementing and enforcing the DSA. It will provide actionable guidance and resources for regulators, civil society and researchers on how to utilise this framework to strengthen the governance of online media platforms.
We will focus on two main lines of research:

  • How are systemic risks and appropriate risk mitigation measures understood and defined by VLOPs and other stakeholders within the DSA framework?
  • What data should researchers have access to in order to evaluate and monitor systemic risks within the DSA framework?

Leveraging Web3 and AI technologies for Governance Innovation in the Public Interest, project led by Dina Waked, Sciences Po Law School in partnership with Stanford Center for Legal Informatics (CodeX)

Contemporary digital markets are dominated by business models based on proprietary software, which centralise decision-making power in a few hands and allocate revenues to shareholders and management. By contrast, the novel technologies referred to as “Web3” and artificial intelligence (“AI”) come with the promise of fostering new governance structures that enable the transition to open standards and protocols, collaborative decision-making, and a voice for a broader range of stakeholders. Out of step with these new technological frontiers, organisational laws worldwide revolve around the standard corporate form, with asset ownership, centralised management, and a narrow orientation towards shareholder value. Yet this corporate form was crafted in response to the technological affordances of the previous century, when transaction and agency costs prevented the emergence of open-source, collaborative and multi-stakeholder governance mechanisms. Given the technological affordances of Web3 and AI, there is a misfit between technological development and the governance models recognised by the law.
Building on this misfit, the project asks how legal systems can adapt to and foster innovative governance mechanisms. This overarching question splits into three sub-questions: 1) how can novel technologies be leveraged to create associational models that provide a meaningful alternative to the corporate form, 2) how will the transition to open-source models and decomposable tokens affect property rights, and 3) what legal forms will Web3 and AI applications evolve to take?
These questions are topical for several reasons. First, instead of enabling a fairer distribution of economic opportunities and bolstering participatory mechanisms, technological developments have been instrumentalized for anti-democratic purposes, concentrating outsize economic power in the hands of a few. Second, we witness the shortcomings of traditional governance mechanisms in the case of companies active at the frontier of innovation, the OpenAI debacle being only an episode in a series of scandals. These trends render this research exceptionally timely.

Quantifying the Contribution of Online Platforms to Affective Polarisation, project led by Kevin Arceneaux, CEVIPOF in partnership with NGO Build Up

What methods can be used to quantify polarization in online social media platforms in a manner that is comparable across platforms, and can inform platform regulation? What kind of interventions should be employed to mitigate individual-level polarization in social media?
For the first question, this project intends to measure 5 types of polarization for each of 5 major platforms used by the Kenyan population. The platforms studied are Facebook, Instagram, Twitter, YouTube and TikTok. The types of polarization — following Build Up’s prior work characterizing archetypes of polarization on social media — are attitude polarization, norm polarization, interaction polarization, affiliation polarization, and interest polarization. Each of these conceptual varieties of polarization will be formalized into a quantitative measure that can be calculated using data that can be collected by users without the cooperation of platforms.
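As a purely illustrative sketch of what such a quantitative formalization could look like, one established way to score something like interaction polarization from user-collectable data (e.g. a reply or mention network) is the Krackhardt E-I index, which compares cross-camp to within-camp interactions. The camp assignments and edge list below are hypothetical; the project's actual measures are not specified in this announcement.

```python
# Illustrative only: interaction polarisation scored as network homophily
# via the E-I index. Inputs (camps, edges) are hypothetical examples.

def ei_index(edges, camp):
    """E-I index over interaction edges:
    (external - internal) / total, where 'internal' counts interactions
    within a camp and 'external' counts interactions across camps.
    -1.0 = all interactions stay inside a camp (highly polarised),
    +1.0 = all interactions cross camp lines (not polarised)."""
    internal = external = 0
    for u, v in edges:
        if camp[u] == camp[v]:
            internal += 1
        else:
            external += 1
    total = internal + external
    return (external - internal) / total if total else 0.0

# Hypothetical reply network: four users in two camps.
camp = {"A": 0, "B": 0, "C": 1, "D": 1}
edges = [("A", "B"), ("A", "B"), ("C", "D"), ("A", "C")]
print(ei_index(edges, camp))  # 1 cross-camp edge out of 4 -> -0.5
```

A measure of this kind needs only who-interacts-with-whom data, which users themselves can collect, matching the project's constraint of not requiring platform cooperation.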
For the second research question, the project team will design online experiments to understand the main drivers of individual demand for polarizing content and how to diminish this bottom-up demand. The research team also aims to understand the role of societal polarization in social media consumption.

New Legal Theories for the Digital: Digital Governance and the Governance of the Digital, project led by Rebecca Mignot-Mahdavi & Raphaële Xenidis, Sciences Po Law School

Technological disruptions offer valuable opportunities that should be harnessed for the common good. Yet, dystopian media stories about discriminatory machines and disinformation systems have brought to light some of the harms that new technologies can cause (Garcia, 2016). Law-makers around the globe are starting to address these societal and ethical risks with new legislative and regulatory frameworks (European Commission, 2021). Yet, a major question has so far remained unaddressed: How do socio-technical disruptions challenge the epistemological foundations of law-making? In other words, how do new socio-technical systems change the nature of regulatory objects, subjects and paradigms and thereby prompt new modes of law-making?
This project investigates this fundamental question in three steps. First, it examines how algorithmic rationality is a defining feature of the socio-technical re-ordering powered by new technologies (Aradau & Blanke 2022). Second, using three case studies, the project explores how the divergence between algorithmic rationality and legal rationality renders existing legal rules and architectures inadequate. The three case studies, on (1) AI-enabled systems in the security realm, (2) AI-enabled systems in employment and social benefits, and (3) algorithmically supported profiling in online advertising, allow a critical analysis of emerging legal responses to societal and ethical risks. Third, the project examines how attempts to preserve existing legal infrastructures not only lead to the adaptation of substantive rules but also transform prevailing modes of law-making. In so doing, the project critically assesses the impact of technology-induced changes in modes of law-making on prevailing justice infrastructures in society. These shifts not only have legal consequences but also significantly shape law’s construction of the social. Mapping and theorizing these socio-legal transformations is key to understanding how digital governance and its regulation destabilize the normative equilibria underpinning traditional law-making. Overall, this project paves the way for new legal theories for the digital.

From Fact to Opinion: Opinions and Intentions in Sharing Misinformation, project led by Achim Edelmann, Sciences Po médialab

In today’s digital landscape, individuals are frequently bombarded with misleading or false information, which can significantly influence personal beliefs, health choices, political views, and even election results. This has led to calls from governments and civil society for action against misinformation, primarily focused on containing its spread. However, research often overlooks that not everyone who shares misinformation believes it or intends to propagate it; some share it to express disagreement, to raise awareness of its falsity, or simply for humour.
This oversight is critical, as the primary concern is misinformation’s impact on opinions and behaviour. For instance, misinformation surrounding events like the storming of the Capitol or the killing of George Floyd in the U.S. may be shared for various reasons, and it is unclear whether fact-checking efforts alter public perception of, or engagement with, such information. To truly address misinformation’s effects, we must move beyond studying its spread and instead analyze the opinions and intentions associated with sharing it.
In this project, we therefore ask two main questions:
Which opinions and intentions do people express when sharing misinformation? In particular, to what extent do endorsement and rejection of misinformation vary across substantive domains, such as public issues versus private concerns, and across political camps?
Does fact-checking influence opinions and intentions? Given that fact-checking might reduce misinformation's spread but still strengthen supporters’ views, we specifically ask whether it reduces opinion polarization. This will allow us to identify formats of fact-checking that are more effective in mitigating the negative influence of misinformation on opinion polarization.
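One simple, purely illustrative way to operationalize the question of whether fact-checking reduces opinion polarization is to compare the dispersion of opinion scores before and after an intervention: lower variance means opinions have moved toward the centre. The opinion scale (-1 to +1) and the sample values below are hypothetical; the project's actual experimental design and measures are not described in this announcement.

```python
# Illustrative only: opinion polarisation proxied by the variance of
# opinion scores on a hypothetical -1 (reject) .. +1 (endorse) scale.

def opinion_variance(opinions):
    """Population variance of opinion scores; higher values indicate
    more dispersed (more polarised) opinions."""
    n = len(opinions)
    mean = sum(opinions) / n
    return sum((x - mean) ** 2 for x in opinions) / n

before = [-0.9, -0.8, 0.8, 0.9]  # two entrenched camps pre-intervention
after = [-0.4, -0.2, 0.3, 0.5]   # opinions moderated post fact-check
print(opinion_variance(before) > opinion_variance(after))  # True
```

Under a design of this shape, a fact-check that reduced sharing but *increased* this variance would be the scenario the project flags: less spread, yet stronger polarization.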
