Internet Censorship, Surveillance and Algorithmic Governmentality. An Interview with Félix Tréguer

12/02/2020
Félix Tréguer. CERI.

Félix Tréguer is a postdoctoral researcher at CERI Sciences Po and an associate researcher at the CNRS Centre for Internet and Society. At CERI, he is part of the French research team of the ANR-ORA project “GUARDINT: Oversight and Intelligence Networks: Who Guards the Guardians?” He is the author of L'utopie déchue : une contre-histoire d'Internet, XVe-XXIe siècle (Fayard, 2019). Félix answers our questions on his research interests and focus, and presents the aims and scope of the project he works on, GUARDINT.

Can you tell us a bit about your academic background: what was the subject of your PhD and what were your thematic (and geographic, if any) areas of focus?

After a Master’s degree in Public Affairs at Sciences Po, I worked as a Political and Legal Analyst at La Quadrature du Net, a French NGO defending rights and liberties in the digital environment, an organization in which I continue to serve as a volunteer today. During my time at La Quadrature, I had the chance to take part in the political and legal controversies concerning the regulation of copyright on the internet, online freedom of expression, and network neutrality. But from the first months of my work with La Quadrature, I also felt the need to take some distance from our work, to break with the hectic rhythm of activism in order to take the time to think. During the summer of 2010, I spent three months doing research at the Berkman Center for Internet and Society (Harvard University), an interdisciplinary centre specialising in the study of internet-related socio-political issues. This very rich experience convinced me to start a PhD at the EHESS.

My PhD (2012-2017) aimed to situate the contemporary debates on the censorship and surveillance of internet communications within the long history of the public sphere. For this reflexive work, I started from the history of the “bourgeois” public sphere defined by Jürgen Habermas, while working toward a critical appropriation of the concept in order to address some descriptive shortcomings pointed out by historians, as well as the issues raised by his normative conception of democratic debate. Combining history, political theory, law, and the study of media and computing technology, I sought to trace the political disputes over the different parameters that structure the media public sphere, while also considering power practices that are often treated separately (censorship, surveillance, propaganda, secrecy, centralization of the means of communication).

You have recently published a book called L'utopie déchue: une contre-histoire d'Internet (XVe-XXIe siècle) (Fayard, 2019). What is the main thesis of your book?

The book is the result of my PhD. As the title suggests, it examines the failure of the founding utopias of digital activism, of which I was a part. Indeed, as early as the 1990s, there was the idea that the internet represented a space escaping the sovereignty of states and markets, a “new electronic frontier” where emancipatory communication practices could bloom and contribute to the process of democratization. The internet is an extremely complex and pluralist sociotechnological apparatus, and it of course remains an instrument of emancipatory practices. But after ten years working on these issues, I can only observe, with the Committee for the Liquidation or Subversion of Computers (a neo-luddite group that was active in the region of Toulouse, France, during the early 1980s), that the internet is also and above all “yet another tool, a particularly efficient one, serving the dominant”. The other utopia that my research has pushed me to question is that of human rights placed at the top of the hierarchy of norms: however indispensable they may be, I do not consider that they provide a lasting and effective safeguard against states’ illiberal practices.

In order to deconstruct this double utopia, the book presents a combined history of the state, of social movements, and of communications technologies, following the interactions between practices of resistance and practices of power since the fifteenth century and the “invention” of the printing press. This long-term history shows that the forms of control over the media public space follow the three “economies of power” identified by Michel Foucault: law, discipline, and security. In fact, the digital public sphere constitutes, just like the migration policies analysed by many of my colleagues at CERI, an excellent opportunity to observe the transformation of the exercise of power in a context where national sovereignties are destabilized by neoliberal globalization. In order to re-establish effective policies of surveillance and censorship given the acceleration of transnational data flows, representative regimes have strayed from the rule of law and partnered with private actors who master the digital infrastructure. The massive surveillance of internet communications, or the forms of extra-judicialization and automation of censorship policies, are illustrations of this ongoing process.

What do you mean by “counter-history”?

In his 1976 lectures at the Collège de France, entitled Il faut défendre la société (translated into English as Society Must Be Defended), Michel Foucault referred to the major historiographical rupture brought about by the Enlightenment: before, Foucault said, history was that of kings and their power, a history of lineage and marriage, a history of royal symbolism. Despite the efforts of historians and professors to revive a critical history, in many ways history continues to contribute to staging the legitimation of power, in particular within the educational system.

And yet, against this history of royalty told by royalty, Foucault reminded us that the advocates of the eighteenth-century enlightened bourgeoisie imposed a counter-history: one that reactivated the conflicts between monarchs and peoples and that, at the same time, no longer had the effect of homogenizing the political community by arousing fascination, but instead undertook to revive the memory of the struggles, of “the war that rumbles under peace”.

Counter-history seeks to underline the contingent nature of the political order and to show how it results from power relations. In adopting this term, I have taken up these strategies and sought to make them work critically, against the official history of our “liberal democracies”. I have also sought to reactivate the forces of the defeated, that is, of activists defending a democratic public sphere. In my book, I aim to highlight the contradictions of representative regimes, and the way in which the promise of democracy is constantly undermined by practices that deny political equality, call liberties into question, or nullify their effects.

What do you currently focus on in your research?

In keeping with what I have done since 2015, when I joined CERI as a researcher for the ANR UTIC project, I continue to explore the controversies surrounding the regulation of large-scale digital platforms in Europe and North America, be it the fight against terrorist propaganda or state access to the data stored by these platforms. These debates show an ongoing negotiation process through which state security apparatuses integrate private companies in order to restore effective forms of control over digital communications, companies which, for the most part, and unlike other private players such as telecommunications operators, used to be outside their sphere of influence.

More recently, I have also started working on the development of algorithmic governmentality in the urban public space, in light of the many Smart City projects and their security variants, what the industry now calls the “Safe City”. A lot has been said about the development of facial recognition in China, or about predictive policing applications in the United States. After years of research and development paid for by public money, the use of Big Data and artificial intelligence technologies by city police has spread in France and other European countries. Current projects mix unsophisticated prototypes of predictive policing with real-time surveillance of social networks, as well as automated video surveillance designed to detect suspicious behaviour on the streets. With La Quadrature du Net, I helped develop a research-action project called technopolice.fr, launched in September 2019. The campaign aims to document these deployments while allowing those who are interested to mobilize around these issues.

Technopolice.fr, 2019.

These two fields help me reflect upon the process of automation within bureaucracies, and the resistance it generates. After the trend of new public management in the 1980s, which had already contributed to profoundly transforming the bureaucratic field, that field is today governed by the paradigm of “data governance”. While the introduction of analysis and classification algorithms in certain sectors such as security and intelligence, higher education, or social welfare is already the subject of considerable controversy, recent advances in artificial intelligence techniques point to an ever-increasing delegation of data processing, categorisation, and decision-making tasks to computer programmes. These new technologies make a qualitative leap in automation technically possible and economically sustainable (budgetary rationality is often put forward to justify these deployments), thus contributing to illiberal forms of government.

You are currently working at the CERI for a research project funded by the ANR, called "GUARDINT. Oversight and Intelligence Networks: Who Guards the Guardians?" Can you briefly tell us what the main objectives of this project are and how they meet your own research interests?

GUARDINT aims to shine a critical light on the theoretical foundations, the history, and the practices associated with the democratic oversight of state surveillance in the digital age, and more particularly of surveillance as it is enacted by intelligence services. This critical perspective goes hand in hand with the project’s research-action dimension, aimed at monitoring the legal rules framing state surveillance and at addressing the great shortcomings in this field. Among these shortcomings is the gap between transnational surveillance practices and nationally framed oversight systems (for instance, the sharing of data between French intelligence services and their counterparts abroad is not subject to any independent control). Under Professor Didier Bigo’s leadership, we work with our German partners from the WZB Berlin and Stiftung Neue Verantwortung (SNV), as well as with our colleagues from King’s College London. Here at CERI, GUARDINT continues the work done on post-Snowden controversies regarding internet surveillance. We are starting with a sociogenesis of democratic oversight, studying the scandals that, since the 1960s, have helped shed light on practices covered by state secrecy and have thereby contributed to the creation of oversight mechanisms. We also plan to focus on the making of surveillance technologies through research and industrial policies, which seems to constitute an unthought-of aspect of the democratic oversight of state surveillance.

GUARDINT comes within the scope of my research and allows me to consider a question that seems central to me: that of the “procedurization” of sociotechnological controversies (that is, the introduction of procedures and of institutions to implement them), and in this case of those linked to computing. My previous research has led me to observe that the “procedural” approaches aimed at framing the negative effects of IT have proven barely effective. The sociohistorical perspective we have adopted within GUARDINT could well confirm these observations and produce critical resources for questioning the conventional ways of managing controversies and other technopolitical crises. This seems all the more necessary given that, in the face of artificial intelligence and the concern it causes, we are witnessing a revival of debates on the “ethics” and “transparency” of algorithms, including within the academic world. Yet such procedural arrangements had already been put forward in the 1970s, with the first controversies linked to computerization, and it does not seem that they have been able to meet the objectives they were assigned at the time.

Interview & photo by Miriam Périer, CERI.
