[INTERVIEW] Fake AI – An interview with Frederike Kaltheuner

By Rachel Griffin

The essay collection Fake AI was released in December 2021 by Meatspace Press. Like all of the publisher’s titles, it’s free to download online and can be read here. The book’s contributors have expertise in fields ranging from academic computer science to tech journalism and art, but the common thread running through the essays is the limitations of the technologies we call ‘artificial intelligence’ – and the dangers of exaggerating their capabilities. The Chair interviewed the book’s editor, Frederike Kaltheuner, about the project and its goals.

The essays in Fake AI address various ways that AI can fail to live up to expectations: because it is measured against flawed and easily gamed benchmarks, because poorly-thought-through technological solutions let institutions sidestep real social problems, or because AI is designed, on flawed assumptions, to achieve tasks that are not actually possible. On the other hand, how would you define successful AI? What principles should people conceiving, designing and using AI systems follow to avoid the pitfalls described in the book?

That’s a really good question. Let me take a step back. The book was at least partially inspired by a talk given by Arvind Narayanan, Associate Professor of Computer Science at Princeton University, who argued that much of what is commercially sold as AI today is actually ‘snake oil’. In other words: we have no evidence that it works, and based on our scientific understanding of the relevant domains, we have strong reasons to believe that it couldn’t possibly work.

This doesn’t mean that AI is a scam, or that all AI applications are snake oil. Some of the technologies that are called AI are not at all snake oil – there has been genuine, remarkable scientific progress, including in areas like image recognition. The misunderstanding arises when people assume that progress in one domain necessarily translates into progress in another, or that artificial general intelligence – the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can – is just around the corner. It’s very much not.


The book has quite a unique design, which a note at the end explains was partly generated using AI systems. Can you tell us a bit about the book’s design, why you chose it and what you think it adds to the book’s message? 

The idea of using a form of AI-constrained design was actually proposed by the book’s designers. For the outline, the designers used a small collection of book scans, as well as an image GAN (Generative Adversarial Network), an unsupervised machine-learning framework in which two neural networks compete with each other to generate visual output. The purpose of using an image GAN is to create new instances by detecting, deconstructing and then reconstructing existing patterns, producing speculations about how they might continue. The outcome is, to be perfectly honest, sometimes frustrating to read. But it raises questions about the limits of AI creativity, and as such, I do think it really does add to the overall theme of the book.
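For readers unfamiliar with the mechanics, the adversarial setup described above can be sketched in a few lines. The following is a minimal, purely illustrative example in Python (PyTorch), not the designers’ actual pipeline; every network size, learning rate and variable name in it is a hypothetical placeholder.

```python
# Minimal, illustrative GAN training step: a generator produces images and a
# discriminator judges whether they look like the real training scans.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64  # hypothetical sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def training_step(real_images):
    # real_images: tensor of shape (batch, img_dim), e.g. flattened book scans
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real scans from generated images.
    noise = torch.randn(batch, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    noise = torch.randn(batch, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

The generator learns to produce images the discriminator can no longer distinguish from the training scans, which is why the output resembles, but never simply reproduces, the source material.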


A lot of literature on AI – including critical literature – assumes that it’s very powerful and will exert various kinds of sinister influence, for example by expertly manipulating our behaviour. As Deb Raji highlights in her essay, in most areas AI systems are actually far from having such advanced capabilities, and they’re more likely to cause harm by going wrong. Why do you think there is such a tendency for companies and institutions to adopt AI systems that are demonstrably unreliable or inaccurate? And what do you hope Fake AI will achieve as an intervention in these debates?


This tendency to overstate the capabilities of AI, even by its critics, was a key motivation for writing this book in the first place. It is a key theme in a lot of my work over the past few years: it’s Orwellian when it works, and Kafkaesque when it doesn’t. We tend to think that harm results exclusively from technology working as intended. But harm can be just as severe when technology doesn’t work at all, or is poorly designed and constructed. It may feel Kafkaesque when you have to prove who you are to a system that doesn’t recognise you. But it’s more than that – it can be incredibly offensive, discriminatory, and lead to consequences that affect someone’s livelihood, or life.

I like Deb Raji’s essay a lot, especially because it comes from someone who has worked in engineering teams. Within policy circles there’s sometimes a reluctance to say that tech is sloppily built, or doesn’t work at all. That’s why so many policy discussions focus on mitigating harms rather than on defining red lines, based not just on risk but also on what AI is simply not equipped to do.

What I hope the book does is show that the Emperor has no clothes.


Your own essay addresses the theme of identity, and the way that AI applications are often designed to rely on rigid identity categories and pseudoscientific assumptions about the nature of social groups. For example, many applications assume that gender and sexuality can be identified based on physiological features, when these are fluid and socially constructed aspects of human experience. What are the social or economic pressures that lead AI designers and users to rely on these reductive categories? 

I don’t know if social or economic pressures are at the core of the problem. There are actually two parts to the problem. One is that AI systems are routinely used to classify, categorise and rank people based on data about them. There is nothing inherently wrong with this, but here’s the issue: there is no neutral or objective way to classify or categorise people. It is always, inevitably, a normative decision, but the in-built assumptions are often hidden or entirely obscured when the classifying and ranking are automated. Because the assumptions are invisible, it becomes harder to challenge or question them. This has far-reaching consequences for all those who are misclassified by automated systems, or whose lived identities are rendered invisible.
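As a purely illustrative aside, the point about hidden assumptions can be made concrete with a toy example in Python; it is not taken from the book, and every field name, category and threshold below is a hypothetical placeholder.

```python
# Illustrative sketch only: even a trivial automated classifier hard-codes
# normative choices that the people being classified never get to see.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    gender: str          # free-text self-description
    years_employed: float

# Normative choice 1: a binary schema; anyone outside it is mapped to a
# catch-all code, i.e. their lived identity is rendered invisible.
GENDER_CODES = {"male": 0, "female": 1}

# Normative choice 2: an arbitrary cut-off presented as an objective "score".
STABILITY_THRESHOLD = 3.0

def classify(applicant: Applicant) -> dict:
    gender_code = GENDER_CODES.get(applicant.gender.lower(), -1)  # -1 = "unclassified"
    stable = applicant.years_employed >= STABILITY_THRESHOLD
    return {"gender_code": gender_code, "stable_employment": stable}

# A non-binary applicant is misclassified by design, and the threshold that
# decides "stability" never appears in the output it produces.
print(classify(Applicant("Alex", "non-binary", 2.5)))
```

Neither the binary schema nor the cut-off appears in the system’s output, which is exactly the kind of invisible normative choice described above.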

A related but distinct issue is the revival of biological determinism and essentialism, that is, the idea that behaviours, interests or abilities are biologically pre-determined rather than shaped by society. We’ve seen how AI techniques have been used to justify biological essentialism – the infamous paper by two Stanford researchers who claimed that AI can detect whether you’re gay or straight (it cannot). There are also applications of AI that simply rely on incredibly ignorant and reductive views about, for instance, gender. A lot of gender recognition software falls into this category, which can only be described as deeply transphobic. And then there’s the outright racist use of AI to ‘detect’ ethnicity based on appearance. This isn’t science fiction: there are face recognition systems that try to do just that. Is it racism, profiteering or ignorance? It’s hard to tell, but whatever the root cause, the consequences are real and significant.

Another recurring theme in the book is the importance of media coverage, and how it shapes people’s perceptions and expectations of AI. Often the media are susceptible to hype around seemingly new technologies and don’t dig into their flaws as much as they perhaps should. What do you think needs to change in the media system when it comes to new technologies? 

So many things. I am still in awe of how many news outlets never changed the headlines claiming that ‘AI can detect whether you’re gay or straight’, even after the paper that sparked the reporting had been widely criticised and debunked. Perhaps it is the idea that being critical of tech means being critical of the potentially negative effects of powerful technology, rather than questioning whether the technology works at all? Or the fact that many tech stories are treated as product reviews rather than as science news that deserves scientific scrutiny?

I really love the essay by James Vincent, a tech journalist for The Verge, in which he describes the nuance and challenges involved in reporting about tech without promoting hype. 

Frederike Kaltheuner is a tech policy analyst based in Berlin.

Rachel Griffin is a PhD candidate at the Sciences Po School of Law and a research assistant at the Digital Governance & Sovereignty Chair. Her research focuses on social media regulation as it relates to social inequalities.