Can the EU AI Act successfully regulate generative AI?

Law&Tech Working Group

Another thrilling session of the Law&Tech Working Group is planned for Thursday 30 March 2023, from 12:00 to 14:00 (Paris time).

We have the pleasure of hosting Prof. Lilian Edwards, a stellar scholar, who will help us delve into AI regulation and look more closely at generative AI under the EU AI Act proposal.

Generative AI models have gained enormous recognition around the world over the last year. Large Language Models (LLMs) such as OpenAI's GPT-3 have already moved from proof of concept to consumer and industry adoption in fields like chatbots for customer helpline support and business journalism. Models for creating AI images from text prompts (e.g. DALL-E 2, Craiyon, Stable Diffusion) and even text-to-video (e.g. Meta's Make-A-Video, Google's Imagen Video) are being hailed as inaugurating an era in which AI starts to deliver on the promise of democratising creativity and innovation. But these models also bring worries about bias and stereotyping, fake news and hate speech generation, and threats to human creators.

Large models have assaulted the draft EU AI Act currently in progress, which was designed for the earlier generation of machine learning models. Can the AI Act withstand this onslaught? If not, to what other legal instruments should we look?

Lilian Edwards is a leading academic in the field of Internet law. She has taught information technology law, e-commerce law, privacy law and Internet law at undergraduate and postgraduate levels since 1996, and has been involved with law and artificial intelligence (AI) since 1985.

She worked at the University of Strathclyde from 1986 to 1988 and the University of Edinburgh from 1989 to 2006. She held the Chair of Internet Law at the University of Southampton from 2006 to 2008, and was then Professor of Internet Law at the University of Sheffield until late 2010, when she returned to Scotland to become Professor of E-Governance at the University of Strathclyde, while retaining close links with the renamed SCRIPT (AHRC Centre) at the University of Edinburgh. She resigned from that role in 2018 to take up a new Chair in Law, Innovation and Society at Newcastle University.

She is the editor and major author of Law, Policy and the Internet, one of the leading textbooks in the field of Internet law. She won the Future of Privacy Forum award in 2019 for best paper ('Slave to the Algorithm', with Michael Veale) and the award for best non-technical paper at FAccT in 2020, on automated hiring. In 2004 she won the Barbara Wellberry Memorial Prize for work on online privacy, in which she invented the notion of data trusts, a concept which ten years later has been proposed in EU legislation. She is a partner in the Horizon Digital Economy Hub at Nottingham, the lead for the Alan Turing Institute on Law and AI, a Turing fellow, and a fellow of the Institute for the Future of Work. At Newcastle, she is the theme lead for the Regulation of Data in the data NUCore. Edwards has consulted for, inter alia, the EU Commission, the OECD, and WIPO. In 2021-22, she was part-seconded to the Ada Lovelace Institute to lead their work on the future of global AI regulation.

How to participate

The event will be held in person and online (to receive the link, please send an email to Marta Arisi, PhD Candidate).

Event Date:
Thursday 30 March 2023, 12:00 - 14:00
Event Location: 
Online and in-person (Sciences Po, 1 place Saint-Thomas d'Aquin, Pavillon McCourt, 2nd Floor, Paris 7ème)