
[ARTICLE] Regulating Artificial Intelligence: Could the EU’s “AI Act” lead the way forward?

by Can Şimşek

If adopted, the proposed Artificial Intelligence Act (AI Act) of the European Commission will be the very first Regulation to directly address “artificial intelligence”. By introducing this new Regulation on top of the General Data Protection Regulation (GDPR) and the Law Enforcement Directive, which came into force in 2018, the European legal framework will cover the two main elements composing “Artificial Intelligence” (AI): data and algorithms.

The EU has been highly influential in shaping global trends in the regulation of digital technologies. The GDPR intensified the data protection debate in the US and China, besides serving as a model regulation for many jurisdictions, including Japan, Brazil, India, Kenya, South Korea, and even California. If adopted, the AI Act could trigger a similar effect among the countries in which AI developers and/or deployers reside. Yet the initial draft of the Regulation has several weaknesses.

1. Material Scope of the AI Act 

Even defining AI is a challenging task, since it is a dynamic term. In colloquial language, intelligence is often seen as whatever machines or animals have not done yet. This creates an “AI effect”: people tend to raise the bar for calling something “AI” as technology progresses. It has therefore been heavily debated whether a general regulation on AI is the right approach in the first place. For instance, the German AI Association drafted a position paper stating that regulating “AI” is not feasible since “a clear definition of the term that allows us to differentiate AI from already existing algorithms is missing.” To overcome this dilemma, the Commission ultimately chose to list the techniques and approaches that are defined as “AI” for the purposes of its AI Act.

Article 3(1) of the proposed AI Act defines an ‘artificial intelligence system’ (AI system) as:  

“software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”  

In the initial draft, Annex I covers:

“(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.”

Since new techniques and approaches can emerge, Article 4 empowers the Commission to adopt delegated acts amending the list in Annex I; the AI Act is thus designed as a “living document.” As a result, even rather complex algorithmic systems created with the listed approaches or techniques will be covered by the Regulation.

Despite this meticulous effort, the material scope of the AI Act is problematic. Notably, Article 2 of the Regulation leaves out the following AI systems:

– those developed or used exclusively for military purposes;

– those used by public authorities in a third country, or by an international organization, in the framework of an international agreement for law enforcement and judicial cooperation;

– high-risk AI systems that are safety components of products or systems, or which are themselves products, and which are regulated under the listed lex specialis (except for Article 84).

In other words, most of the algorithmic systems that could pose a threat to fundamental rights are left outside the scope of the Regulation. Still, the material scope of the proposed Regulation is broader than the set of algorithmic systems it actually regulates. How so? For the AI systems that are covered, the AI Act contains four distinct regulatory regimes, tailored in proportion to the level of risk posed by a given AI system: “unacceptable risk”, which is prohibited; “high risk”, which is strictly regulated; “limited risk”, which is subject to transparency obligations; and “minimal risk” or “no risk”, which is given free rein. Consequently, most of the AI systems available today will not be affected by this Regulation, since they pose minimal or no risk to human rights or safety. The proposed AI Act would therefore have a rather limited substantive effect despite its broad material scope.

This could nonetheless restrain the Member States from regulating AI systems further, as the AI Act will harmonize EU law. Member States will accordingly need to rely on the exceptions in Article 114 TFEU to introduce further restrictions on the placing of AI systems on the market. This approach is understandable given the over-regulation concerns of some Member States. Denmark, for instance, spearheaded a position paper on behalf of 14 states at the beginning of the negotiations, arguing that weight should be given to a soft-law approach. Some countries, however, will surely oppose this preemptive aspect of the Regulation.

2. Substantive Law 

Going beyond merely addressing particular use cases of AI systems, the EU’s AI Act aims to be a comprehensive regulatory framework. First and foremost, it addresses the risk of malfunctioning and threats to physical or mental safety. For “high-risk” AI systems – which are categorically defined in the Regulation and listed in its Annex III – the EU plans to harness the technical experience of standardization bodies to synthesize “best practices” and to require “high quality data, documentation, traceability, transparency, human oversight, accuracy and robustness.” In this regard, the “transparency by design” approach in the Act is a significant novelty. Under Article 13(1), high-risk AI systems should be designed and developed in a way that ensures their operation is sufficiently transparent for users to interpret the system’s output and use it appropriately. The “appropriate type and degree of transparency” will be assessed through compliance with the other relevant obligations set out in Chapter 3 of the same Title, which include measures enabling transparency in the broad sense, such as putting a quality management system in place, drawing up technical documentation, conducting conformity assessments, and keeping automatically generated logs.

Thanks to the new transparency requirements, public authorities and conformity assessment bodies will be able to open the “black box” of high-risk AI systems. This is worth praising, given that trade secrets and intellectual property rights tend to veil the discriminatory or unfair practices of algorithmic systems. Yet these legal obstacles will still prevail when it comes to transparency towards individual users. As mentioned in the explanatory memorandum of the AI Act, the increased transparency obligations will not disproportionately affect the right to protection of intellectual property enshrined in Article 17(2) of the EU Charter of Fundamental Rights, since “they will be limited only to the minimum necessary information for individuals to exercise their right to an effective remedy and to the necessary transparency towards supervision and enforcement authorities, in line with their mandates.”

Lastly, the Regulation will introduce further transparency requirements towards individuals. For one, a certain kind of transparency is required of AI systems that pose a “limited risk.” Under Article 52(1), providers shall ensure that “AI systems intended to interact with natural persons are designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.” For example, users should know whether they are interacting with a chatbot so that they can make an informed decision on whether to continue. Similarly, the second paragraph of the article obliges the users of “emotion recognition” or “biometric categorization” systems to inform the natural persons exposed to them. Finally, the third paragraph requires users of AI systems that generate or manipulate content that would falsely appear truthful or authentic (such as “deep fakes”) to disclose that the given content was artificially generated or manipulated.

In general, these novelties have been welcomed by stakeholders. Nonetheless, it is worth noting that the loose usage of terms such as “to inform”, “minimum necessary information” or “transparency” may require further clarification in order to safeguard legal certainty.

3. The Debate on Biometric Identification

There is a burning debate concerning the automated recognition of human features in public spaces. This is a crucial issue for ensuring due process rights and eliminating algorithmic bias. The EU’s draft AI Act limits the use of real-time facial recognition systems by public authorities to certain exceptional cases and prohibits unacceptable practices such as social credit scoring. The EU will thus clearly differ from China in its regulation of such uses of AI systems. The US, on the other hand, seems to be moving in a direction parallel to the EU, as some states on the other side of the Atlantic have already adopted laws limiting the use of facial recognition technology.

Although Article 5(1)(d) of the proposed AI Act severely limits the use of real-time remote biometric identification systems (including facial recognition systems), the European Data Protection Board and the European Data Protection Supervisor adopted a joint opinion calling for a general ban on any use of AI for the automated recognition of human features in publicly accessible spaces. According to the opinion, the use of faces, gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioral signals should be prohibited without exception. A similar debate exists in the US, where some jurisdictions, such as the city of Portland, take a stronger stance against the use of facial recognition systems. Whether the final version of the EU’s AI Act will contain stricter rules on such uses of AI systems will surely have an impact on the global debate, starting with the US.

4. The Enforcement Model of the AI Act  

To implement the new rules, EU Member States will designate national competent authorities: a “national supervisory authority”, a “market surveillance authority” and a “notifying authority.” The notifying authorities will in turn designate and notify the “notified bodies” that will be involved in conformity assessment and certification processes. Moreover, the national supervisory authorities will represent the Member States in the “European Artificial Intelligence Board”, which will be established upon the adoption of the AI Act. The bottom line: if AI developers or deployers fail to comply with the rules, national supervisory authorities will impose heavy fines.

5. Conclusion 

As businesses that wish to deploy their AI systems in the European market will need to comply with these standards, it is reasonable to expect a “Brussels effect” changing the debate on a global scale. The enactment of this legislative proposal might consequently cement the EU’s role as the “standard setter” in regulating digital technologies. Yet it is debatable whether the proposed AI Act is up to the task. In fact, the draft Regulation does not represent a brand-new approach tailored to artificial intelligence. It largely echoes the “New Legislative Framework”, introduced in 2008 as part of the EU’s product safety regime. Unlike pharmaceutical regulation, this regime does not require developers to submit their files for premarket approval to a central agency like the Food and Drug Administration in the US or the European Medicines Agency in the EU. Instead, the Regulation mostly relies on self-assessment and third-party certification, which could end up as box-ticking exercises. It is therefore worth pondering whether this approach will suffice, in the long term, to prevent prohibited AI systems from entering the market or being used by developers.

To conclude, the EU’s proposed AI Act is a huge first step towards regulating complex algorithmic systems on a global scale. Although it is natural for regulatory policies to differ between countries, it is also certain that a globalized world requires a certain degree of convergence. In this regard, the EU seems determined to take the lead and remain the “standard setter” in regulating digital technologies. If enforced properly, the transparency requirements in the AI Regulation could move things forward in terms of safety, fairness, and accountability. Striking the right balance between innovation and the risks it carries is, however, no easy task. Addressing the issues outlined above with care could be the key to drafting an AI Regulation that becomes a model for the rest of the world.

Can Şimşek is a lawyer (LL.M) and a research assistant at the Digital, Governance and Sovereignty Chair.