
[ARTICLE] Grand Ambitions? An Analysis of the EU’s Digital Strategy

By Guillaume Guinard and Rachel Griffin

On 11 December 2020, the 27 EU member states officially reached an agreement on InvestEU, a plan to invest €1,074 billion by 2027 to “build a greener, more digital and more resilient European economy”. Getting there proved challenging, however, as proponents of an ambitious recovery plan clashed with the ‘frugal’ positions of four northern member states.

Similarly, the EU’s digital strategy is characterised by broad overarching goals whose specifics tend to be underwhelming by the end of negotiations. Regulatory initiatives like the AI Act, sold by politicians as transformative measures that will maintain Europe’s global leadership in tech regulation, have been assessed by experts as unambitious and generally pro-business. The InvestEU fund names digital transformation as a key priority of the pandemic recovery, requiring member states to allocate at least 20% of their share of the fund to digital measures. Nonetheless, investment in data storage and associated technologies has been plagued by controversies and disagreements surrounding the development of a European ‘sovereign’ cloud. Indeed, disagreement among private actors has lowered expectations that efficient self-regulation will materialise. One can only hope that recently launched private and public initiatives to set European environmental standards for digital infrastructure won’t suffer from the same problems.

Digital infrastructure

Data sovereignty

The development of a ‘sovereign’ solution for online data storage in Europe follows from three key objectives. First, citizens’ privacy and data protection rights must be respected, as required by the General Data Protection Regulation (GDPR) and any further applicable national protections. This requires that data and transfers be localisable and transparent, so that users can appeal to their national courts in case of dispute.
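To make the localisation requirement concrete, here is a minimal sketch, assuming the AWS boto3 SDK and a hypothetical bucket name, of how a client can pin stored data to an EU region and verify where it resides. Note that this only addresses where data physically sits; as the next paragraph explains, it says nothing about which jurisdiction’s laws apply to the provider.

```python
import boto3

# Hypothetical bucket name; eu-west-3 is AWS's Paris region.
BUCKET = "example-eu-citizen-records"

s3 = boto3.client("s3", region_name="eu-west-3")

# Pin the bucket to an EU region at creation time...
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": "eu-west-3"},
)

# ...and verify, for transparency and audit purposes, where the data lives.
location = s3.get_bucket_location(Bucket=BUCKET)["LocationConstraint"]
assert location == "eu-west-3"
```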

Second, users of a European cloud service should not be subject to foreign laws that circumvent the protections of the EU legal framework. This is especially crucial given that the currently dominant cloud providers are based in the United States, where laws such as the Foreign Intelligence Surveillance Act (FISA) and the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) compel any US-based company to hand over data on any individual, including foreigners, to US authorities on the basis of a legal mandate.

Third, data security – in terms of both privacy and guaranteed access to data – is a necessity. Cloud services can host data essential to the provision of government services, as well as highly valuable economic intelligence, so it is important to be confident that the data cannot be accessed by third parties or hackers. This is illustrated by the controversy surrounding the decision of the French public investment bank Bpifrance to outsource the hosting of data on government-backed loans to Amazon Web Services: this data contains critical information about the health of many businesses affected by the COVID-19 crisis, and it is vital to ensure it is protected from international competitors.

Additionally, while Forrester Research estimates that the cloud services market grew from $6 billion to $300 billion between 2008 and 2019, and it is expected to keep growing, foreign and especially American tech giants have a ‘data advantage’ that puts them in a better position to benefit from the rising demand and to set the rules on access to data. Concretely, the high volume of data accumulated by a small number of firms makes it difficult for competitors to provide comparable products, makes clients dependent on the biggest firms, and gives those firms the power to set the conditions under which other companies may access the data they hold.

With these concerns in mind, the EU is developing its strategy for a sovereign cloud, which includes a dedicated legal framework and the Gaia-X initiative. Within the European strategy for data, it announced a co-investment plan between the Commission and member states of €4 to €6 billion, dedicated to financing common architectures within Europe, easing the interoperability and portability of data, and ensuring cloud services follow European regulations. Going forward, the EU seeks to create a European single market for data. To this end, the European Alliance on Industrial Data and Cloud was launched on 14 December 2021. The purpose of this alliance of top European tech companies is not only to foster cooperation between them and inform the EU’s industrial strategy and investment plans, but also to advise on the drafting of the upcoming ‘European Data Governance Act’. This new regulatory framework aims to facilitate data sharing in safe conditions – specifically for data subject to the rights of others, such as intellectual property rights, trade secrets and personal data. It also aims to provide additional safeguards, beyond those of the GDPR, against unlawful international transfer of or governmental access to non-personal data.

The Gaia-X initiative constitutes the ‘body’ of this strategy. Its goal is to connect cloud service providers to each other in a catalogue aimed at public administrations and the wider public. By using this catalogue, clients can ensure that the services they use follow European standards and regulations. At the same time, common technical standards and reliance on open-source technologies mean clients can combine different service providers to suit their needs, improving the competitiveness of smaller, more specialised European companies relative to the American and Chinese leaders. The European strategy for competitive cloud services is thus not to foster the development of giants, but to create strong incentives for self-regulation and cooperation among local actors.
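As a rough illustration of how such a catalogue might work in practice, the sketch below uses invented data structures and example entries (not Gaia-X’s actual schema): each provider publishes a machine-readable self-description of its services and compliance properties, and a client filters the catalogue to assemble a combination of providers that meets European requirements.

```python
from dataclasses import dataclass

@dataclass
class ServiceDescription:
    provider: str
    service: str           # e.g. "storage", "compute", "analytics"
    gdpr_compliant: bool
    eu_jurisdiction: bool  # headquartered and legally accountable in the EU
    portable: bool         # supports standard export formats

# Hypothetical catalogue entries, not real Gaia-X self-descriptions.
catalogue = [
    ServiceDescription("OVHcloud", "storage", True, True, True),
    ServiceDescription("Scaleway", "compute", True, True, True),
    ServiceDescription("BigUSCloud", "storage", True, False, False),
]

def select(entries, service):
    """Pick providers offering a given service under European rules."""
    return [
        s for s in entries
        if s.service == service and s.gdpr_compliant
        and s.eu_jurisdiction and s.portable
    ]

# A client can mix storage and compute from different providers,
# which is the interoperability goal described above.
stack = select(catalogue, "storage") + select(catalogue, "compute")
```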

However, the way the initiative has unfolded so far suggests these stated goals may not be fully realised. The idea of Gaia-X as a ‘sovereign cloud’ seems to have morphed into that of a label-assigning entity. While the 22 founding members were exclusively French and German actors, membership grew to 180 within six months and now includes European subsidiaries of foreign giants such as Amazon, Google and Huawei. Reaching consensus and collaboration across such a wide range of entities – including small and major European cloud service providers, universities, and organisations from the broader IT sector – will be a great challenge. Moreover, foreign-owned companies headquartered outside Europe can vote to elect Gaia-X’s administrative council (though not be elected themselves), and can form mutual agreements on standardisation with other members. Requiring small local actors to negotiate and cooperate with these giants calls into question whether the initiative still aims at improving their competitiveness. Finally, the initiative’s inclusion of Palantir, a company initially funded by the CIA and regularly involved in controversies over its use and handling of data, sows doubt about its commitment to higher data protection standards.

These tensions are blamed for the initiative’s slow development, leading many of the European actors involved to express frustration and lower their expectations of the project. In response, further investment projects and initiatives have been launched, by both the European Commission and the private sector, in an attempt to deliver on the project’s initial ambitions. Still, in the near future, moves to further European interests in the cloud services market seem more likely to come from legislation such as the Digital Markets Act than from standardisation within the industry.

Public services

An area that has seen real progress in inter-European cooperation on the use of data is public health. The most high-profile policy success in this area is the EU Digital COVID Certificate, which standardised the acceptance of vaccination certificates and rules of entry across EU member states. Another example of collaborative health policy is DIPoH, a European federated health information infrastructure that seeks to standardise and foster good practices for health data across European countries. Nonetheless, initiatives to centralise health data at the national level have triggered criticism linked to data sovereignty, such as the ongoing controversy surrounding the French Health Data Hub’s decision to rely on Microsoft’s cloud service, Azure.

Cybersecurity

In December 2020, the Commission also launched a new strategy for cybersecurity – an often underappreciated policy area which is nonetheless crucial for new digital infrastructure projects to function. This is especially true in the current context, in which ransomware attacks on public institutions and businesses are increasing and can be expected to continue doing so, due to developments like the broader adoption of cryptocurrencies and the growth of ‘ransomware as a service’ providers.

The strategy includes two new proposed directives, on common cybersecurity rules and on security in critical sectors. It will also launch a network of security operations centres across the EU, coordinated by an EU-level Joint Cyber Unit, which will research cybersecurity risks, proactively identify attacks and provide targeted support to small and medium-sized enterprises. However, details remain vague: the Commission has stated that the new cybersecurity efforts will be ‘powered by AI’, but without further concretisation this may be more of a buzzword than a strategy.
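For illustration only, the sketch below shows the kind of system typically marketed as ‘AI-powered’ security: a simple anomaly detector, built here with scikit-learn’s IsolationForest, that flags unusual network connections for analyst review. The data and features are invented; this is a sketch of what such a tool could look like, not a description of the Commission’s plans.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy feature vectors per connection: [bytes sent, bytes received,
# duration in seconds]. Real SOC pipelines use far richer features.
rng = np.random.default_rng(0)
normal_traffic = rng.normal(loc=[500, 800, 30], scale=[50, 80, 5], size=(1000, 3))
exfiltration = np.array([[50_000, 200, 600]])  # one suspicious connection

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# predict() returns -1 for anomalies and 1 for inliers.
print(model.predict(exfiltration))  # [-1]: flagged for analyst review
```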

Regulatory reform

The worldwide influence of laws like the GDPR has given the EU the reputation of being a ‘regulatory superpower’ – reining in big tech and setting standards for the world to follow. These ambitions are reflected in the von der Leyen Commission’s legislative agenda, with several major proposals setting out new regulations for tech companies. But while the AI Act, Digital Services Act and Digital Markets Act may have an international impact similar to the GDPR’s, their ambition should not be overstated. Legal academic Paddy Leerssen has characterised the EU tech landscape as a ‘regulated oligopoly’. Consistent with this approach, the EU continues to specify new compliance obligations for powerful tech companies without making more fundamental changes to the industry’s structure and incentives.

Digital platforms

The Digital Services Act – touted as a way to ensure that the few major platforms controlling large swathes of online public discourse and access to information respect ‘European values’ and protect democratic debate – primarily asks the largest platforms to conduct yearly risk assessments and pay for independent audits. The weaknesses of regulatory schemes that rely on tools like audits, delegating enforcement to a private compliance industry, have been extensively documented by law and regulatory governance scholars such as Michael Power, Julie Cohen and Lauren Edelman: formalities often take precedence over the actual goals of the legislation, and concepts like ‘risk’ are easily twisted to protect corporate interests over public values.

These obligations apply only to ‘very large online platforms’ with over 45 million EU users. For smaller platforms, the Digital Services Act mainly focuses on ensuring they have adequate content moderation procedures and allow users to challenge mistaken content removals. In addition to the well-documented inadequacies of individual complaint and redress mechanisms as a way of protecting freedom of expression online, it is notable that these procedures were already effectively industry best practice. In a continent with the world’s oldest public service media traditions, the Digital Services Act doesn’t even try to think outside the box of the current privatised, market-driven social media landscape.

The Digital Markets Act is perhaps more ambitious, in that it aims to redress structural problems in the industry. It seeks to prevent the largest ‘gatekeeper’ platforms from exploiting their size and market power to further entrench their dominance, hobble competitors and take over new markets, by regulating practices such as ‘self-preferencing’ (promoting a company’s own products over those of third parties using its platform) and data sharing. To allow faster and more flexible enforcement, it creates an entirely new regulatory regime that departs from traditional competition law mechanisms. However, it is notable that the EU has again opted for regulation over more radical industry restructuring: for now, digital and competition commissioner Margrethe Vestager has rejected the option of breaking up the largest platform conglomerates, an option discussed more prominently in US antitrust debates.
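To make ‘self-preferencing’ concrete, here is a deliberately simplified sketch in which a hypothetical gatekeeper’s ranking function quietly adds a hidden bonus to its own products. All names and weights are invented; no real platform’s algorithm is implied.

```python
from dataclasses import dataclass

@dataclass
class Listing:
    seller: str
    relevance: float  # match quality for the user's query, 0..1

# Invented example data; not any real platform's ranking.
results = [
    Listing("third-party-a", 0.95),
    Listing("third-party-b", 0.90),
    Listing("platform-own-brand", 0.80),
]

FIRST_PARTY_BOOST = 0.25  # hypothetical hidden bonus for own products

def score(listing: Listing) -> float:
    bonus = FIRST_PARTY_BOOST if listing.seller == "platform-own-brand" else 0.0
    return listing.relevance + bonus

# The platform's own, less relevant product now ranks first.
for listing in sorted(results, key=score, reverse=True):
    print(listing.seller, round(score(listing), 2))
```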

Artificial intelligence

The EU’s other major initiative in tech regulation is the AI Act, proposed in April 2021, which aims to comprehensively regulate all kinds of artificial intelligence software through a risk-based approach: AI applications in areas that create higher risks for individuals and the public interest, like employment and biometric identification, are subject to more stringent rules. The introduction of a comprehensive and binding regulation could seem like a step up from the vague statements of intent and ethical guidelines that have thus far characterised international AI policy. But closer examination by experts indicates that, in its current form, the Act is unlikely to have the hoped-for impact.
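Schematically, the proposal’s risk-based structure can be summarised as a tiered mapping from application areas to obligations. The sketch below is a simplification for illustration only; the example mappings follow the categories in the April 2021 proposal.

```python
# Schematic sketch of the AI Act proposal's risk tiers; a simplification
# of the legal text, for illustration only.
RISK_TIERS = {
    "prohibited": [          # Art. 5: banned outright (with exceptions)
        "social scoring by public authorities",
        "real-time remote biometric identification by police",
    ],
    "high-risk": [           # Annex III: strict obligations apply
        "CV screening for recruitment",
        "credit scoring",
        "exam scoring in education",
    ],
    "limited-risk": [        # transparency duties only
        "chatbots",
        "deepfakes",
    ],
    "minimal-risk": [        # essentially unregulated
        "spam filters",
        "AI in video games",
    ],
}

def obligations(use_case: str) -> str:
    """Return the risk tier, and hence the obligations, for a use case."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal-risk"  # default: most AI escapes the strict rules
```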

In fact, it may even hold back the regulation of AI. As legal experts Michael Veale and Frederik Zuiderveen Borgesius have explained, the Act is a harmonisation measure exercising the EU’s powers under Article 114 TFEU, which means that member states are no longer allowed to legislate in this area. Crucially, although the Act’s main substantive obligations apply only to a narrow set of AI applications deemed high-risk, its scope in principle covers all forms of AI: Annex I defines AI extremely broadly, encompassing not only the computation-intensive machine learning techniques now most typically associated with the term, but effectively most forms of software. This means that national regulation of software systems that fall within the Act’s scope but escape its substantive obligations will be barred.
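To see how far Annex I stretches, consider a deliberately trivial example: a hand-written decision rule of the kind below involves no machine learning at all, yet as a ‘logic- and knowledge-based approach’ it arguably falls within the Act’s definition of AI.

```python
# A hand-coded eligibility rule: no training data, no statistics.
# Under Annex I's broad wording ("logic- and knowledge-based
# approaches"), even software like this is arguably "AI" within
# the Act's scope.
def loan_eligible(income: float, existing_debt: float) -> bool:
    return income > 30_000 and existing_debt < 0.4 * income
```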

Many of the AI Act’s new obligations for high-risk applications are also far weaker than they seem at first glance. Prohibitions on the highest-risk AI applications, like social scoring and facial recognition, turn out on closer examination to have very limited scope or practical relevance: for example, facial recognition by law enforcement is only banned when used in real time, meaning that police could still use it to track down people in recorded CCTV footage, and EU companies will still be able to sell such systems to clients abroad.

For other high-risk applications, following the New Legislative Framework developed for product safety regulation, compliance and enforcement are largely delegated to the private sector. The rules will be enforced almost entirely through manufacturers’ self-assessments and compliance with industry standards, meaning that Europe’s standardisation bodies, which are largely unaccountable to the public and have little expertise in tech ethics and public policy, will have significant power to determine what the regulations mean in practice. Furthermore, Veale and Zuiderveen Borgesius’ analysis highlights that the conception of AI systems as discrete ‘products’, whose manufacturer can check them for safety before they go out into the world, is a poor fit for the actual functioning of the software industry and its complex supply chains: datasets and models are frequently reused and tweaked for different purposes, and an AI system produced for one purpose may later be used in harmful ways by downstream actors. Overall, the Act seems to permit much more to the AI industry than it forbids, as long as companies make sure to tick the right compliance boxes.

Guillaume Guinard is a research assistant at the Digital, Governance and Sovereignty Chair, a master’s student in Public Policy at Sciences Po Paris and a Philosophy graduate of Glasgow University. 

Rachel Griffin is a PhD candidate at the Sciences Po School of Law and a research assistant at the Digital Governance & Sovereignty Chair. Her research focuses on social media regulation as it relates to social inequalities.

Photo credits: Ivan Marc / Shutterstock