Governing New Technologies for a More Secure World (Panel 6)
Chaired by: Stéphanie Balme, Dean of Undergraduate Studies, Sciences Po; PSIA Faculty. | Student Greeter: Eli Scher-Zagier, PSIA student, Master in International Security.
- Bruno Caïtucoli, Retired French Air Force General Officer; Advisor to the CEO at EDF-Renewables France; PSIA Faculty; Scientific Advisor for the Master in International Security, PSIA
- Vanessa Evers, Director, NTU Institute of Science and Technology for Humanity, Singapore
- Alexandra Geese, Member of the European Parliament
- Casper Klynge, Vice President of European Government Affairs, Microsoft
- Marusa Rus, PSIA student, Master in International Public Management
- Jamie Susskind, author of “Future Politics: Living Together in a World Transformed by Tech” (Oxford, 2018)
Technologies are everywhere. Most of the time they are regarded as positive for their ability to connect us with people all over the world, but they can also be used with harmful intent. When Eli Scher-Zagier opened this panel on governing technologies, he said exactly that: “Technology is neither good nor evil. But it can be used for both.” He also highlighted the significance of the topic: “Today’s discussion is more important than ever: amid the Covid-19 pandemic, we have seen the world benefit from new technologies at a breathtaking scale.”
Stéphanie Balme then opened the discussion proper, dividing it into three perspectives and approaches: 1) a research-based one; 2) a political one; and 3) a practical one.
The research-based approach: words from an engineer and a lawyer
To understand the effect of technology on the world, a basic understanding of how technology and AI work is crucial. Vanessa Evers explained it to us: “We collect a lot of data, we feed it to the computer, and after a while the computer finds patterns and starts to recognize things by itself.” This seems straightforward, but it raises a lot of questions. What if technologies start to develop a rather irresponsible mind of their own?
An expert in robots, Evers focuses on developing technology responsibly. Her work starts from the basic idea that people are social. Robots and AI, however, clever as they are, don’t understand this concept. “It’s absolutely crazy that all this technology that we use doesn’t adapt to these social situations,” Evers exclaimed. Her goal, then, is to make sure it does. But good intentions have limits: she pointed to the malign ends some people put technology to. “While my work is to make it possible that computers can recognize social situations, I have no control over how it is being used.” The same technology she develops for a car, to predict whether a child might be part of a family crossing the road, could also be used in a drone to inflict harm on what it assumes to be a group of soldiers, she explained. “This is something that keeps me awake at night.”
Technologies that may be used for various ends, whether good or bad, require regulation. But why (and how) do we regulate and govern technologies? Jamie Susskind provided the legal and political perspective on this matter. His answer, in short, is that technologies exert power.
During his time as an undergraduate, when the internet wasn’t that big of a deal and AI was barely a concept, Susskind noticed that political theory wasn’t much concerned with the future. “It struck me as odd at the time that I could do an entire degree, at one of the best universities in the country, without even having to mention the internet.”
The internet and digital technologies exert power, and according to Susskind they do so in three ways. Firstly, they set the rules. “Every time you interact with a digital technology, you are subject to the rules that are coded into it,” he explained. Secondly, they gather information about us, which makes it easier to influence people’s behavior in surprising and unpredictable ways. Finally, technologies filter our perception of the world. “We increasingly rely on digital technology to know what is going on out there… [but] these systems are only ever going to provide us with a very small slice of reality.”
If you combine these three powers, you see a concentration of power in the hands of those who own and control digital technologies. “Such a concentration of power calls for governance, scrutiny, and transparency,” Susskind concluded.
The political approach: use of force and avoiding rules
When people imagine technology being used for evil, many think of drones. That is exactly the topic Bruno Caïtucoli brought up.
In the 1990s, drones were mostly used to gather information. In the past decade, however, drones have undergone a technical revolution that allows them to deliver munitions. But whatever the goal, the drone remains a tool, not a robot or a person. Caïtucoli explained: “A drone can be operating in Iraq, but the pilot might be in the US... So a human remains in the loop.”
Nevertheless, drones remain discreet pieces of equipment, which leads to their being perceived differently from other technology. “Instead of considering a drone exactly as we consider a traditional aircraft, maybe there is a tendency to go a little further in the use of drones,” Caïtucoli said. As a result, decisions to deploy drones are sometimes made too readily. “The point is not the use of drones, but the use of force,” Caïtucoli explained. In other words, the choice to use a drone is made by humans who want to inflict harm on others; “the drone itself is not the issue.”
Although unique, drones are subject to rules. Other types of technology, despite being hugely influential, have managed to avoid such regulations. According to Marusa Rus, the reason is political. Rather than using the two traditional ways of exerting political influence – directly through elections or indirectly through structural economic power – “big tech has discovered a third mechanism of political influence...it’s social power.” This is its ability to create new social values that are rooted in the ideology of “tech is good for all.” Essentially, big tech has positioned itself at the center of our social interactions by providing a medium for people to interact with one another. “By inserting itself in our everyday life as a new social practice, big tech gains tremendous social power that changes the way we see the present and the future,” Rus explained.
The practical approach: European policies
At this point, the panelists seemed to wholeheartedly agree that tech demands governance and regulation. But who should provide it: the tech companies themselves, or states? The final two panelists offered their perspectives.
Casper Klynge described the moment he first realized that technology requires regulation. “I had a pretty significant awakening in 2017,” he said, referring to the cyber-attacks that took out infrastructure at some of the biggest companies in the world. He also mentioned a more recent attack that affected the company he works for: the SolarWinds cyber-attack, in which hackers compromised SolarWinds’ Orion monitoring and management software to reach targets including Microsoft.
Klynge is convinced that all of this might be just the beginning: “We still haven’t seen the end of what we witnessed a few weeks ago,” he said. “Attacks like these illustrate shortcomings and demand solutions.” Klynge had a clear idea: “One of the ways of us responding is to double down on multistakeholderism.” He also emphasized that companies and governments alike must recognize the cybersecurity threat, and explained that a sweet spot must be found between regulating these technologies for positive ends and preventing them from being misused or manipulated. In that regard, he believes “Europe will be leading the way.”
Alexandra Geese agreed but emphasized the difficulties. “Who’s going to decide who can speak?” she asked. What makes this matter incredibly difficult, she thinks, is that we don’t know how these technologies work. “When I was a child and the car broke down, you would open the hood to have a look at the engine and figure out what was broken. With today’s technologies, you can’t do that anymore.”
EU regulation addresses three different aspects, Geese explained: 1) platform regulation, by bringing in civil society; 2) AI regulation, by getting rid of its racial, sexist, and socio-economic biases; and 3) infrastructure, by loosening the control of the companies housing our data. “Tech is great, but we need to be in control, and this is what the European Union will hopefully do next year,” Geese concluded.
Most of the audience members were interested in hearing the panelists’ thoughts on regulation and what exactly it would require. In response to the first question, Evers discussed the need for ethics as well as the inclusion of politicians with technical backgrounds in this type of decision-making. Susskind agreed and offered two solutions: 1) giving people new rights related to digital technologies; or 2) issuing general standards and rules. According to him, we need a shift from the former to the latter.
Technologies aren’t used only by individuals; governments use them as well, for example to store data about citizens or to streamline visa applications. The extent to which technology is used varies from state to state, which can create new inequalities and tensions. Caïtucoli argued that a certain baseline of technology should be equally available to all governments; yet, at the same time, countries seek technological advantages for themselves.
The panelists could have continued the discussion much longer, but it was soon time for the next part of the Summit. All in all, the speakers wholeheartedly agreed that technologies should be regulated, yet had different thoughts on the how and the who.
(c) An article written by Meike Eijsberg, PSIA student in the Master in International Public Management, 2021