[INTERVIEW] The legal and social challenges of Extended Reality worlds: 3 questions to Brittan Heller

By Tamian Derivry

Extended reality (XR), which encompasses a variety of immersive technologies such as augmented reality (AR) and virtual reality (VR), is emerging as a major technological innovation. In this interview, Brittan Heller, Senior Fellow at the Atlantic Council and Affiliate at the Stanford Cyber Policy Center, agreed to share some of her thoughts on the social and regulatory challenges posed by XR worlds.

1. What are the main legal and democratic challenges raised by XR worlds?

Number one: jurisdiction. We have cybercrime conventions and national agreements around jurisdiction, venue, and the civil and criminal codes that govern evidence collection. But we haven’t figured out how this is going to work with virtual worlds. When you are creating a world in VR or AR, sometimes it is an overlay on a physical space, sometimes it is a purely digital space, and there are different levels of ties to offline places. One of the first questions we need to answer before we get to systems of governance, before we get to how content moderation will work, before we get to the role of AI, is who will decide where the rules apply and how they are applicable. That is venue and jurisdiction.

Number two: participation and representation, which I approach as a question about accessibility based on the hardware. The form factor of XR hardware is still changing. Many headsets have come out and different accessories and affordances are being created. However, if we want a metaverse for all, we need to make sure that the hardware works with different body types and diverse humans. An MIT researcher went to Kenya to do her thesis research and brought a bunch of Oculus Go headsets with her to study XR adoption. The strap snapped over half the time when she put the headset on people, because it was not designed for different hairstyles and hair textures. Women experience simulation sickness in VR at higher rates than men. If you look at debates from the last few years, you will see commentators say it’s because women don’t play video games and are just not used to it, or because they are more prone to nausea because they have children. The actual answer was that earlier editions of the headsets were not adjustable and were built for men’s heads. With VR headsets, the distance between your pupils is very important to prevent you from getting nauseous. The average interpupillary distance for a woman is smaller than for a man, and having the wrong interpupillary distance makes a person disoriented and nauseous. It’s not that women were less inclined to use the device, it’s that these devices were not built to fit 51% of the population.

Similarly, I’ve written about accessibility in terms of apparent and non-apparent disabilities. Disability is unique under international law because it is a status that most people move in and out of throughout their lives. The World Health Organization says that one out of seven people, over a billion, are disabled. The protections that we have to help disabled people navigate the world actually benefit everyone. Stores in the U.S. have parking spots for expectant mothers and people in wheelchairs, so they don’t have to travel as far. If you break your leg and end up using a wheelchair, you are going to benefit from affordances like reserved parking during the time that you are physically disabled. Meta’s Oculus waited until update 30 to make it so that someone who had to be seated would have the vantage point of a person walking around. Scientific American interviewed a VR user who had muscular dystrophy, and ironically, the only VR game they could use was a rock climbing game, because they had modified an Xbox controller, eliminating the need to lift their arms up and down.

If the metaverse is going to be a place where we all work, socialize, entertain ourselves and learn, we have to make sure that the hardware is accessible for all. We’re at the point where we still need certain people in the virtual room. I think that’s how we should look at democratic participation at this point. According to the research of Jessica Outlaw, disabled populations are some of the earliest adopters of XR technology, which makes sense, but they’re often treated as an afterthought by platform developers. It’s not just about democratic principles, or human rights, it’s actually about a missed market opportunity.

The first hurdle for the metaverse in US courts came in 2022. In Panarra v. HTC Corporation, a U.S. court decided whether or not the Americans with Disabilities Act applied to virtual spaces. Panarra is a deaf user who was using HTC’s Viveport Infinity subscription service, which branded itself as the Netflix of VR. The user said they needed closed captioning, which was not provided by the platform. HTC came back and said they don’t have to provide that because they’re just providing content like Netflix, and are not responsible for closed captioning the individual experiences. The court rejected that argument, saying that Viveport is akin to a public accommodation, so the platform itself needs to make sure it meets the Americans with Disabilities Act accessibility standards. This is very valuable for me as a lawyer and an advocate, because I can go to my XR clients and say that this is not only the right thing to do, it’s actually mitigating a litigation risk. But courts are just starting to contemplate the issue of accessibility.

2. What can we do to ensure online safety in XR environments?

There are a few things we can do. Number one is to realize that this is not social media. I get wary when companies create Terms of Service that are cut from two-dimensional properties and brought into three-dimensional spaces. The challenge of content moderation in the metaverse is different because it’s not just conduct and content. It’s conduct, content, and environment. Right now, we don’t have 3D classifiers. The technical apparatuses and policy regimes that companies have developed to govern online speech do not cover virtual worlds, where you have to create the digital architecture and the bounds of physical space.

Number two is that we should expect the best and the worst of human behavior. I spoke to one of the founders of the earliest virtual worlds, Philip Rosedale, who told me that when he created Second Life, it was intended as a space for adults. But about 20% of early users were children, which changed the tenor of Second Life as a space for adults. Rosedale commented that he thinks virtual worlds at this point are about 80% children. This is a boon for developers of these worlds. A 2022 McKinsey report said Gen Z expected to spend an average of 4-5 hours a day in virtual worlds. New generations are becoming digital natives in XR spaces. But at the same time, there’s much we don’t know, like how this affects children’s spatial and cognitive development. Studies are coming back saying it could actually inhibit children’s spatial development. We need to know more about how XR impacts our bodies and our minds.

Third is the question of content moderation. At this point, content moderation in virtual worlds does not scale. What most companies do, as a start, is run speech-to-text and then push the transcripts through apparatuses developed for social media platforms. That is not sustainable and does not fully encompass the risk factors. It is very hard for a company to maintain that its XR properties are different from its social media platforms, and that the rules for social media are not the governing document, if it is using the same technical systems to police users. I have seen companies develop different terms of service, conduct and content policies for VR, and say that their social media policies do not apply. I think this is for a few reasons. One, they want to create a more permissive space for creativity. Two, and related to that, they need users to generate content. And three, the best practice in content moderation at this point draws from video games: dual-tiered, community-based moderation, where platforms state an overarching rule for the biggest litigation risks (like terrorist content) but let individual communities determine the functional rules of their virtual world. To summarize, when you look at content moderation in the metaverse, you need to be realistic about what is possible and scalable, what is practical based on community moderation, and what is needed to ensure user safety.
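To make the dual-tiered idea concrete, here is a minimal sketch of how a platform-wide baseline rule set could sit above per-community rules for transcribed speech. The rule sets, world names and helper functions are hypothetical illustrations, not any platform’s actual moderation system.

```python
# Minimal sketch of dual-tiered, community-based moderation for transcribed
# speech: a platform-wide baseline covering the biggest litigation risks,
# plus per-community rules. All rule sets, world names and helpers here are
# hypothetical illustrations, not any platform's actual policy engine.

from dataclasses import dataclass, field

@dataclass
class RuleSet:
    name: str
    banned_terms: set[str] = field(default_factory=set)

    def violations(self, text: str) -> list[str]:
        words = set(text.lower().split())
        return sorted(words & self.banned_terms)

# Tier 1: overarching platform rules, always enforced.
PLATFORM_RULES = RuleSet("platform-baseline", {"terrorist-recruitment"})

# Tier 2: each virtual world determines its own functional rules.
COMMUNITY_RULES = {
    "family-world": RuleSet("family-world", {"gore", "gambling"}),
    "horror-world": RuleSet("horror-world", set()),  # gore is fine here
}

def moderate(community: str, transcript: str) -> dict:
    """Run one transcribed utterance through both tiers and report violations."""
    platform_hits = PLATFORM_RULES.violations(transcript)
    community_hits = COMMUNITY_RULES[community].violations(transcript)
    return {
        "platform_violations": platform_hits,
        "community_violations": community_hits,
        "allowed": not platform_hits and not community_hits,
    }

if __name__ == "__main__":
    print(moderate("family-world", "anyone up for some gambling"))  # blocked here
    print(moderate("horror-world", "anyone up for some gambling"))  # allowed here
```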

Right now, people are very excited about LLMs (large language models) and generative AI. I think the only way that this becomes usable is if it is a tool to augment human intelligence rather than a replacement for content moderators. This week, we saw a judge in Colombia issue a court ruling on medical treatment for an autistic child; he wrote part of his judicial holding with the help of ChatGPT. It’s creating a lot of pearl clutching alongside justified horror from civil society advocates and lawyers. ChatGPT and other generative models incorporate questions of bias. When I say bias, I don’t mean the social prejudice that gets embedded into technical systems, but more of a computer science-based understanding of bias: AI systems work by identifying patterns, and that pattern-finding is what is called bias. When examining the governance of online spaces that use ChatGPT, you must consider auditing the outputs and examining the quality and content of the inputs. In 2016, Microsoft made Tay, a bot that was supposed to give you the experience of chatting with an AI that had the intelligence of a teenage girl. It took less than 24 hours for it to be spouting Nazi propaganda. Kevin Roose just surfaced similar problems with Bing’s GPT-based chatbot imitating aggression, emotional volatility, and even love. When I teach AI ethics, I say “garbage in, garbage out.” If you’re going to use generative AI to help content moderators in virtual spaces, we need to understand that it’s just brute-forcing a data set, a scrape of web-based content, running it through again and again. We need to be aware of the type of patterns that will come out of this. My prediction is that the performance won’t be as nuanced as a human moderator’s. If you replace a person with a computer-based intelligence system, it may be quite good, but it’s not going to exercise discretion like a human would. Even if generative AI models are better at determining context, they’re not necessarily going to be as nimble at incorporating new information.
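As a rough illustration of “augment rather than replace,” the sketch below has a model only triage transcribed utterances, routing anything that is not clearly benign to a human moderator. The scorer, threshold and function names are assumptions made for the example, not a real moderation API.

```python
# Rough sketch of AI-assisted triage for virtual-world moderation: the model
# only narrows the queue, and every non-trivial case keeps human discretion.
# `score_transcript` stands in for whatever classifier or LLM is used; the
# threshold is an arbitrary placeholder, not a recommended value.

from typing import Callable

def triage(transcript: str,
           score_transcript: Callable[[str], float],
           review_threshold: float = 0.2) -> str:
    """Route one transcribed utterance: the scorer returns a harm
    probability in [0, 1]; nothing is removed automatically."""
    score = score_transcript(transcript)
    return "human_review" if score >= review_threshold else "dismiss"

# Toy scorer (a keyword heuristic standing in for a real model).
def toy_scorer(text: str) -> float:
    return 0.9 if "threat" in text.lower() else 0.05

if __name__ == "__main__":
    print(triage("see you at the concert tonight", toy_scorer))  # dismiss
    print(triage("this is a direct threat", toy_scorer))         # human_review
```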

3. Should we take legal action to protect users from immersive targeted advertising?

The problem with targeted advertising in virtual worlds centers on the issue of user consent. How do you give meaningful informed consent to the tracking, monitoring and monetization of your involuntary processes? How do you educate users about their gaze being tracked? The information you can get from eye tracking is quite remarkable. For example, you can tell if somebody is sexually attracted to the person they’re looking at, you can tell whether or not somebody is telling the truth, and you can tell whether or not somebody shows pre-clinical signs of physical and mental conditions, like autism, schizophrenia or Parkinson’s. These are things that people may not know about themselves. This is information that users probably wouldn’t consent to give to a company if it was put in those terms. A consumer may be asked to consent to the tracking of their gaze, but they may not be fully aware of the type of behavioral, health and character-based inferences that could be made about them from this data.

The second important point is that, at least in the United States, privacy law does not protect mental privacy, a term that I’ve been helping to develop. Privacy regimes are designed to protect your identity. There is a focus on personally identifying information and the way your online behavior in Web-2 can be used to single you out as an individual. But virtual worlds are different in both the quantity and the quality of data that you get. XR headsets need biometric measurements in order to function. The hardware needs to track your gaze, it needs to measure your interpupillary distance, and it needs to know how you move. Mark Miller of Stanford’s Virtual Human Interaction Lab has research showing that only a few minutes of recorded XR data about a user is as physically identifying as their fingerprint. This changes the pre-existing paradigm of privacy law. Right now, there is no law in the United States to protect us from the inferences that can be made about our thoughts and feelings. Companies have not yet started to develop product features that embody privacy protections related to mental privacy. My fear as a human rights lawyer is that if people feel that information like their sexual preferences will be available from eye-tracking data, which it will be, they are going to try to censor the way they think and feel.
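To make concrete how much body-derived data the hardware collects just to function, here is a hypothetical sketch of what a single frame of XR telemetry might contain. The field names and structure are illustrative assumptions, not any vendor’s actual schema.

```python
# Hypothetical sketch of one frame of the telemetry an XR headset captures
# just to render a scene. Field names are illustrative, not a vendor schema.
# All of it is body-derived data, and per the research cited above, a few
# minutes of it can identify a user as reliably as a fingerprint.

from dataclasses import dataclass

@dataclass
class XRFrame:
    timestamp_ms: int
    head_pose: tuple[float, float, float, float, float, float]  # x, y, z, pitch, yaw, roll
    left_hand_pos: tuple[float, float, float]
    right_hand_pos: tuple[float, float, float]
    gaze_direction: tuple[float, float, float]   # unit vector of where the eyes point
    pupil_diameter_mm: float                     # varies with light, load and arousal
    interpupillary_distance_mm: float            # needed to render a correct stereo image

# A session is a stream of frames, typically 60-90 per second, so even a few
# minutes of use amounts to tens of thousands of body measurements.
session: list[XRFrame] = []
```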

We need to adapt existing privacy regulations. This leads me to the question of whether we want to regulate based on hardware and technical functionality, or regulate based on outcomes and harms. Because we know so little about how this impacts our bodies and minds, I think regulation that focuses on the harms will be more flexible. You should consider future-proofing when you’re working with emerging technologies. To the best of my knowledge, companies are not yet using targeted advertising, but it’s tempting for them. I do think it should be prohibited. Some products technically try to distinguish gaze tracking from eye tracking, but at a policy level, I think it’s more akin to the debates about facial recognition. In the United States, some jurisdictions have started banning facial recognition in civic environments. We can’t ban eye tracking in XR or the headsets won’t work. At the same time, we need to realize that the data flows that move in and out of these devices don’t just record your reaction to stimuli, they create a record of the stimuli itself. That is why it’s different from the data flows of Web-2 based platforms, and that is why it is very tempting for companies to consider advertising-based models, especially when they don’t have a clear monetization scheme.

XR is the most valuable learning tool that humanity has come up with yet. This also makes it very tempting to appropriate as a tool for buying and selling. If I play a racing game and I see a red race car that I really like, my body starts to react to it. My heart rate goes up a little bit, my skin gets a little moister and my pupils dilate. If a company was able to see that, track it and sell that information to an advertiser, I could start getting ads in my email about auto insurance or car loans. If it’s combined with other datasets, I could see a similar red race car on a different platform being driven by someone who looks like a person I think is very sexy. This is what immersive advertising will look like. When Reality Labs said they were going to start immersive advertising in Blaston VR, it took five days of consumer outcry to cancel that initiative. Immersive advertising is experiential. It’s not graphic and it’s not based on images and pictures. Instead, it’s based on interaction. I don’t think that’s well understood yet. It’s not product placement, it’s placement in the product. It’s not like looking at a movie poster for Jurassic Park, it’s like paying for a Jurassic Park-branded experience so that you can go and feed a dinosaur. That is an advertisement for the Jurassic Park movie franchise, but you pay for the privilege of experiencing it. You must think about immersive advertising as experiential.
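As a toy illustration of that data flow, the sketch below pairs a stimulus (the red race car) with the body’s reaction to it; the pairing is what an advertiser would buy. All field names, thresholds and the inference rule are hypothetical assumptions for illustration only.

```python
# Toy sketch of the data flow described above: the log contains the stimulus
# itself (the red race car) and the user's reaction to it, and the pairing is
# what would be sold. Names, thresholds and the rule are hypothetical.

from dataclasses import dataclass

@dataclass
class StimulusEvent:
    object_id: str              # e.g. "red_race_car_01"
    gaze_dwell_ms: int          # how long the user looked at the object
    pupil_dilation_pct: float   # change relative to the user's baseline
    heart_rate_delta_bpm: float

def infer_interest(event: StimulusEvent) -> bool:
    """Crude illustrative rule: sustained gaze plus arousal markers."""
    return (event.gaze_dwell_ms > 1500
            and event.pupil_dilation_pct > 5.0
            and event.heart_rate_delta_bpm > 3.0)

event = StimulusEvent("red_race_car_01", gaze_dwell_ms=2200,
                      pupil_dilation_pct=8.5, heart_rate_delta_bpm=6.0)
if infer_interest(event):
    # This inferred-interest flag, combined with other datasets, is what would
    # drive car-loan ads in email or a tailored race car on another platform.
    print("tag user as interested in:", event.object_id)
```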

Legal codes have not caught up to that yet, especially in the United States. There are regulations in the EU that say companies need to disclose when we are interacting with an AI. I’ve not seen this manifested in any metaverse property yet. If we are going to use AI for content moderation, my question is more of a product-based question than a legal one: how is that going to manifest in XR properties? If you have a virtual guide, are they supposed to be wearing a different color jacket? Are they going to have a branded name above their head? Are they going to be wearing a name tag that says “I am an AI”? Or is it going to look like a single line in a Terms of Service document that you clicked through without reading? There are lots of questions about AI architecture and interaction that are not answered yet.
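One hypothetical way such a disclosure could manifest in a product is as a required flag on every embodied agent that the renderer must surface as a visible label. This is only a sketch of the idea; the class and field names are invented for illustration and do not reflect any existing regulation or platform API.

```python
# Hypothetical sketch of an in-world AI disclosure: every embodied agent
# carries a flag that the renderer surfaces as a visible name tag, rather
# than burying the disclosure in a Terms of Service document.

from dataclasses import dataclass

@dataclass
class EmbodiedAgent:
    display_name: str
    is_ai: bool

    def name_tag(self) -> str:
        # The disclosure is part of the label the user actually sees in-world.
        return f"{self.display_name} [AI]" if self.is_ai else self.display_name

guide = EmbodiedAgent(display_name="Virtual Guide", is_ai=True)
print(guide.name_tag())   # Virtual Guide [AI]
```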


Brittan Heller is Senior Fellow at the Atlantic Council, with the Digital Forensics Research Lab, examining XR’s connection to society, human rights, privacy, and security. She is also on the steering committee for the World Economic Forum’s Metaverse Governance initiative. She is an incoming affiliate at the Yale Law School Information Society Project and the Stanford Law School Program on Democracy and the Internet.