With officials in some countries equipped with augmented reality helmets that detect people’s temperatures to predict COVID-19, it is timely to question the ethical implications of such technologies.
A new literature review by Drs Marcus Carter and Ben Egliston from the Department of Media and Communications details six broad challenges and opportunities presented by augmented reality (AR) and virtual reality (VR).
1. What are your expectations of privacy in public space?
Aside from visual information, AR devices can capture audio from an environment, such as voices and ambient sound, which is currently overlooked in legislation and in AR privacy discussions. But there is also potential to use AR in public spaces in ways that benefit people. For example, artists have used AR to overlay artworks at the New York Museum of Modern Art, rendering them unrecognisable, with the goal of challenging the authority of high art as something often produced by individuals with particular social and class interests.
2. Are these technologies accessible and inclusive?
The design of AR devices can exclude people with disabilities from participation and use. Gender-based exclusion, too, can be problematic, with reports of sexual harassment (against humans and nonhumans) in virtual spaces. Yet these devices can also be inclusive: they can foster empathy for people with disabilities by allowing users to experience other bodies, and can assist people with cognitive impairments.
3. How much power are you ceding to platforms?
Platforms that acquire VR technology gain access to granular data that may otherwise not be available to them. This data can subsequently be monetised – or worse, used by unscrupulous actors like totalitarian governments. For example, Facebook may use its Oculus (virtual reality) data for targeted advertising. Considering Facebook’s history of questionable business practices, such as its privacy violations around facial recognition, this is troubling. The privacy of the spatial data that AR can capture – think environments mapped in Pokémon Go – is of perhaps even greater concern.
4. Is the military or government involved?
Much of the initial, underlying technology of VR and AR was developed by the US military; for example, spatial software used by Google Maps and in Pokémon Go was used as a warfighting technology by the US in Iraq in the early 2000s. More recently, during the ongoing COVID-19 pandemic, several countries, including China, Italy and the UAE, have equipped officials with ‘smart helmets’ that use AR to detect people’s temperatures. Given China has already run afoul of Western ethics standards by using facial recognition software to profile its Uighur population, the potential for misuse of AR data by governments is clear.
5. Do these technologies foster empathy?
While some VR proponents, like VR filmmaker Chris Milk, claim it can boost people’s empathy (‘you feel present with the people that you are inside of it with’), this is not supported – and indeed is challenged – by research.
6. Will your employer use them against you?
The authors could not find any literature (including non-academic writing) that critically addresses the implications of AR in the workplace, or that even maps the scale of its use in this context. There is, however, literature on wearable devices, like smart wristbands, in the workplace. These are used for everything from tracking the efficiency of manual labourers in factories (as Amazon has done) to recording white-collar workers’ fitness goals. It does not take a giant leap to imagine the implications of AR and VR in workplaces.