She studies AI as existential media

Amanda Lagerkvist is senior lecturer at the Department of Informatics and Media and has established an entirely new field of research: existential media studies.

Photograph: Mikael Wallerstedt

How are we influenced when smart digital assistants, like Siri and Alexa, become part of our homes? And what happens when we begin to use biometrics to track individuals deemed deviant? "More research is needed on what it means to be human in a digital era," says Amanda Lagerkvist.

She is senior lecturer at the Department of Informatics and Media and has established an entirely new field of research: existential media studies. This field deals with what it means to be human in a digital age, when more and more of society is automated.

With her research, she wants to contribute more than just a critical perspective on the hype surrounding artificial intelligence.

"We cannot just say these developments are positive. A new opportunity with technology can also include vulnerability. This is why humanists like myself need to be involved. But we shouldn't just criticise, we also have to take on the challenge creatively. We need to talk to AI researchers about what we want to do with AI, what major societal questions we can solve, but also what values are in the balance."

Biometric AI changes how we view humanity

Amanda Lagerkvist heads the interdisciplinary project BioMe. It deals with the existential consequences of biometric AI, that is, technology that uses the body to identify a person, such as facial and voice recognition.

"The face, voice and body are unique for humans. What happens when they are interwoven with new technologies? What is the biometric person and how will it change how we see ourselves and each other?"

Other problems arise when biometrics are used to locate people and to track individuals deemed deviant in a society.

"What happens with LGBTQ+ and minority groups, such as undocumented individuals, whose entire existence is based on not being seen? These are examples of why this research is so important. These autonomous systems reach into the very depths of our existence."

Asking inconvenient questions

She is one of the researchers to receive a grant within the large WASP-HS programme, which studies AI and automated systems from a humanities and social sciences perspective. She is pleased to see that the humanities have been given such a central role in the initiative.

"As researchers in the humanities, our task and our mission from society is always to ask inconvenient questions, to provide context and to dig deeper. But we have to work with AI researchers to formulate what is at stake."

She is collaborating with the Chalmers Artificial Intelligence Research Center and is planning a series of meetings in both Gothenburg and Uppsala. Together, they will conduct research in cooperation with industry.

Solving major societal challenges

The AI field is rife with prophets declaring that it will solve all of humanity's problems. Amanda Lagerkvist says it would of course be wonderful if major societal challenges, like the coronavirus pandemic or the climate crisis, could be solved with the help of technology. But there also needs to be a discussion about what AI should not be used for. Perhaps we need "safe zones" beyond the reach of automation, she suggests.

"Based on the discussion so far, it has sounded like AI is a force of nature that will take over and reign over us, but do we need to automate everything just because we can? The technology is built by people for specific purposes and to solve specific problems, but it becomes dangerous when we allow it to take the lead in societal development."
