Just imagine new AI technology is able to read human emotions flawlessly. How would that affect us as humans? That is the question PhD candidate Alexandra Prégent is exploring.
'The use of emotion recognition is increasing across a wide range of sectors,' says Prégent. 'Healthcare, security, border control, HR, retail: these sectors all see benefits in uncovering, recognising, and analysing people's emotions because they assume it will enable them to draw meaningful conclusions. For an HR department, it's interesting to know your personality type and your level of "trustworthiness", while a security company might want a system that can flag individuals before they even become aggressive.'
Current technology is not yet capable of delivering such findings. However, in her research, Prégent anticipates that this will soon be the case. 'I think it's important to predict the ethical risks associated with these functionalities and try to understand the kind of impact they are going to have on our society,' she says. 'Most scientific studies on emotional expressions come from social psychology, and emotion recognition systems' capacity to make reliable inferences from them stems from work in affective computing. Together, this work sets the tone for how we humans currently use emotional expressions and for what we can realistically anticipate technology will use them for in the near future. But on the question of the desirability and risks linked to such capacities, philosophers are needed.'
Who has authority?
An important point for Prégent is people's freedom to choose whether to keep their emotions to themselves or to show them. 'As things are today, you have authority over your own emotions. When you say you're in pain, a doctor has to investigate, even if tests don't show any sign of pain.' The same applies in law enforcement. 'If you convincingly show remorse in a court of law, the judge will believe you.'
This will change once technology is able to recognise 'hidden' emotions. Prégent: 'An authority other than you will have the last word on your deepest emotions.' This will affect our emotional interactions. 'Emotional expressions have different functions in society. We might smile intentionally at someone to show we appreciate them. If we use technology to focus only on our felt (episodic) emotions, we will disrupt the way we behave in social interactions. Right now, if someone says "I love you", we generally trust them. The use of technology might change that trust or even cause it to disappear.'
Chilling effect
Then there is the 'chilling effect'. 'Foucault brilliantly explained how the mere idea of constant surveillance, even without the use of technology, causes structural changes in individuals' behaviours. This "chilling effect" has also been associated with constant monitoring by cameras and microphones. But because this is happening more and more frequently, it's sometimes said that new generations are less bothered by it, that we get used to it. For me, I still have a hard time coping with the idea that you can no longer voice an opinion or a question in a lecture room and be certain that you are not being recorded.'
Keep the risks in mind
Prégent does believe this new technology has benefits. 'It's great that Parkinson's or Alzheimer's, for example, can be diagnosed better, and it also seems to offer possibilities for people with autism. They are reported to often struggle to recognise people's emotions, and glasses with emotion recognition technology, for instance, might be a great help to them in that regard. It's easy to conclude that these kinds of techniques can play a positive role in society, but I prefer to focus on the risks. The big tech companies push the benefits enough.'