Study: Public Perceptions About Emotion AI Use Across Contexts in the United States
Artificial intelligence is reshaping countless industries, but according to a new University of Michigan study, not all groups are equally comfortable with it.
Emotion AI, or emotion recognition, which claims to enable machines to interpret human emotions, has sparked discomfort, particularly among marginalized groups, according to a comprehensive survey of a diverse U.S. population.
This discomfort spans applications in health care, workplace settings, cars and even children's toys.
Marginalized communities, such as minorities and individuals with disabilities, express significantly lower comfort levels, a gap the researchers say carries profound societal and ethical implications.
The study found that while people are somewhat more comfortable with AI detecting happiness and surprise, widespread unease persists, especially around applications in social media, job interviews and consumer research.

"These comfort discrepancies highlight a pressing need to consider identity when assessing emotion AI's societal reach," said Nazanin Andalibi, assistant professor at the School of Information and the study's lead author.
With responses from nearly 600 individuals, the research underscores a collective apprehension about emotion AI across key use cases: public spaces, health care, the workplace, job interviews, consumer research, border control, social media, children's toys, education, cars and personal pursuits. Comfort was low across all 11 contexts; even the most favorable, health care, yielded low comfort, which the researchers call striking given how often emotion AI use in health care is celebrated.

"Emotion AI claims to infer our deepest, most private feelings. Even if these inferences are not accurate-which many experts say they are not-its rise still raises serious privacy concerns, as demonstrated through individuals' discomfort across deployment contexts," said study co-author Alexis Shore Ingber, School of Information research fellow.
Crucially, the analysis reveals that closely examining identity factors provides deeper insights into these comfort levels. People of color generally reported greater comfort with emotion AI than white individuals in most contexts, with exceptions such as public spaces and job interviews.
The findings carry an urgent call to action for developers and policymakers in the United States: respecting this discomfort and enacting strong regulations to protect "emotion data" are essential as emotion AI continues to be deployed across high-stakes settings.
"Emotion AI and emotion data should be regulated in the United States," Andalibi said. The European Union banned the use of emotion AI in the workplace and education recently; while that is not perfect, it is a step in the right direction, and I hope the US does better," The findings will be shared at the 2025 Conference on Human Factors in Computing Systems in Yokohama, Japan.