AI Faces Skepticism Among Marginalized Americans

University of Michigan
Concept illustration of AI tipping the scales on marginalized people. Image credit: Nicole Smith, made in part with Midjourney

Study: AI attitudes among marginalized populations in the U.S.: Nonbinary, transgender, and disabled individuals report more negative AI attitudes (DOI: 10.1145/3715275.3732081)

Artificial intelligence may be marketed as society's great equalizer, transforming businesses, streamlining work and making life easier for all. But for many marginalized Americans, AI doesn't feel like a promise.

It feels like a threat.

A new University of Michigan study finds that gender minorities, women and disabled individuals, especially those who are neurodivergent or living with mental health conditions, hold significantly more negative attitudes toward AI than their majority-group peers.

Using data from a nationally representative survey of 742 people, researchers asked participants to report their attitudes about AI's impact on their lives and work. The responses reveal a sharp divide in positivity toward AI across lines of identity.

Oliver Haimson

"AI may be everywhere, but it's not for everyone, at least not yet," said lead author Oliver Haimson, assistant professor at the U-M School of Information and Digital Studies Institute. "If we continue to ignore the perspectives of marginalized people, we risk building an AI-powered future that deepens inequities rather than reducing them."

Key findings:

  • Nonbinary and transgender participants reported the most negative views of AI overall.
  • Women were significantly less likely than men to view AI positively.
  • Disabled participants, particularly those who are neurodivergent or have mental health conditions, also expressed negative AI attitudes.
  • Surprisingly, Black participants held more positive views of AI than white participants, suggesting a more nuanced relationship with the technology among people of color.

What's driving the distrust?

The study highlights real-world harms that may fuel negative perceptions of AI:

  • Facial recognition software that misgenders or misclassifies trans and nonbinary people, often while surveilling them.
  • Predictive policing algorithms that reinforce racial bias and lead to unjust arrests.
  • Health care systems that rely on AI models not designed with disabled people in mind.

"These are not abstract concerns," Haimson said. "People who are wary of AI often have lived experiences with systems that misidentify, exclude or harm them due to their gender or disability."

Why it matters

The research calls into question the dominant narrative of AI as a neutral or universally beneficial tool. It urges technologists, companies and policymakers to pause and ask: Who is this technology working for, and who is being left behind?

The study's co-authors are doctoral student Samuel Reiji Mayworm, research fellow Alexis Shore Ingber and assistant professor Nazanin Andalibi.

The research was presented at the recent 2025 ACM Conference on Fairness, Accountability and Transparency (FAccT '25), a leading venue for work at the intersection of AI and ethics.

/Public Release. This material from the originating organization/author(s) may be of a point-in-time nature and has been edited for clarity, style and length. Mirage.News does not take institutional positions or sides; all views, positions and conclusions expressed herein are solely those of the author(s).