Coaching Tool Alerts Users to AI Bias

Pennsylvania State University

A coaching tool built into artificial intelligence (AI)-powered systems may raise user awareness of bias in AI algorithms and help individuals better prompt generative AI tools to produce more inclusive content, according to researchers at Penn State and Oregon State University.

The researchers developed a new text-to-image generative AI application intended to provide immediate media literacy interventions: methods designed to make users pause and reflect on the inclusiveness of their prompt design before image generation. As users enter prompts into the application, the "inclusive prompt coaching" tool issues warnings about biases in generative AI systems and offers suggestions for making their prompts more inclusive. The team presented their research today (April 16) at the 2026 Association for Computing Machinery (ACM) CHI Conference on Human Factors in Computing Systems in Barcelona, Spain. The paper received an honorable mention from the conference's awards committee.

In the study, the researchers found that the inclusive prompt coaching intervention increased users' awareness of algorithmic bias, that is, the tendency of AI systems to produce stereotypical content. It also boosted their confidence in writing inclusive prompts to produce less biased outputs. The intervention also improved users' perceived trust calibration, or their ability to adjust their level of trust to better reflect the system's actual trustworthiness. However, the intervention led to a less satisfactory user experience, according to the researchers.

"Oftentimes, media literacy interventions like those for social media occur outside of the medium, informing or warning users about the dangers of social media before or after they've interacted with it," said study co-author S. Shyam Sundar, Evan Pugh University Professor and the James P. Jimirro Professor of Media Effects at Penn State. "Here we are using the medium itself, AI text-to-image generators, to educate users about how to better use the medium while they're interacting with it. It's a newer twist on the media literacy approach to address the problem of lack of inclusiveness in generative AI."

To test whether prompt coaching can serve as an effective media literacy intervention, the researchers recruited 344 study participants from an online survey platform. They randomly assigned the participants to one of three study conditions: an inclusive prompt coaching condition, a detailed prompt coaching condition, or a no-coaching condition, with the latter two serving as control conditions. The researchers asked participants to use the system to generate an image of any character and then answer questions about their experience using the AI system, such as how much control they felt they had over the tool, their awareness of algorithmic bias and their confidence in their ability to craft effective prompts.
