Generative AI Models Could Distort Human Beliefs

American Association for the Advancement of Science (AAAS)

Generative AI models such as ChatGPT, DALL-E, and Midjourney all have features that may distort human beliefs by transmitting false information and stereotyped biases, according to Celeste Kidd and Abeba Birhane. In this Perspective, they discuss how research on human psychology can explain why generative AI could be particularly powerful at distorting beliefs. The capabilities of generative AI have been exaggerated, they suggest, fostering a belief that these models exceed human capabilities. People are predisposed to adopt information from knowledgeable, confident agents like generative AI more rapidly and with greater certainty, the authors note.

Generative AI can produce false and biased information that can be spread widely and repeatedly, both factors that predict how deeply that information becomes ingrained in people's beliefs. People are most susceptible to influence when they are seeking information and hold on to it more stubbornly once it has been received. Because much of generative AI is currently designed for search and information provision, it may be difficult to change the minds of people exposed to false or biased information provided by generative AI, Kidd and Birhane suggest. "There is an opportunity right now, while this technology is young, to conduct interdisciplinary studies to evaluate these models and measure their impacts on human beliefs and biases before and after exposure to generative models—before these systems are more widely adopted and more deeply embedded into other everyday technologies," they conclude.