AI Arguments Persuasive, Even When Labeled

PNAS Nexus

Labeling content as AI-generated does not make it less persuasive than human-authored or unlabeled content, according to a study. Isabel O. Gallegos and colleagues conducted a survey experiment with 1,601 Americans to test whether authorship labels affect the persuasiveness of AI-generated messages about public policy. Each participant viewed an AI-generated message about one of four policy issues: geoengineering, drug importation, college athlete salaries, or social media platform liability. Participants were randomly assigned to see the message labeled as created by an expert AI model, labeled as written by a human policy expert, or presented with no label. The messages proved persuasive overall, shifting policy views by 9.74 points on average on a 0-to-100 scale. Although 92% of participants in the AI- and human-label conditions believed the authorship labels, the labels produced no significant differences in attitude change toward the policies, in judgments of message accuracy, or in intentions to share the message. These patterns held across political party, prior experience with AI, and education level, although older participants reacted slightly more negatively to AI-labeled content than to human-labeled content. According to the authors, labeling AI-generated content may improve transparency, but labeling by itself will not address all the challenges posed by AI-generated information.
