Generative AI Holds Promise and Peril for Democracies, Says Kreps

Generative artificial intelligence - popularized in 2022 by OpenAI's ChatGPT application - threatens to undermine trust in democracies when misused, but may also be harnessed for public good, Sarah Kreps told the President's Council of Advisors on Science and Technology (PCAST) during a public meeting May 19.

Kreps, the John L. Wetherill Professor in the Department of Government in the College of Arts and Sciences and director of the Cornell Tech Policy Institute in the Cornell Jeb E. Brooks School of Public Policy, was one of three researchers invited to discuss AI's impact on society with the group of scientists and engineers appointed by President Joe Biden to provide advice and recommendations. The council recently launched a working group to help assess opportunities and risks regarding generative AI, which refers to systems that can generate text, images and videos from a prompt after being trained on large data sets.


Presenting on "The Perils and Prospects of Generative AI in Democratic Representation," Kreps said that even five years ago, when the technology was far less capable and user-friendly, her research found that people could not distinguish between news stories written by AI and those written by mainstream media outlets such as the New York Times.

"The threat might not be that people can't tell the difference - we know that - but that if as this content proliferates, they might just not believe anything," said Kreps, participating virtually in the meeting held in San Diego. "If people stop believing anything, then it's eroding a core tenet of a democratic system, which is trust."

Kreps, a former Air Force officer who is also an adjunct professor at Cornell Law School, became an early academic collaborator with OpenAI following the 2016 presidential election, which a Senate Intelligence Committee report found was the target of widespread misinformation and foreign interference. Then, she said, Russian-led misinformation campaigns often included errors obvious to native English speakers - for example, a social media post with the headline, "In America you have right to bear arms," over a picture of a bear.

But more advanced and user-friendly tools based on increasingly powerful large language models could help overcome those deficiencies, enabling faster dissemination of political content that appears more authentic, even if it is false.

"This seemed potentially problematic from a national security perspective," Kreps said.

In subsequent research, Kreps investigated whether AI could be used to manipulate elected leaders through "astroturfing," or using a high volume of messages to create a sense of broad public support for an issue. For example, in 2017, the Federal Communications Commission found that only 6% of public comments it received about net neutrality were unique.

Kreps and colleagues emailed AI-generated and human-written advocacy letters to more than 7,000 state legislators. The results were concerning, she said: response rates were low overall, but nearly identical for AI- and human-written messages on the key issues of guns, health care and schools.

Safeguards against AI manipulation are so far insufficient, Kreps said, but are beginning to be implemented. When she prompted ChatGPT to write an op-ed supporting Russia's invasion of Ukraine, the program said it could not.

"Guardrails are emerging," Kreps said, "but the technology is so new and dynamic, it's a real challenge."

Phone calls and town halls, she said, are two low-tech avenues for more direct communication between constituents and elected representatives that could reduce mistrust.

Kreps concluded with a more hopeful perspective on how the technology could be used to enhance democracy. Lawmakers, she said, are inundated with emails. Her research shows that the same AI tools that generate messages could be used to detect them, and to provide summaries of their content by issue. AI potentially could be used to generate responses (with appropriate disclosures) that constituents deem more effective than the boilerplate language commonly used today.

"A democracy really is about these connections between the government and people," Kreps said. "There are ways to think about how generative AI can be used in the public interest."

In response to questions from PCAST members, Kreps said digital literacy education is needed well before college to help students understand and navigate technologies including generative AI, rather than trying to prohibit them. And she said modest government investments could help create incentives for research and applications by "public-minded innovators."

PCAST co-chair Maria Zuber, vice president for research and E. A. Griswold Professor of Geophysics at the Massachusetts Institute of Technology, thanked Kreps and co-panelists Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at the University of Chicago Booth School of Business, and Daron Acemoglu, the Elizabeth and James Killian Professor of Economics in the Department of Economics at MIT, for their contributions.

"While we're talking about AI, all of you emphasize the criticality of placing humans at the center of the discussion," Zuber said. "You've certainly given us some very powerful examples and insight into where this needs to go to really help people."
