Are You Fooled By Disinformation?

PolitiFact's editor-in-chief gives a UConn audience a crash course on how to spot false info online


Katie Sanders and Professor Crawford speaking on AI, fact-checking, and disinformation (Anna Heqimi / UConn Photo)

A timely discussion on fact-checking and AI was held in Wilbur Cross on April 1, but the widespread dissemination of disinformation is no joke.

Disinformation is inundating our media diet. With AI-generated content becoming extremely difficult to recognize at first glance, distinguishing fact from fiction is a skill that everyone needs to develop.

Katie Sanders, editor-in-chief of PolitiFact - a nonpartisan fact-checking website - joined Amanda J. Crawford, professor in the Department of Journalism, to discuss current media posts that are deceiving people, and what methods news consumers can use to uncover the truth.

The falsely reported death of Israeli Prime Minister Benjamin Netanyahu, spread on multiple social media platforms, is a recent example of disinformation, and debunking it swiftly made it PolitiFact's most popular fact-check, according to Sanders.

Transparency, "rabid nonpartisanship" - equally scrutinizing both political parties - and correcting mistakes publicly are among PolitiFact journalists' principles, she added.

Sanders said the process of verifying information begins with analyzing the source, asking questions including: Who is behind the post? What evidence is there that supports the claim(s) made? What are other sources saying about the claim(s) made?

For visuals, a reverse image search can help verify claims about a photo's origins.

Sanders and Crawford field questions about disinformation and AI (Anna Heqimi / UConn Photo)

Crawford stressed the importance of viewing everything with a critical eye. "When we see something that supports our preconceived bias, then we are more likely to fall for it," she said. "We're all at risk with being okay with disinformation if it supports our side."

With PolitiFact's "Truth-O-Meter," journalists research and report on the accuracy of a specific, widely circulated claim. Reporters then suggest a rating for the statement ranging from "True" to "Half True" to "Pants on Fire" - a rating reserved for statements that are not only inaccurate but "ridiculous." Three editors review and discuss each article before publication, asking the reporter for the evidence behind every statement that supports or refutes the claim.

Even though fact-checking and spreading truth are imperative, Crawford warned that there are times when fact-checking can "backfire."

Crawford, whose research concentrates on misinformation and media coverage of mass shootings, raised a concern about amplifying controversies or disinformation when it is not yet widely known to the public, with the risk that publishing fact-checks on falsehoods could give them traction.

While the spread of false information on social media is nothing new, the increasing sophistication of generative AI has made producing and spreading it easier than ever before.

As a seemingly benign example, Crawford cited AI-generated cat videos that people believe are real. The corollary: If we are falling for benign fake cat videos, are we going to believe disinformation that actually has consequences?

PolitiFact recently investigated an AI video showing a crying toddler touching the casket of his military father who was killed in Iran, Sanders said. Many expressed their empathy and sorrow after viewing the video, believing it was real.

Yet Sanders cited one benefit to artificial intelligence. She said her team is experimenting with a "Jurisprudence Assistant" that generates recommended ratings for fact-checks, consulting an archive of PolitiFact's previous claim ratings. Sanders said the AI assistant can provide more information or help editors strengthen their conclusions. They do not use AI to draft or edit stories.

For those new to fact-checking, Sanders said that AI tools such as ChatGPT can be useful as a starting point for research, akin to a Google search. However, users must approach the tool with caution, cognizant that fake sources can be generated.

The message of the evening was clear: Stay vigilant and approach everything with skepticism.

The event was sponsored by UConn, PolitiFact, and the Connecticut Foundation for Open Government. Co-sponsors included the UConn Department of Journalism, Department of Political Science, Humanities Institute, Alan R. Bennett Political Science Honors Fund, and Student Chapter of the Society of Professional Journalists.
