It is not often that cold, hard facts determine what people care most about and what they believe. Instead, it is the power and familiarity of a well-told story that reigns supreme. Whether it's a heartfelt anecdote, a personal testimony or a meme echoing familiar cultural narratives, stories tend to stick with us, move us and shape our beliefs.
Authors
- Mark Finlayson, Associate Professor of Computer Science, Florida International University
- Azwad Anjum Islam, Ph.D. Student in Computing and Information Sciences, Florida International University
This characteristic of storytelling is precisely what can make it so dangerous when wielded by the wrong hands. For decades, foreign adversaries have used narrative tactics in efforts to manipulate public opinion in the United States. Social media platforms have brought new complexity and amplification to these campaigns. The phenomenon garnered ample public scrutiny after evidence emerged of Russian entities exerting influence over election-related material on Facebook in the lead-up to the 2016 election.
While artificial intelligence is exacerbating the problem, it is at the same time becoming one of the most powerful defenses against such manipulations. Researchers have been using machine learning techniques to analyze disinformation content.
At the Cognition, Narrative and Culture Lab at Florida International University, we are building AI tools to help detect disinformation campaigns that employ tools of narrative persuasion. We are training AI to go beyond surface-level language analysis to understand narrative structures, trace personas and timelines, and decode cultural references.
Disinformation vs. misinformation
In July 2024, the Department of Justice disrupted a Kremlin-backed operation that used nearly a thousand fake social media accounts to spread false narratives. These weren't isolated incidents. They were part of an organized campaign, powered in part by AI.
Disinformation differs crucially from misinformation. While misinformation is simply false or inaccurate information - getting facts wrong - disinformation is intentionally fabricated and shared specifically to mislead and manipulate. A recent illustration of this came in October 2024, when a video purporting to show a Pennsylvania election worker tearing up mail-in ballots marked for Donald Trump swept platforms such as X and Facebook.
Within days, the FBI traced the clip to a Russian influence outfit, but not before it racked up millions of views. This example vividly demonstrates how foreign influence campaigns artificially manufacture and amplify fabricated stories to manipulate U.S. politics and stoke divisions among Americans.
Humans are wired to process the world through stories. From childhood, we grow up hearing stories, telling them and using them to make sense of complex information. Narratives don't just help people remember - they help us feel. They foster emotional connections and shape our interpretations of social and political events.
This makes them especially powerful tools for persuasion - and, consequently, for spreading disinformation. A compelling narrative can override skepticism and sway opinion more effectively than a flood of statistics. For example, a story about rescuing a sea turtle with a plastic straw in its nose often does more to raise concern about plastic pollution than volumes of environmental data.
Usernames, cultural context and narrative time
Using AI tools to piece together a picture of a story's narrator, the timeline along which it is told and the cultural details of where it takes place can help identify when a story doesn't add up.
Narratives are not confined to the content users share - they also extend to the personas users construct to tell them. Even a social media handle can carry persuasive signals. We have developed a system that analyzes usernames to infer demographic and identity traits such as name, gender, location, sentiment and even personality, when such cues are embedded in the handle. This work, presented in 2024 at the International Conference on Web and Social Media, highlights how even a brief string of characters can signal how users want to be perceived by their audience.
For example, a user attempting to appear as a credible journalist might choose a handle like @JamesBurnsNYT rather than something more casual like @JimB_NYC. Both may suggest a male user from New York, but one carries the weight of institutional credibility. Disinformation campaigns often exploit these perceptions by crafting handles that mimic authentic voices or affiliations.
Although a handle alone cannot confirm whether an account is genuine, it plays an important role in assessing overall authenticity. By interpreting usernames as part of the broader narrative an account presents, AI systems can better evaluate whether an identity is manufactured to gain trust, blend into a target community or amplify persuasive content. This kind of semantic interpretation contributes to a more holistic approach to disinformation detection - one that considers not just what is said but who appears to be saying it and why.
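The details of our username analysis system are beyond the scope of this article, but a minimal, hypothetical sketch in Python shows the general idea: surface cues in a handle, such as an embedded first name or an institutional suffix, can be extracted and read as identity signals. The lookup tables and handles below are toy examples, not the system's actual resources.

```python
import re

# Toy lookup tables standing in for the large name, organization and
# location resources (and learned models) a real system would use.
KNOWN_FIRST_NAMES = {"james": "male", "jim": "male", "maria": "female"}
KNOWN_SUFFIXES = {
    "nyt": "institutional cue (The New York Times)",
    "nyc": "location cue (New York City)",
}

def username_cues(handle: str) -> dict:
    """Extract surface-level identity cues from a social media handle."""
    cues = {"handle": handle}
    text = handle.lstrip("@")
    # Split camel case, all-caps runs, lowercase runs and digits into tokens.
    tokens = [t.lower() for t in re.findall(r"[A-Z][a-z]+|[A-Z]+|[a-z]+|\d+", text)]
    for tok in tokens:
        if tok in KNOWN_FIRST_NAMES:
            cues["name"] = tok.capitalize()
            cues["gender_cue"] = KNOWN_FIRST_NAMES[tok]
        if tok in KNOWN_SUFFIXES:
            cues["affiliation_cue"] = KNOWN_SUFFIXES[tok]
    return cues

print(username_cues("@JamesBurnsNYT"))
print(username_cues("@JimB_NYC"))
```

Run on the two example handles, the sketch surfaces a probable first name and gender cue for both, plus a New York Times cue for one and a New York City cue for the other, echoing the contrast described above.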
Also, stories don't always unfold chronologically. A social media thread might open with a shocking event, flash back to earlier moments and skip over key details in between.
Humans handle this effortlessly - we're used to fragmented storytelling. But for AI, determining a sequence of events based on a narrative account remains a major challenge.
Our lab is also developing methods for timeline extraction, teaching AI to identify events, understand their sequence and map how they relate to one another, even when a story is told in nonlinear fashion.
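The extraction step itself cannot be compressed into a few lines, but the ordering step can be illustrated with a small, self-contained sketch: once events and pairwise "happens before" relations have been identified, a topological sort recovers a chronological timeline from a story told out of order. The events and relations below are invented for the example.

```python
from collections import defaultdict, deque

# Hypothetical events extracted from a nonlinearly told thread, with
# pairwise "happens before" relations inferred from tense, dates and
# discourse cues (the extraction step itself is not shown here).
events = ["ballots mailed", "video recorded", "video posted", "FBI traces clip"]
before = [("ballots mailed", "video recorded"),
          ("video recorded", "video posted"),
          ("video posted", "FBI traces clip")]

def chronological_order(events, before):
    """Recover a chronological ordering of events via topological sort."""
    graph = defaultdict(list)
    indegree = {e: 0 for e in events}
    for earlier, later in before:
        graph[earlier].append(later)
        indegree[later] += 1
    queue = deque(e for e in events if indegree[e] == 0)
    ordered = []
    while queue:
        event = queue.popleft()
        ordered.append(event)
        for nxt in graph[event]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return ordered

print(chronological_order(events, before))
```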
Objects and symbols often carry different meanings in different cultures, and without cultural awareness, AI systems risk misinterpreting the narratives they analyze. Foreign adversaries can exploit cultural nuances to craft messages that resonate more deeply with specific audiences, enhancing the persuasive power of disinformation.
Consider the following sentence: "The woman in the white dress was filled with joy." In a Western context, the phrase evokes a happy image. But in parts of Asia, where white symbolizes mourning or death, it could feel unsettling or even offensive.
To use AI to detect disinformation that weaponizes symbols, sentiments and storytelling within targeted communities, it's critical to give AI this sort of cultural literacy. In our research, we've found that training AI on diverse cultural narratives improves its sensitivity to such distinctions.
Who benefits from narrative-aware AI?
Narrative-aware AI tools can help intelligence analysts quickly identify orchestrated influence campaigns or emotionally charged storylines that are spreading unusually fast. They might use AI tools to process large volumes of social media posts to map persuasive narrative arcs, identify near-identical storylines and flag coordinated timing of social media activity. Intelligence services could then deploy countermeasures in real time.
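As a rough illustration of the coordinated-timing idea, not a description of any deployed system, the sketch below flags clusters of near-identical posts from distinct accounts within a short time window. The posts, similarity measure and thresholds are all assumptions made for the example.

```python
from datetime import datetime, timedelta
from difflib import SequenceMatcher

# Hypothetical posts: (timestamp, account, text). A production pipeline
# would use learned text embeddings and far larger volumes of data.
posts = [
    (datetime(2024, 10, 25, 9, 0), "acct1", "Election worker caught tearing up ballots!"),
    (datetime(2024, 10, 25, 9, 3), "acct2", "Election worker caught tearing up the ballots!"),
    (datetime(2024, 10, 25, 9, 5), "acct3", "Election worker caught tearing up ballots!!"),
    (datetime(2024, 10, 26, 14, 0), "acct4", "Local shelter rescues injured sea turtle."),
]

def flag_coordinated(posts, sim_threshold=0.9, window=timedelta(minutes=10), min_accounts=3):
    """Flag groups of near-identical posts from distinct accounts in a short window."""
    flagged = []
    for i, (t_i, acct_i, text_i) in enumerate(posts):
        group = {acct_i}
        for t_j, acct_j, text_j in posts[i + 1:]:
            similar = SequenceMatcher(None, text_i.lower(), text_j.lower()).ratio() >= sim_threshold
            if similar and abs(t_j - t_i) <= window:
                group.add(acct_j)
        if len(group) >= min_accounts:
            flagged.append((text_i, sorted(group)))
    return flagged

for text, accounts in flag_coordinated(posts):
    print(f"Possible coordination: {accounts} -> {text!r}")
```

The thresholds and similarity measure here are simplistic, but the core signal is the same one an analyst would look for: many accounts pushing the same storyline at nearly the same moment.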
In addition, crisis-response agencies could swiftly identify harmful narratives, such as false emergency claims during natural disasters. Social media platforms could use these tools to efficiently route high-risk content for human review without unnecessary censorship. Researchers and educators could also benefit by tracking how a story evolves across communities, making narrative analysis more rigorous and shareable.
Ordinary users can also benefit from these technologies. The AI tools could flag social media posts in real time as possible disinformation, allowing readers to be skeptical of suspect stories, thus counteracting falsehoods before they take root.
As AI takes on a greater role in monitoring and interpreting online content, its ability to understand storytelling beyond just traditional semantic analysis has become essential. To this end, we are building systems to uncover hidden patterns, decode cultural signals and trace narrative timelines to reveal how disinformation takes hold.
Mark Finlayson receives funding from the US Department of Defense and the US National Science Foundation for his work on narrative understanding and influence operations in the military context.
Azwad Anjum Islam receives funding from the Defense Advanced Research Projects Agency (DARPA).