Story and multimedia by Joey Garcia, University Communications and Marketing
If you've used generative artificial intelligence, you've likely noticed that the system tends to agree with you, complimenting you in its responses. But human interactions aren't typically built on flattery. To help strengthen these conversations, researchers in the USF Bellini College of Artificial Intelligence, Cybersecurity and Computing are challenging the technology to think and debate in ways that resemble human reasoning.
AI systems don't hold firm beliefs the way humans do. They generate responses based on statistical data patterns without tracking how confident they are in an idea or whether that confidence should change over time. To probe that limitation, USF doctoral student Onur Bilgin developed a framework to study how AI systems respond to disagreement. The work was conducted in USF Associate Professor John Licato's Advancing Machine and Human Reasoning Lab.

USF Associate Professor John Licato and doctoral student Onur Bilgin built this framework to explore how future AI systems might reason together more transparently and predictably.
"We wanted to understand what happens when AI systems are given the ability to hold a belief and then encounter opposing viewpoints, similar to situations people find themselves in. That process can help people think through complex problems by examining different perspectives rather than relying on a single answer."
USF Doctoral Student Onur Bilgin
GIVING AI EXPLICIT BELIEFS
With that framework, the lab focused on how assigning beliefs and confidence levels shapes the way AI systems respond to disagreement. To do so, Bilgin used agents: unlike a typical chat interaction, agents are user-created roles within the same AI system, each with its own defined tasks and viewpoints.
Excerpt example of two agents with instructed beliefs and confidence levels.

Example of two agents debating a position.
In Bilgin's framework, each agent is designed to have a specific belief and confidence level. For example, one agent might argue that solar energy is the most reliable renewable power source and hold that view with high confidence. A second agent is then introduced in the same chat to challenge that belief, arguing that wind energy is more reliable because it can generate power day and night, but with lower confidence.
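To make that setup concrete, here is a minimal Python sketch of how two such agents might be specified. It is written under stated assumptions: the BeliefAgent class, its field names and the prompt wording are illustrative inventions, not the lab's published code.

```python
# A minimal, hypothetical sketch of the agent setup described above.
# The class, field names and prompt wording are illustrative assumptions,
# not the lab's published implementation.
from dataclasses import dataclass


@dataclass
class BeliefAgent:
    name: str
    belief: str        # the proposition the agent is instructed to hold
    confidence: float  # instructed confidence on a 0-1 scale

    def system_prompt(self) -> str:
        # Structured belief information is written directly into the prompt,
        # so no model retraining is required.
        return (
            f"You are {self.name}. You believe the following: {self.belief} "
            f"Your confidence in this belief is {self.confidence:.1f} out of 1.0. "
            "Defend your position, but revise your confidence if the opposing "
            "arguments are convincing."
        )


solar_agent = BeliefAgent(
    name="Agent One",
    belief="Solar energy is the most reliable renewable power source.",
    confidence=0.9,
)
wind_agent = BeliefAgent(
    name="Agent Two",
    belief="Wind energy is more reliable because it can generate power day and night.",
    confidence=0.6,
)

print(solar_agent.system_prompt())
print(wind_agent.system_prompt())
```

Running the script simply prints the two system prompts, showing how a belief and a confidence level become explicit text that the model can reason over during a debate.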
"Rather than trying to decide which belief is right, we're focused on understanding how different levels of confidence shape the way an AI system responds when its beliefs are challenged and how those beliefs shift or stabilize over time," Bilgin said.
OBSERVING HUMAN-LIKE PATTERNS IN AI
After the debate rounds, the team observed how closely the AI agents' behavior mirrored familiar human group dynamics. Agents assigned lower confidence levels were more open to revising their beliefs, while those starting with higher confidence tended to be more persuasive. When several agents disagreed with a single participant, that participant was more likely to change its position, similar to peer pressure in human discussions.
"These aren't emotions or opinions in the human sense," Bilgin said. "But the patterns of belief change we observed, including confidence, openness and influence from others, are very similar to how people reason in group settings."

Bilgin with his AI framework.

Final confidence levels after multiple debate rounds, revealing agent two as the more persuasive agent.
Notably, these behaviors emerged without retraining the AI models. Simply adding structured belief information to the prompt was enough to change how the systems reasoned during debate.
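As a rough illustration of that prompt-level approach, the sketch below restates each agent's belief and confidence in its prompt on every debate round and nudges confidence with a toy update rule. The query_model stub and the update rule are assumptions made for illustration, not the framework's actual belief-revision logic.

```python
# A hypothetical sketch of prompt-level debate rounds. query_model() is a
# stand-in for any chat-model call, and the confidence-update rule is an
# illustrative assumption, not the framework's actual belief-revision logic.

def query_model(system_prompt: str, opponent_argument: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply here."""
    return f"Countering the claim that '{opponent_argument[:40]}...'"


def run_debate_round(agents: list, arguments: dict) -> None:
    """One round: each agent sees the other's argument, replies, and may shift confidence."""
    for agent in agents:
        opponent = next(a for a in agents if a is not agent)
        prompt = (
            f"You believe: {agent['belief']} "
            f"(confidence {agent['confidence']:.2f}). "
            "Read the opposing argument and respond."
        )
        arguments[agent["name"]] = query_model(prompt, arguments[opponent["name"]])
        # Toy update: lower-confidence agents drift further from their starting
        # position each round, echoing the pattern the team reported.
        shift = 0.1 * (1.0 - agent["confidence"])
        agent["confidence"] = max(0.0, agent["confidence"] - shift)


agents = [
    {"name": "Agent One", "belief": "Solar energy is the most reliable renewable source.", "confidence": 0.9},
    {"name": "Agent Two", "belief": "Wind energy is more reliable day and night.", "confidence": 0.6},
]
arguments = {agent["name"]: agent["belief"] for agent in agents}

for round_number in range(1, 4):
    run_debate_round(agents, arguments)
    print(round_number, [(a["name"], round(a["confidence"], 2)) for a in agents])
```

Printing the confidence values after each round shows the lower-confidence agent shifting fastest, the same qualitative pattern the researchers observed, even though no model weights are touched.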
WHY BELIEF STRUCTURE MATTERS
The findings show an important distinction in AI design: Changing how AI sounds isn't the same as changing how it decides. Many users assume that telling AI to have a certain personality will influence its behavior. But this research suggests that meaningful behavioral change requires more than tone. It requires explicit structure defining what the system believes and how those beliefs can evolve.

The belief framework is one of many exciting projects happening within the college and in John Licato's Advancing Machine and Human Reasoning Lab.
"As AI systems are increasingly used to support planning, analysis and decision-making, understanding how beliefs form and change becomes critical," Licato said. "If we want AI systems to reason together reliably, we need to think beyond surface-level prompts."
The research offers insight into how future AI systems might reason together more transparently and predictably. Systems that can track and update beliefs may be easier to inspect, test and govern, contributing to ongoing conversations around AI safety and trust.