As artificial intelligence tools like ChatGPT are integrated into our everyday lives, our online interactions with AI chatbots are becoming more frequent. Are we welcoming them, or are we trying to push them away?
New research from Binghamton University is trying to answer that question through VizTrust, an analytics tool that makes user trust dynamics in human-AI communication visible and understandable.

Xin "Vision" Wang, a PhD student at the Thomas J. Watson College of Engineering and Applied Science's School of Systems Science and Industrial Engineering, is developing VizTrust as part of her dissertation. She presented her current work and findings in April at the Association for Computing Machinery (ACM) CHI 2025 conference in Yokohama, Japan.
VizTrust was born out of a pressing challenge: User trust in AI agents is highly dynamic, context-dependent, and difficult to quantify using traditional methods.
"Most studies rely on post-conversation surveys, but they only can capture trust state before and after the human-AI interaction," Wang said. "They miss the detailed, moment-by-moment signals that show why a user's trust may rise or fall during an interaction."
To address this, VizTrust evaluates user trust based on four dimensions grounded in social psychology: competence, benevolence, integrity and predictability. Additionally, VizTrust analyzes trust-relevant cues from user messages - such as emotional tone, engagement level and politeness strategies - using machine learning and natural language processing techniques to visualize changes in trust over the course of a conversation.
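To make that pipeline concrete, here is a minimal illustrative sketch, not the authors' implementation, of how trust-relevant cues in each user message might be scored and accumulated into the four dimensions. The cue lexicons, weights, and function names are hypothetical stand-ins for the machine learning and natural language processing models VizTrust actually uses.

```python
# Hypothetical sketch of a VizTrust-style scoring pass. The cue lists and
# weights below are toy placeholders; the real tool relies on learned
# ML/NLP models, not keyword matching.

DIMENSIONS = ("competence", "benevolence", "integrity", "predictability")

# Toy trust-relevant cues: phrases that might signal rising or falling trust.
POSITIVE_CUES = {"thanks", "helpful", "great", "makes sense"}
NEGATIVE_CUES = {"wrong", "already", "again", "useless", "confusing"}

def score_message(text: str) -> dict[str, float]:
    """Return a toy per-dimension trust delta for one user message."""
    lowered = text.lower()
    pos = sum(cue in lowered for cue in POSITIVE_CUES)
    neg = sum(cue in lowered for cue in NEGATIVE_CUES)
    delta = 0.1 * (pos - neg)
    # A real system would score each dimension with its own model;
    # here every dimension moves together for simplicity.
    return {dim: delta for dim in DIMENSIONS}

def trust_trajectory(messages: list[str]) -> list[dict[str, float]]:
    """Accumulate per-message deltas into a trust curve per dimension."""
    state = {dim: 0.5 for dim in DIMENSIONS}  # neutral starting trust
    trajectory = []
    for msg in messages:
        for dim, delta in score_message(msg).items():
            state[dim] = min(1.0, max(0.0, state[dim] + delta))
        trajectory.append(dict(state))
    return trajectory

if __name__ == "__main__":
    chat = [
        "Thanks, that advice is helpful.",
        "You already suggested that, and it was wrong for my situation.",
    ]
    for step, snapshot in enumerate(trust_trajectory(chat), 1):
        print(step, snapshot)
```

The toy version only shows the shape of the computation: per-message cue scores become per-dimension deltas, and the running totals form the trust curve that VizTrust visualizes over the course of a conversation.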
"The power of large language models and generative AI is rising, but we need to find out the user experience when people use different conversational applications," Wang said. "Without diving in to see what exactly happened that influenced a bad experience, we can never really find out the best solution to improve the AI model."
The research paper illustrates the functionality of VizTrust through a use case involving a software engineer stressed out by his job and a therapy chatbot designed to support workers. They discuss his work-related stress, and the chatbot offers advice on how to manage it.
By analyzing subtle linguistic and behavioral shifts in user language and interaction, VizTrust pinpoints moments when trust is built or eroded. For example, it flags one moment when trust declines because the chatbot repeats suggestions the user dislikes. This type of insight is vital not only for academic understanding but also for practical improvements to conversational AI system design.
"Trust is not just a user issue - it's a system issue," Wang said, "With VizTrust, we're giving developers, researchers and designers a new lens to see exactly where trust falters, so they can make meaningful upgrades to their AI system."
VizTrust has already gained recognition by being accepted as a late-breaking work at CHI 2025, the most prestigious conference in the field of human-computer interaction. It stood out among more than 3,000 late-breaking submissions from around the world, with a competitive acceptance rate of just under 33%.
Co-authors on the project include SSIE Assistant Professors Sadamori Kojaku and Stephanie Tulk Jesso as well as Associate Professor David M. Neyens from Clemson University and Professor Min Sun Kim from the University of Hawaii at Manoa.
Wang is moving VizTrust to the next stage of development, with a focus on increasing its adaptability to individual differences.
"When people interact with AI agents, they may have very different attitudes," she said. "We may need to take a specific, individual perspective to understand their trust - for example, their personal characteristics, their implicit trust level, even their previous interactions with AI systems can influence their attitudes."
Looking ahead, Wang envisions deploying VizTrust as a publicly available tool online to support broader research and development.
"By making VizTrust accessible," she said, "we can begin to bridge the gap between technical performance and human experience and make AI system more human-centered and responsible."