Artificial intelligence (AI) is often described as a "black box," with its logic obscured from human understanding. But how much does the average user actually care about how AI works? It depends on the extent to which a system meets users' expectations, according to a new study by a team that includes Penn State researchers. Using a fabricated algorithm-driven dating website, the team found that whether the system met, exceeded or fell short of user expectations directly corresponded to how much the user trusted the AI and wanted to know about how it worked.
The study is available online ahead of publication in the April 2026 issue of the journal Computers in Human Behavior. The findings have implications for companies across industries, including health care and finance, that are developing such systems to better understand what users want to know and to deliver useful information in a comprehensible way, according to co-author S. Shyam Sundar, Evan Pugh University Professor and James P. Jimirro Professor of Media Ethics in the Penn State Donald P. Bellisario College of Communications.
"AI can create all kinds of soul searching for people - especially in sensitive personal domains like online dating," said Sundar, who directs the Penn State Center for Socially Responsible Artificial Intelligence and co-directs the Media Effects Research Laboratory. "There's uncertainty in how algorithms produce what they produce. If a dating algorithm suggests fewer matches than expected, users may think something is wrong with them, but if suggests more matches than expected, then they might think that their dating criteria are too broad and indiscriminate."
In this study, 227 participants in the United States who reported being single answered questions at smartmatch.com, a fictitious dating site created by the researchers for the study. Each participant was assigned to one of nine testing conditions and directed to answer typical dating site questions about their interests and the traits they find desirable in others. The site then told them that it would provide 10 potential matches on their "Discover Page" and that it "normally generates five 'Top Picks' for each user." Depending on the testing condition, the participant saw either the promised five "Top Picks" with a message confirming that five was the norm, or a variation accompanied by a message noting that while five was typical, this time the system had found two or 10.
"If someone expect five matches, but get two or 10, then a user may think they've done something wrong or that something is wrong with them," said lead author Yuan Sun, assistant professor in the University of Florida's College of Journalism and Communications. Advised by Sundar, she earned her doctorate from Penn State in 2023. "If the system works fine, you just go along with it; you don't need a long explanation. But what do you need if your expectations are unmet? The broader issue here is transparency."
That may differ from how humans respond when other humans defy expectations, according to co-author Joseph B. Walther, Bertelsen Presidential Chair in Technology and Society and distinguished professor of communication at the University of California, Santa Barbara, who has long studied expectancy violations in interpersonal settings. When humans violate expectations, the surprised party tends to make judgments about the violator, to like them more or less, and to approach or avoid them thereafter.
"Being able to find out 'why the surprise?' is a luxury and source of satisfaction," he said, explaining that asking another person why they behaved as they did is intrusive and potentially awkward. "But it appears that we're unafraid to ask the intelligent machine for an explanation."
Participants in the study had the opportunity to request more information about their results and then rate their trust in the system. The researchers found that when the system met expectations - delivering the promised five top picks - participants reported trusting the system without needing an explanation of the AI's inner workings. When the system overdelivered, a simple explanation to clarify the mismatched expectations bolstered user trust in the algorithm. However, when the system underdelivered, users required a more detailed explanation.
"Many developers talk about making AI more transparent and understandable by providing specific information," Sun said. "There is far less discussion about when those explanations are necessary and how much should be presented. That's the gap we're interested in filling."
The researchers pointed out that many social media apps already provide an option for users to learn more about the systems in place, but those explanations are relatively standardized, use technical language and are buried in the fine print of broader user agreements.
"Tons of studies show that these explanations don't work well. They're not effective in the goal of transparency to enhance user experience and trust," Sundar said, noting that many of the current explanations are treated like disclaimers. "No one really benefits. It's due diligence rather than being socially responsible."
Sun noted that the bulk of the scientific literature reports that the better a site performs, the more people trust it. Yet these findings suggest that isn't always the case: people still wanted to understand the reasoning, even when they were given far more top picks than promised.
"Good is good, so we thought people would be satisfied with face value, but they weren't. They were curious," Sun said. "It's not just performance; it's transparency. Higher transparency gives people more understanding of the system, leading to higher trust."
However, as more industries adopt AI, the researchers said that standardized, one-size-fits-all transparency is not sufficient.
"We can't just say there's information in the terms and conditions, and that absolves us," Sun said. "We need more user-centered, tailored explanations to help people better understand AI systems when they want it and in a way that meets their needs. This study opens the door to more research that could help achieve that."
Mengqi "Maggie" Liao, University of Georgia, also collaborated on this project.