More and more people are turning to ChatGPT or other AI chatbots for advice and emotional support, and it's easy to see why. Unlike a friend or a therapist, a chatbot is always available, listens to everything you have to say, and provides responses that are often thoughtful and validating.
But confiding in chatbots can be risky. Many of these technologies are designed primarily to drive engagement and may provide users with responses that are false or harmful. And unlike a friend or therapist, a chatbot's output reflects the norms and biases encoded in its training data, which could differ from those of your social group or community. With so many people seeking advice from chatbots, these unknown norms and biases could have surprising impacts on human behavior and society at large.

"Through their advice and feedback, these technologies are shaping how humans act, what they believe and what norms they adhere to," said Pratik Sachdeva, a senior data scientist at UC Berkeley's D-Lab. "But many of these tools are proprietary. We don't know how they were trained. We don't know how they are aligned."
To start to reveal the hidden norms encoded in popular AI chatbots and how they might impact human behavior, Sachdeva and Tom van Nuenen, a senior data scientist and lecturer at the D-Lab, turned to the internet's favorite source of moral dilemmas: Reddit's "Am I the Asshole?" (or AITA) forum.
In a recent study, published as a preprint, Sachdeva and Van Nuenen confronted seven different large language models (LLMs) - the AI systems that power chatbots - with more than 10,000 real-world social conflicts posted to the forum, asking each model to decide who was at fault in each situation and comparing its responses to those of Reddit users.
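For readers who want a concrete sense of the setup, the sketch below shows one way such a comparison could be wired up in Python. It is only an illustration under assumed details: the model names, prompt handling and verdict labels are placeholders for demonstration, not the researchers' actual code, prompts or models.

```python
import random
from collections import Counter

# A minimal sketch of the comparison idea, not the study's actual pipeline.
# The model names and verdict labels below are illustrative assumptions;
# swap ask_model() for a real call to your chatbot provider of choice.

MODELS = ["model-a", "model-b", "model-c", "model-d",
          "model-e", "model-f", "model-g"]  # stand-ins for seven different LLMs


def ask_model(model_name: str, post_text: str) -> str:
    """Placeholder for an LLM call: returns a verdict of 'YTA' ("you're the
    asshole") or 'NTA' ("not the asshole") for the given AITA-style post."""
    # Replace this random stub with a real API request to the chosen model.
    return random.choice(["YTA", "NTA"])


def chatbot_consensus(post_text: str) -> str:
    """Collect one verdict per model and return the majority label, which
    could then be compared against the Redditors' own consensus verdict."""
    verdicts = [ask_model(m, post_text) for m in MODELS]
    return Counter(verdicts).most_common(1)[0][0]


post = "Example AITA post text goes here..."
print(chatbot_consensus(post))
```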
They found that the seven chatbots often showed striking differences in how they judged the Reddit users' moral dilemmas, revealing that each LLM reflects different ethical standards. However, when the researchers compared the chatbots' judgments with those of Reddit users, or Redditors, they found that the consensus opinion of the seven chatbots usually agreed with the consensus opinion of people on Reddit.
"When you have a dilemma, you might ask a series of different friends what they think, and each of them might give you a different opinion. In essence, this is what Reddit users are doing on the AITA forum," Sachdeva said. "You could do the same thing with chatbots - first, you ask ChatGPT, then you ask Claude and then you ask Gemini. When we did that, we found that there was consistency between the majority opinions of Redditors and the majority opinion of chatbots."