Are you a social savant who easily reads people's emotions? Or are you someone who leaves an interaction with an unclear understanding of another person's emotional state?
New UC Berkeley research traces those differences to how our brains combine facial and contextual information, potentially explaining why some people are better at reading the room than others - sometimes, much better.
Human brains draw on information from faces and from background context, such as the location or the expressions of bystanders, when making sense of a scene and assessing someone's emotional state. If a facial expression is clear but the emotional information in the context is not, most people's brains heavily weight the clear facial expression and downplay the background context. Conversely, if a facial expression is ambiguous but the background context provides strong cues about how a person feels, people rely more on the context to understand that person's emotions.
Think of it like a close-up photo of a person crying. Without background context, you might assume they're sad. But with context - a wedding altar, perhaps - the meaning shifts significantly.

It adds up to a complex statistical assessment that weighs different cues based on their ambiguity.
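To make that computation concrete, here is a minimal Python sketch of reliability-weighted cue combination, the textbook Bayesian rule in which each cue is weighted by the inverse of its ambiguity (variance). The ratings, variances, and function name are illustrative assumptions, not the study's actual model or data:

```python
def combine_cues(face_rating, face_var, context_rating, context_var):
    """Weight each cue by its reliability (1 / variance), then normalize."""
    w_face = (1 / face_var) / (1 / face_var + 1 / context_var)
    return w_face * face_rating + (1 - w_face) * context_rating

# A clear (low-variance) face dominates an ambiguous context...
print(combine_cues(face_rating=-2.0, face_var=0.1,
                   context_rating=1.0, context_var=1.0))   # about -1.7

# ...while an ambiguous face defers to a clear context.
print(combine_cues(face_rating=-2.0, face_var=1.0,
                   context_rating=1.0, context_var=0.1))   # about 0.7
```

The key property is that the weights shift automatically: whichever cue is less ambiguous carries more of the final judgment.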
But while most people are naturally able to make those judgment calls, Berkeley psychologists say that others seemingly treat every piece of information equally. This discrepancy between complex calculus and simple averages might explain our vast differences in understanding emotions, said Jefferson Ortega, lead author of the study published today (Dec. 16) in Nature Communications.
"We don't know exactly why these differences occur," said Ortega, a psychology Ph.D. student. "But the idea is that some people might use this more simplistic integration strategy because it's less cognitively demanding, or it could also be due to underlying cognitive deficits."
Ortega's team had 944 participants continuously infer the mood of a person in a series of videos. He likened it to a video call: Some of the clips had hazy backgrounds - like blurring your background in a Zoom meeting. Others had hazy faces and clear context. This allowed his team to separate the emotional information people get from a person's face and body from the information they get from the surrounding context.
Using the participants' scene assessments from those two conditions, Ortega used a model to predict the rating each person would give when viewing all of the scene details - what he called the "ground truth."
He wanted to know whether people really weighed different inputs differently, valuing facial expressions more when backgrounds were blurred and backgrounds more when faces were fuzzy. That strategy, called Bayesian integration, combines different types of information statistically, according to each cue's ambiguity.
He expected everyone would weigh the ambiguities, decide which cue to rely on more, and make an assessment. That was true for about 70% of participants.
The remaining 30%, however, didn't appear to assess the ambiguity at all; they used a simpler strategy that essentially averaged the two cues.
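To see how far apart the two strategies can land, consider the same hypothetical scene as above - a clearly sad face in an ambiguous, mildly positive context. The numbers are again made up for illustration:

```python
face_rating, face_var = -2.0, 0.1        # clear, negative facial cue
context_rating, context_var = 1.0, 1.0   # ambiguous, mildly positive context

# Bayesian integration: weight each cue by its reliability (1 / variance).
w = (1 / face_var) / (1 / face_var + 1 / context_var)
bayesian = w * face_rating + (1 - w) * context_rating   # about -1.7

# Simple averaging: a fixed 50/50 split that ignores ambiguity entirely.
average = 0.5 * face_rating + 0.5 * context_rating      # -0.5

print(f"Bayesian: {bayesian:+.2f}   simple average: {average:+.2f}")
```

A Bayesian integrator lets the clear face drive the judgment; a simple averager is pulled halfway toward the ambiguous context.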
"It was very surprising," Ortega said, adding that it's less cognitively demanding to take simple averages than to weigh different factors more or less heavily almost instantly. "The computational mechanisms - the algorithm that the brain uses to do that - is not well understood. That's where the motivation came for this paper. It's just an amazing feat."

Ortega worked with David Whitney, a Berkeley professor of psychology whose lab focuses on how people use context to make inferences about others' emotions. The lab previously found that when a character is blurred out from a scene, people could still use context to infer the person's emotions.
"Some observers are very good at integrating context and facial expressions to understand emotions," Whitney said of the strong individual differences shown in Ortega's research. "And some folks are not so good at it."
The work adds to Ortega's recent research on people with traits associated with autism, who appear less able to weigh and combine details from faces and backgrounds. That raises the question, Ortega said, of what integration strategy they're using, an answer that could offer a clearer window into how they process information.
"This work sets the foundation for investigating that in the future," Ortega said.