Research: Faces Behind Us Seen as More Emotionally Intense

Toyohashi University of Technology (TUT)

A research team from the Cognitive Neurotechnology Unit and the Visual Perception and Cognition Laboratory at Toyohashi University of Technology investigated how facial expressions are perceived when a face is located behind an observer. Participants wearing a head-mounted display observed 3D face models presented either in front of or behind them in a virtual reality (VR) environment and made binary judgments about the facial expression. The stimuli varied continuously from neutral to angry, and participants judged whether each face appeared neutral or angry. Across four experiments (including conditions using happy and fearful expressions in addition to anger), the results consistently showed that faces presented behind the observer tended to be perceived as more emotionally intense than those presented in front (a "behind-enhancement bias"). Notably, for angry expressions, a similar tendency was observed even when participants did not turn around but instead viewed the face behind them via a virtual mirror. These findings suggest that it is not the act of turning itself, but rather the spatial position of the face behind the observer, that may influence emotion perception. The study thus points to a spatially modulated perceptual bias in the processing of emotionally salient stimuli (e.g., potential threats) located behind the observer.

The findings were published online in Cognition on March 30, 2026.

https://doi.org/10.1016/j.cognition.2026.106532

<Main>

Humans rely on facial expressions to infer others' emotions and intentions, enabling appropriate social behavior. Although many studies have investigated the mechanisms of facial expression perception, most have focused on faces presented in front of the observer. In everyday environments, however, faces may also appear behind or to the side of the observer. How egocentric spatial position influences emotion perception remains poorly understood.

In this study, we used a virtual reality (VR) environment to systematically manipulate whether faces were presented in front of or behind the observer, and quantified perceived expression intensity using psychophysical methods. Participants wearing a head-mounted display observed 3D face models presented either in front of or behind them. In the front condition, faces were viewed directly, whereas in the behind condition, participants turned around to view the stimuli from either the left or right side. On each trial, participants judged whether a face appeared neutral or emotional. In the anger condition, for example, intermediate faces were generated by morphing between neutral and angry expressions, and participants reported whether each face appeared neutral or angry. Based on these responses, we quantified the perceptual bias in expression judgments separately for the front and behind conditions.
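To illustrate how such a perceptual bias can be quantified, the sketch below fits a logistic psychometric function to the proportion of "angry" responses at each morph level for each condition and compares the resulting points of subjective equality (PSE). This is a generic Python illustration of the psychophysical logic, not the authors' analysis code; all data values, function names, and parameters are assumptions.

```python
# Minimal sketch (not the authors' code) of quantifying a behind-enhancement
# bias from binary expression judgments: fit a logistic psychometric function
# to the proportion of "angry" responses at each morph level, separately for
# the front and behind conditions, then compare the points of subjective
# equality (PSE). Morph levels, response proportions, and starting values
# below are illustrative assumptions.
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, pse, slope):
    """Logistic function: probability of an 'angry' response vs. morph level."""
    return 1.0 / (1.0 + np.exp(-slope * (x - pse)))

# Morph levels from 0 (fully neutral) to 1 (fully angry); hypothetical data.
morph_levels = np.linspace(0.0, 1.0, 7)
p_angry_front = np.array([0.02, 0.08, 0.20, 0.48, 0.80, 0.95, 0.99])
p_angry_behind = np.array([0.05, 0.15, 0.38, 0.65, 0.90, 0.98, 1.00])

# Simple least-squares fits (a binomial model would be more rigorous).
(pse_front, _), _ = curve_fit(psychometric, morph_levels, p_angry_front, p0=[0.5, 10.0])
(pse_behind, _), _ = curve_fit(psychometric, morph_levels, p_angry_behind, p0=[0.5, 10.0])

print(f"PSE (front):  {pse_front:.3f}")
print(f"PSE (behind): {pse_behind:.3f}")
# A lower PSE in the behind condition means less physical anger is needed for
# the face to be judged "angry", i.e., it is perceived as more intense.
print(f"PSE shift (front - behind): {pse_front - pse_behind:.3f}")
```

In this formulation, a smaller PSE in the behind condition corresponds to the psychometric curve shifting toward the neutral end of the morph continuum, which is one conventional way such an enhancement effect can be expressed.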

In Experiment 1, which used angry and happy expressions, faces presented behind the observer were perceived as more emotionally intense than those presented in front. To determine whether this effect was driven by body rotation or by spatial position, Experiment 2 introduced a condition in which participants viewed faces behind them via a virtual mirror, without turning around. Under this condition, the effect persisted for angry expressions. However, similar effects were not clearly observed for happy (Experiment 3) or fearful (Experiment 4) expressions under the no-rotation condition. Importantly, when participants turned around to view faces behind them directly, the enhancement effect was consistently observed across all experiments.

These findings indicate that facial expression perception depends not only on facial features but also on egocentric spatial position—specifically, where the face is located relative to the observer. The enhanced perception of faces behind the observer may reflect a spatially tuned perceptual bias that prioritizes emotionally salient stimuli located behind the observer. The first author, Dr. Hideki Tamura, commented: "Most previous psychophysical studies have focused on perception in front of the observer. Our findings suggest that perception behind the observer may also be systematically biased. This provides new insight into how human perception is shaped by spatial context."

<Future Directions>

Although this study focused on faces as socially relevant stimuli, future research will examine whether similar effects occur for non-social stimuli or for lower-level visual features such as color and shape. This will help clarify how general this spatial bias is across perceptual domains. In addition, it will be important to investigate whether higher-level social judgments, such as trustworthiness or attractiveness, are also influenced by egocentric spatial position. These findings may contribute to the design of human–computer interfaces and human–agent interaction systems by improving our understanding of how emotion and attention are distributed across space.

<Publication Information>

Tamura, H.*, Kobayashi, Y., Nakauchi, S., & Minami, T. (2026). Enhanced emotion perception for faces behind the observer. Cognition, 273, 106532. https://doi.org/10.1016/j.cognition.2026.106532

(*: Corresponding author)

<Acknowledgments>

This work was supported by JSPS KAKENHI (Grant Numbers JP25K21323 to H.T., JP25H01141 to S.N., and JP23KK0183 to T.M.). This study was based on the results obtained from project JPNP20004, which was subsidized by the New Energy and Industrial Technology Development Organization (NEDO). The authors wish to thank Teruyuki Inoue and Yuta Matsubara for supporting the data collection.

Part of this work was supported by the Tokai Pathways to Global Excellence (T-GEx) program under the MEXT Strategic Professional Development Program for Young Researchers.
