Researchers at the University of Vermont have uncovered a powerful new insight about how language works—one that overturns a cornerstone assumption in psychology, linguistics, and artificial intelligence that has stood for more than 70 years.
Their study, published May 6 in Science Advances, introduces "ousiometrics," the quantitative study of essential meaning, and reveals that language is fundamentally organized not around emotion alone, but around a deeper structure shaped by power, danger, and order.
At the heart of the discovery is a striking and far-reaching finding: human language is systematically biased toward safety.
A Hidden Bias in Language
For decades, researchers have assumed that meaning could be distilled into three core emotional dimensions: valence (positive vs. negative), arousal (excited vs. calm), and dominance (controlling vs. submissive)—collectively known as the VAD framework. Based on pioneering work in the 1950s by Charles Osgood and others, this model has been widely used across disciplines, from psychology and linguistics to sentiment analysis in AI systems.
But the team's large-scale analysis—drawing on billions of uses of more than 20,000 words and diverse real-world texts—demonstrates that this framework is flawed. Using modern computational methods, the researchers—with support from the US National Science Foundation, Google, MassMutual, and other funders—identified a different set of underlying dimensions. Not only are the VAD dimensions not independent, but they obscure a more fundamental organizing principle of language.
Instead, the researchers show that meaning is best described by three independent dimensions: power (weak vs. powerful), danger (safe vs. dangerous), and structure (ordered vs. chaotic).
At a moment when language technologies are rapidly reshaping communication—from large language models to automated content moderation—the need to understand how meaning truly works is urgent. The new results explain over 90% of the variance in meaning, compared to roughly 72% for the traditional VAD model.
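The "variance explained" comparison is the standard way to judge how well a set of dimensions captures a dataset. A small PCA sketch on synthetic word-rating data illustrates the idea (the data, dimensions, and numbers here are invented for illustration and are not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "word rating" matrix: 1000 words x 8 rated attributes,
# built from 3 latent factors plus noise, so ~3 components should suffice.
latent = rng.normal(size=(1000, 3))
mixing = rng.normal(size=(3, 8))
ratings = latent @ mixing + 0.3 * rng.normal(size=(1000, 8))

# PCA via SVD on the mean-centered matrix.
centered = ratings - ratings.mean(axis=0)
_, s, _ = np.linalg.svd(centered, full_matrices=False)
explained = (s**2) / (s**2).sum()

# Fraction of total variance captured by the first three components.
print(round(explained[:3].sum(), 3))
```

If three latent dimensions really do organize the data, the first three components capture nearly all the variance; a poorly aligned three-dimensional frame (as the study argues VAD is) captures much less.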
When the team examined how words are used across books, news, social media, and spoken language, one pattern stood out consistently: language strongly favors words associated with safety over those associated with danger.
This "safety bias" reframes a long-standing observation in linguistics known as the Pollyanna principle—the tendency for human language to skew positive. Rather than simply reflecting emotional positivity, the researchers show that this effect arises from a deeper orientation toward safety. "The Pollyanna principle's positivity bias," the study concludes, "is, in fact, a one-dimensional projection of an underlying safety bias."
"This is a big observation that comes out of this work," said Peter Dodds, director of UVM's Complex Systems Institute and senior author of the study. "Expressions of safety are crucial to all language."
Beyond Positivity: Language as a Survival System
The implications are profound. If language is biased toward safety, then communication itself may be shaped by evolutionary pressures tied to survival. Words are not just emotional signals—they are tools for navigating risk, assessing threats, and coordinating behavior in uncertain environments.
This perspective helps explain why safety-related distinctions are so deeply embedded in everyday communication. Across cultures and contexts, humans constantly signal whether situations, people, or actions are safe or dangerous. The study suggests that this axis is not secondary to emotion—it is foundational.
In this view, positivity in language is not simply about expressing happiness or approval. It is about signaling predictability and safety in a shared environment. Julia Zimmerman, a postdoctoral researcher in UVM's Computational Story Lab and study co-author, says the new framework illuminates something fundamental about how humans experience the world. "Power, danger, and structure," she said, "are relevant to every person that's ever lived."
As with the Pollyanna principle, linguists have long observed a bias in language toward expressions of goodness and low aggression. "We now understand," the team writes, that these are "shadows of an underlying linguistic safety bias."
Rethinking Meaning Across Disciplines
The findings challenge assumptions at the core of several fields.
In artificial intelligence, the implications are immediate. Many natural language processing systems rely on sentiment analysis grounded in VAD-like frameworks. If those frameworks misrepresent the essential nature of meaning, then AI systems may be systematically misinterpreting human language. Incorporating power, danger, and structure could lead to more accurate and interpretable models—particularly in applications involving risk, trust, and decision-making.
In linguistics, the study reframes how meaning is structured at its most basic level. Rather than organizing words primarily by emotional tone, it suggests that meaning is grounded in survival-relevant distinctions—what is powerful, what is dangerous, and what is structured.
In psychology, the work calls into question decades of research built on the VAD model. If the foundational dimensions of meaning are different than previously thought, then interpretations of emotion, perception, and behavior may need to be revisited.
In neurobiology, the findings resonate with what is known about the brain's sensitivity to threat and safety. The discovery of a linguistic safety bias suggests that these biological priorities may be mirrored in the structure of language itself, offering a potential bridge between neural processes and symbolic communication.
A New Scientific Framework: Ousiometrics
To uncover these patterns, the researchers developed new tools and methods for analyzing meaning at scale. Central among them is the "ousiometer," an instrument designed to quickly measure the essential meaning of large-scale texts—yielding an average meaning score. (The word "ousia" comes from Ancient Greek and is a root for the English word "essence.") Expanding on the team's earlier creation of a "hedonometer" (a happiness meter), the new device can remotely sense overall patterns of meaning in texts as diverse as the novels of Jane Austen, Arthur Conan Doyle's Sherlock Holmes stories, the New York Times, Wikipedia, transcriptions of talk radio programs, and Twitter.
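A minimal sketch of how a meter like this could score a text against a word-score lexicon. The per-word values, the tiny vocabulary, and the function name here are all hypothetical stand-ins (the study's actual lexicon covers more than 20,000 rated words):

```python
from collections import Counter

# Hypothetical per-word scores on the three ousiometric axes
# (values invented for illustration; real scores come from rated lexicons).
LEXICON = {
    "storm": {"power": 0.8, "danger": 0.9, "structure": -0.5},
    "home":  {"power": 0.1, "danger": -0.8, "structure": 0.6},
    "king":  {"power": 0.9, "danger": 0.2, "structure": 0.4},
    "calm":  {"power": -0.2, "danger": -0.9, "structure": 0.5},
}

def ousiometric_score(text: str) -> dict:
    """Frequency-weighted average score per axis over words in the lexicon."""
    counts = Counter(w for w in text.lower().split() if w in LEXICON)
    total = sum(counts.values())
    if total == 0:
        return {"power": 0.0, "danger": 0.0, "structure": 0.0}
    return {
        axis: sum(n * LEXICON[w][axis] for w, n in counts.items()) / total
        for axis in ("power", "danger", "structure")
    }

print(ousiometric_score("the storm hit but home was calm calm"))
```

Scoring successive sections of a long text this way yields the kind of trajectory described for Les Misérables below.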
One example in the new study traces the "ousiometric trajectory" for an English translation of Victor Hugo's masterpiece novel, Les Misérables. Like a multicolored protein, the book's tangled path winds its way over a grid—defined by four opposing pairs: dangerous/safe, weak/powerful, gentle/aggressive, and bad/good—distilling the essential meaning of sections of the book as the story advances.
Crucially, the new study distinguishes between words as abstract entities ("types") and words as they are used ("tokens"). (For example, "apple" as a dictionary entry is a type, and every occurrence of the word "apple" in a sentence is a token.) Earlier research largely treated all words as equally important, regardless of how frequently they appear. By accounting for usage frequency, the team of ten scientists—led by Peter Dodds and Chris Danforth, professors in UVM's College of Engineering and Mathematical Sciences, along with colleagues from the Santa Fe Institute; the Complexity Science Hub in Austria; the Howard Hughes Medical Institute; University of California, Berkeley; University of Adelaide; and MassMutual Data Science—was able to reveal patterns, like the safety bias, that only emerge in real-world language.
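The type/token distinction matters because averaging over types weights rare and common words equally, while token-weighting reflects how language is actually used. A toy comparison (the scores and counts below are invented for illustration):

```python
# Hypothetical safety scores (negative = dangerous, positive = safe)
# and usage counts for a tiny vocabulary.
words = {
    # word: (safety_score, usage_count)
    "peril":  (-0.9, 10),   # rare, dangerous
    "menace": (-0.7, 5),    # rare, dangerous
    "good":   (0.6, 500),   # common, safe
    "home":   (0.8, 300),   # common, safe
}

# Type average: every word counts once, regardless of frequency.
type_avg = sum(score for score, _ in words.values()) / len(words)

# Token average: weight each word by how often it is actually used.
total = sum(count for _, count in words.values())
token_avg = sum(score * count for score, count in words.values()) / total

print(f"type average:  {type_avg:+.3f}")   # near zero: types look balanced
print(f"token average: {token_avg:+.3f}")  # strongly positive: safe words dominate usage
```

In this toy vocabulary the type average is nearly neutral, but because the safe words are used far more often, the token average tilts strongly toward safety, which is the kind of effect that only appears once frequency is taken into account.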
Why This Matters Now
If language is systematically biased toward safety, this has implications for how information spreads, how narratives are constructed, and how people interpret the world around them. It may influence everything from political discourse to mental health communication to the design of AI systems that interact with humans.
More broadly, the study points to a fundamental shift in how to best understand the foundations of language: meaning is not just about emotion or sentiment. It rests on the need to navigate a world of risks, relationships, and structures. By uncovering a deeper geometry of meaning, the team of researchers suggests a new way to understand language—not just as a system of symbols, but as a reflection of what it takes to survive in a highly social and dangerous world.