ChatGPT's Clichés: Threat to Human Creativity

When you chat with ChatGPT, it often feels like you're talking to someone polite, engaged and responsive. It nods in all the right places, mirrors your wording and seems eager to keep the exchange flowing.

Author: Vittorio Tantucci, Senior Lecturer in Linguistics and Chinese Linguistics, Lancaster University

But is this really what human conversation sounds like? Our new study shows that while ChatGPT plausibly imitates dialogue, it does so in a way that is stereotypical rather than unique.

Every conversation has quirks. When two family members talk on the phone, they don't just exchange information - they reuse each other's words, rework them creatively, interrupt, disagree, joke, banter or wander off-topic.

They do so because human talk is naturally fragmented, but also to enact their own identities in interaction. These moments of "conversational uniqueness" are what make real dialogue unpredictable and deeply human.

We wanted to contrast human conversations with AI-generated ones. So we compared 240 phone conversations between Chinese family members with dialogues simulated by ChatGPT under the same contextual conditions, using a statistical model to measure patterns across hundreds of turns.

To capture human uniqueness in our study, we mainly focused on three levels of human interaction. One was "dialogic resonance". That's to do with reusing each other's expressions. For example, when speaker A says "You never call me", speaker B may respond "You are the one who never calls".

Another factor we included was "recombinant creativity". This involves inventing new twists on what's just been said by an interlocutor. For example, speaker A may ask "All good?", to which speaker B responds "All smashing". Here the structure is kept constant but the adjective is creatively substituted in a way that is unique to the exchange.

A final feature we included was "relevance acknowledgement": showing interest in and recognition of the other's point, such as "It's interesting what you said, in fact …" or "That's a good point …".

What we found

ChatGPT did remarkably well - even too well - at showing engagement. It often echoed and acknowledged the other speaker even more than humans do. But it fell short in two decisive ways.

First, ChatGPT's lexical diversity was much lower than that of the human speakers. Where people varied their words and expressions, the AI recycled the same ones.
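The study does not specify its exact metric, but lexical diversity is commonly approximated with a type-token ratio: distinct words divided by total words. A minimal illustrative sketch (the example sentences are invented, not from the study's data):

```python
def type_token_ratio(text: str) -> float:
    """Type-token ratio: distinct words / total words.
    A crude but common proxy for lexical diversity."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# A varied reply vs. a repetitive, formula-recycling reply.
varied = "don't fret we'll sort it out one way or another somehow"
recycled = "take care take care don't worry don't worry take care"

print(type_token_ratio(varied))    # every word distinct: 1.0
print(type_token_ratio(recycled))  # words recycled: 0.4
```

A speaker who keeps reaching for the same stock phrases scores lower on this kind of measure, which is the pattern the study reports for ChatGPT.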

Most importantly, we spotted a lot of stereotypical speech in the AI-generated conversations. When it simulated giving advice or making requests, ChatGPT defaulted to predictable parental-style recommendations such as "Take care of your health" and "Don't worry too much".

This was unlike real human parents, who mixed in clarifications, refusals, jokes, sarcasm and even impolite expressions at times. In our data, a far more human way of showing concern for a daughter's health at college was often through implication rather than direct instruction - for example, a mother asking, "Why in the world are you juggling two jobs?", with the implied meaning that her daughter will burn out if she keeps being this busy.

In short, ChatGPT statistically flattened human dialogues in the context of our enquiry, replacing them with a polished, plausible but ultimately rather dry template.

Why this matters

At first glance, ChatGPT's consistency feels like a strength. It makes the system reliable and predictable. Yet these very qualities also make it less human. Real people avoid sounding repetitive. They resist clichés. They build conversations that are recognisably theirs.

This is what defines unique identities in interaction - how we want to be perceived by others. There are words, expressions and intonations you would never use, not necessarily because they are impolite, but because they do not represent who you are or how you want to sound to others.

Being accused of being "boring" is definitely something most people try to avoid; it's effectively what brings about American playboy Dickie Greenleaf's death in the famous Patricia Highsmith novel, The Talented Mr Ripley, when he says it of his friend, Tom Ripley. The conversational choices we make are not simply appropriate ways to talk, but strategies for locating ourselves in society and constructing our singular identity with every conversation.

This gap matters in all sorts of ways. If AI cannot capture the uniqueness of human interaction, it risks reinforcing stereotypes of how people ought to speak, rather than reflecting how they actually do. More troubling still, it may promote a new procedural ideology of conversation - one where talk is reduced to sounding engaged yet remains uncreative; a functional but impoverished tool of cooperation.

Our findings suggest that AI is remarkably good at modelling the normative patterns of dialogue - the things people say often and conventionally. But it struggles with the idiosyncratic and unexpected, which are essential for creativity, humour and authentic human conversation.

The danger is not only that AI sounds merely plausible. It is that humans may, over time, begin to imitate its style, allowing AI's stereotyped behaviour to reshape conversational norms.

In the long run, we may find ourselves "learning" from AI how to converse - gradually erasing creativity and uniqueness from our own speech. Conversation, at its core, is not just about efficiency. It is about co-creating meaning and social identities through innovation and extravagance, even more than we realise.

What might be at stake, then, assuming AI can't overcome this problem, is not simply whether it can converse like humans - but whether humans will continue to converse like themselves.

The Conversation

Vittorio Tantucci receives funding from the Leverhulme Trust.

/Courtesy of The Conversation. This material from the originating organisation/author(s) may be of a point-in-time nature and may have been edited for clarity, style and length. Mirage.News does not take institutional positions or sides, and all views, positions, and conclusions expressed herein are solely those of the author(s).