New Study Compares the Creative Processes of Humans and Large Language Models

An increasing number of people use large language models for co-creation. (Image generated with AI)
© geralt via pixabay
To the point
- Humans and AI: Humans and large language models (LLMs) use similar creative strategies, employing flexible and persistent approaches.
- Important differences: Unlike humans, LLMs show a clear preference for one approach in each task, and flexible LLMs tend to score higher on creativity.
- Potential application: Matching people with LLMs that complement their creative style could enhance collaborative creativity.
Creativity is no longer exclusive to humans. Some forms of artificial intelligence are capable of producing poetry, entrepreneurial concepts, and even visual art. Many people use large language models (LLMs) such as ChatGPT, which are trained on vast amounts of text, for co-creation: the artificial intelligence offers ideas and suggestions, while the human provides guidance, context, and direction.
While researchers have examined the creative output of LLMs in recent years, the underlying process remains largely unexplored. This is why Surabhi S. Nath, a researcher at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany, set out to understand how creativity arises in LLMs and whether their creative process can be compared to the way the human mind finds ideas.
Flexible and persistent creative approaches
To this end, Nath focused on a parameter of creativity that is well-established in psychological research: the distinction between flexible and persistent approaches. It is perhaps best illustrated by example: when prompted to list all the animals they can think of, people with a persistent approach might begin with pets, followed by farm animals, then birds, and so on, while those who prefer a more flexible approach will often jump from one category to another. "The trade-off between broad and deep search, between exploring new possibilities and exploiting existing ideas, is central to any creative endeavor," Nath comments.
To test for these different strategies, Nath and her collaborators asked both human participants and various LLMs to perform standard psychological creativity tasks, such as coming up with alternative uses for a brick or a paper clip, for example, repurposing the brick as a step or as a paperweight. They were surprised to find that people and machines approached the task in remarkably similar ways, using both flexible and persistent strategies. Each large language model showed a clear preference for either a persistent or a flexible approach within a given task, but was less consistent than humans across different tasks. Moreover, the flexible LLMs produced more creative results than persistent LLMs, whereas in humans, both approaches led to similarly creative output.
Enhancing collaboration between humans and AI
Nath suggests that these results pave the way for more effective co-creation: People who tend to be persistent might benefit from choosing a flexible LLM as their sparring partner in a task, and vice versa. She also envisions that further research into the creative processes of humans and machines could offer insights into how creativity can be learned.
Whether the findings hold true for other types of creative tasks remains to be seen. "More naturalistic settings are much more complex and difficult to study," Nath cautions. "The next logical step could be to look at creativity in games; they provide a richer scenario, but are still controllable."