Brown Scholars Unravel Neuroscience of ChatGPT

The Carney Institute for Brain Science brought together faculty who study different aspects of artificial intelligence to discuss what it has in common with human intelligence, and its implications for society.

PROVIDENCE, R.I. [Brown University] - ChatGPT, a new technology developed by OpenAI, is so uncannily adept at mimicking human communication that it will soon take over the world - and all the jobs in it. Or at least that's what the headlines would lead the world to believe.

But if ChatGPT sounds like a human, does that mean it learns like one, too? And just how similar is the computer brain to a human brain?

In a Feb. 8 conversation organized by Brown University's Carney Institute for Brain Science, two Brown scholars from different fields of study set out to answer those questions and others on the parallels between artificial intelligence and human intelligence. Carney Conversations is a series of discussions with world-class experts on intriguing topics in brain science, and the discussion on the neuroscience of ChatGPT offered attendees a peek under the hood of the machine learning model-of-the-moment.

The conversation was not only timely, given the media dominance of ChatGPT - and emerging competitors like Google's Bard - but also enlightening, with participants approaching the topic from different academic perspectives. Ellie Pavlick is an assistant professor of computer science at Brown and a research scientist at Google A.I. who studies how language works and how to get computers to understand language the way that humans do. Thomas Serre is a Brown professor of cognitive, linguistic and psychological sciences and of computer science who studies the neural computations supporting visual perception, focusing on the intersection of biological and artificial vision. Carney Institute director Diane Lipscombe and associate director Christopher Moore joined them as moderators.

Pavlick and Serre offered complementary explanations of how ChatGPT functions relative to human brains, and what that reveals about what the technology can and can't do. For all the chatter around the new technology, the model isn't that complicated and it isn't even new, Pavlick said. At its most basic level, she explained, ChatGPT is a machine learning model designed to predict the next word in a sentence, and the next word, and so on.
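The next-word prediction Pavlick describes can be illustrated with a toy sketch. The bigram counter below is an illustrative assumption for exposition only - ChatGPT itself uses large neural networks trained on vast text corpora, not word-pair counts - but the core loop is the same: predict the next word, append it, repeat.

```python
# Toy illustration of next-word prediction (NOT OpenAI's actual method):
# count which word follows which, then repeatedly pick the likeliest successor.
from collections import Counter, defaultdict

def train_bigram(corpus):
    """Count, for each word, which words follow it in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen after `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

def generate(counts, start, length=5):
    """Predict the next word, then the next, and so on - the same loop
    a large language model runs at vastly greater scale."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(counts, out[-1])
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train_bigram(corpus)
print(generate(model, "the", length=3))
```

A model like ChatGPT replaces the word-pair table with billions of learned parameters, but the generation procedure - one predicted token at a time - is the behavior Pavlick describes.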

This type of predictive-learning model has been around for decades, said Pavlick, who specializes in natural language processing. Computer scientists have long tried to build models that exhibit this behavior and can talk with humans in natural language. To do so, a model needs access to a database of traditional computing components that allow it to "reason" over complex ideas.

What is new is the way ChatGPT is trained, or developed. It has access to unfathomably large amounts of data - as Pavlick said, "all the sentences on the internet."

"ChatGPT, itself, is not the inflection point," Pavlick said. "The inflection point has been that sometime over the past five years, there's been this increase in building models that are fundamentally the same, but they've been getting bigger. And what's happening is that as they get bigger and bigger, they perform better."

What's also new is the way that ChatGPT and its competitors are available for free public use. To interact with a system like ChatGPT even a year ago, Pavlick said, a person would have needed access to a system like Brown's Compute Grid - a specialized tool available to students, faculty and staff only with certain permissions - as well as a fair amount of technological savvy. But now anyone, of any technological ability, can play around with the sleek, streamlined interface of ChatGPT.
