Can AI Replace Human Work? - ChatGPT Examined

This article was not written by ChatGPT. The artificial intelligence-powered chatbot, which can generate essays and articles from a simple prompt, hold natural-sounding conversations, debug computer code, write songs, and even draft Congressional floor speeches, has quickly become a phenomenon. Developed by the Microsoft-backed OpenAI, the program reportedly hit 100 million users in January alone and has been hailed as an AI breakthrough. Its apparent prowess (in one study, it fooled respected scientists into believing its fake research paper abstracts) has left professional writers nervous and spooked Google into urgently ramping up its own AI efforts.

But is all the hype overblown? Behavioral scientist Chiara Longoni, a Boston University Questrom School of Business assistant professor of marketing, says we may all be intrigued by ChatGPT (and AI in general), but we're a long way from embracing it or trusting it over a human.

In recent studies, Longoni has looked at whether people believe AI-generated news stories or trust a computer program's medical diagnoses. She's found we're largely skeptical of AI. A machine can write a fact-packed news article, but we'll second-guess its veracity; a program can give a more accurate medical analysis than a human, but we're still more likely to go with our doctor's advice.

Photo: Headshot of Chiara Longoni.
Chiara Longoni has found we'll cut people some slack when they mess up, but we're much less forgiving of artificial intelligence. Photo by Eric Levin

Longoni's latest paper, published in the Journal of Marketing Research, examines trust in AI programs used by the government. With researchers from Babson College and the University of Virginia, she studied how consumers react when AI gets something wrong (in calculating benefits or tracking Social Security fraud, for example) versus what happens when a person messes up. When an AI program made a mistake, Longoni discovered, people distrusted all AI, a phenomenon the researchers called "algorithmic transference." But we're less likely to distrust all people when one person drops the ball; we cut people slack when they screw up, but don't give AI the same leeway.

So, if we're not ready to trust AI, what should we make of ChatGPT's rapid growth? Could it mark a turning point in our attitude toward algorithms? The Brink asked Longoni, a dean's research scholar, about the latest AI leap, our resistance to intelligent computer programs, and whether she's worried her students will submit an influx of ChatGPT-penned essays.

This interview has been edited, by a human, for clarity and brevity.
