A University of Michigan anthropologist takes on artificial intelligence and the authority we give to it
EXPERT Q&A

As people embrace ChatGPT and other large language models, University of Michigan anthropologist Webb Keane says it's easy for people to imbue AI with a human, or even god-like, authority.
Keane studies the role of religion in people's ordinary lives, ethics and morality. But he's also interested in how people anthropomorphize inanimate objects, particularly objects that appear to use language the way we do. Keane explains how people may start granting moral authority to artificial intelligence, only to find that AI is simply a mirror of the people and corporations who built it.
Why do we put such trust into what ChatGPT is telling us?
The authority we give to AI has many sources, but the ones that particularly interest me tap into the ways human beings have given authority to nonhuman things in many different contexts over the course of history. We have a strong tendency to project intentions and deep thoughts onto things that appear animate, things that can use language, or signal systems, to communicate the way we do.
We've actually done this for a long time, with the Delphic oracles of ancient Greece and with the I Ching in ancient China. And it looks to me like people are starting to do this, in many cases, with algorithms, even simple things like a Fitbit or Spotify's recommendation algorithms, which are, in effect, saying, "I know your taste better than you do."
Can you talk a little bit about moral boundaries, and how those have shifted for people over time?
The story of the history of morality depends on where we draw the boundaries between human and nonhuman. Who counts? To whom do my ethics apply, and to whom do they not? Much of the story of progress in justice and rights over the past few centuries has depended on expanding the circle of those who count, who matter, in moral terms.
There was a time when women didn't have a right to vote, when we enslaved people, when it was OK to beat a horse to death in the streets of a big city. Our changing attitudes toward these have not just been changes in legal systems or ideas about justice, but have been expansions in the moral circle.
When you expand the moral circle, you keep bumping up against things that are ambiguous, things you're not sure where they belong: which side of the line between human and nonhuman are they on? And that's where you start to get really interesting moral trouble.
Where is humanity headed with its newfound love of AI?
We need to be very canny and very smart about what kind of authority we're willing to grant it and what kinds of decisions we're willing to let it make for us, and not prematurely give it more credence, more believability, more faith than it deserves. AI is a human creation, and it's up to humans to know what to do with it.
The other thing I have to say is AI is also the product of some very powerful, very large corporations, and they have their own interests. So the other thing we need to be wary of is granting too much power to AI and forgetting who's behind it.
Should we be worried about AI?
The world seems to be divided between AI optimists and AI pessimists, techno-utopians and techno-dystopians. I worry more about the utopians, simply because the history of technology is a history of many wonderful things that always seem to come along with unintended consequences. The airplane and the automobile were wonderful inventions, and they have been crucial players in climate change. Nuclear power was a terrifically wonderful thing, and it gave us the threat of nuclear apocalypse.
Can you give us an example of moral decision-making in AI?
Ann Arbor, Michigan, is one of the testing grounds for self-driving vehicles. You might say to yourself, well, that's a cool gizmo. But then you start to see more of them, and you may start to ask yourself, "Do I trust it not to hit me if I absentmindedly cross the street in front of it?" The problem is, it's not simply a matter of designing a car that's not going to hit things. You've got to design a car that has to make decisions that, at the end of the day, are sometimes going to be moral decisions.
Imagine one of these self-driving vehicles is driving down the road, and there is a mom with a toddler crossing in front. The car has to make a very quick decision: Do I hit the mom and the toddler, or do I run into a tree and kill all the occupants in the car?
Things like that are bound to happen sooner or later. That is not just a technical question; that is a moral decision. In some sense, we've outsourced those choices to the machine. But doing so doesn't change the fact that it's not just a technical problem with a technical fix; there is some kind of moral decision lying behind it.
You wrote about the intersection of these ideas in a recent book, right?
My new book, "Animals, Robots, Gods," was first prompted by something that struck me when ChatGPT first showed up. I quickly noticed something funny was going on with a lot of people deeply involved in new technologies like AI and robots: people who pride themselves on being extremely rational, extremely scientific, totally secular Silicon Valley computer engineers and investors. When they start talking about the powers of ChatGPT, they often fall into theological language, saying things like how god-like AI is becoming.
We are constantly creating moral others, or moral interlocutors, whether it's yelling at your computer because it froze at the crucial moment, kicking your car when it breaks down, or talking to your dog. These are trivial examples, but sometimes we build very large versions of the same thing. There are people succumbing to the temptation to attribute enormous authority to AI (indeed, it's designed to elicit this), and I think that's, in some sense, an expansion of these very minor, ordinary ways in which we anthropomorphize the inanimate world and grant it human-like powers.