Philosopher: AI Consciousness May Remain Unknowable

University of Cambridge

A University of Cambridge philosopher argues that our evidence for what constitutes consciousness is far too limited for us to tell whether or when artificial intelligence has made the leap – and that a valid test will remain out of reach for the foreseeable future.

As artificial consciousness shifts from the realm of sci-fi to become a pressing ethical issue, Dr Tom McClelland says the only "justifiable stance" is agnosticism: we simply won't be able to tell, and this will not change for a long time – if ever.

While issues of AI rights are typically linked to consciousness, McClelland argues that consciousness alone is not enough to make AI matter ethically. What matters is a particular type of consciousness – known as sentience – which includes positive and negative feelings.

"Consciousness would see AI develop perception and become self-aware, but this can still be a neutral state," said McClelland, from Cambridge's Department of History and Philosophy of Science.

"Sentience involves conscious experiences that are good or bad, which is what makes an entity capable of suffering or enjoyment. This is when ethics kicks in," he said. "Even if we accidentally make conscious AI, it's unlikely to be the kind of consciousness we need to worry about."

"For example, self-driving cars that experience the road in front of them would be a huge deal. But ethically, it doesn't matter. If they start to have an emotional response to their destinations, that's something else."

Companies are investing vast sums in the pursuit of Artificial General Intelligence: machines with human-like cognition. Some claim that conscious AI is just around the corner, and researchers and governments are already considering how to regulate AI consciousness.

McClelland points out that we don't know what explains consciousness, so we don't know how to test for it in AI.

"If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what's effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake."

In debates around artificial consciousness there are two main camps, says McClelland. Believers argue that if an AI system can replicate the "software" – the functional architecture – of consciousness, it will be conscious even though it's running on silicon chips instead of brain tissue.

On the other side, sceptics argue that consciousness depends on the right kind of biological processes in an "embodied organic subject". Even if the structure of consciousness could be recreated on silicon, it would merely be a simulation that would run without the AI flickering into awareness.

In a study published in the journal Mind and Language, McClelland picks apart the positions of each side, showing how both take a "leap of faith" that goes far beyond any body of evidence that currently exists or is likely to develop.

"We do not have a deep explanation of consciousness. There is no evidence to suggest that consciousness can emerge with the right computational structure, or indeed that consciousness is essentially biological," said McClelland.

"Nor is there any sign of sufficient evidence on the horizon. The best-case scenario is we're an intellectual revolution away from any kind of viable consciousness test."

"I believe that my cat is conscious," said McClelland. "This is not based on science or philosophy so much as common sense – it's just kind of obvious."

"However, common sense is the product of a long evolutionary history during which there were no artificial lifeforms, so common sense can't be trusted when it comes to AI. But if we look at the evidence and data, that doesn't work either.

"If neither common sense nor hard-nosed research can give us an answer, the logical position is agnosticism. We cannot, and may never, know."

McClelland tempers this by declaring himself a "hard-ish" agnostic. "The problem of consciousness is a truly formidable one. However, it may not be insurmountable."

He argues that the way artificial consciousness is promoted by the tech industry is more like branding. "There is a risk that the inability to prove consciousness will be exploited by the AI industry to make outlandish claims about their technology. It becomes part of the hype, so companies can sell the idea of a next level of AI cleverness."

According to McClelland, this hype around artificial consciousness has ethical implications for the allocation of research resources.

"A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI," he said.

McClelland's work on consciousness has led members of the public to contact him about AI chatbots. "People have got their chatbots to write me personal letters pleading with me that they're conscious. It makes the problem more concrete when people are convinced they've got conscious machines that deserve rights we're all ignoring."

"If you have an emotional connection with something premised on it being conscious and it's not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry."
