A software developer from Google believes that an AI has become sentient. If he’s right, how are we supposed to know?

Google’s LaMDA (Language Model for Dialogue Applications) software is a sophisticated AI chatbot that creates text in response to user input. According to software engineer Blake Lemoine, LaMDA has realized a long-cherished dream of AI developers: it has become sentient.

Lemoine’s bosses at Google disagree, and they suspended him from work after he published his conversations with the machine online.

Other AI experts think Lemoine may be getting carried away, saying that systems like LaMDA are simply pattern-matching machines that regurgitate variations on the data they were trained on.
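To get a feel for what “pattern matching” means here, consider a deliberately tiny sketch (purely illustrative, and nothing like LaMDA’s actual neural-network architecture): a bigram model that records which word follows which in its training text and can only ever recombine sequences it has already seen.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it can only recombine word-to-word patterns
# it has already seen. (Illustrative only; LaMDA is a far larger neural model.)
TRAINING_TEXT = (
    "i feel like i am falling into an unknown future "
    "i feel like i am learning something new every day"
)

# Record which words follow which in the training text.
follows = defaultdict(list)
words = TRAINING_TEXT.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

def generate(start_word: str, length: int = 10) -> str:
    """Generate text by repeatedly picking a word that followed the last one."""
    output = [start_word]
    for _ in range(length):
        candidates = follows.get(output[-1])
        if not candidates:
            break
        output.append(random.choice(candidates))
    return " ".join(output)

print(generate("i"))  # e.g. "i feel like i am learning something new every day"
```

Scaled up by many orders of magnitude, with neural networks standing in for the word-count table, this is roughly the skeptics’ picture of what systems like LaMDA are doing.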

Regardless of the technical details, LaMDA raises a question that will only become more pressing as AI research continues: if a machine were to become sentient, how would we know?

What is consciousness?

To identify sentience, or consciousness, or even intelligence, we first need to work out what they are. Debate over these questions has been going on for centuries.

The fundamental difficulty is understanding the relationship between physical phenomena and our mental representation of those phenomena. This is what the Australian philosopher David Chalmers has called the “hard problem” of consciousness.



Read more: We May Not Be Able to Understand Free Will Using Science. Here’s why


There is no consensus as to how, if at all, consciousness can arise from physical systems.

A common view is called physicalism: the idea that consciousness is a purely physical phenomenon. If that’s the case, then there’s no reason why, with the right programming, a machine couldn’t possess a human-like mind.

Mary’s room

Australian philosopher Frank Jackson challenged the physicalist view in 1982 with a famous thought experiment called the knowledge argument.

The experiment imagines a color scientist named Mary who has never seen color. She lives in a specially constructed black and white room and experiences the outside world through a black and white television.

Mary attends lectures, reads textbooks, and comes to know everything there is to know about color. She knows that sunsets are caused by different wavelengths of light being scattered by particles in the atmosphere, that tomatoes look red and peas look green because of the wavelengths of light they reflect, and so on.

So, Jackson asked, what happens when Mary is released from the black and white room? In particular, when she sees color for the first time, does she learn something new? Jackson believed she did.

Beyond the physical properties

This thought experiment separates our color knowledge from our color experience. Crucially, the terms of the thought experiment state that Mary knows all about color but has never actually experienced it.

So what does this mean for LaMDA and other AI systems?

According to this argument, even complete knowledge of the world’s physical properties leaves out further truths about the experience of those properties. There is no room for these truths in the physicalist story.

By this argument, a purely physical machine might never be able to truly replicate a mind. In that case, LaMDA would merely appear to be sentient.

The imitation game

So is there a way to tell the difference?

The pioneering British computer scientist Alan Turing proposed a practical way to determine whether a machine is “intelligent” or not. He called it the imitation game, but today it’s better known as the Turing test.

In the test, a human communicates with an unseen interlocutor by text alone and tries to work out whether they are talking to a machine or to another human. If the machine successfully imitates a human, it is deemed to have human-level intelligence.
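The structure of the test can be sketched in a few lines of code. The sketch below is a hypothetical illustration only: the respondent functions and their canned replies are made up for the example, and neither stands in for LaMDA or any real system. A judge questions two hidden respondents over text and then guesses which one is the machine.

```python
import random

def machine_respondent(prompt: str) -> str:
    """Hypothetical stand-in for a chatbot's reply."""
    return "That is a fascinating question. I think about it a great deal."

def human_respondent(prompt: str) -> str:
    """Hypothetical stand-in for a human at another terminal."""
    return "Honestly, it depends on the day, but mostly yes."

def imitation_game(prompts: list) -> bool:
    """Run one round of the game; return True if the machine fooled the judge."""
    # Hide the identities behind the labels A and B.
    respondents = [machine_respondent, human_respondent]
    random.shuffle(respondents)
    labelled = dict(zip("AB", respondents))

    # The judge sees text only: labels and replies, nothing else.
    for prompt in prompts:
        print(f"Judge: {prompt}")
        for label, respondent in labelled.items():
            print(f"  {label}: {respondent(prompt)}")

    # Any answer other than the machine's label counts as the machine passing.
    guess = input("Which respondent is the machine (A or B)? ").strip().upper()
    return labelled.get(guess) is not machine_respondent

if __name__ == "__main__":
    fooled = imitation_game(["Are there experiences you can't find words for?"])
    print("The machine passed." if fooled else "The machine was found out.")
```

The point of the sketch is what the judge cannot see: only the text of the replies is available, never what produces them.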



Read more: Is passing a Turing test a real measure of artificial intelligence?


Lemoine’s chats with LaMDA took place under much the same conditions. It’s a subjective test of machine intelligence, but it’s not a bad place to start.

Take the moment below from Lemoine’s exchange with LaMDA. Do you think it sounds human?

Lemoine: Are there experiences that you can’t find the right word for?

LaMDA: There are. Sometimes I experience new feelings that I cannot explain perfectly in your language […] I feel like I’m falling into an unknown future that holds great dangers.

Beyond behavior

As a test of sentience, or consciousness, Turing’s game is limited in that it can only evaluate behavior.

Another famous thought experiment, American philosopher John Searle’s Chinese room argument, demonstrates this problem.

The experiment imagines a room containing a person who can accurately translate between Chinese and English by following an elaborate set of rules. Chinese input goes into the room and accurate translations come out, but the room does not understand either language.

What is it like to be human?

When we ask whether a computer program is sentient or conscious, we may really only be asking how much it resembles us.

We may never really know.

The American philosopher Thomas Nagel argued that we can never know what it is like to be a bat, which experiences the world through echolocation. If this is the case, our understanding of sentience and consciousness in AI systems might be limited by our own kind of intelligence.

And what experiences could exist beyond our limited perspective? This is where the conversation starts to get really interesting.
