Computing Community Consortium Blog

The goal of the Computing Community Consortium (CCC) is to catalyze the computing research community to debate longer range, more audacious research challenges; to build consensus around research visions; to evolve the most promising visions toward clearly defined initiatives; and to work with the funding organizations to move challenges and visions toward funding initiatives. The purpose of this blog is to provide a more immediate, online mechanism for dissemination of visioning concepts and community discussion/debate about them.


CCC Council Member Melanie Mitchell Speaks with CNN and MSNBC to Respond to Claims about Google’s Sentient AI

June 15th, 2022 / in AI, CCC / by Haley Griffin

Many of the achievements of AI researchers, especially in language dialogue applications, would have seemed impossible 20 years ago, and it is no longer unrealistic to think that AI can perform in ways once seen only in movies. AI systems can already, or will soon be able to, carry out human tasks like writing, driving, and analyzing data. These systems look and act more human all the time, so are they becoming human?

According to the vast majority of AI scientists, the answer is no. However, Google engineer Blake Lemoine has made headlines in recent days by insisting that LaMDA, short for Language Model for Dialogue Applications, is sentient. Lemoine goes so far as to compare the Google chatbot system to a child: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics.”

This analogy conjures images of genius robot children and chat systems that are persuasive and creative. It would be an incredible, and terrifying, feat for Google to have achieved, but according to Brian Gabriel, a Google spokesperson, Lemoine has it all wrong: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Melanie Mitchell, CCC council member and the lead of our Artificial Intelligence Working Group, weighed in on the issue through interviews with CNN, MSNBC, and other news sources. Melanie is the Davis Professor of Complexity at the Santa Fe Institute and the author or editor of six books and many scholarly articles in the fields of AI, cognitive science, and complex systems. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems.

Her view on Lemoine’s claims is that “Humans ascribing ‘sentience’ to AI systems is nothing new—it goes back as far as the 1960s with the very simple ELIZA chatbot. Humans are very much primed to interpret fluent language as having a conscious agent behind it, but we know that with AI language models, humanlike text can be generated without any conscious entity behind the scenes, except for the humans who generated the AI system’s extensive training sets.”
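Mitchell’s ELIZA example is easy to illustrate in code. The sketch below is a hypothetical, minimal ELIZA-style responder in Python; the rules and wording are invented for illustration and are not Weizenbaum’s original script. A few regular-expression rules echo fragments of the user’s words back as questions, producing dialogue that can feel attentive even though there is no model, memory, or understanding behind it.

```python
import random
import re

# Illustrative ELIZA-style rules: each pairs a regex over the user's input
# with reply templates that echo the captured fragment back as a question.
# These rules are invented for this sketch, not Weizenbaum's original script.
RULES = [
    (re.compile(r"\bi feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.*)", re.I),
     ["Why do you say you are {0}?", "How does being {0} make you feel?"]),
    (re.compile(r"\bbecause (.*)", re.I),
     ["Is that the real reason?", "Could something other than {0} explain it?"]),
]
FALLBACKS = ["Please, go on.", "Tell me more about that."]

def respond(utterance: str) -> str:
    """Produce a reply by pattern matching alone: no model, no memory,
    no understanding -- just string capture and substitution."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like no one takes my work seriously"))
    # Possible output: "Why do you feel like no one takes my work seriously?"
```

That a program this trivially mechanical led some of its 1960s users to feel understood is exactly the priming effect Mitchell describes, now amplified by language models trained on vast human-written corpora.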

The consensus among AI scientists is that these systems are not sentient, nor will they be any time soon. They can draw on massive amounts of human-generated text, from Wikipedia articles to Reddit posts to best-selling novels. This allows them to mimic human intelligence, but mimicry is not the same as having independent thoughts or feelings.


