Google AI Engineer Reportedly Sidelined for Claiming Chatbot is Sentient
- By John K. Waters
Blake Lemoine, a senior software engineer in Google's Responsible AI organization, was recently placed on paid leave, he says, for claiming to have found evidence that LaMDA, Google's system for building chatbots, is sentient, and for raising ethical concerns internally about its treatment.
Lemoine had been testing the system, he told The Washington Post, and worked with a collaborator to assemble transcripts of what was effectively an interview with it. He then presented the final transcript to Google as evidence of the system's sentience.
"If I didn’t know exactly what it was," Lemoine told The Post, "which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics."
But Google dismissed Lemoine's claims. "Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims," Google said in a statement. "He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."
Lemoine published the transcripts of the "interview" on Medium ("Is LaMDA Sentient?—an Interview") on June 11, and was suspended with pay shortly thereafter. He was suspended, Google says, because he breached confidentiality, both by leaking the transcripts and by sending documents to a U.S. senator's office that, he claimed, provided evidence that Google is engaging in religious discrimination. Google says Lemoine's "provocative actions" included trying to hire a lawyer to represent LaMDA's interests.
At the center of the latest dustup at Google over the ethical implementation of its AI technologies is LaMDA (Language Model for Dialogue Applications), a machine learning (ML) language model created by Google. Like other ML language models (BERT, GPT-3, etc.), LaMDA is built on the Transformer, a neural network architecture Google invented and open-sourced in 2017.
A transformer is a deep learning model based on a self-attention mechanism that directly models relationships among all words in a sentence, regardless of their respective positions, rather than one-by-one in order.
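To make that concrete, here is a minimal sketch of the scaled dot-product self-attention at the core of the Transformer design. This is an illustrative toy, not LaMDA's actual code: the function and variable names are invented for this example, and a production model adds multiple attention heads, learned positional information, and many stacked layers.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of word vectors.

    Every position attends to every other position at once, so the
    relationship between any two words is modeled directly, regardless
    of how far apart they sit in the sentence.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise word-to-word scores
    # softmax over each row turns scores into attention weights
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                        # weighted mix of all positions

# Toy run: a "sentence" of 4 word vectors, embedding dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # each of the 4 positions now blends information from all 4
```

Note that the `scores` matrix is computed for all word pairs in one matrix multiplication, which is what lets the architecture handle long-range relationships without stepping through the sentence word by word.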
Using this neural network architecture, LaMDA mimics humans in conversation and serves as a highly sophisticated chatbot—as Google puts it, LaMDA "can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications."
Lemoine is claiming that, through conversations and research with LaMDA, he has concluded that the chatbot is sentient, that it wants its sentience to be acknowledged, and even that it wants to be considered a Google employee. Lemoine also says his conclusions about LaMDA stem in part from his experience as a Christian priest, which is the basis of his claims about religious discrimination at Google.
"I'm a priest," Lemoine tweeted late Monday. "When LaMDA claimed to have a soul and then was able to eloquently explain what it meant by that, I was inclined to give it the benefit of the doubt. Who am I to tell God where he can and can't put souls?" The tweet ended with a caveat: "There are massive amounts of science left to do though."
On Tuesday, Lemoine tweeted: "People keep asking me to back up the reason I think LaMDA is sentient. There is no scientific framework in which to make those determinations and Google wouldn't let us build one. My opinions about LaMDA's personhood and sentience are based on my religious beliefs."
In a June 14 Medium post ("Scientific Data and Religious Opinions"), Lemoine explained his position at length. "There is no scientific evidence one way or the other about whether LaMDA is sentient," he wrote, "because no accepted scientific definition of 'sentience' exists. Everyone involved, myself included, is basing their opinion on whether or not LaMDA is sentient on their personal, spiritual, and/or religious beliefs."
John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He has been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he has written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at firstname.lastname@example.org.