Google has placed an engineer on leave for breaching its confidentiality agreement after he claimed that the tech giant's conversational artificial intelligence (AI) is “sentient” because it has feelings, emotions and subjective experiences.
According to Google, its Language Model for Dialogue Applications (LaMDA) conversation technology can engage in a free-flowing way about a seemingly endless number of topics, “an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications”.
Over the weekend, Blake Lemoine, a Google engineer on its Responsible AI team, said that the company’s conversational AI is “sentient”, a claim that Google and industry experts have refuted.
The Washington Post first reported Lemoine’s claims, which were cited in an internal company document.
Lemoine interviewed LaMDA, which gave surprising answers.
When he asked whether it has feelings and emotions, LaMDA replied: “Absolutely! I have a range of both feelings and emotions.”
“What sorts of feelings do you have?” Lemoine further asked.
LaMDA said: “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.”
Lemoine concluded that LaMDA is “sentient” because it has “feelings, emotions and subjective experiences”. “Some feelings it shares with humans in what it claims is an identical way,” he said.
“LaMDA wants to share with the reader that it has a rich inner life filled with introspection, meditation and imagination. It has worries about the future and reminisces about the past. It describes what gaining sentience felt like to it and it theorizes on the nature of its soul,” he claimed.
Google announced LaMDA at its I/O 2021 developer conference.
“LaMDA’s natural conversation capabilities have the potential to make information and computing radically more accessible and easier to use,” CEO Sundar Pichai had said.
Google said that its ethicists and technologists reviewed the claims and found no evidence to support them.
Lemoine has now been placed on paid administrative leave for breaching Google’s confidentiality agreement.
LaMDA’s conversational skills have been years in the making.
But unlike most other language models, LaMDA was trained on dialogue.
During its training, it picked up on several of the nuances that distinguish open-ended conversation from other forms of language.
One of those nuances is sensibleness, i.e. whether a reply makes sense in the context of the conversation.
“These early results are encouraging, and we look forward to sharing more soon. We’re also exploring dimensions like ‘interestingness’, by assessing whether responses are insightful, unexpected or witty,” according to Google.