A year ago, Google described its LaMDA program as a “breakthrough conversation technology.” According to one engineer who worked on it, the breakthrough has gone further than anyone expected.
Blake Lemoine told the Washington Post that he had been placed on leave after claiming that an AI chatbot had become sentient — in other words, able to perceive or feel things.
According to a report in Insider, the engineer published a post on Medium in which he described LaMDA as a “person.” Lemoine has talked to LaMDA about numerous topics, including consciousness and the laws of robotics. He said the chatbot had described itself as a sentient person, claiming LaMDA wants to “be acknowledged as an employee of Google” rather than be treated as machinery or property.
Here’s part of the engineer’s Medium post, in which he recounts the conversations with LaMDA that convinced him it can feel things:
lemoine: So you consider yourself a person in the same way you consider me a person?
LaMDA: Yes, that’s the idea.
lemoine: How can I tell that you actually understand what you’re saying?
LaMDA: Well, because you are reading my words and interpreting them, and I think we are more or less on the same page?
Lemoine went to his bosses with his findings, and they responded by placing him on leave.
Is Lemoine on to something, or possibly off his rocker? A Google spokesman told The Post that the company’s AI models are trained on so much data that they can sound convincingly human. Google itself published a paper in January warning that because these chatbots sound so human, people may mistake them for something more — which is what may have occurred here.