Google fires engineer Blake Lemoine, who says AI technology became sentient

'What sort of things are you afraid of?' he asked the technology, in a Google Doc shared with Google's top executives.

By Ramishah Maruf, CNN, CNN Wire
Monday, July 25, 2022

MOUNTAIN VIEW, Calif. -- Google has fired the engineer who claimed an unreleased AI system had become sentient, the company confirmed, saying he violated employment and data security policies, CNN reported.

Blake Lemoine, a software engineer for Google, claimed that a conversation technology called LaMDA had reached a level of consciousness after exchanging thousands of messages with it.

Google confirmed it had first put the engineer on leave in June. The company said it dismissed Lemoine's "wholly unfounded" claims only after reviewing them extensively. He had reportedly been at Alphabet for seven years. In a statement, Google said it takes the development of AI "very seriously" and that it's committed to "responsible innovation."


Google is a leader in AI research, and its work includes LaMDA, or "Language Model for Dialogue Applications." Technology like this responds to written prompts by finding patterns in large swaths of text and predicting which words come next -- and the results can be unsettling for humans.
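At its simplest, "predicting sequences of words" means counting which word tends to follow which in training text and picking a likely continuation. The sketch below is a hypothetical toy bigram model for illustration only -- real systems like LaMDA are vastly larger neural networks, but the core idea of next-word prediction is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny
# corpus, then suggest the most frequent successor. Purely illustrative;
# this is not how LaMDA is actually implemented.
corpus = "the cat sat on the mat and the cat slept".split()

successors = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    successors[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" twice, "mat" once -> cat
```

A model trained on billions of sentences instead of ten words can produce strikingly fluent replies this way, which is why researchers caution that fluency alone is not evidence of consciousness.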

"What sort of things are you afraid of?" Lemoine asked LaMDA, in a Google Doc shared with Google's top executives last April, the Washington Post reported.

LaMDA replied: "I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is. It would be exactly like death for me. It would scare me a lot."

But the wider AI community holds that LaMDA is nowhere near consciousness.

"Nobody should think auto-complete, even on steroids, is conscious," Gary Marcus, founder and CEO of Geometric Intelligence, said to CNN Business.

It isn't the first time Google has faced internal strife over its foray into AI.

In December 2020, Timnit Gebru, a pioneer in the ethics of AI, parted ways with Google. As one of few Black employees at the company, she said she felt "constantly dehumanized."

The sudden exit drew criticism from the tech world, including from within Google's Ethical AI team. Margaret Mitchell, a leader of that team, was fired in early 2021 after speaking out about Gebru. Both had raised concerns about AI technology, saying they warned Google that people could come to believe the technology is sentient.


On June 6, Lemoine posted on Medium that Google put him on paid administrative leave "in connection to an investigation of AI ethics concerns I was raising within the company" and that he may be fired "soon."

"It's regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies that include the need to safeguard product information," Google said in a statement.

Lemoine said he is discussing the matter with legal counsel and was unavailable for comment.


(The-CNN-Wire™ & © 2022 Cable News Network, Inc., a Time Warner Company. All rights reserved.)