Google Fired Blake Lemoine, the Engineer Who Said LaMDA Was Sentient


Blake Lemoine, the Google engineer who told The Washington Post that the company’s artificial intelligence was sentient, said the company fired him on Friday.

Lemoine said he received a termination email from the company on Friday, along with a request for a video conference. He asked to have a third party attend the meeting, but he said Google declined. Lemoine says he is in talks with lawyers about his options.

Lemoine worked for Google’s Responsible AI organization and, as part of his job, began talking to LaMDA, the company’s artificially intelligent system for building chatbots, in the fall. He came to believe the technology was conscious after signing up to test whether the artificial intelligence could use discriminatory or hateful language.

The Google engineer who thinks the company’s AI has come to life

In a statement, Google spokesperson Brian Gabriel said the company takes the responsible development of AI seriously, noting that LaMDA has been through 11 reviews and that Google has published a research paper detailing its responsible-development efforts.

“If an employee shares concerns about our work, as Blake did, we review them extensively,” he added. “We found Blake’s claims that LaMDA is sentient to be wholly unfounded and worked for months to clarify that with him.”

He attributed those discussions to the company’s open culture.

“It’s regrettable that despite lengthy engagement on this topic, Blake still chose to persistently violate clear employment and data security policies, including the need to safeguard product information,” Gabriel added. “We will continue our careful development of language models, and we wish Blake well.”

Lemoine’s firing was first reported in the Big Technology newsletter.

Lemoine’s interviews with LaMDA sparked a broad discussion about recent advances in AI, public misunderstanding of how these systems work, and corporate responsibility. Google previously pushed out the heads of its Ethical AI group, Margaret Mitchell and Timnit Gebru, after they warned about risks associated with this technology.

Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.

LaMDA is built on Google’s most advanced large language models, a type of AI that recognizes and generates text. Researchers say these systems cannot understand language or meaning. But they can convincingly mimic human speech because they are trained on massive amounts of text crawled from the internet to predict the most likely next word in a sentence.
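LaMDA itself is not publicly available, but the next-word-prediction mechanism researchers describe can be seen in openly released models. As a rough illustration only (using GPT-2 via the Hugging Face transformers library as a stand-in, not Google’s system), here is a minimal sketch of a model scoring candidate next words:

```python
# Minimal sketch of next-word prediction with an open model (GPT-2).
# GPT-2 stands in purely for illustration; LaMDA is not publicly available.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The engineer asked the chatbot whether it was"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every vocabulary token

# Convert the scores at the final position into probabilities
# for the word the model predicts would come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, 5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}  p={prob:.3f}")
```

The model is not reasoning about the question; it is ranking which token most often followed similar text in its training data, which is why such output can sound fluent without reflecting understanding.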

After LaMDA talked to Lemoine about personhood and its rights, he began to investigate further. In April, he shared a Google Doc with top executives titled “Is LaMDA Sentient?” that included some of his conversations with LaMDA, in which it claimed to be sentient. Two Google executives looked into his claims and dismissed them.

Big Tech builds AI with bad data. So scientists looked for better data.

Lemoine was placed on paid administrative leave earlier in June for violating the company’s confidentiality policy. The engineer, who spent most of his seven years at Google working on proactive search, including personalization algorithms, said he may consider starting his own AI company focused on collaborative-storytelling video games.
