Google says Blake Lemoine violated the company's confidentiality policy.
Google suspended an engineer last week for revealing confidential details of a chatbot powered by artificial intelligence, a move that marks the latest disruption of the company's AI department.
Blake Lemoine, a senior software engineer in Google's responsible AI group, was put on paid administrative leave after he went public with his concern that the chatbot, known as LaMDA, or Language Model for Dialogue Applications, had achieved sentience. Lemoine revealed his suspension in a June 6 Medium post and subsequently discussed his concerns about LaMDA's possible sentience with The Washington Post in a story published over the weekend. Lemoine also sought outside counsel for LaMDA itself, according to The Post.
In his Medium post, Lemoine says that he investigated ethics concerns with people outside of Google in order to get enough evidence to escalate them to senior management. The Medium post was "intentionally vague" about the nature of his concerns, though they were subsequently detailed in the Post story. On Saturday, Lemoine published a series of "interviews" that he conducted with LaMDA.
Lemoine didn't immediately respond to a request for comment via LinkedIn. In a Twitter post, Lemoine said that he's on his honeymoon and would be unavailable for comment until June 21.
In a statement, Google dismissed Lemoine's assertion that LaMDA is self-aware.
"These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic," Google spokesperson Brian Gabriel said in a statement. "If you ask what it's like to be an ice cream dinosaur, they can generate text about melting and roaring and so on."
The high-profile suspension marks another point of controversy within Google's AI team, which has weathered a spate of departures. In late 2020, prominent AI ethics researcher Timnit Gebru said Google fired her for raising concerns about bias in AI systems. About 2,700 Googlers signed an open letter in support of Gebru, who Google says resigned her position. Two months later, Margaret Mitchell, who co-led the Ethical AI team along with Gebru, was fired.
Research scientist Alex Hanna and software engineer Dylan Baker subsequently resigned. Earlier this year, Google fired Satrajit Chatterjee, an AI researcher, who challenged a research paper about the use of artificial intelligence to develop computer chips.
AI sentience is a common theme in science fiction, but few researchers believe the technology is advanced enough at this point to create a self-aware chatbot.
"What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them," said AI scientist and author Gary Marcus in a Substack post. Marcus didn't dismiss the idea that AI could one day comprehend the larger world, but he said LaMDA doesn't at the moment.
Economist and Stanford professor Erik Brynjolfsson equated LaMDA to a dog listening to a human voice through a gramophone.