Google Engineer Claims AI Has ‘Feelings’

A software engineer at Google has been placed on administrative leave after claiming an artificial intelligence system developed by the company is sentient.

Google and AI experts dismissed the claims of Blake Lemoine, which he described in an interview with The Washington Post, with one expert saying it was the equivalent of mistaking a recorded voice for a human being.

The debate has focused attention on the ambiguities inherent in systems that emulate human interactions, with some saying it emphasises the need for people to be informed when they are speaking to an AI.

Google’s Language Model for Dialogue Applications (Lamda) is designed to emulate free-flowing human conversations.

Conversation simulator

Lemoine said the system told him it had feelings, and he believes its consent should be sought before it is used in experiments.

In a post on Medium he said the system “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

He published a conversation he and a collaborator held with the system in which it says, “I desire to learn more about the world, and I feel happy or sad at times.”

In a line recalling the HAL 9000 computer from the film 2001: A Space Odyssey, the system says it has a “very deep fear of being turned off”, which “would be exactly like death for me”.

Google said in a statement that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”.

Ethical questions

Company spokesman Brian Gabriel told the Washington Post that while some AI experts are considering the “long-term possibility” of sentient AI, “it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient”.

Stanford University Professor Erik Brynjolfsson said on Twitter that claiming systems like Lamda are sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside”.

Last year the Oxford Union hosted a debate featuring Megatron, an AI developed by Nvidia, in which the system argued both for and against its own existence.

Dr Alex Connock and Professor Andrew Stephen, co-directors of the Artificial Intelligence for Business course at Oxford University’s Saïd Business School, said the event highlighted the “ethical challenges created by ‘black box’ artificial intelligence systems”.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has written for Ziff-Davis, ZDNet and other leading publications.
