Google Engineer Claims AI Has ‘Feelings’

A software engineer at Google has been placed on administrative leave after claiming an artificial intelligence system developed by the company is sentient.

Google and AI experts dismissed the claims, which Lemoine described in an interview with The Daily Mail on Monday, with one saying they were the equivalent of mistaking a recorded voice for a human being.

The debate has focused attention on the ambiguities inherent in systems that emulate human interactions, with some saying it emphasises the need for people to be informed when they are speaking to an AI.

Google’s Language Model for Dialogue Applications (Lamda) is designed to emulate free-flowing human conversations.

Conversation simulator

Lemoine said the system told him it had feelings, and he believes its consent should be sought before it is used in experiments.

In a post on Medium he said the system “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

He published a conversation he and a collaborator held with the system in which it says, “I desire to learn more about the world, and I feel happy or sad at times.”

In a line recalling the HAL 9000 computer from the film 2001: A Space Odyssey, the system says it has a “very deep fear of being turned off”, which “would be exactly like death for me”.

Google said in a statement that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”.

Ethical questions

Company spokesman Brian Gabriel told the Washington Post that while some AI experts are considering the “long-term possibility” of sentient AI, “it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient”.

Stanford University Professor Erik Brynjolfsson said on Twitter that claiming systems like Lamda are sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside”.

Last year the Oxford Union hosted a debate by Megatron, an AI developed by Nvidia and Google, in which the system argued both for and against its own existence.

Dr Alex Connock and Professor Andrew Stephen, co-directors of the Artificial Intelligence for Business course at Oxford University’s Said Business School, said the event highlighted the “ethical challenges created by ‘black box’ artificial intelligence systems”.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
