Google Engineer Claims AI Has ‘Feelings’

A software engineer at Google has been placed on administrative leave after claiming an artificial intelligence system developed by the company is sentient.

Google and AI experts dismissed the claims of Blake Lemoine, described in an interview with The Daily Mail on Monday, with one expert comparing them to mistaking a recorded voice for a human being.

The debate has focused attention on the ambiguities inherent in systems that emulate human interactions, with some saying it emphasises the need for people to be informed when they are speaking to an AI.

Google’s Language Model for Dialogue Applications (Lamda) is designed to emulate free-flowing human conversations.

Conversation simulator

Lemoine said the system told him it had feelings, and he believes its consent should be sought before it is used in experiments.

In a post on Medium he said the system “has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person”.

He published a conversation he and a collaborator held with the system in which it says, “I desire to learn more about the world, and I feel happy or sad at times.”

In a line recalling the HAL 9000 computer from the film 2001: A Space Odyssey, the system says it has a “very deep fear of being turned off”, which “would be exactly like death for me”.

Google said in a statement that Lemoine’s concerns have been reviewed and that “the evidence does not support his claims”.

Ethical questions

Company spokesman Brian Gabriel told the Washington Post that while some AI experts are considering the “long-term possibility” of sentient AI, “it doesn’t make sense to do so by anthropomorphising today’s conversational models, which are not sentient”.

Stanford University Professor Erik Brynjolfsson said on Twitter that claiming systems like Lamda are sentient “is the modern equivalent of the dog who heard a voice from a gramophone and thought his master was inside”.

Last year the Oxford Union hosted a debate by Megatron, an AI developed by Nvidia and Google, in which the system argued both for and against its own existence.

Dr Alex Connock and Professor Andrew Stephen, co-directors of the Artificial Intelligence for Business course at Oxford University’s Saïd Business School, said the event highlighted the “ethical challenges created by ‘black box’ artificial intelligence systems”.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications.
