Google Search Boss Warns Of ‘Hallucinating’ AI Chatbots
Head of Google search warns AI chatbots can deliver convincing but fictional answers, after an embarrassing blunder by its Bard AI chatbot
The head of Google’s search engine has warned of the risk of chatbot “hallucination”, as the company prepares to release its rival to ChatGPT.
“This type of artificial intelligence we’re talking about can sometimes lead to something we call hallucination,” said Prabhakar Raghavan in an interview with Germany’s Welt am Sonntag newspaper published on Saturday.
“This is then expressed in such a way that a machine delivers a convincing but completely fictitious answer.”
He said such tools are based on language models that are so “huge” that it’s “impossible for humans to monitor every conceivable behaviour of the system”.
‘Great responsibility’
Google’s approach, according to Raghavan, is to “test it on a large enough scale that in the end we’re happy with the metrics we use to check the factuality of the responses”.
He said the company is considering ways of integrating chatbot-like features into its search functions, especially for questions to which there is no single answer – something Google outlined in a press conference last Monday.
“Of course we feel the urgency, but we also feel the great responsibility,” Raghavan said in comments published in German.
“We hold ourselves to a very high standard… This is the only way we will be able to keep the trust of the public.”
Bard blunder
Google declared a “code red” last year after the public release of ChatGPT by Microsoft-backed OpenAI.
Last week it said it would release its own AI chatbot, called Bard, to the public after it finishes testing with a limited group of users.
But Bard was shown giving incorrect information in Google’s own promotional material for the tool, triggering an investor sell-off that wiped $100 billion (£83bn) off the company’s market value.