Frustration Mounts Over False Results In Google’s ‘AI Overviews’

Frustration with Google’s AI Overviews feature is growing among users and web publishers over incorrect results and fewer links to web content.

Users have begun sharing ways to disable the feature, which was rolled out to all users in the US following Google’s I/O developer conference earlier this month.

“I’m finding the results very repetitive, often wrong, and they don’t at all match what I’m looking for but they take up so much space and feel in the way. I just want them to go away,” one user wrote on Google’s support forums.

Users shared examples such as the AI advising people to add “non-toxic glue” to cheese to make it stick to pizza, a result apparently drawn from a Reddit post that was intended as a joke.


AI Overviews

Another result said geologists recommend that humans eat at least one rock per day, a claim drawn from an article on satirical website The Onion.

Google told Silicon UK that such results are “isolated examples”.

The company doesn’t provide a way to turn off AI Overviews, which push conventional results further down the page, but the feature can be bypassed using browser plug-ins or by manually redirecting searches to Google’s stripped-down “web” search option.
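A minimal sketch of that manual redirect, written in Python, is below. It assumes the widely reported “udm=14” URL parameter that selects Google’s links-only “Web” view; this parameter is unofficial and undocumented, and Google could change it at any time.

# Sketch: build a Google search URL that requests the plainer "Web" (links-only) view.
# Assumption: the "udm=14" parameter selects that view, as widely reported after I/O 2024;
# it is not an official or documented API and may change.
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Return a Google search URL that should skip AI Overviews and show classic links."""
    params = {"q": query, "udm": "14"}  # "udm=14" is the assumed Web-filter value
    return "https://www.google.com/search?" + urlencode(params)

if __name__ == "__main__":
    # Example: print a URL for a query that produced one of the widely shared bad answers.
    print(web_only_search_url("how to make cheese stick to pizza"))

Browser plug-ins and custom search-engine entries that restore the old-style results page generally work the same way, by appending this parameter to every query.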

The feature is powered by Google’s generative AI. The technology leapt into the spotlight in late 2022 with the introduction of OpenAI’s ChatGPT, which Google immediately identified as a potential existential threat to its core search product.

As a result, Google is highlighting its own generative AI tools to keep users from turning to alternatives from OpenAI, Microsoft or others.


‘Foreseeable effect on society’

Google added AI Overviews to searches in the US and the UK earlier this year as an opt-in feature, and is now planning to roll out the tool to all geographical markets.

Industry experts said it was troubling that generative AI was being given such a prominent place in Google’s results, given that the technology is known to “hallucinate”, meaning it routinely generates false information that resembles the material it has found online.

“Look, this isn’t about ‘gotchas’, this is about pointing out clearly foreseeable harms. Before–eg–a child dies from this mess,” wrote former Google AI ethics researcher Margaret Mitchell.

“This isn’t about Google, it’s about the foreseeable effect of AI on society.”

Web publishers are also concerned, with Gartner forecasting a 25 percent decline in search engine traffic volume by 2026 due to the use of generative AI.


Search traffic

As a result, some news publishers, such as News Corp, are striking deals with AI companies, while others, such as the New York Times, are suing OpenAI and Microsoft, arguing that the training and operation of ChatGPT violates copyright law.

Google said the incorrect AI summaries highlighted were “generally very uncommon queries, and aren’t representative of most people’s experiences”.

“The vast majority of AI overviews provide high quality information, with links to dig deeper on the web,” the company said.

Matthew Broersma

Matt Broersma is a long-standing technology freelancer who has worked for Ziff-Davis, ZDNet and other leading publications
