Badly Written “Answers” Plague Google’s AI Overview

Google’s AI Overview has been hit with a very familiar problem: the quality of its output is, at times, questionable.

Last week, I wrote about the introduction of Google’s AI Overview. Unlike much of the other AI hype pushed by the large media companies, this instance of AI actually had some credibility (though many large media companies were unable to really articulate why). This latest AI development is currently only available to US residents.

So, what does Google’s AI Overview do that makes it a troubling development? Simply put, it summarizes some of the top results for a query and generates its own answer, which is then placed at the top of the search results page. After some scrolling come the advertisements and, after even more scrolling, the actual results. There is also a “web” tab that filters the results to show only the original web search results.

For those who understand basic Search Engine Optimization (SEO), this is a very significant development. Most people who use Google look only at the top results. You generally want to be among them because, after a certain number of results, users either find what they are looking for or perform a different search (or simply lose interest). As a result, if your webpage sits further down the list, it gets significantly fewer clicks and your website is generally less visible.

If your results are suddenly pushed very far down the page, people are more likely to see the AI Overview response (assuming one is present for that query). This, in theory, negates the user’s need to click through to a third party website that actually put significant time and effort into answering that query. It means that users would, again in theory, be less likely to click on third party websites and, as a result, those clicks could dry up. Fewer clicks mean less visibility, fewer ad impressions, and fewer subscriptions and/or donations.

That is why there are many fears about what the AI could mean for search results and, consequently, the future of third party websites. Relying on search results alone for a website’s success is an extremely tall order.

As I explained in the previous report, all of this depends on whether users end up adopting the AI Overview as their primary source. If there is widespread adoption, then it is in significant doubt whether websites in general will survive at all. If, however, users generally react to it negatively and seek out the original web results en masse, then independent websites like this one are more likely to survive.

Reports have recently surfaced that AI Overview may be feeding users bad results. From the BBC:

Its experimental “AI Overviews” tool has told some users searching for how to make cheese stick to pizza better that they could use “non-toxic glue”.

The search engine’s AI-generated responses have also said geologists recommend humans eat one rock per day.

A Google spokesperson told the BBC they were “isolated examples”.

Some of the answers appeared to be based on Reddit comments or articles written by satirical site, The Onion.

They have been widely mocked on social media.

But Google insisted the feature was generally working well.

“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” it said in a statement.

“The vast majority of AI overviews provide high quality information, with links to dig deeper on the web.”

The inaccurate answers follow a very familiar pattern in the deployment of other AI systems meant to automate and replace work on the cheap. There was the infamous DoNotPay case, where AI was supposed to automatically write legal briefs, only for those briefs to turn out to be poorly written. Then there was the CNET AI story, where articles written by AI ended up being highly inaccurate. Gannett did something similar, only they tried to hide their use of AI to replace journalists. That, of course, didn’t work out any better for them.

Earlier this year, AI was used to try and predict what the successor to the Nintendo Switch would be, along with its launch date. The prediction amounted to little more than a wild guess. In all likelihood, if any of those predictions had ended up being accurate, it would have been luck rather than some magical insight gleaned from “AI” technology.

Inaccuracies have plagued many companies seeking to automate intricate work such as journalism, writing legal documents, or other complex tasks. Expecting full automation of such tasks with AI is asking for a miracle, and in some cases, it led to companies shutting down or badly damaging their reputations. The lesson is that AI may be a tool to assist in the generation of content, but simply entrusting an AI to do everything is extremely foolhardy.

The case of AI Overview is a bit more interesting than some random CEO thinking they have found an automatic way of generating wealth with little work. The feature performs the much simpler task of summarizing content that already exists, which isn’t exactly asking for the moon. It is trying to automate a specific task, which is a bit closer to something a modern AI can realistically do.

What’s more, both sides in this case can be entirely accurate here. It can very easily be true that Google’s AI Overview has spat out really bad answers, like recommending that people eat rocks. At the same time, it could also be true that these are, indeed, isolated incidents and that Google is working on refining its offering to remove these erroneous results.

What could be interesting here is that even if AI Overview is accurate 95% of the time, the bad answers might fuel the perception that AI answers cannot be trusted. For independent websites, that would be great news, because there might be a growing contingent of users who would flat out reject answers provided by the AI – opting instead to seek out the original web results.

Whether this is the beginning of the troubles with AI Overview or just a bump in the road, of course, remains to be seen. Still, it does make the story at least somewhat more interesting.

Drew Wilson on Mastodon, Twitter and Facebook.