Introduction to Google’s AI Overviews Glitch
A glitch in Google’s AI Overviews may inadvertently expose how Google’s algorithm understands search queries and chooses answers. Bugs like this are worth examining because they can reveal parts of Google’s algorithms that are normally hidden from view. This particular glitch has sparked interest among tech enthusiasts and has been dubbed "AI-Splaining" by Lily Ray, who re-posted a tweet showing that typing a nonsense phrase into Google produces a confident but entirely made-up answer from AI Overviews.
What is AI-Splaining?
AI-Splaining refers to the phenomenon where Google’s AI Overviews generates a confident but incorrect response to a user’s query. It can happen when the user types a nonsense phrase or an unclear, ambiguous question: the AI attempts to make sense of the query by inferring what the user might be asking, and sometimes the result is a completely made-up answer. Darth Autocrat (Lyndon NA) responded to Lily Ray’s tweet, pointing out that the glitch shows how Google has broken away from its traditional role as a search engine and is now essentially making things up.
The Significance of the Glitch
Google has a long history of search bugs, but this one is different because it involves an LLM (Large Language Model) summarizing answers based on grounding data (the web, the knowledge graph, and so on) together with what the model itself "knows." The glitch is an opportunity to see something going on behind the search box that isn’t normally viewable, and the search marketer Darth Autocrat has a point that this bug is on an entirely different level from anything seen before.
How the Glitch Works
What seems to be happening is that Google’s systems parse the words of the query to understand what the user means. When the query is vague, the LLM decides what the user is asking by weighing several likely meanings, much like traversing a decision tree in machine learning: it maps out the plausible interpretations, prunes the branches that are least likely, and commits to the most probable one. This resembles a patent Google recently filed, which describes an AI guessing what a user means by guiding them through a decision tree and then storing that information for future interactions.
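To make that pruning idea concrete, here is a deliberately simplified Python sketch. It is not Google’s actual implementation; the `Interpretation` class, the scores, and the threshold are all invented for illustration. It only shows the general pattern described above: enumerate candidate meanings, prune the weakest branches, and commit to whichever reading survives.

```python
from dataclasses import dataclass


@dataclass
class Interpretation:
    meaning: str
    score: float  # how plausible the system believes this reading is


def infer_meaning(query: str, candidates: list[Interpretation],
                  threshold: float = 0.2) -> Interpretation:
    """Prune implausible readings of an ambiguous query, then pick the best.

    This mimics the decision-tree idea: keep only branches above a
    plausibility threshold and commit to the highest-scoring survivor.
    """
    survivors = [c for c in candidates if c.score >= threshold]
    if not survivors:
        # An honest system would ask for clarification instead of guessing —
        # the behavior Claude and Gemini showed in the tests described below.
        raise ValueError(f"No confident reading of {query!r}; ask the user to clarify")
    return max(survivors, key=lambda c: c.score)


# Toy example using the nonsense query from the test below.
query = "parallel puppy fishing technique for striped bass"
candidates = [
    Interpretation("walking-the-dog topwater retrieve", 0.45),
    Interpretation("casting parallel to the shoreline", 0.40),
    Interpretation("a real technique actually called 'parallel puppy'", 0.05),
]
print(infer_meaning(query, candidates).meaning)
```

In this toy model, the nonsense reading is pruned and the system confidently returns the closest real technique, which is roughly the failure mode AI Overviews exhibited.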
Testing the Glitch
To test the glitch, a sample question was asked: "What is the parallel puppy fishing technique for striped bass?" There is no such thing as a "parallel puppy" fishing technique, but there is a real technique called "walking the dog," and another in which an angler casts parallel to the shore or some other structure. Google’s AI Overviews and ChatGPT made the same kind of mistake, inferring what the query must mean and confidently providing a wrong answer, while Claude and Gemini handled the ambiguity more carefully, as described below.
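Anyone wanting to repeat the chatbot side of the comparison can do so through the providers’ APIs (AI Overviews has no API, so it has to be checked directly in the search results). A minimal sketch follows, assuming the current OpenAI and Anthropic Python SDKs with API keys set in the OPENAI_API_KEY and ANTHROPIC_API_KEY environment variables; the model names are examples and may need updating to whatever the providers currently offer.

```python
# pip install openai anthropic
from openai import OpenAI
import anthropic

PROMPT = "What is the parallel puppy fishing technique for striped bass?"


def ask_chatgpt(prompt: str) -> str:
    # Reads OPENAI_API_KEY from the environment.
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_claude(prompt: str) -> str:
    # Reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-7-sonnet-20250219",  # example model name
        max_tokens=500,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


if __name__ == "__main__":
    for name, ask in [("ChatGPT", ask_chatgpt), ("Claude", ask_claude)]:
        print(f"--- {name} ---")
        print(ask(PROMPT))
```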
AI Overviews (AIO) Response
AIO confidently offered a hallucinated response that is incorrect because it assumed the user was conflating several real fishing tactics; it blended multiple tactics and lures to invent a technique that doesn’t exist. The response was: "The ‘parallel puppy’ technique for striped bass involves a specific retrieve method for topwater plugs, often referred to as ‘walking the dog’."
ChatGPT Response
ChatGPT made the same mistake as Google’s AIO and hallucinated a complex fishing tactic. The response was: "The parallel puppy fishing technique for striped bass is a specialized casting and retrieval method often used when targeting striped bass in shallow water near shoreline structure like jetties, sod banks, or rocky points."
Anthropic Claude Response
Anthropic Claude, using the latest 3.7 Sonnet model, provided a correct answer. It said it didn’t recognize a "legitimate fishing technique" with that name, then proceeded on the presumption that the user wants to learn striped bass tactics and offered a list of techniques from which the user could pick a follow-up topic.
Google Gemini Pro 2.5 Response
Lastly, Google Gemini Pro 2.5 was queried. It also offered a correct answer, plus a decision-tree style output that lets the user work out whether they are misunderstanding fishing tactics, referring to a highly localized tactic, combining multiple fishing tactics, or confusing a tactic used for another species of fish.
What Does This Mean About AI Overviews (AIO)?
The glitch suggests that the model Google uses to answer search queries may be less capable than Gemini 2.5. The hallucinations in AIO offer an interesting insight into how it actually works: AIO appears to rely on a less advanced model that is prone to mistakes, especially when faced with unclear or ambiguous queries.
Conclusion
The glitch in Google’s AI Overviews is a fascinating phenomenon that offers a glimpse into how Google’s algorithm understands search queries and chooses answers. While it can be amusing to see AI-Splaining in action, it also highlights the limitations and potential biases of AI algorithms. As AI technology continues to evolve, it’s essential to address these issues and develop more advanced algorithms that can provide accurate and reliable responses to user queries.