In 2024, Google fundamentally changed how we search. Their AI Overviews now sit at the top of search results, essentially taking the old featured snippets concept and supercharging it with AI – or as Google puts it, providing “helpful summaries” of web content. 

But here’s the thing: these aren’t just summaries. They’re AI-generated interpretations that digest and regurgitate content from multiple sources, serving up pre-packaged answers that users are expected to trust implicitly. This isn’t just another search feature – it’s a fundamental shift that opens up a concerning trust gap.

Google AI interprets, combines, and reproduces information drawn from various sources.

While Google touts this as a search enhancement, the reality is more troubling: it’s a feature that could gradually erode our ability to think critically about information. There’s no opt-out button – no simple way to return to the traditional search experience that encouraged us to evaluate and compare multiple sources. 

Instead, we’re being nudged toward accepting a single, AI-generated “truth” – one that, as we’ve already seen, can recommend adding “moon dust” to your coffee for an energy boost or eating a rock a day to maintain good health. 

But don’t worry. If you’re skeptical about accepting their answers at face value, I’m going to show you exactly how to fact-check these AI Overviews. Let me break it down for you.

AI reflects what’s popular online rather than what’s true.

The 3 Main Problems with AI Overviews

The Hallucination Problem 

When it comes to AI, hallucinations aren’t just some minor glitch – they’re built into the very nature of how these systems work. While Google claims they’re “doing their best” to minimize these errors, they can’t eliminate them entirely. In fact, just recently, Liz Reid, the head of Google’s search business, acknowledged in a blog post that “some odd, inaccurate or unhelpful AI Overviews certainly did show up.”

The core issue? These AI systems don’t actually know what’s true – they only know what’s popular online. When an AI Overview confidently declares that astronauts have met cats on the moon, played with them, and cared for them, it’s not just making things up – it’s failing to distinguish between factual information and popular content like satire or fiction.

The Black Box Problem 

The second glaring issue is that Google’s AI Overviews are essentially a black box – we can’t see how they arrive at their conclusions or which parts of which sources they’re drawing from. While Google provides links to source material, there’s no transparency about how the AI synthesizes and interprets this information. 

This lack of transparency makes it impossible to fully trust the results, especially when the stakes are high. Companies like Google are incentivized to maintain this opacity to protect their competitive advantage and intellectual property, but this creates a fundamental trust problem: how can users verify information when they can’t understand how it was generated, or what biases might be embedded in the process?

The Critical Thinking Problem 

Perhaps most concerning is how AI Overviews fundamentally change the way we process information. 

Traditional search results present us with multiple viewpoints and sources, requiring us to evaluate, compare, and make informed decisions. This process of assessment and critical thinking is crucial to how we understand complex topics. But AI Overviews short-circuit this process by serving up a single, pre-digested answer that users are encouraged to accept without question.

This evolution in how we consume information brings new considerations: As we embrace AI-powered search, we must be mindful to maintain our ability to think critically and evaluate sources independently. The future of effective information gathering will depend on finding the right balance between AI’s convenience and human discernment.

A person verifies claims from Google’s AI Overview using multiple sources to confirm accuracy.

How to Fact-Check AI Overviews

The good news is that while you can’t easily turn off AI Overviews, you can learn to fact-check them effectively. Here are the essential steps:

Cross-Reference Multiple Sources 

Don’t just trust the links Google provides below its AI Overview. Do your own independent search on key claims. Remember that, as I mentioned earlier, AI systems don’t actually know what’s true – they only know what’s popular online. When an AI Overview makes a surprising claim, it’s crucial to verify it across multiple authoritative sources.

Watch for Implausible Claims 

As we’ve seen with examples like the widely shared AI Overview that suggested using glue to get cheese to stick to pizza, AI can sometimes generate completely absurd content, often drawing from satire or fictional sources without recognizing them as such. If something sounds unusual, surprising, or downright dangerous, that’s your cue to dig deeper.

Verify Numbers and Data 

When AI Overviews present statistics, dates, or quantitative claims, these need special scrutiny. Remember that AI can confidently present numbers that look plausible but are entirely fabricated. Always track down the original source of any statistical claims.

Use AI to Check AI 

Here’s an interesting approach: you can use ChatGPT to fact-check Google’s AI Overviews. Because these AI models are trained on different datasets and use different algorithms, one may catch errors the other misses. 

Try copying the AI Overview content into ChatGPT and asking it to verify specific claims or point out potential inaccuracies. Just remember that ChatGPT can also hallucinate, so use this as one of many verification tools, not your only source of truth.
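If you want to make this a repeatable habit rather than manual copy-pasting, the same idea can be scripted. Here’s a minimal sketch using the `openai` Python package; the model name `gpt-4o-mini` and the prompt wording are my own illustrative choices, not part of any official fact-checking workflow:

```python
# Sketch: ask a second AI model to sanity-check claims copied from an
# AI Overview. Assumes the `openai` package (pip install openai) and an
# OPENAI_API_KEY environment variable; the model name is an example only.
import os


def build_fact_check_prompt(overview_text: str) -> str:
    """Wrap the copied AI Overview text in a verification request."""
    return (
        "The following text is an AI-generated search summary. "
        "List each factual claim it makes and flag any that are "
        "implausible, unverifiable, or likely hallucinated:\n\n"
        f"{overview_text}"
    )


def fact_check(overview_text: str) -> str:
    """Send the prompt to a second model and return its critique."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "user", "content": build_fact_check_prompt(overview_text)}
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    snippet = "Geologists recommend eating one small rock per day."
    if os.environ.get("OPENAI_API_KEY"):
        print(fact_check(snippet))
    else:
        # Without an API key, just show the prompt that would be sent.
        print(build_fact_check_prompt(snippet))
```

The caveat from above still applies in code form: the second model can hallucinate too, so treat its critique as one more signal, not a verdict.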

Use Google’s Web-Only View 

Here’s a practical tip: After your search, click the “More” tab and select “Web” to see traditional search results without the AI Overview. This gives you access to the raw sources, allowing you to evaluate information the old-fashioned way – by comparing multiple viewpoints and sources.
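You can also jump straight to that view instead of clicking through the tabs. The `udm=14` URL parameter is widely reported to request Google’s web-only results directly; it’s unofficial and could change, so treat this sketch as a convenience hack rather than a documented API:

```python
# Sketch: build a Google search URL that requests the "Web" results tab
# directly. The udm=14 parameter is widely reported to select web-only
# results (no AI Overview); it is unofficial and may change at any time.
from urllib.parse import urlencode


def web_only_search_url(query: str) -> str:
    """Return a Google search URL for the web-only results view."""
    params = {"q": query, "udm": "14"}
    return "https://www.google.com/search?" + urlencode(params)


print(web_only_search_url("how to fact-check AI overviews"))
# https://www.google.com/search?q=how+to+fact-check+AI+overviews&udm=14
```

Bookmarking a browser keyword search with that parameter gives you a one-keystroke route back to the traditional results page.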

The key is to treat AI Overviews as a starting point for research, not as the final word. By maintaining healthy skepticism and following these verification steps, you can benefit from the convenience of AI summaries while avoiding their potential pitfalls.

Looking Ahead

As AI Overviews roll out to over a billion users worldwide, we’re witnessing a fundamental shift in how people interact with information online. This isn’t just about convenience – it’s about the future of critical thinking and information literacy.

While AI technology continues to advance at an impressive pace, the relationship between human intelligence and artificial intelligence is evolving into a delicate balance. As we navigate this new era of search, maintaining healthy skepticism isn’t just good practice – it’s essential for preserving our ability to think independently and evaluate information critically. 

The future of search may be AI-powered, but it still needs human wisdom to be truly useful.