An SEO Published a Fake Google Update. It Ranked on Page One. That Should Terrify All of Us.

Jon Goodey deliberately published an AI hallucination about a fake Google core update. It ranked on Google's first page, appeared in AI Overviews, and multiple sites repeated it as fact. Here's why this matters more than you think.

By Jhonty Barreto

Founder of SEO Engico | March 21, 2026 | 7 min read

I keep coming back to this experiment because it breaks something I wanted to believe about how search works.

Jon Goodey, an SEO practitioner, spotted an AI hallucination about a non-existent Google core update. Instead of correcting it, he published it as a LinkedIn article. On purpose. To see what would happen.

What happened should make every SEO practitioner uncomfortable.

What Actually Happened

Goodey published a LinkedIn newsletter claiming Google had launched a "March 2026 Core Update." It was completely fabricated. There was no such update at the time he published it. He wanted to test whether anyone would verify the information before sharing it (Search Engine Journal, March 2026).

The article ranked on Google's first page for "Google March update 2026."

It appeared in Google AI Overviews, the feature Google positions as a reliable summary of the best information on the web.

And then it spread. Multiple websites published their own detailed, authoritative-sounding articles about this update that didn't exist. TechBytes published a piece that invented specific technical details like a "Gemini 4.0 Semantic Filter" and "Zero Information Gain" classification system. None of it was real. All of it sounded completely plausible.

The major SEO news outlets (Search Engine Journal, Search Engine Land, SE Roundtable) didn't touch it. They know better. But the independent sites that picked it up didn't check. They just published.

Why This Worked

I've been turning this over in my head and I think three things made this possible.

LinkedIn carries platform authority

Google treats LinkedIn as a trusted domain. High domain authority. Millions of users. Regular indexation. When Goodey published on LinkedIn, the content inherited that trust signal. Google's algorithm saw a post on a high-authority platform by a real person with a real profile and real engagement, and treated it as credible.

This is the same reason Reddit content ranks so prominently now. Platform authority is being used as a proxy for content quality. And that proxy is deeply flawed.

AI can't fact-check. It can only pattern-match.

AI Overviews are built on language models that predict what text should come next based on patterns. They're very good at sounding authoritative. They're terrible at verifying whether something is true.

When Google's AI Overview pulled from Goodey's article, it wasn't evaluating the claim. It was recognising that the text matched the pattern of what a Google update announcement looks like. Right format, right terminology, right structure. The AI saw the shape of truth and assumed it was truth.

The internet amplifies; it doesn't verify

Once one source published the fake update, other sites referenced it as confirmation. This is the echo chamber effect. Each new article citing the previous one made the misinformation look more established. Within days, you could find multiple "independent" sources all confirming the same fiction.

Goodey himself put it simply: "Most readers don't fact check. AI overviews and search amplify misinformation. One article is echoed by the Internet with other sites repeating and embellishing on the original false information."

What This Reveals About E-E-A-T

Google has spent years telling us that E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) is central to how they evaluate content quality. They've published detailed quality rater guidelines. They've made it a pillar of their core update communications.

And yet a fabricated article about a fake algorithm update ranked on page one and appeared in AI Overviews.

Let's be honest about what this means. E-E-A-T is an aspiration, not a reality. Google's algorithms can detect some signals of expertise: author credentials, site reputation, citation patterns. But they fundamentally cannot evaluate whether a specific claim is true or false.

The quality rater guidelines describe what Google wants their algorithms to do. Goodey's experiment shows what they actually do. The gap between those two things is concerning.

And it gets more uncomfortable. Google's global affairs president Kent Walker recently stated that the company will not implement EU-mandated fact-checking requirements, calling such measures "simply not appropriate or effective." So not only can the algorithm not fact-check, Google has explicitly said they won't build systems to do it.

The AI Overviews Problem

This is the part that genuinely worries me.

Google positions AI Overviews as helpful summaries that save users time. The implicit promise is: we've read the internet so you don't have to, and here's what's true.

But AI Overviews cited a completely false article as fact. If the feature had existed purely as blue links, at least users would have had to click through and evaluate the source themselves. AI Overviews removed that step. They told the user "here's the answer" and the answer was wrong.

This isn't an edge case. News and media websites receive the highest page views from AI platforms but show the lowest engagement (9to5Google), suggesting users primarily visit to fact-check AI-generated content. People are already learning not to trust AI summaries blindly. But many still do.

For businesses relying on accurate information in search results, this is a real risk. If your competitor publishes misleading claims about your industry, those claims could appear in AI Overviews as established fact before anyone has a chance to correct them.

What I'd Actually Take Away From This

1. Original reporting has never been more valuable

If AI can rank and amplify fabricated content that sounds authoritative, then the only defensible content strategy is content that demonstrates genuine, verifiable expertise. Share data you collected. Reference experiments you ran. Link to primary sources. Show your working.

The bar for "good enough" content just got higher because "sounds right" is no longer enough. It never was, but now we have proof.

2. Be sceptical of everything, including industry news

I'll be honest, I've shared SEO news without thoroughly verifying every claim. Most of us have. Goodey's experiment is a wake-up call. Before you adjust your strategy based on something you read online, ask yourself: what is the primary source? Can I verify this independently? Or am I just trusting that someone else verified it?
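The "check the primary source" habit can even be mechanised. The sketch below is a minimal illustration, not a real integration: the incident records are hypothetical stand-ins for whatever an official source such as Google's Search Status Dashboard actually publishes, and the field names are my assumptions, not a real schema. The point is only that a claimed update should fail the check unless it matches an official record.

```python
from datetime import date

# Hypothetical stand-ins for officially announced ranking updates.
# In practice you would pull these from the primary source itself;
# the titles and field names here are illustrative assumptions.
OFFICIAL_RANKING_INCIDENTS = [
    {"title": "August 2025 spam update", "begin": date(2025, 8, 26)},
    {"title": "June 2025 core update", "begin": date(2025, 6, 30)},
]

def is_confirmed_update(claimed_title: str,
                        incidents=OFFICIAL_RANKING_INCIDENTS) -> bool:
    """Return True only if the claimed update matches an official record."""
    wanted = claimed_title.strip().lower()
    return any(wanted == inc["title"].lower() for inc in incidents)

if __name__ == "__main__":
    # The fabricated update from Goodey's experiment fails the check...
    print(is_confirmed_update("March 2026 core update"))  # False
    # ...while a genuinely announced update passes.
    print(is_confirmed_update("June 2025 core update"))   # True
```

Trivial as it looks, this is the step the embellishing sites skipped: no official record, no story.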

3. Your author credentials actually matter

Goodey's article ranked partly because LinkedIn carries authority. But the sites that embellished his fiction ranked because Google couldn't distinguish between real analysis and made-up technical details.

If you want your content to stand apart, invest in visible expertise signals. Real author bios with verifiable credentials. Citations to primary research. Links to your actual work and results. Not because these will prevent you from being outranked by misinformation; clearly they won't. But because they build trust with the humans who actually read your content.

4. Don't outsource your thinking to AI summaries

This applies to all of us, whether we're consuming AI Overviews or using ChatGPT for research. AI tools are incredibly useful for synthesis and pattern recognition. They are dangerously unreliable for verification. Treat every AI-generated claim as a starting point for research, not a conclusion.

Why I'm Still Betting on Real Expertise

I know this whole thing could read as doom and gloom. Google can't fact-check. AI amplifies fiction. E-E-A-T is aspirational. Misinformation ranks.

But here's what Goodey's experiment also proves: the sites that didn't fall for it were the ones with actual editorial standards. Search Engine Journal, Search Engine Land, SE Roundtable. They all ignored the fake update. They have processes. They have reputations built on being right, not being first.

Over time, the sites that embellished Goodey's fiction will lose credibility. The sites that maintained standards will gain it. Google's algorithms might be fooled today, but reader trust compounds. And reader trust is the one ranking factor that can't be faked.

The experiment exposed a real vulnerability in how search works. It should make us all more careful, more sceptical, and more committed to doing the hard work of creating content that's actually true.

Because if the bar for ranking on page one is just "sounds plausible on a trusted platform," then the only way to differentiate yourself is to be the source people trust when everything else turns out to be noise.
