
Every few months, someone declares SEO dead.
This time, the culprit is ChatGPT. Or Gemini. Or Perplexity. Or whatever shiny new AI assistant just dropped. The narrative is always the same: users will stop searching Google, AI will just give them answers, and your carefully optimized product pages will rot in digital obscurity.
I’ve heard this story before. Voice search was supposed to kill SEO. Featured snippets were supposed to kill SEO. Now it’s LLMs.
Here’s what actually happened: 76% of AI Overview citations come from pages already ranking in Google’s top 10 organic results. That’s not a coincidence. That’s a pattern. And if you understand that pattern, you understand why the “SEO is dead” crowd is fundamentally wrong.
LLMs Are Not Magic
Let’s strip away the hype for a second.
Large Language Models don’t have opinions. They don’t browse the internet for fun. They don’t “discover” your brand through serendipity. They are statistical prediction machines trained on massive datasets of text, and when they generate answers, they pull from sources they can actually understand.
This is the part everyone misses: LLMs are only as good as the data they consume.
If your website is a crawlable, well-structured, technically sound piece of digital architecture, you’re feeding the machine clean fuel. If your site is a bloated mess of JavaScript rendering issues, broken canonical tags, and zero structured data, you’re invisible. Not just to Google. To every AI system scraping the web for authoritative answers.
The secret to ranking in AI isn’t some new mystical optimization technique. It’s the same foundational work that’s always mattered. Clean code. Fast pages. Clear hierarchy. Explicit documentation of what your business actually is.

The Correlation Nobody Wants to Admit
Here’s a number that should end the debate: pages ranking in Google’s top 10 show a 0.65 correlation with mentions in AI systems.
That’s not weak. That’s remarkably strong for a signal that supposedly “doesn’t matter anymore.”
Why? Because Google and LLMs are solving the same problem. They’re trying to figure out which sources are trustworthy, relevant, and easy to parse. Google has been refining this for decades. LLMs are newer to the game, but they’re using similar signals: authority, topical depth, E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness), and whether your content actually answers the question.
If you’ve done the work to rank organically, you’ve already done most of the work to get cited by AI.
If you haven’t? You’re starting from zero in both arenas.
Technical SEO is the Training Data
I’ve written before about Maslow’s Hierarchy of Needs for SEO. The concept is simple: before you worry about content strategy, link building, or conversion optimization, you need a technically healthy foundation. Crawlability. Indexability. Site architecture that makes sense.
That hierarchy just got more important.
Think about it from the AI’s perspective. When an LLM is trained or when it’s pulling real-time data to generate an answer, it’s looking for sources it can trust. And trust, in this context, means:
- Can I actually access this page? (Crawlability)
- Is the content structured in a way I can parse? (Schema, semantic HTML)
- Does this source have authority on this topic? (Backlinks, brand mentions, topical clusters)
- Is the information current and accurate? (Freshness, citations)
Every single one of those factors is a technical SEO fundamental. Your robots.txt matters. Your XML sitemap matters. Your page speed matters. Your internal linking structure matters.
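The crawlability check above is mechanical enough to sketch. Here's a minimal illustration using Python's standard-library `urllib.robotparser` and a hypothetical set of robots.txt rules — this is how any crawler, Googlebot or an AI fetcher, decides whether it may touch your pages and where your sitemap lives:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules, invented for illustration
rules = [
    "User-agent: *",
    "Disallow: /cart/",
    "Allow: /",
    "Sitemap: https://example.com/sitemap.xml",
]

parser = RobotFileParser()
parser.parse(rules)

# A crawler checks these rules before fetching anything
print(parser.can_fetch("GPTBot", "https://example.com/products/widget"))  # True
print(parser.can_fetch("GPTBot", "https://example.com/cart/checkout"))    # False
print(parser.site_maps())  # sitemap URLs declared in robots.txt
```

If the answer to `can_fetch` is "no" for the pages that matter, nothing downstream — content, links, Schema — gets a chance to count.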
The technical base of your website is now the training base for AI.
Schema is the Source Code for AI Answers
If there’s one thing I’d tell every ecommerce brand to invest in right now, it’s structured data.
Schema markup is not about getting pretty star ratings in search results. That’s a nice side effect. The real value is that Schema is explicit documentation of your business that machines can read without guessing.
When you implement proper JSON-LD markup, you’re telling Google, ChatGPT, Perplexity, and every other data-hungry system exactly what your products are, what they cost, whether they’re in stock, who manufactured them, and what customers think of them. You’re not hoping they figure it out from context. You’re handing them a manual.
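As a sketch of what that "manual" looks like, here's a hypothetical product marked up per schema.org/Product and serialized as JSON-LD from Python — the product data is invented for illustration:

```python
import json

# Invented product data; field names follow the schema.org/Product vocabulary
product = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Trailhead Waterproof Jacket",
    "sku": "TWJ-1042",
    "brand": {"@type": "Brand", "name": "Example Outfitters"},
    "offers": {
        "@type": "Offer",
        "price": "149.00",
        "priceCurrency": "USD",
        "availability": "https://schema.org/InStock",
    },
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.6",
        "reviewCount": "212",
    },
}

# This JSON goes in the page inside <script type="application/ld+json">…</script>
print(json.dumps(product, indent=2))
```

Embedded in the page, that block states the product's name, price, availability, brand, and rating with zero inference required — exactly the kind of explicit signal a machine can cite with confidence.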

Research shows that citations are now the third most important factor for AI visibility, weighted at 13%, and that three of the top five AI visibility factors relate to citations and structured data. That’s not a minor detail. That’s a fundamental shift in how machines determine what to recommend.
If an LLM can’t understand your site, it won’t cite you. Period.
And if you think your Shopify app or Magento plugin is “handling” your Schema automatically, I’d encourage you to actually validate it. I’ve audited dozens of enterprise sites where the structured data was technically present but riddled with errors: missing required fields, incorrect nesting, duplicate IDs, or Schema that contradicts what’s actually on the page. Garbage in, garbage out.
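A first-pass sanity check along those lines can be scripted. This is an illustrative sketch only — the checks are a small sample of what a real audit covers, and it's no substitute for a full validator like Google's Rich Results Test:

```python
def audit_product_schema(data):
    """Flag common gaps in a schema.org Product object.

    Illustrative checks only; real Product markup can also use lists of
    offers, @id references, and many more required/recommended fields.
    """
    problems = []
    if data.get("@type") != "Product":
        problems.append("@type is not Product")
    for field in ("name", "offers"):
        if field not in data:
            problems.append(f"missing field: {field}")
    offers = data.get("offers", {})
    if "price" in offers and "priceCurrency" not in offers:
        problems.append("offer has a price but no priceCurrency")
    return problems


# The kind of stale, incomplete markup an unvalidated plugin can emit
broken = {"@type": "Product", "name": "Widget", "offers": {"price": "19.99"}}
print(audit_product_schema(broken))  # ['offer has a price but no priceCurrency']
```

Run something like this against what your plugin actually outputs, not what its marketing page promises.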
The Multi-Platform Reality
Here’s something that caught a lot of people off guard: ChatGPT uses Bing for local data.
Let that sink in. If you’ve been ignoring Bing because “nobody uses it,” you’ve been ignoring a primary data source for the most talked-about AI on the planet.
AI visibility isn’t a single-channel game anymore. You need presence across Google, Bing, Apple Business Connect, and any other platform that feeds data into these systems. Your Google Business Profile matters. Your Bing Places listing matters. The consistency of your NAP (Name, Address, Phone) data across the web matters.
This isn’t new advice. Local SEO practitioners have been preaching this for years. But now the stakes are higher because inconsistent data doesn’t just confuse human users. It confuses the AI systems that are increasingly mediating how people find businesses.
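One way to catch that inconsistency is to normalize each listing's NAP data before comparing across platforms. A toy sketch — the normalization rules here are deliberately simplistic, US-centric assumptions for illustration:

```python
import re

def normalize_nap(name, address, phone):
    """Normalize a Name/Address/Phone triple for cross-listing comparison.

    Toy rules for illustration: lowercase, collapse whitespace, abbreviate
    'street', keep the last 10 phone digits (US-style numbers assumed).
    """
    digits = re.sub(r"\D", "", phone)[-10:]
    address = re.sub(r"\s+", " ", address.lower().replace("street", "st").strip())
    return (name.lower().strip(), address, digits)


# Hypothetical listings that look different but describe one business
google = normalize_nap("Acme Coffee", "123 Main Street", "(555) 010-4477")
bing = normalize_nap("Acme Coffee", "123 Main St", "+1 555-010-4477")
print(google == bing)  # True
```

The point isn't this exact logic — it's that your listings should survive a comparison like this across every platform that feeds the AI systems.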
What Happens When You Ignore This
I recently talked to a brand that was panicking about their AI visibility. They’d noticed that when people asked ChatGPT about products in their category, competitors were being recommended but they weren’t.
Their site had:
- A staging environment accidentally indexed
- Canonical tags pointing to the wrong URLs
- Product Schema that hadn’t been updated in three years
- Page speed scores in the 30s on mobile
- An XML sitemap with 40,000 URLs, half of which returned 404s
They weren’t invisible because AI is unfair. They were invisible because they’d given the machines nothing to work with. Their site was a maze with no map, and they were shocked that nobody could find their way through it.
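Several of those failures are cheap to detect programmatically. Here's a sketch of the first step of a sitemap audit — parsing the standard sitemaps.org XML format to extract the listed URLs (the sample sitemap is invented). A real audit would then request each URL and flag anything that doesn't return a 200:

```python
import xml.etree.ElementTree as ET

# Namespace defined by the sitemaps.org protocol
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extract the <loc> URLs from a standard XML sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.findall("sm:url/sm:loc", NS)]


# Hypothetical two-URL sitemap for illustration
sample = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/products/widget</loc></url>
  <url><loc>https://example.com/about</loc></url>
</urlset>"""

print(sitemap_urls(sample))
```

Twenty thousand 404s in a sitemap isn't bad luck. It's an audit nobody ran.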

This is the reality. If your technical foundation is broken, you’re not just losing Google rankings. You’re losing the ability to be cited, recommended, or mentioned by any AI system that matters.
The Fundamentals Haven’t Changed
I know this isn’t the sexy answer people want. Everyone’s looking for the “AI SEO hack” that will catapult them to the top of ChatGPT’s recommendations. Some new tool. Some secret prompt. Some trick that the agencies are hiding.
There is no trick.
The secret is that great SEO has always been about making your site readable, trustworthy, and authoritative. That was true when Google was the only game in town. It’s still true now that LLMs are entering the conversation.
The sites that will win in the AI era are the same sites that have always won: the ones with clean architecture, fast load times, proper structured data, authoritative content, and a technical foundation that doesn’t require a machine to guess what you’re selling.
If anything, the bar just got higher. LLMs are demanding more explicit signals. More structured data. More clarity. More documentation of who you are and why you matter.
The brands that treat their website like a carefully architected system will thrive. The brands that treat it like an afterthought will wonder why the AI keeps recommending their competitors.
The Foundation is Still the Foundation
If you’ve been reading my work, you know I believe in starting with technical health. That philosophy hasn’t changed. If anything, the rise of LLMs has validated it.
You can’t content-strategy your way out of a broken foundation. You can’t link-build your way past a site that AI systems can’t crawl. You can’t “optimize for ChatGPT” if your Schema is a mess and your site architecture is incomprehensible.
The secret to ranking in AI is the same secret it’s always been: do the fundamentals better than everyone else.
When was the last time you audited your technical foundation with AI visibility in mind? If you’re not sure where you stand, let’s talk.

