
Your traditional Share of Voice metric is a vanity number. I’m not sugarcoating it.
In 2026, measuring how often your brand shows up in Google search results tells you almost nothing about how AI systems perceive, trust, and recommend you. The game has shifted from visibility to inclusion, and if you’re not tracking your presence across ChatGPT, Perplexity, Claude, and Gemini, you’re flying blind while your competitors are getting quoted.
TL;DR: Traditional SOV measures how often you appear. LLM Share of Inclusion measures how often you get cited as the answer. This post walks you through a five-step forensic process to audit your brand’s presence across every major AI platform, using your own GSC data, manual probing, and tools like Semrush AI Visibility Toolkit and Perplexity.
Traditional SOV Is Dead. Share of Inclusion Is the New KPI.
I wrote recently about how technical SEO has evolved from library science to witness testimony. The same principle applies here.
Old-school Share of Voice asked: “How much of the search results page real estate do I own?”
That question is irrelevant when there’s no search results page.
When someone asks ChatGPT “What’s the best Shopify SEO app for product schema?” there are no ten blue links. There’s one answer. Maybe two. You’re either in that answer or you don’t exist.
Golden Fact: A brand mentioned 100 times in AI responses might still have weak Share of Voice if competitors receive 400 mentions across the same prompts. Absolute visibility means nothing without competitive context.
The new question is: “When AI systems synthesize an answer in my category, am I one of the witnesses they call to the stand?”
That’s Share of Inclusion.
And it’s not some vibes-based “AI KPI.” Platforms are already formalizing how they measure inclusion and citations. Semrush’s AI Visibility Toolkit explicitly frames the problem around monitoring brand presence in AI answers and AI-driven surfaces (i.e., tracking whether you’re being pulled into the answer, not just ranking for a keyword) (Semrush KB: AI Visibility Toolkit, 2025).
Now let’s measure it like an unimpeachable witness. Not like a marketer with a dashboard addiction.

Step 1: Build Your High-Intent Prompt Bank (Using GSC Data)
Before you can measure anything, you need to know what questions to ask.
I don’t start with guesswork. I start with Google Search Console.
Pull your top 200 queries by impressions from the last 90 days. Filter for queries that contain question modifiers: “how to,” “best,” “what is,” “vs,” “alternative to,” “for ecommerce,” “for Magento 2.”
These are your high-intent prompts, the exact language your audience uses when they’re in research mode. These queries translate almost directly into the prompts people type into ChatGPT or Perplexity.
Sort them into four buckets:
- Category education queries: “What is technical SEO?” or “How does a technical SEO audit work?”
- Solution discovery prompts: “Best tools for Shopify SEO” or “How to fix crawl errors on Magento 2”
- Comparison queries: “Semrush vs Ahrefs for ecommerce SEO” or “Screaming Frog alternatives”
- Use case-specific questions: “How to implement product schema for Magento 2 SEO”
You want at least 30-50 prompts spread across these buckets. This becomes your forensic audit checklist.
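The bucketing above is mechanical enough to script. Here's a minimal sketch of sorting a GSC query export into the four buckets; the modifier lists and the `gsc_top_queries.csv` filename are my assumptions, so swap in your own category language:

```python
import csv
from collections import defaultdict

# Modifier → bucket mapping. These substrings are illustrative —
# tune them to the question modifiers in your own GSC data.
BUCKETS = {
    "category_education": ["what is", "how does"],
    "solution_discovery": ["best", "how to fix", "tools for"],
    "comparison": ["vs", "alternative"],
    "use_case": ["how to implement", "for magento", "for shopify", "for ecommerce"],
}

def bucket_queries(rows):
    """Sort GSC query rows into the four prompt-bank buckets."""
    bank = defaultdict(list)
    for row in rows:
        q = row["query"].lower()
        for bucket, modifiers in BUCKETS.items():
            if any(m in q for m in modifiers):
                bank[bucket].append(row["query"])
                break  # first matching bucket wins
    return bank

# Usage: export Performance > Queries from GSC as CSV, then:
# with open("gsc_top_queries.csv") as f:
#     bank = bucket_queries(csv.DictReader(f))
```

Queries that match no modifier simply drop out, which is the point: you only want research-mode language in the prompt bank.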
Step 2: The Training Data Audit (Manual Probing)
Now comes the hands-on work.
Open ChatGPT, Claude, Gemini, and Perplexity in separate tabs. Take your first prompt from the bank and run it through all four platforms. Manually.
Yes, I know this is tedious. That’s the point. Automated tools miss nuance.
For each response, document:
- Was your brand mentioned by name? (Yes/No)
- What position was your brand in the response? (First, second, buried in a list, not at all)
- Was your brand linked or cited? (Perplexity and Gemini often include source links)
- What competitors were mentioned instead?
Run each prompt at least three times across different sessions. AI responses vary dynamically based on phrasing, context, and timing. You need an average, not a single snapshot.
Golden Fact: The core AI Share of Voice formula is simple: (Your Brand Citations ÷ Total Citations) × 100 = Your AI SoV %. If you get 25 mentions out of 100 total brand mentions across your tracked queries, your AI SoV is 25%.
Track this in a spreadsheet. One row per prompt, columns for each platform. This is your baseline.
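Once the spreadsheet exists, the Golden Fact formula is one function. A minimal sketch, assuming you've tallied mentions per brand across your tracked prompts (the brand names here are placeholders):

```python
def ai_sov(mentions_by_brand: dict, brand: str) -> float:
    """(Your Brand Citations ÷ Total Citations) × 100."""
    total = sum(mentions_by_brand.values())
    return round(100 * mentions_by_brand.get(brand, 0) / total, 1) if total else 0.0

# 25 of your mentions out of 100 total brand mentions → 25.0% AI SoV
print(ai_sov({"your_brand": 25, "competitor_a": 45, "competitor_b": 30}, "your_brand"))
```

Run it per platform as well as in aggregate; a 40% SoV on Perplexity can hide a 5% SoV on ChatGPT.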

Step 3: The RAG Loop (Tracking Citations in Perplexity)
Perplexity is the forensic goldmine here.
Unlike ChatGPT or Claude, Perplexity explicitly shows its sources with clickable citations. This means you can see exactly which pages the AI is pulling from when it synthesizes an answer.
That’s the practical, in-the-trenches version of retrieval-augmented generation: the model isn’t “remembering” you. It’s retrieving documents that look like credible testimony, then writing an answer from that evidence pile.
Run your high-intent prompts through Perplexity and track:
- Which domains appear in the citations? (Yours? Competitors? Third-party review sites?)
- Which specific pages are cited? (Is it your homepage? A blog post? A product page?)
- How recent is the cited content? (RAG systems pull from live data: freshness matters)
This tells you whether your content is making it into the retrieval-augmented generation loop. If Perplexity consistently cites your competitors’ blog posts but ignores yours, that’s a content gap you can fix.
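Tallying the citation log is the tedious part worth scripting. A sketch, assuming you record each Perplexity run as a prompt plus its list of cited URLs (the log structure is my convention, not a Perplexity export format):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_share(citation_log):
    """Percentage of Perplexity citations each domain captures.

    citation_log: list of (prompt, [cited URLs]) pairs you record manually.
    """
    domains = Counter(
        urlparse(url).netloc.removeprefix("www.")
        for _prompt, urls in citation_log
        for url in urls
    )
    total = sum(domains.values())
    return {d: round(100 * n / total, 1) for d, n in domains.most_common()}

# Usage:
# log = [("best shopify seo app", ["https://www.competitor.io/guide", ...]), ...]
# citation_share(log) → {"competitor.io": 62.5, "yourdomain.com": 12.5, ...}
```

If a third-party review site dominates the tally, that's a signal too: earning a mention on the page the AI already trusts may beat publishing your own.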
When I want to backstop this with tooling, I’m not looking for “AI SEO magic.” I’m looking for an audit trail: prompts → answers → citations → source URLs. That’s exactly why Semrush shipped an AI-specific visibility product in the first place—tracking whether your brand is present in AI answers and how that visibility changes over time (Semrush KB: AI Visibility Toolkit, 2025; see also Getting Started with AI Visibility Toolkit, 2025).
Tools like Semrush AI Visibility Toolkit and Otterly.AI can automate this citation tracking at scale, but I always start with manual probing to understand the landscape before I hand it off to software.
Step 4: Measure Entity Association (The “Who Are My Peers?” Test)
This step reveals how AI systems categorize you in their knowledge graphs.
Ask each LLM platform a simple question: “Who are the leading consultants in technical SEO for ecommerce?”
Or: “What are the best agencies for Magento 2 SEO?”
Or: “Who should I hire for a technical SEO audit on Shopify?”
Document every brand mentioned. Then ask yourself:
- Am I in this list?
- Who else is in this list? (Are these my actual competitors or completely different players?)
- What attributes does the AI associate with each brand?
This is the “entity association” test. AI systems don’t just know that you exist: they know what category you belong to and who your peers are.
And there’s a very specific technical reason this test works: AI search systems are leaning harder on entity recognition + entity linking (NER/EL) to decide what a “thing” is and what it’s related to. If the model can’t confidently extract and link your brand entity to the right attributes (“Magento 2,” “technical SEO,” “ecommerce”), you don’t get grouped with your real peers—you get dumped into the generic agency bucket. Lazarina Stoy lays out this exact shift: AI search is entity-first, and your visibility is downstream of whether systems can recognize and connect your entities consistently (How AI Search Platforms Leverage Entity Recognition, Stoy, 2025).
If you’re a Shopify SEO specialist but AI keeps grouping you with general marketing agencies, you have an entity positioning problem. Your content isn’t semantically grounding you in the right category.
Golden Fact: Some advanced platforms track entity-based AI SoV, which counts how many times your brand appears as a recommended entity divided by the total number of entities listed. This is often more actionable than raw mention counts.
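The entity-based variant is easy to compute from your own manual logs too. A sketch under the assumption that you record, for each peer-list response, the ordered list of brands the AI named:

```python
def entity_sov(responses, brand):
    """Entity-based SoV: times your brand appears as a listed entity,
    divided by the total number of entities listed across responses.

    responses: list of entity lists, one per AI answer (recorded manually).
    """
    appearances = sum(brand in entities for entities in responses)
    total_entities = sum(len(entities) for entities in responses)
    return round(100 * appearances / total_entities, 1) if total_entities else 0.0

# Two answers: you appear in one list of 3 entities; the other lists 2 without you.
# 1 appearance ÷ 5 entities listed → 20.0%
```

The denominator is what makes this more honest than raw mention counts: appearing in lists of twelve is worth less than appearing in lists of three.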
If you want a second metric to sanity-check “Share of Inclusion,” borrow from Arun Shastri’s Share of Model framing: measure how often the model mentions you for category prompts relative to competitors, then review the sentiment and attribute accuracy of those mentions—because a wrong mention is worse than no mention (Arun Shastri on Share of Model, 2025).

Step 5: Sentiment and Accuracy (The Hallucination Audit)
Getting mentioned isn’t enough. You need to get mentioned correctly.
Run brand-specific prompts through each platform:
- “What does [Your Brand] specialize in?”
- “What do customers say about [Your Brand]?”
- “Is [Your Brand] good for [your core service]?”
Read the responses carefully. Are they accurate? Are they pulling from outdated information? Are they hallucinating services you don’t offer or credentials you don’t have?
This is the hallucination audit. AI systems sometimes confabulate details when they lack strong source material. If ChatGPT says you specialize in “enterprise ERP migrations” when you actually focus on ecommerce SEO, that’s a problem.
Document every inaccuracy. These become your content priorities: you need to publish clear, authoritative pages that explicitly state who you are, what you do, and what you don’t do. Give the AI unimpeachable testimony to quote.
Traditional SOV vs. LLM Share of Inclusion
| Metric | Traditional SOV | LLM Share of Inclusion |
|---|---|---|
| What it measures | Search results page visibility | Inclusion in AI-synthesized answers |
| Data source | Rank tracking tools | Manual probing + AI visibility tools |
| Primary KPI | Impression share | Citation frequency and position |
| Competitive context | Keyword-level rankings | Entity association and peer grouping |
| Content signal | Keyword optimization | Semantic grounding and entity density |
| Platform scope | Google, Bing | ChatGPT, Perplexity, Claude, Gemini, Google AI Overviews |
| Freshness factor | Moderate | Critical (RAG pulls live data) |
The Forensic Summary: What This Audit Reveals
After completing these five steps, you’ll have a clear picture of:
- Your baseline AI Share of Voice across ChatGPT, Perplexity, Claude, and Gemini
- Which competitors are getting cited instead of you, and on which platforms
- Which of your pages (if any) are making it into RAG citation loops
- How AI systems categorize your brand relative to your actual positioning
- Where hallucinations or inaccuracies are damaging your reputation
This isn’t a one-time audit. Track these metrics monthly. Your AI SoV will shift as you publish new content, as competitors optimize, and as the LLMs update their training data and retrieval systems.
If you’re running a Shopify store, a Magento 2 site, or any ecommerce operation that depends on organic discovery, this audit is no longer optional. The AI platforms are where your customers are starting their research. If you’re not being cited, you’re not being considered.
Ready to run this audit but don’t have the bandwidth? I do this forensic work for ecommerce brands and technical SEO teams who want to know exactly where they stand in the AI visibility landscape. Reach out and let’s diagnose your Share of Inclusion before your competitors figure out theirs.

