Content Strategy: SEO as Witness Testimony

by | Jan 26, 2026 | AI, Content Strategy, SEO Tips

A courtroom scene in which an expert witness is talking with a judge. The expert witness represents content written for LLMs; the judge represents the AI algorithm that cites relevant content.

The game changed. You just didn’t notice.

If you’re still optimizing your content strategy for search engines like you’re organizing a library, you’ve already lost. The LLMs powering ChatGPT, Gemini, and Perplexity don’t care about your meta tags. They care about whether you’re credible enough to quote.

TL;DR: Traditional SEO was about being findable. LLM Optimization (LLMO) is about being quotable. If you aren’t the most reliable witness in the courtroom of an AI’s training data, you’re invisible, and your competitors are testifying in your place.

I came across a line from Gemini last week that stopped me cold:

“Think of traditional SEO as ‘Library Science’ and LLM Optimization (LLMO) as ‘Witness Testimony.’”

That’s not a metaphor. That’s a diagnosis.

The Librarian Era of Content Strategy Is Over

For two decades, SEO operated on a simple premise: organize your content strategy so Google’s crawlers could find it, index it, and rank it.

We became librarians.

We obsessed over sitemaps, canonical tags, and crawl budget. We structured our sites like filing cabinets. We wrote meta descriptions like index cards. The goal was simple: make sure the robot could find the book on the shelf.

This worked because search engines were retrieval systems. They matched queries to documents. They rewarded organization.

Here’s what the Library Science era looked like:

  • Sitemaps told crawlers which pages existed
  • Meta tags described what each page contained
  • Internal linking created pathways for discovery
  • Index management prevented duplicate content from confusing the system

The librarian’s job was curation. Keep the shelves clean. Label everything correctly. Trust that the system would reward order.

I spent years doing this work from my desk here in Boise. I’ve submitted thousands of URLs to Google Search Console. I’ve disavowed toxic backlinks. I’ve fixed crawl errors at 2 AM.

None of that matters the way it used to.

A modern courtroom witness stand illuminated before dusty library shelves, symbolizing SEO's shift from library science to testimony.

The Courtroom Has Replaced the Library

LLMs don’t retrieve documents. They synthesize answers.

When someone asks ChatGPT a question, it doesn’t pull up a ranked list of URLs. It generates a response by drawing on patterns learned from training data, and increasingly, from real-time retrieval systems like Bing, Google’s Search Generative Experience, and Perplexity’s index.

The question is no longer “Can they find you?”

The question is: “Will they quote you?”

This is the shift from Library Science to Witness Testimony. In a courtroom, the judge doesn’t care how many books you’ve written. The judge cares whether you’re credible, clear, and quotable under pressure.

An LLM is the judge. Your content is the witness.

If your content hedges, rambles, or contradicts itself, the AI treats it as unreliable. If your content is dense with verifiable facts, named entities, and clear declarative statements, the AI elevates it as a source of truth.

Golden Fact: LLMs prioritize content that reduces their “hallucination risk,” the probability of generating false information. Your job is to be so specific, so authoritative, and so semantically grounded that quoting you is safer than synthesizing an answer from weaker sources.

The Librarian vs. The Witness

Here’s the fundamental contrast between the two eras:

| Attribute | The Librarian (Traditional SEO) | The Witness (LLM Optimization) |
| --- | --- | --- |
| Goal | Be findable in the index | Be quotable in the synthesis |
| Primary Asset | Page structure and metadata | Entity density and authority |
| Success Metric | Rankings and impressions | Citations and inclusions |
| Content Strategy | Keyword-optimized, scannable | Declarative, fact-dense, unambiguous |
| Technical Focus | Crawlability and indexation | Semantic grounding and Knowledge Graph presence |
| Failure Mode | Invisible to crawlers | Dismissed as unreliable |
| Competitive Risk | Outranked by competitors | Replaced by competitors’ testimony |

The librarian asks: “Is my content strategy organized?”

The witness asks: “Is my content strategy unimpeachable?”

A chrome AI judge and empty marble witness chair in a minimalist courtroom, illustrating credibility in LLM optimization.

How to Become the Unimpeachable Witness

I’ve been testing LLMO strategies for the past six months. Here’s what actually moves the needle when you’re trying to become the source that LLMs cite.

1. Increase Entity Density

Vague content gets ignored. Specific content gets quoted.

Entity density refers to the number of concrete, named things in your content: tools, people, companies, technical terms, data points, and locations. LLMs use entities to anchor their understanding of what your content is about and how authoritative it is.

Instead of writing “use analytics tools to track performance,” write “use Google Analytics 4’s BigQuery export to query session-scoped event data.”

The second version gives the LLM something to hold onto. It’s verifiable. It’s specific. It’s quotable.

Golden Fact: Content with high entity density is more likely to be included in Knowledge Graph relationships, which directly influences how LLMs retrieve and synthesize information.
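
As a rough illustration (not a real NLP pipeline), here is a crude heuristic for comparing the entity density of two phrasings. It counts capitalized multi-word names and numeric tokens with Python's standard library; the regexes and the scoring are assumptions for demonstration only. A production system would use an actual named-entity recognizer.

```python
import re

def entity_density(text: str) -> float:
    """Crude heuristic: ratio of 'entity-like' tokens to total words.

    Counts runs of capitalized words (e.g. "Google Analytics") and
    numeric tokens (e.g. "4"). Single-word brand names like "BigQuery"
    slip through this simple pattern; a real pipeline would use a
    named-entity recognizer instead.
    """
    words = text.split()
    if not words:
        return 0.0
    # Proper-noun phrases: two or more consecutive capitalized words
    proper = re.findall(r"\b(?:[A-Z][a-zA-Z0-9]+\s?){2,}", text)
    # Numeric tokens: versions, counts, percentages
    numbers = re.findall(r"\b\d[\d.,%]*\b", text)
    return (len(proper) + len(numbers)) / len(words)

vague = "use analytics tools to track performance"
specific = ("use Google Analytics 4's BigQuery export to query "
            "session-scoped event data")

print(f"vague: {entity_density(vague):.2f}  "
      f"specific: {entity_density(specific):.2f}")
```

Even this toy scorer ranks the specific phrasing above the vague one, which is the point: named tools and data points give a machine something concrete to anchor on.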

2. Write Declarative Statements

Hedging kills credibility.

Phrases like “it might be worth considering” or “some experts believe” signal uncertainty. LLMs are trained to prefer confident, declarative statements because they reduce synthesis risk.

Compare:

  • Weak: “Schema markup can potentially help search engines understand your content better.”
  • Strong: “Schema markup provides explicit semantic signals that search engines and LLMs use to classify and cite your content.”

The second version is a fact. The first is a hedge. One gets quoted. One gets ignored.

3. Ground Content Strategy Semantically

Semantic grounding means connecting your content to established knowledge structures.

This includes:

  • Using Schema.org markup to explicitly define entities on your page (Person, Organization, Article, HowTo)
  • Referencing authoritative sources that LLMs already trust
  • Including structured data that maps your content to Knowledge Graph entities
  • Naming specific tools, frameworks, and methodologies rather than speaking in abstractions

When I write about technical health as the foundation of SEO, I name the specific crawlers, error codes, and diagnostic tools. That specificity isn’t just for readers; it’s for the machines that will decide whether to cite me.
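
To make the Schema.org point concrete, here is a minimal JSON-LD Article sketch emitted with Python's stdlib json module. The headline, author, and organization values are drawn from this post; the structure follows Schema.org's Article, Person, and Organization types, and this is an illustrative fragment rather than a complete markup strategy.

```python
import json

# Minimal JSON-LD Article markup; values are taken from this post
# and serve only as an example.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Content Strategy: SEO as Witness Testimony",
    "author": {
        "@type": "Person",          # an explicit, named entity
        "name": "Sean Edgington",
    },
    "publisher": {
        "@type": "Organization",    # another named entity
        "name": "Digital Mully",
    },
    "about": ["LLM Optimization", "SEO", "Entity density"],
}

# Wrap in the script tag that would go in the page's <head>
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(article_schema, indent=2)
    + "\n</script>"
)
print(snippet)
```

The markup does the witness's job for the machine: it declares, unambiguously, who is speaking, on whose behalf, and about what.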

4. Eliminate Contradiction and Ambiguity

LLMs are trained on vast corpora of text. When they encounter contradictory information, they either average the claims (producing hallucinations) or deprioritize the conflicting sources.

Your content needs to be internally consistent and externally aligned with verifiable facts.

This means:

  • Don’t publish multiple articles that contradict each other
  • Update old content rather than publishing conflicting new content
  • Cite sources that the LLM can cross-reference
  • Avoid speculative language that could be interpreted multiple ways

Side-by-side cubes show vague versus entity-dense content, highlighting the importance of specificity in SEO for LLMs.

The Cost of Algorithmic Perjury

Here’s what happens when you optimize for librarians instead of witnesses:

Your content gets indexed. It might even rank. But when someone asks an LLM a question in your space, the AI pulls from your competitor, the one who wrote with entity density, declarative authority, and semantic grounding.

You’re in the courtroom, but you’re not being called to testify.

Worse, if your content is vague or contradictory, you become a source of hallucination risk. The LLM learns to avoid you. Your domain becomes associated with uncertainty.

I call this algorithmic perjury: your content’s lack of clarity causes the AI to generate false or muddled answers that it then attributes to “sources like yours.”

You didn’t lie. But your sloppiness made the machine lie.

Golden Fact: LLMs increasingly use retrieval-augmented generation (RAG) systems that pull from live indexes. If your content isn’t structured for quotability, you’re excluded from the retrieval set entirely.

The Forensic Shift

I’ve spent years auditing sites with rotting technical foundations: broken canonicals, orphaned pages, bloated indices. That work still matters for crawlability.

But the frontier has moved.

The new forensic question isn’t “Can Google find this page?”

It’s “Would an LLM stake its credibility on quoting this sentence?”

If the answer is no, you have work to do.

From here in Boise, I’m watching agencies continue to sell the Library Science playbook while their clients slowly disappear from the AI-generated answers that are replacing traditional search results.

The witnesses are taking the stand. The librarians are being dismissed.

If your content strategy hasn’t shifted from organization to testimony, let’s talk. I run forensic audits that evaluate your content’s quotability, entity density, and semantic grounding: the factors that determine whether LLMs treat you as a source of truth or a source of noise.


Written By Sean Edgington

Senior Strategist at Digital Mully