
The game changed. You just didn’t notice.
If you’re still optimizing your content strategy for search engines like you’re organizing a library, you’ve already lost. The LLMs powering ChatGPT, Gemini, and Perplexity don’t care about your meta tags. They care about whether you’re credible enough to quote.
TL;DR: Traditional SEO was about being findable. LLM Optimization (LLMO) is about being quotable. If you aren’t the most reliable witness in the courtroom of an AI’s training data, you’re invisible, and your competitors are testifying in your place.
I came across a line from Gemini last week that stopped me cold:
“Think of traditional SEO as ‘Library Science’ and LLM Optimization (LLMO) as ‘Witness Testimony.’”
That’s not a metaphor. That’s a diagnosis.
The Librarian Era of Content Strategy Is Over
For two decades, SEO operated on a simple premise: organize your content strategy so Google’s crawlers could find it, index it, and rank it.
We became librarians.
We obsessed over sitemaps, canonical tags, and crawl budget. We structured our sites like filing cabinets. We wrote meta descriptions like index cards. The goal was simple: make sure the robot could find the book on the shelf.
This worked because search engines were retrieval systems. They matched queries to documents. They rewarded organization.
Here’s what the Library Science era looked like:
- Sitemaps told crawlers which pages existed
- Meta tags described what each page contained
- Internal linking created pathways for discovery
- Index management prevented duplicate content from confusing the system
The librarian’s job was curation. Keep the shelves clean. Label everything correctly. Trust that the system would reward order.
I spent years doing this work from my desk here in Boise. I’ve submitted thousands of URLs to Google Search Console. I’ve disavowed toxic backlinks. I’ve fixed crawl errors at 2 AM.
None of that matters the way it used to.

The Courtroom Has Replaced the Library
LLMs don’t retrieve documents. They synthesize answers.
When someone asks ChatGPT a question, it doesn’t pull up a ranked list of URLs. It generates a response by drawing on patterns learned from training data, and increasingly, from real-time retrieval systems like Bing, Google’s Search Generative Experience, and Perplexity’s index.
The question is no longer “Can they find you?”
The question is: “Will they quote you?”
This is the shift from Library Science to Witness Testimony. In a courtroom, the judge doesn’t care how many books you’ve written. The judge cares whether you’re credible, clear, and quotable under pressure.
An LLM is the judge. Your content is the witness.
If your content hedges, rambles, or contradicts itself, the AI treats it as unreliable. If your content is dense with verifiable facts, named entities, and clear declarative statements, the AI elevates it as a source of truth.
Golden Fact: LLMs prioritize content that reduces their “hallucination risk” (the probability of generating false information). Your job is to be so specific, so authoritative, and so semantically grounded that quoting you is safer than synthesizing an answer from weaker sources.
The Librarian vs. The Witness
Here’s the fundamental contrast between the two eras:
| Attribute | The Librarian (Traditional SEO) | The Witness (LLM Optimization) |
|---|---|---|
| Goal | Be findable in the index | Be quotable in the synthesis |
| Primary Asset | Page structure and metadata | Entity density and authority |
| Success Metric | Rankings and impressions | Citations and inclusions |
| Content Strategy | Keyword-optimized, scannable | Declarative, fact-dense, unambiguous |
| Technical Focus | Crawlability and indexation | Semantic grounding and Knowledge Graph presence |
| Failure Mode | Invisible to crawlers | Dismissed as unreliable |
| Competitive Risk | Outranked by competitors | Replaced by competitors’ testimony |
The librarian asks: “Is my content strategy organized?”
The witness asks: “Is my content strategy unimpeachable?”

How to Become the Unimpeachable Witness
I’ve been testing LLMO strategies for the past six months. Here’s what actually moves the needle when you’re trying to become the source that LLMs cite.
1. Increase Entity Density
Vague content gets ignored. Specific content gets quoted.
Entity density refers to the number of concrete, named things in your content: tools, people, companies, technical terms, data points, and locations. LLMs use entities to anchor their understanding of what your content is about and how authoritative it is.
Instead of writing “use analytics tools to track performance,” write “use Google Analytics 4’s BigQuery export to query session-scoped event data.”
The second version gives the LLM something to hold onto. It’s verifiable. It’s specific. It’s quotable.
Golden Fact: Content with high entity density is more likely to be included in Knowledge Graph relationships, which directly influences how LLMs retrieve and synthesize information.
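You can get a rough feel for the difference with a toy heuristic. The sketch below counts capitalized names and numeric tokens as a crude proxy for entities; real systems use trained named-entity recognition and Knowledge Graph linking, so treat this as an illustration of the contrast, not a measurement tool.

```python
import re

def rough_entity_count(text: str) -> int:
    """Crude proxy for entity density: count capitalized name phrases
    and digit-bearing tokens. A toy heuristic, not a real NER model."""
    # Capitalized phrases like "Google Analytics" or "BigQuery"
    names = [n.strip() for n in re.findall(r"\b(?:[A-Z][\w']*\s?)+", text)
             if len(n.strip()) > 1]
    # Standalone numbers like "4" (version numbers, data points)
    numbers = re.findall(r"\b\d[\w.]*\b", text)
    return len(names) + len(numbers)

vague = "use analytics tools to track performance"
specific = ("use Google Analytics 4's BigQuery export to query "
            "session-scoped event data")
print(rough_entity_count(vague), rough_entity_count(specific))
```

The vague sentence scores zero; the specific one registers multiple anchors ("Google Analytics", "BigQuery", the version number) that give a machine something concrete to verify and quote.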
2. Write Declarative Statements
Hedging kills credibility.
Phrases like “it might be worth considering” or “some experts believe” signal uncertainty. LLMs are trained to prefer confident, declarative statements because they reduce synthesis risk.
Compare:
- Weak: “Schema markup can potentially help search engines understand your content better.”
- Strong: “Schema markup provides explicit semantic signals that search engines and LLMs use to classify and cite your content.”
The second version is a fact. The first is a hedge. One gets quoted. One gets ignored.
3. Ground Content Strategy Semantically
Semantic grounding means connecting your content to established knowledge structures.
This includes:
- Using Schema.org markup to explicitly define entities on your page (Person, Organization, Article, HowTo)
- Referencing authoritative sources that LLMs already trust
- Including structured data that maps your content to Knowledge Graph entities
- Naming specific tools, frameworks, and methodologies rather than speaking in abstractions
When I write about technical health as the foundation of SEO, I name the specific crawlers, error codes, and diagnostic tools. That specificity isn’t just for readers; it’s for the machines that will decide whether to cite me.
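Here is what that Schema.org grounding can look like in practice: a minimal JSON-LD `Article` block built in Python. Every name, URL, and `sameAs` target below is a placeholder, and the property set is a small subset of what Schema.org supports; swap in your own entities.

```python
import json

# Minimal Article JSON-LD sketch. All names and URLs below are
# hypothetical placeholders -- replace them with your own entities.
article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Library Science vs. Witness Testimony",
    "author": {
        "@type": "Person",
        "name": "Jane Author",                   # placeholder
        "sameAs": "https://example.com/about",   # placeholder
    },
    "publisher": {"@type": "Organization", "name": "Example Agency"},
    "about": [
        # Name the entities the page covers so machines can map
        # the content to Knowledge Graph nodes.
        {"@type": "Thing", "name": "LLM Optimization"},
        {"@type": "Thing", "name": "Schema.org"},
    ],
}

# Embed the output inside <script type="application/ld+json"> on the page.
print(json.dumps(article_schema, indent=2))
```

The `about` array is the quiet workhorse here: it declares, explicitly rather than implicitly, which entities the page is testimony about.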
4. Eliminate Contradiction and Ambiguity
LLMs are trained on vast corpora of text. When they encounter contradictory information, they either average the claims (producing hallucinations) or deprioritize the conflicting sources.
Your content needs to be internally consistent and externally aligned with verifiable facts.
This means:
- Don’t publish multiple articles that contradict each other
- Update old content rather than publishing conflicting new content
- Cite sources that the LLM can cross-reference
- Avoid speculative language that could be interpreted multiple ways

The Cost of Algorithmic Perjury
Here’s what happens when you optimize for librarians instead of witnesses:
Your content gets indexed. It might even rank. But when someone asks an LLM a question in your space, the AI pulls from your competitor, the one who wrote with entity density, declarative authority, and semantic grounding.
You’re in the courtroom, but you’re not being called to testify.
Worse, if your content is vague or contradictory, you become a source of hallucination risk. The LLM learns to avoid you. Your domain becomes associated with uncertainty.
I call this algorithmic perjury: your content’s lack of clarity causes the AI to generate false or muddled answers that it then attributes to “sources like yours.”
You didn’t lie. But your sloppiness made the machine lie.
Golden Fact: LLMs increasingly use retrieval-augmented generation (RAG) systems that pull from live indexes. If your content isn’t structured for quotability, you’re excluded from the retrieval set entirely.
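The exclusion mechanism is easy to see in miniature. The sketch below uses naive word overlap as a stand-in for the embedding similarity a real RAG system computes against a live index; the selection logic is the point: only top-scoring passages ever reach the model’s prompt, and everything else simply does not exist to it.

```python
# Toy sketch of the retrieval step in RAG. Real systems use vector
# embeddings over a live index; word overlap stands in for similarity.

def overlap_score(query: str, passage: str) -> int:
    """Naive word-overlap relevance (stand-in for embedding similarity)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

corpus = [
    "schema markup provides explicit semantic signals for classification",
    "it might be worth considering various approaches to content",
]
query = "how does schema markup help semantic classification"

# Only the top-scoring passage enters the prompt; the hedge-filled
# passage is excluded from the retrieval set entirely.
retrieved = max(corpus, key=lambda p: overlap_score(query, p))
prompt = f"Answer using this source:\n{retrieved}\n\nQuestion: {query}"
print(retrieved)
```

The declarative, entity-dense passage wins the retrieval slot; the hedged one never gets called to testify.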
The Forensic Shift
I’ve spent years auditing sites with rotting technical foundations: broken canonicals, orphaned pages, bloated indices. That work still matters for crawlability.
But the frontier has moved.
The new forensic question isn’t “Can Google find this page?”
It’s “Would an LLM stake its credibility on quoting this sentence?”
If the answer is no, you have work to do.
From here in Boise, I’m watching agencies continue to sell the Library Science playbook while their clients slowly disappear from the AI-generated answers that are replacing traditional search results.
The witnesses are taking the stand. The librarians are being dismissed.
If your content strategy hasn’t shifted from organization to testimony, let’s talk. I run forensic audits that evaluate your content’s quotability, entity density, and semantic grounding: the factors that determine whether LLMs treat you as a source of truth or a source of noise.

