The Uncomfortable Truth: We're Writing for Robots, Not Readers

Dean Gannon
13 min read
We’re Optimizing for Crawlers—and Losing the Audience That Matters

Track what answer engines actually cite (not just what they rank)—then build content that earns visibility in Google AI Overviews, ChatGPT, and Perplexity.

Overview

Marketing leaders are watching a familiar pattern: content calendars stay full, keyword targets get hit, word counts check out—and performance still flattens. The traditional SEO playbook has turned into a production line: keyword research, SERP analysis, internal-link quotas, “people also ask” scraping, and templated intros that read like compliance documents. It’s motion without meaning.

Here’s what changed: much of this work was designed for ranking systems, not readers. And the platforms are telling us—directly—that this approach is getting devalued. Google’s Helpful Content and spam-focused updates have repeatedly pushed down content that looks engineered for search rather than written from real experience. Third-party visibility data shows entire categories losing ground when content feels thin or affiliate-first [6][7].

At the same time, “search” is no longer ten blue links. Zero-click behavior is the default. SparkToro’s 2024 analysis found that 58.5% of US Google searches ended with no click, and only ~37.4% resulted in clicks to the open web [1][2]. AI Overviews accelerated the trend: Similarweb reported that zero-click searches rose from 56% to 69% after AI Overviews launched, alongside a sharp drop in organic visits to news sites (from more than 2.3B to under 1.7B visits) [3][4].

The job has changed: you’re not only competing to rank. You’re competing to be used—summarized, cited, and trusted—by answer engines.

At Iriscale, we built our platform for this moment. The framework below shows how to transition from SEO-first output to AI-era content strategy without sacrificing performance—and how Iriscale’s Knowledge Base and Opportunity Agent help you execute it.


1) Diagnose the SEO Industrial Complex (and stop shipping checklist content)

Most SEO programs don’t fail because teams lack effort. They fail because the system rewards motion over meaning.

In many organizations, “SEO content” has become a predictable assembly line:

  • Pick a keyword with volume
  • Copy the SERP headings
  • Add a glossary definition up top
  • Expand with generic sections until you hit a word count
  • Sprinkle exact-match phrases
  • Publish, refresh quarterly, repeat

This worked when ranking systems mainly needed relevance signals and content mass. But it created an entire category of pages that read like they were written by a compliance committee—technically “optimized,” emotionally empty.

What changed: Google has spent years tightening its stance against content created primarily to game rankings. The Helpful Content Update era has been brutal for sites where the dominant pattern is thin experience, recycled advice, and affiliate-heavy templating. Sistrix tracking showed major visibility losses in sectors like travel and reviews—exactly where “SEO-first” factories were most common [6][8].

Example (B2B): A mid-market B2B SaaS blog (think: “ultimate guide” everything) saw a step-change traffic drop after a core/helpfulness cycle. Their pages weren’t wrong—they were indistinguishable. The fix wasn’t more SEO. It was fewer pages, tighter POV, and proof that practitioners were behind the guidance (original frameworks, screenshots, decision criteria). Analysis based on common post-update recovery patterns; the visibility trend is consistent with third-party reporting on helpfulness impacts [6].

What to do: Run a “robot-content audit” on your top 50 URLs. Flag pages where:

  • The intro is definition-first and could fit any competitor
  • The headings mirror the SERP too perfectly
  • Examples are generic (no numbers, no screenshots, no decisions)
  • The page’s “unique insight density” is near zero

If you could swap in a competitor’s logo and nothing would change, you’re not building an asset—you’re producing commodity text.
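The audit above can be sketched as a rough heuristic scorer. Everything in this sketch is an illustrative assumption: the regexes, the 70% heading-overlap threshold, and the first-person-marker proxy for "unique insight density" are starting points to tune, not a validated rubric.

```python
import re

# Hypothetical heuristics for the robot-content audit. Signal names and
# thresholds are illustrative assumptions, not a tested rubric.
DEFINITION_OPENERS = re.compile(
    r"^\s*(what is|.+? is (a|an|the) |.+? refers to )", re.IGNORECASE
)

def audit_page(intro, headings, serp_headings, body):
    """Flag checklist-content signals on a single page."""
    flags = {}
    # 1. Definition-first intro that could fit any competitor
    flags["definition_first_intro"] = bool(DEFINITION_OPENERS.match(intro))
    # 2. Headings that mirror the SERP too perfectly (>70% overlap)
    overlap = len({h.lower() for h in headings} & {h.lower() for h in serp_headings})
    flags["serp_mirroring"] = bool(headings) and overlap / len(headings) > 0.7
    # 3. Generic examples: no numbers, screenshots, or concrete decisions
    has_proof = bool(re.search(r"\d|screenshot|we chose|we decided", body, re.I))
    flags["generic_examples"] = not has_proof
    # 4. Crude proxy for unique insight density: first-person evidence markers
    markers = len(re.findall(r"\b(we|our team|in our tests)\b", body, re.I))
    flags["low_insight_density"] = markers < 2
    flags["robot_score"] = sum(1 for v in flags.values() if v is True)
    return flags
```

A page scoring 3-4 is a strong candidate for consolidation or a rewrite with real proof; a score of 0-1 usually means the page has a defensible POV.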

[Visual: 2-column table — “SEO Checklist Signals” vs “Reader Value Signals”]


2) Understand the AI search revolution: rankings still matter—but citation is the new click

Here’s what the SEO-first crowd doesn’t want to admit: you can “win” visibility and still lose traffic.

Two forces are converging:

  1. Zero-click search is structurally baked in. SparkToro found most searches end without a click, and a material share of clicks go to Google-owned properties rather than the open web [2].
  2. AI Overviews and answer engines compress the funnel. Similarweb’s benchmark data shows zero-click search rose dramatically after the AI Overviews rollout [3]. Google positioned AI Overviews as a new way to “connect to the web,” with prominent links embedded in AI responses—but the user journey is shorter, and many queries resolve without a site visit [9][10].

Now layer in third-party answer engines: ChatGPT, Perplexity, Copilot. Adoption is not niche.

  • ChatGPT’s usage has scaled into the hundreds of millions of weekly users, with industry reporting estimating ~900M weekly active users (and massive query volume) by early 2026 [13], and OpenAI has continued expanding “search-like” experiences [14].
  • Perplexity has grown into a mainstream research tool, reported at 45M+ monthly active users by early 2026 in industry tracking [15].
  • Google AI Overviews reached broad distribution, with Google describing global expansion and deeper integration into Search [9].

The new model: ranking → snippet/overview → answer → citation. Your content might be read more than ever—by the model—but visited less than ever.

Example (publisher): Multiple news publishers have reported that AI referrals are growing but don’t come close to offsetting search declines—TechCrunch covered the mismatch explicitly [5]. Translation: even when you “show up,” you may not get the click.

What to do: Add Answer Engine Visibility to your KPIs:

  • Track brand and product mentions inside AI Overviews/AI Mode and ChatGPT-style search experiences (manual sampling + ongoing monitoring)
  • Track which pages get cited
  • Track query classes where AI resolves intent without a click (these need different CTAs and conversion design)
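One way to make these KPIs concrete is to log each sampled query against the rank → overview inclusion → citation → click funnel and count how far it gets. This is a minimal sketch assuming manual sampling produces per-query observation dicts; the stage names and data shape are hypothetical.

```python
from collections import Counter

# Stages mirror the funnel: rank -> included in AI Overview -> cited -> clicked.
STAGES = ("ranked", "in_ai_overview", "cited", "clicked")

def visibility_funnel(observations):
    """Count how many sampled queries reached each stage.

    `observations` is a list of dicts like
    {"ranked": True, "in_ai_overview": True, "cited": False, "clicked": False},
    produced by manual sampling or a monitoring tool (an assumption here).
    """
    counts = Counter()
    for obs in observations:
        for stage in STAGES:
            if obs.get(stage):
                counts[stage] += 1
    # Return stages in funnel order so drop-offs are easy to read.
    return {stage: counts[stage] for stage in STAGES}
```

Large gaps between adjacent stages tell you where to act: ranked-but-not-included suggests structure problems, included-but-not-cited suggests a weak or generic POV.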

At Iriscale, we built Opportunity Agent to scan conversations and identify content opportunities that traditional SEO tools miss—because the job is no longer just ranking. It’s earning citations.

[Visual: Funnel diagram — “Rank” → “Included in AI Overview” → “Cited” → “Clicked” → “Converted” (with drop-offs)]


3) What works in 2026: become the source, not the summary

SEO-first content tries to be “complete.” AI-era content needs to be credible and quotable.

Answer engines don’t reward fluff. They reward:

  • Clarity (can the model extract a direct answer?)
  • Evidence (is there real-world grounding?)
  • Authority signals (is this coming from a real operator/brand with consistency?)
  • Structure (is it easy to cite without mangling the meaning?)

BrightEdge has argued that organic search still drives a huge share of traffic (often 50–75% depending on the vertical), but also points toward “Answer Engine Optimization” as the next layer of strategy [11][12]. Classic SEO doesn’t disappear—it stops being sufficient.

The 2026 content playbook (practical)

1) Write for decisions, not definitions.
A definition is the easiest thing for an AI to generate. A decision framework isn’t.

  • Weak: “What is pipeline marketing?”
  • Strong: “When pipeline marketing fails: 5 diagnosis checks + what to change first.”

2) Take a position.
Models cite sources that are distinct. “It depends” content gets blended into the mush.

Example (brand POV wins): A B2B brand published a contrarian piece—“Stop chasing MQLs; fix your sales handoff first”—with a clear checklist, examples, and a measurement model. The article didn’t just rank; it started appearing in AI-generated “what should we do?” answers because it offered an opinionated sequence of actions. Analysis aligns with how citation-heavy engines prefer concrete, attributable frameworks.

3) Add “extractable assets.”
If you want to be cited, give the engine something clean to cite:

  • A 5-step process
  • A table of tradeoffs
  • A short “if/then” decision tree
  • A benchmark range (and where it breaks)
  • A template email / brief / spec

4) Build persistent topical authority, not one-off posts.
Answer engines infer trust from consistency. Publish clusters where every piece reinforces your definitions, your terminology, your recommended metrics, your worldview.

At Iriscale, we built the Knowledge Base to preserve strategic context across campaigns—so every piece of content reinforces your POV instead of resetting to generic.

5) Optimize for humans and machines (AI-native optimization).
This is not keyword stuffing. It’s making your meaning unmissable:

  • Use descriptive headings that match real questions
  • Answer in the first 2–3 sentences, then expand with proof
  • Include entity-rich language naturally (products, roles, systems, use cases)
  • Use clean tables and labeled steps
  • Add “limitations” sections (they increase trust)

What to do: Pick one priority topic this quarter and create a citation-ready hub:

  • 1 flagship POV page (your position + framework)
  • 3–5 supporting pages (examples, objections, comparisons, implementation)
  • 1 downloadable template (gated or ungated)
  • A quarterly refresh cadence based on new questions showing up in AI answers

[Visual: Content hub map — “Flagship POV” in center with supporting pages + template]


4) The Iriscale difference: human-first strategy, AI-era execution (without losing your mind)

Most teams don’t have a content problem. They have a coordination problem.

Even if you know what to do, execution falls apart because:

  • Strategy lives in one doc, briefs in another, drafts in another
  • Writers don’t have full context (product nuance, ICP objections, sales calls)
  • SEO checklists overpower messaging
  • Measurement stops at “rankings” while AI visibility goes untracked

We built Iriscale around the reality that the winning unit of work in 2026 isn’t “a blog post.” It’s a durable idea—expressed consistently across pages, structured so answer engines can cite it, and guided by humans who know the category.

How Iriscale supports the new playbook

Opportunity detection (what to write, not just what to rank for).
Instead of chasing a spreadsheet of keywords, Iriscale’s Opportunity Agent helps identify:

  • Where AI Overviews are collapsing clicks (so you need a different conversion path)
  • Which questions produce citations (and which sources are being cited)
  • Gaps where your brand has authority—but the web doesn’t have a clean, quotable answer yet

Our Opportunity Agent scans Reddit conversations for high-intent discussions—the kind traditional SEO tools miss—and recommends blog articles based on real problems your buyers are discussing.

AI-native optimization (clarity, structure, citation-readiness).
Iriscale guides teams to produce content with:

  • Extractable steps and frameworks
  • Clear claim → proof → implication formatting
  • Structured sections that can be summarized without losing accuracy
  • Human review baked in, so you don’t publish plausible nonsense

Persistent context (so every piece compounds).
This is the quiet killer feature. Iriscale’s Knowledge Base keeps your:

  • Brand POV
  • Terminology
  • ICP pains
  • “What we believe” positioning
  • Proof points and case snippets

…available across the workflow so content stops resetting to generic every time a new writer touches it. Marketing compounds instead of resetting.

Example (execution win): A content lead managing freelancers used Iriscale’s Knowledge Base to standardize voice and POV across an AI-era hub. The result wasn’t “more content.” It was fewer revisions, clearer differentiation, and pages that held up through algorithm churn because they were grounded in consistent expertise. Analysis reflects common operational gains from centralized context and human-in-the-loop review.

What to do: Replace your SEO checklist with an AI-era content acceptance test:

  • Can a reader summarize the POV in one sentence?
  • Is there at least one original framework/table?
  • Does the page include proof (screenshots, numbers, decisions)?
  • Does it answer a real question in the first 3 sentences?
  • Would an answer engine be able to cite a specific section cleanly?
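The acceptance test above can be expressed as a simple publish gate. The check names below are a direct translation of the checklist; how each answer gets filled in (editor review, tooling, or both) is left to your workflow.

```python
# Check names mirror the AI-era content acceptance test; they are a
# direct translation of the editorial checklist, not an API.
ACCEPTANCE_CHECKS = (
    "pov_summarizable_in_one_sentence",
    "has_original_framework_or_table",
    "includes_proof",                            # screenshots, numbers, decisions
    "answers_question_in_first_three_sentences",
    "has_cleanly_citable_section",
)

def ready_to_publish(review):
    """Return (passes, failed_checks) for a review dict of booleans.

    Any check missing from the review dict counts as failed, so a page
    can't pass by omission.
    """
    failed = [check for check in ACCEPTANCE_CHECKS if not review.get(check, False)]
    return (not failed, failed)
```

The all-checks-required design is deliberate: a page that nails four of five is still a page an answer engine has little reason to cite over the incumbent.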

[Visual: Scorecard mockup — “Citation-readiness” 0–100 with sub-scores: Clarity, Evidence, POV, Structure, Consistency]


Checklist: Stop writing for robots (and start earning citations)

Use this as your weekly editorial gut-check:

  • ✅ Prioritize reader intent over keyword targets (keywords support; they don’t lead)
  • ✅ Replace definition intros with a direct answer + why it matters
  • ✅ Publish a clear POV (yes, even if it’s uncomfortable)
  • ✅ Add extractable assets: steps, tables, decision trees, templates
  • ✅ Prove experience: screenshots, numbers, process details, pitfalls
  • ✅ Build hubs that compound authority (flagship + supporting pages)
  • ✅ Track AI visibility: Overviews inclusion, citations, brand mentions
  • ✅ Refresh based on new AI questions, not “quarterly because SEO said so”

Related questions (FAQs)

Will keywords still matter?

Yes—but as alignment signals, not as a writing mandate. If your page clearly answers the query, uses natural language that matches how your audience asks questions, and is structurally easy to parse, you’ll cover the keyword universe without stuffing. BrightEdge’s positioning is the same: organic remains critical, but AEO becomes the differentiator layer [11].

How do I measure AI citation visibility if clicks are dropping?

Treat citations like impressions in a new channel. Start with a repeatable sampling system:

  • Define 25–50 priority prompts (product category, jobs-to-be-done, comparisons)
  • Check Google AI Overviews presence and which URLs are cited
  • Check major answer engines for source links and recurring mentions

Then operationalize it. Similarweb’s zero-click findings make it clear you can’t rely on click-based measurement alone [3].
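That sampling system can be sketched in a few lines: log which URLs each engine cites for your priority prompts, then compute citation share per engine. The engine labels and data shape here are hypothetical, and the citation log is assumed to come from manual checks or whatever monitoring you use.

```python
from collections import Counter
from dataclasses import dataclass, field
from urllib.parse import urlparse

@dataclass
class CitationSample:
    """One prompt checked against one answer engine (engine names assumed)."""
    prompt: str
    engine: str                       # e.g. "google_ai_overviews", "perplexity"
    cited_urls: list = field(default_factory=list)

def citation_share(samples, domain):
    """Per engine: fraction of sampled prompts that cite `domain`.

    Uses a simple suffix match on the hostname, which is good enough for a
    sampling exercise (it would also match sibling subdomains).
    """
    totals, hits = Counter(), Counter()
    for sample in samples:
        totals[sample.engine] += 1
        if any(urlparse(url).netloc.endswith(domain) for url in sample.cited_urls):
            hits[sample.engine] += 1
    return {engine: hits[engine] / totals[engine] for engine in totals}
```

Run the same 25–50 prompts monthly and trend the shares; citation share moving while rankings hold steady is exactly the signal click-based measurement misses.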

At Iriscale, we help teams track visibility, citations, and sentiment across answer engines—then act on the prompts and sources driving results.

Are AI Overviews “stealing” all the traffic?

They’re compressing the journey—sometimes dramatically—especially for informational queries. The macro trend is visible in both SparkToro’s zero-click research and Similarweb’s post-Overviews benchmark shifts [2][4]. The move is to design content for being cited and ensure your pages still convert when the click does happen.

What kind of content gets cited by answer engines?

Content that’s easy to extract without losing meaning: crisp headings, direct answers, structured steps, and grounded evidence. Also: distinctive POV. When everything sounds the same, models have no reason to pick you.

Does “helpful content” mean I should stop publishing frequently?

No. It means stop publishing indistinguishable content frequently. Publish at a pace that allows for proof, POV, and coherence across a topic cluster. Quantity without differentiation is what gets punished when quality-focused updates roll through (as seen in industries hit during Helpful Content cycles) [6].


See how Iriscale makes AI-era content compounding (not exhausting)

If your team is tired of shipping SEO checklists and hoping for the best, it’s time to build a content engine designed for rankings and citations.

Request an Iriscale demo to see:

  • Opportunity Agent: find content opportunities traditional SEO tools miss
  • Knowledge Base: preserve strategic context so every piece compounds
  • AI-native optimization that stays human-first

Request a demo to see how Iriscale turns conversations into content opportunities—so marketing compounds instead of resetting.


Related guides (LEARN)

  • Iriscale LEARN: How AI Search Works
  • Iriscale LEARN: Building Authority for AI Search
  • Iriscale LEARN: Stop Writing for Google

Sources

[1] Nearly 60% of Google searches end without a click in 2024 (Search Engine Land): https://searchengineland.com/google-search-zero-click-study-2024-443869
[2] SparkToro 2024 Zero-Click Search Study: https://sparktoro.com/blog/2024-zero-click-search-study-for-every-1000-us-google-searches-only-374-clicks-go-to-the-open-web-in-the-eu-its-360/
[3] Similarweb 2025 SEO Benchmarks report (PDF): https://www.similarweb.com/corp/wp-content/uploads/2025/02/attachment-2025-SEO-Benchmarks-report.pdf
[4] Similarweb zero-click growth summary (SERoundtable): https://www.seroundtable.com/similarweb-google-zero-click-search-growth-39706.html
[5] TechCrunch on ChatGPT referrals vs search declines: https://techcrunch.com/2025/07/02/chatgpt-referrals-to-news-sites-are-growing-but-not-enough-to-offset-search-declines/
[6] Sistrix: Google Helpful Content Update (September 2023): https://www.sistrix.com/blog/google-helpful-content-update-september-2023/
[7] Sistrix: SEO losers in Google US search (2024): https://www.sistrix.com/blog/indexwatch-seo-losers-in-google-us-search-2024/
[8] Amsive: SEO in 2023 winners/losers and trends (includes Sistrix visibility context): https://www.amsive.com/insights/seo/seo-in-2023-winners-losers-and-overall-trends/
[9] Google: AI Overviews hub page: https://search.google/ways-to-search/ai-overviews/
[10] Google product blog: New ways to connect to the web with AI Overviews: https://blog.google/products-and-platforms/products/search/new-ways-to-connect-to-the-web-with-ai-overviews/
[11] BrightEdge: AI Overviews page: https://www.brightedge.com/ai-overviews
[12] BrightEdge: Research reports hub: https://www.brightedge.com/resources/research-reports
[13] Datos: ChatGPT Search by the Numbers (performance/usage roundup): https://datos.live/blog/chatgpt-search-by-the-numbers-how-is-it-performing-in-the-search-space/
[14] OpenAI: How people are using ChatGPT: https://openai.com/index/how-people-are-using-chatgpt/
[15] Backlinko: Perplexity statistics (usage roundup): https://backlinko.com/perplexity-statistics