Why I ran this experiment
Most AI tools in marketing are answering the wrong question.
They answer “how do I write content faster?” The question that actually matters is “how do I build a content strategy that is connected, governed, and defensible — and then execute it without rebuilding it from scratch every quarter?”
Those are not the same question. And the tools built to answer the first one fail catastrophically at the second.
So I ran a deliberate, sceptical test. I gave Iriscale’s AI Chief Marketing Agent — the CMA — one hour and a clean brief. No pre-loaded strategy. No saved templates. No hand-holding. Just a one-paragraph description of Iriscale, a target audience, a list of requirements, and a few seed topics.
The question I was actually asking: can a purpose-built AI Chief Marketing Agent produce a complete, defensible content strategy from scratch — without me steering it every two minutes?
Not “generate ten blog ideas.” A real plan. Demand data, TOFU/MOFU/BOFU mapping, content architecture, governance rules, and a weekly execution workflow a team could actually run.
What follows is a step-by-step, unfiltered walkthrough of exactly what the CMA produced — the structure, the tables, the decision logic, and the gaps — so you can judge whether an AI-built content strategy is mature enough for high-stakes work.
The short answer: it was.
What I gave the CMA to start
Before getting into the output, here is exactly what I gave the CMA as input. I am being specific because the quality of the output is directly connected to the quality of this brief — and most teams underbrief their AI tools by a factor of ten.
Inputs provided:
- A one-paragraph description of Iriscale (connected AI growth marketing platform with a Chief Marketing Agent)
- Target customers: B2B marketing teams and agencies at 50 to 500 employees
- Must-haves: brand governance, real keyword data, TOFU/MOFU/BOFU coverage, AI and human oversight workflow
- Seed topics: content strategy, SEO, AI workflows, marketing amnesia, tool sprawl
That is it. No keyword list. No personas. No positioning document. Everything else the CMA built from that brief — which is the point.
Step 1: Knowledge extraction — the CMA built a brand and market brain before writing a single headline
The first thing the CMA did was refuse to jump into keywords.
This surprised me. Most generic AI tools sprint to “here are your keywords” the moment you give them a topic. The CMA treated knowledge extraction as a prerequisite — the same way a senior strategist would spend their first week onboarding before making a single recommendation.
It produced three outputs before touching keyword research.
Output 1: Positioning snapshot
- Category: AI content strategy platform and Chief Marketing Agent
- Primary promise: Generate a complete, governed content strategy using real demand signals — then execute it with AI and human oversight
- Differentiator hypothesis: A unified platform with governance beats a collection of prompt-and-pray tools
That last line matters. The CMA was not just describing Iriscale — it was identifying the specific positioning angle most likely to resonate with the buyer who has already tried disconnected AI tools and been disappointed.
Output 2: Buying committee map
The CMA did not stop at “marketing manager.” It broke the buying committee into four distinct roles with different motivations:
| Role | Primary concern |
|---|---|
| Economic buyer (VP Marketing / CMO) | Pipeline impact, risk, governance, board-level accountability |
| Champion (Content lead / SEO Manager) | Speed, coverage, content quality, workflow efficiency |
| User (Writers, PMMs, Demand Gen) | Clear briefs, reliable workflow, time savings |
| Blocker (Legal, Brand, Security) | Data handling, brand drift, approval control |
This is a meaningful distinction for content strategy. Content written to persuade the champion is different from content written to neutralise the blocker. A strategy that maps content to the full buying committee — not just the primary persona — closes more deals than one that does not.
Output 3: Objection inventory
Before producing a single keyword, the CMA drafted the objections the content strategy would need to address:
- “AI content is generic and hurts brand reputation”
- “AI cannot understand our niche or our customers”
- “We cannot trust an AI with high-stakes strategic decisions”
- “Strategy tools produce slide decks that no one actually implements”
Starting with objections before starting with keywords is the move of an experienced strategist, not an AI tool running a prompt. It means the content architecture is built to overcome the real barriers — not just to rank for the obvious terms.
The actionable takeaway: If you are evaluating any AI content strategy tool, do not start with “write me content.” Start with: “Summarise my positioning, my buying committee, and my objections in a single page — and tell me what you still need to know.” If it cannot do that cleanly, it is not ready to drive strategy.
Step 2: Keyword Repository — a demand-led system, not a list
Once the knowledge base existed, the CMA built what it called a Keyword Repository — a structured database designed to power planning decisions over time, not a one-time export.
The key distinction: a keyword list answers “what are people searching for?” A keyword repository answers “what should we build content about, in what order, for which buyer stage, with what priority?” The CMA built the second one.
A note on the metrics below: these are illustrative samples in the format the CMA produced, showing how the repository is structured and used. When you run this inside Iriscale, the repository is populated with real volume, CPC, and difficulty data from your specific market and geography.
Cluster A: “AI Chief Marketing Agent” — category definition cluster
| Keyword | Volume | CPC | Intent | Funnel stage |
|---|---|---|---|---|
| AI chief marketing agent | 250/mo | $9.50 | Mixed informational + commercial | TOFU → MOFU bridge |
| Chief marketing agent AI | 90/mo | $8.10 | Informational | TOFU |
| AI marketing agent | 1,300/mo | $7.20 | Informational | TOFU |
CMA note: “Define the category first. Publish an explainer, then a use-case guide, then an evaluation framework. Buyers cannot evaluate what they do not yet understand.”
Cluster B: “AI content strategy tool” — solution evaluation cluster
| Keyword | Volume | CPC | Intent | Funnel stage |
|---|---|---|---|---|
| AI content strategy tool | 700/mo | $12.40 | Commercial investigation | MOFU |
| Automated content strategy | 400/mo | $10.60 | Commercial investigation | MOFU |
| Content strategy automation | 150/mo | $8.90 | Commercial investigation | MOFU |
CMA note: “Compete on governance and workflow — not on speed. Avoid the ‘AI writer’ framing entirely. Buyers at this stage are evaluating systems, not copywriting assistants.”
Cluster C: “B2B content strategy framework” — evergreen credibility cluster
| Keyword | Volume | CPC | Intent | Funnel stage |
|---|---|---|---|---|
| B2B content strategy framework | 1,100/mo | $6.80 | Informational with lead-magnet potential | TOFU |
| Content strategy template B2B | 900/mo | $5.40 | Informational | TOFU |
| Content audit process | 1,600/mo | $4.90 | Informational | TOFU |
CMA note: “Offer downloadable templates. Tie the wasted-content problem (60 to 70 percent of content going unused) to the need for strategic governance. The template earns the email. The governance framing earns the demo.”
What made this feel like senior strategy, not a bot
Three things separated the CMA’s keyword output from a generic keyword export.
First, every term was tagged to a buyer stage. Stage tagging is how you avoid building a TOFU-only content engine that drives traffic but not pipeline. The CMA flagged the funnel balance explicitly: too many TOFU terms, not enough MOFU or BOFU coverage.
Second, it wrote decision rules alongside the data. Not just metrics — logic. One example rule the CMA produced: “If CPC is above $10 and intent is evaluation-stage, prioritise MOFU comparison and evaluation assets with BOFU proof modules (case study, demo narrative, ROI model) linked from the same page.”
Third, it flagged quality risk explicitly. The CMA noted that using AI to generate content volume without governance is the primary failure mode for teams that adopt AI content tools — and built governance into the keyword system architecture from the start rather than as an afterthought.
The actionable takeaway: Build a keyword repository that forces stage, intent, and asset type alongside volume and difficulty. If your spreadsheet only has volume and difficulty, you have a content calendar — not a content strategy.
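To make the distinction concrete, here is a minimal sketch of what a repository row and a decision rule look like as a data structure rather than a spreadsheet. The field names, class names, and priority labels are illustrative, not Iriscale's actual schema; the $10 CPC threshold and the evaluation-intent condition come from the rule quoted above.

```python
from dataclasses import dataclass


@dataclass
class KeywordEntry:
    """One row of a keyword repository: stage and intent live alongside volume."""
    keyword: str
    volume: int      # monthly searches
    cpc: float       # dollars
    intent: str      # "informational" | "commercial investigation" | "transactional"
    stage: str       # "TOFU" | "MOFU" | "BOFU"
    asset_type: str  # recommended format for this term


def priority(entry: KeywordEntry) -> str:
    """Decision rule from the repository: high-CPC evaluation-stage terms get
    MOFU comparison assets with BOFU proof modules linked from the same page."""
    if entry.cpc > 10 and entry.intent == "commercial investigation":
        return "high: MOFU comparison asset + linked BOFU proof modules"
    if entry.stage == "TOFU" and entry.volume >= 1000:
        return "medium: pillar or evergreen guide"  # illustrative rule, not from the CMA
    return "backlog"


row = KeywordEntry("AI content strategy tool", 700, 12.40,
                   "commercial investigation", "MOFU", "evaluation guide")
print(priority(row))  # → high: MOFU comparison asset + linked BOFU proof modules
```

The point of encoding the rule is that prioritisation stops being a judgment call made per spreadsheet session and becomes a repeatable property of the data.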
Step 3: Funnel mapping — topics mapped to jobs-to-be-done with explicit conversion paths
Once the repository existed, the CMA generated a Funnel Coverage Map — translating keyword clusters into content that serves a specific buyer job at a specific stage of the decision process.
This is where most AI content tools fall apart. They label things TOFU, MOFU, and BOFU, but the mapping is shallow — the same evergreen blog post is called TOFU in one strategy and MOFU in another, depending on what the tool decided that day.
The CMA mapped every topic to a specific job-to-be-done, a content format, and a conversion step. Here is an excerpt.
TOFU — problem and category education
Pillar: “What is an AI Chief Marketing Agent?”
- Job-to-be-done: Understand the new category and what it replaces — tools, headcount, agency retainers
- Formats: Long-form guide, LinkedIn carousel, short video script
- Conversion step: Newsletter signup or template download
- Key angle: Most buyers have never encountered this category. The first job is definition, not persuasion
Pillar: “Why most content strategies fail — and how to fix them”
- Job-to-be-done: Recognise the specific failure mode (content that does not compound, resets every quarter)
- Hook: Only 29 percent of B2B marketers rate their content strategy as highly effective
- Supporting angle: 60 to 70 percent of created marketing content goes unused — not because it was written badly, but because it was created without the strategic continuity that would make it deployable
- Conversion step: Download the content strategy checklist
MOFU — solution evaluation and internal buy-in
Pillar: “AI content strategy tool: how to evaluate platforms”
- Job-to-be-done: Build a shortlist and defend it to leadership
- Formats: Evaluation checklist, governance guide, ROI model
- Conversion step: Demo request
- Key angle: The buyer at this stage needs to win an internal argument, not just find a good product
Pillar: “Automated content strategy workflows that do not break brand”
- Job-to-be-done: Overcome the quality and governance objections raised by legal, brand, and security stakeholders
- Conversion step: Request access or trial
- Key angle: Address the blocker’s concerns directly — the champion cannot move forward until the blocker is neutralised
BOFU — proof and decision enablement
Asset: “CMA strategy pack for [industry]”
- Job-to-be-done: Prove the system understands the specific niche before the buyer commits
- Formats: Sample strategy output, content briefs, first-month editorial calendar
- Conversion step: Book a working session
- Key angle: Show the output before asking for the decision
Asset: “Governed content operations playbook”
- Job-to-be-done: Give legal and brand stakeholders the documentation they need to approve the tool internally
- Formats: Policy template, role-based approval workflow, audit trail explanation
- Conversion step: Forward to the blocking stakeholder
The detail that changed my view: conversion paths were explicit
Instead of “TOFU blog → generic CTA,” the CMA built actual conversion paths with named steps:
TOFU guide → content strategy template download → email nurture sequence → MOFU evaluation page → demo request → booking
That is not a funnel diagram. That is an operating plan. And the difference between a strategy that produces pipeline and one that produces traffic is almost always in the specificity of the conversion path.
The actionable takeaway: If your funnel map does not specify job-to-be-done, format, and conversion step for every topic, it is a topic list. Ask your team — or your AI tool — to produce a one-page coverage heatmap showing TOFU/MOFU/BOFU balance by month.
Step 4: Content architecture — pillars, clusters, and internal linking rules
With the funnel map complete, the CMA turned it into site architecture: pillar pages, supporting cluster articles, and internal linking rules written as enforceable workflow requirements.
This is the step where automated content strategy either becomes operational — or dies as a well-formatted plan that no one maintains after month two.
Pillar page blueprint (example)
Pillar: AI Chief Marketing Agent: Definition, Use Cases, and How to Evaluate One
- Pre-built H2 section structure: Definition, How it works, Governance, Success metrics, Evaluation checklist
- Proof module slots: Mini case studies, workflow diagrams, sample strategy outputs (to be populated from real customer data)
- CTA modules tiered by stage: Template download (TOFU), Evaluation checklist (MOFU), Demo request (BOFU)
Cluster map under the pillar
The CMA generated four supporting cluster articles mapped to the primary pillar:
- “AI content strategy tool vs generic chatbots: the governance gap”
- “Automated content strategy workflows: brand safety and approvals”
- “B2B content strategy framework with downloadable templates”
- “Content audit process: how to identify and eliminate unused content”
Each cluster article supports the pillar’s topical authority — and each is written to close a specific objection from the buying committee map generated in Step 1.
Internal linking rules
The CMA did not just suggest a linking structure. It wrote linking requirements as operational rules:
- Every cluster article must link up to its pillar page using exact-match or partial-match anchor text at least once
- Every pillar page must link down to six to ten cluster articles using descriptive anchor text
- MOFU assets must receive internal links from at least three TOFU pages per month
- New articles must identify three existing articles they will link to before drafting begins
Those rules sound simple. Most teams do not operationalise them — and then wonder why their pillar strategy never lifts domain authority or reduces bounce rate on cluster articles.
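Operationalising those rules can be as simple as a pre-draft gate in the editorial checklist. Below is a minimal sketch of such a check; the field names are illustrative, while the thresholds come from the rules above. The monthly rule (MOFU assets receiving three TOFU links per month) is an aggregate check and is omitted here.

```python
def check_linking_plan(article: dict) -> list[str]:
    """Pre-draft gate: return the internal-linking rules an article plan violates."""
    violations = []
    if article["type"] == "cluster" and article.get("uplinks_to_pillar", 0) < 1:
        violations.append("cluster must link up to its pillar at least once")
    if article["type"] == "pillar" and not (6 <= len(article.get("cluster_links", [])) <= 10):
        violations.append("pillar must link down to 6-10 cluster articles")
    if len(article.get("planned_internal_links", [])) < 3:
        violations.append("identify 3 existing articles to link to before drafting")
    return violations


plan = {"type": "cluster", "uplinks_to_pillar": 1,
        "planned_internal_links": ["audit-process", "governance-gap"]}
print(check_linking_plan(plan))  # one violation: fewer than 3 planned internal links
```

A new article that returns a non-empty list does not proceed to drafting. That single gate is the difference between linking rules as intention and linking rules as system.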
Distribution layer built into the architecture
The CMA added a distribution requirement to every pillar: each one had “repurposing slots” defined alongside the content brief. A pillar article does not ship until the carousel outline, newsletter version, and webinar abstract are also assigned.
This directly addresses the content waste problem. Content that is created without a distribution plan before drafting begins is the content that ends up in the 60 to 70 percent of assets that go unused. The CMA built prevention into the architecture rather than making it a separate editorial reminder that eventually gets forgotten.
The actionable takeaway: Treat content architecture as a system with enforceable rules — not a one-time diagram. Write linking requirements and distribution requirements into your editorial process. If your AI content strategy tool does not enforce them automatically, do it manually in a checklist.
Step 5: Drafting and workflow — briefs, QA rules, and a human-in-the-loop operating cadence
This is the step where I expected the CMA to produce generic output. “Here is your content brief template. Good luck.”
Instead, it produced what I would call a content ops pack: briefs, workflow stages, QA gates, and governance rules designed to prevent the specific problems that teams cite when AI content quality drifts — inconsistency, brand misalignment, and the gradual erosion of trust between marketing and sales.
Content briefs (per asset)
Each brief the CMA generated included six required fields:
- Primary keyword plus supporting keywords
- Search intent and target funnel stage
- Content angle — objection-led, template-led, or category-defining
- SME prompts — specific questions to ask internal subject matter experts before drafting begins
- Evidence requirements — the specific data points and proof the article must include
- Conversion requirement — the specific CTA and the specific next step it leads to
The evidence requirements field is the one most teams skip. Requiring every brief to specify the data points the article must include is what prevents the “generic AI content” failure mode — because the writer, whether human or AI-assisted, cannot complete the brief without sourcing the specific proof.
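The six required fields lend themselves to a validation step: a brief with any field missing or empty simply cannot be approved. This is a sketch under the assumption that briefs are stored as simple records; the field keys are my naming, mapped one-to-one onto the six fields listed above.

```python
# The six required brief fields from the content ops pack (keys are illustrative).
REQUIRED_BRIEF_FIELDS = [
    "primary_keyword", "intent_and_stage", "content_angle",
    "sme_prompts", "evidence_requirements", "conversion_requirement",
]


def brief_is_approvable(brief: dict) -> bool:
    """A brief cannot be approved until all six fields are present and non-empty."""
    return all(brief.get(field) for field in REQUIRED_BRIEF_FIELDS)


draft = {
    "primary_keyword": "AI content strategy tool",
    "intent_and_stage": "commercial investigation / MOFU",
    "content_angle": "objection-led",
    "sme_prompts": ["What breaks first when teams scale AI drafting?"],
    "evidence_requirements": [],  # no proof points sourced yet -> blocks approval
    "conversion_requirement": "demo request",
}
print(brief_is_approvable(draft))  # → False
```

Note that the empty `evidence_requirements` list is what blocks approval, which is exactly the behaviour the paragraph above argues for: the brief cannot complete without sourced proof.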
Governance rules
The CMA proposed a governance framework with four components:
Brand voice guardrails: An approved vocabulary list, a list of banned claims, and tone requirements written as specific rules rather than subjective guidelines
Citation discipline: Any numeric claim must be sourced before publication. Claims without sources must be labelled as analysis or team observation — not presented as fact
Approval workflow: Draft → Editor QA → Brand review → Legal review (optional, triggered by specific content types) → Publish
Audit trail: Every published piece carries a record of what the AI generated versus what humans edited. This is not just a quality measure — it is a trust measure between marketing and the executive team that approved AI adoption
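The approval workflow reads naturally as a small state machine, with legal review as a conditional branch. The sketch below is hypothetical: the stage names come from the framework above, but the trigger list and routing logic are illustrative assumptions about what "triggered by specific content types" might look like.

```python
# Illustrative trigger list -- content flags that route a draft through legal review.
LEGAL_TRIGGERS = {"pricing claims", "customer data", "competitor comparison"}

# Allowed transitions in the approval workflow (stage names from the framework).
TRANSITIONS = {
    "draft": ["editor_qa"],
    "editor_qa": ["brand_review"],
    "brand_review": ["legal_review", "publish"],  # legal review is optional
    "legal_review": ["publish"],
}


def next_stage(current: str, content_flags: set[str]) -> str:
    """After brand review, route to legal only when a trigger flag is present."""
    if current == "brand_review":
        return "legal_review" if content_flags & LEGAL_TRIGGERS else "publish"
    return TRANSITIONS[current][0]


print(next_stage("brand_review", {"competitor comparison"}))  # → legal_review
print(next_stage("brand_review", set()))                      # → publish
```

Encoding the workflow this way also gives the audit trail its backbone for free: each transition is an event that can be logged with who approved it and what changed.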
Weekly execution cadence
The CMA produced a specific weekly operating rhythm:
| Day | Activity |
|---|---|
| Monday | Keyword and intent review — select 2 TOFU, 1 MOFU, 1 BOFU assets for the week |
| Tuesday | Briefs locked — SMEs scheduled for input sessions |
| Wednesday | Draft produced — internal link plan confirmed |
| Thursday | QA review — governance checks completed |
| Friday | Publish — distribution activated — content backlog updated |
This cadence is designed around a team of two to four people. It is not a ten-person content operation. It is a lean team running a connected system — which is the exact context Iriscale’s CMA was built for.
The biggest structural win: unused content prevention built into the workflow
The CMA did not mention the content waste problem and move on. It embedded prevention into every stage of the workflow:
- Every asset requires a distribution plan defined before drafting begins
- Every asset must map to a funnel stage and a specific conversion CTA before a brief is approved
- A quarterly refresh and prune sprint is scheduled as a standing calendar event — not an ad hoc exercise when someone notices traffic declining
The actionable takeaway: If you want AI and human oversight to function at scale, make governance a workflow stage — not a policy document. The CMA’s approach — rules, approvals, audit trail, and distribution requirement before drafting — is the difference between “AI wrote a blog post” and “AI is safe to scale.”
What the CMA produced in full — the complete strategy pack
Here is the complete output the CMA delivered in under an hour. This is not a list of what it could produce. This is what it actually produced.
Strategy Pack (complete CMA output)
- [ ] Positioning snapshot — category definition, primary promise, differentiator hypothesis
- [ ] ICP and buying committee map — economic buyer, champion, users, blockers, each with specific concerns
- [ ] Objection inventory — quality, niche accuracy, brand voice, security, ROI, and strategic credibility
- [ ] Keyword repository — clusters with volume, CPC, intent, stage, priority, and decision rules
- [ ] Funnel coverage map — TOFU/MOFU/BOFU topics with job-to-be-done, format, and conversion step
- [ ] Pillar and cluster architecture — pillar outlines, supporting cluster list, and distribution slots
- [ ] Internal linking rules — up-link and down-link requirements, MOFU link minimums, new article linking checklist
- [ ] Content brief template — intent, angle, SME prompts, evidence requirements, CTA
- [ ] Governance and QA framework — brand guardrails, citation rules, approval workflow, audit trail
- [ ] Weekly execution cadence — daily activities, asset selection, publication, and distribution rhythm
- [ ] Reporting specification — what to measure by stage (traffic, leads, demo requests)
If you want to run this without Iriscale, copy this checklist into Notion and work through each item with your team. If you are using Iriscale, this is what the CMA generates as a unified strategy pack — in a single session, from a single brief.
The honest gaps — what the CMA could not do without human input
This article would be less useful if I pretended the output was perfect.
Three things still require human input:
First, real customer proof. The CMA flagged where case studies, customer quotes, and specific outcome data should go — but it cannot invent them. The “proof module slots” in every pillar page blueprint are placeholders. Filling them requires the team to surface real customer outcomes. That is not a limitation of the CMA — it is the right answer. Invented proof is worse than no proof.
Second, niche validation. The CMA built the strategy from the brief I provided. The ICP assumptions, the objection inventory, and the funnel map are hypotheses — not facts. They should be validated against sales call recordings, win-loss notes, and CRM data before the team scales production. The CMA explicitly listed its unknowns at the end of the knowledge extraction step and flagged which assumptions needed validation. That transparency matters.
Third, editorial voice. The briefs tell a writer what angle to take, what evidence to include, and what conversion step to drive toward. They do not tell the writer how to open an article in a way that makes a VP of Marketing stop scrolling. That judgment — the voice, the specific observation, the sentence that makes someone say “that is exactly my experience” — is human. The CMA accelerates everything around it. The voice still has to come from the team.
Is Iriscale right for your team?
Iriscale is built for B2B SaaS marketing teams at the 50–500 employee stage who are ready to replace a disconnected content operation — one where strategy resets every quarter, where AI output requires heavy editing because no tool knows the brand, and where the content library grows while the pipeline impact stays flat.
If you want to see the CMA generate a complete strategy for your specific market — your ICP, your keyword clusters, your buying committee, your governance requirements — book a 30-minute walkthrough. Bring your brief. The CMA will build the rest in the session.
Frequently Asked Questions
What is an AI Chief Marketing Agent and how is it different from a regular AI writing tool?
An AI Chief Marketing Agent is a strategic layer that operates above individual content generation — building the knowledge base, keyword architecture, funnel map, content architecture, and governance framework that governs what gets written, for whom, and why. A regular AI writing tool produces text from a prompt. The CMA produces the system that determines what prompts should be written, in what order, for which buyer stage, with what evidence requirements and governance rules. The output is not a blog post. It is an operating plan.
How long does it actually take for the CMA to build a complete content strategy?
In the session described in this article, the CMA produced a complete strategy pack — positioning snapshot, buying committee map, objection inventory, keyword repository, funnel coverage map, content architecture, briefs, governance framework, and weekly cadence — in under an hour from a cold brief. That includes the time for the knowledge extraction step, which most teams skip and then pay for later in brand drift and editorial inconsistency.
Can the CMA understand a niche it has never encountered before?
The CMA mitigates niche misunderstanding by starting with structured knowledge extraction rather than jumping to keywords. At the end of Step 1, it produces a list of explicit unknowns — the assumptions it made that need human validation before scaling production. For niche validation, the team should check the ICP and objection inventory against sales call recordings and win-loss notes before treating the strategy as final. The CMA’s BOFU plan — industry-specific strategy packs that demonstrate niche understanding — is also designed to prove category competence before the buyer commits.
How does the CMA prevent the generic content problem?
Two mechanisms. First, evidence requirements: every brief specifies the exact data points, proof statements, and customer examples the article must include. A writer cannot submit a completed brief without sourcing those specifics — which forces differentiation. Second, SME prompts: every brief includes specific questions to ask internal subject matter experts before drafting. That expert input is what makes an article specifically yours rather than generically correct.
What governance does the CMA build into the content workflow?
The CMA’s governance framework has four components: brand voice guardrails (approved vocabulary, banned claims, tone requirements written as specific rules), citation discipline (every numeric claim must be sourced or labelled as internal analysis), an approval workflow (draft to editor QA to brand review to optional legal review to publish), and an audit trail that records what AI generated versus what humans edited. The audit trail is the component most teams skip — and the one that builds the most trust between the marketing team and the executive team that approved AI adoption.
Does the CMA replace the content team?
No — and it is designed specifically not to. The CMA produces the system, the architecture, the briefs, and the governance framework. The content team fills three things the CMA cannot: real customer proof (case studies, specific outcomes, customer voice), niche validation (confirming ICP assumptions against actual sales data), and editorial voice (the specific observation or opening sentence that makes a reader stop and feel seen). The CMA accelerates everything around those three inputs. The team provides the inputs.
How is the keyword repository different from a standard keyword export from a tool like SEMrush?
A keyword export from a traditional tool gives you volume, difficulty, and CPC. The CMA’s keyword repository adds stage tagging (which funnel stage each term serves), intent classification (informational, commercial investigation, or transactional), asset type recommendation (what format best serves this keyword and buyer), and decision rules (if CPC is above a threshold and intent is evaluation-stage, prioritise comparison and proof assets). The repository is designed to answer “what should we build and in what order” — not just “what are people searching for.”
What happens after the strategy is built — how does the CMA help with execution?
The CMA produces the strategy pack and the first month of content briefs as part of the session. Execution runs through Iriscale’s Articles Hub (brief management, AI-assisted drafting, editorial workflow, and approval management), the Opportunity Agent (continuous community scanning that surfaces new brief opportunities as they emerge from buyer conversations), and Search Ranking Intelligence (tracking whether published content is performing in both Google and AI search engines). The strategy does not sit in a document. It lives in the platform that executes it.
© 2026 Iriscale · iriscale.com · AI-Powered Growth Marketing for B2B SaaS