Iriscale

Domain Migration + Crawled-Not-Indexed Recovery: The Complete Decision Framework


When rankings stall between positions 70–90 and Google Search Console fills with “Crawled – currently not indexed” flags, the instinct is to blame the domain and start fresh. Sometimes that’s the right call. Often, it’s not. This guide gives you a single diagnostic framework to separate domain-level baggage from fixable indexing blockers—then execute the correct path with measurable milestones.


What You’ll Learn

Two symptoms frequently appear together—large “crawled-not-indexed” buckets and chronic low rankings—but they don’t always share the same root cause. Use this framework to decide whether to fix in place, migrate with 301 redirects, or rebuild on a new domain without redirects. Then track recovery with clear, evidence-based milestones.


The Combined Problem: Indexing Selection vs. Domain Trust

Indexing is not guaranteed, even after Google crawls a URL. Google’s documentation confirms that crawling and indexing depend on content changes, server responses, site organization, and overall demand for your pages [1], [3]. In practice, “Crawled – currently not indexed” can represent thin or duplicative pages, crawl-budget inefficiencies, or technical blocks—without implying a penalty.

Domain migrations add a second layer of risk. Google’s guidance is consistent: for a standard site move with URL changes, implement 301 redirects, update internal links, and keep redirects in place for at least a year so Google can reprocess the move multiple times [4], [5]. However, Google representatives have acknowledged the 301 “paradox”: if the old site carries algorithmic or manual issues, redirects can carry those signals forward. Search industry reporting on Google’s guidance confirms that penalties can follow via redirects, and that a true “clean slate” requires avoiding signal inheritance—no redirects and no reuse of the same site footprint [6], [7].

This is where most teams get stuck: the mechanism that preserves equity (301s) can preserve problems. This article resolves that tension with a decision framework that treats domain migration and crawled-not-indexed recovery as one combined diagnostic problem.

What to do next: Don’t choose a migration approach until you’ve proven whether the indexing backlog is primarily (a) technical/crawl-budget, (b) content selection/quality, or (c) domain-level trust/penalty signals. The next seven steps show you how.


Step 1: Understand the Combined Problem (Indexing Selection vs. Domain Trust)

“Crawled – currently not indexed” is a selection outcome: Google fetched the page but decided not to add it to the index or not to keep it indexed. Google’s crawling and indexing documentation emphasizes that indexing is conditional and influenced by site quality, accessibility (mobile and rendering), and how efficiently Google can discover and re-crawl important URLs [1], [3]. Industry explanations align: common causes include thin or duplicative content, redirect or URL inconsistencies, large sets of low-value pages (often parameterized, faceted, user-generated content, or pagination), and weak internal signals about what matters [8], [9].

The migration question becomes intertwined because indexing backlogs often look like “the domain is ignored.” But many cases are simply wasted crawl demand: Googlebot spends resources on low-value URLs or gets mixed signals (canonicals, soft duplicates, inconsistent internal linking), so your money pages remain unindexed or rank poorly.

Real-World Scenarios

Position ~75, no migration needed (technical block): A services site sits at positions 70–90 for core queries and shows 8,000 “crawled-not-indexed.” Investigation reveals an overly broad Disallow rule applied to a key directory during a redesign, so Google crawls some URLs via external links but can’t reliably access internal discovery paths. After fixing robots rules, resubmitting sitemaps, and validating fixes in GSC, the “crawled-not-indexed” bucket trends down over weeks, and priority pages re-enter the index—typical of issues highlighted in GSC monitoring guidance [10].

Large site with pagination noise: An e-commerce store generates tens of thousands of paginated and filtered URLs. Conductor notes that large sites often see this status for sitemap and pagination-heavy setups and that it may resolve once signals consolidate—if the site clarifies what should be indexed [11].

Thin programmatic pages: A directory site publishes thousands of near-identical location pages with minimal unique value. Moz’s analysis flags thin and duplicate patterns as a frequent reason for crawled-but-not-indexed outcomes, pushing the fix toward consolidation and quality improvements rather than domain changes [8].

What to do next: Treat “crawled-not-indexed” as a prioritization problem first. Assume nothing about penalties until you’ve tested accessibility, crawl efficiency, and page-level value signals.


Step 2: The 301 Redirect Paradox (Preserve Equity vs. Preserve Baggage)

Google’s migration documentation is straightforward: for a standard move with URL changes, use permanent redirects (301s), keep them live long enough (often at least a year), and update sitemaps and internal links to accelerate consolidation [4], [5]. Search industry coverage of Google’s guidance confirms that keeping redirects for a year supports multiple rounds of recrawling and reprocessing [12]. Google also advises that large changes can take months to settle in search results [13].

The paradox appears when you suspect the old domain has domain-level issues. Search industry reporting on Google statements indicates that if a site is penalized (manual or algorithmic), redirecting can carry those issues to the new domain because Google understands the move is effectively the same site [6]. Conversely, if you avoid redirects to prevent inheriting problems, you also abandon much of your earned link equity and relevance signals.

Case-Style Illustrations

E-commerce migration with 301s + stagnation: An e-commerce brand migrates to a cleaner name and 301s everything. Post-move, indexing is fine—but rankings remain flat and “stuck.” A backlink review shows manipulative legacy anchors and suspicious link bursts. The new domain inherits that risk profile because the redirect confirms continuity—consistent with how Google is reported to treat penalties across redirects [6].

Media site keeps redirects <3 months: A publisher removes redirects early to reduce server load. Six months later, old URLs still appear in search sporadically, and the new URLs consolidate slowly. This matches Google guidance that redirects should stay long enough for repeated reprocessing; short windows increase instability [12], [13].

“Should I migrate without 301 redirects” dilemma: A site owner asks this exact question when they believe the domain is “tainted.” Google commentary reported in the search press suggests a genuine “clean slate” requires not carrying signals over—including not redirecting—and also avoiding reusing the same structure and content footprint that makes it easy to map old-to-new [7]. This is a last resort, not a default.

What to do next: A 301 migration is the right default for healthy sites. If you have credible evidence of domain-level toxicity or penalty-like suppression, consider either fixing first or executing a true clean-slate move with known trade-offs.


Step 3: Domain Toxicity Diagnostic (Separate “Suppressed Domain” from “Fixable Site”)

Before you pick a path, you need a defensible read on whether you’re dealing with domain penalty recovery work (domain-level trust suppression) or standard technical and content remediation. Use a layered diagnostic: GSC evidence, backlink risk signals, and change-correlation.

Google’s documentation emphasizes using Search Console for monitoring and debugging, including URL Inspection, indexing reports, and validation of fixes [10]. For “crawled-not-indexed,” Google’s FAQ and community threads point you back to fundamentals: ensure pages are accessible, valuable, not blocked, and properly signaled through canonicalization and site structure [1], [14]. In other words, Google rarely frames this status as “you’re penalized”—more as “we chose not to index it.”

Backlink Toxicity Indicators (Practical Thresholds)

Use these as risk flags, not definitive proof:

  • >20% exact-match commercial anchors across referring domains (often a manipulation footprint)
  • Majority of links from de-indexed or spammy domains (indicates link ecosystem risk)
  • Sudden link velocity spike (e.g., +10,000 links in days) without a PR event or viral spike
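The thresholds above can be screened programmatically against a backlink export. The sketch below assumes a hypothetical export shape—one row per backlink with `domain`, `anchor`, and `first_seen` fields—which you would adapt to whatever your backlink tool actually emits:

```python
from collections import Counter
from datetime import date

def anchor_risk_ratio(links, commercial_anchors):
    """Share of referring domains whose dominant anchor is exact-match commercial.

    `links` is a list of dicts with assumed keys: domain, anchor, first_seen.
    """
    by_domain = {}
    for link in links:
        by_domain.setdefault(link["domain"], Counter())[link["anchor"].lower()] += 1
    flagged = sum(
        1 for anchors in by_domain.values()
        if anchors.most_common(1)[0][0] in commercial_anchors
    )
    return flagged / len(by_domain) if by_domain else 0.0

def velocity_spike(links, window_days=7, threshold=10_000):
    """True if any rolling window gains more links than the threshold."""
    days = Counter(link["first_seen"] for link in links)
    ordered = sorted(days)
    for i, start in enumerate(ordered):
        total = sum(days[d] for d in ordered[i:] if (d - start).days < window_days)
        if total > threshold:
            return True
    return False
```

A ratio above 0.2 from `anchor_risk_ratio` corresponds to the &gt;20% exact-match flag above; treat both outputs as risk flags to investigate, not verdicts.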

These indicators matter because if the domain’s link graph is dominated by manipulative patterns, a redirect-based move may consolidate the same risk onto the new domain—defeating the purpose of migrating away from a toxic domain.

Diagnostic Snapshots

GSC pattern suggests “selection,” not penalty: Thousands crawled-not-indexed, but important URLs show “Indexed” in URL Inspection and impressions exist in Performance. This often points to a long tail of low-value URLs rather than domain-wide suppression—aligned with Google’s explanation that indexing depends on demand and quality [1], [3].

Correlation with a site change: Indexing drops immediately after a templating change that introduced duplicate canonicals or “noindex.” Google’s crawl-budget guidance emphasizes managing noindex, errors, and sitemaps to improve crawl efficiency—changes here can cause broad indexation shifts without penalties [2].

“Everything crawled, nothing ranks” after link burst: You see indexing instability and ranking caps across sections even after technical fixes. Combined with anchor and velocity flags, this is when penalty-like suppression becomes plausible—especially if the site had a history of aggressive link building—and the safest approach may be remediation before any redirect migration.

What to do next: Don’t label a domain “toxic” based on crawled-not-indexed alone. Escalate to toxicity hypotheses only when (a) fixes don’t change indexing trends, and (b) backlink and footprint signals suggest systematic manipulation or long-term low quality.


Step 4: The Decision Framework (Stay + Fix vs. Migrate with 301s vs. Clean Slate)

Here’s the unified decision tree for choosing a migration strategy when crawled-not-indexed is the presenting symptom. The goal: decide once, then execute cleanly.

Decision Tree (Practical)

A) First prove “can Google access and understand the site?”

  • If URL Inspection shows blocked resources, render issues, or inconsistent canonicals → Stay and fix (Step 7 monitoring still applies) [10].
  • If robots.txt or meta directives are wrong → Stay and fix (then validate in GSC) [10].
  • If sitemaps are outdated or flooded with non-canonical URLs → Stay and fix [2], [3].
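The robots.txt portion of branch A can be tested offline with Python’s standard-library robots parser. This is a minimal sketch—the rule and URLs are hypothetical, mirroring the “overly broad Disallow” failure mode from Step 1:

```python
from urllib.robotparser import RobotFileParser

# Parse a robots.txt body you have already fetched, then test key URLs
# against it. The Disallow rule here is an illustrative example.
robots_txt = """\
User-agent: *
Disallow: /services/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

priority_urls = [
    "https://example.com/services/plumbing",  # hypothetical money page
    "https://example.com/blog/guide",
]

for url in priority_urls:
    allowed = rp.can_fetch("Googlebot", url)
    print(f"{'OK ' if allowed else 'BLOCKED'} {url}")
```

Run this against your priority-page list before touching anything else; a BLOCKED money page means “stay and fix,” not “migrate.”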

B) If access is fine, ask: “Is Google choosing not to index because of value and duplication?”

  • If many pages are thin, duplicated, paginated, parameterized, or user-generated content with little unique value → Stay and fix content architecture (prune or noindex, consolidate, improve internal linking) [8], [11].
  • If the site is in a “bad state” overall, rebuilding may outperform endless patching—Google commentary reported in the search press supports starting over when quality is deeply compromised [15].

C) If value fixes don’t move the needle, assess domain-level baggage risk

  • If backlink risk flags are severe and historical tactics are questionable → consider Clean-slate migration (Step 5), accepting you will forfeit much inherited equity, but you reduce the probability of carrying the same suppression signals.

D) If domain is healthy but branding or tech requires a move

  • Choose Migration with 301s (Step 6) per Google’s move guidance [4], [5].

“Before → After” Outcomes

Stay + fix wins (position 75 recovery): A local lead-gen site stuck around position 75 discovers an accidental robots block and sitemap pointing to non-canonical URLs. After fixes and GSC validation, crawled-not-indexed drops steadily and priority pages climb into top 20 over the next 4–8 weeks—trend behavior consistent with GSC validation and crawl and index fundamentals [10], [1].

301 migration is right (healthy domain): A SaaS company rebrands. They map redirects 1:1, update internal links, and keep redirects live for at least a year, matching Google’s recommendation. Rankings dip briefly, then consolidate over subsequent months—consistent with “changes can take months” guidance [4], [13].

Clean slate accelerates reindexing: A content site with heavy duplication rewrites templates, reduces near-duplicates, and launches on a new domain without redirects. With a smaller, higher-quality corpus and clearer internal linking, key sections reindex faster. This is plausible because a smaller, cleaner site improves crawl demand and reduces wasted crawl paths—aligned with crawl budget principles [2]—but results vary widely.

What to do next: Make the decision based on evidence progression: if technical and content improvements measurably reduce crawled-not-indexed and improve indexing of priority pages, do not migrate. If you can’t shift the curve and domain baggage signals are strong, plan a clean slate deliberately.


Step 5: Clean-Slate Migration Protocol (No 301s, Minimal Footprint Carryover)

A clean-slate move is not a “standard migration.” It’s a controlled relaunch designed to reduce inherited signals when you believe the current domain is irreparably compromised. Search industry reporting on Google commentary suggests that if you want a true reset, you avoid redirects and avoid making it trivial to map old-to-new one-to-one [7]. That’s the core principle: reduce continuity signals.

Important trade-off: you are intentionally abandoning most historical link equity. This path is justified only when you believe equity is net-negative, or when the site’s overall state is so poor that rebuilding is the more rational investment—an option Google’s representatives have been reported to recommend for sites in a “bad state” [15].

Protocol (High-Level)

Build a new information architecture: Don’t replicate every old URL. Prioritize your top converting and top informational clusters.

Rewrite and upgrade content: Reduce templated duplication; add unique value, original media, and clearer topical focus. Moz and other industry analyses consistently point to thin and duplicate patterns as drivers of crawled-not-indexed outcomes [8].

Launch with strict index hygiene: Only submit canonical, index-worthy URLs in XML sitemaps; keep parameters and low-value pages out. Crawl budget guidance supports reducing crawl waste and focusing Googlebot on important pages [2].
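That sitemap-hygiene step can be enforced mechanically. A sketch, assuming a crawl export shaped as a URL-to-declared-canonical mapping (the data shape is an assumption, not a standard format):

```python
from urllib.parse import urlparse
from xml.sax.saxutils import escape

def sitemap_entries(pages):
    """Keep only self-canonical, parameter-free URLs for the XML sitemap.

    `pages` maps URL -> its declared canonical (assumed crawl-export shape).
    """
    keep = []
    for url, canonical in pages.items():
        parsed = urlparse(url)
        if parsed.query:            # drop parameterized URLs
            continue
        if canonical != url:        # drop non-self-canonical URLs
            continue
        keep.append(url)
    return keep

def render_sitemap(urls):
    """Render the surviving URLs as a minimal sitemaps.org urlset."""
    body = "\n".join(f"  <url><loc>{escape(u)}</loc></url>" for u in sorted(urls))
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        f"{body}\n</urlset>"
    )
```

The filter deliberately errs toward exclusion: anything parameterized or non-self-canonical stays out of the submitted sitemap, which is the crawl-waste reduction the crawl-budget guidance describes [2].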

Separate analytics and tracking: Avoid obvious linking patterns from old domain assets that recreate continuity.

Let the old domain sunset carefully: You may keep it live (no redirects) with minimal pages or a basic notice for users, but understand Google may still crawl it.

Case-Style Uses

Affiliate site with legacy link schemes: Historic link buying created anchor manipulation. The site is rebuilt with fewer pages, stronger editorial, and no redirect continuity to avoid importing the same trust issues—concept aligns with penalty carryover risks discussed in industry coverage [6], [7].

User-generated content forum overwhelmed by thin pages: Instead of migrating everything, the team launches a curated knowledge base on a new domain with only best content, leaving the user-generated content behind. This reduces low-value URL volume and improves crawl demand—aligned with crawl budget concepts [2].

Local directory “clone pages”: A directory replaces thousands of near-identical pages with fewer, richer hub pages. Indexation improves because the corpus is smaller and more differentiated—aligned with common crawled-not-indexed causes [8].

What to do next: If you choose clean slate, commit to genuinely better content and tighter URL inventory. “New domain, same junk” usually recreates the same indexing selection outcome—just later.


Step 6: Migration with 301s Protocol (Preserve Signals, Avoid Self-Inflicted Indexing Loss)

If you’re not trying to escape domain baggage, follow Google’s standard site-move playbook: 301 redirects, updated internal links, correct sitemaps, and long-lived redirects [4], [5]. The biggest failures come from redirect mapping gaps, chaining, and launching with broken canonicals—problems that look like indexing issues but are really migration execution errors.

Protocol (Practical Sequence)

Pre-migration URL inventory: Export all indexable URLs, top landing pages, and top-linked pages.

1:1 redirect mapping: Redirect every important old URL to the most relevant new URL. Avoid mass redirecting everything to the homepage.
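Two mapping failures dominate in practice: old URLs missing from the map entirely, and a large share of URLs collapsed onto the homepage. A minimal pre-launch audit sketch, assuming your tooling can export the mapping as a plain dict (the 20% homepage threshold is a heuristic, not a Google rule):

```python
def audit_redirect_map(old_urls, redirect_map, homepage):
    """Flag mapping gaps and mass homepage collapse in a 1:1 redirect map.

    `redirect_map` is {old_url: new_url}; shapes are assumptions about
    whatever export your migration tooling produces.
    """
    missing = [u for u in old_urls if u not in redirect_map]
    to_home = [u for u, target in redirect_map.items() if target == homepage]
    home_share = len(to_home) / len(redirect_map) if redirect_map else 0.0
    return {
        "missing": missing,
        "homepage_share": home_share,
        "mass_homepage_redirect": home_share > 0.2,  # heuristic threshold
    }
```

Run the audit against the full pre-migration URL inventory from the previous step, not just the pages you remembered to map.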

Update internal links at launch: Don’t rely on redirects for internal navigation; it wastes crawl resources and slows consolidation—aligned with crawl efficiency principles [2], [3].

Submit new XML sitemap(s): Include only canonical URLs. Keep old sitemaps accessible for a period to help discovery of redirects; Google has discussed keeping sitemaps around post-move in industry coverage [16].

Keep 301s live for at least a year: Search industry reporting on Google guidance emphasizes this timeline so signals can be reprocessed multiple times [12].

Migration Outcomes

Healthy rebrand migration: Temporary ranking volatility lasts weeks, then stabilizes as signals consolidate—consistent with Google’s caution that changes can take months to reflect fully [13].

Partial mapping causes “crawled-not-indexed”: A site migrates but misses redirects for older blog posts. Google crawls old URLs from backlinks, hits 404s, and spends less demand on new URLs; the new posts are discovered but not indexed quickly. Fixing redirect gaps and resubmitting sitemaps improves the indexing curve—aligned with crawl budget and indexing basics [2], [3].

Redirect chains slow consolidation: Old → interim → new chains add latency and crawl inefficiency. Flattening to single-hop 301s improves crawl effectiveness, which can reduce “crawled-not-indexed” for newly launched sections over time—aligned with crawl capacity and demand concepts [2].
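Chain flattening is mechanical once you have the redirect map as data. This sketch resolves every source to its final destination so each redirect becomes a single 301 hop, and surfaces loops instead of silently shipping them:

```python
def flatten_redirects(redirect_map):
    """Resolve each source to its final target so every redirect is one hop.

    Raises ValueError on a redirect loop. `redirect_map` is {source: target}.
    """
    flat = {}
    for source in redirect_map:
        seen = {source}
        target = redirect_map[source]
        while target in redirect_map:      # follow the chain
            if target in seen:
                raise ValueError(f"redirect loop at {target}")
            seen.add(target)
            target = redirect_map[target]
        flat[source] = target
    return flat
```

Feeding the flattened map back into your server config replaces old → interim → new chains with old → new in one hop.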

What to do next: If you migrate with 301s, treat it like an engineering release: inventory, mapping, canonical hygiene, sitemap discipline, and long-lived redirects. Most “indexing problems” after a migration are actually execution gaps.


Step 7: Post-Migration (or Post-Fix) Recovery Monitoring and Red Flags

Whether you stayed and fixed, migrated with 301s, or launched clean slate, your job isn’t done at launch. Google’s Search Console documentation emphasizes ongoing monitoring and debugging through reports like Page Indexing, Core Web Vitals, and URL Inspection, plus validating fixes [10]. Google also notes that recrawling, reindexing, and reflecting major changes can take months, not days [13].

What to Monitor (Weekly for 8–12 Weeks)

GSC Page Indexing trend: Your “Crawled – currently not indexed” count should trend down for priority templates after improvements, even if total URLs fluctuate.

URL Inspection for key pages: Confirm Google sees the canonical you intend, the last crawl is recent enough, and the page is eligible for indexing [10].

Sitemaps: Track discovered vs indexed counts for submitted sitemaps; keep them clean of non-canonical URLs—aligned with crawl budget hygiene [2].

Server logs (if available): Confirm Googlebot is spending time on important URL paths.
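A minimal log-analysis sketch for that check, assuming combined-log-format access logs. The user-agent substring match is a simplification—production verification should also confirm Googlebot IPs via reverse DNS, since the UA string is trivially spoofed:

```python
import re
from collections import Counter

# Matches the request and user-agent fields of a combined-log-format line.
LOG_RE = re.compile(
    r'"(?:GET|HEAD) (?P<path>\S+) HTTP/[\d.]+" \d{3} \S+ "[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_sections(log_lines):
    """Count Googlebot requests per first path segment (e.g. /services)."""
    counts = Counter()
    for line in log_lines:
        m = LOG_RE.search(line)
        if m and "Googlebot" in m.group("ua"):
            section = "/" + m.group("path").lstrip("/").split("/", 1)[0]
            counts[section] += 1
    return counts
```

If the counts show Googlebot spending most of its requests on parameterized or low-value sections, that is the crawl-waste signal the weekly monitoring is meant to catch.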

“GSC Snapshot” Patterns

Trending down post-fix: After pruning thin pages and fixing canonicals, the crawled-not-indexed bucket declines week-over-week while “Indexed” increases for the primary sitemap set—pattern consistent with the concept of focusing crawl demand [2], [10].

Red flag: indexed count drops after migration: If indexed pages fall sharply and stay down, check for accidental noindex, canonical to old domain, blocked rendering resources, or redirect loops—classic migration hygiene failures. Monitor with GSC tools [10].

Red flag: old domain resurfaces: Google may keep showing old URLs intermittently if redirects were removed early or mapping is incomplete, which aligns with guidance to keep redirects long enough for repeated reprocessing [12], [13].

What to do next: Set expectations internally: meaningful recovery often takes weeks to months. Your KPI is not “index everything,” it’s “index and rank the pages that deserve it,” with a clean crawl path and consistent canonical signals.


Clean-Slate Migration Checklist (No-301 Reset)

  • [ ] Confirm evidence of domain-level baggage (not just technical crawled-not-indexed) using GSC and backlink risk review [10], [1].
  • [ ] Define the new URL inventory: only pages you can defend as unique and valuable [8].
  • [ ] Build new information architecture and internal linking (hub → spoke), minimize low-value parameter and pagination crawl paths [2].
  • [ ] Create fresh content (rewrite templates, add unique sections and media, reduce duplication) [8].
  • [ ] Launch with: correct robots.txt, no accidental noindex, correct canonicals, mobile and rendering accessible [3], [10].
  • [ ] Submit clean XML sitemap(s) containing only canonical URLs; monitor indexing by sitemap [2], [10].
  • [ ] Avoid redirects and avoid 1:1 footprint replication when the goal is a true reset—aligned with reported Google guidance [7].
  • [ ] Weekly monitoring for 12 weeks: indexing trends, key URL Inspection, server errors, crawl spikes [10].

Related Questions

How long does recovery take after fixing “Crawled – currently not indexed”?

If the root cause is technical (robots, canonicals, sitemap hygiene), you often see directional improvement in weeks, but full reprocessing can take months depending on crawl demand and site size [2], [13].

How long should I keep 301 redirects after a domain migration?

Google’s migration guidance and industry reporting of Google commentary commonly recommend keeping 301s for around a year so Google can reprocess the move multiple times [4], [12].

Should I do a partial migration (only the “good” sections)?

Yes—sometimes. If “good” sections have strong unique value and clean signals, migrating only those can reduce low-quality URL volume while you fix or retire the rest. Use GSC to validate indexing outcomes by directory and sitemap [10].

Can I use a mixed strategy: 301 some URLs, leave others behind?

You can, but be explicit about intent: redirect high-value pages you trust; don’t redirect low-quality or risky sections if you’re trying to reduce baggage. Monitor for unexpected indexing of old URLs and crawl waste [10], [2].

What if I migrate with 301s and still have indexing issues?

Assume execution gaps first: wrong canonicals, noindex, blocked resources, redirect chains or loops, or sitemaps full of non-canonical URLs. Debug with URL Inspection and Page Indexing reports [10].


Next Steps

If you’re stuck between “fix it” and “start over,” run this framework on your top 50 revenue-driving URLs and your largest crawled-not-indexed templates first. The right decision becomes obvious when you can explain, with evidence, what Google is seeing—and why it’s declining to index.


Sources

[1] https://developers.google.com/search/help/crawling-index-faq
[2] https://developers.google.com/crawling/docs/crawl-budget
[3] https://developers.google.com/search/docs/crawling-indexing
[4] https://developers.google.com/search/docs/monitor-debug/search-console-start
[5] https://developers.google.com/search/docs/crawling-indexing/javascript/javascript-seo-basics
[6] https://moz.com/blog/crawled-currently-not-indexed-coverage-status
[7] https://moz.com/blog/crawled-currently-not-indexed-coverage-status
[8] https://www.conductor.com/academy/index-coverage/faq/crawled-currently-not-indexed/
[9] https://www.conductor.com/academy/index-coverage/faq/crawled-currently-not-indexed/
[10] https://support.google.com/webmasters/thread/248401570/page-is-not-indexed-crawled-currently-not-indexed?hl=en
[11] https://support.google.com/webmasters/thread/248401570/page-is-not-indexed-crawled-currently-not-indexed?hl=en
[12] https://www.bigredseo.com/crawled-currently-not-indexed/
[13] https://www.bigredseo.com/crawled-currently-not-indexed/
[14] https://yoast.com/crawled-currently-not-indexed-google-search-console/
[15] https://yoast.com/crawled-currently-not-indexed-google-search-console/
[16] https://pomodo.io/tech-archive/google-redirecting-urls-keep-old-sitemaps-6-months/
