checklists 15 min read

Website Redesign SEO Checklist: How to Replatform Without Losing Traffic

Ali Gundogdu

A website redesign is the most common moment when SEO traffic gets quietly destroyed. Not by a Google update, not by a competitor, but by a launch where someone forgot the redirect map, the new URLs were shorter “because they look cleaner”, and the structured data lived only in the old templates. By the time the dashboard shows a 40 percent drop in organic traffic, the project is closed, the team has moved on, and someone is left explaining why the new site looks great and ranks for nothing.

This checklist exists to prevent that story. It walks through the work that has to happen before, during, and after a redesign so that the visibility you built over years travels with the new site instead of getting left behind. It is structured around the moments where things actually go wrong.

Why Redesigns Tank SEO

The single-sentence answer: search engines see the old site and the new site as different sites unless you explicitly tell them otherwise. Every change introduces a chance for that signal to break.

URLs change because the new information architecture is cleaner. The redirect map is missing the long tail. Meta titles get rewritten by a new copywriter who did not know the old patterns. Internal links go through three hops because the dev team did “compatibility redirects” without checking the destinations. The XML sitemap is generated fresh and lists only the live new URLs, while Google still has 50,000 old URLs in its index that no longer resolve.

None of this is malice. It is the predictable result of a project that treats SEO as a “phase 2” concern instead of building it into every decision. The fix is to surface those decisions early and bake the answers into the launch plan.

Hybrid risograph and antique illustration of two classical temples connected by an arrow, representing migration from an old site to a new site

Phase 1: Pre-Redesign Audit

Before the design team opens Figma, the SEO team needs a complete picture of what the current site earns. This is the baseline against which everything else gets measured.

Crawl the current site end to end. Use a real SEO crawler with JavaScript rendering and export the full URL inventory. For each URL, record the status code, the final canonical, the meta title, the meta description, the H1, the internal link count, the external backlink count, and the organic traffic over the last 12 months. This dataset is the redirect map’s source of truth.

Pull Search Console performance for the last 16 months. Export the URLs that received clicks and impressions, the queries that drove them, and the page level traffic distribution. The top 20 percent of pages usually drive 80 percent of the traffic. Those pages need explicit preservation plans; the long tail can be handled with patterns.
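
That 80/20 check is easy to run on the export itself. A minimal sketch, assuming the export is loaded as rows with `page` and `clicks` fields (the field names are an assumption about your export format):

```python
def pages_driving_share(rows, share=0.8):
    """Return the smallest set of pages accounting for `share` of total clicks."""
    ranked = sorted(rows, key=lambda r: r["clicks"], reverse=True)
    total = sum(r["clicks"] for r in ranked) or 1
    running, keep = 0, []
    for r in ranked:
        keep.append(r["page"])
        running += r["clicks"]
        if running / total >= share:
            break
    return keep

# Hypothetical export: the head page needs an explicit preservation plan.
rows = [
    {"page": "/blog/a", "clicks": 800},
    {"page": "/blog/b", "clicks": 150},
    {"page": "/blog/c", "clicks": 50},
]
print(pages_driving_share(rows))  # → ['/blog/a']
```

Everything this function returns gets a named destination in the redirect map; everything else can be handled with patterns.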

Pull backlink data. Export the URLs that have inbound links. A page with no organic traffic but with high authority backlinks is still a critical asset because that link equity flows through it to the rest of the site. Losing that page in a migration is invisible in traffic dashboards but visible in the long term in the site’s overall authority.

Map the structured data. What schema types live on which templates? Article, Product, Organization, BreadcrumbList, FAQPage. Every one of these has to be preserved in the new templates, ideally with identical or improved markup. A page that loses its FAQPage schema loses its rich result spot the next time Google recrawls.

Document the URL patterns. How are URLs constructed today? /blog/{slug}, /product/{category}/{slug}, /{lang}/{section}/{slug}. The new site does not have to keep the same patterns, but if it changes them, every change needs a redirect. Write the patterns down before any decision about the new architecture.

The output of phase 1 is a spreadsheet with every URL on the current site, its key signals, and its planned destination on the new site. That spreadsheet is the source of truth for everything that follows. A solid technical SEO audit is the right framework for this phase.

Phase 2: The Redirect Map (The Single Most Important Artifact)

The redirect map is where most migrations fail. It is also the easiest part to get right if you take it seriously early.

A redirect map is a list of old URLs and the new URL each one should send a user (and a search engine) to. It needs to cover every URL that has ever been indexed, not just the URLs that are popular today. Forgotten pages with high quality backlinks are common; they earn the site authority every day until the redirect map drops them.

The rules of a good redirect map.

Every redirect is HTTP 301, not 302. A 301 tells Google the move is permanent and transfers signal; a 302 keeps the old URL as the canonical and leaks signal. Use 302 only for truly temporary moves (an outage, an A/B test).

Every redirect lands on a relevant destination, not the homepage. Redirecting /blog/post-about-hreflang to /blog because “the homepage is good for everything” loses the relevance match. The right destination is the new page that covers the same topic, even if it is not a perfect match. Better to land 80 percent on topic than to land on a generic listing.

Every redirect is a single hop. /old-url → /intermediate-url → /new-url is a redirect chain that bleeds equity and slows crawling. Always redirect directly to the final URL, even if it means rewriting old chained rules.

Every parameter version is covered. /blog/post?utm_source=... should still redirect cleanly, ideally with the parameter stripped on the new side. Forgotten parameter versions are how analytics breaks for weeks after launch.
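
Stripping the parameters before the lookup can be sketched with the standard library. This is an illustration of the normalization step, not a drop-in rule for your redirect layer:

```python
from urllib.parse import urlsplit, urlunsplit

def strip_tracking_params(url):
    """Drop the query string so /blog/post?utm_source=x maps like /blog/post."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

print(strip_tracking_params("/blog/post?utm_source=news"))  # → /blog/post
```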

How to build the map at scale.

For pattern-based changes (/blog/post becomes /articles/post), use regular expressions in the redirect rules. For one-off changes, list each pair explicitly. Most redirect maps end up with 70 percent patterns and 30 percent explicit, which is healthy.
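
A minimal sketch of that pattern-plus-explicit split; the rules and paths here are hypothetical examples, not rules from any real site:

```python
import re

# Pattern rules: each (regex, replacement) pair covers a whole URL family.
PATTERN_RULES = [
    (re.compile(r"^/blog/(?P<slug>[^/]+)$"), r"/articles/\g<slug>"),
    (re.compile(r"^/product/[^/]+/(?P<slug>[^/]+)$"), r"/shop/\g<slug>"),
]
# One-off pairs listed explicitly.
EXPLICIT_RULES = {"/about-us": "/company"}

def map_url(old_path):
    """Return the planned destination for an old path, or None for a hole in the map."""
    if old_path in EXPLICIT_RULES:
        return EXPLICIT_RULES[old_path]
    for pattern, replacement in PATTERN_RULES:
        if pattern.match(old_path):
            return pattern.sub(replacement, old_path)
    return None  # this URL will 404 after launch unless a rule is added

print(map_url("/blog/hreflang-guide"))  # → /articles/hreflang-guide
```

Running every URL from the phase 1 inventory through a function like this, and treating every `None` as a ticket, is the cheapest pre-launch audit you can do.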

Validate the map before launch. Run a crawler against the new site with the old URLs as input. Every old URL should hit a 301 and land at a 200. Any 404 in the report is a hole; any 301 to 301 is a chain; any 200 from an old URL means a redirect rule never got configured.
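
The same validation can be rehearsed offline against the map itself before any crawler touches the new site. This sketch simulates redirect-following over a rules table; in production you would run the equivalent check against live HTTP responses:

```python
def audit_redirects(old_urls, redirect_rules, live_pages, max_hops=5):
    """Classify each old URL: 'ok' (single hop to a live page), 'chain', '404', or 'no-rule'."""
    report = {}
    for url in old_urls:
        current, hops = url, 0
        while current in redirect_rules and hops < max_hops:
            current = redirect_rules[current]
            hops += 1
        if hops == 0:
            report[url] = "no-rule"   # old URL was never redirected
        elif hops > 1:
            report[url] = "chain"     # 301 -> 301, bleeds equity
        elif current in live_pages:
            report[url] = "ok"
        else:
            report[url] = "404"       # redirect lands on a missing page
    return report
```

Every entry that is not "ok" maps directly to one of the failure modes above: a hole, a chain, or a missing rule.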

Risograph and antique diagram of arrows mapping old URLs to new URLs, with some arrows neat and direct and others tangled, on a parchment background

Phase 3: URL Structure Decisions

If the new site keeps the same URLs, skip this phase. Most redesigns do not. New URL structures are a design opportunity, but every change has a cost.

Resist the urge to “clean up” URLs unless there is a real reason. A URL like /blog/2023/06/article-title looks dated, but changing it means a redirect for every blog post ever published. If the dated structure is not actively hurting (it is not), leave it. Spend the redirect budget on structures that are actually broken.

Pick a clear hierarchy and stick to it. A URL like /category/subcategory/product-slug reads well to humans and to search engines. Inconsistent depths (/product-slug for some, /category/product-slug for others) make the site harder to crawl and harder to interpret.

Decide trailing slashes once and forever. /about/ and /about are different URLs to search engines. Pick one and enforce it server side with a redirect. Mixing them is a slow leak of duplicate URL pairs and lost link equity.
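
Mixed pairs are easy to spot in a URL inventory before they become a slow leak. A sketch over an exported URL list:

```python
def trailing_slash_conflicts(urls):
    """Find URLs that exist in both slashed and unslashed form."""
    seen = set(urls)
    return sorted(u for u in seen if u.rstrip("/") != u and u.rstrip("/") in seen)

print(trailing_slash_conflicts(["/about", "/about/", "/contact"]))  # → ['/about/']
```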

Handle language and region prefixes carefully. If the new site introduces hreflang for the first time, that is its own project. The hreflang implementation guide covers the rules.

Avoid stop words and filler. /the-best-guide-to-seo is worse than /seo-guide. Shorter and more direct URLs are easier to scan, share, and remember. But shorter is not the same as cryptic; /g1 is bad for completely different reasons.

A summary table:

Decision | Good choice
Pattern | /category/subcategory/slug or flat /slug
Date in URL | Avoid unless content needs versioning
Trailing slash | Pick one, enforce server side
Stop words | Drop
Case sensitivity | Lowercase, redirect uppercase variants
Special characters | ASCII only, hyphens for spaces

Phase 4: Carry Over On-Page Signals

Every page on the new site needs to carry the SEO signals that earned the old page its rankings. This is mostly invisible work that disappears if it goes well.

Title tags and meta descriptions. Bring over the existing tags by default. If the rewrite team wants new ones, that is a separate decision per page, not an automatic overwrite. Pages with strong CTR in Search Console should keep their proven titles. The full mechanics of this live in the meta tags SEO guide.

Heading structure. Every page should still have one H1, hierarchical H2s and H3s, and content that maps to the queries it ranks for. A redesign that flattens all headings into divs because “the design system uses divs” is a self-inflicted SEO wound.

Internal links. Map the internal link graph before the migration. If page A linked to page B with anchor text “X technique”, and B is now /new-url, page A’s link should be updated to point at the new URL, not redirected through the 301. Internal links going through 301s are not fatal but they are sloppy and they accumulate.
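
Finding the links that need updating is a simple join between the link graph export and the redirect map. A sketch, assuming links are exported as (source, target) pairs:

```python
def links_through_redirects(internal_links, redirect_map):
    """For each internal link whose target now 301s, return the final URL it should point at."""
    return {
        (source, target): redirect_map[target]
        for source, target in internal_links
        if target in redirect_map
    }

# Hypothetical link graph and map: /home's link to the old post should be rewritten.
links = [("/home", "/blog/old-post"), ("/home", "/about")]
redirects = {"/blog/old-post": "/articles/old-post"}
print(links_through_redirects(links, redirects))
```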

Canonical tags. Every indexable page on the new site needs a self referencing canonical. Pages that legitimately consolidate to another URL (parameter variants, paginated views, regional duplicates) need canonicals pointing at the master. The canonical tags guide is the reference for the rules.
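
A self-referencing canonical check can be done with the standard library's HTML parser. A sketch for spot checks, not a substitute for a full crawler:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every rel=canonical link tag in a page."""
    def __init__(self):
        super().__init__()
        self.canonicals = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical":
            self.canonicals.append(a.get("href"))

def self_canonical_ok(html, page_url):
    """True when the page declares exactly one canonical and it points at itself."""
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonicals == [page_url]
```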

Structured data. Bring over every schema type from the old templates. If you do not have an inventory yet, run a crawl with the schema markup guide framework and use it as the migration target.

hreflang. If the site is multilingual, every hreflang annotation has to update at the same time as the URLs change. A redirect map that forgets to refresh hreflang leaves Google pointing at URLs that 301 elsewhere, which collapses the international setup.

Phase 5: Staging Environment QA

Most launches fail because the testing environment does not match production closely enough. Building a realistic staging is half the work.

Make staging crawlable but not indexable. Staging should be blocked from public search engines with noindex headers and HTTP basic auth, but a real crawler with credentials should be able to fetch it. Crawl staging the way Googlebot will crawl production.

Run a full audit against staging. Use the same tools you used in phase 1. Compare the metrics. Pages that used to have a meta description should still have one. Pages that used to have FAQ schema should still have it. Pages that used to be canonicalized to a master should still be.
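
That comparison can be sketched as a field-by-field diff between the phase 1 crawl and the staging crawl. The field names here are assumptions about your crawl export:

```python
def signal_regressions(baseline, staging, fields=("title", "meta_description", "schema")):
    """Per-URL fields that existed in the baseline crawl but are missing or changed on staging."""
    issues = {}
    for url, old in baseline.items():
        new = staging.get(url, {})
        lost = [f for f in fields if old.get(f) and old.get(f) != new.get(f)]
        if lost:
            issues[url] = lost
    return issues
```

Every URL in the result is a punch list item: a signal the old page had that the staging template dropped.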

Verify redirects on staging. Apply the redirect map to the staging configuration and crawl the old URLs against it. Every old URL should resolve to its mapped new URL with a single 301. This is the highest leverage test in the whole project, because it catches both missing rules and chained rules.

Check rendering. If the new site uses heavy JavaScript, render every key template through a JavaScript aware crawler and confirm the content extracted matches what is visible. A modern site that fails this test will fail JavaScript SEO on launch.

Measure performance. Core Web Vitals on staging predict Core Web Vitals on production for most templates. A new design that loads slower than the old design is a regression even if it is more beautiful. The Core Web Vitals guide covers the targets.

Pull a staging Search Console. Set up Search Console for the staging hostname temporarily and submit a sample sitemap. This surfaces problems Google specifically cares about that other tools might miss.

The deliverable from phase 5 is a punch list of every issue that must be resolved before launch and a signed off statement that the redirect map works as intended. No launch without that statement.

Phase 6: Launch Day

Launch day is mostly about doing nothing surprising. The work was done in the previous phases; launch day is the execution.

Pick a launch window with low traffic. Most sites have a window where traffic is at its lowest (late night, weekend morning). Use that window so that early problems affect the fewest users. Tuesday at 10am during a peak season is the wrong launch window for almost any site.

Update DNS, then verify. After DNS cuts over, verify from multiple geographic locations that the new site is responding. CDN propagation can leave one region on the old site for hours; catching that early matters.

Submit the new XML sitemap immediately. Generate the new sitemap, validate it, submit to Search Console. The faster Google sees the new URLs, the faster reindexing happens. The XML sitemap guide covers the validation step.
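
Generating a minimal, valid sitemap is a few lines with the standard library; submission itself happens in Search Console. A sketch:

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Serialize a minimal XML sitemap for the new URL set."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    root = ET.Element("urlset", xmlns=ns)
    for url in urls:
        loc = ET.SubElement(ET.SubElement(root, "url"), "loc")
        loc.text = url
    return ET.tostring(root, encoding="unicode")

print(build_sitemap(["https://example.com/"]))
```

The key rule: only live, indexable, canonical URLs from the new site go in. Old URLs, redirects, and noindexed pages do not.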

Crawl production with the old URL list. Within an hour of launch, run your old URL list through a crawler against production. Every old URL should hit a 301 and land at 200. Any 404 is a critical issue and should be fixed in the next deploy.

Watch the obvious dashboards. Search Console coverage, Google Analytics real time, error rate, server response time. The first 24 hours show whether the launch is healthy. Most launch problems become visible in this window if anyone is looking.

Communicate with the team. Stakeholders should know that some traffic fluctuation in the first week is normal as Google reindexes. A 5 percent dip for 7 to 14 days is recoverable; a 30 percent dip that does not start coming back after two weeks is a real problem.

Phase 7: The First 30 Days

After launch, the work shifts from prevention to detection. The first month is when problems that hid through testing reveal themselves.

Daily for the first week:

  • Search Console coverage report, watch for new “Not Found” entries
  • Search Console enhancements report, watch for schema warnings
  • Server logs, watch for Googlebot 404s and 500s
  • Internal Slack or stand up updates so the team sees the same signals
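
The server-log check can be sketched as a filter over combined-format access logs. The regex assumes that format and is an illustration, not a hardened log parser:

```python
import re

REQUEST_RE = re.compile(r'"[A-Z]+ (\S+) HTTP/[\d.]+" (\d{3})')

def googlebot_errors(log_lines):
    """Pull Googlebot hits that returned 404 or 500 from combined-format access logs."""
    hits = []
    for line in log_lines:
        m = REQUEST_RE.search(line)
        if m and "Googlebot" in line and m.group(2) in ("404", "500"):
            hits.append((m.group(1), m.group(2)))
    return hits
```

Run it daily over the previous day's logs; a growing list of distinct 404 paths means the redirect map still has holes that Google is actively finding.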

Weekly for the first month:

  • Full crawl of the site, compare against the pre-launch baseline
  • Search Console performance, watch the top URLs from phase 1
  • Backlink reports, make sure inbound links still resolve correctly
  • Compare traffic curves to the same period last year, not just to last week

Common problems that surface in week 2:

  • Redirect rules that exist but redirect to the wrong destination (cosmetic match without semantic match)
  • Pages that load fine for users but render empty for Googlebot (JavaScript regression)
  • Pages that lost their canonical and now compete with their own duplicates
  • Pages that gained a noindex accidentally from a default in the new CMS

Common problems that surface in week 4:

  • Long tail URLs that nobody remembered, now 404ing, accumulating in the Coverage report
  • Schema markup that validates but does not produce rich results because a required property was dropped
  • Internal link patterns that point at 301s instead of the final URL, draining a fraction of crawl efficiency

A 30 day review at the end of the first month closes the migration. Compare every signal from phase 1 to its current state. Anything not at parity gets a ticket. Most migrations are not “done” until the 30 day review finds nothing.

Risograph and antique illustration of an old parchment scroll being stamped with a wax seal at the bottom, suggesting a launch day completion ritual

What Goes Wrong Even When You Did Everything

A few problems survive even a careful migration. Knowing they exist helps you not panic when they show up.

Reindexing takes time. Even with a perfect redirect map, Google takes weeks to fully consolidate the old URLs into the new ones. A small visibility dip during this period is normal. The recovery curve, not the launch day number, is the metric that matters.

Cache invalidation lags. Browsers, CDNs, and Google’s own cache may show old versions of pages for hours after launch. If a user reports “the site is broken”, a hard refresh often fixes it. Server side, double check the CDN configuration.

Some queries shift. A page that ranked for “cheap red shoes” may rank slightly differently for that query after the migration, even if the content is identical. Google reevaluates a moved page against its current index, and the index has moved too. Small ranking shifts are noise; large shifts on the top pages are the signal to investigate.

Old screenshots in Search Console. The “Live Test” tool may show the old version for a while because it caches. Trust the URL Inspection tool’s history view, not the screenshot, for indexing state.

What to Use This Checklist For

For a single page redesign, this is overkill. For a section level redesign, use phases 2, 4, 6, 7. For a full site migration, all seven phases. The checklist is the spine; every project adapts the depth of each phase to its scale.

For deeper context on the rendering side of modern migrations, the JavaScript SEO and rendering guide covers what changes when the new site is React or Vue. For an audit framework that uses the same crawl data, the technical SEO audit checklist gives the structure of an ongoing program after the migration is closed. And Seodisias is built to surface the gap between what a crawler sees on the old site and what it sees on the new one, which is the daily question during a migration.

A website redesign should be a moment of investment in the brand, not a moment of bleeding the SEO foundation that brought users to the brand in the first place. With the right preparation, the new site launches better than the old one, on every signal that matters.