
Most technical SEO checklists are 80 items long and treat every issue the same. The issues aren't the same. A broken canonical tag will tank your rankings. A missing Open Graph image won't. We've run technical audits across 40+ websites since 2019, and we've learned that roughly 30 checks account for 90% of the technical issues that actually suppress organic traffic. This is that list, organized by what matters most.
“Every technical audit we run at ScaleGrowth follows a priority framework. We fix what moves the needle first, not what’s easiest to check off,” says Hardik Shah, Founder of ScaleGrowth.Digital.
The 30 checks below are grouped into three tiers. Critical items can directly kill your indexing or tank your Core Web Vitals scores. Important items affect crawl efficiency and user experience. Nice-to-have items are worth fixing but won’t cause emergencies if you skip them for a sprint.
What Does a Technical SEO Checklist Actually Cover?
A technical SEO checklist is a structured set of checks that evaluates how well search engines can crawl, render, index, and rank your website. It covers the infrastructure layer of SEO: server configuration, HTML markup, page speed, mobile readiness, and structured data. It’s different from content SEO or link building because it deals with the plumbing, not the messaging.
For practitioners, think of it as a health check for everything between your server and Google’s index. If your content is brilliant but your robots.txt blocks the entire /blog/ directory (we’ve seen it happen twice this year), none of that content will rank. Technical SEO makes sure the doors are open before you worry about what’s inside the room.
The Priority Table: All 30 Checks at a Glance
| # | Check | Category | Priority | Impact if Broken |
|---|---|---|---|---|
| 1 | Robots.txt not blocking key pages | Crawlability | Critical | Pages invisible to Google |
| 2 | XML sitemap exists and is valid | Crawlability | Critical | Slower discovery of new pages |
| 3 | No crawl errors on important URLs | Crawlability | Critical | Lost rankings on affected pages |
| 4 | Canonical tags are correct | Indexability | Critical | Duplicate content, split authority |
| 5 | No accidental noindex on live pages | Indexability | Critical | Pages deindexed entirely |
| 6 | HTTPS on all pages | Security | Critical | Ranking penalty, browser warnings |
| 7 | No mixed content warnings | Security | Critical | Browser blocks resources, broken pages |
| 8 | LCP under 2.5 seconds | Speed | Critical | Failed Core Web Vitals assessment |
| 9 | CLS under 0.1 | Speed | Critical | Poor user experience, ranking signal |
| 10 | INP under 200ms | Speed | Critical | Failed Core Web Vitals (replaced FID March 2024) |
| 11 | Mobile responsive design | Mobile | Critical | Mobile-first indexing failure |
| 12 | Redirect chains under 3 hops | Crawlability | Important | Wasted crawl budget, slower pages |
| 13 | No redirect loops | Crawlability | Important | Pages unreachable |
| 14 | Pagination handled properly | Indexability | Important | Thin content, crawl waste |
| 15 | Orphan pages identified and linked | Internal Linking | Important | Pages never discovered by crawlers |
| 16 | Internal link depth under 4 clicks | Internal Linking | Important | Deep pages get less crawl priority |
| 17 | Broken internal links fixed | Internal Linking | Important | Lost link equity, poor UX |
| 18 | Schema markup validated | Structured Data | Important | Lost rich snippets |
| 19 | Tap targets properly sized (48x48px min) | Mobile | Important | Mobile usability errors in GSC |
| 20 | JavaScript content rendered for bots | JS Rendering | Important | Content invisible to crawlers |
| 21 | Server response time under 200ms | Speed | Important | Slow TTFB degrades all other metrics |
| 22 | Image optimization (WebP, sizing, lazy load) | Speed | Important | Bloated pages, slow LCP |
| 23 | Hreflang tags correct (if multilingual) | International | Important | Wrong language served, duplicate issues |
| 24 | 404 page returns proper status code | Crawlability | Nice-to-have | Soft 404s waste crawl budget |
| 25 | Breadcrumb markup present | Structured Data | Nice-to-have | Missed SERP enhancement |
| 26 | Open Graph and Twitter Card tags | Social | Nice-to-have | Poor social sharing appearance |
| 27 | Font loading optimized | Speed | Nice-to-have | Flash of unstyled text, minor CLS |
| 28 | CSS/JS minified and compressed | Speed | Nice-to-have | Slightly larger page weight |
| 29 | HTTP/2 or HTTP/3 enabled | Speed | Nice-to-have | Slower resource multiplexing |
| 30 | Log file analysis for crawl patterns | Advanced | Nice-to-have | No visibility into actual crawler behavior |
Now let’s go through each group in detail.
Critical Checks: Fix These First or Nothing Else Matters
1. Is Your Robots.txt Blocking Important Pages?
Your robots.txt file tells search engine crawlers which parts of your site they can and can’t access. One wrong line in this file can make an entire section of your website invisible to Google. We ran an audit for an ecommerce brand in 2024 that had accidentally blocked their entire /products/ subfolder with a Disallow rule left over from a staging environment. Six months of product pages, gone from the index.
Check your robots.txt right now. Go to yourdomain.com/robots.txt and read every line. Make sure it doesn't block your key landing pages, blog posts, or category pages. Then validate it using the robots.txt report in Google Search Console (the standalone robots.txt Tester was retired in late 2023). This takes 3 minutes and could save you months of lost traffic.
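If you'd rather script the check than eyeball it, Python's standard library ships a robots.txt parser. A minimal sketch; the domain and path list are placeholders you'd swap for your own:

```python
# Sanity-check robots.txt against your key URLs. Standard library only.
# SITE and KEY_PATHS are placeholders; swap in your own.
from urllib.robotparser import RobotFileParser

SITE = "https://www.example.com"
KEY_PATHS = ["/", "/blog/", "/products/", "/pricing/"]

rp = RobotFileParser()
rp.set_url(f"{SITE}/robots.txt")
rp.read()  # fetches and parses the live file

for path in KEY_PATHS:
    url = SITE + path
    verdict = "OK" if rp.can_fetch("Googlebot", url) else "BLOCKED"
    print(f"{verdict:8} {url}")
```

Run this in CI after every deploy and the staging-Disallow scenario above gets caught in minutes instead of months.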
2. Does Your XML Sitemap Exist and Is It Valid?
An XML sitemap tells Google every URL you want indexed. If you don’t have one, Google still finds pages through internal links, but it’s slower and less complete. According to Google’s own documentation, sitemaps are especially useful for sites with more than 500 pages, sites with lots of archived content, and new sites with few external links.
Your sitemap should be at /sitemap.xml, referenced in your robots.txt, submitted to Google Search Console, and contain only 200-status URLs. No 404s, no redirects, no noindexed pages. A sitemap with junk URLs actually hurts you because it wastes your crawl budget on pages Google will abandon anyway.
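Here's a minimal sketch of that sitemap audit using only Python's standard library. It assumes a flat URL sitemap rather than a sitemap index, and the domain is a placeholder:

```python
# Audit sitemap URLs: each should return 200 with no redirect.
# Standard library only; assumes a flat URL sitemap, not a sitemap index.
import urllib.error
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP = "https://www.example.com/sitemap.xml"  # placeholder
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

with urllib.request.urlopen(SITEMAP, timeout=10) as resp:
    root = ET.parse(resp).getroot()

for loc in root.findall(".//sm:loc", NS):
    url = loc.text.strip()
    try:
        with urllib.request.urlopen(url, timeout=10) as page:
            if page.geturl() != url:  # urllib follows redirects silently
                print(f"REDIRECT {url} -> {page.geturl()}")
    except urllib.error.HTTPError as e:
        print(f"{e.code} {url}")
```

Anything this prints is a junk URL that belongs out of your sitemap.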
3. Are You Monitoring Crawl Errors?
Google Search Console’s Page Indexing report shows you every URL Google tried to crawl but couldn’t. The errors that matter most are server errors (5xx), which signal reliability problems, and 404s on pages that used to rank. If you had a page ranking #4 for a mid-volume keyword and it suddenly returns a 404, that ranking is gone within days.
Check this report weekly. Not monthly. Weekly. Set up alerts if your GSC setup allows it. We build automated monitoring into every technical SEO audit we run because manual checking gets forgotten the moment the team gets busy.
4. Are Your Canonical Tags Correct?
Canonical tags tell Google which version of a page is the “real” one when multiple URLs serve similar content. This is straightforward in theory. In practice, it’s one of the most commonly broken elements we find in audits.
Common mistakes: pages canonicalizing to themselves when they shouldn’t (paginated pages), pages canonicalizing to a completely different page by accident (CMS misconfiguration), and HTTP pages canonicalizing to HTTP instead of HTTPS. Every page on your site should have a self-referencing canonical tag pointing to its own HTTPS URL, unless it’s intentionally a duplicate of another page.
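A quick spot-check is easy to script between full crawls. This sketch uses a regex heuristic rather than a real HTML parser (it assumes rel appears before href in the tag), so treat it as a first pass; a crawler does this properly at scale, and the URLs are placeholders:

```python
# Spot-check canonicals: each page should point at its own HTTPS URL.
# Regex heuristic (assumes rel comes before href); PAGES is a placeholder.
import re
import urllib.request

PAGES = [
    "https://www.example.com/",
    "https://www.example.com/blog/some-post/",
]
CANONICAL_RE = re.compile(
    r'<link[^>]+rel=["\']canonical["\'][^>]*href=["\']([^"\']+)["\']', re.I
)

for url in PAGES:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    m = CANONICAL_RE.search(html)
    if not m:
        print(f"MISSING canonical on {url}")
    elif m.group(1).rstrip("/") != url.rstrip("/"):
        print(f"MISMATCH {url} -> canonical {m.group(1)}")
```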
5. Do Any Live Pages Have a Noindex Tag?
A noindex meta tag or X-Robots-Tag header tells Google not to include a page in its index. This is useful for staging environments, internal search results pages, and thin tag archive pages. It’s devastating when applied to your homepage or money pages by mistake.
We’ve seen this happen after site migrations more times than we can count. A developer sets noindex on the staging version, the migration happens, and nobody removes the tag. Three weeks later someone notices organic traffic dropped 70%. Crawl your entire site with Screaming Frog or Sitebulb and filter for noindex directives. Every single one should be intentional.
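For a handful of money pages, a scripted spot-check between crawls is cheap insurance. A sketch covering both the meta tag and the X-Robots-Tag header, with placeholder URLs and a heuristic regex:

```python
# Flag noindex in either the meta robots tag or the X-Robots-Tag header.
# Regex heuristic; PAGES is a placeholder list of your money pages.
import re
import urllib.request

PAGES = ["https://www.example.com/", "https://www.example.com/pricing/"]
META_RE = re.compile(
    r'<meta[^>]+name=["\']robots["\'][^>]*content=["\']([^"\']+)["\']', re.I
)

for url in PAGES:
    with urllib.request.urlopen(url, timeout=10) as resp:
        header = resp.headers.get("X-Robots-Tag", "")
        html = resp.read().decode("utf-8", errors="replace")
    m = META_RE.search(html)
    meta = m.group(1) if m else ""
    if "noindex" in header.lower() or "noindex" in meta.lower():
        print(f"NOINDEX on {url} (header={header!r}, meta={meta!r})")
```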
6. Is HTTPS Active on Every Page?
Google confirmed HTTPS as a ranking signal back in 2014. By 2026, there’s no excuse for running any page on HTTP. But it’s not enough to have an SSL certificate installed. You need to verify that every page redirects from HTTP to HTTPS, that your canonical tags point to HTTPS versions, and that your sitemap uses HTTPS URLs.
The bigger concern today isn’t whether you have HTTPS. It’s whether you have mixed content.
7. Do You Have Mixed Content Warnings?
Mixed content happens when an HTTPS page loads resources (images, scripts, stylesheets) over HTTP. Modern browsers will block these resources, which can break your page layout, hide images, or disable functionality entirely. Google Chrome DevTools will show you mixed content warnings in the Console tab.
The fix is usually straightforward: update the URLs of those resources to HTTPS, or use protocol-relative URLs. But on large sites with thousands of pages and years of content, mixed content can be buried in old blog posts that embed images using hardcoded HTTP URLs. Crawl tools catch these systematically.
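As a first pass before a full crawl, you can scan a page for hardcoded http:// resource URLs. A rough sketch (regex heuristic, placeholder URL; stylesheet link href values deserve the same treatment):

```python
# List src attributes on an HTTPS page that still load over http://.
# Regex heuristic; stylesheet <link href> values deserve the same check.
import re
import urllib.request

PAGE = "https://www.example.com/"  # placeholder
HTTP_SRC_RE = re.compile(r'\bsrc=["\'](http://[^"\']+)["\']', re.I)

with urllib.request.urlopen(PAGE, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

for resource in sorted(set(HTTP_SRC_RE.findall(html))):
    print(f"MIXED CONTENT: {resource}")
```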
8. Is Your Largest Contentful Paint (LCP) Under 2.5 Seconds?
LCP measures how long it takes for the largest visible element on your page to render. Google considers anything under 2.5 seconds “good.” According to web.dev’s Core Web Vitals documentation, pages that fail the LCP threshold see measurably lower rankings in mobile search results.
The usual culprits for poor LCP are unoptimized hero images (the #1 offender), slow server response times, render-blocking JavaScript, and CSS that loads massive font files before the page can paint. Start by checking your LCP element in PageSpeed Insights. It tells you exactly which element is the bottleneck. Then work backward from there.
9. Is Your Cumulative Layout Shift (CLS) Below 0.1?
CLS measures how much your page layout shifts while loading. If you’ve ever tried to click a button on a mobile page and the layout jumped right before your tap, causing you to click an ad instead, that’s high CLS. Google’s threshold is 0.1.
The top causes are images without explicit width and height attributes, dynamically injected content above the fold, and web fonts that cause text to reflow when they load. Of these, images without dimensions are the easiest fix and the most common problem. Just add width and height attributes to every <img> tag. The browser reserves the space before the image loads, preventing the shift.
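You can script a quick scan for dimension-less images with the standard-library HTML parser. A sketch, with the page URL as a placeholder:

```python
# Find <img> tags without explicit width/height, a top CLS cause.
# Standard-library HTML parser; PAGE is a placeholder.
import urllib.request
from html.parser import HTMLParser

class ImgAudit(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if "width" not in a or "height" not in a:
                print(f"NO DIMENSIONS: {a.get('src', '(no src)')}")

PAGE = "https://www.example.com/"
with urllib.request.urlopen(PAGE, timeout=10) as resp:
    ImgAudit().feed(resp.read().decode("utf-8", errors="replace"))
```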
10. Is Your Interaction to Next Paint (INP) Under 200ms?
INP replaced First Input Delay (FID) as a Core Web Vital in March 2024. It measures the responsiveness of your page to all user interactions, not just the first one. The threshold is 200 milliseconds. This one matters more than FID did because it captures the full session, not a single click.
Poor INP usually comes from heavy JavaScript execution blocking the main thread. If you’re running a React or Next.js site with lots of client-side rendering, INP problems are almost guaranteed unless you’ve been deliberate about code splitting and lazy loading components. Long tasks (anything over 50ms on the main thread) are the enemy here. Break them up.
11. Is Your Site Fully Responsive on Mobile?
Google moved to mobile-first indexing for all sites by 2023. This means Google primarily uses the mobile version of your content for indexing and ranking. If your mobile experience is broken, your desktop rankings suffer too.
Test beyond just “does it resize.” Check that all content visible on desktop is also visible on mobile (no hidden-by-default sections), that forms work on mobile, that tables scroll horizontally or reformat for small screens, and that no horizontal scrolling occurs. Google’s Mobile-Friendly Test was deprecated in December 2023, so use Lighthouse in Chrome DevTools instead.
Important Checks: These Affect Crawl Efficiency and UX
12. Do You Have Redirect Chains Longer Than 2 Hops?
A redirect chain happens when URL A redirects to URL B, which redirects to URL C. Every hop in the chain adds latency and leaks a small amount of PageRank. Google says they’ll follow up to 10 redirects, but practically, you want to keep every redirect to a single hop. No intermediaries.
Redirect chains accumulate over time. A page migrates in 2020, then the destination migrates in 2022, and suddenly you’ve got 3-hop chains nobody planned. Screaming Frog’s redirect chain report will map every chain on your site. Fix them by updating the original redirect to point directly to the final destination.
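If you want to trace a single chain without firing up a crawler, here's a sketch that follows redirects hop by hop instead of letting the HTTP client collapse them silently. The start URL is a placeholder:

```python
# Trace a redirect chain hop by hop instead of letting the client
# collapse it silently. Standard library; the start URL is a placeholder.
import urllib.error
import urllib.request
from urllib.parse import urljoin

class NoRedirect(urllib.request.HTTPRedirectHandler):
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None  # surface redirects as HTTPError instead of following

opener = urllib.request.build_opener(NoRedirect)
url, hops = "http://www.example.com/old-page", 0

while True:
    try:
        with opener.open(url, timeout=10) as resp:
            print(f"{resp.status} {url}")
        break
    except urllib.error.HTTPError as e:
        print(f"{e.code} {url}")
        if e.code in (301, 302, 303, 307, 308) and hops < 10:
            url = urljoin(url, e.headers["Location"])
            hops += 1
        else:
            break

print(f"Hops: {hops} (aim for exactly one)")
```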
13. Do You Have Any Redirect Loops?
A redirect loop is when page A redirects to page B, and page B redirects back to page A. The browser gives up after a few attempts and shows an error. These are rarer than chains but far more damaging because the affected page is completely unreachable. They usually happen when HTTP-to-HTTPS redirects and www-to-non-www redirects interact badly at the server config level.
14. Is Pagination Handled Properly?
If you have paginated content (category pages with 50+ products, blog archives, search results), Google needs to understand the relationship between those pages. Google dropped support for rel="prev"/"next" in 2019, which surprised a lot of people. The current best practice is to make sure each paginated page has unique content (not just a different subset of the same list), a self-referencing canonical tag, and links to the next and previous pages in the visible content.
For ecommerce, “view all” pages work well if the page loads in a reasonable time. For blogs, make sure your archive pages actually surface older content and don’t just create a mountain of thin paginated pages that add zero value.
15. Do You Have Orphan Pages?
An orphan page is a page that exists on your site but has zero internal links pointing to it. Google can find it through your sitemap, but without internal links, it signals that the page isn’t important enough to be connected to your site’s structure.
Compare your sitemap URLs against your crawl data. Any URL in the sitemap that doesn’t appear in the crawl is an orphan (because crawlers only follow links). These pages need internal links from relevant pages, or they need to be removed if they’re truly not valuable.
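The comparison itself is a one-line set difference. A sketch, assuming you've exported both URL lists to plain text files (the file names are placeholders; most crawlers can export the crawled URL list):

```python
# Orphan check as a set difference: sitemap URLs minus crawled URLs.
# File names are placeholders; most crawlers can export the crawled
# URL list as plain text or CSV.
sitemap_urls = {line.strip() for line in open("sitemap_urls.txt") if line.strip()}
crawled_urls = {line.strip() for line in open("crawled_urls.txt") if line.strip()}

orphans = sitemap_urls - crawled_urls
print(f"{len(orphans)} orphan candidates")
for url in sorted(orphans):
    print(" ", url)
```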
16. Can Every Important Page Be Reached in 4 Clicks or Less?
Click depth matters. Pages buried 6 or 7 clicks from the homepage get crawled less frequently and pass less PageRank. The general rule is: every important page should be reachable within 3-4 clicks from the homepage. A Screaming Frog crawl will show you the click depth of every page.
If you’ve got valuable pages buried at depth 6+, restructure your navigation or add contextual internal links from higher-level pages. This is especially common on large blogs where old posts sink further from the homepage as new content pushes them down the archive.
17. Are There Broken Internal Links?
Broken internal links send users and crawlers to 404 pages. Every broken link is a dead end. It wastes the crawl budget Google allocated to your site, and it creates a poor user experience. Run a crawl, filter for 404 responses, and fix them. Either update the link to the correct URL or remove it if the destination page no longer exists.
On WordPress sites, broken internal links happen constantly as posts are deleted, slugs are changed, and permalink structures are updated. A monthly crawl catches these before they pile up.
18. Is Your Structured Data Valid?
Structured data (Schema.org markup) helps Google understand what your page is about and can trigger rich results in search. The key schema types for most sites are Article, Product, FAQPage, LocalBusiness, Organization, and BreadcrumbList.
Having schema isn’t enough. It has to be valid. Google’s Rich Results Test will validate your markup and show you exactly what’s wrong. Common errors include missing required fields, incorrect data types (string instead of number), and markup that doesn’t match the visible page content, which Google calls “misleading structured data” and may penalize. Validate after every site update.
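A scripted first pass can at least catch JSON-LD that doesn't parse, which is a surprisingly common failure after template edits. A sketch with a placeholder URL; only the Rich Results Test validates required fields:

```python
# Catch JSON-LD blocks that fail to parse. Syntax check only; Google's
# Rich Results Test validates required fields. PAGE is a placeholder.
import json
import re
import urllib.request

PAGE = "https://www.example.com/some-article/"
JSONLD_RE = re.compile(
    r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.I | re.S,
)

with urllib.request.urlopen(PAGE, timeout=10) as resp:
    html = resp.read().decode("utf-8", errors="replace")

for i, block in enumerate(JSONLD_RE.findall(html), 1):
    try:
        data = json.loads(block)
        t = data.get("@type", "(none)") if isinstance(data, dict) else "(array)"
        print(f"Block {i}: valid JSON, @type = {t}")
    except json.JSONDecodeError as e:
        print(f"Block {i}: INVALID JSON ({e})")
```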
19. Are Tap Targets Properly Sized on Mobile?
Google recommends tap targets be at least 48×48 CSS pixels with at least 8 pixels of space between adjacent targets. Small tap targets cause frustration on mobile and show up as errors in Lighthouse and Google Search Console’s mobile usability report.
The most common offenders are navigation menus with tightly packed links, footer links, and inline text links that are too close together. This is a CSS fix, usually involving increased padding on clickable elements.
20. Can Search Engines See Your JavaScript-Rendered Content?
If your site relies on client-side JavaScript to render content (React, Angular, Vue without server-side rendering), there’s a real risk that Google’s crawler sees a blank or partial page. Google can render JavaScript, but it does so in a deferred queue that can delay indexing by days or weeks.
Test this by viewing your page in Google Search Console’s URL Inspection tool with “View Crawled Page.” Compare it to what you see in a browser. If important content is missing, you need server-side rendering (SSR) or static site generation (SSG). For our technical SEO service, JavaScript rendering issues are among the first things we check, especially for SaaS companies running single-page applications.
21. Is Your Server Response Time Under 200ms?
Time to First Byte (TTFB) is how long it takes your server to start sending a response. Google doesn’t officially set a threshold, but web.dev recommends keeping TTFB under 800ms as the upper bound, with 200ms being the target for good performance. A slow TTFB drags down every other speed metric because nothing can render until the server responds.
Common causes of slow TTFB include unoptimized database queries, no server-side caching, shared hosting with limited resources, and no CDN for geographically distributed audiences. If your TTFB is over 600ms, look at your hosting and caching setup before you optimize anything else. Speed improvements downstream won’t compensate for a slow origin server.
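Measuring TTFB from a script is a rough approximation, because it bundles DNS and TLS setup into the number, but it's good enough for trend monitoring. A sketch with a placeholder URL:

```python
# Rough TTFB: time from request start to the first body byte. Bundles
# DNS and TLS setup into the number, so treat it as an approximation.
import time
import urllib.request

URL = "https://www.example.com/"  # placeholder
start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as resp:
    resp.read(1)  # force the first body byte
ttfb_ms = (time.perf_counter() - start) * 1000
print(f"TTFB (approx): {ttfb_ms:.0f} ms, target under 200 ms")
```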
22. Are Your Images Optimized?
Images are typically the heaviest resources on a web page. The three pillars of image optimization for technical SEO are format (use WebP or AVIF instead of PNG/JPEG where supported), sizing (don’t serve a 4000px image in a 800px container), and loading strategy (lazy load images below the fold, but eager load the LCP image).
A 2023 study by HTTP Archive found that images account for roughly 42% of total page weight on the median page. Converting to WebP alone typically reduces image file sizes by 25-35% compared to JPEG at equivalent quality.
23. Are Hreflang Tags Correct? (If Multilingual)
Hreflang tags tell Google which language and regional version of a page to serve in search results. If you operate in multiple countries or languages, incorrect hreflang can result in the wrong language version appearing in search results, or duplicate content issues between regional variants.
The syntax is unforgiving. Every hreflang annotation must be bidirectional (page A references page B, and page B references page A). Each page must include a self-referencing hreflang tag. And the language codes must follow ISO 639-1 (two-letter language codes) with optional ISO 3166-1 alpha-2 country codes. Search Console's International Targeting report used to flag hreflang errors, but Google retired it in 2022, so crawl tools are now the practical way to surface these issues at scale.
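Reciprocity is also scriptable for a handful of key pages. A sketch using a regex heuristic that assumes the common attribute order (hreflang before href); the page URL is a placeholder:

```python
# Reciprocity spot-check: every alternate that a page declares should
# declare it back. Regex heuristic assuming the common attribute order
# (hreflang before href); the page URL is a placeholder.
import re
import urllib.request

HREFLANG_RE = re.compile(
    r'<link[^>]+hreflang=["\'][^"\']+["\'][^>]*href=["\']([^"\']+)["\']', re.I
)

def alternates(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return set(HREFLANG_RE.findall(html))

page = "https://www.example.com/en/pricing/"
for alt in alternates(page):
    if alt != page and page not in alternates(alt):
        print(f"NOT RECIPROCAL: {alt} does not reference {page}")
```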
Nice-to-Have Checks: Won’t Break You, But Worth Fixing
24. Does Your 404 Page Return an Actual 404 Status Code?
A soft 404 is when a page shows a “not found” message but returns a 200 status code to the server. Google treats soft 404s as a quality signal and may reduce crawl frequency for your site. Your custom 404 page should return a proper 404 HTTP status code while still being helpful to users.
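This is one of the easiest checks to script: request a URL that cannot exist and see what comes back. A sketch with a placeholder domain:

```python
# Request a URL that cannot exist and confirm the server answers 404.
# Placeholder domain; the random path makes a genuine miss all but certain.
import urllib.error
import urllib.request
import uuid

url = f"https://www.example.com/{uuid.uuid4().hex}/"
try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        print(f"SOFT 404 RISK: nonexistent URL returned {resp.status}")
except urllib.error.HTTPError as e:
    print(f"{'OK' if e.code == 404 else 'CHECK'}: server returned {e.code}")
```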
25. Do You Have Breadcrumb Markup?
BreadcrumbList schema helps Google understand your site’s hierarchy and can display breadcrumb trails in search results instead of raw URLs. It’s a simple win that takes 15 minutes to implement and makes your search listings more readable and clickable. It won’t dramatically move rankings, but it’s free SERP real estate.
26. Are Open Graph and Twitter Card Tags Set?
These tags control how your pages appear when shared on social media. Not a direct ranking factor, but shared links drive traffic and can lead to backlinks. At minimum, set og:title, og:description, og:image, and og:url on every page. Use images sized at 1200x630px for optimal display across platforms.
27. Is Font Loading Optimized?
Custom web fonts can cause a Flash of Invisible Text (FOIT) or Flash of Unstyled Text (FOUT), both of which contribute to CLS. Use font-display: swap to show a fallback font while the custom font loads. Preload your most important font files with <link rel="preload">. And limit the number of font weights you load. Every additional weight is another HTTP request and more bytes.
28. Are CSS and JavaScript Files Minified and Compressed?
Minification removes unnecessary characters (whitespace, comments) from code files without changing functionality. Compression (gzip or Brotli) reduces the transfer size of those files over the network. Together, they can reduce CSS and JS payload by 60-80%.
Most modern build tools (Webpack, Vite, esbuild) handle minification automatically. For compression, configure it at the server or CDN level. If you’re on WordPress, caching plugins like LiteSpeed Cache or WP Rocket handle both. Check your response headers for Content-Encoding: br or Content-Encoding: gzip to confirm compression is active.
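That header check is scriptable too. A sketch with a placeholder URL:

```python
# Confirm compression: advertise br/gzip support and read back the
# Content-Encoding header. URL is a placeholder.
import urllib.request

req = urllib.request.Request(
    "https://www.example.com/",
    headers={"Accept-Encoding": "br, gzip"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print("Content-Encoding:", resp.headers.get("Content-Encoding", "(none)"))
```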
29. Is HTTP/2 or HTTP/3 Enabled?
HTTP/2 allows multiplexed connections (multiple resources downloaded simultaneously over one connection) and header compression; it also specified server push, though major browsers have since dropped support for it. HTTP/3 uses QUIC for even faster connections on unreliable networks. Most modern hosting providers and CDNs support HTTP/2 by default. Check with curl -sI https://yourdomain.com and look for the protocol version in the status line.
If you’re still on HTTP/1.1, you’re leaving performance on the table. The switch is typically a server configuration change, not a code change.
30. Have You Analyzed Your Server Log Files?
Log file analysis shows you exactly how Google’s crawler behaves on your site: which pages get crawled most often, which ones are ignored, how frequently Googlebot visits, and what status codes it encounters. This is the only way to get ground truth about crawl behavior. Google Search Console gives you a sample. Logs give you everything.
Tools like Screaming Frog Log File Analyser, Oncrawl, and JetOctopus can parse your access logs and map crawler behavior. For large sites (100,000+ pages), this is where you find the crawl budget issues that no other tool reveals.
How Should You Prioritize These 30 Checks?
Start with the 11 critical checks. All of them. If your site fails any critical check, fix it before you touch anything else. A site with perfect structured data but broken canonical tags is still a site that’s losing rankings.
After criticals, tackle the important checks in this order: internal linking issues (checks 15-17), JavaScript rendering (check 20), then speed and mobile items. The reason for that sequence is simple: if Google can’t find or render your pages, speed optimization is wasted effort.
Nice-to-have items are best handled as a batch during quarterly maintenance. Don’t skip them entirely, but don’t let them distract from the checks that directly affect indexing and ranking.
How Often Should You Run a Technical SEO Audit?
We recommend a full 30-point audit quarterly, with weekly monitoring of the top 5 critical items (robots.txt, crawl errors, Core Web Vitals, noindex tags, and HTTPS/mixed content). Site migrations, CMS updates, and major redesigns should trigger an immediate full audit regardless of the schedule.
Google's index spans hundreds of billions of pages, and its algorithms update constantly. What passed last quarter may not pass this quarter. The Core Web Vitals lineup itself changed in March 2024 when INP replaced FID. Staying current matters.
“We’ve seen sites lose 30% of their organic traffic after a routine CMS update because nobody ran a technical audit afterward. Automated monitoring catches what human memory forgets,” says Hardik Shah, Founder of ScaleGrowth.Digital.
What Tools Do You Need for a Technical SEO Audit?
You don’t need 15 tools. Here’s what actually gets the job done:
| Tool | What It Does | Cost |
|---|---|---|
| Google Search Console | Crawl errors, indexing status, Core Web Vitals, manual actions | Free |
| Screaming Frog SEO Spider | Full site crawls: broken links, redirects, canonicals, meta tags | Free up to 500 URLs, $259/year for unlimited |
| PageSpeed Insights | Core Web Vitals with both lab and field data | Free |
| Chrome DevTools | JavaScript rendering, mixed content, network waterfall, Lighthouse | Free |
| Google Rich Results Test | Structured data validation | Free |
That’s 5 tools and 4 of them are free. For enterprise-scale sites (50,000+ URLs), you might add Sitebulb, Oncrawl, or Lumar for deeper crawl analysis and historical tracking. But for 95% of sites, the table above covers everything on this checklist.
What Mistakes Do People Make With Technical SEO Checklists?
The biggest mistake is treating all 30 checks as equal priority. They’re not. We’ve watched in-house SEO teams spend two weeks optimizing font loading (check 27, a nice-to-have) while their canonical tags were pointing to staging URLs (check 4, critical). Priority frameworks exist for a reason. Use the table above.
The second mistake is running the audit once and filing it away. A technical SEO audit is not a one-time event. It’s a recurring process. Every code deployment, content migration, and plugin update can introduce new issues. The sites that maintain strong organic performance are the ones that build technical SEO monitoring into their workflow, not the ones that audit once a year.
Third: relying on automated tool scores instead of reading the actual findings. A Lighthouse score of 85 doesn’t mean your technical SEO is fine. We’ve audited sites with a Lighthouse performance score of 90+ that had broken canonicals on 40% of their pages. Scores give you a starting point. The individual checks give you the truth.
When Should You Hire a Technical SEO Specialist vs. Do It Yourself?
If your site has fewer than 200 pages, runs on WordPress or Shopify with standard themes, and doesn’t use heavy JavaScript frameworks, you can probably handle this checklist yourself with the free tools listed above. Set aside half a day quarterly and work through it systematically.
If your site has 1,000+ pages, uses custom code or a JavaScript framework, has international versions, or has gone through recent migrations, bring in a specialist. The complexity compounds fast. Redirect chains interact with canonical tags which interact with hreflang which interacts with JavaScript rendering. An experienced technical SEO practitioner sees these interactions immediately. Someone running through a checklist for the first time won’t.
At ScaleGrowth, our SEO audit service covers all 30 checks in this list and 25+ additional items specific to your CMS, industry, and site architecture. We don’t just flag issues. We prioritize them by business impact and give you an implementation roadmap your dev team can actually follow.
Your Next Step
Pick up Google Search Console and Screaming Frog today. Run through the 11 critical checks first. If you find more than 3 critical issues, that’s your signal that technical SEO has been neglected and it’s costing you traffic right now. Fix those before you publish another blog post, build another landing page, or spend another rupee on content.
If you want a team that runs this systematically, with automated monitoring and a clear fix-by-impact roadmap, talk to us. We’ve been doing this since 2019 and we know where the bodies are buried.