Mumbai, India
March 14, 2026

Core Web Vitals: What Actually Moves the Needle

Core Web Vitals are Google’s measurable performance metrics that directly affect rankings. Since the Page Experience update rolled out in 2021 and was refined through 2024, CWV has moved from a tiebreaker signal to a genuine ranking factor for competitive queries. But here’s what most guides won’t tell you: only specific fixes actually move your scores. A lot of common advice wastes engineering time on optimizations that look good in Lighthouse but don’t change real-user data.

“We’ve optimized Core Web Vitals for sites ranging from 200 pages to 50,000 pages across e-commerce, BFSI, and healthcare in India. The fixes that move CrUX scores are usually 4 or 5 specific interventions, not 40 micro-optimizations. The trick is knowing which 4 or 5,” says Hardik Shah, Founder of ScaleGrowth.Digital.

What Are Core Web Vitals, Exactly?

Three metrics. That’s it. Google measures three specific things about how users experience your page:

Metric What It Measures Good Threshold Poor Threshold Replaced
LCP (Largest Contentful Paint) How fast the main content loads ≤ 2.5 seconds > 4.0 seconds —
CLS (Cumulative Layout Shift) How much the page layout jumps around ≤ 0.1 > 0.25 —
INP (Interaction to Next Paint) How fast the page responds to user input ≤ 200ms > 500ms FID (March 2024)

INP replaced FID (First Input Delay) in March 2024. This is significant because FID only measured the delay of the first interaction. INP measures the responsiveness of every interaction throughout the page lifecycle. Sites that passed FID easily may fail INP because their JavaScript blocks the main thread during scrolling, clicking, or typing, not just on first load.

Simple definition: LCP is about speed. CLS is about visual stability. INP is about responsiveness.

Technical definition: LCP measures the render time of the largest image or text block visible in the viewport. CLS calculates the sum of individual layout shift scores for every unexpected layout shift that occurs during the page’s lifespan. INP observes the latency of all click, tap, and keyboard interactions, then reports the worst one (technically the 98th percentile to account for outliers).

Practitioner definition: LCP is usually your hero image or largest heading. CLS is usually caused by images without dimensions, late-loading ads, or web fonts. INP is usually caused by heavy JavaScript running on the main thread.

Does LCP Actually Affect Rankings?

Yes, with caveats. A 2023 study by Semrush analyzing 1 million URLs found that pages with “Good” LCP scores were 1.4x more likely to rank in the top 10 compared to pages with “Poor” LCP. But content relevance and backlinks still dominate. A slow page with excellent content will outrank a fast page with thin content.

Where CWV matters most is in competitive SERPs. If you and a competitor have similar content quality, similar backlink profiles, and similar topical authority, page experience becomes the tiebreaker. In e-commerce, where hundreds of sites sell similar products, CWV can be the difference between position 3 and position 8.

Google’s own documentation states CWV is a ranking signal, not the dominant one. But ignoring it is increasingly costly as more sites optimize for it.

What Actually Causes Slow LCP?

LCP is the metric most people focus on and the one with the most wasted effort. Here are the actual causes, ordered by how often they’re the real bottleneck:

Cause 1: Slow server response time (TTFB). If your server takes 1.5 seconds to respond, your LCP cannot possibly be under 2.5 seconds. Time To First Byte is the floor for LCP. Shared hosting in India commonly has TTFB of 800ms to 2,000ms. A VPS or managed cloud hosting from providers like AWS, DigitalOcean, or even Hostinger VPS typically gets this under 400ms.

What to do: Measure TTFB with WebPageTest.org (select a test location in Mumbai or Singapore). If TTFB exceeds 600ms, your hosting or backend is the bottleneck. No amount of image optimization will fix this. Switch hosting or add a server-side cache (Redis, Varnish, or your CMS’s built-in caching).

Cause 2: Unoptimized LCP element. The LCP element is usually a hero image. If that image is a 2MB uncompressed JPEG, it will load slowly regardless of your server speed.

What to do: Identify your LCP element using Chrome DevTools (Performance panel → Timings → LCP). Convert images to WebP or AVIF format. Serve responsive images using srcset. For above-the-fold hero images, add the fetchpriority="high" attribute and remove loading="lazy" (you want the hero image to load eagerly, not lazily).
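As a sketch, a hero image pattern along these lines covers the WebP, srcset, and fetchpriority points in one place (file names, widths, and alt text are placeholders for your own assets):

```html
<!-- Hero image: eager, high priority, responsive WebP sources.
     Paths and widths are placeholders. -->
<img
  src="/images/hero-800.webp"
  srcset="/images/hero-400.webp 400w,
          /images/hero-800.webp 800w,
          /images/hero-1600.webp 1600w"
  sizes="100vw"
  width="1600" height="900"
  fetchpriority="high"
  alt="Hero banner">
<!-- Note: no loading="lazy" here; the LCP element should load eagerly. -->
```

The explicit width and height also help CLS, which we'll get to below.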

Here’s a real example from an Indian e-commerce site we worked on in 2024:

Change LCP Before LCP After Improvement
JPEG → WebP conversion 4.2s 3.1s -1.1s
Added fetchpriority="high" 3.1s 2.7s -0.4s
Preload hero image via link tag 2.7s 2.3s -0.4s
Moved from shared to VPS hosting 2.3s 1.4s -0.9s

Total improvement: 4.2s down to 1.4s. Four changes. Not forty.

Cause 3: Render-blocking resources. CSS and JavaScript files in the <head> that block rendering delay everything, including LCP. The browser won’t paint anything until it’s downloaded and parsed all render-blocking resources.

What to do: Inline your critical CSS (the CSS needed for above-the-fold content) directly in the HTML. Defer non-critical CSS with media="print" onload="this.media='all'". Add defer or async attributes to JavaScript files. WordPress sites with 15 plugins often have 8 to 12 render-blocking files; a caching plugin like WP Rocket or a manual approach with critical CSS extraction can cut this to 1 or 2.
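A minimal head layout implementing those three moves might look like this (file paths and the inlined rules are placeholders):

```html
<head>
  <!-- Critical above-the-fold CSS inlined; rules are placeholders. -->
  <style>
    header, .hero { /* styles needed for first paint */ }
  </style>

  <!-- Non-critical stylesheet: loads without blocking render. -->
  <link rel="stylesheet" href="/css/main.css"
        media="print" onload="this.media='all'">
  <noscript><link rel="stylesheet" href="/css/main.css"></noscript>

  <!-- Scripts deferred so parsing and first paint aren't blocked. -->
  <script src="/js/app.js" defer></script>
</head>
```

The noscript fallback matters: without it, users with JavaScript disabled would never get the non-critical stylesheet.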

Cause 4: Web font loading. Custom fonts that load from Google Fonts or self-hosted files can delay text rendering. If your LCP element is a text block (a heading, for example), the browser may wait for the font file before rendering it.

What to do: Add font-display: swap to your @font-face declarations. This tells the browser to show text immediately in a fallback font, then swap to the custom font when it loads. Preload your primary font file with <link rel="preload" href="font.woff2" as="font" type="font/woff2" crossorigin>. If you’re loading Google Fonts, self-host them instead to eliminate the DNS lookup and connection to fonts.googleapis.com.
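Put together, a self-hosted font setup following this advice looks roughly like the sketch below (the font name and file path are placeholders):

```html
<head>
  <!-- Preload the primary font so the browser fetches it early.
       The path is a placeholder for your own font file. -->
  <link rel="preload" href="/fonts/inter.woff2"
        as="font" type="font/woff2" crossorigin>
  <style>
    @font-face {
      font-family: "Inter";
      src: url("/fonts/inter.woff2") format("woff2");
      /* Show fallback text immediately, swap when the font arrives. */
      font-display: swap;
    }
  </style>
</head>
```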

What Causes CLS, and How Do You Fix It?

CLS measures visual instability. Every time a visible element shifts position unexpectedly, it contributes to the CLS score. A score above 0.1 means your page jumps enough to annoy users.

The three most common causes and their fixes:

Images and videos without explicit dimensions. When an image loads without width and height attributes (or CSS aspect-ratio), the browser doesn’t know how much space to reserve. The image loads, and everything below it gets pushed down. This is the single most common CLS source.

Fix: Add width and height attributes to every <img> tag. The browser uses these to calculate the aspect ratio and reserve space before the image loads. For responsive images, the CSS aspect-ratio property works too. This one change eliminates 40 to 60% of CLS issues on most sites.
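Both variants are a one-line change; the attribute values and class name below are illustrative:

```html
<!-- Explicit dimensions let the browser reserve space before load. -->
<img src="/images/product.webp" width="800" height="600" alt="Product photo">

<style>
  /* Responsive variant: scale with the layout but keep the ratio fixed. */
  img.responsive {
    width: 100%;
    height: auto;
    aspect-ratio: 4 / 3; /* matches the intrinsic 800x600 ratio */
  }
</style>
```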

Dynamically injected content. Ad slots, cookie consent banners, email signup popups, chat widgets: anything that appears after initial render and pushes existing content down or sideways causes CLS. Ad networks like Google AdSense are notorious for this.

Fix: Reserve space for ad slots using CSS min-height. For ads, use the min-height of the expected ad size before the ad loads. For popups and banners, use overlays (position: fixed) instead of inline elements that push content. Cookie consent bars should overlay at the bottom of the screen, not push the page content down from the top.
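In CSS terms, both fixes are small; the class names and the 250px value below are examples (match the min-height to the ad size you actually serve):

```html
<style>
  /* Reserve the expected ad height so the slot doesn't push content. */
  .ad-slot { min-height: 250px; } /* e.g. for a 300x250 unit */

  /* Consent bar overlays the page instead of shifting it down. */
  .cookie-banner {
    position: fixed;
    bottom: 0; left: 0; right: 0;
  }
</style>

<div class="ad-slot"><!-- ad network injects here --></div>
<div class="cookie-banner">We use cookies.</div>
```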

Web font swapping. The font-display: swap fix I mentioned for LCP can actually cause CLS. The fallback font renders, then the custom font loads and the text shifts because the fonts have different metrics (character width, line height). You’re trading LCP for CLS.

Fix: Use a font fallback with adjusted metrics. The CSS size-adjust, ascent-override, and descent-override properties let you match the fallback font’s metrics to your custom font. Tools like Fontaine (from Nuxt) or Next.js’s built-in font optimization handle this automatically. For manual implementations, use the Font Fallback Generator tool to get the right override values.
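A manual version of the metric-matched fallback looks like this; the override percentages below are illustrative, not measured values for any particular font pair, so generate real numbers with a tool before shipping:

```html
<style>
  /* Fallback font tuned to approximate the custom font's metrics.
     The override values are illustrative placeholders. */
  @font-face {
    font-family: "Inter-fallback";
    src: local("Arial");
    size-adjust: 107%;
    ascent-override: 90%;
    descent-override: 22%;
    line-gap-override: 0%;
  }
  body {
    font-family: "Inter", "Inter-fallback", sans-serif;
  }
</style>
```

When the custom font swaps in, the text occupies nearly the same space, so the shift (and the CLS contribution) is close to zero.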

CLS Source % of Sites Affected Fix Difficulty Impact on Score
Images without dimensions 65% Easy (HTML attribute) High
Late-loading ads 45% Medium (CSS + ad config) High
Font swap shift 30% Medium (CSS overrides) Medium
Dynamic banners/popups 35% Easy (CSS positioning) Medium
Lazy-loaded content above fold 15% Easy (remove lazy loading) Low-Medium

What Is INP and Why Is It Harder to Fix Than FID?

INP (Interaction to Next Paint) is the newest Core Web Vital, replacing FID in March 2024. It measures how long it takes for the page to visually respond after a user clicks a button, taps a link, presses a key, or interacts with a form.

FID was easy to pass. It only measured the first interaction’s input delay. Most sites passed because the first click usually happens after the page has finished loading JavaScript. INP is harder because it measures every interaction, including ones that happen while JavaScript is still executing.

The technical root cause of bad INP is almost always long tasks on the main thread. A “long task” in browser terms is any JavaScript execution that takes more than 50 milliseconds. During a long task, the browser can’t respond to user input. The user clicks, and nothing happens for 200, 300, sometimes 500+ milliseconds.

Fix 1: Break up long tasks. If you have a JavaScript function that takes 150ms to execute, break it into smaller chunks and yield control back to the browser between chunks, for example with setTimeout(..., 0). The scheduler.yield() API is the modern approach (shipped in stable Chrome in version 129, after an earlier origin trial).
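A chunked loop with yielding can be sketched like this; handleItem is a placeholder for your own per-item work, and the setTimeout branch is the fallback for browsers without scheduler.yield():

```html
<script>
  // Yield to the browser between chunks of work so user input
  // can be handled mid-loop instead of waiting for the whole task.
  async function processItems(items) {
    for (const item of items) {
      handleItem(item); // placeholder for your per-item work
      if ("scheduler" in window && "yield" in scheduler) {
        await scheduler.yield(); // modern API, Chromium-based browsers
      } else {
        await new Promise((resolve) => setTimeout(resolve, 0));
      }
    }
  }
</script>
```

Each iteration now runs as its own short task, so no single block of work exceeds the 50ms long-task threshold.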

Fix 2: Reduce third-party JavaScript. Tag managers, analytics scripts, A/B testing tools, chat widgets, and social media embeds all compete for main thread time. Run a WebPageTest trace and look at the “Main Thread” waterfall. You’ll often find 60 to 70% of main thread time is consumed by third-party scripts.

Practical approach: audit every script on your page. Remove anything that doesn’t justify its performance cost. Load remaining third-party scripts with defer or dynamically after user interaction (load the chat widget only when the user clicks the chat icon, not on page load).
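The on-demand pattern for a chat widget can be as small as this; the button id and script URL are placeholders for whatever widget you use:

```html
<button id="chat-open">Chat with us</button>
<script>
  // Load the chat widget only when the user asks for it,
  // not on page load. The script URL is a placeholder.
  document.getElementById("chat-open").addEventListener("click", () => {
    const s = document.createElement("script");
    s.src = "https://widget.example.com/chat.js";
    s.defer = true;
    document.body.appendChild(s);
  }, { once: true });
</script>
```

Users who never open the chat pay zero main-thread cost for it.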

Fix 3: Reduce DOM size. Pages with 3,000+ DOM elements take longer to update after interactions because the browser has to recalculate styles and layout across a larger tree. WordPress sites with page builders like Elementor or Divi commonly generate 5,000 to 8,000 DOM elements. Simplifying your page structure, or moving to a leaner theme, directly improves INP.

“Most Indian websites we audit have INP problems caused by tag managers loading 15 to 20 scripts simultaneously. You don’t need a heat map tool, a session recording tool, AND a behavioral analytics tool all running at once. Pick one, load the others on-demand,” says Hardik Shah, Founder of ScaleGrowth.Digital.

Should You Optimize for Lab Data or Field Data?

This distinction confuses a lot of people, and it matters enormously for decision-making.

Lab data comes from tools like Lighthouse, PageSpeed Insights (the “Performance” section), and WebPageTest. These tools run tests in controlled environments with specific device and network settings. Lab data is useful for debugging because it’s reproducible and shows waterfall timelines.

Field data comes from real users via the Chrome User Experience Report (CrUX). This is what Google actually uses for rankings. Field data reflects the experience of real visitors on real devices with real network connections. It’s shown in the “What your real users are experiencing” section of PageSpeed Insights and in the Core Web Vitals report in Google Search Console.

The disconnect: you can get a Lighthouse score of 95 and still fail CWV in field data. Lighthouse simulates a mid-range phone on a 4G connection. Your real users in tier-2 and tier-3 Indian cities might be on 3G connections with older Android devices. Their experience is worse than what Lighthouse simulates.

Aspect Lab Data (Lighthouse) Field Data (CrUX)
Source Simulated test Real Chrome users
Used for rankings No Yes
Debugging value High Low (no waterfall)
Includes INP No (uses TBT as proxy) Yes
Affected by user geography No Yes
Update frequency Instant (per test) 28-day rolling average

Focus your optimization on field data. Use lab data for diagnosis. If your CrUX data shows good scores, don’t chase a higher Lighthouse number. Lighthouse scores above 90 are nice for screenshots but don’t improve rankings if your field data is already passing.

What’s the Fastest Way to Improve CWV Scores Site-Wide?

If you’re looking for the highest-impact changes that work across most sites, here’s the priority order based on what we see in practice:

1. Fix your hosting (TTFB under 400ms). This affects LCP across every page. If your TTFB is over 800ms, nothing else you do will be enough. CDN implementation (Cloudflare is free for basic CDN) can cut TTFB by 30 to 50% for users far from your origin server.

2. Add width/height to all images. This fixes CLS across every page. A single find-and-replace in your template files or a WordPress plugin like Perfmatters can handle this site-wide.

3. Convert images to WebP. This improves LCP by reducing image file sizes by 25 to 35% compared to JPEG. WordPress plugins like ShortPixel or Imagify do this automatically. On other CMS platforms, your build process should handle conversion.

4. Defer non-critical JavaScript. This improves both LCP (by removing render-blocking) and INP (by reducing main thread work). Add defer to every script that doesn’t need to execute before first paint. That’s almost every script.

5. Preload the LCP element. Add a <link rel="preload"> tag in the <head> for your hero image. This tells the browser to start downloading it immediately instead of waiting to discover it in the HTML. Typical LCP improvement: 200ms to 500ms.
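For item 5, the preload tag is a single line in the head; the path and MIME type below are placeholders for your own hero image:

```html
<head>
  <!-- Start fetching the hero image before the parser reaches the <img>.
       Path and type are placeholders. -->
  <link rel="preload" as="image" href="/images/hero-1600.webp"
        type="image/webp" fetchpriority="high">
</head>
```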

These five changes, applied site-wide, will pass CWV for approximately 80% of websites. The remaining 20% have specific issues (complex SPAs, heavy ad implementations, custom JavaScript frameworks) that need individual diagnosis.

Don’t overthink Core Web Vitals. The fixes are well-documented, the thresholds are clear, and the tools to measure them are free. Where teams get stuck is in prioritization: they try to get a perfect Lighthouse score instead of passing CrUX thresholds. Pass the thresholds first. Optimize further only if you’re competing in a tight SERP where every ranking signal counts.

Need a performance audit of your site? Get in touch, and we’ll run the numbers.
