Core Web Vitals in 2026: LCP, INP, CLS, and What Actually Moves Rankings

Core Web Vitals are Google’s measurable answer to a simple question. Does this page actually feel fast to a real person on a real phone? Lab scores can lie, marketing claims can lie, but the timing data captured from real Chrome users in the field tells the truth. That dataset is what Core Web Vitals report on, and it is why this set of metrics has stuck around when so many other page experience initiatives have faded.
In 2026 the metric set is settled. LCP measures loading. INP measures interactivity. CLS measures visual stability. Together the three cover the moments where users notice a slow site. This guide walks through each metric, the thresholds that matter, lab versus field data, the playbook that fixes each one, and the honest answer to the question the owner of every site eventually asks: when does Core Web Vitals actually move rankings?
The Three Metrics in 2026
Google groups Core Web Vitals into three named metrics, one for each category of user perceived performance.
Largest Contentful Paint (LCP) captures loading. The clock starts when the user requests the page and stops when the largest visible content element finishes painting. That element is usually the hero image, a banner, or a large heading.
Interaction to Next Paint (INP) captures interactivity. INP replaced First Input Delay (FID) in March 2024. Where FID measured only the very first interaction, INP measures every interaction across the session and reports the longest delay. That change matters because most users have a smooth first click and then run into a janky third or fourth interaction. INP catches the bad moments FID never saw.
Cumulative Layout Shift (CLS) captures visual stability. CLS is the sum of unexpected layout shifts during the session. A button that moves down the screen as a slow advert loads above it, a recipe that jumps half a screen as a font swaps in, both produce CLS. Layout shifts are the kind of bug a developer rarely notices on a fast laptop, and exactly the kind a real user feels every day on a slower phone.
These three metrics share two design choices that explain why they ended up in the standard. They are user perceived, not technical. And they are reported from real Chrome users via the Chrome User Experience Report (CrUX), not from synthetic lab runs. Both choices are deliberate. Earlier metrics like Time to First Byte and First Contentful Paint were technical proxies that did not always match what users felt. Core Web Vitals try to measure the feeling itself.
Largest Contentful Paint (LCP)
LCP marks the moment when the page becomes visually meaningful. If the largest element is still loading, the user is still waiting. If it has painted, the user can engage.

The 2026 thresholds for LCP, set by Google, are unchanged from prior years.
- Good: 2.5 seconds or less
- Needs improvement: 2.5 to 4.0 seconds
- Poor: more than 4.0 seconds
The 75th percentile is what counts. Google looks at the LCP value at the 75th percentile of your real visitors over a 28 day window. If 75 percent of your real users see LCP at 2.5 seconds or less, the page is good. If only the median sees that and the 75th percentile is 5 seconds, the page is poor.
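The arithmetic behind that cutoff can be sketched in a few lines. The sample values below are invented, and CrUX's exact aggregation may differ, but the idea of reading a value at the 75th percentile is the same:

```javascript
// Return the value at the p-th percentile of a sample set
// (nearest-rank method; illustrative, not CrUX's exact algorithm).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

// Hypothetical LCP samples in seconds from eight sessions.
const lcpSamples = [1.2, 1.8, 2.1, 2.4, 2.6, 3.0, 4.8, 5.5];
const p75 = percentile(lcpSamples, 75); // 3.0 — "needs improvement"
```

Note that the median of these samples is under 2.5 seconds, yet the page still fails, because the 75th percentile is the number Google evaluates.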
The largest element is identified live by the browser. On a typical content page it will be the hero image, a featured photo, or sometimes a large headline. On commerce pages it is often the main product image. On a homepage with a video, the video poster image takes the role.
The common causes of slow LCP fall into five buckets.
Slow server response. Time to First Byte is part of LCP. If the server takes 800 ms to send the first byte, LCP cannot start until then. Server response time is set by your hosting, your CDN, and how much work the server does before responding.
Render blocking resources. CSS files in the head and JavaScript without async or defer block the browser from painting. Each blocking resource is a delay added to LCP.
Slow resource load. The largest image itself takes time to download. A 2 MB hero image on a slow connection adds seconds. The fix is modern image formats (WebP, AVIF), correct sizing for the viewport, and the right loading hint.
Client side rendering. A page that paints nothing until JavaScript runs and fetches data has LCP equal to the JavaScript boot time. Single page applications without server side rendering frequently show LCP above 4 seconds on real phones.
Font loading. A late text element promoted to LCP can wait on a web font. Font display swap and a size adjusted fallback prevent the text from being held back.
The fix sequence for LCP is always the same. Identify the LCP element. Measure how it is loaded. Remove the slowest piece of that chain.
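Assuming the LCP element is a hero image, the image-related fixes above might look like the following markup. All filenames, dimensions, and the alt text are placeholders:

```html
<!-- Hint the browser to fetch the hero image early and at high priority. -->
<link rel="preload" as="image" href="hero.avif" fetchpriority="high">

<!-- Modern format with fallbacks, explicit dimensions,
     and no lazy loading on the LCP element. -->
<picture>
  <source srcset="hero.avif" type="image/avif">
  <source srcset="hero.webp" type="image/webp">
  <img src="hero.jpg" width="1200" height="600"
       fetchpriority="high" alt="Hero image">
</picture>
```

The explicit width and height also protect against layout shift, which pays off again under CLS below.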
Interaction to Next Paint (INP)
INP became the official interactivity metric in March 2024, replacing FID. The headline difference: FID measured only the very first input, INP measures every input, and reports the worst.
The 2026 thresholds for INP are also stable.
- Good: 200 ms or less
- Needs improvement: 200 to 500 ms
- Poor: more than 500 ms
INP is reported as the worst (or near worst) interaction latency observed in the session. The browser tracks every click, tap, and key press, measures the time from input to next paint, and surfaces the longest one as the page’s INP for that session.
INP fails for one reason. The main thread is blocked when the user interacts. JavaScript runs on the main thread, layout runs on the main thread, paint runs on the main thread. If a long JavaScript task is in flight when the user clicks a button, the click sits in the queue until the task finishes. The visible result is a button that does not respond for half a second.
The five common causes of poor INP.
Long JavaScript tasks. A function that runs for 300 ms blocks the main thread for 300 ms. Common offenders are big synchronous loops, deserialization of large JSON payloads, and complex state recalculations in single page apps.
Third party scripts. Analytics, advertising, A/B testing, tag managers. Each loads on the main thread, each can fire a long task at unpredictable moments.
Heavy event handlers. A click handler that does too much work in a single function. Click triggers a route change, route change loads data, data triggers a heavy render. The user clicked once and waited 600 ms.
React or Vue render storms. A state change that triggers re-rendering of the entire tree, while the user is mid scroll or mid tap.
Animation jank during interaction. A CSS transition or a JavaScript animation that holds the main thread while the user tries to scroll or click.
The fix toolkit for INP is well established. Break long tasks into smaller chunks using scheduler.yield(), requestIdleCallback, or simple setTimeout(fn, 0). Move heavy work to a Web Worker. Defer non critical third party scripts behind user interaction. Memoize React components and use useDeferredValue for non urgent state. Audit what runs synchronously on every interaction and decide what can run later.
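The first item in that toolkit, breaking a long task into chunks, can be sketched like this. The item list and transform function are hypothetical stand-ins for real work:

```javascript
// A minimal sketch of chunked processing that yields between chunks
// so the main thread can handle pending input.
async function processInChunks(items, transform, chunkSize = 100) {
  const results = [];
  for (let i = 0; i < items.length; i += chunkSize) {
    for (const item of items.slice(i, i + chunkSize)) {
      results.push(transform(item));
    }
    // Prefer scheduler.yield() where the browser supports it,
    // otherwise fall back to a zero-delay timeout.
    if (globalThis.scheduler && typeof globalThis.scheduler.yield === "function") {
      await globalThis.scheduler.yield();
    } else {
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
  return results;
}
```

The total work is unchanged; what changes is that no single task holds the main thread for its full duration, so a click that arrives mid-processing is handled at the next yield point instead of waiting for everything to finish.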
Cumulative Layout Shift (CLS)
CLS measures the visual instability of the page. Every time something moves on the screen unexpectedly, CLS goes up. The metric is unitless, calculated as the sum of layout shift scores during the session.
The 2026 thresholds for CLS.
- Good: 0.1 or less
- Needs improvement: 0.1 to 0.25
- Poor: more than 0.25
A single layout shift score is the impact fraction times the distance fraction. An image that pushes half the page down by ten percent of the viewport scores 0.05. Two such shifts in a session sum to 0.10, right on the edge. Three put the page in the poor zone.
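That arithmetic in code form, using the numbers from the example above:

```javascript
// A layout shift score is the impact fraction (share of the viewport
// affected) times the distance fraction (how far the content moved,
// relative to the viewport's largest dimension).
function layoutShiftScore(impactFraction, distanceFraction) {
  return impactFraction * distanceFraction;
}

// An image pushing half the page down by 10% of the viewport:
const oneShift = layoutShiftScore(0.5, 0.1); // 0.05
// Two such shifts in one session land right on the 0.1 edge;
// a third pushes the page into the poor zone.
const twoShifts = oneShift + oneShift;
```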
The four causes that produce the vast majority of CLS issues.
Images and videos without dimensions. When the browser does not know the size of an image until it loads, it reserves no space for it, paints the surrounding content, and shifts everything down once the image arrives. Always set width and height attributes on images and videos, or use the equivalent CSS aspect-ratio property.
Dynamically inserted content. Ads, embeds, banners, and notifications added to the DOM after the initial render shift everything below them. The fix is to reserve space ahead of time, even when the content has not loaded yet, by setting a minimum height on the container.
Web fonts swapping. When a web font finishes loading and replaces the fallback font, the text often reflows because the metrics differ. The fix is font-display: swap combined with the size-adjust CSS descriptor on the @font-face rule, which scales the fallback to match the metrics of the web font and removes the visible jump.
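A minimal sketch of that rule. The font name, file path, and the size-adjust figure are all placeholders; the percentage has to be tuned until the fallback's line lengths match the web font:

```css
@font-face {
  font-family: "BrandFont";
  src: url("/fonts/brandfont.woff2") format("woff2");
  font-display: swap; /* show fallback text immediately */
}

@font-face {
  font-family: "BrandFont-fallback";
  src: local("Arial");
  size-adjust: 107%; /* scale the fallback to match the web font's metrics */
}

body {
  font-family: "BrandFont", "BrandFont-fallback", sans-serif;
}
```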
Animations that affect layout. A CSS transition on top, left, width, or height triggers layout reflow on every frame. Animate transform and opacity instead, both of which the browser can handle without affecting layout.
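For example, the same slide-in effect written both ways (class names and distances are illustrative):

```css
/* Avoid: animating top forces layout reflow on every frame. */
.panel-bad {
  position: relative;
  transition: top 300ms ease;
}
.panel-bad.open { top: -40px; }

/* Prefer: transform runs on the compositor without reflow. */
.panel-good {
  transition: transform 300ms ease;
}
.panel-good.open { transform: translateY(-40px); }
```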
CLS is the easiest of the three to fix once you find the offending element. Chrome DevTools surfaces the Layout Instability API in the Performance panel (the “Web Vitals” or “Experience” track), highlighting every shift with the affected DOM node.
Lab Data Versus Field Data
The two ways to measure Core Web Vitals serve different purposes. Mixing them up leads to wasted optimization work.
Lab data comes from a synthetic test. Lighthouse, PageSpeed Insights lab section, WebPageTest, all simulate a load on a controlled device under a controlled network. The result is reproducible and useful for diagnosis. A lab score of 95 on Lighthouse tells you the page can be fast.
Field data comes from real Chrome users. CrUX is the public dataset, refreshed monthly. PageSpeed Insights field section, the Chrome UX Report API, and Search Console’s Core Web Vitals report all read from CrUX. Field data is what Google uses to evaluate your site for ranking purposes.
A page can score 100 on Lighthouse and fail in CrUX. The discrepancy comes from the device and network mix of your real visitors versus the simulated profile. Lighthouse emulates a mid-range Android phone (a Moto G4 in older versions) on a slow 4G connection by default. Your real users may be on older Android phones on weaker networks, or on tablets that run JavaScript slower than the lab assumes.
The reverse also happens. A page with a Lighthouse score of 60 can pass CrUX, because real users mostly visit on faster devices than the lab simulates.
Treat lab data as a diagnostic tool. It tells you what an optimized device should see, and it produces stack traces, render timelines, and clear before and after comparisons. Treat field data as the verdict. It tells you what your real users see, and it is what determines passing or failing the Core Web Vitals threshold.
The cycle is: change the page, verify in lab that the change works, then wait 28 days for CrUX to reflect the change in field data.

The Playbook for Each Metric
Each metric has a recurring playbook. Read field data first, diagnose in lab, fix the underlying cause, verify in lab, then wait for field data to confirm.
LCP playbook.
- Open PageSpeed Insights for the URL, look at the field data section.
- If the page fails LCP, scroll to the diagnostics. The largest element is named.
- Open Chrome DevTools Performance panel, reload the page with throttling on (Slow 4G, 4x CPU slowdown).
- Find the LCP element in the timeline. Look at what blocked it. Common culprits in order: server response, render blocking CSS, render blocking JavaScript, slow image load.
- Apply the smallest fix that addresses the dominant cause. Server side rendering for SPA pages, modern image format and explicit dimensions for image LCP, async or defer for blocking scripts.
- Verify in lab that LCP drops below 2.5 seconds.
- Mark the date. Field data will reflect the change in CrUX 28 days later.
INP playbook.
- Open the Search Console Core Web Vitals report. Find URLs failing INP.
- On a real phone or in DevTools mobile emulation, reproduce the failing interaction.
- Open Performance panel, record from before the click. Stop after the next paint.
- Find the long task that blocked the main thread. The flame chart shows the function name.
- Apply one of the standard fixes. Yield with scheduler.yield(). Defer the work behind requestIdleCallback. Move it to a Web Worker. Replace the synchronous algorithm with an incremental one.
- Verify in lab that the interaction completes in under 200 ms.
- Wait for CrUX to confirm in field data.
CLS playbook.
- Open Chrome DevTools, Performance panel, record a page load with the Web Vitals overlay enabled.
- Every layout shift appears as a red rectangle on the affected element with a contribution score.
- Identify the cause. Image without dimensions, late inserted ad, font swap, animation on layout property.
- Apply the fix. Set
widthandheighton images. Reserve container space for dynamic content. Usefont-display: swapwithsize-adjust. Animatetransformnottop. - Verify the layout shift disappears in the recording.
- Wait for field data.
The playbook works because each metric has a small, well known set of root causes. Once you know which root cause is in play, the fix is mechanical.
When Core Web Vitals Actually Move Rankings
The honest question every team eventually asks. We have ranked tenth for years, will green Core Web Vitals push us to fifth?
The honest answer. Probably not on its own.
Google has been consistent. Core Web Vitals is a tiebreaker, not a primary ranking factor. When two pages match on relevance, content quality, and authority, the one with better Core Web Vitals can win the tiebreak. When pages differ on those primary signals, Core Web Vitals does not close the gap.
The ranking impact of Core Web Vitals shows up most clearly in three scenarios.
Mobile commerce competition. Two product pages, similar relevance, one loads in 2 seconds and the other in 5 seconds. The fast page tends to outrank the slow one over time, especially in highly competitive categories where every signal matters.
Sites near the boundary. When a site’s Core Web Vitals sit at the edge of poor and one tier of pages slips into the failing zone, that tier loses ranking. Moving the failing pages back to good restores the prior ranking. The effect is bidirectional and sometimes sharp.
Engagement signals via user behavior. This is the longer term effect. A fast, stable site keeps users around longer. Lower bounce, more pages per session, better return rate. These engagement signals correlate with ranking, and they are themselves an outcome of good Core Web Vitals.
The scenarios where Core Web Vitals does not move rankings.
A site without topical authority. A new site can be the fastest in its category and still rank below older sites with weak performance. Authority is the dominant signal until it is established.
A page with a clear content gap. A page that does not answer the query well will not rank above a page that does, regardless of speed.
Topics where one site dominates. When a single site has built clear authority in a niche, faster competitors rarely break in on speed alone. They need content depth and links first.
The rule of thumb that has held up for several years: aim for green Core Web Vitals because it improves user experience, captures the tiebreak when relevance is close, and supports the engagement signals that compound over time. Do not aim for green Core Web Vitals as a ranking shortcut, because it is not one.
For a deeper view of how speed interacts with the rest of technical SEO, the technical SEO audit checklist covers the full surface of issues that compound with Core Web Vitals. For traffic that depends on AI engines, the same speed work supports AI ready pages, since both Google and AI crawlers prefer fast, stable rendering.
Conclusion
Core Web Vitals is a small set of metrics that captures most of what real users feel when a page is slow. LCP for loading, INP for interactivity, CLS for visual stability. Each has a known threshold, a known set of root causes, and a known fix sequence. Lab tools tell you whether the fix works. Field data, the 28 day rolling CrUX dataset, tells you whether real users see the improvement.
Stop chasing 100 in Lighthouse. Aim for green field data. The two are different goals and the second is the one that matters for both users and ranking.
For the audit work that surrounds Core Web Vitals, pair this with meta tags review and a regular SEO crawler routine. Speed work and crawl work share root causes: render blocking resources, server response time, oversized assets. Both audits run together more efficiently than either runs alone.
If you need a tool to crawl your site and surface speed problems alongside the rest of technical SEO, download Seodisias for free. It runs locally on your machine, has no URL limits, and reports the page level signals that influence Core Web Vitals as part of every audit.