
Core Web Vitals: The Complete 2026 Optimization Guide

Empirium Team · 13 min read

Google has been using Core Web Vitals as a ranking factor since 2021. Five years later, most sites still fail at least one metric. According to the Chrome User Experience Report (CrUX), only 42% of origins pass all three Core Web Vitals thresholds on mobile. That means 58% of websites are leaving ranking potential on the table — not because their content is bad, but because their pages are slow, jumpy, or unresponsive.

The thresholds have tightened. FID is gone, replaced by INP. And Google's weighting of page experience signals has shifted from tiebreaker to genuine ranking factor, especially in competitive SERPs where content quality is comparable across the top 10.

Here's every metric, what causes failures, and how to fix them.

The 2026 Core Web Vitals Landscape

Three metrics define Core Web Vitals in 2026:

| Metric | What It Measures | Good | Needs Improvement | Poor |
|---|---|---|---|---|
| LCP (Largest Contentful Paint) | Loading performance | ≤ 2.5s | 2.5s – 4.0s | > 4.0s |
| CLS (Cumulative Layout Shift) | Visual stability | ≤ 0.1 | 0.1 – 0.25 | > 0.25 |
| INP (Interaction to Next Paint) | Responsiveness | ≤ 200ms | 200ms – 500ms | > 500ms |

INP replaced FID (First Input Delay) in March 2024. This was a significant shift. FID only measured the delay before the browser started processing the first interaction. INP measures the latency of all interactions throughout the page's lifecycle — clicks, taps, key presses — and reports the worst one (with some statistical smoothing). It's a much harder metric to pass.

The practical targets for competitive sites aren't "good" — they're excellent:

  • LCP under 1.5s (not just 2.5s)
  • CLS at zero (not just under 0.1)
  • INP under 100ms (not just 200ms)

Why aim higher? Because the "good" threshold is the minimum. Sites that hit 1.2s LCP outperform sites at 2.4s, even though both technically "pass." Google's ranking algorithm uses the actual values, not just the pass/fail status.

LCP Optimization: The Critical Path

LCP measures when the largest visible element in the viewport finishes rendering. Usually, that's a hero image, a heading, or a large text block. The key word is "visible" — content below the fold doesn't count.

What Causes Poor LCP

The LCP timeline has four phases, and delays in any of them compound:

  1. Time to First Byte (TTFB): Server response time. If your server takes 800ms to respond, you've already burned a third of your budget.
  2. Resource load delay: Time between TTFB and when the browser starts loading the LCP resource. Caused by render-blocking CSS/JS and resource discovery delays.
  3. Resource load duration: How long the LCP resource (usually an image) takes to download.
  4. Element render delay: Time between the resource loading and the browser actually painting it. Caused by main thread blocking from JavaScript.

Fix Each Phase

TTFB (target: under 200ms):

  • Use a CDN. Cloudflare, Vercel Edge, or AWS CloudFront. Non-negotiable for global audiences.
  • Enable HTTP/2 or HTTP/3. Multiplexed connections reduce overhead.
  • Use stale-while-revalidate caching. Serve cached content while regenerating in the background.
  • At Empirium, we use Next.js ISR (Incremental Static Regeneration) with edge caching. TTFB is typically 50-80ms for static pages.
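As a sketch of the stale-while-revalidate pattern, here is how the cache headers could be built in a hypothetical API route handler (the function name and the freshness windows of 60 seconds fresh and 5 minutes stale are illustrative values, not a framework API):

```javascript
// Build a Cache-Control header for CDN-level stale-while-revalidate.
// Serve from cache for maxAgeSeconds; after that, serve the stale copy
// immediately while the CDN refetches the page in the background.
function cacheHeaders(maxAgeSeconds, staleSeconds) {
  return {
    'Cache-Control': `public, s-maxage=${maxAgeSeconds}, stale-while-revalidate=${staleSeconds}`,
  };
}

console.log(cacheHeaders(60, 300)['Cache-Control']);
// public, s-maxage=60, stale-while-revalidate=300
```

The user never waits on regeneration: even a cache miss within the stale window returns instantly, which is what keeps TTFB in the double digits.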

Resource load delay (target: zero):

  • Preload the LCP image: <link rel="preload" as="image" href="/hero.webp">
  • Inline critical CSS. The CSS needed to render above-the-fold content should be in the HTML, not in an external stylesheet.
  • Remove render-blocking JavaScript from <head>. Use async or defer on all non-critical scripts.

Resource load duration (target: under 500ms):

  • Serve images in WebP or AVIF format. WebP is 25-35% smaller than JPEG. AVIF is 50% smaller.
  • Use responsive images with srcset. Don't serve a 2400px image to a 375px mobile screen.
  • Compress aggressively. A hero image should be under 100KB on mobile.
  • Use fetchpriority="high" on the LCP image element.

Element render delay (target: under 100ms):

  • Don't lazy-load the LCP image. This is the #1 mistake we see. loading="lazy" on the hero image delays rendering.
  • Minimize main thread JavaScript during page load. Long tasks (>50ms) block rendering.
  • Avoid display: none on the LCP element that later gets toggled visible by JavaScript.

LCP Quick Wins

<!-- Preload hero image -->
<link rel="preload" as="image" href="/hero.webp" fetchpriority="high" />

<!-- Use responsive images -->
<img
  src="/hero.webp"
  srcset="/hero-400.webp 400w, /hero-800.webp 800w, /hero-1200.webp 1200w"
  sizes="100vw"
  alt="Description"
  width="1200"
  height="630"
  fetchpriority="high"
/>

This pattern alone fixes LCP on 60% of the sites we audit. Read our web performance optimization checklist for the complete list.

CLS Elimination: Zero Layout Shift

CLS measures how much visible content shifts unexpectedly during the page's lifecycle. Each shift is scored as the fraction of the viewport affected multiplied by the distance the content moved, and the scores are summed. A score of 0 means nothing moves; a score of 0.1 roughly corresponds to a large element jumping by 10% of the viewport. Anything above 0.1 fails, but the real target is zero.

The Five Causes of Layout Shift

  1. Images without dimensions. The browser doesn't know how much space to reserve until the image loads. Then the content below jumps.
  2. Web fonts loading late. The fallback font renders, then the web font loads and the text reflows at a different size.
  3. Dynamic content injection. Ads, banners, cookie notices, or chat widgets that push content down after the initial render.
  4. Embeds without reserved space. YouTube iframes, Twitter cards, or maps that resize after loading.
  5. Late-loading CSS. If CSS loads after the HTML renders, the styled layout differs from the unstyled one.

Fix Each Cause

Images: Always set width and height attributes. Modern browsers use these to calculate the aspect ratio before the image loads. CSS aspect-ratio also works.

<img src="/photo.webp" width="800" height="450" alt="..." />

Fonts: Use font-display: swap with a size-adjusted fallback. The size-adjust descriptor belongs on the fallback font's @font-face, where it scales the fallback's metrics to match your web font's, so the swap doesn't reflow the text.

@font-face {
  font-family: 'Inter';
  src: url('/fonts/inter.woff2') format('woff2');
  font-display: swap;
}

@font-face {
  font-family: 'Inter-fallback';
  src: local('Arial');
  size-adjust: 107%; /* scale Arial's metrics to match Inter */
}

Then reference both in your stack: font-family: 'Inter', 'Inter-fallback', sans-serif.

Better yet, preload your font files:

<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin />

Dynamic content: Reserve space with CSS min-height for elements that load dynamically. Cookie banners should overlay content (position: fixed), not push it down.
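A minimal sketch of both fixes (the class names and the 280px slot height are illustrative; match the min-height to the tallest content the slot can hold):

```css
/* Reserve space for a dynamically loaded ad or widget so the content
   below it never jumps when the creative arrives. */
.ad-slot {
  min-height: 280px;
}

/* Cookie banner overlays the page instead of pushing content down. */
.cookie-banner {
  position: fixed;
  bottom: 0;
  left: 0;
  right: 0;
}
```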

Embeds: Wrap embeds in a container with a fixed aspect ratio:

.video-container {
  aspect-ratio: 16/9;
  width: 100%;
}

CSS: Inline critical CSS in the HTML <head>. Load non-critical CSS asynchronously with the media="print" trick:

<link rel="stylesheet" href="/non-critical.css" media="print" onload="this.media='all'" />

INP: The Newest and Hardest Metric

INP replaced FID because FID was too easy to pass. FID only measured the first interaction's input delay. A page could have terrible responsiveness on every subsequent interaction and still score well on FID. INP captures the reality of how users experience your site.

What Causes Poor INP

INP failures come from one root cause: the main thread is busy when the user interacts. Specifically:

  • Long JavaScript tasks. Any task over 50ms blocks the main thread. If the user clicks during a 200ms task, they wait 200ms before seeing a response.
  • Excessive DOM size. Large DOMs (>1,500 elements) slow down style recalculations and layout computations after interactions.
  • Forced synchronous layouts. Reading layout properties (like offsetHeight) inside a loop that also modifies the DOM triggers expensive layout thrashing.
  • Third-party scripts. Analytics, chat widgets, and ad scripts often run heavy JavaScript on interaction events.
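The forced-synchronous-layout problem can be sketched with plain objects standing in for DOM nodes; in a real page, the read would be el.offsetHeight (which forces layout) and the write would be el.style.height (which invalidates it):

```javascript
// Bad: interleaving reads and writes. In the browser, every read after a
// write forces a synchronous layout, one reflow per iteration.
function resizeInterleaved(items) {
  for (const item of items) {
    const h = item.height;       // read (forces layout in real DOM code)
    item.targetHeight = h + 10;  // write (invalidates layout)
  }
}

// Good: batch all reads first, then all writes. At most one reflow total.
function resizeBatched(items) {
  const heights = items.map(item => item.height); // read phase
  items.forEach((item, i) => {                    // write phase
    item.targetHeight = heights[i] + 10;
  });
}
```

Both functions produce identical results; only the batched version avoids layout thrashing when the objects are real elements.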

Optimization Strategies

Break up long tasks:

// Bad: one 300ms task
function processItems(items) {
  items.forEach(item => heavyComputation(item));
}

// Good: yield to the main thread between chunks
async function processItems(items) {
  for (const chunk of chunkArray(items, 50)) {
    chunk.forEach(item => heavyComputation(item));
    await new Promise(r => setTimeout(r, 0));
  }
}

Use requestAnimationFrame for visual updates:

button.addEventListener('click', () => {
  requestAnimationFrame(() => {
    button.classList.add('active');
  });
  requestIdleCallback(() => doExpensiveWork());
});

Reduce DOM size: Virtualize long lists. Don't render 500 product cards in the DOM — render the 20 visible ones and use intersection observer for the rest.
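The core of list virtualization is a windowing calculation. This sketch assumes fixed-height rows; the function name and the overscan of 5 rows are illustrative, not a library API:

```javascript
// Given the scroll position, compute which rows to actually mount.
// Everything outside this range stays out of the DOM entirely.
function visibleRange(scrollTop, viewportHeight, rowHeight, totalRows, overscan = 5) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - overscan);
  const last = Math.min(
    totalRows - 1,
    Math.ceil((scrollTop + viewportHeight) / rowHeight) + overscan
  );
  return { first, last };
}

// 500 rows of 80px in a 640px viewport, scrolled to 8000px:
// only rows 95 through 113 are mounted instead of all 500.
```

Libraries like react-window implement this pattern with variable row heights and scroll listeners, but the DOM-size win comes from exactly this calculation.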

Audit third-party scripts: Use Chrome DevTools Performance panel → filter by "Third-party" to see which external scripts run long tasks during interactions. Remove or defer the worst offenders.

INP Debugging with Chrome DevTools

  1. Open DevTools → Performance → check "Web Vitals"
  2. Click Record, interact with the page, stop recording
  3. Look for the INP marker in the timeline
  4. Trace the interaction back to its handler — the flame chart shows exactly which function is blocking

Measuring Real User Performance

Lab tools like Lighthouse test your site under controlled conditions. Real users experience your site on a 2019 Android phone over a 3G connection in Jakarta. These are fundamentally different measurements.

| Data Source | Type | Best For |
|---|---|---|
| Lighthouse | Lab | Debugging specific issues |
| PageSpeed Insights | Lab + Field | Quick checks with CrUX data |
| CrUX Dashboard | Field | Tracking real user trends over time |
| web-vitals.js | Field | Custom RUM with your own analytics |
| Search Console | Field | CWV status for your indexed pages |

Set up web-vitals.js for continuous real user monitoring:

import { onLCP, onCLS, onINP } from 'web-vitals';

function sendToAnalytics({ name, value, id }) {
  // keepalive lets the request survive page unload; CLS and INP often
  // finalize only when the page is hidden or closed
  fetch('/api/vitals', {
    method: 'POST',
    keepalive: true,
    body: JSON.stringify({ metric: name, value, id }),
  });
}

onLCP(sendToAnalytics);
onCLS(sendToAnalytics);
onINP(sendToAnalytics);

Report at the 75th percentile (p75), which is what Google uses for ranking. Your median user might have good performance while your p75 user fails — and it's the p75 that determines your ranking signal.
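Aggregating your RUM samples server-side, p75 is a one-liner. This is a simple nearest-rank sketch, not CrUX's exact methodology:

```javascript
// Nearest-rank 75th percentile over a set of metric samples.
function p75(values) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil(sorted.length * 0.75) - 1;
  return sorted[idx];
}

// Eight LCP samples in ms: the median user is fine, the p75 user is not.
const lcpSamples = [1200, 1300, 1400, 2600, 2800, 4100, 1250, 1500];
console.log(p75(lcpSamples)); // 2600: fails the 2.5s threshold even though the median (1450) passes
```

This is exactly the trap described above: a healthy median hides a failing ranking signal.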

Performance Budgets

Set concrete budgets and enforce them in CI:

| Metric | Budget | Enforcement |
|---|---|---|
| LCP | ≤ 1.5s | Lighthouse CI assertion |
| CLS | ≤ 0.05 | Lighthouse CI assertion |
| INP | ≤ 150ms | Manual testing on low-end device |
| Total JS | ≤ 200KB gzipped | Bundle size check in CI |
| Total page weight | ≤ 500KB | Webpack/Turbopack budget plugin |
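A minimal lighthouserc.json sketch for the Lighthouse CI assertions (budget numbers taken from the table above; Lighthouse can't measure INP in the lab, so Total Blocking Time serves as a rough responsiveness proxy, and the 200ms figure here is an assumed proxy budget):

```json
{
  "ci": {
    "assert": {
      "assertions": {
        "largest-contentful-paint": ["error", { "maxNumericValue": 1500 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.05 }],
        "total-blocking-time": ["error", { "maxNumericValue": 200 }],
        "total-byte-weight": ["error", { "maxNumericValue": 512000 }]
      }
    }
  }
}
```

With this in place, any build that regresses past a budget fails before it reaches production.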

At Empirium, every project ships with Lighthouse CI configured in the deployment pipeline. Builds that regress Core Web Vitals below budget fail automatically. Read more about our approach in real user monitoring vs Lighthouse.

FAQ

Do Core Web Vitals differ between mobile and desktop?

Yes. Google evaluates mobile and desktop separately. Most sites fail on mobile but pass on desktop because mobile devices have slower processors and mobile networks have higher latency. Google uses the mobile score for mobile rankings and desktop score for desktop rankings. Optimize for mobile first — that's where most of your traffic comes from, and it's the harder target.

How much do third-party scripts impact Core Web Vitals?

Significantly. A typical analytics + chat widget + cookie consent stack adds 150-300ms to LCP and can easily cause INP failures. Audit your third-party scripts quarterly. Load non-essential scripts with async and defer their execution until after the page becomes interactive. If a third-party script consistently causes INP failures, replace it or remove it.

Does the framework choice affect Core Web Vitals?

Absolutely. Static site generators (Next.js with SSG, Astro) consistently outperform client-side-rendered SPAs (Create React App, Vue CLI) on LCP and INP. Server-side rendering (Next.js with SSR) falls in between. The framework doesn't guarantee good performance, but it sets your baseline. We use Next.js with static generation at Empirium because it gives us the best starting point for all three metrics.

How quickly do Core Web Vitals improvements affect rankings?

Google uses a 28-day rolling average of CrUX data. After you deploy improvements, it takes 28 days for the full effect to appear in CrUX, and then additional time for Google to factor the updated data into rankings. Total timeline: 4-8 weeks from deployment to ranking impact. Don't expect overnight results, but do expect measurable improvement within two months.

Can I pass Core Web Vitals with WordPress?

Technically yes, practically difficult. A stock WordPress install with a lightweight theme can pass. But the moment you add 10+ plugins, a page builder, WooCommerce, and a theme with built-in animations, you're fighting an uphill battle. The WordPress sites we've audited average 4.2s LCP on mobile. It's fixable, but the effort often exceeds the cost of rebuilding on a modern framework. See our WordPress alternatives analysis for the comparison.

Written by Empirium Team
