Why We Build Everything Static-First
Every website we build at Empirium starts as static HTML files served from a global CDN. The homepage, the services pages, the blog posts, the case studies — they're all pre-rendered at build time and deployed to edge servers worldwide. No database queries. No server-side processing. No runtime.
Then, and only then, we add dynamic layers where they're genuinely needed. A contact form that submits to a serverless function. A chat widget that loads client-side. A pricing calculator that fetches rates from an API. Each dynamic layer is a deliberate addition with a measurable performance cost — not the default starting point.
This approach — static-first — delivers 95+ Lighthouse scores, sub-second load times, near-zero hosting costs, and an infrastructure so simple it essentially cannot go down. It's the architectural philosophy behind every project we ship.
The Static-First Principle
The principle is simple: start with the fastest possible architecture and add complexity only where justified.
A pre-rendered HTML file served from a CDN edge node is the performance ceiling for web delivery. The request hits the nearest edge server (20-50ms), the server sends a cached file (no processing time), and the browser renders the content immediately. Total time from click to visible content: 200-500ms. No server-side architecture can match this, because any computation adds latency.
Most web development works in the opposite direction. You start with a dynamic framework (WordPress, Ruby on Rails, Django), build everything as server-rendered pages, and then try to optimize — adding caching layers, CDN proxies, and query optimizations to approximate the performance that static delivery provides for free.
Static-first inverts this. You start at maximum performance and selectively trade performance for capability:
```
Layer 0: Static HTML at the edge (0ms server time)
  ↓ only if needed
Layer 1: ISR (Incremental Static Regeneration) — static pages that refresh in the background (~0ms per request)
  ↓ only if needed
Layer 2: Edge functions — lightweight compute at CDN nodes (~5-50ms)
  ↓ only if needed
Layer 3: Serverless functions — full Node.js for complex logic (~50-300ms)
  ↓ only if needed
Layer 4: Server-side rendering — per-request page generation (~100-500ms)
  ↓ only if needed
Layer 5: Client-side fetching — data loaded after page render (variable)
```
Each step down adds latency and complexity. The discipline is staying as high on this stack as possible. A page that can be static should be static, even if making it dynamic would be "easier" from a development perspective.
The Performance Ceiling of Static
Static-first sites achieve Core Web Vitals scores that dynamic architectures structurally cannot match:
| Metric | Static-First (Measured) | SSR Average | WordPress Average |
|---|---|---|---|
| TTFB | 25-60ms | 150-500ms | 400-2,000ms |
| LCP | 0.5-1.0s | 1.2-3.0s | 2.5-6.0s |
| INP | 30-80ms | 80-300ms | 200-500ms |
| CLS | 0.00-0.02 | 0.02-0.10 | 0.05-0.25 |
| Lighthouse Score | 95-100 | 70-90 | 30-70 |
The TTFB advantage is the foundation. Static files served from a CDN edge node skip every latency-adding step in the traditional pipeline: DNS to origin server, TCP connection to origin, server processing time, database queries, template rendering. The response is a file read from cache at the nearest geographic location. You cannot optimize an SSR response to be faster than not having an SSR response.
The INP advantage comes from smaller JavaScript bundles. Static-first sites ship only the JavaScript needed for interactivity — a theme toggle, a mobile menu, a form validation script. There's no hydration framework re-running the server's rendering work in the browser. The main thread is free to respond to user interactions immediately.
The CLS advantage comes from predictable rendering. The static HTML ships with explicit dimensions for images, embeds, and other media, so nothing shifts when assets arrive. There's no content jump from late server responses or JavaScript-rendered components pushing the layout around after initial paint.
Google's CrUX data confirms this at scale: sites deployed on static hosting (Vercel, Netlify, Cloudflare Pages) pass all three Core Web Vitals 2.5x more frequently than sites on traditional hosting.
Adding Dynamic Layers Progressively
Static-first doesn't mean static-only. Most production sites need some dynamic functionality. The key is adding each layer deliberately (the layers below are numbered in the order we typically add them, which differs from the latency stack above):
Layer 1: ISR for content freshness. Blog posts, product pages, and any content that changes periodically use Incremental Static Regeneration. The page is statically generated but regenerates in the background when content updates. Visitors always get a cached response (static performance) with content that's at most minutes old.
```tsx
// app/pricing/page.tsx — ISR via Next.js route segment config
// (getPlans and PricingTable are this project's own helpers)
export const revalidate = 300 // rebuild at most every 5 minutes

export default async function PricingPage() {
  const plans = await getPlans()
  return <PricingTable plans={plans} />
}
```
Performance cost: zero for visitors. The regeneration happens asynchronously.
Layer 2: Client-side interactivity. Form validation, tooltips, accordions, and theme toggles run in the browser after the static page loads. These are small JavaScript modules that enhance the page without blocking initial render.
```tsx
import dynamic from 'next/dynamic'

// Dynamic import — the widget's code downloads only when it is actually rendered
const ChatWidget = dynamic(() => import('./ChatWidget'), {
  ssr: false,         // skip server/build-time rendering entirely
  loading: () => null // render nothing while the chunk loads
})
```
Performance cost: depends on JavaScript size. Keep interactive components under 50KB each.
Layer 3: API routes for mutations. Form submissions, email sending, and data writes go to serverless functions. The page itself remains static — the form posts to /api/contact, and the serverless function handles the logic.
Performance cost: only on the action (form submit), not on page load. The user sees the static page instantly; the API call happens on interaction.
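As a minimal sketch of this pattern — a hypothetical `app/api/contact/route.ts` using the web-standard Request/Response API; the field names and validation rules are illustrative, and actual email delivery is omitted:

```typescript
// Hypothetical contact-form handler sketch, not our exact production code.
// The static page posts here; this function validates and (in a real app) sends mail.

type ContactBody = { name?: string; email?: string; message?: string }

// Pure validation function, easy to unit-test without a running server
export function validateContact(body: ContactBody): string | null {
  if (!body.email || !/^\S+@\S+\.\S+$/.test(body.email)) return 'valid email required'
  if (!body.message || body.message.trim().length === 0) return 'message required'
  return null
}

export async function POST(request: Request): Promise<Response> {
  const body = (await request.json()) as ContactBody
  const error = validateContact(body)
  if (error) {
    return new Response(JSON.stringify({ error }), {
      status: 400,
      headers: { 'content-type': 'application/json' },
    })
  }
  // await sendMail(body) — email delivery omitted from this sketch
  return new Response(JSON.stringify({ ok: true }), {
    status: 200,
    headers: { 'content-type': 'application/json' },
  })
}
```

Keeping validation in a pure function means the logic can be tested without booting anything, which matches the rest of the static-first workflow.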
Layer 4: Edge functions for personalization. Lightweight logic at the CDN edge can modify static responses per visitor: A/B test variants, geo-specific content, or authentication checks. Cloudflare Workers and Vercel Edge Functions execute in under 50ms.
Performance cost: 5-50ms added to TTFB, which is still faster than SSR.
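A framework-free sketch of the idea — the `x-vercel-ip-country` header and the variant logic are assumptions for illustration; Cloudflare Workers expose geo data differently, via `request.cf`:

```typescript
// Hypothetical edge handler sketch: pick a region-specific variant of a static page.

const EU_COUNTRIES = new Set(['DE', 'FR', 'NL', 'ES', 'IT'])

// Pure decision function: country code in, variant name out
export function pickVariant(countryCode: string | null): 'eu' | 'default' {
  return countryCode !== null && EU_COUNTRIES.has(countryCode) ? 'eu' : 'default'
}

export async function handleRequest(request: Request): Promise<Response> {
  // Vercel sets x-vercel-ip-country; on Cloudflare you would read request.cf.country
  const country = request.headers.get('x-vercel-ip-country')
  const variant = pickVariant(country)
  // In production this would rewrite to the cached static asset for that variant
  return new Response(`variant: ${variant}`, { headers: { 'x-variant': variant } })
}
```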
Layer 5: SSR as a last resort. Pages that are entirely unique per request (user dashboards, search results, personalized feeds) use server-side rendering. But these are typically behind authentication — they don't need SEO optimization or public-facing performance.
The Developer Experience
Counter-intuitively, static-first is simpler to develop than dynamic-first:
No server to manage. Static sites deploy to CDN platforms that handle scaling, SSL, caching, and failover automatically. There's no server to provision, no database to maintain, no caching layer to configure. `git push` → site is live worldwide.
Faster local development. Static site generators run a development server that rebuilds pages in milliseconds on file changes. No database to seed, no server to boot, no Docker compose to orchestrate. `npm run dev` and you're working.
Simpler testing. Static pages can be tested by loading HTML files — no server running, no database state to mock. End-to-end tests are faster and more reliable because there's no server-side variability.
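As a sketch of how small such checks can be — the required tags here are assumptions; adjust them to your own SEO checklist:

```typescript
// Minimal smoke checks over built HTML — no server, no browser, no mocks.

export function hasTitle(html: string): boolean {
  return /<title>[^<]+<\/title>/i.test(html)
}

export function hasMetaDescription(html: string): boolean {
  return /<meta\s+name=["']description["']\s+content=["'][^"']+["']/i.test(html)
}

// In a real suite you would read each file from the build output, e.g.:
//   const html = fs.readFileSync('dist/index.html', 'utf8')
//   assert(hasTitle(html) && hasMetaDescription(html))
```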
Predictable deployments. A static deployment either succeeds (all pages build correctly) or fails (build error). There's no "it works on staging but not production" because there's no runtime environment difference — the output is the same HTML files regardless of where they're served.
Easier debugging. When something goes wrong on a static site, the scope of investigation is narrow: is the HTML correct? Is the CSS correct? Is the JavaScript correct? There's no server log to check, no database query to debug, no caching layer to invalidate. The code you wrote is the code that runs.
Case Studies
B2B SaaS marketing site. A 45-page marketing site with blog, case studies, and a pricing calculator. Built with Next.js in static-first mode. Dynamic layers: ISR for blog posts (on-demand revalidation via CMS webhook), a client-side pricing calculator, and a serverless contact form.
Results: Lighthouse score 98. TTFB 35ms globally. Hosting cost: $0/month (Vercel free tier). Build time: 22 seconds. Maintenance: about 4 hours per quarter for dependency updates.
Multi-language agency site. A 200-page site in 10 languages with a portfolio, team section, and contact forms. Built with Astro (zero JavaScript by default). Dynamic layers: minimal JavaScript for mobile navigation and form submission.
Results: Lighthouse score 100. Total page weight 85KB average. TTFB 28ms. Zero JavaScript on 90% of pages. Build time: 15 seconds for all 200 pages. The site loads faster than any competitor in their industry.
E-commerce product catalog. 2,000 product pages with search, filtering, and a cart. Built with Next.js. Product pages are statically generated with ISR (revalidated when inventory changes via webhook). Search and cart are client-side components. Checkout redirects to Stripe.
Results: Lighthouse score 94. Product pages load in 0.8 seconds. Cart interaction feels instant because the page is already loaded. Hosting: $20/month (Vercel Pro for build minutes). Compared to the previous Shopify site: 40% faster load times, 22% higher conversion rate.
FAQ
Can static-first work for e-commerce? Yes, with the right architecture. Product catalog pages are statically generated (they change infrequently). Search, filtering, and cart are client-side JavaScript. Checkout is a serverless function or hosted checkout (Stripe Checkout). Inventory and pricing updates trigger ISR revalidation. The Shopify vs custom decision is separate from the rendering strategy — you can use static-first with any backend.
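The revalidation trigger can be a small webhook handler. A hypothetical sketch — the secret header name and payload shape are assumptions, and the platform-specific revalidation call (e.g. Next.js `revalidatePath`) is injected as a callback so the core stays testable:

```typescript
// Hypothetical inventory-webhook handler sketch.

// Pure authorization check, testable without a server
export function authorizeWebhook(header: string | null, secret: string): boolean {
  return header !== null && header === secret
}

export async function handleInventoryWebhook(
  request: Request,
  secret: string,
  revalidate: (path: string) => Promise<void> // platform-specific, injected
): Promise<Response> {
  if (!authorizeWebhook(request.headers.get('x-webhook-secret'), secret)) {
    return new Response('unauthorized', { status: 401 })
  }
  const { productSlug } = (await request.json()) as { productSlug: string }
  await revalidate(`/products/${productSlug}`) // regenerate just this one page
  return new Response('revalidated', { status: 200 })
}
```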
How do I handle user authentication in a static-first architecture? Public pages (marketing, blog, documentation) are static. Authenticated pages (/dashboard/*, /account/*) are server-rendered or client-side rendered behind an auth check. The auth boundary is clean: everything before login is static, everything after login is dynamic. Use middleware (Vercel Edge Middleware, Cloudflare Workers) to handle the auth check at the edge without slowing down public pages.
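The boundary logic itself can be tiny. A framework-free sketch — the path prefixes are illustrative; in production this would run inside Vercel Edge Middleware or a Cloudflare Worker:

```typescript
// Hypothetical edge auth boundary sketch. Public paths pass straight through to
// the static CDN response; protected paths require a session cookie.

const PROTECTED_PREFIXES = ['/dashboard', '/account']

export function authDecision(
  pathname: string,
  sessionCookie: string | undefined
): 'pass' | 'redirect-to-login' {
  const isProtected = PROTECTED_PREFIXES.some((p) => pathname.startsWith(p))
  if (!isProtected) return 'pass' // public pages: no check, no added latency
  return sessionCookie ? 'pass' : 'redirect-to-login'
}
```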
What about real-time features (chat, notifications, live data)? Real-time features are always client-side — they connect to WebSocket or Server-Sent Events endpoints after the page loads. The static page renders instantly, and the real-time connection establishes in the background. The user sees the page immediately; live data appears moments later. This is actually the ideal pattern because the page is usable before the real-time features are ready.
Won't build times become a problem as the site grows? For most B2B sites (under 1,000 pages), build times stay under 60 seconds. For larger sites, use ISR — only pre-build high-traffic pages and generate the rest on first request. Next.js handles this natively. A 10,000-page site can have a 30-second build by pre-rendering only the top 500 pages and generating the rest on-demand.
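In Next.js, the pre-render-only-the-top-pages approach looks roughly like this — a sketch, where `getTopSlugs` is a hypothetical helper and the 500-page cutoff mirrors the example above:

```typescript
// app/blog/[slug]/page.tsx segment config sketch — pre-render only the top pages.

// Hypothetical helper: returns the highest-traffic slugs (implementation elsewhere)
declare function getTopSlugs(limit: number): Promise<string[]>

// Slugs not returned below are generated on first request, then cached
export const dynamicParams = true

export async function generateStaticParams() {
  const topSlugs = await getTopSlugs(500) // e.g. top 500 pages by traffic
  return topSlugs.map((slug) => ({ slug }))
}
```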
What frameworks support static-first best? Astro is the purest static-first framework (zero JS by default, opt-in interactivity). Next.js is the most flexible (supports every rendering strategy in one project). Hugo is the fastest builder (10,000+ pages in seconds) but limited to Go templating. For B2B sites with mixed requirements, Next.js with a static-first discipline is what we recommend — and build — at Empirium.