TL;DR / Edge rendering executes server-side rendering (SSR) at CDN edge locations geographically close to users, dramatically reducing Time to First Byte (TTFB) compared to a single origin server.
How It Works
┌────────────┐
│ Origin │
│ Server │
└────────────┘
│
│
┌──────────────┴──────────────┐
│ │
│ │
↓ ↓
┌──────────────┐ ┌──────────────┐
│ Edge Node │ │ Edge Node │
│ Tokyo │ │ Frankfurt │
└──────────────┘ └──────────────┘
│ │
│ │
│ │
│ │
│ │
↓ ↓
┌──────────┐ ┌──────────┐
│ User │ ~20ms │ User │ ~15ms
└──────────┘ └──────────┘
SSR at the edge, not origin. TTFB drops dramatically.
Traditional SSR runs on an origin server — a single location or region. A user in Tokyo requesting a page from a server in Virginia adds approximately 150-200ms of network latency just for the round trip, before any rendering work begins. Edge rendering eliminates this by moving the SSR execution to CDN edge nodes distributed worldwide.
How Edge Rendering Works
CDN providers (Cloudflare, Vercel, Deno Deploy, Netlify, AWS CloudFront) maintain hundreds of points of presence (PoPs) globally. Edge rendering deploys your server-side rendering logic to these PoPs. When a user requests a page, the nearest edge node executes the rendering code, generates the HTML, and responds — all within the local network region. The round-trip latency drops from 150ms+ to 10-30ms.
The edge runtime is typically a lightweight JavaScript environment based on V8 isolates (Cloudflare Workers, Vercel Edge Runtime) or Deno. These are not full Node.js environments — they start in microseconds and have restricted APIs. There is no file system access, limited native module support, and constrained memory. The trade-off is cold start times measured in single-digit milliseconds versus hundreds of milliseconds for traditional serverless functions.
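The request-in, HTML-out shape of an edge render can be sketched with nothing but the Web-standard Request/Response APIs these runtimes share. This is a minimal sketch, not any platform's actual API: the `x-edge-city` header is an assumption for illustration (real platforms surface geolocation differently, e.g. Cloudflare exposes it on `request.cf`).

```typescript
// Minimal edge-style SSR handler using only the Web-standard
// Request/Response APIs that edge runtimes expose. The "x-edge-city"
// header is a stand-in: real platforms surface geolocation differently.
async function handleRequest(request: Request): Promise<Response> {
  // Only request-local data is used: no file system, no Node APIs.
  const city = request.headers.get("x-edge-city") ?? "somewhere";

  // "Rendering" here is plain templating; a real app would call its
  // framework's renderToString equivalent at this point.
  const html = `<!doctype html>
<html><body><h1>Rendered at the edge near ${city}</h1></body></html>`;

  return new Response(html, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}
```

The handler never touches `fs` or opens a socket pool, which is exactly what makes it deployable to a restricted V8 isolate.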
What Runs at the Edge
Not all SSR workloads are suitable for edge rendering. The ideal case is rendering that depends on request-local data (cookies, geolocation, headers) or data from globally replicated stores. Common edge rendering use cases include:
- Personalization — rendering different content based on A/B test cookies, user locale from `Accept-Language`, or geolocation.
- Authentication gates — checking session tokens and rendering authenticated vs. anonymous content.
- Internationalization — serving pre-translated content based on the user's region.
- Feature flags — evaluating flags at the edge to render the appropriate variant.
These cases work well because the data needed for rendering is either in the request itself or available from edge-compatible data stores (KV stores, globally replicated databases like CockroachDB, PlanetScale, or Turso).
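Deriving render inputs from the request alone can be sketched as follows. The cookie name `ab-bucket` and the variant names are assumptions for illustration, not any platform's convention:

```typescript
// Sketch: choosing a render variant from request-local data only.
// Cookie name "ab-bucket" and the variant names are illustrative.
interface RenderInput {
  variant: "control" | "treatment";
  locale: string;
}

function deriveRenderInput(request: Request): RenderInput {
  // Parse the A/B bucket out of the Cookie header.
  const cookie = request.headers.get("cookie") ?? "";
  const match = cookie.match(/(?:^|;\s*)ab-bucket=([^;]+)/);
  const variant = match?.[1] === "treatment" ? "treatment" : "control";

  // Take the user's first preferred language from Accept-Language.
  const lang = request.headers.get("accept-language") ?? "en";
  const locale = lang.split(",")[0].trim();

  return { variant, locale };
}
```

Because everything needed is already in the request headers, no cross-region data fetch is required before rendering can begin.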
The Data Proximity Problem
Edge rendering's biggest challenge is data access. Your rendering code runs at the edge, but your database likely runs in one region. A Tokyo edge node rendering a page that queries a Virginia database still incurs the cross-region latency — it just moved the problem from the user-to-server hop to the server-to-database hop.
Solutions include globally replicated databases (read replicas in multiple regions), edge-compatible key-value stores (Cloudflare KV, Vercel Edge Config), and aggressive caching strategies. Some architectures use a split approach: render the page shell at the edge with cached/static data, then stream or lazy-load personalized data from the origin.
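The split approach can be sketched with the Web Streams API, which edge runtimes support. Here `fetchProfileHtml` is a hypothetical stand-in for the slow cross-region origin or database call:

```typescript
// Sketch of the split approach: flush a cached/static shell immediately,
// then stream personalized markup into the same response body.
// fetchProfileHtml is a hypothetical stand-in for an origin round trip.
async function fetchProfileHtml(): Promise<string> {
  return "<p>Welcome back, Ada</p>"; // placeholder for the origin call
}

function renderStreaming(): Response {
  const encoder = new TextEncoder();
  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      // Shell goes out first: TTFB does not wait on the origin.
      controller.enqueue(
        encoder.encode('<!doctype html><body><div id="shell">Loading…</div>'),
      );
      // Personalized fragment arrives later in the same response.
      controller.enqueue(encoder.encode(await fetchProfileHtml()));
      controller.enqueue(encoder.encode("</body>"));
      controller.close();
    },
  });
  return new Response(stream, {
    headers: { "content-type": "text/html; charset=utf-8" },
  });
}
```

The user sees the shell at edge latency while the personalized fragment still pays the cross-region cost, just without blocking first paint.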
Edge vs. Serverless vs. Traditional SSR
Traditional SSR runs on long-lived servers — always warm, full Node.js, unlimited APIs, but fixed location. Serverless SSR (AWS Lambda, GCP Cloud Functions) runs on-demand with auto-scaling, but cold starts can add 200-500ms and execution is still region-bound. Edge SSR runs on-demand at the nearest PoP with single-digit-millisecond cold starts, but with runtime restrictions.
The progression is a trade-off between capability and latency. Full Node.js gives you everything but at fixed locations. Edge gives you minimal latency but constrains your runtime. Most production architectures combine both: edge rendering for the page shell and personalization, origin functions for heavy data operations.
Framework Support
Next.js supports edge rendering via `export const runtime = 'edge'` on route segments. Remix can run on Cloudflare Workers. Nuxt supports edge deployment through Nitro. SvelteKit can deploy to edge platforms via adapters. Astro supports Cloudflare and Deno Deploy. Each framework handles the edge runtime constraints differently, but all require awareness of the restricted API surface.
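In Next.js, for example, the opt-in is a one-line segment config export (configuration fragment only; the file path and the rest of the route are illustrative):

```typescript
// app/profile/page.tsx -- Next.js App Router segment config (fragment).
// Opts this route's SSR into the edge runtime; the path is illustrative.
export const runtime = 'edge';
```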
Gotchas
- No Node.js APIs at the edge. `fs`, `child_process`, `net`, `crypto` (the Node version), and most native modules are unavailable. Libraries that depend on these will not work. Always check your dependency tree for Node.js-specific imports.
- Database connections are ephemeral. Edge functions cannot maintain persistent connection pools. Use HTTP-based database drivers (Neon serverless driver, PlanetScale serverless driver) or connection poolers like PgBouncer.
- Memory and CPU limits are strict. Cloudflare Workers, for example, cap memory at 128MB, with CPU time limits that vary by plan (milliseconds on the free tier, up to 30 seconds on paid plans). Complex rendering (large pages, heavy computation) may exceed these.
- Edge caching strategy is critical. Without caching, every request triggers a full render. Implement stale-while-revalidate patterns or incremental static regeneration at the edge to avoid unnecessary compute.
- Cold starts are fast but not zero. V8 isolate cold starts are ~1-5ms, but if your code imports large libraries, the module evaluation adds overhead on the first invocation.
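The stale-while-revalidate pattern from the caching gotcha above can be sketched in a few lines. The in-memory Map, key scheme, and timings here are illustrative only; a real edge deployment would use the platform's Cache API or a KV store rather than per-isolate memory:

```typescript
// Minimal in-memory stale-while-revalidate sketch. Illustrative only:
// real edge platforms would use the Cache API or a KV store instead.
interface CacheEntry {
  html: string;
  expiresAt: number;
}

const htmlCache = new Map<string, CacheEntry>();

async function renderWithSWR(
  key: string,
  render: () => Promise<string>,
  ttlMs: number,
): Promise<string> {
  const hit = htmlCache.get(key);
  if (hit) {
    if (hit.expiresAt < Date.now()) {
      // Stale: serve the old HTML now, refresh in the background.
      void render().then((html) =>
        htmlCache.set(key, { html, expiresAt: Date.now() + ttlMs }),
      );
    }
    return hit.html; // hit (fresh or stale): no render on the request path
  }
  // Miss: only this first request pays the full render cost.
  const html = await render();
  htmlCache.set(key, { html, expiresAt: Date.now() + ttlMs });
  return html;
}
```

Every request after the first returns cached HTML immediately; stale entries are refreshed off the request path, so users never wait on a re-render.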