Code Splitting

TL;DR: Code splitting breaks your application into multiple bundles loaded on demand, so users only download the JavaScript they need for the current view.

How It Works

 ┌────────────┐          ┌────────────┐
 │   Entry    │          │  Route A   │
 │   Chunk    │────┐────→│   Chunk    │────┐
 │            │    │     │            │    │     ┌────────────────┐
 └────────────┘    │     └────────────┘    │     │     Shared     │
                   │                       ┌────→│  Vendor Chunk  │
                   │                       │     │                │
                   │                       │     └────────────────┘
                   │     ┌────────────┐    │
                   │     │  Route B   │    │
                   └────→│   Chunk    │────┘
                         │            │
                         └────────────┘


 Initial load: Entry chunk only


Code splitting is the practice of dividing application code into discrete chunks that are loaded independently. Rather than shipping a monolithic bundle containing every route, component, and library, the application loads an initial chunk to render the first screen and defers everything else until it is actually needed.

Entry Points and Chunks

A chunk is a unit of output from a bundler — a single JavaScript file that the browser downloads and executes. The bundler determines chunk boundaries through two mechanisms: explicit entry points (multiple entry configs in webpack, or multiple input files in Rollup) and dynamic split points (calls to import()). Each entry point produces its own chunk. Each dynamic import creates an async chunk that the runtime loads on demand.
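Both mechanisms can be sketched in a minimal hypothetical webpack setup (file names and paths are illustrative):

```javascript
// webpack.config.js — two explicit entry points, each producing its own chunk.
module.exports = {
  entry: {
    main: "./src/index.js",   // → main.[hash].js chunk
    admin: "./src/admin.js",  // → admin.[hash].js chunk
  },
  output: { filename: "[name].[contenthash].js" },
};

// src/index.js — a dynamic split point: the bundler sees this import() call
// and emits ./report as a separate async chunk, fetched only on click.
document.getElementById("report-btn").addEventListener("click", async () => {
  const { renderReport } = await import("./report");
  renderReport();
});
```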

Route-Based Splitting

The most common splitting strategy is route-based: each route in a single-page application becomes its own chunk. When the user navigates to /dashboard, the framework loads the dashboard chunk. When they navigate to /settings, a different chunk loads. The entry chunk contains the router, the framework runtime, and shared utilities — everything needed to bootstrap and determine which route chunk to fetch.

In React, this is implemented via React.lazy() wrapping a dynamic import. In Vue, it is the default behavior of async component definitions in vue-router. Next.js and Nuxt automatically split by page. The key insight is that route transitions are natural loading boundaries — users already expect a brief pause when navigating, making it the ideal moment to fetch new code.
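The core contract behind React.lazy() — trigger the chunk fetch on first use, then reuse it — can be sketched as a tiny helper. lazyOnce is a hypothetical name, not React's implementation, and the fake loader below stands in for a real import() call:

```javascript
// Memoize a chunk loader so the network request happens at most once,
// no matter how many times the component is rendered.
function lazyOnce(loader) {
  let cached = null;         // memoized promise for the module
  return function load() {
    if (cached === null) {
      cached = loader();     // first call kicks off the fetch
    }
    return cached;           // later calls reuse the in-flight/loaded promise
  };
}

// Real usage would be: const loadSettings = lazyOnce(() => import('./Settings'));
// Here a fake loader makes the deduplication observable:
let fetches = 0;
const loadFake = lazyOnce(() => {
  fetches += 1;
  return Promise.resolve({ default: "Settings" });
});
const p1 = loadFake();
const p2 = loadFake(); // same promise as p1; the loader ran only once
```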

Vendor Splitting

Libraries change less frequently than application code. Splitting vendor dependencies (React, lodash, date-fns) into a separate chunk allows the browser to cache them independently. When you deploy a new version of your app, users only re-download the application chunk while the vendor chunk remains in the HTTP cache. Webpack's splitChunks.cacheGroups configuration and Rollup's manualChunks option give explicit control over which modules land in which vendor chunk.
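A common shape for this in webpack (values illustrative, not a definitive setup): everything under node_modules lands in a long-cacheable vendors chunk, and content hashing keeps its filename stable across app-only deploys.

```javascript
// webpack.config.js — route all node_modules code into one "vendors" chunk.
module.exports = {
  optimization: {
    splitChunks: {
      cacheGroups: {
        vendors: {
          test: /[\\/]node_modules[\\/]/, // match anything from node_modules
          name: "vendors",
          chunks: "all",                  // apply to both sync and async chunks
        },
      },
    },
  },
  output: { filename: "[name].[contenthash].js" }, // hash changes only with content
};
```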

Common Chunk Extraction

When two or more async chunks share the same dependency, bundlers can extract that dependency into a shared chunk loaded once and cached. Webpack's splitChunks.minChunks threshold controls when this happens — setting it to 2 means any module imported by at least two chunks gets extracted. Without this optimization, the same library code could appear duplicated across multiple route chunks, increasing total download size.
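A minimal sketch of that threshold in webpack (numbers are illustrative defaults, not recommendations):

```javascript
// webpack.config.js — pull modules shared by two or more async chunks
// into a common chunk instead of duplicating them per route.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: "async",
      minChunks: 2,   // a module must appear in ≥ 2 chunks to be extracted
      minSize: 20000, // but only if the extracted chunk is worth a request
    },
  },
};
```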

Prefetching and Preloading

Code splitting introduces a latency cost: the browser must fetch a chunk before executing it. Prefetching mitigates this by downloading chunks during idle time before the user actually requests them. <link rel="prefetch"> tells the browser to fetch a resource at low priority. <link rel="preload"> fetches at high priority for resources needed imminently.

Webpack supports this declaratively through magic comments: import(/* webpackPrefetch: true */ './Settings') inserts a prefetch link in the document head. Frameworks like Next.js automatically prefetch route chunks when <Link> components scroll into the viewport, making route transitions nearly instantaneous.
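A sketch of both magic comments on lazily loaded routes (paths hypothetical; not runnable outside a React + webpack build):

```javascript
// webpackPrefetch causes the runtime to emit a low-priority
// <link rel="prefetch"> for the Settings chunk during idle time.
const Settings = React.lazy(() =>
  import(/* webpackPrefetch: true */ "./routes/Settings")
);

// webpackPreload is the high-priority variant, for chunks needed imminently.
const Editor = React.lazy(() =>
  import(/* webpackPreload: true */ "./routes/Editor")
);
```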

Granularity Trade-Offs

Splitting too aggressively creates many small chunks, increasing HTTP request overhead. HTTP/2 multiplexing reduces this cost significantly, but there is still per-request overhead from compression dictionaries, TLS record framing, and browser scheduling. Splitting too conservatively produces large chunks that defeat the purpose. The optimal granularity depends on your application's navigation patterns and the size distribution of your modules.

A practical heuristic: split at route boundaries, extract vendor code used across routes into a shared chunk, and use dynamic imports for heavy components below the fold (modals, charts, editors). Measure with Lighthouse and real user monitoring to verify that Time to Interactive improves.

Runtime Chunk Loading

The bundler emits a small runtime (the "chunk loading runtime") in the entry chunk that manages fetching and executing async chunks. In webpack, this runtime maintains a registry of chunk IDs to URLs, handles script tag injection, and resolves the Promise returned by import() once the chunk has loaded and executed. Understanding this runtime is important for debugging chunk loading failures — network errors, CORS issues, or mismatched public paths all surface here.
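The control flow of such a runtime — registry lookup, deduplication of in-flight requests, promise resolution once the chunk registers itself — can be sketched in miniature. All names here are hypothetical, and loadScript is mocked in place of real script-tag injection:

```javascript
const chunkUrls = { 1: "/static/route-a.abc123.js" }; // chunk id → URL registry
const installedChunks = {}; // id → null (loaded) | [resolve, reject, promise]

function loadScript(url, done) {
  // Stand-in for document.head.appendChild(script): simulate the downloaded
  // chunk calling back into the runtime with its modules.
  setTimeout(() => done({ modules: { "./RouteA": () => "route A code" } }), 0);
}

function loadChunk(id) {
  if (installedChunks[id] === null) return Promise.resolve(); // already loaded
  if (installedChunks[id]) return installedChunks[id][2];     // fetch in flight
  const promise = new Promise((resolve, reject) => {
    installedChunks[id] = [resolve, reject];
    loadScript(chunkUrls[id], (chunk) => {
      const [res] = installedChunks[id];
      installedChunks[id] = null; // mark the chunk as installed
      res(chunk.modules);         // resolves the promise import() returned
    });
  });
  installedChunks[id].push(promise);
  return promise;
}
```

A failed fetch would reject this promise, which is why network errors and bad public paths surface as import() rejections.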

Gotchas

  • Over-splitting increases waterfall depth — if chunk A loads chunk B which loads chunk C, the user waits for three sequential network requests. Keep the dependency chain between chunks shallow.
  • Shared state across chunks needs coordination — if a lazily loaded chunk creates a new instance of a singleton (e.g., a store or SDK client), you get duplicated state. Shared modules must be in a common chunk or the entry chunk.
  • Dynamic imports with variable paths prevent static analysis — import(`./pages/${name}`) forces the bundler to either include every possible match in a single chunk or create a chunk per matching file, which may not be what you want.
  • Flash of loading state — without prefetching, route-based splits show a loading spinner on every navigation. Users perceive this as slower than a monolithic bundle where the code was already present.
  • Cache invalidation across chunks — changing one module can invalidate the content hash of multiple chunks. Use stable chunk IDs (moduleIds: 'deterministic' in webpack) to minimize cache busting.
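The last gotcha maps to a short config sketch (option values shown are real webpack 5 options, but the combination is illustrative):

```javascript
// webpack.config.js — stabilize IDs so editing one module does not ripple
// content-hash changes into unrelated chunks.
module.exports = {
  optimization: {
    moduleIds: "deterministic", // stable hash-based module IDs across builds
    chunkIds: "deterministic",  // same for chunk IDs
    runtimeChunk: "single",     // isolate the chunk-loading runtime, which changes often
  },
};
```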