Garbage Collection Timing

TL;DR: V8's garbage collector uses a generational strategy with minor GC (~1ms) for short-lived objects and major GC (5-50ms+) for long-lived objects, and poorly timed major collections cause visible frame drops.

How It Works

 ┌──────────────┐          ┌──────────────┐          ┌──────────────┐
 │  New Space   │ survives │  Old Space   │  blocks  │    Frame     │
 │ (Young Gen)  │─────────→│  (Old Gen)   │─────────→│    Budget    │
 └──────────────┘          └──────────────┘          └──────────────┘

  Minor GC: ~1ms            Major GC: 5-50ms          16.6ms @ 60fps
  Scavenge alg.             Mark-Sweep-Compact
                                                      Major GC = jank!

V8 (Chrome, Edge, Node.js) divides the heap into two generational spaces based on the observation that most objects die young. Understanding when and how garbage collection pauses occur is essential for maintaining smooth 60fps rendering.

New Space (Young Generation) is a small region (typically 1-8MB per semi-space) where all new allocations land. It uses the Scavenge algorithm: the space is divided into two semi-spaces (from-space and to-space). Allocations fill the from-space sequentially (bump-pointer allocation, extremely fast). When from-space is full, a minor GC fires. It copies live objects from from-space to to-space (compacting them), then swaps the spaces. Dead objects are implicitly collected -- they simply are not copied. This is fast (sub-millisecond to a few milliseconds) because it only visits live objects, and most objects in new space are dead.

Objects that survive two minor GC cycles are promoted to Old Space. The assumption is that if an object survived two collections, it is likely long-lived and not worth repeatedly copying.
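
The scavenge-and-promote cycle described above can be modeled in a few lines of JavaScript. This is a toy sketch, not V8's implementation -- real scavenging copies raw memory between pages, and the object shape, liveness check, and promotion threshold here are illustrative assumptions:

```javascript
// Toy model of Scavenge with promotion, for illustration only.
const PROMOTION_AGE = 2; // survive two minor GCs -> promote to old space

function minorGC(fromSpace, isLive, oldSpace) {
  const toSpace = [];
  for (const obj of fromSpace) {
    if (!isLive(obj)) continue;       // dead objects are simply not copied
    obj.age += 1;
    if (obj.age >= PROMOTION_AGE) {
      oldSpace.push(obj);             // promoted: likely long-lived
    } else {
      toSpace.push(obj);              // evacuated (and compacted) into to-space
    }
  }
  return toSpace;                     // to-space becomes the new from-space
}

// Usage: three objects, only two still reachable.
const live = new Set(['a', 'b']);
let newSpace = [
  { id: 'a', age: 1 },  // survived one prior minor GC
  { id: 'b', age: 0 },
  { id: 'c', age: 0 },  // garbage: never copied, implicitly collected
];
const oldSpace = [];
newSpace = minorGC(newSpace, (o) => live.has(o.id), oldSpace);
// 'a' reaches age 2 and is promoted; 'b' stays young; 'c' vanishes
```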

Old Space (Old Generation) is much larger (hundreds of MB to GB) and uses the Mark-Sweep-Compact algorithm. Marking traverses the object graph from roots, marking all reachable objects. Sweeping reclaims unmarked memory. Compaction moves surviving objects together to eliminate fragmentation. Major GC is significantly more expensive because it must traverse the entire live object graph, which can be enormous.
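
The mark and sweep phases can be sketched over an explicit object graph (field names here are assumed for illustration; real major GC also compacts, which this omits):

```javascript
// Minimal mark-sweep: mark everything reachable from roots, sweep the rest.
function markSweep(heap, roots) {
  const marked = new Set();
  const stack = [...roots];
  while (stack.length) {              // mark: traverse the object graph
    const obj = stack.pop();
    if (marked.has(obj)) continue;
    marked.add(obj);
    stack.push(...obj.refs);
  }
  // sweep: anything unmarked is unreachable and reclaimed
  return heap.filter((obj) => marked.has(obj));
}

// Usage: a -> b is reachable from the root; c is garbage.
const b = { id: 'b', refs: [] };
const a = { id: 'a', refs: [b] };
const c = { id: 'c', refs: [] };
const survivors = markSweep([a, b, c], [a]);
```

The cost model falls out of the sketch: marking work scales with the number of live objects and edges, which is why a large retained graph makes major GC expensive.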

V8 mitigates major GC pauses through several techniques. Incremental marking breaks the marking phase into small steps interleaved with JavaScript execution. Rather than pausing for the entire mark phase, V8 marks a batch of objects, yields to JavaScript, marks another batch, and so on. The tri-color marking scheme (white/gray/black) tracks progress, and a write barrier catches mutations to already-scanned objects during the incremental phase.
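
The tri-color scheme and write barrier can be sketched as follows (a conceptual model with assumed names; V8's actual barrier is emitted at the machine-code level, not in JavaScript):

```javascript
// Tri-color incremental marking: white = unvisited (absent from the map),
// gray = queued for scanning, black = fully scanned.
function makeMarker(roots) {
  const color = new Map();
  const gray = [...roots];
  roots.forEach((r) => color.set(r, 'gray'));

  return {
    // One incremental step: scan up to `budget` gray objects, then yield
    // back to "JavaScript" (the caller). Returns true when marking is done.
    step(budget) {
      while (budget-- > 0 && gray.length) {
        const obj = gray.pop();
        color.set(obj, 'black');
        for (const child of obj.refs) {
          if (!color.has(child)) { color.set(child, 'gray'); gray.push(child); }
        }
      }
      return gray.length === 0;
    },
    // Write barrier: if a black (already-scanned) object gains a reference
    // to a white object, re-gray the target so it is not missed.
    writeBarrier(src, target) {
      src.refs.push(target);
      if (color.get(src) === 'black' && !color.has(target)) {
        color.set(target, 'gray');
        gray.push(target);
      }
    },
    isMarked: (obj) => color.get(obj) === 'black',
  };
}

// Usage: the mutator adds an edge mid-marking; the barrier keeps it alive.
const root = { refs: [] };
const m = makeMarker([root]);
m.step(1);                       // root is now black
const late = { refs: [] };
m.writeBarrier(root, late);      // would be missed without the barrier
while (!m.step(1)) {}            // finish marking incrementally
```

Without the barrier, `late` would stay white after `root` was scanned and be collected despite being reachable -- exactly the mutation hazard the barrier exists to catch.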

Concurrent marking moves most of the marking work to background threads while JavaScript runs on the main thread. Only the initial root scanning and a brief finalization step (re-scanning objects modified during concurrent marking) happen on the main thread. This reduced major GC main-thread pauses from 5-50ms down to 1-5ms in many cases.

Concurrent sweeping and parallel compaction similarly offload work. Sweeping happens entirely off the main thread. Compaction uses multiple helper threads for parallel evacuation of pages.

Idle-time GC leverages Chrome's task scheduler. Between animation frames and during idle periods, V8 performs incremental marking steps and minor GC work. The requestIdleCallback model aligns with this -- V8 tries to schedule GC work when the browser reports idle time, avoiding frame-critical paths.

Despite all optimizations, major GC still causes observable jank in certain scenarios. Large allocation spikes (parsing a big JSON response, creating thousands of objects in a loop) fill new space rapidly, triggering frequent minor GCs and premature promotions that bloat old space, eventually triggering major GC. Retained allocation patterns -- creating objects that persist (added to arrays, maps, caches) -- defeat the generational hypothesis and push work onto the expensive major GC path.
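
The usual fix for allocation spikes is to stop producing garbage in hot paths. A sketch of the same computation written with per-call allocation and with a reused buffer (identifiers are illustrative, not a real API):

```javascript
// Per-call allocation: a fresh result array on every call -> GC pressure
// when called every frame.
function normalizeAllocating(values) {
  const max = Math.max(...values);
  return values.map((v) => v / max);
}

// Reused buffer: zero allocations per call once the buffer exists.
const out = new Float64Array(4);   // sized for this demo; reused every call
function normalizeReusing(values) {
  const max = Math.max(...values);
  for (let i = 0; i < values.length; i++) out[i] = values[i] / max;
  return out;                      // caller must copy if it keeps the result
}
```

The trade-off is the usual one for buffer reuse: the returned array is overwritten on the next call, so callers that retain results must copy them explicitly.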

Memory pressure triggers more aggressive GC. When system memory is low, V8 performs more frequent and more thorough collections, including forced compaction. On memory-constrained devices, GC pauses become both more frequent and longer.

The Chrome DevTools Performance panel shows GC events as yellow markers labeled "Minor GC" and "Major GC" in the flame chart. The "Memory" checkbox adds a heap size timeline overlay, showing the sawtooth pattern of allocation and collection. Sudden vertical drops indicate GC events; the height of each tooth indicates allocation rate.

Allocation profiling (Memory panel > "Allocation sampling") reveals which code paths create the most GC pressure. Reducing allocation rate -- through object pooling, avoiding intermediate array allocations, reusing buffers -- is more effective than trying to control GC timing directly.
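
A minimal object pool might look like this (a sketch with assumed names, not a library API):

```javascript
// Minimal object pool: acquire reuses a freed object when one is available;
// release must reset state before returning it to the pool.
class Pool {
  constructor(create, reset, size = 16) {
    this.create = create;
    this.reset = reset;
    this.free = Array.from({ length: size }, create); // pre-allocate
  }
  acquire() {
    return this.free.pop() ?? this.create();  // grow only when exhausted
  }
  release(obj) {
    this.reset(obj);                          // manual state reset
    this.free.push(obj);
  }
}

// Usage: pooled 2D points for per-frame particle math.
const points = new Pool(() => ({ x: 0, y: 0 }), (p) => { p.x = 0; p.y = 0; }, 2);
const p1 = points.acquire();
p1.x = 42;
points.release(p1);
const p2 = points.acquire();  // same object as p1, with state reset
```

Forgetting the reset step is the classic pooling bug: stale state from a previous user of the object leaks into the next, which is why the Gotchas below call pooling out as its own complexity cost.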

Gotchas

  • You cannot trigger GC directly from JavaScript -- global.gc() exists only when V8 is run with --expose-gc; in browsers, you have zero direct control over when GC runs
  • Object pooling reduces GC pressure but adds complexity -- reusing objects avoids allocation/collection overhead, but you must manually reset state, and pool sizing is its own optimization problem
  • Hidden classes and inline caches add per-shape overhead -- creating objects with different property orders generates more hidden classes, increasing old-space metadata that survives GC
  • Large ArrayBuffers and typed arrays skip new space entirely -- allocations over the large-object threshold land in large object space, which is collected with the old generation and never compacted; fragmentation from repeated large buffer allocation/deallocation accumulates
  • WeakRef callbacks (FinalizationRegistry) are not deterministic -- they fire at GC's discretion, not at a predictable time; do not rely on them for time-sensitive cleanup
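
The first gotcha in practice: in Node, gc() is only exposed when the process is started with --expose-gc, so code that wants to force a collection (e.g. in memory benchmarks) should guard for it rather than assume it exists. A small sketch:

```javascript
// gc() exists only under `node --expose-gc`; in browsers it never exists.
// Fall back to a no-op so the same code runs everywhere.
const forceGC = typeof globalThis.gc === 'function'
  ? globalThis.gc
  : () => { /* no-op: --expose-gc not set */ };

forceGC(); // safe either way; actually collects only under --expose-gc
```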