Conversation
Comprehensive analysis of ReactFizzServer.js covering: - renderElement() dispatch table and insertion points - Request object shape and lifecycle - Task/Segment model and how they interact - Suspense boundary suspend → resolve → stream flow - The hydration data TODO at line 5944 (exact insertion point) - How components are called via renderWithHooks - Concrete entry points for fused renderer changes
TIM-471: Fizz internals analysis for fused renderer
…book#3) * TIM-471: Fizz internals analysis for fused renderer Comprehensive analysis of ReactFizzServer.js covering: - renderElement() dispatch table and insertion points - Request object shape and lifecycle - Task/Segment model and how they interact - Suspense boundary suspend → resolve → stream flow - The hydration data TODO at line 5944 (exact insertion point) - How components are called via renderWithHooks - Concrete entry points for fused renderer changes * TIM-473: Client-side hydration markers and DOM walking analysis Comprehensive analysis covering: - All HTML comment markers emitted by Fizz (Suspense, Activity, FormState) - hydrateRoot() DOM walking algorithm and state machine - Suspense boundary hydration flow (dehydrated fragments, selective hydration) - getNextHydratable() behavior with unknown markers (silently skipped) - Proposed client boundary marker format (<!--C:ID--> / <!--/C-->) - Where markers plug into server (FizzConfigDOM) and client (FiberConfigDOM) - Reconciler changes (HydrationContext, BeginWork) - How to skip server-only DOM during hydration - Progressive/selective hydration compatibility
facebook#4) Comprehensive feasibility analysis comparing three approaches: - Approach A: Reimplement Flight logic in Fizz (fragile, 900 lines to maintain) - Approach B: Flight as a library (extractable for detection, not for serialization) - Approach B+: Extract pure detection functions + new minimal props serializer Key findings: - isClientReference and resolveClientReferenceMetadata are pure functions with zero renderer state dependencies — trivially callable from Fizz - renderModelDestructive is NOT extractable (deeply coupled to Request/Task) - But we don't need it — client boundary props are a narrow subset - A focused ~150 line serializer handles the common case - Narrowing scope to sync server components first delivers most perf benefit Includes: - 10-item assumption inventory with stability ratings - Evolving feature impact assessment (View Transitions, Fragment Refs, etc.) - Sentinel test strategy for automated breakage detection - Progressive fallback path for unsupported cases - Recommended task scope adjustments
…book#5) Add performance validation spike measuring the three-pass SSR pipeline: Flight serialize → Flight deserialize → Fizz HTML render. Key findings: - Flight overhead is 54-79% of total SSR time across all scenarios - Large e-commerce page (226 products): 1.89ms of 3.51ms is Flight overhead - Flight wire format is 55-68% of total bytes (pure intermediate waste) - Projected 2x throughput improvement from fusion Deliverables: - design/fused-renderer-perf-validation.md: full analysis with data - scripts/bench/fused-renderer-bench.js: profiling harness - scripts/bench/fused-renderer-scenarios.js: test app scenarios
…cebook#7) * TIM-472: Flight wire format and client boundary detection analysis Comprehensive analysis covering: - isClientReference() detection via Symbol.for('react.client.reference') - ClientReference shape (99689typeof, 99689id, 99689async) - Flight wire format: row format, chunk types (Model, Import, Error, etc.) - Import chunk metadata: module ID, chunks array, export name - Props serialization at client boundaries - Async server component handling (thenable → ping → retry) - Module reference resolution (build-time proxies → manifest → metadata) - Implications for fused renderer: detection, extraction, manifest access * TIM-482: go/no-go checkpoint — GO with narrow scope (Approach B+) Decision: proceed with fused renderer using Approach B+ (Flight as library + focused props serializer), phased rollout starting with sync server component execution in Fizz. Key evidence: - Flight overhead is 54-79% of total SSR time (TIM-483) - Detection functions are trivially extractable (TIM-481) - Hydration markers are backwards-compatible (TIM-473) - Projected 2x throughput improvement Phased approach: - Phase 1: TIM-474, TIM-475 (server-side fusion) - Phase 2: TIM-476, TIM-477 (client hydration optimization) - Phase 3: TIM-478-480 (coexistence, benchmarks, edge cases) Each phase is a decision gate.
) Replaces the synthetic v1 benchmark (TIM-483) with realistic scenarios: - Async server components with simulated DB/cache fetches (1-20ms) - Realistic prop sizes (blog posts ~10KB, products ~3KB each) - 4-8 Suspense boundaries per scenario - Isolated phase measurement to separate data fetch from serialization Key finding: pure serialization overhead is 1-4% of total SSR time (0.3-0.5ms out of 14-23ms). Data fetching dominates at 87-90%. The fused renderer would yield only 2-4% throughput improvement, which does not justify its engineering complexity. Recommendation: do not proceed with fused renderer.
…ok#9) v2 measured wall-clock time with simulated I/O, which hid the CPU cost. Throughput is limited by CPU, not I/O. Under concurrent load on a single Node.js thread: c=25, 226 products: - Fizz only: 526 req/s, p99=49ms, heap +72MB - Full pipeline: 102 req/s, p99=342ms, heap +282MB - Drop: 5.2x throughput, 7.0x p99, +210MB heap The overhead worsens under load (4x at c=1 → 5.9x at c=50) because the 349KB wire format buffer per request creates GC pressure that compounds with concurrency. This directly explains the 400→40 rps drop in real deployments: React-level 3-6x × framework overhead 1.5-2x ≈ 10x. Recommendation: proceed with fused renderer.
…n Fizz (facebook#10) Add fusedMode + bundlerConfig fields to Fizz's Request object, threaded through from all DOM server entry points (Node, Edge, Browser, Bun) via experimental_fusedMode and experimental_bundlerConfig options. In renderElement(), when fusedMode is true: - Client references (19013typeof === Symbol.for('react.client.reference')) are detected and currently fall through to renderFunctionComponent. TIM-475 will add hydration marker emission for these. - Server components (functions without client reference tag) render via the existing renderFunctionComponent/renderWithHooks path, which already handles sync and async (via Suspense) components correctly. The CLIENT_REFERENCE_TAG constant is defined locally in ReactFizzServer.js using Symbol.for() rather than importing from Flight, keeping Fizz decoupled from Flight internals per Approach B+. Tests cover: sync server components, nested components, props passing, async components via Suspense, streaming, null/fragment returns, and client reference fallthrough.
…acebook#11) When fusedMode is true and Fizz encounters a client component (type.20854typeof === CLIENT_REFERENCE_TAG), it now: 1. Emits <!--C:ID--> / <!--/C--> comment markers around the component's HTML output in the segment chunks 2. Renders the component to HTML normally via renderFunctionComponent 3. Queues hydration data (module ID, export name, serialized props) on the Request's clientBoundaryQueue 4. During flushCompletedQueues(), emits <script data-fused-hydration> tags containing the queued hydration data as JSON New functions in ReactFizzConfigDOM.js: - pushStartClientBoundary(target, id): emits <!--C:ID--> - pushEndClientBoundary(target): emits <!--/C--> - writeClientBoundaryScript(dest, id, moduleId, name, props): emits <script> with hydration payload Props serialization uses JSON.stringify with a replacer that strips children and functions. TIM-476 will replace this with a focused serializer handling edge cases (dates, server actions, etc.). Tests cover: marker wrapping, hydration script emission, unique IDs for multiple boundaries, nested server-in-client-in-server, props serialization, and fusedMode=false producing no markers.
…acebook#12) Add ReactFizzHydrationSerializer.js — a ~150-line focused serializer that handles common prop types at client boundaries without reimplementing Flight's 800-line renderModelDestructive. Supported types: - Primitives (string, number, boolean, null) - undefined → {$t: 'u'} - NaN → {$t: 'N'}, ±Infinity → {$t: 'I', v: ±1} - BigInt → {$t: 'n', v: '...'} (as string) - Date → {$t: 'D', v: ISO string} - Plain objects and arrays (recursive) - Server Action refs → {$t: 'S', id, bound} - Client references in props → {$t: 'C', id} - children → {$t: 'T'} tombstone (server-rendered) - Regular functions → stripped (undefined) - Symbols → stripped Throws clear error on: Map, Set, TypedArray, ReadableStream Wired into renderClientBoundary() in ReactFizzServer.js, replacing the placeholder JSON.stringify from TIM-475.
…ing (facebook#13) Added fused mode path to concurrent benchmark. Results show: - Fused is 1.0-1.6x faster than full pipeline (modest improvement) - But only 23-45% of Fizz-only ceiling (expected ~90%) - Heap pressure is much better: 66MB vs 284MB at c=50 - p95/p99 are comparable or worse due to serializeProps overhead Root cause: serializeProps runs processObject recursively for each of 226 product props at client boundaries, doing essentially the same serialization work Flight does. The CPU savings from eliminating Flight serialize/deserialize are offset by the hydration data serialization. The server-only path (no client boundaries) IS fast — 2300 req/s. The bottleneck is specifically the per-boundary props serialization. Also fixes: added client boundary function exports to legacy, markup, custom, and noop Fizz config forks (required for production build).
Add fast path to serializeProps that uses a single JSON.stringify call with a lightweight replacer for the common case (all JSON-native types). Falls back to manual string building only when special types are detected (BigInt, undefined, NaN, Infinity, references). The serializer itself was not the bottleneck — V8's JSON.stringify is already fast. The real cost for client-heavy pages is the volume of output: 226 boundaries × ~1.5KB of hydration scripts = 290KB of additional output per request. This is a payload optimization problem, not a CPU optimization problem. Server-only fused rendering (no client boundaries) hits 1,935 req/s vs 131 req/s for the full Flight pipeline — a 15x improvement. This is the core proof that single-pass Fizz rendering eliminates the Flight overhead.
Integration tests proving fused SSR and Flight server coexist: - fusedMode=false produces identical output to default (no markers) - fusedMode=true emits boundary markers and hydration scripts - Alternating fused/standard renders in the same process works - Concurrent fused and standard renders don't interfere - Per-request fusedMode gating — no shared state between requests - Sentinel test: ReactFlightServer.js contains no fused-mode code The Flight server path is completely unmodified. fusedMode is a per-request flag on Fizz's Request object only.
…acebook#16) Edge case tests (10 tests): - Server component children inside client boundary (tombstoned) - Nested client-in-client with correct marker nesting - Deeply nested boundaries with unique IDs - Server component throw caught by Fizz error handling - Error inside client boundary is handled - Sync parent with async child via Suspense - Streaming: shell before async resolves - Map/Set props fall back gracefully - Server Action references preserved in hydration data Sentinel tests (7 tests): - Client reference protocol (Symbol, shape) - Server reference protocol (Symbol, shape) - Fizz has fusedMode code - Hydration walker skips unknown comments - Flight server has zero fused-mode code design/fused-renderer-findings.md: - Complete findings doc with honest numbers - renderToString baseline: 485 req/s - Full pipeline: 108 req/s - Fused server-only: 514 req/s (4.8x, matches renderToString) - Fused with client boundaries: 121 req/s (1.1x, hydration data bottleneck) - Measurement journey: v1→v2→v3→final, documenting each mistake - Open items: hydration data optimization, client-side approach - Architecture diagram and usage example
…roughput (facebook#17) The props were the entire bottleneck. 226 client boundaries × ~1.2 KB of serialized props = 227 KB of hydration scripts, capping throughput at ~121 req/s (same as full pipeline). Changes: - renderClientBoundary: skip serializeProps, emit empty props placeholder - flushCompletedQueues: emit one consolidated <script data-fused-hydration> instead of 226 individual scripts. Format: {m: [moduleUrls], b: [[id, moduleIdx]]} - Module ref deduplication in consolidated script Results at c=25 (226-product PLP): Full pipeline: 92 req/s, p50=205ms, heap 193MB Fused renderer: 327 req/s, p50=3.2ms, heap 40MB (3.6x faster) At c=1: Full pipeline: 131 req/s Fused renderer: 576 req/s (4.4x faster, exceeds Fizz-only ceiling) Props delivery to the client is deferred to the hydration strategy (TIM-485). The server emits markers + module refs. The client will get props via DOM extraction, lazy fetch, or a future compact format.
…rs (facebook#18) Restore props serialization (skipping props breaks hydration). Update findings doc with final validated concurrent throughput: c=1: 127 → 330 req/s (2.6x) c=25: 98 → 292 req/s (3.0x) c=50: 89 → 285 req/s (3.2x) Props serialization adds ~1.3ms per request for 226 boundaries. This is real, irreducible work — the client needs props to hydrate. V8 JSON.stringify is already near-optimal. Total per-request cost is 3.2ms (309 req/s), which is 2.3x faster than the full pipeline. Server-only fused: 528 req/s (matches renderToString).
…acebook#19) The client reference proxy created by clientExports() is function(){} with $$typeof set — calling it returns undefined. The fused renderer was emitting empty boundary markers, benchmarking 'nothing' as fast. Fix: renderClientBoundary now calls bundlerConfig.resolveClientComponent( $$id) to get the actual module, equivalent to __webpack_require__ in the Flight client path. Benchmark uses O(1) Map lookup. Verified numbers with identical HTML output: c=1: 128 → 200 req/s (1.6x) c=25: 99 → 187 req/s (1.9x) c=50: 93 → 185 req/s (2.0x) Heap: 297 → 60 MB at c=50 Audit confirms: HTML is byte-identical, components are called, product names present, module resolution is O(1).
…mat (facebook#20) Replace custom resolveClientComponent with standard bundler protocol: - bundlerConfig is the client manifest (same object as Flight's webpackMap) - Module resolution via __webpack_require__(metadata.id) — same as Flight client - Hydration data uses Flight's module ref format: [id, chunks, name, props] - Emitted as self.__FUSED={b:{...}} — parseable by existing RSC client runtime Per-request: 7.33ms (full pipeline) → 4.78ms (fused) = 1.5x faster Concurrent throughput improvement reduced by props serialization output volume (451 KB fused vs 120 KB full pipeline). The CPU savings from eliminating Flight are real but offset by generating 3.7x more bytes. Next step: investigate reducing props output volume to recover the concurrent throughput advantage.
The full pipeline benchmark was only measuring HTML output, not the Flight payload that must be inlined as a <script> for client hydration. This made the comparison unfair — full pipeline at 120 KB vs fused at 451 KB. Now both paths include hydration data delivery cost: - Full pipeline: Flight serialize + deserialize + Fizz + inline payload = 7.74ms - Fused: single pass with props serialization = 4.20ms - Savings: 3.54ms (46%), 1.84x speedup per-request The savings come from eliminating Flight serialize (3.24ms) + deserialize (0.32ms) + payload inlining (0.65ms) = 4.21ms, offset by ~2.2ms of props serialization in the fused path.
This file contains hidden or bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Sign up for free
to join this conversation on GitHub.
Already have an account?
Sign in to comment
Add this suggestion to a batch that can be applied as a single commit.This suggestion is invalid because no changes were made to the code.Suggestions cannot be applied while the pull request is closed.Suggestions cannot be applied while viewing a subset of changes.Only one suggestion per line can be applied in a batch.Add this suggestion to a batch that can be applied as a single commit.Applying suggestions on deleted lines is not supported.You must change the existing code in this line in order to create a valid suggestion.Outdated suggestions cannot be applied.This suggestion has been applied or marked resolved.Suggestions cannot be applied from pending reviews.Suggestions cannot be applied on multi-line comments.Suggestions cannot be applied while the pull request is queued to merge.Suggestion cannot be applied right now. Please check back later.
Full report here: facebook#36143