
React Architecture at Scale: State Management, Server Components, and Performance

DSi Team · 13 min read

React applications that start clean and fast have a way of becoming slow and unmanageable. At ten components, everything works. At a hundred, you start noticing quirks. At a thousand — spread across multiple teams, serving millions of users — the architecture decisions you made in month one either carry you forward or drag you down.

With React 18 now established and its concurrent features available in production, the ecosystem is at an inflection point. We have automatic batching, useTransition, useDeferredValue, and improved Suspense. Meanwhile, Next.js 13 has introduced an experimental app router with React Server Components in beta, hinting at where the architecture is heading. State management is diversifying beyond Redux, with lighter alternatives like Zustand and Jotai gaining serious adoption. But having better tools does not eliminate the need for deliberate architectural thinking — if anything, the expanding surface area of React makes architecture decisions more consequential.

This guide covers the architectural patterns, state management strategies, and performance techniques that work at scale — drawn from building and maintaining large React applications across full product development cycles. Whether you are starting a new application or refactoring an existing one that has outgrown its architecture, these are the decisions that matter most.

React 18 Concurrent Features: The Architecture Upgrade

React 18, released in March 2022, introduced concurrent rendering — the most significant change to React's internals since the fiber rewrite. These features are not just performance optimizations. They fundamentally change how you think about rendering priorities and user experience in large applications.

How concurrent features change your architecture

Before React 18, every state update was urgent and synchronous. If a user typed into a search box that triggered filtering across thousands of items, the UI would freeze until the render completed. React 18 introduces the concept of transitions — state updates that can be interrupted if something more urgent comes along, like another keystroke.

The useTransition hook lets you mark state updates as non-urgent. While a transition is rendering, React keeps the current UI responsive and shows the updated result when it is ready. This is not debouncing — the update still happens as soon as React has the resources to complete it. It is a fundamentally different rendering model where the framework itself manages priority.

Automatic batching is the other major win. In React 17 and earlier, state updates inside promises, setTimeout, or native event handlers were not batched — each setState call triggered a separate re-render. React 18 batches all state updates by default, regardless of where they originate. For large applications with complex event handlers that update multiple pieces of state, this can cut render counts significantly with zero code changes.

Suspense and streaming SSR

React 18 also brings improvements to Suspense and introduces streaming server-side rendering. With streaming SSR, the server can start sending HTML to the browser before the entire page has finished rendering. Components wrapped in Suspense boundaries render independently — fast components appear immediately while slower ones stream in as they complete.

  • Suspense boundaries define loading states for asynchronous operations. Wrap data-dependent components in Suspense to show fallback UIs while data loads, replacing the manual isLoading state pattern that clutters most codebases.
  • Selective hydration allows React to prioritize hydrating the components the user is interacting with, rather than hydrating the entire page top-to-bottom. This means interactive elements become responsive faster.

These features pair especially well with frameworks like Next.js 13, which is building its new app router around React's Suspense-first architecture. While the app router is still in beta and not ready for every production use case, it demonstrates where React application architecture is heading.

The emerging Server Components model

React Server Components, available experimentally in Next.js 13's app router, represent a future where components can render entirely on the server and send zero JavaScript to the browser. The concept is compelling: data-fetching components, layout shells, and static content never ship client-side code, dramatically reducing bundle sizes.

However, Server Components are still experimental. The ecosystem is catching up — many popular libraries do not yet support Server Components, best practices are being established in real-time, and the mental model of server versus client boundaries is new to most teams. For production applications with paying users, the proven SSR and SSG patterns in Next.js's pages router remain the safer choice. For new greenfield projects, experimenting with the app router is worthwhile, but plan for rough edges.

State Management at Scale: Redux Toolkit vs Zustand vs Jotai

State management is where most large React applications accumulate technical debt. The wrong choice does not break your application immediately — it slowly makes every new feature harder to build, every bug harder to trace, and every refactor more terrifying. The state management landscape is shifting, with established tools being joined by compelling new alternatives.

Redux Toolkit: the proven standard

Redux Toolkit (RTK) remains the most battle-tested choice for large React applications. It has addressed many of the original complaints about Redux — the boilerplate is dramatically reduced with createSlice, and RTK Query provides a powerful data fetching and caching layer built directly into the Redux ecosystem. For teams that already know Redux, RTK is the natural evolution.

At scale, Redux's strengths are its strict architecture and excellent developer tooling. The Redux DevTools allow time-travel debugging, action replay, and state diffing that no other library matches. The opinionated structure of slices, reducers, and selectors provides guardrails that prevent ad-hoc state management chaos in codebases maintained by large teams. RTK Query eliminates the need for a separate data fetching library for many use cases.

Zustand: the rising pragmatic choice

Zustand is rapidly gaining adoption as a lighter alternative to Redux. It offers a minimal API surface, requires almost no boilerplate, and scales cleanly from small to medium-large applications. A Zustand store is just a function that returns an object — no providers wrapping your app, no reducers, no action types, no dispatch.

Zustand's appeal is its simplicity. New team members understand the state layer in minutes, not days. Stores are easy to split by domain, and selectors prevent unnecessary re-renders without requiring complex memoization patterns. The middleware system handles persistence, devtools integration, and logging. For teams starting new projects that do not need Redux's full ceremony, Zustand is an increasingly compelling default.

Jotai: atomic state for complex UIs

Jotai takes a fundamentally different approach: instead of centralized stores, state is broken into independent atoms that can be composed together. Each atom is a single piece of state, and components subscribe only to the atoms they read.

This model excels in applications with many independent, fine-grained state updates — think design tools, complex form builders, spreadsheet interfaces, or any UI where dozens of elements update independently. Jotai's derived atoms let you compute values from other atoms without manual memoization, and the atomic model eliminates the "one store change re-renders everything" problem by design.

| Factor | Redux Toolkit | Zustand | Jotai |
| --- | --- | --- | --- |
| Bundle size | ~12 KB | ~1 KB | ~2 KB |
| Boilerplate | Moderate | Minimal | Minimal |
| Learning curve | Medium-high | Low | Low-medium |
| Re-render control | Selectors + memoization | Selectors | Atomic by default |
| DevTools | Excellent | Good (via middleware) | Good |
| Ecosystem maturity | Very mature | Growing fast | Growing |
| Best for | Complex state + large teams | Most applications | Fine-grained UIs |

The best state management library is the one your entire team can use correctly. A well-implemented Zustand store will outperform a poorly implemented Redux store every time. Choose based on your team's needs and your application's actual complexity, not on what looks most impressive in a conference talk.

Server state belongs in a server cache

One of the most important architectural lessons of the past two years is that server state and client state are fundamentally different and should be managed separately. Libraries like React Query (now TanStack Query) and SWR have proven that data fetched from APIs should live in a dedicated cache layer, not in your Redux store or Zustand store.

  • Colocate state by default. If state is only used by one component, keep it in that component with useState. Lift to a shared store only when two or more unrelated components need the same data.
  • Split stores by domain. A single monolithic store becomes unmanageable past a few hundred lines. Create separate stores for authentication, UI state, feature-specific data, and application settings.
  • Use selectors everywhere. Never subscribe a component to an entire store. Select only the specific fields the component needs. This is the single most impactful pattern for preventing unnecessary re-renders.
  • Keep server state in React Query or SWR. Data fetched from APIs belongs in a server cache with automatic background refetching, cache invalidation, and optimistic updates. Mixing server and client state in a single store is a guaranteed source of synchronization bugs.

Memoization and Render Optimization

In large React applications, unnecessary re-renders are the primary source of performance problems. React 18's automatic batching helps, but you still need deliberate memoization strategies to keep render performance under control as your component tree grows.

When to use useMemo and useCallback

The hooks useMemo and useCallback exist to prevent expensive recomputations and unnecessary child re-renders. useMemo caches the result of a computation so it is not recalculated on every render. useCallback caches a function reference so child components that receive it as a prop do not re-render when the parent renders.

The key is applying these selectively. Wrapping every value in useMemo adds its own overhead — the cost of checking dependencies on every render. Use useMemo for genuinely expensive computations (filtering or sorting large arrays, complex calculations) and useCallback for functions passed as props to memoized child components. Profile first to identify where memoization actually matters.

React.memo for component-level memoization

React.memo wraps a component so it only re-renders when its props change. This is essential for components that sit below frequently updating parents but receive stable props. Without React.memo, these components re-render on every parent update even when their output would be identical.

Use React.memo on leaf components in your tree that are expensive to render, components that receive stable props from frequently updating parents, and list item components that render inside virtualized lists. The combination of React.memo on child components and useCallback on parent-provided functions is the standard pattern for preventing render cascades in large applications.

Code Splitting Strategies That Actually Work

Code splitting is the practice of breaking your JavaScript bundle into smaller chunks that load on demand. At scale, it is the difference between a two-second initial load and a twelve-second one. Frameworks like Next.js handle some of this automatically, but deliberate code splitting at the application level is still essential.

Route-level splitting

Every route in your application should be a separate chunk. This is the lowest-effort, highest-impact code splitting strategy. Users loading your dashboard should not download the JavaScript for your settings page, reporting module, or admin panel. In Next.js, route-level splitting is automatic via the pages directory. In client-side React with React Router, use React.lazy with dynamic imports for each route component.

Component-level splitting for heavy features

Beyond routes, identify components that are large and not immediately visible: rich text editors, charting libraries, PDF viewers, complex modal workflows, and data visualization components. These should be lazily loaded when the user triggers them, not included in the initial bundle. Wrap them in React.lazy with a Suspense boundary that shows a lightweight placeholder while the chunk loads.

Third-party library splitting

Large dependencies — date libraries, charting frameworks, form validation schemas — should be dynamically imported at the point of use rather than statically imported at the top of a file. A single static import of a charting library can add 200 KB or more to a chunk, even if the chart is only visible on one page behind a tab that most users never click.
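One wrinkle with point-of-use imports is repeated triggers: a small helper can memoize the import promise so the chunk is fetched once. The helper below is a generic sketch; the "./charting" module in the usage comment is hypothetical.

```ts
// Memoize a dynamic import so repeated triggers reuse the loaded chunk.
export function lazyModule<T>(loader: () => Promise<T>): () => Promise<T> {
  let cached: Promise<T> | undefined;
  return () => (cached ??= loader());
}

// Usage sketch — "./charting" stands in for a heavy library wrapper:
// const loadCharts = lazyModule(() => import("./charting"));
// exportButton.onclick = async () => {
//   const { renderChart } = await loadCharts(); // chunk fetched on first click only
//   renderChart(data);
// };
```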

  • Analyze first: Use tools like webpack-bundle-analyzer or the Next.js bundle analyzer to identify your largest chunks before optimizing. Do not guess — measure.
  • Set a budget: Define a maximum initial JavaScript bundle size (200 KB gzipped is a strong target for most applications) and fail your CI build if it is exceeded.
  • Prefetch strategically: For routes the user is likely to visit next, use prefetching to load chunks in the background during idle time. This gives you the performance benefit of code splitting without the perceived latency of on-demand loading.

Component Architecture for Large Teams

When multiple teams work in the same codebase, component architecture becomes as much an organizational problem as a technical one. The goal is to let teams move independently without breaking each other's work.

Feature-based folder structure

Organize your codebase by feature, not by file type. Instead of putting all components in a components directory, all hooks in a hooks directory, and all utilities in a utils directory, group everything related to a feature together:

  • features/dashboard/ contains its own components, hooks, stores, utilities, and tests
  • features/billing/ is fully self-contained with its own data layer and UI
  • shared/ contains truly shared components (design system primitives, layout components, common hooks)

This structure maps cleanly to team ownership. The dashboard team owns everything in features/dashboard. The billing team owns features/billing. Cross-team dependencies are explicit imports from the shared directory, which has strict API contracts and versioned changes.

The shared component library

Every large React application needs a shared component library — a set of foundational UI components (buttons, inputs, modals, tables, layout primitives) that every team uses. The key is treating this library as a product with its own API stability guarantees:

  • Strict TypeScript interfaces for all component props
  • Storybook documentation with usage examples for every component
  • Visual regression tests to catch unintended style changes
  • A changelog and semver-style versioning for breaking changes
  • A dedicated team or rotating ownership responsible for maintenance

Enforcing boundaries

Without enforcement, architectural boundaries erode within weeks. Use these mechanisms to keep your structure clean:

  • ESLint import rules that prevent features from importing directly from other features (force them through the shared layer)
  • Barrel exports that define the public API of each feature module — internal implementation files are not importable from outside
  • CI checks that flag new cross-feature imports for mandatory code review
  • Dependency graphs generated on each PR to visualize and track coupling between modules
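As a sketch of the first mechanism, eslint-plugin-import's no-restricted-paths rule can block cross-feature imports; the zone paths below assume the feature-based layout described earlier and would need adjusting to your repository:

```json
{
  "plugins": ["import"],
  "rules": {
    "import/no-restricted-paths": [
      "error",
      {
        "zones": [
          {
            "target": "./src/features/dashboard",
            "from": "./src/features",
            "except": ["./dashboard"]
          }
        ]
      }
    ]
  }
}
```

A zone like this makes any import from another feature a lint error, forcing shared code through the shared layer where it can be reviewed deliberately.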

Performance Optimization Patterns

Performance at scale is not about applying a checklist of tricks. It is about building a culture of measurement, identifying actual bottlenecks, and applying targeted fixes. Here are the patterns that consistently deliver the largest improvements.

Profile before optimizing

The React DevTools Profiler and browser performance tools should be the starting point for every optimization effort. Record a profiling session, identify the components with the highest render counts and the longest render durations, and focus your efforts there. Optimizing a component that renders in 0.5 milliseconds is a waste of time regardless of how many times it re-renders. Optimizing a component that takes 50 milliseconds per render and fires on every keystroke is transformative.

Virtualization for long lists

If your application renders lists with more than a few hundred items — tables, feeds, search results, file explorers — virtualization is mandatory. Libraries like react-window or react-virtualized render only the items currently visible in the viewport, plus a small buffer. A list of 10,000 items renders the same 20 to 30 DOM nodes regardless of total size. Without virtualization, the same list creates 10,000 DOM nodes, and every state change triggers layout recalculation across all of them.

Debounce and throttle user input

Search inputs, filter controls, and resize handlers that trigger state updates on every event are one of the most common sources of jank in large applications. Debounce text inputs (wait until the user stops typing for 200 to 300 milliseconds before updating state) and throttle continuous events like scroll and resize to fire at most once per animation frame. React 18's useDeferredValue can also help here by letting React deprioritize the expensive filtering render while keeping the input responsive.

Image and asset optimization

Images are frequently the largest assets on a page. Use modern formats (WebP or AVIF), serve responsive sizes with srcset, lazy-load images below the fold, and use a CDN with automatic format negotiation. Next.js's built-in Image component handles many of these optimizations automatically. For applications with heavy image content, these optimizations alone can cut page weight by 60 to 70 percent.

Performance optimization at scale is a continuous practice, not a one-time project. Build performance budgets into your CI pipeline, monitor real user metrics in production, and treat performance regressions with the same urgency as functional bugs.

Testing Strategies at Scale

Testing a large React application requires a deliberate strategy that balances coverage, speed, and maintenance cost. The testing pyramid still applies, but the proportions shift when you are dealing with hundreds of components and complex user flows.

Unit tests for logic, not rendering

Test your business logic — state management functions, data transformers, validation rules, utility functions — with fast, isolated unit tests. These tests run in milliseconds, require no DOM, and catch the majority of logical regressions. Do not unit test component rendering unless the component contains complex conditional logic. Testing that a button renders with the correct label is low value; testing that your pricing calculator returns the correct amount for edge cases is high value.

Integration tests for user flows

Use React Testing Library for integration tests that simulate real user behavior: clicking buttons, filling forms, navigating between pages, and verifying that the correct output appears. These tests catch the bugs that unit tests miss — incorrect prop passing, broken event handlers, state management integration issues, and race conditions in async flows.

End-to-end tests for critical paths

Reserve end-to-end tests (Playwright or Cypress) for your application's most critical user flows: authentication, checkout, data creation workflows, and any path where a bug directly costs money or loses users. E2E tests are slow and flaky compared to unit and integration tests, so keep the suite small and focused on high-value paths.

Visual regression testing

For shared component libraries and design system elements, visual regression testing catches CSS changes that no other test type detects. Tools like Chromatic or Percy capture screenshots of every component state and flag visual differences on each pull request. This is essential when multiple teams contribute to shared UI components.

Putting It All Together

A well-architected React 18 application at scale combines these patterns into a coherent system:

  1. Concurrent features for responsive UIs — use useTransition for non-urgent updates, Suspense for loading states, and automatic batching to reduce unnecessary renders across the board
  2. Deliberate state management — colocate state by default, choose Redux Toolkit or Zustand for shared client state based on your team's needs, and keep server data in React Query or SWR
  3. Strategic memoization — apply useMemo, useCallback, and React.memo where profiling shows genuine performance impact, not as a default pattern on every component
  4. Aggressive code splitting — route-level splitting as a baseline, component-level splitting for heavy features, and strict bundle budgets enforced in CI
  5. Feature-based architecture — organize by team ownership, enforce boundaries with tooling, and maintain a shared component library with strict API contracts
  6. Continuous performance monitoring — profile regularly, measure real user metrics, and treat performance as a feature, not an afterthought

None of these patterns are revolutionary in isolation. The compounding effect of applying all of them consistently across a large codebase is what separates React applications that scale gracefully from those that become unmaintainable. The architecture decisions you make now determine whether your team is shipping features or fighting the framework twelve months from today.

Building at this level requires engineers who have seen these patterns succeed and fail across multiple large codebases. If your team is scaling a React application and needs experienced frontend engineers who can establish the right architecture from day one, our team has the depth to help.

Frequently Asked Questions

Which React 18 features matter most for large applications?
The most impactful React 18 features for large applications are automatic batching, which groups multiple state updates into a single re-render for better performance, and the new concurrent rendering features including useTransition and useDeferredValue. These let you mark certain state updates as non-urgent so the UI stays responsive during expensive renders. Suspense for data fetching is also maturing and pairs well with libraries like React Query to simplify loading state management.

Which state management library should we choose?
Redux Toolkit remains the most proven choice for very large applications with complex state transitions and teams that need strict architectural patterns. However, lighter alternatives are gaining serious traction. Zustand is becoming a popular choice for its minimal boilerplate and simple API. Jotai is ideal for applications with many independent pieces of atomic state. For server state, React Query has become the standard — it handles caching, background refetching, and optimistic updates far better than putting API data in Redux.

Should we adopt React Server Components now?
Proceed with caution. React Server Components in Next.js 13 are still in beta via the new app router. While the architecture is promising, the ecosystem is not fully mature yet — many popular libraries have not been updated for Server Component compatibility, and best practices are still being established. For new greenfield projects where you can tolerate some instability, experimenting with the app router is worthwhile. For production applications with existing users, the pages router in Next.js remains the safer, battle-tested choice.

How should we structure a React codebase for multiple teams?
Use a feature-based folder structure where each feature directory contains its own components, hooks, utilities, and tests. This minimizes cross-team file conflicts and allows teams to work independently. Establish a shared component library with strict API contracts for common UI elements. Enforce architectural boundaries with ESLint rules and CI checks. Most importantly, document component ownership so every file in the codebase has a clear team responsible for it.

What is the most common React performance mistake at scale?
The most common mistake is unnecessary re-renders caused by poor state placement. When global state changes trigger re-renders across dozens of unrelated components, performance degrades rapidly. The fix is to colocate state as close as possible to where it is used, split global stores into smaller domain-specific stores, and use selectors to ensure components only re-render when the specific data they consume changes. Profiling with React DevTools and addressing the top re-rendering components typically yields a 40 to 60 percent improvement in render performance.