Next.js 15 Partial Prerendering: Real-World Patterns and Tradeoffs

Sandor Farkas - Founder & Lead Developer at Wolf-Tech


Next.js 15 Partial Prerendering (PPR) is one of the most significant rendering architecture changes in the React ecosystem in years. The pitch is compelling: serve a static HTML shell instantly from the CDN edge, then stream in dynamic sections as they resolve — all from a single route, without manually splitting pages into separate static and dynamic segments. Teams that get PPR right unlock performance that would previously have required a complex micro-frontend architecture. Teams that get it wrong ship subtle, hard-to-reproduce bugs that only appear under real traffic.

This post is for engineers and technical leads evaluating or actively using Next.js 15 in production. It covers what PPR actually does at the rendering layer, the patterns that make it reliable, and the specific failure modes that catch teams off guard.

What Next.js 15 Partial Prerendering Actually Does

At its core, PPR splits a single route into two rendering layers. The static shell — anything outside a <Suspense> boundary that does not depend on request-time data — is prerendered to HTML and cached at the edge. Inside <Suspense> boundaries, dynamic components are streamed in at request time.

The crucial distinction from standard App Router streaming: in a non-PPR route, any dynamic API access (cookies(), headers(), reading the searchParams prop) inside the route tree causes the entire route to opt out of static rendering. With PPR enabled, those calls are isolated inside Suspense boundaries, so the outer shell can still be statically cached even if a child component reads a cookie.

This means the cache key for the static shell is determined at build time — and that is where the complexity begins.

Enabling PPR in Next.js 15

PPR is still experimental in Next.js 15 (note the experimental key below), and at the time of writing it is only available on the canary release channel. To enable it globally, add the following to next.config.ts:

import type { NextConfig } from 'next'

const config: NextConfig = {
  experimental: {
    ppr: true,
  },
}

export default config

For incremental adoption, set ppr to 'incremental' in next.config.ts, then opt individual routes in by exporting experimental_ppr from a layout or page file:

export const experimental_ppr = true

The per-route flag is the safer starting point for existing codebases. It lets you migrate high-traffic pages without touching routes that have complex dynamic behavior you have not fully audited yet.
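For reference, the incremental configuration looks like this (per the Next.js docs, 'incremental' restricts PPR to routes that explicitly opt in):

```typescript
// next.config.ts
import type { NextConfig } from 'next'

const config: NextConfig = {
  experimental: {
    // Only routes that export `experimental_ppr = true` use PPR
    ppr: 'incremental',
  },
}

export default config
```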

Pattern 1: The Static Shell / Dynamic Island Model

The most reliable mental model for PPR is to treat your page as a static frame with explicitly labelled dynamic holes. Navigation, page headers, marketing copy, and structural layout belong in the static shell. User-specific content, personalized recommendations, real-time counts, and anything auth-gated belongs inside a <Suspense> boundary with a meaningful fallback.

// app/dashboard/page.tsx
import { Suspense } from 'react'

export const experimental_ppr = true

export default function DashboardPage() {
  return (
    <div>
      <StaticHero />         {/* prerendered, served from CDN */}
      <Suspense fallback={<MetricsSkeleton />}>
        <LiveMetrics />      {/* streamed at request time */}
      </Suspense>
      <Suspense fallback={<FeedSkeleton />}>
        <PersonalizedFeed /> {/* streamed at request time */}
      </Suspense>
    </div>
  )
}

The fallback components matter more here than they do in standard streaming. Because the static shell is cached and delivered instantly, users see the skeleton immediately and then watch it replaced with real data. A poor fallback — wrong dimensions, mismatched layout — produces a jarring layout shift that often feels worse than a traditional server render would have.

Pattern 2: Isolating Auth Without Blocking the Shell

The most common PPR mistake is reading auth state in the static shell. A call to cookies() or getServerSession() outside a Suspense boundary causes Next.js to downgrade the entire route to dynamic rendering, silently defeating PPR.

The correct approach: treat authentication as a dynamic concern, confined inside a Suspense boundary. The static shell renders the page frame; auth-gated content renders inside a boundary that streams once the session resolves.

import { Suspense } from 'react'
import { getServerSession } from 'next-auth' // next-auth v4; adjust for your auth library
import { authOptions } from '@/lib/auth'     // wherever your NextAuth config lives

async function AuthenticatedSection() {
  const session = await getServerSession(authOptions)
  if (!session) return <LoginPrompt />
  return <UserDashboard user={session.user} />
}

export default function Page() {
  return (
    <>
      <PublicPageHeader />
      <Suspense fallback={<AuthSkeleton />}>
        <AuthenticatedSection />
      </Suspense>
    </>
  )
}

This pattern has a secondary benefit: the public shell can be cached and served to both unauthenticated and authenticated users, which significantly improves CDN cache hit rates.

Pattern 3: Granular Suspense Over Broad Wrapping

Teams migrating from Pages Router often wrap entire page sections in a single Suspense boundary as a quick fix. This works, but wastes PPR's primary advantage: the ability to stream independent data sources in parallel.

Prefer narrow, purpose-scoped boundaries over broad wrappers. If your sidebar fetches from a slow internal API and your main content fetches from a fast database, a single boundary forces both to wait for the slower one. Two boundaries let the fast section render immediately while the slow section catches up.

The practical limit: avoid more than five or six Suspense boundaries per route. Beyond that, the streaming overhead and fallback management complexity begin to outweigh the parallelism gains.
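The timing argument can be sketched outside Next.js entirely. In this toy model (the 120 ms and 20 ms delays are made-up stand-ins for a slow sidebar API and a fast database query), one broad boundary resolves when the slowest promise does, while separate boundaries let each section appear as soon as its own data arrives:

```typescript
// Stand-ins for two independent data sources with different latencies.
const fetchSidebar = () => new Promise<string>(res => setTimeout(() => res('sidebar'), 120))
const fetchMain = () => new Promise<string>(res => setTimeout(() => res('main'), 20))

// One broad boundary: nothing renders until the slowest source resolves.
async function oneBoundary(): Promise<number> {
  const start = Date.now()
  await Promise.all([fetchSidebar(), fetchMain()])
  return Date.now() - start // roughly 120 ms before either section shows
}

// Two boundaries: each section resolves on its own schedule.
async function twoBoundaries(): Promise<{ mainAt: number; sidebarAt: number }> {
  const start = Date.now()
  const [mainAt, sidebarAt] = await Promise.all([
    fetchMain().then(() => Date.now() - start),    // fast section, ~20 ms
    fetchSidebar().then(() => Date.now() - start), // slow section, ~120 ms
  ])
  return { mainAt, sidebarAt }
}
```

The fast section becomes visible roughly 100 ms earlier in the two-boundary version, which is exactly what narrow Suspense boundaries buy you at the route level.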

The Cache-Invalidation Bug That Bites Most Teams

The subtlest PPR failure mode involves build-time data that looks dynamic. Consider a navigation component that fetches from a CMS at render time without any caching directive:

// This looks safe but is a cache time-bomb
async function Nav() {
  const items = await fetch('https://cms.internal/nav').then(r => r.json())
  return <Navigation items={items} />
}

If Nav is in the static shell (outside Suspense), Next.js evaluates it at build time and bakes the result into the prerendered HTML. When your CMS content changes, the static shell does not update until the next deployment, or until a revalidation window you have configured on the fetch call or route segment expires.

Teams discover this bug when a CMS editor updates navigation links and they do not appear on the live site. The fix is either to move the component inside a Suspense boundary (making it dynamic) or to add explicit revalidation:

const items = await fetch('https://cms.internal/nav', {
  next: { revalidate: 3600 }, // or tags for on-demand revalidation
}).then(r => r.json())

Understanding this distinction — build-time static versus fetch-cached static — is the single most important mental model for debugging PPR in production.
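The tags option referenced above enables on-demand invalidation: fetches carry next: { tags: [...] }, and calling Next.js's revalidateTag evicts everything under that tag. To make the tag model concrete, here is a toy in-memory version (illustrative only, not Next.js internals): entries carry tags, and revalidating a tag evicts every entry that carries it.

```typescript
// Toy model of tag-based cache invalidation (not Next.js internals).
type Entry = { value: unknown; tags: string[] }

const cache = new Map<string, Entry>()

function setCached(key: string, value: unknown, tags: string[]): void {
  cache.set(key, { value, tags })
}

// Analogous to revalidateTag('nav'): every entry tagged 'nav' is evicted,
// so the next request re-fetches fresh data instead of serving the stale copy.
function revalidateTagToy(tag: string): void {
  for (const [key, entry] of cache) {
    if (entry.tags.includes(tag)) cache.delete(key)
  }
}

setCached('https://cms.internal/nav', ['Home', 'Docs'], ['nav'])
setCached('https://cms.internal/footer', ['Imprint'], ['footer'])
revalidateTagToy('nav')
// The nav entry is evicted; the footer entry is untouched.
```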

Next.js 15 Caching Changes That Affect PPR

Next.js 15 changed the default caching behavior for fetch calls in Server Components: responses are no longer cached by default, unlike in Next.js 13 and 14. This change is especially relevant for PPR because your static shell may include components that make fetch calls you assumed were cached from a prior codebase.

Audit every fetch in your static shell after upgrading. Add explicit { cache: 'force-cache' } or { next: { revalidate: N } } options where you want caching, and verify the behavior with the dev server's fetch logging, which reports whether each request was served from the data cache or went to the network.
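Fetch logging is switched on in next.config.ts; as of Next.js 15 the relevant option looks like this (from the Next.js docs; the exact log output format varies by version):

```typescript
// next.config.ts
import type { NextConfig } from 'next'

const config: NextConfig = {
  logging: {
    fetches: {
      // Log the full URL of each fetch in `next dev`, alongside its cache status
      fullUrl: true,
    },
  },
}

export default config
```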

When PPR Is Not Worth the Complexity

PPR adds architectural complexity that is not always justified. Three scenarios where you should skip it:

Fully authenticated applications. If every meaningful route requires a session, the static shell will be generic enough that CDN caching offers minimal value. Standard streaming with loading.tsx files gives you most of the UX benefit with far less configuration.

High-frequency data with tight freshness requirements. If your key metrics update every 30 seconds and users expect live data, the streaming delay from a dynamic boundary is more visible than the TTFB improvement from the static shell. Server-Sent Events or WebSocket updates may be a better fit for that data.

Teams new to App Router. PPR assumes fluency with data fetching in Server Components, Suspense semantics, and Next.js caching behavior. Introducing it before the team has that foundation tends to produce inconsistent results and difficult-to-diagnose regressions. Build that foundation first, then layer in PPR.

Practical Migration Checklist

Before enabling PPR on an existing route:

  1. Identify all dynamic API usage (cookies(), headers(), the searchParams prop) in the route tree and confirm it sits inside <Suspense> boundaries.
  2. Audit every fetch call in the static shell for correct caching directives.
  3. Design meaningful skeleton fallbacks that match the final content dimensions to minimize layout shift.
  4. Confirm auth flows do not read session state in the static shell.
  5. Test with network throttling to confirm the fallback-to-content transition is acceptable.
  6. Add monitoring for TTFB (should improve) and streaming time-to-content (should be comparable or better than before).

PPR in the Broader App Router Architecture

Partial Prerendering is not a standalone feature — it works best as part of a coherent App Router architecture that also considers route grouping, parallel routes, intercepting routes, and the Server/Client Component boundary. Teams that treat PPR as a drop-in performance fix without reviewing the broader architecture often introduce inconsistencies that surface as hard-to-reproduce rendering bugs under production load.

This is especially true for B2B SaaS products where certain users need real-time dashboards and others browse mostly static marketing pages within the same Next.js application. Getting that split right requires deliberate architectural decisions — not just enabling a flag.

If your team is adopting Next.js 15 or scaling an existing App Router application and wants a senior review of the rendering and data-fetching architecture, Wolf-Tech offers code quality consulting and custom software development services built for exactly this. We work with technical teams across Europe and North America on Next.js, React, and full-stack TypeScript applications.

Reach out at hello@wolf-tech.io or visit wolf-tech.io to discuss your architecture.