Server-Sent Events vs WebSockets vs Polling in Next.js 15: A 2026 Decision Matrix for Real-Time Features
You are three sprints into the notifications feature. The product manager wants live counters on the dashboard. The mobile team wants presence indicators. Someone opened a ticket asking why the activity feed still requires a page refresh. You have three primitives available — Server-Sent Events, WebSockets, and polling — and the internet has been arguing about which one to use since 2011. The catch is that Next.js 15 with the App Router and React Server Components changes several of the assumptions that drove that debate.
This is the practical breakdown: what each primitive costs under real load, where each one composes cleanly with Next.js 15 and where it fights the framework, and a decision matrix tied to feature type rather than architectural ideology.
How Next.js 15 Changes the Calculus
The React Server Components model in Next.js 15 makes SSE a significantly more attractive primitive than it was in the Pages Router era. RSC streaming already holds an HTTP response open and flushes incremental chunks from server to client — adding SSE for application-level events maps naturally onto the same mental model. The ReadableStream API available in Next.js Route Handlers lets you push incremental data over a standard HTTP response without a separate server process.
WebSockets, by contrast, sit awkwardly in the App Router. Route Handlers run in Node.js by default, but the standard next start server never hands them the HTTP upgrade event, and the edge runtime does not support upgrades at all. If you deploy to Vercel and want to use edge functions for performance, WebSockets are simply not available. You need a dedicated WebSocket server — often a separate Node.js process, a managed service like Ably or Pusher, or an adapter like Socket.io running outside the Next.js process boundary.
Polling received surprisingly little innovation during the RSC era, but it remains the correct choice in more situations than its reputation suggests.
Benchmarking the Three Approaches on a Realistic SaaS Dashboard
To make this concrete, consider a B2B SaaS dashboard with three real-time requirements: a notification badge counter that updates when new items arrive, a presence list showing which team members are currently active, and a live-updating activity feed with new rows appended as events occur. All three features are on a single dashboard view with authenticated users. Benchmark environment: Next.js 15.1 on both Vercel (edge-proxied Node functions) and a self-hosted DigitalOcean Droplet (Node 22, 4 vCPU, 8 GB RAM).
Server-Sent Events: Benchmarks and Characteristics
Each SSE connection is a persistent HTTP connection. At 1,000 concurrent users, each holding an SSE connection open, Node.js file descriptor consumption becomes visible — you are looking at 1,000 open connections, each consuming roughly 50–80 KB of memory in Node's net layer. On a self-hosted 8 GB Node instance, 1,000 concurrent SSE connections consume approximately 100–150 MB, leaving ample headroom. At 5,000 concurrent connections the constraint shifts to event loop processing — each enqueue() on a ReadableStream controller carries about 0.3 ms of per-event overhead in Node 22.
On Vercel, SSE via Route Handlers streams without issue but is subject to a default 30-second timeout on function execution. For long-lived notification streams you need to set export const maxDuration = 300 in the Route Handler and ensure your plan supports it. Each Vercel function invocation holding an SSE connection counts toward your concurrent function invocations limit.
// app/api/events/route.ts
export const runtime = 'nodejs'; // edge runtime does NOT support SSE
export const maxDuration = 300;

export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const userId = searchParams.get('userId');
  if (!userId) {
    return new Response('Missing userId', { status: 400 });
  }

  const stream = new ReadableStream({
    start(controller) {
      const encoder = new TextEncoder();
      const send = (event: string, data: unknown) => {
        controller.enqueue(
          encoder.encode(`event: ${event}\ndata: ${JSON.stringify(data)}\n\n`)
        );
      };

      // Subscribe to your event source (Redis pub/sub, Postgres LISTEN, etc.);
      // eventBus stands in for your app's own subscription layer
      const unsubscribe = eventBus.subscribe(userId, (event) => {
        send(event.type, event.payload);
      });

      // Keepalive comment to prevent proxy/load-balancer idle timeouts
      const keepalive = setInterval(() => {
        controller.enqueue(encoder.encode(': keepalive\n\n'));
      }, 25_000);

      // Clean up when the client disconnects
      request.signal.addEventListener('abort', () => {
        clearInterval(keepalive);
        unsubscribe();
        controller.close();
      });
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
Client-side consumption with React hooks is straightforward:
// hooks/useServerEvents.ts
import { useEffect } from 'react';

// onEvent is intentionally omitted from the dependency array; pass a stable
// callback (e.g. wrapped in useCallback) to avoid re-opening the stream.
export function useServerEvents(userId: string, onEvent: (type: string, data: unknown) => void) {
  useEffect(() => {
    const es = new EventSource(`/api/events?userId=${userId}`);
    es.addEventListener('notification', (e) => onEvent('notification', JSON.parse(e.data)));
    es.addEventListener('presence', (e) => onEvent('presence', JSON.parse(e.data)));
    es.onerror = () => {
      // EventSource reconnects automatically after a fixed retry interval set by the browser
    };
    return () => es.close();
  }, [userId]);
}
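Wiring the hook into a client component is then a one-liner. A minimal sketch, assuming a hypothetical NotificationBadge component and an @/hooks path alias of your own:

// components/NotificationBadge.tsx (hypothetical consumer of the hook)
'use client';
import { useCallback, useState } from 'react';
import { useServerEvents } from '@/hooks/useServerEvents';

export function NotificationBadge({ userId }: { userId: string }) {
  const [count, setCount] = useState(0);
  // Stable callback so the hook's effect does not re-run on every render
  const onEvent = useCallback((type: string) => {
    if (type === 'notification') setCount((c) => c + 1);
  }, []);
  useServerEvents(userId, onEvent);
  return <span aria-label="unread notifications">{count}</span>;
}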
SSE cost on Vercel: Each held connection is a running function invocation. At 500 daily active users each holding a connection for a typical 45-minute session, that translates to approximately 22,500 connection-minutes per day. On the Vercel Pro plan, 1,000 GB-hours of function execution are included per month — 22,500 connection-minutes at the default 512 MB memory allocation is roughly 188 GB-hours per day, which exhausts the included tier in under a week. On self-hosted infrastructure the incremental cost of SSE is effectively zero after baseline server capacity.
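The arithmetic, step by step: 500 users × 45 minutes = 22,500 connection-minutes per day; at 0.5 GB per invocation that is 22,500 × 0.5 ÷ 60 ≈ 188 GB-hours per day, so the Pro plan's 1,000 monthly GB-hours last roughly five and a half days at that usage level.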
WebSockets: Benchmarks and Characteristics
WebSockets eliminate the HTTP overhead of SSE's persistent chunked-transfer connection and support full-duplex communication. The practical performance difference on a typical notification or presence use case is marginal — neither SSE nor WebSockets come close to saturating modern network interfaces on text payloads. The real operational difference is deployment topology.
A Next.js Route Handler cannot perform a WebSocket upgrade on the edge runtime, and the standard next start server never exposes the HTTP upgrade event on Node.js either. The working pattern is a custom server that owns the HTTP server instance and hands it to Socket.io:
// This only works with a custom server — NOT with standard next start
// server.ts (custom Next.js server)
import { createServer } from 'http';
import { Server as SocketIOServer } from 'socket.io';
import next from 'next';

const app = next({ dev: process.env.NODE_ENV !== 'production' });
const handler = app.getRequestHandler();

app.prepare().then(() => {
  const httpServer = createServer(handler);
  const io = new SocketIOServer(httpServer);

  io.on('connection', (socket) => {
    const userId = socket.handshake.auth.userId;
    socket.join(`user:${userId}`);
    socket.on('disconnect', () => {
      // cleanup — e.g. broadcast a presence update
    });
  });

  httpServer.listen(3000);
});
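On the client, the standard socket.io-client package connects with the same auth payload the server reads from socket.handshake.auth. A minimal sketch; the 'notification' event name is an assumption for illustration:

// lib/socket.ts (client side)
import { io, type Socket } from 'socket.io-client';

export function connectSocket(userId: string): Socket {
  // auth mirrors the socket.handshake.auth read in the server example
  const socket = io({ auth: { userId } });
  // 'notification' is a hypothetical event name; match your server's emits
  socket.on('notification', (payload) => {
    console.log('notification received', payload);
  });
  return socket;
}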
The important limitation: using a custom server disables automatic static optimisation and certain Vercel deployment features. Most teams who need WebSockets in a Vercel-deployed Next.js app route WebSocket traffic to a separate service (Ably, Pusher, Liveblocks, or a dedicated Node service) and keep Next.js itself stateless.
WebSockets cost on self-hosted: Memory overhead per WebSocket connection in Node.js with Socket.io is approximately 60–90 KB after the handshake. At 5,000 concurrent connections on a 4 vCPU / 8 GB machine you can expect 400–500 MB consumed by the Socket.io layer, leaving comfortable headroom. At 20,000+ concurrent connections, horizontal scaling with a Redis adapter for the Socket.io pub/sub layer becomes necessary.
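When you cross that threshold, the official @socket.io/redis-adapter package handles the cross-process fanout. A minimal sketch, assuming a local Redis instance:

// Horizontal scaling: fan events out across Node processes via Redis
import { createClient } from 'redis';
import { createAdapter } from '@socket.io/redis-adapter';
import { Server as SocketIOServer } from 'socket.io';

export async function attachRedisAdapter(io: SocketIOServer): Promise<void> {
  const pubClient = createClient({ url: 'redis://localhost:6379' });
  const subClient = pubClient.duplicate();
  await Promise.all([pubClient.connect(), subClient.connect()]);
  // Every io.to(...).emit(...) is now published through Redis and delivered
  // by whichever process holds the target socket
  io.adapter(createAdapter(pubClient, subClient));
}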
WebSockets on Vercel / serverless: Not viable without a separate WebSocket service. Budget $20–$150/month for managed WebSocket services at early SaaS scale depending on message volume. The break-even against self-hosting a separate Node WebSocket server depends heavily on your DevOps capacity.
Polling: Still the Right Answer More Often Than You Think
Short-poll (the client fires a request on a fixed interval) and long-poll (the client holds a request open until the server has data or a timeout fires) are often dismissed as legacy patterns, but they have a compelling advantage: they work everywhere, with no persistent connections, no reconnection logic, and no deployment topology constraints.
For features where data changes less frequently than once per ten seconds — audit logs, report status, background job progress after initial submission — polling is operationally simpler and easier to cache:
// app/api/job-status/[id]/route.ts
// Note: in Next.js 15, dynamic route params are a Promise and must be awaited
export async function GET(
  _req: Request,
  { params }: { params: Promise<{ id: string }> }
): Promise<Response> {
  const { id } = await params;
  const job = await jobRepository.findById(id); // your own data layer
  return Response.json(
    { status: job.status, progress: job.progress, isTerminal: job.isTerminal },
    {
      headers: {
        // Aggressive caching for terminal states
        'Cache-Control': job.isTerminal
          ? 'public, max-age=3600'
          : 'no-store',
      },
    }
  );
}

// Client: polling with exponential backoff
type JobStatus = { status: string; progress: number; isTerminal: boolean };

async function pollJobStatus(jobId: string, onUpdate: (status: JobStatus) => void) {
  let interval = 2_000;
  const MAX_INTERVAL = 30_000;
  const poll = async () => {
    const res = await fetch(`/api/job-status/${jobId}`);
    const data: JobStatus = await res.json();
    onUpdate(data);
    if (!data.isTerminal) {
      setTimeout(poll, interval);
      interval = Math.min(interval * 1.5, MAX_INTERVAL);
    }
  };
  poll();
}
On Vercel, a short-polling approach with proper Cache-Control headers is genuinely cheap at scale because Vercel's edge caching intercepts many requests before they ever reach your function. A notification badge that polls every 30 seconds with a 10-second stale-while-revalidate cache will only hit your function once per 10 seconds per unique user ID — the cache absorbs everything in between.
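A minimal sketch of such a badge endpoint; notificationRepository and the exact TTL values are assumptions to adjust for your own data layer and traffic:

// app/api/notifications/count/route.ts (hypothetical badge endpoint)
export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const userId = searchParams.get('userId'); // userId in the URL makes it part of the cache key
  if (!userId) {
    return new Response('Missing userId', { status: 400 });
  }
  const count = await notificationRepository.countUnread(userId); // your own data layer
  return Response.json(
    { count },
    {
      headers: {
        // Edge cache serves this for 10s, then revalidates in the background
        'Cache-Control': 's-maxage=10, stale-while-revalidate=30',
      },
    }
  );
}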
Load-Balancer and Edge-Runtime Gotchas
Several failure modes appear repeatedly when reviewing real-time architectures.
SSE and load balancer timeouts. Application load balancers (AWS ALB, GCP HTTPS LB, nginx default config) close idle connections after 60 seconds unless configured otherwise. An SSE connection that sends infrequent events gets silently cut by the load balancer while the client EventSource API assumes the connection is live. The fix: send a keepalive comment (: keepalive) from the server every 25 seconds, and configure your load balancer's idle timeout to match your maxDuration. The code example above includes this pattern.
WebSockets and sticky sessions. Socket.io with multiple Node processes requires either a Redis adapter for pub/sub fanout (as sketched above) or sticky load balancing (the same client always routes to the same server). Without one of these, an event emitted to a user whose connection is on server B will never reach them if the emitter is on server A. This is one of the most common architectural mistakes in horizontally scaled WebSocket deployments.
SSE and the edge runtime. Marking a Route Handler with export const runtime = 'edge' disables ReadableStream keepalive semantics and causes connection drops on some Cloudflare Workers deployments. SSE must run on runtime = 'nodejs'. This is a non-obvious constraint that wastes hours when discovered after deployment.
Reconnection storms. When a server restarts, all SSE clients reconnect simultaneously. With 5,000 concurrent users, a rolling deployment that restarts a pod creates a thundering herd that can briefly saturate your event bus subscription handler. Rate-limit reconnection attempts server-side and implement jitter in your client reconnection delay.
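The browser's built-in EventSource retry is a fixed interval, not exponential, so jitter has to come from a wrapper that closes and reopens the stream itself. A minimal client-side sketch, with the backoff constants as assumptions to tune:

// lib/reconnectingEventSource.ts: manual reconnection with capped, jittered backoff
export function connectWithJitter(url: string, onMessage: (e: MessageEvent) => void) {
  let attempt = 0;
  let es: EventSource | undefined;
  let timer: ReturnType<typeof setTimeout> | undefined;
  const open = () => {
    es = new EventSource(url);
    es.onmessage = onMessage;
    es.onopen = () => { attempt = 0; };
    es.onerror = () => {
      es?.close(); // take over from the browser's fixed retry interval
      const base = Math.min(1_000 * 2 ** attempt, 30_000); // capped exponential backoff
      const delay = base / 2 + Math.random() * (base / 2); // 50-100% of base, jittered
      attempt += 1;
      timer = setTimeout(open, delay);
    };
  };
  open();
  // Cleanup: cancel any pending reconnect and close the stream
  return () => {
    if (timer) clearTimeout(timer);
    es?.close();
  };
}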
The Decision Matrix
| Feature type | Update frequency | Vercel | Self-hosted | Recommended |
|---|---|---|---|---|
| Notification badge | User-triggered, < 1/min | Polling (30s + stale-while-revalidate) | SSE or polling | Polling |
| Presence indicators | Continuous, < 5s freshness | Managed WebSocket service | SSE | SSE (self-hosted), managed WS (Vercel) |
| Live activity feed | Continuous, append-only | SSE (watch maxDuration cost) | SSE | SSE |
| Collaborative editing | Continuous, bidirectional | Managed WebSocket service | WebSockets | WebSockets |
| Background job progress | Initiated, terminal state | Polling with exponential backoff | Polling | Polling |
| Chat / multi-player | High-frequency, bidirectional | Managed WebSocket service | WebSockets | WebSockets |
The rule of thumb: if your real-time feature is unidirectional (server pushes to client), SSE is the right primitive for self-hosted deployments and usually the most cost-effective on Vercel up to a few hundred concurrent users. If your feature requires the client to send data frequently or in real time, WebSockets are worth the operational overhead. If your data changes less often than once per 10–30 seconds, polling with correct cache semantics beats both.
Applying This to Your Codebase
The architectural decision matters most at the infrastructure layer — choosing SSE vs WebSockets is a decision you live with for years. Teams often make it under time pressure without benchmarking their actual feature requirements against their deployment environment, then discover the mismatch when traffic grows.
If your real-time architecture is already in place and you suspect it is not the right fit, a code quality audit is a structured way to assess it alongside the rest of the system. We regularly find teams running Socket.io for features that SSE handles cleanly at one-fifth the infrastructure cost, and the reverse — SSE implementations retrofitted for bidirectional use cases where the workarounds have become load-bearing.
Reach out at hello@wolf-tech.io or visit wolf-tech.io if you are evaluating your options or want an independent assessment before committing to an approach.

