Database Connection Pool Math: Why Your Idle Symfony Workers Are Quietly Killing PostgreSQL Throughput
A B2B SaaS team we worked with last year had a near-identical story to at least four others in our client history. They launched a Product Hunt campaign, traffic tripled over forty minutes, and their Symfony application started throwing PostgreSQL errors: FATAL: remaining connection slots are reserved for non-replication superuser connections. The strangest part? Their monitoring showed forty idle php-fpm workers sitting in pm.status with nothing to do. The application was not under CPU pressure. The database server had plenty of RAM. The PostgreSQL connection pool was simply full, and no one had ever done the math.
This post walks through that math, explains why the numbers surprise most teams, covers the PgBouncer configuration that creates a silent trap for Doctrine users, and gives you three pg_stat_activity queries that belong on every PHP team's Grafana board.
The Arithmetic Nobody Writes Down
PostgreSQL connection pool sizing feels like DevOps ceremony until you hit the ceiling. At that point, it becomes the most urgent engineering problem in the room. The formula is straightforward once you enumerate every component that opens a connection:
peak_connections =
(php-fpm pm.max_children)
+ (messenger_consumers × workers_per_consumer)
+ (scheduler_workers)
+ (cron_processes)
+ (monitoring_and_admin_connections)
+ (pg_reserved_superuser_slots)
PostgreSQL defaults to max_connections = 100, with three slots reserved for superuser connections by default. That leaves 97 for your application. A typical production Symfony setup looks like this:
- 60 php-fpm workers (pm = dynamic, pm.max_children = 60)
- 10 Messenger consumers (2 transports × 5 workers each)
- 2 scheduler processes
- 5 long-running cron commands
- 3 monitoring connections (your Datadog agent, a DBA query session, a read replica lag check)
That is 80 connections with every component at its configured limit, already 82% of the 97 usable slots, before a single spike arrives. When traffic doubles, all sixty php-fpm children connect at once, a marketing cron kicks off during the same window, and the total climbs past 97. PostgreSQL starts refusing connections. The forty "idle" workers you see in the process list are the ones that connected a fraction of a second earlier; the rest never got through.
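Before trusting that arithmetic, read the limits off the server you are actually sizing for; the two settings and the live count are all visible from any psql session:

-- How many slots exist, how many are held back for superusers,
-- and how many are occupied right now
SELECT
    (SELECT setting::int FROM pg_settings WHERE name = 'max_connections') AS max_connections,
    (SELECT setting::int FROM pg_settings WHERE name = 'superuser_reserved_connections') AS reserved_for_superuser,
    (SELECT count(*) FROM pg_stat_activity) AS connections_in_use;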
Why Messenger Workers Hurt More Than php-fpm
php-fpm workers hold a Doctrine connection for the duration of each request and release it when the script finishes (with persistent connections enabled, the underlying connection is instead kept alive and reused by the next request on that worker). In normal Symfony usage, each worker opens a connection on first database access, and Doctrine closes it when the worker process terminates or when you call $em->getConnection()->close() explicitly.
Symfony Messenger workers are a different category entirely. Each worker process runs in a loop, holding its database connection open indefinitely. This is by design — the connection is needed to poll the transport or to execute handlers. The consequence is that every Messenger worker permanently occupies a slot in PostgreSQL's connection table, regardless of whether it is actively processing a message.
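You can see the difference directly in pg_stat_activity: Messenger worker connections show a backend_start hours or days in the past, while php-fpm connections churn. A quick check, run as a role allowed to see other sessions:

-- Longest-lived client connections first; Messenger workers float to the top
SELECT
    application_name,
    pid,
    state,
    now() - backend_start AS connection_age
FROM pg_stat_activity
WHERE backend_type = 'client backend'
ORDER BY connection_age DESC
LIMIT 20;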
If you have five transports and run three workers per transport under Supervisor, that is fifteen permanent connections that never release. Scale that across staging, production, and a DR environment, and the number climbs fast. The fix starts with making these visible by setting application_name in your Doctrine configuration:
# config/packages/doctrine.yaml
doctrine:
    dbal:
        # Set application_name so pg_stat_activity identifies your workers.
        # With pdo_pgsql the name travels as a connection parameter, so append
        # ?application_name=sf-messenger-worker to the DATABASE_URL this process uses.
        url: '%env(DATABASE_URL)%'
Give each process type its own name by appending a different value per environment or per Messenger transport, for example ?application_name=sf-worker-async on the consumer's DATABASE_URL. Once this is in place, your monitoring queries become meaningful rather than showing an undifferentiated wall of anonymous PHP connections.
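For a one-off CLI command or cron job that shares the application's DATABASE_URL and cannot easily get its own, the name can also be set after connecting. This is plain PostgreSQL rather than anything Doctrine-specific, and the label below is only an example:

-- Applies to the current session only; run it as the first statement after connecting.
-- 'sf-cron-nightly-import' is an illustrative label, not a required value.
SET application_name = 'sf-cron-nightly-import';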
PgBouncer Transaction Pooling and the Doctrine Prepared Statement Trap
PgBouncer is the standard solution for reducing the number of real connections PostgreSQL has to manage. In session-pooling mode it is functionally transparent to Doctrine — one client session gets one server connection for its lifetime, so prepared statements work without changes. The problem is that session pooling does not actually reduce connections under concurrent load; it just adds a proxy hop.
Transaction-pooling mode is where PgBouncer earns its keep: a server connection is only held during an active transaction, then returned to the pool. For a php-fpm application where most requests have short, discrete transactions, this can cut your real PostgreSQL connections by 80%. But it creates a critical incompatibility with Doctrine.
Doctrine DBAL uses prepared statements by default. When PgBouncer multiplexes multiple client sessions onto the same server connection (across different transaction boundaries), prepared statements created in one session context can appear to belong to another, producing errors like:
SQLSTATE[42P05]: Duplicate prepared statement: ERROR: prepared statement "doctrine_0" already exists
Or worse, the statement executes silently against the wrong parameter context. There are two reliable fixes:
Option A — Disable server-side prepared statements in Doctrine (recommended for PgBouncer < 1.21):
# config/packages/doctrine.yaml
doctrine:
    dbal:
        url: '%env(DATABASE_URL)%'
        options:
            !php/const PDO::ATTR_EMULATE_PREPARES: true
PDO::ATTR_EMULATE_PREPARES tells PDO to simulate prepared statement behaviour in the client library rather than sending PREPARE/EXECUTE to PostgreSQL. Performance impact is negligible for typical OLTP queries; the main trade-off is that parameter type inference shifts from the server to the client.
Option B — Enable PgBouncer's built-in prepared statement tracking (PgBouncer ≥ 1.21):
; pgbouncer.ini
[pgbouncer]
pool_mode = transaction
; 0 (the default) disables tracking; any positive value enables it
max_prepared_statements = 200
PgBouncer 1.21 introduced server-side tracking of prepared statements that survives connection hand-off. If you are running a recent version, this is the cleaner solution — it keeps full server-side type checking while enabling transaction pooling. Verify the version before relying on it: older Ubuntu LTS repositories still ship PgBouncer 1.17.
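A practical way to confirm what you are actually running is PgBouncer's own admin console: connect with psql to the virtual pgbouncer database (port and user depend on your setup) and issue its SHOW commands:

SHOW VERSION;
SHOW POOLS;

SHOW VERSION tells you whether the 1.21 prepared-statement tracking is available at all, and SHOW POOLS reports client versus server connection counts per pool, which is also the quickest way to check the pooling ratio you are actually achieving.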
The Three Queries Every PHP Team Needs on Grafana
Connection pool problems are invisible until they become emergencies. These three queries, run against your production PostgreSQL instance every thirty seconds, surface the issues before your users do.
Query 1 — Connection inventory by application and state:
SELECT
    application_name,
    state,
    count(*) AS connections
FROM pg_stat_activity
WHERE datname = current_database()
GROUP BY application_name, state
ORDER BY connections DESC;
Once you have application_name set on each component (php-fpm, Messenger workers, cron, monitoring), this query immediately shows you who is consuming pool slots and in what state. An idle in transaction row with a high count is a red flag — it means transactions are being held open too long, wasting connections that could be released.
Query 2 — Connection utilisation percentage:
SELECT
    count(*) AS active_connections,
    max_conn AS max_connections,
    round(count(*) * 100.0 / max_conn, 1) AS utilisation_pct
FROM pg_stat_activity,
    (SELECT setting::int AS max_conn
     FROM pg_settings
     WHERE name = 'max_connections') AS s
WHERE pg_stat_activity.datname = current_database()
GROUP BY max_conn;
Set a Grafana alert at 70% utilisation. By the time you hit 85%, the remaining headroom is not enough to absorb a traffic spike, and you need to act before the problem arrives rather than during it.
Query 3 — Idle-in-transaction leak detector:
SELECT
    pid,
    application_name,
    state,
    now() - query_start AS duration,
    left(query, 100) AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND datname = current_database()
  AND query_start < now() - interval '5 minutes'
ORDER BY duration DESC;
A Doctrine entity manager left open across a slow external API call, a Messenger handler that opens a transaction before an HTTP request and forgets to commit on error, a CLI command that wraps an import loop in a single transaction for performance — these all appear here. Each one holds a connection hostage for the duration. Page on any result older than five minutes.
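PostgreSQL can also act as a backstop for this class of leak. idle_in_transaction_session_timeout terminates any session that sits inside an open transaction for too long; the ten-minute value below is only an illustration and should sit comfortably above your longest legitimate transaction:

-- Kill sessions idle inside a transaction for more than 10 minutes
-- (pick a ceiling above your longest legitimate transaction)
ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
SELECT pg_reload_conf();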
Putting It Together: A Sizing Checklist
Before you touch max_connections on the PostgreSQL server (every slot you add costs shared memory, and pushing it past 200 adds per-connection overhead even for idle sessions), run through this checklist:
- Enumerate all connection sources with the formula above and measure peak versus steady-state.
- Set application_name on every process type so pg_stat_activity gives you useful signal.
- Deploy PgBouncer in transaction mode with PDO::ATTR_EMULATE_PREPARES or PgBouncer ≥ 1.21 prepared-statement tracking. Aim for a 4:1 ratio of application connections to real PostgreSQL connections.
- Cap Messenger workers deliberately: you almost never need more than 3–5 workers per transport. Prefer horizontal scaling of the host over dozens of workers on one machine.
- Add the three Grafana queries and set the 70% utilisation alert before your next launch.
The teams that get this right do not discover the ceiling during a Product Hunt campaign. They see a utilisation graph that trends upward, act while they still have a six-week runway, and the campaign day is uneventful.
If you want a second opinion on how your current Symfony stack handles database connection lifecycle — or if you have already hit the ceiling and need triage — a code quality audit is often the fastest path from crisis to stable. Reach us at hello@wolf-tech.io or visit wolf-tech.io.

