PostgreSQL Connection Pooling with PgBouncer: Setup Guide

Learn how to set up PgBouncer for PostgreSQL connection pooling. Covers installation, pool modes, configuration tuning, and production best practices.

February 17, 2026

PostgreSQL creates a new operating system process for every client connection. Each process consumes memory (typically 5-10 MB), competes for CPU scheduling, and holds locks. At 50 connections, this is fine. At 500, performance degrades noticeably. At 2,000+, PostgreSQL starts spending more time managing processes than executing queries.

Connection pooling solves this by sitting between your application and PostgreSQL, maintaining a smaller pool of actual database connections and multiplexing many client connections across them. PgBouncer is the most widely used connection pooler for PostgreSQL -- lightweight, stable, and battle-tested in production at companies handling millions of requests per second.

This guide covers why connection pooling matters, how PgBouncer works, how to configure it for production, and how it fits into a high-availability PostgreSQL stack with Patroni.

Why PostgreSQL Needs Connection Pooling

PostgreSQL's process-per-connection architecture was designed in the 1990s when a database server might handle dozens of simultaneous connections. Modern applications routinely open hundreds or thousands of connections -- microservices with their own connection pools, serverless functions that create short-lived connections, background workers, cron jobs, and monitoring agents all compete for connections.

The problems compound at scale:

  • Memory overhead -- each backend process allocates its own memory for work_mem, temp_buffers, and catalog caches. With 500 connections and work_mem set to 64 MB, you could theoretically consume 32 GB just in working memory.
  • Process scheduling -- the OS kernel must schedule hundreds of processes. Context switching becomes expensive.
  • Lock contention -- PostgreSQL's internal lock manager (ProcArray, buffer mapping locks) becomes a bottleneck as process count rises.
  • Connection setup cost -- establishing a PostgreSQL connection involves TLS handshake (if enabled), authentication, catalog cache loading, and shared memory setup. This takes 5-20 ms per connection -- negligible for long-lived connections but painful for short-lived ones.

The fix is straightforward: keep a small pool of persistent PostgreSQL connections and route client requests through them. That is exactly what PgBouncer does.

How PgBouncer Works

PgBouncer is a single-threaded, event-driven process that accepts client connections on one side and maintains a pool of server connections to PostgreSQL on the other. When a client sends a query, PgBouncer assigns an available server connection from the pool, forwards the query, returns the result, and (depending on pool mode) releases the server connection back to the pool.

Because PgBouncer is event-driven rather than process-per-connection, it can handle thousands of client connections with minimal memory overhead -- typically under 2 KB per connection. A single PgBouncer instance can comfortably handle 10,000+ client connections while maintaining only 100-200 actual PostgreSQL connections.

PgBouncer supports three pool modes that determine when a server connection is assigned to and released from a client. Choosing the right mode is the most important configuration decision you will make.

Pool Modes Explained

  • Session -- connection assigned when the client connects, released when the client disconnects. Best for applications using session-level features (PREPARE, SET, advisory locks, LISTEN/NOTIFY).
  • Transaction -- connection assigned when the client starts a transaction, released when the transaction ends (COMMIT/ROLLBACK). Best for most OLTP web applications and APIs.
  • Statement -- connection assigned when the client sends a query, released when the query result is returned. Best for simple autocommit workloads with no multi-statement transactions.

Session Pooling

In session mode, a server connection is dedicated to a client for the entire duration of the client's session. The connection is only returned to the pool when the client disconnects.

This is the most compatible mode -- it supports every PostgreSQL feature because the client has exclusive use of the server connection. However, it provides the least pooling benefit. If you have 500 clients connected but only 50 are actively running queries, you still need 500 server connections.

Use session pooling when:

  • Your application uses prepared statements created with PREPARE (not the extended query protocol)
  • You rely on session variables set with SET (e.g., SET search_path)
  • You use advisory locks that must persist across transactions
  • You use LISTEN/NOTIFY
  • You use temporary tables that span transactions

Transaction Pooling

In transaction mode, a server connection is assigned when the client begins a transaction and released when the transaction commits or rolls back. Between transactions, the client does not hold a server connection.

This is the most efficient mode for typical web applications. If 500 clients are connected but transactions take an average of 10 ms and arrive every second, you need roughly 5 server connections to serve all 500 clients. The multiplexing ratio can be 100:1 or higher.
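The arithmetic above is an instance of Little's law: the number of server connections you need is roughly the transaction arrival rate multiplied by how long each transaction holds a connection. A minimal sketch, with the numbers mirroring the example above:

```python
def required_pool_size(clients: int, txns_per_client_per_sec: float,
                       avg_txn_seconds: float) -> float:
    """Little's law (L = lambda * W): concurrent server connections needed
    equals total transaction arrival rate times connection hold time."""
    arrival_rate = clients * txns_per_client_per_sec
    return arrival_rate * avg_txn_seconds

# 500 clients, each sending 1 transaction per second, 10 ms per transaction:
print(required_pool_size(500, 1.0, 0.010))  # roughly 5 server connections
```

Real workloads are bursty, so treat this as a lower bound and leave headroom via reserve_pool_size.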

Transaction pooling has restrictions. The following features do not work because the client may get a different server connection for each transaction:

  • Session-level SET commands (use SET LOCAL inside transactions instead)
  • PREPARE/DEALLOCATE (use the extended query protocol's unnamed prepared statements instead -- most modern drivers do this by default)
  • Advisory locks held across transactions
  • LISTEN/NOTIFY
  • Temporary tables that persist across transactions

For most web frameworks (Django, Rails, Express, Spring Boot) using modern PostgreSQL drivers, transaction pooling works without modification.

Statement Pooling

In statement mode, a server connection is assigned and released per individual statement. Multi-statement transactions are not possible -- PgBouncer disallows explicit BEGIN/COMMIT blocks.

This mode is rarely used. It is only appropriate for simple autocommit workloads where every query is independent. Most applications need transactions, which rules out statement pooling.

Installation

From Package Managers

On Ubuntu/Debian:

sudo apt-get update
sudo apt-get install pgbouncer

On RHEL/CentOS/Fedora:

sudo dnf install pgbouncer

On macOS (for local development):

brew install pgbouncer

Via Docker

For production deployments, running PgBouncer in a container is common:

docker run -d \
  --name pgbouncer \
  -p 6432:6432 \
  -v /etc/pgbouncer/pgbouncer.ini:/etc/pgbouncer/pgbouncer.ini:ro \
  -v /etc/pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro \
  edoburu/pgbouncer:v1.23.0-p1

Configuration

PgBouncer's configuration lives in pgbouncer.ini. There are two main sections: [databases] defines which databases PgBouncer proxies, and [pgbouncer] sets operational parameters.

Basic Configuration

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = plain
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
min_pool_size = 5
reserve_pool_size = 5
reserve_pool_timeout = 3
max_db_connections = 100
server_round_robin = 1
ignore_startup_parameters = extra_float_digits
logfile = /var/log/pgbouncer/pgbouncer.log
pidfile = /var/run/pgbouncer/pgbouncer.pid
admin_users = postgres
stats_users = postgres

Wildcard Database Entries

Instead of listing each database individually, you can use a wildcard:

[databases]
* = host=127.0.0.1 port=5432

This forwards any database name the client connects with to PostgreSQL on the same host. Useful when you have many databases or want PgBouncer to be transparent.

User Authentication

PgBouncer needs to authenticate clients. The userlist.txt file contains username/password pairs:

"postgres" "your_password_here"
"app_user" "app_password_here"

For production, use auth_type = md5 or auth_type = scram-sha-256 (SCRAM has been supported since PgBouncer 1.14). With auth_type = hba, you can reference a pg_hba.conf-style file for more granular access control.
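With auth_type = md5, the password field in userlist.txt holds not the plaintext password but the string md5 followed by the MD5 hex digest of password concatenated with username, matching PostgreSQL's legacy md5 password format. A small sketch for generating such an entry (the username and password are placeholders):

```python
import hashlib

def userlist_entry(username: str, password: str) -> str:
    """Build a userlist.txt line for auth_type = md5.
    The stored value is "md5" + md5(password + username)."""
    digest = hashlib.md5((password + username).encode()).hexdigest()
    return f'"{username}" "md5{digest}"'

print(userlist_entry("app_user", "app_password_here"))
```

For scram-sha-256 you would instead copy the verifier straight out of PostgreSQL's pg_shadow, since SCRAM verifiers cannot be derived from the password alone without the stored salt.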

Key Configuration Parameters

Connection Limits

  • max_client_conn -- Maximum number of client connections PgBouncer accepts. Set this high (1000-10000) since PgBouncer handles client connections cheaply. Default: 100.
  • default_pool_size -- Number of server connections per user/database pair. This is the core knob. Start with 20-25 and adjust based on your PostgreSQL max_connections setting. Default: 20.
  • min_pool_size -- Minimum number of server connections to keep open even when idle. Avoids connection setup latency for the first queries after an idle period. Default: 0.
  • reserve_pool_size -- Additional server connections allowed when the normal pool is exhausted and clients have been waiting longer than reserve_pool_timeout. Acts as overflow capacity. Default: 0.
  • max_db_connections -- Hard cap on total server connections per database, across all pools. Prevents PgBouncer from overwhelming PostgreSQL. Default: 0 (unlimited).
  • max_user_connections -- Hard cap on total server connections per user, across all databases. Default: 0 (unlimited).

Sizing default_pool_size

The right value depends on your PostgreSQL max_connections and how many PgBouncer instances you run.

Formula:

default_pool_size = (max_connections - reserved_connections) / pgbouncer_instances

If PostgreSQL has max_connections = 200, you reserve 10 for superuser access and Patroni, and you run 2 PgBouncer instances:

default_pool_size = (200 - 10) / 2 = 95

Round down to leave headroom. Setting default_pool_size = 80 with max_db_connections = 90 gives you a safe buffer.
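The sizing rule above can be written as a small helper. This is a sketch of the same arithmetic; the 15% headroom factor is an arbitrary illustrative choice, not a PgBouncer setting:

```python
def default_pool_size(max_connections: int, reserved: int,
                      pgbouncer_instances: int, headroom: float = 0.15) -> int:
    """Divide usable PostgreSQL connections across PgBouncer instances,
    then shave off headroom so max_db_connections can sit safely above it."""
    per_instance = (max_connections - reserved) // pgbouncer_instances
    return int(per_instance * (1 - headroom))

# max_connections=200, 10 reserved, 2 PgBouncer instances:
print(default_pool_size(200, 10, 2))  # 95 per instance, 80 after headroom
```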

Timeout Parameters

  • server_idle_timeout -- Close server connections that have been idle longer than this (seconds). Default: 600. Keeps the pool from holding connections unnecessarily during low-traffic periods.
  • server_connect_timeout -- How long to wait when establishing a new server connection. Default: 15 seconds.
  • server_login_retry -- Time to wait before retrying a failed server connection. Default: 15 seconds.
  • query_timeout -- Cancel queries running longer than this. Default: 0 (disabled). Use cautiously -- it kills long-running queries.
  • client_idle_timeout -- Disconnect clients that have been idle for this long. Default: 0 (disabled). Useful for cleaning up abandoned connections from misbehaving applications.

TLS Configuration

For encrypted connections between clients and PgBouncer:

client_tls_sslmode = require
client_tls_cert_file = /etc/pgbouncer/certs/pgbouncer.crt
client_tls_key_file = /etc/pgbouncer/certs/pgbouncer.key

For encrypted connections between PgBouncer and PostgreSQL:

server_tls_sslmode = verify-full
server_tls_ca_file = /etc/pgbouncer/certs/ca.crt

Integrating PgBouncer with Patroni for High Availability

In a Patroni-based HA setup, PgBouncer sits in front of HAProxy (or directly in front of Patroni nodes). The typical stack is:

Application --> PgBouncer --> HAProxy --> Patroni (Primary/Replicas)

PgBouncer connects to HAProxy's write endpoint (typically port 5000) for read-write traffic. HAProxy uses Patroni's REST API health checks to route connections to the current primary. When failover occurs:

  1. Patroni promotes a replica to primary.
  2. HAProxy detects the change via health checks within 2-5 seconds.
  3. PgBouncer's existing server connections to the old primary break.
  4. PgBouncer establishes new server connections through HAProxy to the new primary.
  5. Client connections through PgBouncer experience a brief interruption (transactions in flight are aborted) but do not need to reconnect -- PgBouncer handles reconnection transparently.

For read-write splitting, run two PgBouncer instances or two database entries -- one pointing to HAProxy's write port and one to the read port:

[databases]
mydb = host=haproxy_host port=5000 dbname=mydb
mydb_readonly = host=haproxy_host port=5001 dbname=mydb

Your application connects to mydb for writes and mydb_readonly for read queries.
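As an illustration, a Django-style DATABASES setting wired to these two entries might look like the following sketch. The host name and credentials are placeholders, and note that both entries point at PgBouncer's port (6432), not PostgreSQL's:

```python
# Hypothetical Django settings fragment: writes use "default",
# read-only queries use "replica". Both connect through PgBouncer.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",           # PgBouncer routes this to HAProxy's write port
        "HOST": "pgbouncer.internal",
        "PORT": 6432,
        "USER": "app_user",
        "PASSWORD": "app_password_here",
    },
    "replica": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb_readonly",  # routed to HAProxy's read port
        "HOST": "pgbouncer.internal",
        "PORT": 6432,
        "USER": "app_user",
        "PASSWORD": "app_password_here",
    },
}
```

Routing reads to the replica entry (via a database router, or explicit .using("replica") calls) is then an application-level decision.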

Monitoring PgBouncer

PgBouncer exposes statistics through a virtual pgbouncer database that you query with any PostgreSQL client:

psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer

Key queries:

-- Pool status: active, waiting, and idle connections per pool
SHOW POOLS;

-- Connection statistics
SHOW STATS;

-- Currently connected clients
SHOW CLIENTS;

-- Currently connected server connections
SHOW SERVERS;

-- Runtime configuration
SHOW CONFIG;

The most important metrics to watch:

  • cl_waiting in SHOW POOLS -- clients waiting for a server connection. If this is consistently above 0, your pool is undersized.
  • sv_active vs sv_idle in SHOW POOLS -- how many server connections are in use vs idle. If sv_idle is consistently 0, you may need a larger pool.
  • avg_query_time in SHOW STATS -- average query execution time. A sudden increase may indicate PostgreSQL issues, not PgBouncer issues.

For Prometheus-based monitoring, use the pgbouncer_exporter to scrape these statistics and create Grafana dashboards.

Common Issues and Troubleshooting

"no more connections allowed" Error

This means PgBouncer has hit max_client_conn. Either increase the limit or investigate why so many clients are connecting. Check for connection leaks in your application -- connections opened but never closed or returned to the application's connection pool.

Prepared Statement Errors in Transaction Mode

If your application uses named prepared statements (PREPARE stmt AS ...), they break in transaction mode because the statement exists only on the server connection used to create it. The client may get a different server connection for the next transaction.

Fix: Use unnamed prepared statements via the extended query protocol. Most modern drivers (libpq, JDBC, node-postgres, psycopg3) do this by default. If your driver sends PREPARE, switch to session pooling or configure the driver to use protocol-level prepared statements. PgBouncer 1.21 and later can also track protocol-level named prepared statements in transaction mode when max_prepared_statements is set to a nonzero value.

Authentication Failures

PgBouncer authenticates clients separately from PostgreSQL. The credentials in userlist.txt must match what clients send. If you change a PostgreSQL password, update userlist.txt and reload PgBouncer:

# Reload config without restarting
psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer -c "RELOAD;"

Connections Piling Up After Failover

After a Patroni failover, PgBouncer's existing server connections to the old primary are broken. PgBouncer detects this and attempts to reconnect, but if server_connect_timeout and server_login_retry are set too high, clients may wait a long time. Set server_connect_timeout = 5 and server_login_retry = 5 for faster recovery in HA environments.
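In pgbouncer.ini, the faster-recovery settings described above look like this:

```ini
server_connect_timeout = 5
server_login_retry = 5
```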

SET Statements Not Persisting

In transaction pooling mode, SET commands (e.g., SET timezone = 'UTC') only persist within the current transaction. The next transaction may use a different server connection with different settings. Use SET LOCAL inside a transaction block, or configure default values in postgresql.conf on the server side.
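A minimal illustration of the difference (both statements are standard PostgreSQL):

```sql
-- SET LOCAL lasts only until COMMIT, so it is safe under transaction pooling:
BEGIN;
SET LOCAL timezone = 'UTC';
SELECT now();  -- evaluated with timezone UTC
COMMIT;
-- after COMMIT the server connection reverts to its previous timezone

-- A plain SET, by contrast, outlives the transaction and "leaks" onto
-- whichever client is handed this server connection next:
SET timezone = 'UTC';
```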

How sshploy Deploys PgBouncer

When you deploy a PostgreSQL/Patroni cluster with sshploy, PgBouncer is included and configured automatically as part of the recipe. sshploy runs Ansible playbooks that deploy PgBouncer as a Docker container (using the edoburu/pgbouncer image), generate the pgbouncer.ini configuration with the correct pool size limits and connection settings pointed at your PostgreSQL nodes through HAProxy, create the userlist.txt authentication file with your database credentials, and optionally configure TLS with your SSL certificates. The result is a fully wired PgBouncer instance that works with the rest of the Patroni stack out of the box -- no manual configuration needed.

PgBouncer vs PgPool-II

PgPool-II is an alternative connection pooler that also offers query caching, load balancing, and replication management. PgBouncer is significantly lighter -- it does one thing (connection pooling) and does it well. PgPool-II's additional features come at the cost of complexity and higher resource usage.

For most deployments, PgBouncer is the better choice for connection pooling. If you need read/write splitting, HAProxy paired with Patroni health checks is more reliable than PgPool-II's built-in load balancing. The PostgreSQL community and major hosting providers (Supabase, Neon, Crunchy Data) overwhelmingly use PgBouncer.

FAQ

How many client connections can PgBouncer handle?

PgBouncer can handle over 10,000 concurrent client connections on modest hardware (2 vCPU, 2 GB RAM). Each client connection uses approximately 2 KB of memory. The bottleneck is almost never PgBouncer itself -- it is the number of server connections your PostgreSQL instance can sustain.

Should I use PgBouncer's built-in connection pooling or my application's?

Use both. Your application-level pool (e.g., HikariCP for Java, SQLAlchemy pool for Python) should maintain a small number of connections to PgBouncer. PgBouncer then multiplexes connections from all your application instances. Set your application pool size small (5-20 per instance) and let PgBouncer handle the aggregation across all instances.

Can I run PgBouncer on the same server as PostgreSQL?

You can, and for small deployments it works fine. For production HA setups, running PgBouncer on a separate node (or co-located with HAProxy) is preferable. This decouples the pooler's availability from the database node -- if the PostgreSQL server goes down for maintenance or failover, PgBouncer remains available and can buffer client connections while a new primary is promoted.

How do I reload PgBouncer configuration without downtime?

Connect to the PgBouncer admin console and run RELOAD:

psql -h 127.0.0.1 -p 6432 -U postgres pgbouncer -c "RELOAD;"

This reloads pgbouncer.ini and userlist.txt without dropping existing connections. Changes to listen_addr and listen_port require a full restart.

What pool_mode should I use with Django / Rails / Next.js?

Transaction pooling works with all three, with minor caveats. Django's default psycopg backend does not create named server-side prepared statements, so it is compatible out of the box. Rails uses named prepared statements by default; set prepared_statements: false in config/database.yml when connecting through transaction pooling. For Next.js with Prisma, set pgbouncer=true in the connection string to enable compatible behavior. For Next.js with Drizzle or Kysely over node-postgres, transaction mode works without changes.

Ready to deploy?

Skip the manual setup. sshploy handles the entire deployment for you.

Deploy PostgreSQL with Patroni