
Database Connection Pooling Strategies

A guide to production database connection pooling, covering pool sizing formulas, PgBouncer, connection lifecycle, pool exhaustion diagnosis, and serverless connection patterns.

Database connections are expensive resources. Each PostgreSQL connection consumes ~10MB of memory and requires a process fork. Most production databases support 100-300 concurrent connections before performance degrades. Yet a single application server can easily open 50+ connections. Without connection pooling, a modest deployment of 10 app servers can exhaust the database connection limit and cascade into a full outage.

Connection pooling solves this by maintaining a pool of reusable connections that are shared across application threads. Instead of opening and closing connections per request, threads borrow a connection from the pool, use it, and return it.
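
In code, the borrow/use/return cycle looks roughly like the sketch below. This is a toy illustration built on Python's standard queue module and psycopg2 (the connection string is a placeholder); real applications should use a pooling library such as the ones shown later in this guide.

import queue
import psycopg2

# Toy pool: open a fixed set of connections once, then share them across threads
# instead of reconnecting on every request.
POOL_SIZE = 5
pool = queue.Queue(maxsize=POOL_SIZE)
for _ in range(POOL_SIZE):
    conn = psycopg2.connect("postgresql://user:pass@host/db")
    conn.autocommit = True
    pool.put(conn)

def run_query(sql, params=None):
    conn = pool.get()              # borrow: blocks if every connection is in use
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        pool.put(conn)             # return: the connection goes back, never closed per request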


Pool Sizing

The optimal pool size is smaller than most engineers expect. More connections ≠ more throughput.

The Formula

Optimal Pool Size = (Core Count × 2) + Effective Spindle Count

For SSDs (spindle count ≈ 1):

Optimal Pool Size ≈ (CPU Cores × 2) + 1

Example: a database server with 8 cores → (8 × 2) + 1 = 17, i.e. a pool size of 17 per application instance.

With 10 app instances: 170 total connections. This leaves headroom for admin connections, monitoring, and migration scripts.
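
As a quick sanity check, the arithmetic is easy to script (a throwaway helper, not part of any library):

def optimal_pool_size(core_count: int, effective_spindles: int = 1) -> int:
    """(core_count * 2) + effective_spindle_count; spindles ~= 1 for SSDs."""
    return core_count * 2 + effective_spindles

print(optimal_pool_size(8))        # 17 per application instance
print(optimal_pool_size(8) * 10)   # 170 total across 10 instances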

Why Smaller Pools Are Faster

Pool Size    Throughput    Avg Latency
10           9,800 TPS     2.1 ms
20           10,200 TPS    3.8 ms
50           9,100 TPS     12.4 ms
100          7,300 TPS     28.7 ms

Beyond the optimal size, adding connections increases contention (CPU context switching, lock contention, cache line invalidation) without increasing throughput.


Application-Level Pooling

HikariCP (Java/JVM)

# application.yml
spring:
  datasource:
    hikari:
      maximum-pool-size: 17
      minimum-idle: 5
      idle-timeout: 120000      # 2 minutes
      max-lifetime: 1800000     # 30 minutes
      connection-timeout: 3000  # 3 seconds
      leak-detection-threshold: 30000   # 30 seconds

SQLAlchemy (Python)

from sqlalchemy import create_engine

engine = create_engine(
    "postgresql://user:pass@host/db",
    pool_size=10,           # Steady-state connections
    max_overflow=7,         # Burst capacity
    pool_timeout=3,         # Wait for connection (seconds)
    pool_recycle=1800,      # Recycle after 30 minutes
    pool_pre_ping=True,     # Health check before use
)

External Connection Pooling with PgBouncer

For deployments with many application instances, an external pooler like PgBouncer sits between the application and database:

App Instance 1 ─┐
App Instance 2 ─┼──→ PgBouncer (50 connections) ──→ PostgreSQL (50 connections)
App Instance 3 ─┘        ↕ pools 500+ app connections

Pool Modes

Mode          Description                                   Use Case
Session       Connection held for entire client session     Legacy apps with prepared statements
Transaction   Connection held only during transaction       Most web applications
Statement     Connection returned after each statement      Highest multiplexing, limited features

Transaction mode is the sweet spot for web applications. It provides 10-50x multiplexing while still supporting transactions. The tradeoff: session-level state (named prepared statements, session-scoped SET, advisory locks, LISTEN/NOTIFY) cannot be relied on, because consecutive transactions may run on different server connections.

PgBouncer Configuration

[pgbouncer]
pool_mode = transaction
default_pool_size = 25
max_client_conn = 1000
reserve_pool_size = 5
reserve_pool_timeout = 3
server_lifetime = 3600
server_idle_timeout = 600
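
From the application's point of view, PgBouncer is just another PostgreSQL endpoint. A sketch of pointing the earlier SQLAlchemy engine at it (the pgbouncer.internal host is a placeholder; 6432 is PgBouncer's default listen port):

from sqlalchemy import create_engine

# Connect to PgBouncer rather than PostgreSQL directly. With
# pool_mode = transaction, keep the app-side pool small (PgBouncer does
# the multiplexing) and avoid relying on session state between queries.
engine = create_engine(
    "postgresql://user:pass@pgbouncer.internal:6432/db",
    pool_size=5,
    max_overflow=2,
    pool_pre_ping=True,
)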

Diagnosing Pool Exhaustion

Pool exhaustion is one of the most common production issues. Symptoms: increasing response times, connection timeout errors, cascading failures.
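
Before reaching for the database, check the application-side pool itself. SQLAlchemy's default QueuePool exposes its state directly (a sketch against the engine from the example above):

# Snapshot the pool; log it periodically or export it as a metric.
pool = engine.pool
print(pool.status())       # summary string: size, checked in, checked out, overflow
print(pool.checkedout())   # connections currently borrowed by application code
print(pool.overflow())     # how far into max_overflow burst capacity we are

If checkedout() is pinned at pool_size + max_overflow and requests fail after pool_timeout, the pool is exhausted and the queries below will show why.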

Diagnostic Queries

-- Current connection count by application
SELECT application_name, state, COUNT(*)
FROM pg_stat_activity
GROUP BY application_name, state
ORDER BY COUNT(*) DESC;

-- Long-running queries holding connections
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state != 'idle'
  AND (now() - query_start) > interval '30 seconds'
ORDER BY duration DESC;

-- Waiting connections (blocked)
SELECT blocked.pid AS blocked_pid,
       blocked.query AS blocked_query,
       blocking.pid AS blocking_pid,
       blocking.query AS blocking_query
FROM pg_stat_activity blocked
JOIN pg_locks bl ON bl.pid = blocked.pid
JOIN pg_locks l ON l.locktype = bl.locktype 
  AND l.relation = bl.relation AND l.pid != bl.pid
JOIN pg_stat_activity blocking ON l.pid = blocking.pid
WHERE NOT bl.granted;

Common Root Causes

  1. Connection leak: Application code opens connections but doesn’t close them in error paths (see the sketch after this list)
  2. Long-running queries: A slow query holds a connection for minutes instead of milliseconds
  3. Connection storm: Sudden traffic spike exceeds pool burst capacity
  4. Deadlocks: Two transactions block each other indefinitely
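
The first cause is usually a missing finally or context manager. A before/after sketch using the SQLAlchemy engine from earlier (the users table is hypothetical):

from sqlalchemy import text

# Leaky: an exception between borrow and close strands the connection.
def fetch_user_leaky(user_id):
    conn = engine.connect()
    row = conn.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id}).fetchone()
    conn.close()   # never reached if execute() or fetchone() raises
    return row

# Safe: the context manager returns the connection to the pool on every path.
def fetch_user(user_id):
    with engine.connect() as conn:
        return conn.execute(text("SELECT * FROM users WHERE id = :id"), {"id": user_id}).fetchone()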

Serverless Connection Patterns

Serverless functions (AWS Lambda, Cloudflare Workers) create a unique challenge: each invocation may need a database connection, but there’s no persistent process to maintain a pool.

Solutions:

  1. Managed proxy: AWS RDS Proxy, Supabase connection pooler (PgBouncer built-in)
  2. HTTP-based access: Use a data API (Supabase PostgREST, Hasura) instead of direct connections
  3. Connection-aware design: Each Lambda opens one connection, uses it, closes it — with an external pooler absorbing the churn

The key insight: serverless doesn’t mean you can skip connection management. It means you need an external pooler even more than traditional architectures.
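
A sketch of the third pattern for AWS Lambda, combined with a managed proxy in front of the database (psycopg2 and the environment variable names are assumptions): hold one connection per warm container, reopen it only when it dies, and let the proxy absorb the churn from cold starts.

import os
import psycopg2

_conn = None  # survives for the lifetime of a warm container

def _get_connection():
    """Open one connection per container, reopening only if it has died."""
    global _conn
    if _conn is None or _conn.closed:
        _conn = psycopg2.connect(
            host=os.environ["DB_PROXY_ENDPOINT"],  # RDS Proxy / PgBouncer, not the database
            dbname=os.environ["DB_NAME"],
            user=os.environ["DB_USER"],
            password=os.environ["DB_PASSWORD"],
            connect_timeout=3,
        )
        _conn.autocommit = True
    return _conn

def handler(event, context):
    conn = _get_connection()
    with conn.cursor() as cur:
        cur.execute("SELECT 1")
        return {"ok": cur.fetchone()[0] == 1}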

Jakub Dimitri Rezayev
Founder & Chief Architect • Garnet Grid Consulting

Jakub holds an M.S. in Customer Intelligence & Analytics and a B.S. in Finance & Computer Science from Pace University. With deep expertise spanning D365 F&O, Azure, Power BI, and AI/ML systems, he architects enterprise solutions that bridge legacy systems and modern technology — and has led multi-million dollar ERP implementations for Fortune 500 supply chains.
