PostgreSQL Performance Engineering
Optimize PostgreSQL for production. Covers query optimization, indexing strategies, EXPLAIN ANALYZE, connection pooling, partitioning, vacuum tuning, and monitoring for high-traffic applications.
PostgreSQL runs a huge share of the world’s production workloads, but its default configuration is sized for a laptop, not a server. Out of the box, Postgres allocates 128 MB of shared buffers, uses 4 MB of work memory per sort or hash operation, and ships with conservative autovacuum settings. This guide covers the engineering decisions that take Postgres from “it works” to “it handles 50,000 queries per second.”
Query Optimization
Reading EXPLAIN ANALYZE
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT)
SELECT o.id, o.amount, c.name
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.created_at > '2025-01-01'
AND o.status = 'shipped'
ORDER BY o.created_at DESC
LIMIT 100;
-- Key things to look for:
-- Sequential Scan → Missing index
-- Nested Loop (with high row count) → Consider Hash Join
-- Sort (external merge) → work_mem too low
-- Buffers: shared hit vs shared read → Cache hit ratio
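If the plan for the query above shows a Sequential Scan on orders, one likely fix is a composite B-tree index matching both the equality filter and the sort. A sketch (the index name is illustrative):

```sql
-- Equality column first, then the range/sort column, so one index serves
-- WHERE status = 'shipped' AND created_at > ... ORDER BY created_at DESC.
-- CONCURRENTLY avoids blocking writes while the index builds.
CREATE INDEX CONCURRENTLY idx_orders_status_created_at
    ON orders (status, created_at DESC);
```

Re-run EXPLAIN ANALYZE afterward; the Seq Scan should become an Index Scan, and the Sort node should disappear because the index already returns rows in the requested order.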
Index Strategy
| Index Type | Use Case | Example |
|---|---|---|
| B-tree (default) | Equality, range, sorting | CREATE INDEX ON orders(created_at) |
| GIN | Full-text search, JSONB, arrays | CREATE INDEX ON docs USING gin(content) |
| GiST | Geometric, range types, nearest-neighbor | CREATE INDEX ON locations USING gist(coordinates) |
| BRIN | Large tables with natural ordering | CREATE INDEX ON events USING brin(timestamp) |
| Hash | Equality-only (rare use) | CREATE INDEX ON users USING hash(email) |
| Partial | Subset of rows | CREATE INDEX ON orders(id) WHERE status = 'pending' |
| Covering | Avoid table lookups | CREATE INDEX ON orders(customer_id) INCLUDE (amount, status) |
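These types compose. A partial covering index, for instance, can serve a hot query path entirely from the index. A sketch (index name and query are illustrative):

```sql
-- Index only pending orders, and carry amount in the index leaf pages
-- so the query below can use an index-only scan (no heap fetches).
CREATE INDEX idx_orders_pending
    ON orders (customer_id) INCLUDE (amount)
    WHERE status = 'pending';

-- Eligible for an index-only scan once the table is well-vacuumed:
SELECT customer_id, amount
FROM orders
WHERE status = 'pending' AND customer_id = 42;
```

The partial predicate keeps the index small (only pending rows), and INCLUDE avoids a table lookup for amount, combining two rows of the table above.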
Configuration Tuning
# postgresql.conf for 64 GB RAM, 16 CPU server
# Memory
shared_buffers = 16GB # 25% of RAM
effective_cache_size = 48GB # 75% of RAM
work_mem = 256MB # Per-operation sort/hash memory
maintenance_work_mem = 2GB # For VACUUM, CREATE INDEX
# WAL
wal_buffers = 64MB
max_wal_size = 4GB
min_wal_size = 1GB
checkpoint_completion_target = 0.9
# Query Planner
random_page_cost = 1.1 # SSD (default 4.0 is for HDD)
effective_io_concurrency = 200 # SSD
# Autovacuum (aggressive for high-write workloads)
autovacuum_max_workers = 6
autovacuum_naptime = 10s
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.02
autovacuum_analyze_threshold = 50
autovacuum_analyze_scale_factor = 0.01
# Connections
max_connections = 200 # Use connection pooler!
Connection Pooling
| Pooler | Mode | Best For |
|---|---|---|
| PgBouncer | Transaction pooling | Most applications (recommended) |
| pgpool-II | Connection + load balancing | Read replica routing |
| Supavisor | Multi-tenant pooling | Supabase/multi-tenant |
# PgBouncer config
[databases]
mydb = host=localhost dbname=mydb
[pgbouncer]
pool_mode = transaction # Release connection after each transaction
max_client_conn = 1000 # Accept up to 1000 app connections
default_pool_size = 50 # But only use 50 real DB connections
reserve_pool_size = 10 # Extra connections for spikes
reserve_pool_timeout = 3
server_idle_timeout = 600
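PgBouncer exposes an admin console for verifying the pool behaves as configured; connect with psql to the virtual pgbouncer database (port 6432 by default):

```sql
-- psql -p 6432 -U pgbouncer pgbouncer
SHOW POOLS;   -- cl_active vs sv_active: client connections vs real server connections
SHOW STATS;   -- per-database query counts and average query/wait times
```

If cl_waiting is persistently nonzero, clients are queuing for server connections; raise default_pool_size or find the slow transactions holding connections.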
Table Partitioning
-- Partition orders by month for query performance and maintenance
CREATE TABLE orders (
id BIGSERIAL,
customer_id BIGINT NOT NULL,
amount DECIMAL(10,2) NOT NULL,
status TEXT NOT NULL,
created_at TIMESTAMPTZ NOT NULL,
PRIMARY KEY (id, created_at)
) PARTITION BY RANGE (created_at);
-- Create monthly partitions
CREATE TABLE orders_2025_01 PARTITION OF orders
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
CREATE TABLE orders_2025_02 PARTITION OF orders
FOR VALUES FROM ('2025-02-01') TO ('2025-03-01');
-- Automated partition creation (pg_partman)
SELECT partman.create_parent('public.orders', 'created_at', 'native', 'monthly');
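The maintenance payoff: retiring a month of data becomes a metadata operation instead of a multi-million-row DELETE. A sketch:

```sql
-- Keep the data but remove it from the parent (e.g. before archiving):
ALTER TABLE orders DETACH PARTITION orders_2025_01;

-- Or drop the partition outright; either way there is no DELETE,
-- no dead tuples, and no vacuum debt:
DROP TABLE orders_2025_01;
```

Queries filtered on created_at also benefit from partition pruning: the planner skips partitions whose range cannot match the predicate.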
Anti-Patterns
| Anti-Pattern | Problem | Fix |
|---|---|---|
| No connection pooler | Exhausting max_connections under load | PgBouncer in transaction mode |
| SELECT * in application | Extra I/O, can’t use covering indexes | Select only needed columns |
| Missing indexes on JOINs | Full table scans on every join | Index all foreign key columns |
| Default autovacuum settings | Table bloat, degraded performance | Tune for your write volume |
| N+1 queries from ORM | 1000 individual queries instead of 1 join | Eager loading, batch queries |
| No EXPLAIN in code review | Slow queries shipped to production | EXPLAIN ANALYZE in PR checklist |
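The N+1 and SELECT * rows above share one fix at the SQL level: fetch only the needed columns for the whole batch in a single round trip. A sketch using the orders table from earlier:

```sql
-- Instead of 1,000 queries of the form:
--   SELECT * FROM orders WHERE customer_id = $1;
-- issue one batched query with only the needed columns:
SELECT customer_id, id, amount, status
FROM orders
WHERE customer_id = ANY($1);   -- $1 = array of ids, e.g. ARRAY[1, 2, 3]
```

Most ORMs generate this pattern when eager loading is enabled; the point of the sketch is what to look for in the query log.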
Checklist
- Connection pooler (PgBouncer) configured
- Memory settings tuned for server RAM
- Indexes on all foreign keys and frequently filtered columns
- random_page_cost set for SSD (1.1)
- Autovacuum tuned for write volume
- Partitioning for tables > 100M rows
- Monitoring: connections, cache hit ratio, replication lag
- Slow query log: log_min_duration_statement = 200ms
- Regular EXPLAIN ANALYZE for critical queries
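For the monitoring item, the cache hit ratio can be read directly from pg_stat_database; a healthy OLTP workload typically stays above 0.99:

```sql
SELECT sum(blks_hit)::float
       / nullif(sum(blks_hit) + sum(blks_read), 0) AS cache_hit_ratio
FROM pg_stat_database;
```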
:::note[Source] This guide is derived from operational intelligence at Garnet Grid Consulting. For PostgreSQL consulting, visit garnetgrid.com. :::