## Recommended: Cloud-Managed PostgreSQL
This is the recommended approach for production deployments. A managed PostgreSQL service handles the hardest parts of database operations for you, letting you focus on running Convoy rather than maintaining infrastructure.
- Automated backups with point-in-time recovery
- High availability with automatic failover
- Security patching applied without downtime
- Built-in connection pooling so you don’t need to deploy PGBouncer separately
- Autovacuum management tuned for the provider’s workload patterns
- Monitoring and alerting out of the box
### Providers with Built-In Connection Pooling
Most managed PostgreSQL providers include a connection pooler that you can enable without deploying additional infrastructure:

| Provider | Connection Pooler | Notes |
|---|---|---|
| AWS RDS / Aurora | RDS Proxy | Separate service; supports transaction-mode pooling |
| GCP Cloud SQL | Built-in connection pooling | Configurable via the Cloud SQL console |
| Azure Database for PostgreSQL | Built-in PgBouncer | Enable in server parameters |
| Aiven | Built-in connection pooling | Available on all plans |
| Supabase | Supavisor | Built-in; transaction mode supported |
| Neon | Built-in connection pooling | Enabled by default |
### Configuring Convoy for a Managed Service
Point Convoy's database configuration at your managed PostgreSQL endpoint. Most managed services require TLS. You can supply the configuration via environment variables or `convoy.json`.
You can also tune Convoy's connection pool settings (`max_open_conn`, `max_idle_conn`, `conn_max_lifetime`). For Kubernetes deployments using Helm, see Kubernetes — managed Postgres.
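As an illustration, the environment-variable form might look like the following. `CONVOY_DB_PORT` appears elsewhere in this guide; the other variable names follow the same `CONVOY_DB_*` pattern but are assumptions, so check the configuration reference for your Convoy version:

```shell
export CONVOY_DB_HOST=mydb.abc123.us-east-1.rds.amazonaws.com  # example managed endpoint
export CONVOY_DB_PORT=5432
export CONVOY_DB_USERNAME=convoy
export CONVOY_DB_PASSWORD=change-me
export CONVOY_DB_DATABASE=convoy
```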
## Self-Hosted PostgreSQL with PGBouncer
Self-hosting PostgreSQL means you are responsible for backups, failover, security patching, disk space management, and autovacuum tuning. If any of these are not in place, we strongly recommend migrating to a managed PostgreSQL service before scaling further. A missed autovacuum can cause transaction ID wraparound — a catastrophic failure mode. Unmonitored WAL files and table bloat can fill disks silently.
### Why PGBouncer?
Convoy's worker and server processes open many concurrent database connections when processing events at scale. Each Convoy process defaults to 10 open connections. A typical deployment with a server and several agent replicas can quickly approach PostgreSQL's `max_connections` limit, leading to errors like:
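A representative message (a standard PostgreSQL connection-limit error, not taken verbatim from this document):

```
FATAL: sorry, too many clients already
```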
### Connection Pooling Mode

PGBouncer must run in transaction pooling mode for Convoy. Session pooling mode (the default) provides no benefit because each client session holds a dedicated PostgreSQL connection for its entire lifetime, exactly as if it connected directly. In transaction mode, connections are returned to the pool after each transaction completes, allowing many clients to share a small number of server connections. To verify, connect to the PGBouncer admin console, run `SHOW POOLS;`, and check the `pool_mode` column.
### Installing PGBouncer
You can run PGBouncer via Docker / Docker Compose or install it natively (apt / yum). The steps below cover the Docker Compose approach:

1. Create a directory for PGBouncer configuration (e.g., `/etc/convoy/pgbouncer`).
2. Create `/etc/convoy/pgbouncer/userlist.txt` with the SCRAM hash for your PostgreSQL user. To get the hash, connect directly to PostgreSQL, run `SELECT usename, passwd FROM pg_shadow WHERE usename = 'convoy';`, and put the full SCRAM string in the file.
3. Create `/etc/convoy/pgbouncer/.env`, replacing `your_postgres_host` and `your_postgres_password` with your actual values.
4. Add PGBouncer to your `docker-compose.yml`, then start PGBouncer.
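A sketch of the `docker-compose.yml` service. The image name and its conventions are assumptions; adapt this to whichever PGBouncer image you choose:

```yaml
# Sketch only: any PGBouncer 1.21+ image works; environment-variable
# conventions differ between images, so consult your image's docs.
services:
  pgbouncer:
    image: bitnami/pgbouncer:latest
    env_file:
      - /etc/convoy/pgbouncer/.env    # holds your_postgres_host / your_postgres_password
    volumes:
      - /etc/convoy/pgbouncer/userlist.txt:/etc/pgbouncer/userlist.txt:ro
    ports:
      - "6432:6432"
    restart: unless-stopped
```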
### PGBouncer Configuration
Whether you use Docker environment variables or a native `pgbouncer.ini`, these are the critical settings. If your PostgreSQL instance listens on a non-default port, adjust the `[databases]` section accordingly (e.g., `port=20001`).
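As a sketch, a minimal `pgbouncer.ini` combining the critical settings. The file paths, listen address, and the `convoy` database name are assumptions:

```ini
[databases]
; Point host/port at your PostgreSQL server
convoy = host=127.0.0.1 port=5432 dbname=convoy

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
auth_user = convoy
auth_query = SELECT usename, passwd FROM pg_shadow WHERE usename = $1
pool_mode = transaction
max_prepared_statements = 100
ignore_startup_parameters = extra_float_digits
default_pool_size = 50
min_pool_size = 10
max_client_conn = 500
max_db_connections = 250
```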
#### PostgreSQL

PostgreSQL's `pg_hba.conf` must allow `scram-sha-256` for TCP connections. PGBouncer connects to PostgreSQL over TCP (127.0.0.1), even when co-located on the same machine. If your `pg_hba.conf` uses `peer` or `pam` for local connections, PGBouncer will fail to authenticate. Ensure `pg_hba.conf` has a `host` rule permitting `scram-sha-256` from 127.0.0.1, and reload PostgreSQL after changing it: `sudo systemctl reload postgresql`

### Settings Reference
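For illustration, a `pg_hba.conf` rule matching this requirement (the `all`/`convoy` columns are assumptions; narrow them to your setup):

```
# TYPE  DATABASE  USER    ADDRESS        METHOD
host    all       convoy  127.0.0.1/32   scram-sha-256
```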
| Setting | Recommended | Description |
|---|---|---|
| `auth_type` | `scram-sha-256` | Required. Must match your PostgreSQL authentication method. Most modern PostgreSQL instances default to SCRAM-SHA-256. |
| `auth_user` | `convoy` | The PostgreSQL user PGBouncer uses to run the `auth_query`. Its SCRAM hash must be in `auth_file`. |
| `auth_query` | `SELECT usename, passwd FROM pg_shadow WHERE usename = $1` | Lets PGBouncer authenticate users dynamically against PostgreSQL. `$1` is replaced with the connecting username. |
| `pool_mode` | `transaction` | Required. Session mode provides no pooling benefit for Convoy. |
| `max_prepared_statements` | `100` | Required. Convoy's database driver (pgx) uses prepared statements. Without this, queries fail in transaction mode. Requires PGBouncer 1.21+. |
| `ignore_startup_parameters` | `extra_float_digits` | Required. The pgx driver sets this parameter on connect; PGBouncer must ignore it. |
| `default_pool_size` | `50` | Server connections per database/user pair. Start with 2-4x your CPU count. |
| `max_client_conn` | `500` | Maximum client connections PGBouncer accepts. Must exceed total connections from all Convoy processes. |
| `max_db_connections` | `250` | Hard cap on connections to PostgreSQL. Must not exceed PostgreSQL's `max_connections`. |
| `min_pool_size` | `10` | Minimum idle connections to keep open, avoiding latency on traffic spikes. |
### Tuning Pool Size

- `default_pool_size`: Start with 2-4x your CPU count (e.g., 50 for 8 vCPU, 75 for 32 vCPU). Monitor `cl_waiting` in `SHOW POOLS;`; if clients are frequently waiting, increase the pool size.
- `max_client_conn`: Set to at least the sum of `max_open_conn` across all Convoy processes, plus headroom. With Convoy's default of 10 connections per process, 1 server + 3 agents = 40 connections minimum. A setting of 300-500 provides comfortable headroom.
- `max_db_connections`: Must not exceed PostgreSQL's `max_connections` minus a buffer for superuser access and monitoring tools.
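The `max_client_conn` arithmetic can be sketched as a tiny helper (illustrative only; `min_client_connections` is not part of Convoy):

```python
def min_client_connections(processes: int, max_open_conn: int = 10) -> int:
    """Lower bound for PGBouncer's max_client_conn: each Convoy process
    can hold up to max_open_conn database connections at once."""
    return processes * max_open_conn

# 1 server + 3 agents, each at Convoy's default of 10 open connections.
floor = min_client_connections(1 + 3)
print(floor)  # 40 -- size max_client_conn well above this (e.g., 300-500)
```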
### Configuring Convoy to Use PGBouncer
Update Convoy's database configuration to connect through PGBouncer instead of directly to PostgreSQL: change the port from `5432` to `6432` and point the host at the machine running PGBouncer. As with a managed service, you can set this via environment variables or `convoy.json`, and tune the connection pool settings (`max_open_conn`, `max_idle_conn`, `conn_max_lifetime`).
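For example, using environment variables (`CONVOY_DB_PORT` appears in the troubleshooting section below; the host variable name is an assumption following the same `CONVOY_DB_*` pattern):

```shell
export CONVOY_DB_HOST=pgbouncer.internal   # assumed name of the host running PGBouncer
export CONVOY_DB_PORT=6432                 # PGBouncer's listen port, not PostgreSQL's 5432
```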
### Verifying the Setup
Test connectivity through PGBouncer, then confirm `pool_mode = transaction`. If the output shows `session` instead of `transaction`, the configuration was not applied correctly; re-check your `pgbouncer.ini` or Docker environment variables and restart PGBouncer.
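A sketch of the verification commands (host, user, and database names are assumptions):

```shell
# Basic connectivity through PGBouncer
psql -h localhost -p 6432 -U convoy -d convoy -c "SELECT 1;"

# Pool mode, via the PGBouncer admin console
# (the user must be listed in PGBouncer's admin_users or stats_users)
psql -h localhost -p 6432 -U pgbouncer -d pgbouncer -c "SHOW POOLS;"
```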
## PostgreSQL Tuning
The settings below apply to self-hosted PostgreSQL. Managed services typically handle these automatically.
### Memory
| Setting | Guideline | Example (8 vCPU / 32 GB) | Example (32 vCPU / 256 GB) |
|---|---|---|---|
| `shared_buffers` | 25% of RAM | 8GB | 64GB |
| `effective_cache_size` | 75% of RAM | 24GB | 192GB |
| `work_mem` | Size per sort/join operation | 32MB | 64MB |
| `maintenance_work_mem` | Memory for VACUUM, CREATE INDEX | 1GB | 2GB |
`effective_cache_size` is not an allocation; it tells the query planner how much memory is available for caching, which affects query plan choices.
### WAL & Checkpoints
Setting `checkpoint_completion_target = 0.9` spreads checkpoint writes over time and avoids I/O spikes. Enable `log_checkpoints = on` to monitor checkpoint frequency.
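A `postgresql.conf` sketch for this section. The `max_wal_size` / `min_wal_size` values are assumptions to tune for your write volume, not recommendations from this guide:

```ini
checkpoint_completion_target = 0.9
log_checkpoints = on
max_wal_size = 4GB   # assumed starting point for a write-heavy workload
min_wal_size = 1GB   # assumed
```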
### Autovacuum (Critical)
Autovacuum must be enabled; never disable it. Convoy's write-heavy workload generates dead tuples that autovacuum must clean up. If autovacuum falls behind, tables bloat, queries slow down, and in the worst case, transaction ID wraparound can cause a full database shutdown. For Convoy's busiest tables, you can set more aggressive per-table thresholds. Monitor autovacuum by enabling `log_autovacuum_min_duration = 250` in `postgresql.conf`.
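For illustration, per-table thresholds might be applied like this. The table name `events` and the threshold values are assumptions; substitute your busiest Convoy tables:

```sql
-- Vacuum after ~1% of rows are dead, instead of the 20% default
ALTER TABLE events SET (
  autovacuum_vacuum_scale_factor = 0.01,
  autovacuum_analyze_scale_factor = 0.01
);
```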
### Connection Limits
`max_connections` must be greater than or equal to PGBouncer's `max_db_connections` plus a buffer for superuser access and monitoring tools. Check your current setting:
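From `psql`:

```sql
SHOW max_connections;
```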
### Query Planner (SSD Storage)
If your VM uses SSD-backed storage, these settings improve query plans:
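Commonly recommended SSD values for `postgresql.conf` (these specific numbers are general guidance, not taken from this document):

```ini
random_page_cost = 1.1          # random reads on SSDs cost nearly the same as sequential
effective_io_concurrency = 200  # allow many concurrent I/O requests on SSD/NVMe
```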
### Parallelism
For VMs with many cores, allow parallel query execution. Scale these values based on your CPU count; a reasonable starting point is to set `max_parallel_workers` to half your vCPU count. After editing `postgresql.conf`, restart PostgreSQL: `sudo systemctl restart postgresql`
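A `postgresql.conf` sketch for a 32 vCPU VM (the specific values are assumptions derived from the half-the-vCPUs guidance above; scale to your hardware):

```ini
max_worker_processes = 32
max_parallel_workers = 16              # half the vCPU count
max_parallel_workers_per_gather = 4
```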
## Monitoring
### PGBouncer Health
Connect to the PGBouncer admin console and run `SHOW POOLS;` regularly:
| Metric | Healthy | Action |
|---|---|---|
| `cl_waiting` | 0 | If consistently > 0, increase `default_pool_size` |
| `maxwait` | < 1s | If high, increase pool size or investigate slow queries |
| `sv_active` | < `default_pool_size` | If at the limit, increase `default_pool_size` |
| `cl_active` | < `max_client_conn` | If at the limit, increase `max_client_conn` |
### PostgreSQL Health
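Two useful starting points, built on PostgreSQL's standard statistics views (the alert thresholds are yours to define):

```sql
-- Connection usage by state
SELECT state, count(*) FROM pg_stat_activity GROUP BY state;

-- Dead tuples per table; high counts mean autovacuum is falling behind
SELECT relname, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```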
## Troubleshooting
### No improvement after adding PGBouncer
- Verify pool mode: Connect to the PGBouncer admin console and run `SHOW POOLS;`. The `pool_mode` column must show `transaction`, not `session`. Session mode provides no pooling benefit.
- Verify Convoy is connecting to PGBouncer: Convoy must connect to port `6432` (PGBouncer), not `5432` (PostgreSQL directly). Check your `CONVOY_DB_PORT` setting.
- Verify prepared statements: Ensure `max_prepared_statements` is set to `100` in PGBouncer's configuration. Without it, queries may fail silently in transaction mode.
### FATAL: no more connections allowed (max_client_conn)
PGBouncer has hit its client connection limit. Increase `max_client_conn` in your PGBouncer configuration, then restart or reload PGBouncer (e.g., `sudo systemctl reload pgbouncer` for a native install, or restart the container).

### FATAL: query_wait_timeout
Clients are waiting too long for a server connection from the pool. Either:

- Increase `default_pool_size` to allow more concurrent server connections
- Check for long-running queries in PostgreSQL that are holding connections
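A standard `pg_stat_activity` query for the second check (the 30-second threshold is an arbitrary example):

```sql
SELECT pid, now() - query_start AS duration, state, query
FROM pg_stat_activity
WHERE state <> 'idle' AND now() - query_start > interval '30 seconds'
ORDER BY duration DESC;
```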
### Prepared statement errors
If you see `prepared statement does not exist` errors, ensure `max_prepared_statements` is set to `100` (or higher) in PGBouncer's configuration. This setting is required for PGBouncer 1.21+ when using transaction pooling mode, because Convoy's database driver (pgx) uses prepared statements by default.
### Connection refused on port 6432
Verify PGBouncer is running, and check that `listen_addr` and `listen_port` are configured correctly and that firewall rules allow traffic on port 6432.
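For example, assuming a native install with a systemd service named `pgbouncer`:

```shell
sudo systemctl status pgbouncer
ss -tlnp | grep 6432   # confirm something is listening on the PGBouncer port
```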
### SASL authentication failed / wrong password type
This usually means a mismatch between PGBouncer's `auth_type` and the password format stored in PostgreSQL or `userlist.txt`:

- PostgreSQL uses SCRAM but PGBouncer is set to `md5`: set `auth_type = scram-sha-256` in PGBouncer. SCRAM and MD5 hashes are not interchangeable.
- Truncated SCRAM hash in `userlist.txt`: the SCRAM string is long and easy to truncate. Verify the full hash with `SELECT usename, passwd FROM pg_shadow WHERE usename = 'convoy';` and ensure it matches exactly.
- `auth_user` cannot authenticate: when using `auth_query`, PGBouncer must first authenticate the `auth_user` itself via `auth_file`. Ensure the `auth_file` contains the correct SCRAM hash for that user.
### PAM authentication failed
This means PostgreSQL's `pg_hba.conf` is using PAM authentication for TCP connections. PGBouncer connects via TCP (127.0.0.1), even when co-located. Update `pg_hba.conf` to use `scram-sha-256` instead of `pam`, then reload PostgreSQL: `sudo systemctl reload postgresql`
### High dead tuple count on Convoy tables
Autovacuum is not keeping up with Convoy's write volume. Increase `autovacuum_vacuum_cost_limit` in `postgresql.conf` or add per-table autovacuum settings (see Autovacuum (Critical) above).