---
checkId: check.db.pool.health
plugin: stellaops.doctor.database
severity: fail
tags: [database, postgres, pool, connections]
---

# Connection Pool Health

## What It Checks

Queries `pg_stat_activity` for the current database and evaluates total connections, active connections, idle connections, waiting connections, and sessions stuck `idle in transaction`. The check warns when more than five sessions are `idle in transaction` or when total usage exceeds `80%` of the server's `max_connections`.

## Why It Matters

Pool pressure turns into request latency, migration timeouts, and job backlog. `idle in transaction` sessions are especially dangerous because they hold locks while doing nothing useful.

## Common Causes

- Application code is not closing transactions
- Connection leaks keep sessions open after requests complete
- `max_connections` is too low for the number of app instances
- Long-running requests or deadlocks block pooled connections

## How to Fix

### Docker Compose

```bash
docker compose -f devops/compose/docker-compose.stella-ops.yml exec postgres \
  psql -U stellaops -d stellaops \
  -c "SELECT pid, state, wait_event, query FROM pg_stat_activity WHERE datname = current_database();"

docker compose -f devops/compose/docker-compose.stella-ops.yml exec postgres \
  psql -U stellaops -d stellaops \
  -c "SELECT pid, query FROM pg_stat_activity WHERE state = 'idle in transaction';"
```

### Bare Metal / systemd

```bash
psql -h <host> -U <user> -d <database> -c "SHOW max_connections;"
```

Review the owning service for transaction scopes that stay open across network calls or retries.

### Kubernetes / Helm

```bash
kubectl exec -n <namespace> <postgres-pod> -- \
  psql -U <user> -d <database> -c "SELECT count(*) FROM pg_stat_activity;"
```

## Verification

```bash
stella doctor --check check.db.pool.health
```

## Related Checks

- `check.db.pool.size` - configuration and runtime pressure need to agree
- `check.db.latency` - latency usually rises before the pool is fully exhausted
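
## Appendix: Reproducing the Thresholds by Hand

As a rough sketch of what this check evaluates, the two thresholds above can be reproduced manually with standard PostgreSQL queries. `pg_stat_activity`, `current_setting('max_connections')`, `pg_terminate_backend`, and `idle_in_transaction_session_timeout` are stock PostgreSQL facilities; the `80%` and five-minute figures below mirror this check's defaults, not any server setting, and terminating sessions or changing the timeout requires superuser (or `pg_signal_backend`) privileges.

```sql
-- Pool usage as a percentage of max_connections (the check warns above 80%).
SELECT count(*)                                          AS total_connections,
       current_setting('max_connections')::int           AS max_connections,
       round(100.0 * count(*)
             / current_setting('max_connections')::int, 1) AS pct_used
FROM pg_stat_activity;

-- Sessions stuck idle in transaction, oldest first.
-- pg_terminate_backend(pid) disconnects one if it must be cleared now.
SELECT pid,
       usename,
       now() - state_change AS idle_for,
       left(query, 60)      AS last_query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND now() - state_change > interval '5 minutes'
ORDER BY state_change;

-- Optional guardrail: have the server kill such sessions automatically.
-- ALTER SYSTEM SET idle_in_transaction_session_timeout = '10min';
-- SELECT pg_reload_conf();
```

Treat `pg_terminate_backend` as a last resort; fixing the transaction scope in the owning application is the durable remedy.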