| checkId | plugin | severity | tags |
|---|---|---|---|
| check.db.migrations.failed | stellaops.doctor.database | fail | |
# Failed Migrations
## What It Checks

Reads the `stella_migration_history` table, when present, and reports rows marked `failed` or `incomplete`.

If the tracking table does not exist, the check reports informationally and assumes the service uses a different migration mechanism.
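The check's two outcomes can be reproduced manually. A sketch, assuming a local `psql` client and the `stellaops` role and database used elsewhere on this page; the column names `migration_id` and `status` are assumptions, not the plugin's exact query:

```shell
# to_regclass returns NULL when the table does not exist,
# which corresponds to the informational outcome.
psql -U stellaops -d stellaops -c \
  "SELECT to_regclass('public.stella_migration_history');"

# If the table exists, list the rows the check would flag.
psql -U stellaops -d stellaops -c \
  "SELECT migration_id, status FROM stella_migration_history
   WHERE status IN ('failed', 'incomplete');"
```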
## Why It Matters

Partially applied migrations leave the schema in an undefined state, which is a common cause of startup failures and runtime 500 errors after upgrades.
## Common Causes
- A migration script failed during deployment
- The database user lacks DDL permissions
- Two processes attempted to apply migrations concurrently
- An interrupted deployment left the migration history half-written
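The permissions cause can be ruled out quickly. A sketch, assuming the service connects as the `stellaops` role; `has_schema_privilege` is a standard PostgreSQL function:

```shell
# Returns t/f depending on whether the migration user can run
# CREATE (DDL) statements in the public schema.
psql -U stellaops -d stellaops -c \
  "SELECT has_schema_privilege('stellaops', 'public', 'CREATE');"
```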
## How to Fix

### Docker Compose

```shell
docker compose -f devops/compose/docker-compose.stella-ops.yml logs --tail 200 doctor-web
docker compose -f devops/compose/docker-compose.stella-ops.yml exec postgres psql -U stellaops -d stellaops -c "SELECT migration_id, status, error_message, applied_at FROM stella_migration_history ORDER BY applied_at DESC LIMIT 10;"
```
Fix the underlying SQL or permission problem, then restart the owning service so startup migrations run again.
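Under Compose, the restart might look like the following; `<service-name>` is a placeholder for whichever service owns the failed migration:

```shell
# Restart the owning service so its startup migrations run again.
docker compose -f devops/compose/docker-compose.stella-ops.yml restart <service-name>
```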
### Bare Metal / systemd

```shell
journalctl -u <service-name> -n 200
# Re-apply pending EF Core migrations (run from the service's project directory).
dotnet ef database update
```
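As with the other platforms, restart the unit after fixing the underlying cause so startup migrations re-run; a sketch using standard systemd tooling:

```shell
sudo systemctl restart <service-name>
# Follow the log to confirm the migration succeeds this time.
journalctl -u <service-name> -f
```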
### Kubernetes / Helm

```shell
kubectl logs deploy/<service-name> -n <namespace> --tail=200
kubectl exec -n <namespace> <postgres-pod> -- psql -U <db-user> -d <db-name> -c "SELECT migration_id, status FROM stella_migration_history;"
```
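After the underlying problem is fixed, a rolling restart re-triggers startup migrations; these are standard `kubectl` commands:

```shell
kubectl rollout restart deploy/<service-name> -n <namespace>
# Wait for the new pods to become ready (fails if the rollout stalls).
kubectl rollout status deploy/<service-name> -n <namespace>
```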
## Verification

```shell
stella doctor --check check.db.migrations.failed
```
## Related Checks

- `check.db.migrations.pending` - pending migrations often follow a failed rollout
- `check.db.schema.version` - schema consistency should be rechecked after cleanup