Refactor code structure for improved readability and maintainability
@@ -13,7 +13,10 @@ These Compose bundles ship the minimum services required to exercise the scanner
| `docker-compose.mirror.yaml` | Managed mirror topology for `*.stella-ops.org` distribution (Concelier + Excititor + CDN gateway). |
| `docker-compose.telemetry.yaml` | Optional OpenTelemetry collector overlay (mutual TLS, OTLP ingest endpoints). |
| `docker-compose.telemetry-storage.yaml` | Prometheus/Tempo/Loki storage overlay with multi-tenant defaults. |
| `docker-compose.gpu.yaml` | Optional GPU overlay enabling NVIDIA devices for Advisory AI web/worker. Apply with `-f docker-compose.<env>.yaml -f docker-compose.gpu.yaml`. |
| `env/*.env.example` | Seed `.env` files that document required secrets and ports per profile. |
| `scripts/backup.sh` | Pauses workers and creates tar.gz of Mongo/MinIO/Redis volumes (deterministic snapshot). |
| `scripts/reset.sh` | Stops the stack and removes Mongo/MinIO/Redis volumes after explicit confirmation. |
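
For example, a typical bring-up combines a seeded env file, a base profile, and one or more overlays from this table. The sketch below assumes the `prod` profile (matching the GPU example further down), a seed named `env/prod.env.example`, and that `scripts/backup.sh` takes no arguments:

```bash
# Seed secrets and ports for the chosen profile (seed filename is an assumption; see env/*.env.example).
cp env/prod.env.example prod.env

# Bring up the prod profile with the optional telemetry overlay layered on top.
docker compose \
  --env-file prod.env \
  -f docker-compose.prod.yaml \
  -f docker-compose.telemetry.yaml \
  up -d

# Take a deterministic snapshot of the Mongo/MinIO/Redis volumes before an upgrade.
./scripts/backup.sh
```
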
## Usage
@@ -101,4 +104,18 @@ The Helm chart mirrors these settings under `services.advisory-ai-web` / `adviso
2. Update image digests in the relevant Compose file(s).
3. Re-run `docker compose config` to confirm the bundle is deterministic.
Keep digests synchronized between Compose, Helm, and the release manifest to preserve reproducibility guarantees. `deploy/tools/validate-profiles.sh` performs a quick audit.
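
A quick determinism check for step 3, plus the audit, might look like the sketch below; the rendered-output paths are illustrative, the working directory is assumed to be the repository root, and `validate-profiles.sh` is assumed to take no arguments:

```bash
# Render the fully merged prod bundle; with pinned digests the output should not
# change between runs or machines.
docker compose \
  --env-file prod.env \
  -f docker-compose.prod.yaml \
  config > /tmp/compose.rendered.yaml

# Compare against a previously archived rendering to spot drift (paths are illustrative).
diff /tmp/compose.rendered.yaml /tmp/compose.rendered.prev.yaml

# Cross-check digests across Compose, Helm, and the release manifest.
deploy/tools/validate-profiles.sh
```
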
### GPU toggle for Advisory AI
GPU is disabled by default. To run inference on NVIDIA GPUs:
```bash
docker compose \
  --env-file prod.env \
  -f docker-compose.prod.yaml \
  -f docker-compose.gpu.yaml \
  up -d
```

The GPU overlay requests one GPU for `advisory-ai-worker` and `advisory-ai-web` and sets `ADVISORY_AI_INFERENCE_GPU=true`. Ensure the host has the NVIDIA container runtime and that the base compose file still sets the correct digests.
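
To sanity-check the toggle end to end, something like the following can help; it assumes the NVIDIA runtime is registered with Docker under the name `nvidia` and that the worker image ships `nvidia-smi`:

```bash
# Confirm the NVIDIA runtime is registered with the Docker daemon.
docker info --format '{{json .Runtimes}}' | grep -q nvidia && echo "nvidia runtime present"

# After `up -d` with the GPU overlay, confirm the worker actually sees a device
# (assumes nvidia-smi is available inside the image).
docker compose \
  --env-file prod.env \
  -f docker-compose.prod.yaml \
  -f docker-compose.gpu.yaml \
  exec advisory-ai-worker nvidia-smi
```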