# AdvisoryAI packaging (AIAI-31-008)

Artifacts delivered for on-prem / air-gapped deployment:

- `ops/advisory-ai/Dockerfile` builds the WebService and Worker images (multi-role via the `PROJECT`/`APP_DLL` build args).
- `ops/advisory-ai/docker-compose.advisoryai.yaml` runs WebService + Worker with a shared data volume and ships the remote inference toggle env vars.
- `ops/advisory-ai/helm/` provides a minimal chart (web + worker) with storage mounts, an optional PVC, and remote inference settings.
## Build images

```bash
# WebService
docker build -f ops/advisory-ai/Dockerfile \
  -t stellaops-advisoryai-web:dev \
  --build-arg PROJECT=src/AdvisoryAI/StellaOps.AdvisoryAI.WebService/StellaOps.AdvisoryAI.WebService.csproj \
  --build-arg APP_DLL=StellaOps.AdvisoryAI.WebService.dll .

# Worker
docker build -f ops/advisory-ai/Dockerfile \
  -t stellaops-advisoryai-worker:dev \
  --build-arg PROJECT=src/AdvisoryAI/StellaOps.AdvisoryAI.Worker/StellaOps.AdvisoryAI.Worker.csproj \
  --build-arg APP_DLL=StellaOps.AdvisoryAI.Worker.dll .
```
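Optionally smoke-test the web image before wiring it into compose or Helm. A minimal sketch, assuming the container listens on port 8080 (the default for recent ASP.NET Core base images; adjust if the Dockerfile configures otherwise):

```bash
docker run -d --name advisoryai-smoke -p 8080:8080 stellaops-advisoryai-web:dev
docker logs advisoryai-smoke    # expect the standard ASP.NET JSON startup logs
docker rm -f advisoryai-smoke
```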
## Local/offline compose

```bash
cd ops/advisory-ai
docker compose -f docker-compose.advisoryai.yaml up -d --build
```
- Set `ADVISORYAI__INFERENCE__MODE=Remote` plus `ADVISORYAI__INFERENCE__REMOTE__BASEADDRESS`/`APIKEY` to offload inference (see the sketch after this list).
- The default mode is `Local` (offline-friendly). Queue, cache, and output data live under `/app/data` (bound to the `advisoryai-data` volume).
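For example, a sketch of switching to remote inference at compose time, assuming the compose file forwards these host variables into the containers (the base address and token are placeholders):

```bash
export ADVISORYAI__INFERENCE__MODE=Remote
export ADVISORYAI__INFERENCE__REMOTE__BASEADDRESS=https://inference.example.internal
export ADVISORYAI__INFERENCE__REMOTE__APIKEY=example-token
docker compose -f docker-compose.advisoryai.yaml up -d
```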
## Helm (cluster)

```bash
helm upgrade --install advisoryai ops/advisory-ai/helm \
  --set image.repository=stellaops-advisoryai-web \
  --set image.tag=dev \
  --set inference.mode=Local
```
- Enable remote inference: `--set inference.mode=Remote --set inference.remote.baseAddress=https://inference.your.domain --set inference.remote.apiKey=<token>`.
- Enable persistence: `--set storage.persistence.enabled=true --set storage.persistence.size=10Gi`, or reuse an existing claim with `--set storage.persistence.existingClaim=<pvc>`.
- Scale workers: `--set worker.replicas=2` (or `--set worker.enabled=false` to run the WebService only); see the combined example below.
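A sketch combining the options above (the base address, token, and size are placeholders):

```bash
helm upgrade --install advisoryai ops/advisory-ai/helm \
  --set image.repository=stellaops-advisoryai-web \
  --set image.tag=dev \
  --set inference.mode=Remote \
  --set inference.remote.baseAddress=https://inference.example.internal \
  --set inference.remote.apiKey=example-token \
  --set storage.persistence.enabled=true \
  --set storage.persistence.size=10Gi \
  --set worker.replicas=2
```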
## Operational notes

- Data paths (`/app/data/plans`, `/app/data/queue`, `/app/data/outputs`) are configurable via environment variables and pre-created at startup.
- Guardrail phrase lists and policy knobs can be mounted under `/app/etc`; point `ADVISORYAI__GUARDRAILS__PHRASESLIST` at the mounted file (see the first sketch after this list).
- Observability follows standard ASP.NET JSON logs; add OTEL exporters via the `OTEL_EXPORTER_OTLP_ENDPOINT` env var when allowed, and keep them disabled in sealed/offline deployments.
- For air-gapped clusters, publish the built images to your own registry and reference them via `--set image.repository=<registry>/stellaops/advisoryai-web` (push sketch below).
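A sketch of wiring a guardrail phrase list into a running container, using plain `docker run`; the host directory and the `phrases.txt` file name are placeholders, and the same volume/env pairing applies in compose or the Helm chart:

```bash
# Mount a read-only config directory at /app/etc and point the guardrail env
# at a file inside it. "guardrails/" and "phrases.txt" are placeholder names.
docker run -d \
  -v "$(pwd)/guardrails:/app/etc:ro" \
  -e ADVISORYAI__GUARDRAILS__PHRASESLIST=/app/etc/phrases.txt \
  stellaops-advisoryai-web:dev
```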
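And a sketch of the air-gap publish step; `registry.example.internal` is a placeholder for your internal registry:

```bash
# Retag the locally built images and push them to a registry reachable from
# the air-gapped cluster.
docker tag stellaops-advisoryai-web:dev registry.example.internal/stellaops/advisoryai-web:dev
docker tag stellaops-advisoryai-worker:dev registry.example.internal/stellaops/advisoryai-worker:dev
docker push registry.example.internal/stellaops/advisoryai-web:dev
docker push registry.example.internal/stellaops/advisoryai-worker:dev
```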