feat(advisory-ai): Add deployment guide, Dockerfile, and Helm chart for on-prem packaging
- Introduced a comprehensive deployment guide for AdvisoryAI, detailing local builds, remote inference toggles, and scaling guidance.
- Created a multi-role Dockerfile for building WebService and Worker images.
- Added a docker-compose file for local and offline deployment.
- Implemented a Helm chart for Kubernetes deployment with persistence and remote inference options.
- Established a new API endpoint `/advisories/summary` for deterministic summaries of observations and linksets.
- Introduced a JSON schema for risk profiles and a validator to ensure compliance with the schema.
- Added unit tests for the risk profile validator covering validation and error handling.
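For orientation, a minimal local bring-up sketch using the bundled compose manifests; the exact compose file name and the published web port are assumptions, not pinned by this commit:

```bash
# Start the web + worker pair from the packaged manifests under ops/advisory-ai/
# (the compose file name below is an assumption).
docker compose -f ops/advisory-ai/docker-compose.yaml up -d

# Reachability check against the new deterministic summary endpoint.
# Host port 8080 is an assumption, and the route may require query
# parameters identifying the observations/linksets to summarize.
curl -s http://localhost:8080/advisories/summary
```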
@@ -27,9 +27,9 @@ Advisory AI is the retrieval-augmented assistant that synthesizes advisory and V
 - Guardrail behaviour, blocked phrases, and operational alerts are detailed in `/docs/security/assistant-guardrails.md`.
 
 ## Deployment & configuration
 
-- **Containers:** `advisory-ai-web` fronts the API/cache while `advisory-ai-worker` drains the queue and executes prompts. Both containers mount a shared RWX volume providing `/var/lib/advisory-ai/{queue,plans,outputs}`.
-- **Remote inference toggle:** Set `ADVISORYAI__AdvisoryAI__Inference__Mode=Remote` to send sanitized prompts to an external inference tier. Provide `ADVISORYAI__AdvisoryAI__Inference__Remote__BaseAddress` (and optional `...ApiKey`) to complete the circuit; failures fall back to the sanitized prompt and surface `inference.fallback_*` metadata.
-- **Helm/Compose:** Bundled manifests wire the SBOM base address, queue/plan/output directories, and inference options via the `AdvisoryAI` configuration section. Helm expects a PVC named `stellaops-advisory-ai-data`. Compose creates named volumes so the worker and web instances share deterministic state.
+- **Containers:** `advisory-ai-web` fronts the API/cache while `advisory-ai-worker` drains the queue and executes prompts. Both containers mount a shared RWX volume providing `/app/data/{queue,plans,outputs}` (defaults; configurable via `ADVISORYAI__STORAGE__*`).
+- **Remote inference toggle:** Set `ADVISORYAI__INFERENCE__MODE=Remote` to send sanitized prompts to an external inference tier. Provide `ADVISORYAI__INFERENCE__REMOTE__BASEADDRESS` (and optional `...__APIKEY`, `...__TIMEOUT`) to complete the circuit; failures fall back to the sanitized prompt and surface `inference.fallback_*` metadata.
+- **Helm/Compose:** Packaged manifests live under `ops/advisory-ai/` and wire SBOM base address, queue/plan/output directories, and inference options. Helm defaults to `emptyDir` with optional PVC; Compose creates named volumes so worker and web instances share deterministic state. See `docs/modules/advisory-ai/deployment.md` for commands.
 
 ## CLI usage
 
 - `stella advise run <summary|conflict|remediation> --advisory-key <id> [--artifact-id id] [--artifact-purl purl] [--policy-version v] [--profile profile] [--section name] [--force-refresh] [--timeout seconds]`
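The remote-inference toggle from the new doc text above, expressed as container environment variables; the base address, API key value, and timeout format are placeholder assumptions:

```bash
# Opt in to the remote inference tier (values below are placeholders).
export ADVISORYAI__INFERENCE__MODE=Remote
export ADVISORYAI__INFERENCE__REMOTE__BASEADDRESS=https://inference.internal.example
export ADVISORYAI__INFERENCE__REMOTE__APIKEY=example-key   # optional
export ADVISORYAI__INFERENCE__REMOTE__TIMEOUT=00:00:30     # optional; TimeSpan-style format assumed

# Per the doc text: on remote failure the service falls back to the
# sanitized prompt and surfaces inference.fallback_* metadata.
```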
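And a hypothetical invocation of the CLI synopsis documented above; the advisory key and artifact purl are made-up example values:

```bash
# Generate a deterministic summary for a single advisory
# (identifiers below are illustrative, not real).
stella advise run summary \
  --advisory-key CVE-2025-12345 \
  --artifact-purl pkg:npm/example@1.2.3 \
  --timeout 120
```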