git.stella-ops.org/docs/modules/advisory-ai/README.md
StellaOps Advisory AI

Advisory AI is the retrieval-augmented assistant that synthesizes advisory and VEX evidence into operator-ready summaries, conflict explanations, and remediation plans with strict provenance.

Responsibilities

  • Generate policy-aware advisory summaries with citations back to Concelier and Excititor evidence.
  • Explain conflicting advisories/VEX statements using weights from VEX Lens and Policy Engine.
  • Propose remediation hints aligned with Offline Kit staging and export bundles.
  • Expose API/UI surfaces with guardrails on model prompts, outputs, and retention.

Key components

  • RAG pipeline drawing from Concelier, Excititor, VEX Lens, Policy Engine, and SBOM Service data.
  • Prompt templates and guard models enforcing provenance and redaction policies.
  • Vercel/offline inference workers with deterministic caching of generated artefacts.
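The deterministic caching mentioned above can be sketched as hashing the inputs that shape a generated artefact into a stable cache key. This is an illustrative sketch only — the field names and the choice of inputs are assumptions; the real pipeline may hash additional material such as retrieved evidence digests.

```python
import hashlib
import json

def cache_key(advisory_key: str, profile: str, policy_version: str,
              prompt_template: str) -> str:
    """Derive a stable cache key from the inputs that shape a generated
    artefact. Field names are illustrative, not the real pipeline's."""
    canonical = json.dumps(
        {
            "advisory_key": advisory_key,
            "profile": profile,
            "policy_version": policy_version,
            "prompt_template": prompt_template,
        },
        sort_keys=True,          # canonical key ordering => deterministic bytes
        separators=(",", ":"),   # no whitespace variance between runs
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because the serialization is canonical, two workers given the same inputs compute the same key and can share cached artefacts on the common volume.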

Integrations & dependencies

  • Authority for tenant-aware access control.
  • Policy Engine for context-specific decisions and explain traces.
  • Console/CLI for interaction surfaces.
  • Export Center/Vuln Explorer for embedding generated briefs.

Operational notes

  • Model cache management and offline bundle packaging per Epic 8 requirements.
  • Usage/latency dashboards monitor prompts and responses: advisory_ai_latency_seconds, guardrail block/validation counters, and citation coverage histograms are wired into the default “Advisory AI” Grafana dashboard.
  • Alert policies fire when advisory_ai_guardrail_blocks_total or advisory_ai_validation_failures_total breach burn-rate thresholds (5 blocks/min or validation failures > 1% of traffic) and when latency p95 exceeds 30s.
  • Redaction policies validated against security/LLM guardrail tests.
  • Guardrail behaviour, blocked phrases, and operational alerts are detailed in /docs/security/assistant-guardrails.md.
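The alert thresholds above could be expressed as Prometheus rules along these lines. This is a hedged sketch: the thresholds and the two counter names come from the notes, but the `advisory_ai_requests_total` denominator, the `_bucket` series, and the rule/group naming are assumptions.

```yaml
groups:
  - name: advisory-ai   # group name is illustrative
    rules:
      - alert: AdvisoryAiGuardrailBlockBurn
        expr: rate(advisory_ai_guardrail_blocks_total[5m]) * 60 > 5   # > 5 blocks/min
        for: 5m
      - alert: AdvisoryAiValidationFailureRatio
        # assumes a request counter exists to form the >1%-of-traffic ratio
        expr: rate(advisory_ai_validation_failures_total[5m])
              / rate(advisory_ai_requests_total[5m]) > 0.01
        for: 10m
      - alert: AdvisoryAiLatencyP95High
        expr: histogram_quantile(0.95, rate(advisory_ai_latency_seconds_bucket[5m])) > 30
        for: 10m
```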

Deployment & configuration

  • Containers: advisory-ai-web fronts the API/cache while advisory-ai-worker drains the queue and executes prompts. Both containers mount a shared RWX volume providing /var/lib/advisory-ai/{queue,plans,outputs}.
  • Remote inference toggle: set ADVISORYAI__AdvisoryAI__Inference__Mode=Remote to send sanitized prompts to an external inference tier. Provide ADVISORYAI__AdvisoryAI__Inference__Remote__BaseAddress (and optional ...ApiKey) to complete the configuration; on remote failure the service falls back to the sanitized prompt and surfaces inference.fallback_* metadata.
  • Helm/Compose: Bundled manifests wire the SBOM base address, queue/plan/output directories, and inference options via the AdvisoryAI configuration section. Helm expects a PVC named stellaops-advisory-ai-data. Compose creates named volumes so the worker and web instances share deterministic state.
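A Compose layout matching the notes above might look like the following sketch. The environment keys and the /var/lib/advisory-ai mount path come from this document; the service names, image tags, and base address are illustrative assumptions, not the bundled manifests.

```yaml
services:
  advisory-ai-web:
    image: stellaops/advisory-ai-web        # illustrative image tag
    environment:
      ADVISORYAI__AdvisoryAI__Inference__Mode: "Remote"
      ADVISORYAI__AdvisoryAI__Inference__Remote__BaseAddress: "https://inference.internal"  # placeholder
    volumes:
      - advisory-ai-data:/var/lib/advisory-ai   # shared queue/plans/outputs state
  advisory-ai-worker:
    image: stellaops/advisory-ai-worker     # illustrative image tag
    volumes:
      - advisory-ai-data:/var/lib/advisory-ai
volumes:
  advisory-ai-data:   # named volume so web and worker share deterministic state
```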

CLI usage

  • stella advise run <summary|conflict|remediation> --advisory-key <id> [--artifact-id id] [--artifact-purl purl] [--policy-version v] [--profile profile] [--section name] [--force-refresh] [--timeout seconds]
    • Requests an advisory plan from the web service, enqueues execution, then polls for the generated output (default wait 120s, single check if --timeout 0).
    • Renders plan metadata (cache key, prompt template, token budget), guardrail state, provenance hashes, signatures, and citations in a deterministic table view.
    • Honors STELLAOPS_ADVISORYAI_URL when set; otherwise the CLI reuses the backend URL and scopes requests via X-StellaOps-Scopes.
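The enqueue-then-poll flow can be sketched as follows, assuming a `fetch` callable standing in for the (hypothetical) output endpoint. The 120 s default wait and the single-check behaviour of `--timeout 0` come from the notes above; the interval and function shape are assumptions.

```python
import time

def poll_for_output(fetch, timeout_seconds: float = 120.0,
                    interval_seconds: float = 2.0):
    """Poll fetch() until it returns a non-None output or the timeout
    elapses. With timeout 0, perform exactly one check, mirroring the
    CLI's --timeout 0 behaviour. fetch is a stand-in for the web
    service's output endpoint."""
    if timeout_seconds == 0:
        return fetch()                       # single check, no waiting
    deadline = time.monotonic() + timeout_seconds
    while True:
        result = fetch()
        if result is not None:
            return result
        if time.monotonic() >= deadline:
            return None                      # caller reports a timeout
        time.sleep(interval_seconds)
```

Returning None on expiry leaves the timeout-versus-success decision to the caller, which matches a CLI that prints a distinct timeout message.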

Epic alignment

  • Epic 8: Advisory AI Assistant.
  • DOCS-AI stories to be tracked in ../../TASKS.md.