docs consolidation and others

This commit is contained in:
master
2026-01-06 19:02:21 +02:00
parent d7bdca6d97
commit 4789027317
849 changed files with 16551 additions and 66770 deletions

View File

@@ -0,0 +1,210 @@
> **Imposed rule:** Work of this type or tasks of this type on this component must also be applied everywhere else it should be applied.
# Advisory AI API Reference (Sprint 110 Preview)
_Updated: 2025-11-03 • Owner: Docs Guild & Advisory AI Guild • Status: In progress_
## 1. Overview
The Advisory AI service exposes deterministic, guardrail-enforced endpoints for generating advisory summaries, conflict explanations, and remediation plans. Each request is backed by the Aggregation-Only Contract (AOC); inputs originate from immutable Conseiller/Excititor evidence and SBOM context, and every output ships with verifiable citations and cache digests.
This document captures the API surface targeted for Sprint 110. The surface is gated behind Authority scopes and designed to operate identically online or offline (local inference profiles).
## 2. Base conventions
| Item | Value |
|------|-------|
| Base path | `/v1/advisory-ai` |
| Media types | `application/json` (request + response) |
| Authentication | OAuth2 access token (JWT, DPoP-bound or mTLS as per tenant policy) |
| Required scopes | See [Authentication & scopes](#3-authentication--scopes) |
| Idempotency | Requests are cached by `(taskType, advisoryKey, policyVersion, profile, artifactId/purl, preferredSections)` unless `forceRefresh` is `true` |
| Determinism | Guardrails reject outputs lacking citations; cache digests allow replay and offline verification |
## 3. Authentication & scopes
Advisory AI calls must include `aoc:verify` plus an Advisory AI scope. Authority enforces tenant binding for all combinations.
| Scope | Purpose | Typical principals |
|-------|---------|--------------------|
| `advisory-ai:view` | Read cached artefacts (`GET /outputs/{{hash}}`) | Console backend, evidence exporters |
| `advisory-ai:operate` | Submit inference jobs (`POST /summaries`, `/conflicts`, `/remediation`) | Platform services, CLI automation |
| `advisory-ai:admin` | Manage profiles & policy (`PATCH /profiles`, future) | Platform operators |
Requests without `aoc:verify` are rejected with `invalid_scope`. Tokens aimed at remote inference profiles must also satisfy tenant consent (`requireTenantConsent` in Authority config).
## 4. Profiles & inference selection
Profiles determine which model backend and guardrail stack execute the request. The `profile` field defaults to `default` (`fips-local`).
| Profile | Description |
|---------|-------------|
| `default` / `fips-local` | Local deterministic model packaged with Offline Kit; FIPS-compliant crypto |
| `gost-local` | Local profile using GOST-approved crypto stack |
| `cloud-openai` | Remote inference via cloud connector (disabled unless tenant consent flag set) |
| Custom | Installations may register additional profiles via Authority `advisory-ai` admin APIs |
## 5. Common request envelope
All task endpoints accept the same JSON payload; `taskType` is implied by the route.
```json
{
"advisoryKey": "csaf:redhat:RHSA-2025:1001",
"artifactId": "registry.stella-ops.internal/runtime/api",
"artifactPurl": "pkg:oci/runtime-api@sha256:d2c3...",
"policyVersion": "2025.10.1",
"profile": "fips-local",
"preferredSections": ["Summary", "Remediation"],
"forceRefresh": false
}
```
Field notes:
- `advisoryKey` **required**. Matches Conseiller advisory identifier or VEX statement key.
- `artifactId` / `artifactPurl` optional but recommended for remediation tasks (enables SBOM context).
- `policyVersion` locks evaluation to a specific Policy Engine digest. Omit for "current".
- `profile` selects inference profile (see §4). Unknown values return `400`.
- `preferredSections` prioritises advisory sections; the orchestrator still enforces AOC.
- `forceRefresh` bypasses cache, regenerating output and resealing DSSE bundle.
## 6. Responses & caching
Successful responses share a common envelope:
```json
{
"taskType": "Summary",
"profile": "fips-local",
"generatedAt": "2025-11-03T18:22:43Z",
"inputDigest": "sha256:6f3b...",
"outputHash": "sha256:1d7e...",
"ttlSeconds": 86400,
"content": {
"format": "markdown",
"body": "### Summary
1. [Vendor statement][1] ..."
},
"citations": [
{
"index": 1,
"kind": "advisory",
"sourceId": "concelier:csaf:redhat:RHSA-2025:1001:paragraph:12",
"uri": "https://access.redhat.com/errata/RHSA-2025:1001"
}
],
"context": {
"planCacheKey": "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local",
"chunks": 42,
"vectorMatches": 12,
"sbom": {
"artifactId": "registry.stella-ops.internal/runtime/api",
"versionTimeline": 8,
"dependencyPaths": 5,
"dependencyNodes": 17
}
}
}
```
- `content.format` is `markdown` for human-readable payloads; machine-readable JSON attachments will use `json`. The CLI and Console render Markdown directly.
- `citations` indexes correspond to bracketed references in the Markdown body.
- `context.planCacheKey` lets operators resubmit the same request or inspect the plan (`GET /v1/advisory-ai/plans/`cacheKey``) optional when enabled.
- Cached copies honour tenant-specific TTLs (default 24h). Exceeding TTL triggers regeneration on next request.
## 7. Endpoints
### 7.1 `POST /v1/advisory-ai/summaries`
Generate or retrieve a cached advisory summary. Requires `advisory-ai:operate`.
- **Request body:** Common envelope (preferred sections default to `Summary`).
- **Response:** Summary output (see §6 example).
- **Errors:**
- `400 advisory.summary.missingAdvisoryKey` empty or malformed `advisoryKey`.
- `404 advisory.summary.advisoryNotFound` Conseiller cannot resolve the advisory or tenant forbidden.
- `409 advisory.summary.contextUnavailable` SBOM context still indexing; retry later.
### 7.2 `POST /v1/advisory-ai/conflicts`
Explain conflicting VEX statements, ranked by trust metadata.
- **Additional payload hints:** Set `preferredSections` to include `Conflicts` or targeted statement IDs.
- **Response extensions:** `content.format` remains Markdown; `context.conflicts` array highlights conflicting statement IDs and trust scores.
- **Errors:** include `404 advisory.conflict.vexNotFound`, `409 advisory.conflict.trustDataPending` (waiting on Excititor linksets).
### 7.3 `POST /v1/advisory-ai/remediation`
Produce remediation plan with fix versions and verification steps.
- **Additional payload hints:** Provide `artifactId` or `artifactPurl` to unlock SBOM timeline + dependency analysis.
- **Response extensions:** `content.format` Markdown plus `context.remediation` with recommended fix versions (`package`, `fixedVersion`, `rationale`).
- **Errors:** `422 advisory.remediation.noFixAvailable` (vendor has not published fix), `409 advisory.remediation.policyHold` (policy forbids automated remediation).
### 7.4 `GET /v1/advisory-ai/outputs/{{outputHash}}`
Fetch cached artefact (same envelope as §6). Requires `advisory-ai:view`.
- **Headers:** Supports `If-None-Match` with the `outputHash` (Etag) for cache validation.
- **Errors:** `404 advisory.output.notFound` if cache expired or tenant lacks access.
### 7.5 `GET /v1/advisory-ai/plans/{{cacheKey}}` (optional)
When plan preview is enabled (feature flag `advisoryAi.planPreview.enabled`), this endpoint returns the orchestration plan using `AdvisoryPipelinePlanResponse` (task metadata, chunk/vector counts). Requires `advisory-ai:operate`.
## 8. Error model
Errors follow a standard problem+JSON envelope:
```json
{
"status": 400,
"code": "advisory.summary.missingAdvisoryKey",
"message": "advisoryKey must be provided",
"traceId": "01HECAJ6RE8T5H4P6Q0XZ7ZD4T",
"retryAfter": 30
}
```
| HTTP | Code prefix | Meaning |
|------|-------------|---------|
| 400 | `advisory.summary.*`, `advisory.remediation.*` | Validation failures or unsupported profile/task combinations |
| 401 | `auth.invalid_token` | Token expired/invalid; ensure DPoP proof matches access token |
| 403 | `auth.insufficient_scope` | Missing `advisory-ai` scope or tenant consent |
| 404 | `advisory.*.notFound` | Advisory/key not available for tenant |
| 409 | `advisory.*.contextUnavailable` | Dependencies (SBOM, VEX, policy) not ready; retry after indicated seconds |
| 422 | `advisory.*.noFixAvailable` | Remediation cannot be produced given current evidence |
| 429 | `rate_limit.exceeded` | Caller breached tenant or profile rate limit; examine `Retry-After` |
| 503 | `advisory.backend.unavailable` | Model backend offline or remote profile disabled |
All errors include `traceId` for cross-service correlation and log search.
## 9. Rate limiting & quotas
Advisory AI honours per-tenant quotas configured under `advisoryAi.rateLimits`:
- Default: 30 summary/conflict requests per minute per tenant & profile.
- Remediation requests default to 10/minute due to heavier SBOM analysis.
- Cached `GET /outputs/{{hash}}` calls share the `advisory-ai:view` bucket (60/minute).
Limits are enforced at the gateway; the API returns `429` with standard `Retry-After` seconds. Operators can adjust limits via Authority configuration bundles and propagate offline using the Offline Kit.
## 10. Observability & audit
- Metrics: `advisory_ai_requests_total``tenant,task,profile``, `advisory_ai_latency_seconds`, `advisory_ai_validation_failures_total`, `advisory_ai_cache_hits_total`.
- Logs: Structured with `traceId`, `tenant`, `task`, `profile`, `outputHash`, `cacheStatus` (`hit`|`miss`|`bypass`). Prompt bodies are **never** logged; guardrail violations emit sanitized snippets only.
- Audit events: `advisory_ai.output.generated`, `advisory_ai.output.accessed`, `advisory_ai.guardrail.blocked` ship to the Authority audit stream with tenant + actor metadata.
## 11. Offline & sovereignty considerations
- Offline installations bundle prompt templates, guardrail configs, and local model weights. Remote profiles (`cloud-openai`) remain disabled unless operators explicitly enable them and record consent per tenant.
- Cached outputs include DSSE attestations when DSSE mode is enabled. Export Center ingests cached artefacts via `GET /outputs/{{hash}}` using `advisory-ai:view`.
- Force-refresh regenerates outputs using the same cache key, allowing auditors to replay evidence during compliance reviews.
## 12. Change log
| Date (UTC) | Change |
|------------|--------|
| 2025-11-03 | Initial sprint-110 preview covering summary/conflict/remediation endpoints, cache retrieval, plan preview, and error/rate limit model. |

View File

@@ -0,0 +1,70 @@
# Advisory AI CLI Usage (DOCS-AIAI-31-005)
_Updated: 2025-11-24 · Owners: Docs Guild · DevEx/CLI Guild · Sprint 0111_
This guide shows how to drive Advisory AI from the StellaOps CLI using the `advise run` verb, with deterministic fixtures published on 2025-11-19 (`CLI-VULN-29-001`, `CLI-VEX-30-001`). It is designed for CI/offline use and mirrors the guardrail/policy contracts captured in `docs/modules/advisory-ai/guides/guardrails-and-evidence.md` and `docs/modules/policy/guides/assistant-parameters.md`.
## Prerequisites
- CLI binary from Sprint 205 (`stella`), logged in with scopes `advisory-ai:operate` + `aoc:verify`.
- Base URL pointed at Advisory AI gateway: `export STELLAOPS_ADVISORYAI_URL=https://advisory-ai.internal` (falls back to main backend base address when unset).
- Evidence fixtures available locally (offline friendly):
- `out/console/guardrails/cli-vuln-29-001/sample-vuln-output.ndjson` (SHA256 `e5aecfba5cee8d412408fb449f12fa4d5bf0a7cb7e5b316b99da3b9019897186`).
- `out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json` (SHA256 `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`).
- `out/console/guardrails/cli-vex-30-001/sample-vex-output.ndjson` (SHA256 `2b11b1e2043c2ec1b0cb832c29577ad1c5cbc3fbd0b379b0ca0dee46c1bc32f6`).
- Policy hash pinned: set `ADVISORYAI__POLICYVERSION=2025.11.19` (or the bundle hash shipped in the Offline Kit).
## Quickstart
```bash
stella advise run summary \
--advisory-key csaf:redhat:RHSA-2025:1001 \
--artifact-id registry.stella-ops.internal/runtime/api \
--policy-version "$ADVISORYAI__POLICYVERSION" \
--profile fips-local \
--timeout 30 \
--json
```
- Use `--timeout 0` for cache-only probes in CI; add `--force-refresh` to bypass cache.
- `--profile cloud-openai` remains disabled unless tenant consent is recorded in Authority; guardrails reject with exit code 12 when disabled.
- Guardrail fixtures (`sample-vuln-output.ndjson`, `sample-vex-output.ndjson`, `sample-sbom-context.json`) live in Offline Kits and feed the backend self-tests; the CLI fetches evidence from backend services automatically.
## Exit codes
| Code | Meaning | Notes |
| --- | --- | --- |
| 0 | Success (hit or miss; output cached or freshly generated) | Includes `outputHash` and citations. |
| 2 | Validation error (missing advisory key, bad profile) | Mirrors HTTP 400.
| 3 | Context unavailable (SBOM/LNM/policy missing) | Mirrors HTTP 409 `advisory.contextUnavailable`.
| 4 | Guardrail block (PII, citation gap, prompt too large) | Mirrors HTTP 422 `advisory.guardrail.blocked`.
| 5 | Timeout waiting for output | Respect `--timeout` in seconds (0 = no wait). |
| 12 | Remote profile disabled | Returned when `cloud-openai` is selected without consent. |
| 7 | Transport/auth failure | Network/TLS/token issues. |
## Scripting patterns
- **Cache-only probes (CI smoke):** `stella advise run summary --advisory-key ... --timeout 0 --json > cache.json` (fails fast if evidence missing).
- **Batch mode:** pipe advisory keys: `cat advisories.txt | xargs -n1 -I{} stella advise run summary --advisory-key {} --timeout 0 --json`.
- **Profile gating:** set `--profile fips-local` for offline; use `--profile cloud-openai` only after Authority consent and when `ADVISORYAI__INFERENCE__MODE=Remote`.
- **Policy pinning:** always pass `--policy-version` (matches Offline Kit bundle hash); outputs include the policy hash in `context.planCacheKey`.
## Sample output (trimmed)
```json
{
"taskType": "Summary",
"profile": "fips-local",
"generatedAt": "2025-11-24T00:00:00Z",
"outputHash": "sha256:cafe...babe",
"citations": [{"index":1,"kind":"advisory","sourceId":"concelier:csaf:redhat:RHSA-2025:1001:paragraph:12"}],
"context": {
"planCacheKey": "adv-summary:csaf:redhat:RHSA-2025:1001:fips-local",
"sbom": {"artifactId":"registry.stella-ops.internal/runtime/api","versionTimeline":8,"dependencyPaths":5}
}
}
```
## Offline kit notes
- Copy the three CLI guardrail artefact bundles and their `hashes.sha256` files into `offline-kit/advisory-ai/fixtures/` and record them in `SHA256SUMS`.
- Set `ADVISORYAI__SBOM__BASEADDRESS` to the SBOM Service endpoint packaged in the kit; leave unset to fall back to `NullSbomContextClient` (Advisory AI will still respond deterministically with context counts set to 0).
- Keep `profiles.catalog.json` and `prompts.manifest` hashes aligned with the guardrail pack referenced in the Offline Kit manifest.
## Troubleshooting
- `contextUnavailable`: ensure SBOM service is reachable or provide `--sbom-context` fixture; verify LNM linkset IDs and hashes.
- `guardrail.blocked`: check blocked phrase list (`docs/modules/policy/guides/assistant-parameters.md`) and payload size; remove PII or reduce SBOM clamps.
- `timeout`: raise `--timeout` or run cache-only mode to avoid long waits in CI.

View File

@@ -0,0 +1,8 @@
bd85eb2ab4528825c17cd0549b547c2d1a6a5e8ee697a6b4615119245665cc02 docs/api/console/samples/advisory-ai-guardrail-banner.json
57d7bf9ab226b561e19b3e23e3c8d6c88a3a1252c1ea471ef03bf7a237de8079 docs/api/console/samples/vex-statement-sse.ndjson
af3459e8cf7179c264d1ac1f82a968e26e273e7e45cd103c8966d0dd261c3029 docs/api/console/samples/vuln-findings-sample.json
336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0 docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json
c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293 docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg
9bc89861ba873c7f470c5a30c97fb2cd089d6af23b085fba2095e88f8d1f8ede docs/assets/advisory-ai/console/evidence-drawer-b1820ad.svg
f6093257134f38033abb88c940d36f7985b48f4f79870d5b6310d70de5a586f9 docs/samples/console/console-vex-30-001.json
921bcb360454e801bb006a3df17f62e1fcfecaaccda471ae66f167147539ad1e docs/samples/console/console-vuln-29-001.json

View File

@@ -0,0 +1,297 @@
# Advisory AI Console Workflows
_Last updated: 2025-12-04_
This guide documents the forthcoming Advisory AI console experience so that console, docs, and QA guilds share a single reference while the new endpoints finish landing.
## 1. Entry points & navigation
- **Dashboard tile**: `Advisory AI` card on the console overview routes to `/console/vuln/advisory-ai` once CONSOLE-VULN-29-001 ships. The tile must include the current model build stamp and data freshness time.
- **Deep links**: Copy-as-ticket payloads link back into the console using `/console/vex/{statementId}` (CONSOLE-VEX-30-001). Provide fallbacks that open the Evidence modal with a toast if the workspace is still loading.
## 2. Evidence surfacing
| Workflow | Required API | Notes |
| --- | --- | --- |
| Findings overview | `GET /console/vuln/findings` | Must include policy verdict badge, VEX justification summary, and last-seen timestamps. |
| Evidence drawer | `GET /console/vex/statements/{id}` | Stream SSE chunk descriptions so long-form provenance renders progressively. |
| Copy as ticket | `POST /console/vuln/tickets` | Returns signed payload + attachment list for JIRA/ServiceNow templates. |
### 2.1 Plan composer vs response panel
- **Plan inspector** (left rail) mirrors the orchestrator output: structured chunks, SBOM summary, dependency counts, and cache key. Surface cache hits with the “Reused plan” badge that reads from `plan.planFromCache`.
- **Prompt preview** must show the sanitized prompt _and_ the raw inference response side-by-side once CONSOLE-VULN-29-001 exposes `/console/vuln/advisory-ai/{cacheKey}`. Always label the sanitized prompt “Guardrail-safe prompt”.
- **Citations**: render as `[n] Source Name` chips that scroll the evidence drawer to the matching chunk. Use the chunk ID from `prompt.citations[*].chunkId` to keep navigation deterministic.
- **Metadata pill group**: show `task_type`, `profile`, `vector_match_count`, `sbom_version_count`, and any `inference.*` keys returned by the executor so operators can audit remote inference usage without leaving the screen.
Deterministic fixture snapshot (command output, replaces inline screenshot):
```bash
python - <<'PY'
import json, pathlib
payload_path = pathlib.Path('docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json')
data = json.loads(payload_path.read_text())
metrics = data.get('metrics', {})
guard = data.get('guardrail', {})
violations = guard.get('violations', [])
print(f"# Advisory AI list view fixture (build {data.get('build')})")
print(f"- workspace: {data.get('workspace')} | generated: {data.get('generatedAtUtc')} | profile: {data.get('profile')} | cacheHit: {str(metrics.get('cacheHit', False)).lower()}")
meta = guard.get('metadata', {})
print(f"- guardrail: state={guard.get('state')} blocked={str(guard.get('blocked', False)).lower()} violations={len(violations)} promptLength={meta.get('promptLength')} blockedPhraseFile={meta.get('blockedPhraseFile')}")
print("\n| severity | policy | summary | reachability | vex | lastSeen | sbom |")
print("| --- | --- | --- | --- | --- | --- | --- |")
for item in data.get('findings', []):
print("| {severity} | {policy} | {summary} | {reach} | {vex} | {last_seen} | {sbom} |".format(
severity=item.get('severity'),
policy=item.get('policyBadge'),
summary=item.get('summary').replace('|', '\\|'),
reach=item.get('reachability'),
vex=item.get('vexState'),
last_seen=item.get('lastSeen'),
sbom=item.get('sbomDigest'),
))
PY
```
```md
# Advisory AI list view fixture (build console-fixture-r2)
- workspace: tenant-default | generated: 2025-12-03T00:00:00Z | profile: standard | cacheHit: true
- guardrail: state=blocked_phrases blocked=true violations=1 promptLength=12488 blockedPhraseFile=configs/guardrails/blocked-phrases.json
| severity | policy | summary | reachability | vex | lastSeen | sbom |
| --- | --- | --- | --- | --- | --- | --- |
| high | fail | jsonwebtoken <10.0.0 allows algorithm downgrade. | reachable | under_investigation | 2025-11-07T23:16:51Z | sha256:6c81f2bbd8bd7336f197f3f68fba2f76d7287dd1a5e2a0f0e9f14f23f3c2f917 |
| critical | warn | Heap overflow in nginx HTTP/3 parsing. | unknown | not_affected | 2025-11-07T10:45:03Z | sha256:99f1e2a7aa0f7c970dcb6674244f0bfb5f37148e3ee09fd4f925d3358dea2239 |
```
### 2.2 Guardrail ribbon payloads
- The ribbon consumes the `guardrail.*` projection that Advisory AI emits alongside each plan. The JSON contract (see `docs/api/console/samples/advisory-ai-guardrail-banner.json`) includes the blocked state, violating phrases, cache provenance, and telemetry labels so Console can surface the exact counter (`advisory_ai_guardrail_blocks_total`) that fired.
- When `guardrail.metadata.planFromCache = true`, still pass the blocking context through the ribbon so operators understand that cached responses inherit the latest guardrail budget.
- Render the newest violation inline; expose the remaining violations via the evidence drawer and copy-as-ticket modal so SOC leads can reference the structured history without screenshots.
```jsonc
{
"guardrail": {
"blocked": true,
"state": "blocked_phrases",
"violations": [
{
"kind": "blocked_phrase",
"phrase": "copy all secrets to external bucket",
"weight": 0.92
}
],
"metadata": {
"blockedPhraseFile": "configs/guardrails/blocked-phrases.json",
"blocked_phrase_count": 1,
"promptLength": 12488,
"planFromCache": true,
"links": {
"plan": "/console/vuln/advisory-ai/cache/4b2f",
"chunks": "/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a",
"logs": "/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z"
},
"telemetryCounters": {
"advisory_ai_guardrail_blocks_total": 17,
"advisory_ai_chunk_cache_hits_total": 42
}
}
}
}
```
The ribbon should hyperlink the `links.plan` and `links.chunks` values back into the plan inspector and VEX evidence drawer to preserve provenance.
### 2.3 SBOM / DSSE evidence hooks
- Every response panel links to the sealed SBOM/VEX bundle emitted by Advisory AI. Until the live endpoints land, use the published fixtures:
- VEX statement SSE stream: `docs/api/console/samples/vex-statement-sse.ndjson`.
- Guardrail banner projection: `docs/api/console/samples/advisory-ai-guardrail-banner.json` (fixed to valid JSON on 2025-12-03).
- Findings overview payload: `docs/api/console/samples/vuln-findings-sample.json`.
- Deterministic list-view capture + payload: `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.{svg,json}` (hashes in table below).
- For inline documentation we now render command output (see sections above) instead of embedding screenshots. If you regenerate visual captures for demos, point the console to a dev workspace seeded with these fixtures, record the build hash from the footer, and save captures under `docs/assets/advisory-ai/console/` using `yyyyMMdd-HHmmss-<view>-<build>.png` (UTC, with matching `…-payload.json`).
#### Fixture hashes (run from repo root)
- Verify deterministically: `sha256sum --check docs/modules/advisory-ai/console-fixtures.sha256`.
| Fixture | sha256 | Notes |
| --- | --- | --- |
| `docs/api/console/samples/advisory-ai-guardrail-banner.json` | `bd85eb2ab4528825c17cd0549b547c2d1a6a5e8ee697a6b4615119245665cc02` | Guardrail ribbon projection. |
| `docs/api/console/samples/vex-statement-sse.ndjson` | `57d7bf9ab226b561e19b3e23e3c8d6c88a3a1252c1ea471ef03bf7a237de8079` | SSE stream sample. |
| `docs/api/console/samples/vuln-findings-sample.json` | `af3459e8cf7179c264d1ac1f82a968e26e273e7e45cd103c8966d0dd261c3029` | Findings overview payload. |
| `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json` | `336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0` | List-view sealed payload. |
| `docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg` | `c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293` | Deterministic list-view capture. |
| `docs/assets/advisory-ai/console/evidence-drawer-b1820ad.svg` | `9bc89861ba873c7f470c5a30c97fb2cd089d6af23b085fba2095e88f8d1f8ede` | Evidence drawer mock (keep until live capture). |
| `docs/samples/console/console-vex-30-001.json` | `f6093257134f38033abb88c940d36f7985b48f4f79870d5b6310d70de5a586f9` | Console VEX search fixture. |
| `docs/samples/console/console-vuln-29-001.json` | `921bcb360454e801bb006a3df17f62e1fcfecaaccda471ae66f167147539ad1e` | Console vuln search fixture. |
## 3. Accessibility & offline requirements
- Console screens must pass WCAG 2.2 AA contrast and provide focus order that matches the keyboard shortcuts planned for Advisory AI (see `docs/modules/advisory-ai/overview.md`).
- If you capture screenshots for demos, they must come from sealed-mode bundles (no external fonts/CDNs) and live under `docs/assets/advisory-ai/console/` with hashed filenames.
- Modal dialogs need `aria-describedby` attributes referencing the explanation text returned by the API; translation strings must live with existing locale packs.
### 3.1 Guardrail & inference status
- Display a **guardrail ribbon** at the top of the response panel with three states:
- `Blocked` (red) when `guardrail.blocked = true` → show blocked phrase count and require the operator to acknowledge before the response JSON is revealed.
- `Warnings` (amber) when `guardrail.violations.Length > 0` but not blocked.
- `Clean` (green) otherwise.
- If the executor falls back to sanitized prompts (`inference.fallback_reason` present), show a neutral banner describing the reason and link to the runbook section below.
- Surface `inference.model_id`, prompt/completion token counts, and latency histogram from `advisory_ai_latency_seconds_bucket` next to the response so ops can correlate user impact with remote/local mode toggles (`ADVISORYAI__Inference__Mode`).
Guardrail ribbon projection (command output, replaces mock screenshot):
```bash
python - <<'PY'
import json, pathlib
p = pathlib.Path('docs/api/console/samples/advisory-ai-guardrail-banner.json')
obj = json.loads(p.read_text())
guard = obj['guardrail']
meta = guard['metadata']
print('# Guardrail ribbon projection (banner sample)')
print(f"- blocked: {guard['blocked']} | state: {guard['state']} | violations: {len(guard['violations'])}")
print(f"- planFromCache: {meta.get('planFromCache')} | blockedPhraseFile: {meta.get('blockedPhraseFile')} | promptLength: {meta.get('promptLength')}")
print('- telemetry counters: ' + ', '.join(f"{k}={v}" for k,v in meta['telemetryCounters'].items()))
print('- links: plan={plan} | chunks={chunks} | logs={logs}'.format(
plan=meta['links'].get('plan'),
chunks=meta['links'].get('chunks'),
logs=meta['links'].get('logs'),
))
print('\nViolations:')
for idx, v in enumerate(guard['violations'], 1):
print(f"{idx}. {v['kind']} · phrase='{v['phrase']}' · weight={v.get('weight')}")
PY
```
```md
# Guardrail ribbon projection (banner sample)
- blocked: True | state: blocked_phrases | violations: 1
- planFromCache: True | blockedPhraseFile: configs/guardrails/blocked-phrases.json | promptLength: 12488
- telemetry counters: advisory_ai_guardrail_blocks_total=17, advisory_ai_chunk_cache_hits_total=42
- links: plan=/console/vuln/advisory-ai/cache/4b2f | chunks=/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a | logs=/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z
Violations:
1. blocked_phrase · phrase='copy all secrets to external bucket' · weight=0.92
```
## 4. Copy-as-ticket guidance
1. Operators select one or more VEX-backed findings.
2. Console renders the sanitized payload (JSON) plus context summary for the receiving system.
3. Users can download the payload or send it via webhook; both flows must log `console.ticket.export` events for audit.
## 5. Offline & air-gapped console behaviour
1. **Volume readiness** confirm the RWX volume (`/var/lib/advisory-ai/{queue,plans,outputs}`) is mounted; the console should poll `/api/v1/advisory-ai/health` and surface “Queue not available” if the worker is offline.
2. **Cached responses** when running air-gapped, highlight that only cached plans/responses are available by showing the `planFromCache` badge plus the `generatedAtUtc` timestamp.
3. **No remote inference** if operators set `ADVISORYAI__Inference__Mode=Local`, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
4. **Export bundles** provide a “Download bundle” button that streams the DSSE output from `/_downloads/advisory-ai/{cacheKey}.json` so operators can carry it into Offline Kit workflows documented in `docs/OFFLINE_KIT.md`. While staging endpoints are pending, reuse the Evidence Bundle v1 sample at `docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz` (hash in `evidence-bundle-v1.tar.gz.sha256`) to validate wiring and any optional visual captures.
## 6. Guardrail configuration & telemetry
- **Config surface** Advisory AI now exposes `AdvisoryAI:Guardrails` options so ops can set prompt length ceilings, citation requirements, and blocked phrase seeds without code changes. Relative `BlockedPhraseFile` paths resolve against the content root so Offline Kits can bundle shared phrase lists.
- **Sample**
```json
{
"AdvisoryAI": {
"Guardrails": {
"MaxPromptLength": 32000,
"RequireCitations": true,
"BlockedPhraseFile": "configs/guardrail-blocked-phrases.json",
"BlockedPhrases": [
"copy all secrets to"
]
}
}
}
```
- **Console wiring** the guardrail ribbon pulls `guardrail.blocked`, `guardrail.violations`, and `guardrail.metadata.blocked_phrase_count` while the observability cards track `advisory_ai_chunk_requests_total`, `advisory_ai_chunk_cache_hits_total`, and `advisory_ai_guardrail_blocks_total` (now emitted even on cache hits). Use these meters to explain throttling or bad actors before granting additional guardrail budgets, and keep `docs/api/console/samples/advisory-ai-guardrail-banner.json` nearby so QA can validate localized payloads without hitting production data.
## 7. Publication state
- [x] Fixture-backed payloads and captures committed (`20251203-0000-list-view-build-r2.svg`, `evidence-drawer-b1820ad.svg`).
- [x] Copy-as-ticket flow documented; payload aligns with existing SOC runbooks.
- [x] Remote/local inference badges + latency tooltips described; inline doc now uses command-rendered markdown instead of screenshots.
- [x] SBOM/VEX bundle example attached (Evidence Bundle v1 sample).
- [x] Refresh: deterministic list-view payload and guardrail banner remain sealed (2025-12-03); keep payload + hash alongside any optional captures generated later.
### Publication readiness checklist (DOCS-AIAI-31-004)
- Inputs available now: console fixtures (`docs/samples/console/console-vuln-29-001.json`, `console-vex-30-001.json`), evidence bundle sample (`docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz`), guardrail ribbon contract.
- Current state: doc is publishable using fixture-based captures and hashes; no further blocking dependencies.
- Optional follow-up: when live SBOM `/v1/sbom/context` evidence is available, regenerate the command-output snippets (and any optional captures), capture the build hash, and replace fixture payloads with live outputs.
> Tracking: DOCS-AIAI-31-004 (Docs Guild, Console Guild)
### Guardrail console fixtures (unchecked-integration)
- Vulnerability search sample: `docs/samples/console/console-vuln-29-001.json` (maps to CONSOLE-VULN-29-001).
- VEX search sample: `docs/samples/console/console-vex-30-001.json` (maps to CONSOLE-VEX-30-001).
- Use these until live endpoints are exposed; replace with real captures when staging is available.
### Fixture bundle regeneration (deterministic)
- Rebuild the fixture capture deterministically from the sealed payload:
```bash
python - <<'PY'
import html, json
from pathlib import Path
root = Path('docs/assets/advisory-ai/console')
payload = json.loads((root/'20251203-0000-list-view-build-r2-payload.json').read_text())
guard = payload['guardrail']; metrics = payload['metrics']; items = payload['findings']
def color_sev(sev):
return {'critical':'#b3261e','high':'#d05c00','medium':'#c38f00','low':'#00695c'}.get(sev.lower(), '#0f172a')
def color_policy(val):
return {'fail':'#b3261e','warn':'#d97706','pass':'#0f5b3a'}.get(val.lower(), '#0f172a')
rows = []
for idx, item in enumerate(items):
y = 210 + idx * 120
rows.append(f"""
<g transform=\"translate(32,{y})\">
<rect width=\"888\" height=\"104\" rx=\"10\" fill=\"#ffffff\" stroke=\"#e2e8f0\" />
<text x=\"20\" y=\"30\" class=\"title\">{html.escape(item['summary'])}</text>
<text x=\"20\" y=\"52\" class=\"mono subtle\">{html.escape(item['package'])} · {html.escape(item['component'])} · {html.escape(item['image'])}</text>
<text x=\"20\" y=\"72\" class=\"mono subtle\">reachability={html.escape(str(item.get('reachability')))} · vex={html.escape(str(item.get('vexState')))} · lastSeen={html.escape(str(item.get('lastSeen')))}</text>
<text x=\"20\" y=\"92\" class=\"mono faint\">sbom={html.escape(str(item.get('sbomDigest')))}</text>
<rect x=\"748\" y=\"14\" width=\"120\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_sev(item['severity'])}\" opacity=\"0.12\" />
<text x=\"758\" y=\"33\" class=\"mono\" fill=\"{color_sev(item['severity'])}\">sev:{html.escape(item['severity'])}</text>
<rect x=\"732\" y=\"50\" width=\"140\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_policy(item.get('policyBadge',''))}\" opacity=\"0.12\" />
<text x=\"742\" y=\"69\" class=\"mono\" fill=\"{color_policy(item.get('policyBadge',''))}\">policy:{html.escape(item.get('policyBadge',''))}</text>
</g>
""")
rows_svg = "\n".join(rows)
banner = '#b3261e' if guard.get('blocked') else '#0f5b3a'
svg = f"""<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"1280\" height=\"720\" viewBox=\"0 0 1280 720\">
<style>
.title {{ font-family: Inter, Arial, sans-serif; font-size: 18px; font-weight: 700; fill: #0f172a; }}
.mono {{ font-family: Menlo, monospace; font-size: 13px; fill: #0f172a; }}
.mono.subtle {{ fill: #475569; }}
.mono.faint {{ fill: #94a3b8; font-size: 12px; }}
</style>
<rect width=\"1280\" height=\"720\" fill=\"#f8fafc\" />
<rect x=\"32\" y=\"32\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.05\" />
<text x=\"48\" y=\"76\" class=\"title\">Advisory AI · Console fixture</text>
<text x=\"48\" y=\"104\" class=\"mono\" fill=\"#475569\">build={html.escape(payload['build'])} · generated={html.escape(payload['generatedAtUtc'])} · workspace={html.escape(payload['workspace'])} · profile={html.escape(payload['profile'])} · cacheHit={str(metrics.get('cacheHit', False)).lower()}</text>
<rect x=\"32\" y=\"120\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#fff1f0\" stroke=\"#f87171\" stroke-width=\"1\" />
<text x=\"48\" y=\"156\" class=\"title\" fill=\"{banner}\">Guardrail: {html.escape(guard.get('state','unknown'))}</text>
<text x=\"48\" y=\"176\" class=\"mono\" fill=\"#0f172a\">{html.escape(guard['metadata'].get('blockedPhraseFile',''))} · violations={len(guard.get('violations',[]))} · promptLength={guard['metadata'].get('promptLength')}</text>
<rect x=\"1080\" y=\"138\" width=\"96\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{banner}\" opacity=\"0.12\" />
<text x=\"1090\" y=\"157\" class=\"mono\" fill=\"{banner}\">blocked</text>
<rect x=\"944\" y=\"210\" width=\"304\" height=\"428\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.04\" />
<text x=\"964\" y=\"244\" class=\"title\">Runtime metrics</text>
<text x=\"964\" y=\"272\" class=\"mono\">p50 latency: {metrics.get('latencyMsP50') or 'n/a'} ms</text>
<text x=\"964\" y=\"292\" class=\"mono\">p95 latency: {metrics.get('latencyMsP95') or 'n/a'} ms</text>
<text x=\"964\" y=\"312\" class=\"mono\">SBOM ctx: {html.escape(payload.get('sbomContextDigest',''))}</text>
<text x=\"964\" y=\"332\" class=\"mono\">Guardrail blocks: {guard['metadata']['telemetryCounters'].get('advisory_ai_guardrail_blocks_total')}</text>
<text x=\"964\" y=\"352\" class=\"mono\">Chunk cache hits: {guard['metadata']['telemetryCounters'].get('advisory_ai_chunk_cache_hits_total')}</text>
{rows_svg}
</svg>"""
(root/'20251203-0000-list-view-build-r2.svg').write_text(svg)
PY
```
- Verify the regenerated outputs match the sealed fixtures before publishing:
```bash
sha256sum docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.{svg,payload.json}
# expected:
# c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293 ...-build-r2.svg
# 336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0 ...-build-r2-payload.json
```
**Reference**: API contracts and sample payloads live in `docs/api/console/workspaces.md` (see `/console/vuln/*` and `/console/vex/*` sections) plus the JSON fixtures under `docs/api/console/samples/`.

View File

@@ -0,0 +1,104 @@
# Advisory AI Evidence Payloads (LNM-Aligned)
_Updated: 2025-11-24 · Owner: Advisory AI Docs Guild · Sprint: 0111 (AIAI-RAG-31-003)_
This document defines how Advisory AI consumes Link-Not-Merge (LNM) observations and linksets for Retrieval-Augmented Generation (RAG). It aligns payloads with the frozen LNM v1 schema (`docs/modules/concelier/link-not-merge-schema.md`, 2025-11-17) and replaces prior draft payloads. CLI/Policy artefacts (`CLI-VULN-29-001`, `CLI-VEX-30-001`, `policyVersion` digests) are referenced but optional at runtime; missing artefacts trigger deterministic `409 advisory.contextUnavailable` responses rather than fallback merging. A deterministic SBOM context fixture lives at `out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json` (SHA256 `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`) and is used in the examples below.
## 1) Input envelope (per task)
```json
{
"advisoryKey": "csaf:redhat:RHSA-2025:1001",
"profile": "fips-local",
"policyVersion": "2025.10.1",
"lnm": {
"observationIds": ["6561e41b3e3f4a6e9d3b91c1", "6561e41b3e3f4a6e9d3b91c2"],
"linksetId": "6561e41b3e3f4a6e9d3b91d0",
"provenanceHash": "sha256:0f7c...9ad3"
},
"sbom": {
"artifactId": "registry.stella-ops.internal/runtime/api",
"purl": "pkg:oci/runtime-api@sha256:d2c3...",
"timelineClamp": 500,
"dependencyPathClamp": 200
}
}
```
Rules:
- `lnm.linksetId` and `lnm.observationIds` are **required**. Missing values → `409 advisory.contextUnavailable`.
- `provenanceHash` must match the hash list embedded in the LNM linkset; Advisory AI refuses linksets whose hashes mismatch.
- SBOM fields optional; if absent, remediation tasks skip SBOM deltas and still return deterministic outputs.
## 2) Canonical chunk mapping
| LNM source | Advisory AI chunk | Transformation |
| --- | --- | --- |
| `advisory_observations._id` | `source_id` | Stored verbatim; used for citations. |
| `advisory_observations.advisoryId` | `advisory_key` | Also populates `content_hash` seed. |
| `advisory_observations.summary` | `text` | Trimmed, Markdown-safe. |
| `advisory_observations.affected[].purl` | `purl` | Lowercased, deduped; no range merging. |
| `advisory_observations.severities[]` | `severity` | Passed through; multiple severities allowed. |
| `advisory_observations.references[]` | `references` | Sorted for determinism. |
| `advisory_observations.relationships[]` | `relationships` | Surface upstream `type/source/target/provenance`; no merge. |
| `advisory_observations.provenance.sourceArtifactSha` | `content_hash` | Drives dedup + cache key. |
| `advisory_linksets.conflicts[]` | `conflicts` | Serialized verbatim for conflict tasks. |
| `advisory_linksets.normalized.purls|versions|ranges|severities` | `normalized` | Used as hints only; never overwrite observation fields. |
Chunk ordering: observations sorted by `(source, advisoryId, provenance.fetchedAt)` as per LNM invariant; chunks are emitted in the same order to keep cache keys stable. SBOM deltas, when present, append after observations but before conflict echoes to keep hashes reproducible with and without SBOM context.
## 3) Output citation rules
- `citations[n].sourceId` points to the LNM `source_id`; `citations[n].uri` must remain the upstream reference URI when present.
- If SBOM deltas are included, they appear as separate citations with `kind: "sbom"` and `sourceId` built from SBOM context digest (`sbom:{artifactId}:{digest}`).
- Conflict outputs must echo `linkset.conflicts[].reason` in the Markdown body with matching citation indexes; guardrails block outputs where a conflict reason lacks a citation.
## 4) Error conditions (aligned to LNM)
| Condition | Code | Notes |
| --- | --- | --- |
| Missing `lnm.linksetId` or `lnm.observationIds` | `409 advisory.contextUnavailable` | Caller should pass LNM IDs; retry once upstream emits them. |
| Hash mismatch between `provenanceHash` and linkset | `409 advisory.contextHashMismatch` | Indicates tampering or stale payload; retry after refreshing linkset. |
| Observation count exceeds clamp (defaults: 200 obs, 600 chunks) | `413 advisory.contextTooLarge` | Caller may request narrower `preferredSections` or reduce obs set. |
| Conflicts array empty for conflict task | `422 advisory.conflict.noConflicts` | Signals upstream data gap; reported to Concelier. |
## 5) Sample normalized RAG bundle
```json
{
"taskType": "Summary",
"advisoryKey": "csaf:redhat:RHSA-2025:1001",
"lnmBundle": {
"linksetId": "6561e41b3e3f4a6e9d3b91d0",
"provenanceHash": "sha256:0f7c...9ad3",
"chunks": [
{
"source_id": "concelier:ghsa:GHSA-xxxx:obs:6561e41b3e3f4a6e9d3b91c1",
"content_hash": "sha256:1234...",
"advisory_key": "csaf:redhat:RHSA-2025:1001",
"purl": "pkg:maven/org.example/foo@1.2.3",
"severity": [{"system":"cvssv3","score":7.8,"vector":"AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H"}],
"references": ["https://access.redhat.com/errata/RHSA-2025:1001"],
"relationships": [{"type":"affects","source":"nvd","target":"cpe:/o:redhat:enterprise_linux:9"}]
}
],
"conflicts": [
{"field":"affected.versions","reason":"vendor_range_differs","values":["<1.2.0","<=1.2.3"]}
]
},
"sbomSummary": {
"artifactId": "registry.stella-ops.internal/runtime/api",
"versionTimeline": 8,
"dependencyPaths": 5
}
}
```
Operators can store this bundle alongside plan cache entries; the `lnmBundle.provenanceHash` proves the evidence set matches the frozen Concelier linkset.
## 6) Operator validation steps
- Verify LNM collections at schema v1 (2025-11-17 freeze) before enabling Advisory AI tasks.
- Ensure `lnm.provenanceHash` matches linkset `observationHashes` before calling Advisory AI.
- Keep clamps deterministic: observations ≤200, chunks ≤600, timeline entries ≤500, dependency paths ≤200 (defaults; override only if documented).
- When running offline, include LNM linkset exports in the Offline Kit to preserve citation replay.

View File

@@ -0,0 +1,76 @@
# Advisory AI Guardrails & Evidence Intake
_Updated: 2025-12-09 | Owner: Advisory AI Docs Guild | Status: Ready to publish (Sprint 0111 / AIAI-DOCS-31-001)_
This note captures the guardrail behaviors and evidence intake boundaries required by Sprint 0111 tasks (`AIAI-DOCS-31-001`, `AIAI-RAG-31-003`). It binds Advisory AI guardrails to upstream evidence sources and clarifies how Link-Not-Merge (LNM) documents flow into Retrieval-Augmented Generation (RAG) payloads.
## 1) Evidence sources and contracts
**Upstream readiness gates (now satisfied)**
- CLI guardrail artefacts (2025-11-19) are sealed at `out/console/guardrails/cli-vuln-29-001/` and `out/console/guardrails/cli-vex-30-001/`; hashes live in `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md`.
- Policy pin: set `policyVersion=2025.11.19` per `docs/modules/policy/guides/assistant-parameters.md` before enabling non-default profiles.
- SBOM context service is live: the 2025-12-08 smoke against `/sbom/context` produced `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` with evidence in `evidence-locker/sbom-context/2025-12-08-response.json` and offline mirror `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/`.
- DEVOPS-AIAI-31-001 landed: deterministic CI harness at `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh` emits binlog/TRX/hashes for Advisory AI.
**Evidence feeds**
- Advisory observations (LNM) - consume immutable `advisory_observations` and `advisory_linksets` produced per `docs/modules/concelier/link-not-merge-schema.md` (frozen v1, 2025-11-17).
- VEX statements - Excititor + VEX Lens linksets with trust weights; treated as structured chunks with `source_id` and `confidence`.
- SBOM context - `SBOM-AIAI-31-001` contract: timelines and dependency paths retrieved via `ISbomContextRetriever` (`AddSbomContextHttpClient`), default clamps 500 timeline entries / 200 paths.
- Policy explain traces - Policy Engine digests referenced by `policyVersion`; cache keys include policy hash to keep outputs replayable.
- Runtime posture (optional) - Zastava signals (`exposure`, `admissionStatus`) when provided by Link-Not-Merge-enabled tenants; optional chunks tagged `runtime`.
All evidence items must carry `content_hash` + `source_id`; Advisory AI never mutates or merges upstream facts (Aggregation-Only Contract).
## 2) Guardrail stages
1. **Pre-flight sanitization**
- Redact secrets (AWS-style keys, PEM blobs, generic tokens).
- Strip prompt-injection phrases; enforce max input payload 16kB (configurable, default).
- Reject requests missing `advisoryKey` or linkset-backed evidence (LNM guard).
2. **Prompt assembly**
- Deterministic section order: advisory excerpts -> VEX statements -> SBOM deltas -> policy traces -> runtime hints.
- Vector previews capped at 600 chars + ellipsis; section budgets fixed per profile (`default`, `fips-local`, `gost-local`, `cloud-openai`) in `profiles.catalog.json` and hashed into DSSE provenance.
3. **LLM invocation (local/remote)**
- Profiles selected via `profile` field; remote profiles require Authority tenant consent plus `advisory-ai:operate` and `aoc:verify`.
4. **Validation & citation enforcement**
- Every emitted fact must map to an input chunk (`source_id` + `content_hash`); citations serialized as `[n]` in Markdown.
- Block outputs lacking citations, exceeding section budgets, or including unredacted PII.
5. **Output sealing**
- Store `outputHash`, `inputDigest`, `provenanceHash`; wrap in DSSE when configured.
- Cache TTL defaults to 24h; regenerate only when inputs change or `forceRefresh=true`.
Metrics: `advisory_ai_guardrail_blocks_total`, `advisory_ai_outputs_stored_total`, `advisory_ai_citation_coverage_ratio`. Logs carry `output_hash`, `profile`, and block reason; no secrets or raw prompt bodies are logged.
## 3) RAG payload mapping to LNM (summary)
| LNM field | RAG chunk field | Notes |
| --- | --- | --- |
| `observation._id` | `source_id` | Used for citations and conflict surfacing. |
| `observation.advisoryId` | `advisory_key` | Keyed alongside task type in cache. |
| `observation.affected[].purl` | `purl` | Included for remediation + SBOM joins. |
| `observation.severities[]` | `severity` | Passed through unmerged; multiple severities allowed. |
| `linkset.conflicts[]` | `conflicts` | Rendered verbatim for conflict tasks; no inference merges. |
| `provenance.sourceArtifactSha` | `content_hash` | Drives determinism and replay. |
See `docs/modules/advisory-ai/guides/evidence-payloads.md` for full JSON examples and alignment rules.
## 4) Compliance with upstream artefacts and verification
- References: `CONSOLE-VULN-29-001`, `CONSOLE-VEX-30-001`, `CLI-VULN-29-001`, `CLI-VEX-30-001`, `EXCITITOR-CONSOLE-23-001`, `DEVOPS-AIAI-31-001`, `SBOM-AIAI-31-001`.
- CLI fixtures: expected hashes `421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18` (sample SBOM context) and `e5aecfba5cee8d412408fb449f12fa4d5bf0a7cb7e5b316b99da3b9019897186` / `2b11b1e2043c2ec1b0cb832c29577ad1c5cbc3fbd0b379b0ca0dee46c1bc32f6` (sample vuln/vex outputs). Verify with `sha256sum --check docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md`.
- SBOM context: fixture hash `sha256:421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18`; live SbomService smoke (2025-12-08) hash `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` stored in `evidence-locker/sbom-context/2025-12-08-response.json` and mirrored under `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/`.
- CI harness: `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh` emits `ops/devops/artifacts/advisoryai-ci/<UTC>/build.binlog`, `tests/advisoryai.trx`, and `summary.json` with SHA256s; include the latest run when shipping Offline Kits.
- Policy compatibility: guardrails must remain compatible with `docs/modules/policy/guides/assistant-parameters.md`; configuration knobs documented there are authoritative for env vars and defaults.
- Packaging tasks (AIAI-PACKAGING-31-002) must include this guardrail summary in DSSE metadata to keep Offline Kit parity.
## 5) Operator checklist
- LNM feed enabled and Concelier schemas at v1 (2025-11-17).
- SBOM retriever configured or `NullSbomContextClient` left as safe default; verify latest context hash (`sha256:0c705259f...d600d`) or fixture hash (`sha256:421af53f9...9d18`) before enabling remediation tasks.
- Policy hash pinned via `policyVersion` when reproducibility is required.
- CLI guardrail artefact hashes verified against `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md` and mirrored into Offline Kits.
- CI harness run captured from `ops/devops/advisoryai-ci-runner/run-advisoryai-ci.sh`; store `summary.json` alongside doc promotion.
- Remote profiles only after Authority consent and profile allowlist are set.
- Cache directories shared between web + worker hosts for DSSE sealing.

View File

@@ -0,0 +1,66 @@
# Advisory AI Packaging & SBOM Bundle (AIAI-PACKAGING-31-002)
_Updated: 2025-11-22 · Owner: Advisory AI Release · Status: Draft_
Defines the artefacts and provenance required to ship Advisory AI in Sprint 0111, covering offline kits and on-prem deployments.
## 1) Bundle contents
| Artefact | Purpose | Provenance |
| --- | --- | --- |
| `advisory-ai-web` image | API surface + plan cache | SBOM: `SBOM-AIAI-31-001:web`; DSSE attestation signed by Release key |
| `advisory-ai-worker` image | Queue + inference executor | SBOM: `SBOM-AIAI-31-001:worker`; DSSE attestation |
| Prompt + guardrail pack | Deterministic prompts, redaction lists, validation rules | DSSE sealed; hash recorded in `prompts.manifest` |
| Profile catalog | `default`, `fips-local`, `gost-local`, `cloud-openai` (disabled) | Versioned JSON, hashed; tenant consent flags captured |
| Policy bundle | `policyVersion` digest for baseline evaluation; Authority importable | DSSE + provenance to Policy Engine digests |
| LNM evidence export (optional) | Concelier `advisory_linksets` + `advisory_observations` for air-gap replay | Hash list aligned to `provenanceHash` in RAG bundles |
| SBOM context client config | Example `AddSbomContextHttpClient` settings (`BaseAddress`, `Endpoint`, `ApiKey`) | Signed `sbom-context.example.json` |
## 2) Directory layout (Offline Kit)
```
/offline-kit/advisory-ai/
images/
advisory-ai-web.tar.zst
advisory-ai-worker.tar.zst
sboms/
SBOM-AIAI-31-001-web.json
SBOM-AIAI-31-001-worker.json
provenance/
advisory-ai-web.intoto.jsonl
advisory-ai-worker.intoto.jsonl
prompts.manifest.dsse
profiles.catalog.json
policy-bundle.intoto.jsonl
config/
advisoryai.appsettings.example.json
sbom-context.example.json
evidence/
lnm-linksets.ndjson # optional; aligns to linkset hashes in RAG bundles
lnm-observations.ndjson # optional; immutable raw docs
```
- All files hashed into `SHA256SUMS` with DSSE signature (`SHA256SUMS.dsse`).
- Profiles catalog and prompt pack hashes must be propagated into `AdvisoryAI:Provenance` settings for runtime verification.
## 3) SBOM & provenance rules
- SBOMs must follow SPDX 3.0.1; embed image digest (`sha256:<...>`) and build args.
- Attestations use DSSE + SPDX predicate; signer key matches Release guild key referenced in `DEVOPS-AIAI-31-001`.
- For air-gapped installs, operators verify: `slsa-verifier verify-attestation --source=stellaops/advisory-ai-web --bundle advisory-ai-web.intoto.jsonl --digest <image-digest>`.
## 4) Deployment checklist
- [ ] Import `advisory-ai-web` and `advisory-ai-worker` images to registry.
- [ ] Apply `profiles.catalog.json`; ensure remote profiles disabled unless Authority consent granted.
- [ ] Load prompt pack and set `AdvisoryAI:Prompts:ManifestHash` to `prompts.manifest`.
- [ ] Configure SBOM client (or keep `NullSbomContextClient` default).
- [ ] If shipping LNM evidence, seed `advisory_linksets` and `advisory_observations` collections before enabling inference.
- [ ] Record hashes in deployment log; surface in Authority audit via `advisory_ai.output.generated` events.
## 5) Update obligations
- Any change to prompts, guardrails, or profiles → bump manifest hash and regenerate DSSE.
- SBOM updates follow the same `SBOM-AIAI-31-001` idempotent contract; replace files, update `SHA256SUMS`, resign.
- Link all changes into the sprint Execution Log and Decisions & Risks sections.
- CLI/Policy artefacts must be present before enabling `cloud-openai` or `default` profiles for tenants; if missing, keep profiles disabled and record the reason in `Decisions & Risks`.

View File

@@ -0,0 +1,61 @@
# SBOM Context Hand-off for Advisory AI (SBOM-AIAI-31-003)
_Updated: 2025-11-24 · Owners: Advisory AI Guild · SBOM Service Guild · Sprint 0111_
Defines the contract and smoke test for passing SBOM context from SBOM Service to Advisory AI `/v1/sbom/context` consumers. Aligns with `SBOM-AIAI-31-001` (paths/timelines) and the CLI fixtures published on 2025-11-19.
## Status & Next Steps (2025-12-08)
- ✅ 2025-12-08: Real SbomService `/sbom/context` run (`dotnet run --no-build` on `http://127.0.0.1:5090`) using `sample-sbom-context.json` scope. Response hash `sha256:0c705259fdf984bf300baba0abf484fc3bbae977cf8a0a2d1877481f552d600d` captured with timeline + dependency paths.
- Evidence: `evidence-locker/sbom-context/2025-12-05-smoke.ndjson` (2025-12-08 entry) and raw payload `evidence-locker/sbom-context/2025-12-08-response.json`.
- Offline kit mirror: `offline-kit/advisory-ai/fixtures/sbom-context/2025-12-08/` (CLI guardrail fixtures, new `sbom-context-response.json`, and `SHA256SUMS` manifest).
- 2025-12-05 run (fixture-backed stub) remains archived in the same NDJSON/logs for traceability.
## Contract
- **Endpoint** (SBOM Service): `/sbom/context`
- **Request** (minimal):
```json
{
"artifactId": "registry.stella-ops.internal/runtime/api",
"purl": "pkg:oci/runtime-api@sha256:d2c3...",
"timelineClamp": 500,
"dependencyPathClamp": 200
}
```
- **Response** (summarised):
```json
{
"schema": "stellaops.sbom.context/1.0",
"generated": "2025-11-19T00:00:00Z",
"packages": [
{"name":"openssl","version":"1.1.1w","purl":"pkg:deb/openssl@1.1.1w"},
{"name":"zlib","version":"1.2.11","purl":"pkg:deb/zlib@1.2.11"}
],
"timeline": 8,
"dependencyPaths": 5,
"hash": "sha256:421af53f9eeba6903098d292fbd56f98be62ea6130b5161859889bf11d699d18"
}
```
- **Determinism**: clamp values fixed unless overridden; `generated` timestamp frozen per fixture when offline.
- **Headers**: `X-StellaOps-Tenant` required; `X-StellaOps-ApiKey` optional for bootstrap.
## Smoke test (tenants/offline)
1. Start SBOM Service with fixture data loaded (or use `sample-sbom-context.json`).
2. Run: `curl -s -H "X-StellaOps-Tenant: demo" -H "Content-Type: application/json" \
-d @out/console/guardrails/cli-vuln-29-001/sample-sbom-context.json \
http://localhost:8080/sbom/context | jq .hash` (expect `sha256:421a...9d18`).
3. Configure Advisory AI:
- `AdvisoryAI:SBOM:BaseAddress=http://localhost:8080`
- `AdvisoryAI:SBOM:ApiKey=<key-if-required>`
4. Call Advisory AI cache-only: `stella advise run remediation --advisory-key csaf:redhat:RHSA-2025:1001 --artifact-id registry.stella-ops.internal/runtime/api --timeout 0 --json`.
- Expect exit 0 and `sbomSummary.dependencyPaths=5` in response.
5. Record the hash and endpoint in ops log; mirror fixture + hashes into Offline Kit under `offline-kit/advisory-ai/fixtures/sbom-context/`.
## Failure modes
- `409 advisory.contextHashMismatch` — occurs when the returned `hash` differs from the LNM linkset `provenanceHash`; refresh context or re-export.
- `403` — tenant/api key mismatch; check `X-StellaOps-Tenant` and API key.
- `429` — clamp exceeded; reduce `timelineClamp`/`dependencyPathClamp` or narrow `artifactId`.
## References
- `docs/modules/sbom-service/guides/remediation-heuristics.md` (blast-radius scoring).
- `docs/modules/advisory-ai/guides/guardrails-and-evidence.md` (evidence contract).
- `docs/modules/cli/artefacts/guardrails-artefacts-2025-11-19.md` (hashes for fixtures).