Files
git.stella-ops.org/docs/advisory-ai/console.md
StellaOps Bot 600f3a7a3c
Some checks failed
AOC Guard CI / aoc-guard (push) Has been cancelled
AOC Guard CI / aoc-verify (push) Has been cancelled
Concelier Attestation Tests / attestation-tests (push) Has been cancelled
Docs CI / lint-and-preview (push) Has been cancelled
Console CI / console-ci (push) Has been cancelled
Export Center CI / export-ci (push) Has been cancelled
feat(graph): introduce graph.inspect.v1 contract and schema for SBOM relationships
- Added graph.inspect.v1 documentation outlining payload structure and determinism rules.
- Created JSON schema for graph.inspect.v1 to enforce payload validation.
- Defined mapping rules for graph relationships, advisories, and VEX statements.

feat(notifications): establish remediation blueprint for gaps NR1-NR10

- Documented requirements, evidence, and tests for Notifier runtime.
- Specified deliverables and next steps for addressing identified gaps.

docs(notifications): organize operations and schemas documentation

- Created README files for operations, schemas, and security notes to clarify deliverables and policies.

feat(advisory): implement PostgreSQL caching for Link-Not-Merge linksets

- Created database schema for advisory linkset cache.
- Developed repository for managing advisory linkset cache operations.
- Added tests to ensure correct functionality of the AdvisoryLinksetCacheRepository.
2025-12-04 09:36:59 +02:00

21 KiB
Raw Blame History

Advisory AI Console Workflows

Last updated: 2025-12-04

This guide documents the forthcoming Advisory AI console experience so that console, docs, and QA guilds share a single reference while the new endpoints finish landing.

1. Entry points & navigation

  • Dashboard tile: Advisory AI card on the console overview routes to /console/vuln/advisory-ai once CONSOLE-VULN-29-001 ships. The tile must include the current model build stamp and data freshness time.
  • Deep links: Copy-as-ticket payloads link back into the console using /console/vex/{statementId} (CONSOLE-VEX-30-001). Provide fallbacks that open the Evidence modal with a toast if the workspace is still loading.

2. Evidence surfacing

Workflow Required API Notes
Findings overview GET /console/vuln/findings Must include policy verdict badge, VEX justification summary, and last-seen timestamps.
Evidence drawer GET /console/vex/statements/{id} Stream SSE chunk descriptions so long-form provenance renders progressively.
Copy as ticket POST /console/vuln/tickets Returns signed payload + attachment list for JIRA/ServiceNow templates.

2.1 Plan composer vs response panel

  • Plan inspector (left rail) mirrors the orchestrator output: structured chunks, SBOM summary, dependency counts, and cache key. Surface cache hits with the “Reused plan” badge that reads from plan.planFromCache.
  • Prompt preview must show the sanitized prompt and the raw inference response side-by-side once CONSOLE-VULN-29-001 exposes /console/vuln/advisory-ai/{cacheKey}. Always label the sanitized prompt “Guardrail-safe prompt”.
  • Citations: render as [n] Source Name chips that scroll the evidence drawer to the matching chunk. Use the chunk ID from prompt.citations[*].chunkId to keep navigation deterministic.
  • Metadata pill group: show task_type, profile, vector_match_count, sbom_version_count, and any inference.* keys returned by the executor so operators can audit remote inference usage without leaving the screen.

Deterministic fixture snapshot (command output, replaces inline screenshot):

python - <<'PY'
import json, pathlib
payload_path = pathlib.Path('docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json')
data = json.loads(payload_path.read_text())
metrics = data.get('metrics', {})
guard = data.get('guardrail', {})
violations = guard.get('violations', [])
print(f"# Advisory AI list view fixture (build {data.get('build')})")
print(f"- workspace: {data.get('workspace')} | generated: {data.get('generatedAtUtc')} | profile: {data.get('profile')} | cacheHit: {str(metrics.get('cacheHit', False)).lower()}")
meta = guard.get('metadata', {})
print(f"- guardrail: state={guard.get('state')} blocked={str(guard.get('blocked', False)).lower()} violations={len(violations)} promptLength={meta.get('promptLength')} blockedPhraseFile={meta.get('blockedPhraseFile')}")
print("\n| severity | policy | summary | reachability | vex | lastSeen | sbom |")
print("| --- | --- | --- | --- | --- | --- | --- |")
for item in data.get('findings', []):
    print("| {severity} | {policy} | {summary} | {reach} | {vex} | {last_seen} | {sbom} |".format(
        severity=item.get('severity'),
        policy=item.get('policyBadge'),
        summary=item.get('summary').replace('|', '\\|'),
        reach=item.get('reachability'),
        vex=item.get('vexState'),
        last_seen=item.get('lastSeen'),
        sbom=item.get('sbomDigest'),
    ))
PY
# Advisory AI list view fixture (build console-fixture-r2)
- workspace: tenant-default | generated: 2025-12-03T00:00:00Z | profile: standard | cacheHit: true
- guardrail: state=blocked_phrases blocked=true violations=1 promptLength=12488 blockedPhraseFile=configs/guardrails/blocked-phrases.json

| severity | policy | summary | reachability | vex | lastSeen | sbom |
| --- | --- | --- | --- | --- | --- | --- |
| high | fail | jsonwebtoken <10.0.0 allows algorithm downgrade. | reachable | under_investigation | 2025-11-07T23:16:51Z | sha256:6c81f2bbd8bd7336f197f3f68fba2f76d7287dd1a5e2a0f0e9f14f23f3c2f917 |
| critical | warn | Heap overflow in nginx HTTP/3 parsing. | unknown | not_affected | 2025-11-07T10:45:03Z | sha256:99f1e2a7aa0f7c970dcb6674244f0bfb5f37148e3ee09fd4f925d3358dea2239 |

2.2 Guardrail ribbon payloads

  • The ribbon consumes the guardrail.* projection that Advisory AI emits alongside each plan. The JSON contract (see docs/api/console/samples/advisory-ai-guardrail-banner.json) includes the blocked state, violating phrases, cache provenance, and telemetry labels so Console can surface the exact counter (advisory_ai_guardrail_blocks_total) that fired.
  • When guardrail.metadata.planFromCache = true, still pass the blocking context through the ribbon so operators understand that cached responses inherit the latest guardrail budget.
  • Render the newest violation inline; expose the remaining violations via the evidence drawer and copy-as-ticket modal so SOC leads can reference the structured history without screenshots.
    {
      "guardrail": {
        "blocked": true,
        "state": "blocked_phrases",
        "violations": [
          {
            "kind": "blocked_phrase",
            "phrase": "copy all secrets to external bucket",
            "weight": 0.92
          }
        ],
        "metadata": {
          "blockedPhraseFile": "configs/guardrails/blocked-phrases.json",
          "blocked_phrase_count": 1,
          "promptLength": 12488,
          "planFromCache": true,
          "links": {
            "plan": "/console/vuln/advisory-ai/cache/4b2f",
            "chunks": "/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a",
            "logs": "/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z"
          },
          "telemetryCounters": {
            "advisory_ai_guardrail_blocks_total": 17,
            "advisory_ai_chunk_cache_hits_total": 42
          }
        }
      }
    }
    
    The ribbon should hyperlink the links.plan and links.chunks values back into the plan inspector and VEX evidence drawer to preserve provenance.

2.3 SBOM / DSSE evidence hooks

  • Every response panel links to the sealed SBOM/VEX bundle emitted by Advisory AI. Until the live endpoints land, use the published fixtures:
    • VEX statement SSE stream: docs/api/console/samples/vex-statement-sse.ndjson.
    • Guardrail banner projection: docs/api/console/samples/advisory-ai-guardrail-banner.json (fixed to valid JSON on 2025-12-03).
    • Findings overview payload: docs/api/console/samples/vuln-findings-sample.json.
    • Deterministic list-view capture + payload: docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.{svg,json} (hashes in table below).
  • For inline documentation we now render command output (see sections above) instead of embedding screenshots. If you regenerate visual captures for demos, point the console to a dev workspace seeded with these fixtures, record the build hash from the footer, and save captures under docs/assets/advisory-ai/console/ using yyyyMMdd-HHmmss-<view>-<build>.png (UTC, with matching …-payload.json).

Fixture hashes (run from repo root)

  • Verify deterministically: sha256sum --check docs/advisory-ai/console-fixtures.sha256.
Fixture sha256 Notes
docs/api/console/samples/advisory-ai-guardrail-banner.json bd85eb2ab4528825c17cd0549b547c2d1a6a5e8ee697a6b4615119245665cc02 Guardrail ribbon projection.
docs/api/console/samples/vex-statement-sse.ndjson 57d7bf9ab226b561e19b3e23e3c8d6c88a3a1252c1ea471ef03bf7a237de8079 SSE stream sample.
docs/api/console/samples/vuln-findings-sample.json af3459e8cf7179c264d1ac1f82a968e26e273e7e45cd103c8966d0dd261c3029 Findings overview payload.
docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2-payload.json 336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0 List-view sealed payload.
docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.svg c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293 Deterministic list-view capture.
docs/assets/advisory-ai/console/evidence-drawer-b1820ad.svg 9bc89861ba873c7f470c5a30c97fb2cd089d6af23b085fba2095e88f8d1f8ede Evidence drawer mock (keep until live capture).
docs/samples/console/console-vex-30-001.json f6093257134f38033abb88c940d36f7985b48f4f79870d5b6310d70de5a586f9 Console VEX search fixture.
docs/samples/console/console-vuln-29-001.json 921bcb360454e801bb006a3df17f62e1fcfecaaccda471ae66f167147539ad1e Console vuln search fixture.

3. Accessibility & offline requirements

  • Console screens must pass WCAG 2.2 AA contrast and provide focus order that matches the keyboard shortcuts planned for Advisory AI (see docs/advisory-ai/overview.md).
  • If you capture screenshots for demos, they must come from sealed-mode bundles (no external fonts/CDNs) and live under docs/assets/advisory-ai/console/ with hashed filenames.
  • Modal dialogs need aria-describedby attributes referencing the explanation text returned by the API; translation strings must live with existing locale packs.

3.1 Guardrail & inference status

  • Display a guardrail ribbon at the top of the response panel with three states:
    • Blocked (red) when guardrail.blocked = true → show blocked phrase count and require the operator to acknowledge before the response JSON is revealed.
    • Warnings (amber) when guardrail.violations.Length > 0 but not blocked.
    • Clean (green) otherwise.
  • If the executor falls back to sanitized prompts (inference.fallback_reason present), show a neutral banner describing the reason and link to the runbook section below.
  • Surface inference.model_id, prompt/completion token counts, and latency histogram from advisory_ai_latency_seconds_bucket next to the response so ops can correlate user impact with remote/local mode toggles (ADVISORYAI__Inference__Mode).

Guardrail ribbon projection (command output, replaces mock screenshot):

python - <<'PY'
import json, pathlib
p = pathlib.Path('docs/api/console/samples/advisory-ai-guardrail-banner.json')
obj = json.loads(p.read_text())
guard = obj['guardrail']
meta = guard['metadata']
print('# Guardrail ribbon projection (banner sample)')
print(f"- blocked: {guard['blocked']} | state: {guard['state']} | violations: {len(guard['violations'])}")
print(f"- planFromCache: {meta.get('planFromCache')} | blockedPhraseFile: {meta.get('blockedPhraseFile')} | promptLength: {meta.get('promptLength')}")
print('- telemetry counters: ' + ', '.join(f"{k}={v}" for k,v in meta['telemetryCounters'].items()))
print('- links: plan={plan} | chunks={chunks} | logs={logs}'.format(
    plan=meta['links'].get('plan'),
    chunks=meta['links'].get('chunks'),
    logs=meta['links'].get('logs'),
))
print('\nViolations:')
for idx, v in enumerate(guard['violations'], 1):
    print(f"{idx}. {v['kind']} · phrase='{v['phrase']}' · weight={v.get('weight')}")
PY
# Guardrail ribbon projection (banner sample)
- blocked: True | state: blocked_phrases | violations: 1
- planFromCache: True | blockedPhraseFile: configs/guardrails/blocked-phrases.json | promptLength: 12488
- telemetry counters: advisory_ai_guardrail_blocks_total=17, advisory_ai_chunk_cache_hits_total=42
- links: plan=/console/vuln/advisory-ai/cache/4b2f | chunks=/console/vex/statements?vexId=vex:tenant-default:jwt-auth:5d1a | logs=/console/audit/advisory-ai/runs/2025-12-01T00:00:00Z

Violations:
1. blocked_phrase · phrase='copy all secrets to external bucket' · weight=0.92

4. Copy-as-ticket guidance

  1. Operators select one or more VEX-backed findings.
  2. Console renders the sanitized payload (JSON) plus context summary for the receiving system.
  3. Users can download the payload or send it via webhook; both flows must log console.ticket.export events for audit.

5. Offline & air-gapped console behaviour

  1. Volume readiness confirm the RWX volume (/var/lib/advisory-ai/{queue,plans,outputs}) is mounted; the console should poll /api/v1/advisory-ai/health and surface “Queue not available” if the worker is offline.
  2. Cached responses when running air-gapped, highlight that only cached plans/responses are available by showing the planFromCache badge plus the generatedAtUtc timestamp.
  3. No remote inference if operators set ADVISORYAI__Inference__Mode=Local, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
  4. Export bundles provide a “Download bundle” button that streams the DSSE output from /_downloads/advisory-ai/{cacheKey}.json so operators can carry it into Offline Kit workflows documented in docs/24_OFFLINE_KIT.md. While staging endpoints are pending, reuse the Evidence Bundle v1 sample at docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz (hash in evidence-bundle-v1.tar.gz.sha256) to validate wiring and any optional visual captures.

6. Guardrail configuration & telemetry

  • Config surface Advisory AI now exposes AdvisoryAI:Guardrails options so ops can set prompt length ceilings, citation requirements, and blocked phrase seeds without code changes. Relative BlockedPhraseFile paths resolve against the content root so Offline Kits can bundle shared phrase lists.

  • Sample

    {
      "AdvisoryAI": {
        "Guardrails": {
          "MaxPromptLength": 32000,
          "RequireCitations": true,
          "BlockedPhraseFile": "configs/guardrail-blocked-phrases.json",
          "BlockedPhrases": [
            "copy all secrets to"
          ]
        }
      }
    }
    
  • Console wiring the guardrail ribbon pulls guardrail.blocked, guardrail.violations, and guardrail.metadata.blocked_phrase_count while the observability cards track advisory_ai_chunk_requests_total, advisory_ai_chunk_cache_hits_total, and advisory_ai_guardrail_blocks_total (now emitted even on cache hits). Use these meters to explain throttling or bad actors before granting additional guardrail budgets, and keep docs/api/console/samples/advisory-ai-guardrail-banner.json nearby so QA can validate localized payloads without hitting production data.

7. Publication state

  • Fixture-backed payloads and captures committed (20251203-0000-list-view-build-r2.svg, evidence-drawer-b1820ad.svg).
  • Copy-as-ticket flow documented; payload aligns with existing SOC runbooks.
  • Remote/local inference badges + latency tooltips described; inline doc now uses command-rendered markdown instead of screenshots.
  • SBOM/VEX bundle example attached (Evidence Bundle v1 sample).
  • Refresh: deterministic list-view payload and guardrail banner remain sealed (2025-12-03); keep payload + hash alongside any optional captures generated later.

Publication readiness checklist (DOCS-AIAI-31-004)

  • Inputs available now: console fixtures (docs/samples/console/console-vuln-29-001.json, console-vex-30-001.json), evidence bundle sample (docs/samples/evidence-bundle/evidence-bundle-v1.tar.gz), guardrail ribbon contract.
  • Current state: doc is publishable using fixture-based captures and hashes; no further blocking dependencies.
  • Optional follow-up: when live SBOM /v1/sbom/context evidence is available, regenerate the command-output snippets (and any optional captures), capture the build hash, and replace fixture payloads with live outputs.

Tracking: DOCS-AIAI-31-004 (Docs Guild, Console Guild)

Guardrail console fixtures (unchecked-integration)

  • Vulnerability search sample: docs/samples/console/console-vuln-29-001.json (maps to CONSOLE-VULN-29-001).
  • VEX search sample: docs/samples/console/console-vex-30-001.json (maps to CONSOLE-VEX-30-001).
  • Use these until live endpoints are exposed; replace with real captures when staging is available.

Fixture bundle regeneration (deterministic)

  • Rebuild the fixture capture deterministically from the sealed payload:
python - <<'PY'
import html, json
from pathlib import Path
root = Path('docs/assets/advisory-ai/console')
payload = json.loads((root/'20251203-0000-list-view-build-r2-payload.json').read_text())
guard = payload['guardrail']; metrics = payload['metrics']; items = payload['findings']

def color_sev(sev):
    return {'critical':'#b3261e','high':'#d05c00','medium':'#c38f00','low':'#00695c'}.get(sev.lower(), '#0f172a')
def color_policy(val):
    return {'fail':'#b3261e','warn':'#d97706','pass':'#0f5b3a'}.get(val.lower(), '#0f172a')

rows = []
for idx, item in enumerate(items):
    y = 210 + idx * 120
    rows.append(f"""
<g transform=\"translate(32,{y})\">
  <rect width=\"888\" height=\"104\" rx=\"10\" fill=\"#ffffff\" stroke=\"#e2e8f0\" />
  <text x=\"20\" y=\"30\" class=\"title\">{html.escape(item['summary'])}</text>
  <text x=\"20\" y=\"52\" class=\"mono subtle\">{html.escape(item['package'])} · {html.escape(item['component'])} · {html.escape(item['image'])}</text>
  <text x=\"20\" y=\"72\" class=\"mono subtle\">reachability={html.escape(str(item.get('reachability')))} · vex={html.escape(str(item.get('vexState')))} · lastSeen={html.escape(str(item.get('lastSeen')))}</text>
  <text x=\"20\" y=\"92\" class=\"mono faint\">sbom={html.escape(str(item.get('sbomDigest')))}</text>
  <rect x=\"748\" y=\"14\" width=\"120\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_sev(item['severity'])}\" opacity=\"0.12\" />
  <text x=\"758\" y=\"33\" class=\"mono\" fill=\"{color_sev(item['severity'])}\">sev:{html.escape(item['severity'])}</text>
  <rect x=\"732\" y=\"50\" width=\"140\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{color_policy(item.get('policyBadge',''))}\" opacity=\"0.12\" />
  <text x=\"742\" y=\"69\" class=\"mono\" fill=\"{color_policy(item.get('policyBadge',''))}\">policy:{html.escape(item.get('policyBadge',''))}</text>
</g>
""")

rows_svg = "\n".join(rows)
banner = '#b3261e' if guard.get('blocked') else '#0f5b3a'
svg = f"""<svg xmlns=\"http://www.w3.org/2000/svg\" width=\"1280\" height=\"720\" viewBox=\"0 0 1280 720\">
<style>
  .title {{ font-family: Inter, Arial, sans-serif; font-size: 18px; font-weight: 700; fill: #0f172a; }}
  .mono {{ font-family: Menlo, monospace; font-size: 13px; fill: #0f172a; }}
  .mono.subtle {{ fill: #475569; }}
  .mono.faint {{ fill: #94a3b8; font-size: 12px; }}
</style>
<rect width=\"1280\" height=\"720\" fill=\"#f8fafc\" />
<rect x=\"32\" y=\"32\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.05\" />
<text x=\"48\" y=\"76\" class=\"title\">Advisory AI · Console fixture</text>
<text x=\"48\" y=\"104\" class=\"mono\" fill=\"#475569\">build={html.escape(payload['build'])} · generated={html.escape(payload['generatedAtUtc'])} · workspace={html.escape(payload['workspace'])} · profile={html.escape(payload['profile'])} · cacheHit={str(metrics.get('cacheHit', False)).lower()}</text>
<rect x=\"32\" y=\"120\" width=\"1216\" height=\"72\" rx=\"12\" fill=\"#fff1f0\" stroke=\"#f87171\" stroke-width=\"1\" />
<text x=\"48\" y=\"156\" class=\"title\" fill=\"{banner}\">Guardrail: {html.escape(guard.get('state','unknown'))}</text>
<text x=\"48\" y=\"176\" class=\"mono\" fill=\"#0f172a\">{html.escape(guard['metadata'].get('blockedPhraseFile',''))} · violations={len(guard.get('violations',[]))} · promptLength={guard['metadata'].get('promptLength')}</text>
<rect x=\"1080\" y=\"138\" width=\"96\" height=\"28\" rx=\"6\" ry=\"6\" fill=\"{banner}\" opacity=\"0.12\" />
<text x=\"1090\" y=\"157\" class=\"mono\" fill=\"{banner}\">blocked</text>
<rect x=\"944\" y=\"210\" width=\"304\" height=\"428\" rx=\"12\" fill=\"#0f172a\" opacity=\"0.04\" />
<text x=\"964\" y=\"244\" class=\"title\">Runtime metrics</text>
<text x=\"964\" y=\"272\" class=\"mono\">p50 latency: {metrics.get('latencyMsP50') or 'n/a'} ms</text>
<text x=\"964\" y=\"292\" class=\"mono\">p95 latency: {metrics.get('latencyMsP95') or 'n/a'} ms</text>
<text x=\"964\" y=\"312\" class=\"mono\">SBOM ctx: {html.escape(payload.get('sbomContextDigest',''))}</text>
<text x=\"964\" y=\"332\" class=\"mono\">Guardrail blocks: {guard['metadata']['telemetryCounters'].get('advisory_ai_guardrail_blocks_total')}</text>
<text x=\"964\" y=\"352\" class=\"mono\">Chunk cache hits: {guard['metadata']['telemetryCounters'].get('advisory_ai_chunk_cache_hits_total')}</text>
{rows_svg}
</svg>"""

(root/'20251203-0000-list-view-build-r2.svg').write_text(svg)
PY
  • Verify the regenerated outputs match the sealed fixtures before publishing:
sha256sum docs/assets/advisory-ai/console/20251203-0000-list-view-build-r2.{svg,payload.json}
# expected:
# c55217e8526700c2d303677a66351a706007381219adab99773d4728cc61f293  ...-build-r2.svg
# 336c55d72abea77bf4557f1e3dcaa4ab8366d79008670d87020f900dcfc833c0  ...-build-r2-payload.json

Reference: API contracts and sample payloads live in docs/api/console/workspaces.md (see /console/vuln/* and /console/vex/* sections) plus the JSON fixtures under docs/api/console/samples/.