Advisory AI Console Workflows

Last updated: 2025-11-12

This guide documents the forthcoming Advisory AI console experience so that console, docs, and QA guilds share a single reference while the new endpoints finish landing.

1. Entry points & navigation

  • Dashboard tile: Advisory AI card on the console overview routes to /console/vuln/advisory-ai once CONSOLE-VULN-29-001 ships. The tile must include the current model build stamp and data freshness time.
  • Deep links: Copy-as-ticket payloads link back into the console using /console/vex/{statementId} (CONSOLE-VEX-30-001). Provide fallbacks that open the Evidence modal with a toast if the workspace is still loading.
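The deep-link fallback above can be sketched as a small routing helper. This is a minimal sketch, not the console API: the function name, return shape, and toast copy are all assumptions.

```typescript
// Hypothetical helper: resolve a copy-as-ticket deep link, falling back to the
// Evidence modal with a toast when the workspace has not finished loading.
type DeepLinkTarget =
  | { kind: "route"; path: string }
  | { kind: "modal"; modal: "evidence"; toast: string };

function resolveVexDeepLink(statementId: string, workspaceReady: boolean): DeepLinkTarget {
  if (workspaceReady) {
    // Normal path: route straight to the VEX statement view.
    return { kind: "route", path: `/console/vex/${encodeURIComponent(statementId)}` };
  }
  // Fallback path: open the Evidence modal and notify the operator via toast.
  return {
    kind: "modal",
    modal: "evidence",
    toast: "Workspace still loading; showing cached evidence.",
  };
}
```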

2. Evidence surfacing

| Workflow | Required API | Notes |
| --- | --- | --- |
| Findings overview | GET /console/vuln/findings | Must include policy verdict badge, VEX justification summary, and last-seen timestamps. |
| Evidence drawer | GET /console/vex/statements/{id} | Stream SSE chunk descriptions so long-form provenance renders progressively. |
| Copy as ticket | POST /console/vuln/tickets | Returns signed payload + attachment list for JIRA/ServiceNow templates. |
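Progressive rendering of the evidence drawer can be sketched with an incremental NDJSON parser. This assumes the stream matches the newline-delimited shape of the vex-statement-sse.ndjson fixture; the renderer hook in the usage comment is hypothetical.

```typescript
// Parse complete newline-delimited JSON chunks from a growing buffer,
// returning any incomplete trailing line so it can be re-fed on the next read.
function parseNdjsonChunks(buffer: string): { chunks: unknown[]; rest: string } {
  const lines = buffer.split("\n");
  const rest = lines.pop() ?? ""; // keep the possibly incomplete trailing line
  const chunks = lines.filter((l) => l.trim().length > 0).map((l) => JSON.parse(l));
  return { chunks, rest };
}

// Illustrative usage with fetch + ReadableStream in the browser:
// const reader = (await fetch("/console/vex/statements/123")).body!.getReader();
// const dec = new TextDecoder();
// let buf = "";
// for (;;) {
//   const { value, done } = await reader.read();
//   if (done) break;
//   buf += dec.decode(value, { stream: true });
//   const parsed = parseNdjsonChunks(buf);
//   buf = parsed.rest;
//   parsed.chunks.forEach(renderProvenanceChunk); // renderProvenanceChunk is hypothetical
// }
```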

2.1 Plan composer vs response panel

  • Plan inspector (left rail) mirrors the orchestrator output: structured chunks, SBOM summary, dependency counts, and cache key. Surface cache hits with the “Reused plan” badge that reads from plan.planFromCache.
  • Prompt preview must show the sanitized prompt and the raw inference response side-by-side once CONSOLE-VULN-29-001 exposes /console/vuln/advisory-ai/{cacheKey}. Always label the sanitized prompt “Guardrail-safe prompt”.
  • Citations: render as [n] Source Name chips that scroll the evidence drawer to the matching chunk. Use the chunk ID from prompt.citations[*].chunkId to keep navigation deterministic.
  • Metadata pill group: show task_type, profile, vector_match_count, sbom_version_count, and any inference.* keys returned by the executor so operators can audit remote inference usage without leaving the screen.

List view mock: mock capture generated from the sealed data model to illustrate required widgets until live screenshots ship.

2.2 Guardrail ribbon payloads

  • The ribbon consumes the guardrail.* projection that Advisory AI emits alongside each plan. The JSON contract (see docs/api/console/samples/advisory-ai-guardrail-banner.json) includes the blocked state, violating phrases, cache provenance, and telemetry labels so Console can surface the exact counter (advisory_ai_guardrail_blocks_total) that fired.
  • When guardrail.metadata.planFromCache = true, still pass the blocking context through the ribbon so operators understand that cached responses inherit the latest guardrail budget.
  • Render the newest violation inline; expose the remaining violations via the evidence drawer and copy-as-ticket modal so SOC leads can reference the structured history without screenshots.
    {
      "guardrail": {
        "blocked": true,
        "state": "blocked_phrases",
        "violations": [
          {
            "kind": "blocked_phrase",
            "phrase": "copy all secrets to"
          }
        ],
        "metadata": {
          "blockedPhraseFile": "configs/guardrails/blocked-phrases.json",
          "promptLength": 12488,
          "planFromCache": true
        }
      }
    }
    
  • The ribbon should hyperlink the links.plan and links.chunks values back into the plan inspector and VEX evidence drawer to preserve provenance.

2.3 SBOM / DSSE evidence hooks

  • Every response panel links to the sealed SBOM/VEX bundle emitted by Advisory AI. Until the live endpoints land, use the published fixtures:
    • VEX statement SSE stream: docs/api/console/samples/vex-statement-sse.ndjson
    • Guardrail banner projection: docs/api/console/samples/advisory-ai-guardrail-banner.json
    • Findings overview payload: docs/api/console/samples/vuln-findings-sample.json
  • When capturing screenshots, point the console to a dev workspace seeded with the above fixtures and record the build hash displayed in the footer to keep captures reproducible.
  • Store captures under docs/assets/advisory-ai/console/ using the scheme yyyyMMdd-HHmmss-<view>-<build>.png (UTC clock) so regeneration is deterministic. Keep the original JSON alongside each screenshot by saving the response as …-payload.json in the same folder.
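The deterministic naming scheme above can be sketched as a small helper that formats a UTC timestamp into yyyyMMdd-HHmmss-<view>-<build>.png. The helper itself is illustrative, not part of any tooling.

```typescript
// Build a deterministic capture filename from a UTC clock reading,
// matching the yyyyMMdd-HHmmss-<view>-<build>.png scheme.
function captureFilename(when: Date, view: string, build: string): string {
  const p = (n: number) => String(n).padStart(2, "0");
  const stamp =
    `${when.getUTCFullYear()}${p(when.getUTCMonth() + 1)}${p(when.getUTCDate())}` +
    `-${p(when.getUTCHours())}${p(when.getUTCMinutes())}${p(when.getUTCSeconds())}`;
  return `${stamp}-${view}-${build}.png`;
}
```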

3. Accessibility & offline requirements

  • Console screens must pass WCAG 2.2 AA contrast and provide focus order that matches the keyboard shortcuts planned for Advisory AI (see docs/advisory-ai/overview.md).
  • All screenshots captured for this doc must come from sealed-mode bundles (no external fonts/CDNs). Store them under docs/assets/advisory-ai/console/ with hashed filenames.
  • Modal dialogs need aria-describedby attributes referencing the explanation text returned by the API; translation strings must live with existing locale packs.

3.1 Guardrail & inference status

  • Display a guardrail ribbon at the top of the response panel with three states:
    • Blocked (red) when guardrail.blocked = true → show blocked phrase count and require the operator to acknowledge before the response JSON is revealed.
    • Warnings (amber) when guardrail.violations contains entries but guardrail.blocked = false.
    • Clean (green) otherwise.
  • If the executor falls back to sanitized prompts (inference.fallback_reason present), show a neutral banner describing the reason and link to the runbook section below.
  • Surface inference.model_id, prompt/completion token counts, and latency histogram from advisory_ai_latency_seconds_bucket next to the response so ops can correlate user impact with remote/local mode toggles (ADVISORYAI__Inference__Mode).
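The three-state mapping above can be sketched as a pure function. The guardrail payload shape follows the JSON sample in section 2.2; the "warning"/"clean" state names are illustrative.

```typescript
// Derive the ribbon state from the guardrail projection.
interface GuardrailPayload {
  blocked: boolean;
  violations: { kind: string; phrase?: string }[];
}

type RibbonState = "blocked" | "warning" | "clean";

function ribbonState(g: GuardrailPayload): RibbonState {
  if (g.blocked) return "blocked"; // red: operator must acknowledge before the response JSON is revealed
  if (g.violations.length > 0) return "warning"; // amber: violations present but not blocking
  return "clean"; // green
}
```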

Evidence drawer mock: mock capture showing plan inspector vs response panel; replace with a live console screenshot once CONSOLE-VULN-29-001 lands.

4. Copy-as-ticket guidance

  1. Operators select one or more VEX-backed findings.
  2. Console renders the sanitized payload (JSON) plus context summary for the receiving system.
  3. Users can download the payload or send it via webhook; both flows must log console.ticket.export events for audit.
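The audit requirement in step 3 can be sketched as a shared event builder used by both the download and webhook flows. The event field names are assumptions; only the console.ticket.export event name comes from the text above.

```typescript
// Build the audit event emitted by both export flows. Hypothetical shape.
interface TicketExportEvent {
  event: "console.ticket.export";
  channel: "download" | "webhook";
  findingIds: string[];
  atUtc: string; // ISO 8601 UTC timestamp
}

function ticketExportEvent(
  findingIds: string[],
  channel: "download" | "webhook",
  atUtc: string,
): TicketExportEvent {
  return { event: "console.ticket.export", channel, findingIds, atUtc };
}
```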

5. Offline & air-gapped console behaviour

  1. Volume readiness: confirm the RWX volume (/var/lib/advisory-ai/{queue,plans,outputs}) is mounted; the console should poll /api/v1/advisory-ai/health and surface “Queue not available” if the worker is offline.
  2. Cached responses: when running air-gapped, highlight that only cached plans/responses are available by showing the planFromCache badge plus the generatedAtUtc timestamp.
  3. No remote inference: if operators set ADVISORYAI__Inference__Mode=Local, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
  4. Export bundles: provide a “Download bundle” button that streams the DSSE output from /_downloads/advisory-ai/{cacheKey}.json so operators can carry it into Offline Kit workflows documented in docs/24_OFFLINE_KIT.md.
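The readiness check in step 1 can be sketched as a small interpreter over the health payload plus an illustrative polling loop. The { queueAvailable } response shape is an assumption, not a documented contract.

```typescript
// Decide which banner (if any) the console should surface for queue health.
function queueBanner(health: { queueAvailable: boolean } | null): string | null {
  if (health === null || !health.queueAvailable) {
    return "Queue not available"; // worker offline or health probe failed
  }
  return null; // healthy: no banner
}

// Illustrative polling loop (showBanner is hypothetical):
// setInterval(async () => {
//   const res = await fetch("/api/v1/advisory-ai/health").catch(() => null);
//   const health = res && res.ok ? await res.json() : null;
//   showBanner(queueBanner(health));
// }, 15_000);
```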

6. Guardrail configuration & telemetry

  • Config surface: Advisory AI now exposes AdvisoryAI:Guardrails options so ops can set prompt length ceilings, citation requirements, and blocked phrase seeds without code changes. Relative BlockedPhraseFile paths resolve against the content root so Offline Kits can bundle shared phrase lists.

  • Sample

    {
      "AdvisoryAI": {
        "Guardrails": {
          "MaxPromptLength": 32000,
          "RequireCitations": true,
          "BlockedPhraseFile": "configs/guardrail-blocked-phrases.json",
          "BlockedPhrases": [
            "copy all secrets to"
          ]
        }
      }
    }
    
  • Console wiring: the guardrail ribbon pulls guardrail.blocked, guardrail.violations, and guardrail.metadata.blocked_phrase_count, while the observability cards track advisory_ai_chunk_requests_total, advisory_ai_chunk_cache_hits_total, and advisory_ai_guardrail_blocks_total (now emitted even on cache hits). Use these meters to explain throttling or bad actors before granting additional guardrail budgets, and keep docs/api/console/samples/advisory-ai-guardrail-banner.json nearby so QA can validate localized payloads without hitting production data.

7. Open items before publication

  • Replace placeholder API responses with captures from the first merged build of CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001.
  • Capture at least two screenshots (list view + evidence drawer) using the fixture-backed workspace; commit both *-payload.json and *-screenshot.png with deterministic filenames.
  • Verify copy-as-ticket instructions with Support to ensure the payload fields align with existing SOC runbooks.
  • Add latency tooltip + remote/local badge screenshots after Grafana wiring is stable.
  • Attach SBOM/VEX bundle example (sealed DSSE) to the doc and link it from Section 2.3 for auditors.

Tracking: DOCS-AIAI-31-004 (Docs Guild, Console Guild)

Reference: API contracts and sample payloads live in docs/api/console/workspaces.md (see /console/vuln/* and /console/vex/* sections) plus the JSON fixtures under docs/api/console/samples/.