# Advisory AI Console Workflows
Last updated: 2025-11-07
This guide documents the forthcoming Advisory AI console experience so that console, docs, and QA guilds share a single reference while the new endpoints finish landing.
## 1. Entry points & navigation

- Dashboard tile: the Advisory AI card on the console overview routes to `/console/vuln/advisory-ai` once CONSOLE-VULN-29-001 ships. The tile must include the current model build stamp and data freshness time.
- Deep links: copy-as-ticket payloads link back into the console using `/console/vex/{statementId}` (CONSOLE-VEX-30-001). Provide fallbacks that open the Evidence modal with a toast if the workspace is still loading (a minimal fallback sketch follows this list).
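A minimal TypeScript sketch of that deep-link fallback, assuming hypothetical `navigate`, `openEvidenceModal`, and `showToast` console helpers; it illustrates the intended behaviour, not the shipped implementation.

```typescript
// Hedged sketch: follow a VEX deep link, falling back to the Evidence modal
// with a toast when the workspace has not finished loading. The helper
// functions passed in `deps` are illustrative placeholders.
export function followVexDeepLink(
  statementId: string,
  workspaceReady: boolean,
  deps: {
    navigate: (path: string) => void;
    openEvidenceModal: (statementId: string) => void;
    showToast: (message: string) => void;
  },
): void {
  if (workspaceReady) {
    // Normal path: route straight to the VEX statement view (CONSOLE-VEX-30-001).
    deps.navigate(`/console/vex/${encodeURIComponent(statementId)}`);
    return;
  }
  // Fallback: workspace still loading, so open the Evidence modal and explain why.
  deps.openEvidenceModal(statementId);
  deps.showToast("Workspace is still loading; showing cached evidence.");
}
```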
## 2. Evidence surfacing

| Workflow | Required API | Notes |
|---|---|---|
| Findings overview | `GET /console/vuln/findings` | Must include policy verdict badge, VEX justification summary, and last-seen timestamps. |
| Evidence drawer | `GET /console/vex/statements/{id}` | Stream SSE chunk descriptions so long-form provenance renders progressively (see the streaming sketch below the table). |
| Copy as ticket | `POST /console/vuln/tickets` | Returns signed payload + attachment list for JIRA/ServiceNow templates. |
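The evidence-drawer row streams SSE chunk descriptions; the sketch below shows one way the console could consume that stream. The `chunk` event name and payload shape are assumptions for illustration, not the confirmed contract.

```typescript
// Minimal sketch: progressively render provenance chunks from the evidence
// drawer SSE stream. Event name and payload fields are assumed.
interface EvidenceChunk {
  chunkId: string;
  sourceName: string;
  text: string;
}

export function streamEvidence(
  statementId: string,
  onChunk: (chunk: EvidenceChunk) => void,
): () => void {
  const source = new EventSource(
    `/console/vex/statements/${encodeURIComponent(statementId)}`,
  );

  // Each SSE message carries one chunk description; append it as it arrives
  // so long-form provenance renders progressively instead of blocking.
  source.addEventListener("chunk", (event) => {
    onChunk(JSON.parse((event as MessageEvent).data) as EvidenceChunk);
  });

  source.onerror = () => source.close();

  // Return a disposer so the drawer can stop streaming when it closes.
  return () => source.close();
}
```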
### 2.1 Plan composer vs response panel

- Plan inspector (left rail) mirrors the orchestrator output: structured chunks, SBOM summary, dependency counts, and cache key. Surface cache hits with the “Reused plan” badge that reads from `plan.planFromCache`.
- Prompt preview must show the sanitized prompt and the raw inference response side-by-side once CONSOLE-VULN-29-001 exposes `/console/vuln/advisory-ai/{cacheKey}`. Always label the sanitized prompt “Guardrail-safe prompt”.
- Citations: render as `[n] Source Name` chips that scroll the evidence drawer to the matching chunk. Use the chunk ID from `prompt.citations[*].chunkId` to keep navigation deterministic.
- Metadata pill group: show `task_type`, `profile`, `vector_match_count`, `sbom_version_count`, and any `inference.*` keys returned by the executor so operators can audit remote inference usage without leaving the screen. An assumed response-shape sketch follows the mock capture below.
Mock capture generated from the sealed data model to illustrate required widgets until live screenshots ship.
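For reference, a hedged TypeScript sketch of the response shape the plan inspector, prompt preview, and citation chips are assumed to consume. Only `plan.planFromCache` and `prompt.citations[*].chunkId` come from this guide; every other field is illustrative, not the confirmed API contract.

```typescript
// Assumed response shape for the plan composer / response panel widgets.
interface AdvisoryAiPlan {
  planFromCache: boolean;           // drives the "Reused plan" badge
  cacheKey: string;
  sbomVersionCount: number;         // illustrative field name
  dependencyCount: number;          // illustrative field name
}

interface PromptCitation {
  chunkId: string;                  // used to scroll the evidence drawer
  sourceName: string;
  index: number;                    // rendered as the [n] chip label
}

interface AdvisoryAiResponse {
  plan: AdvisoryAiPlan;
  prompt: {
    sanitized: string;              // labelled "Guardrail-safe prompt"
    raw: string;
    citations: PromptCitation[];
  };
  metadata: Record<string, string>; // task_type, profile, inference.*, ...
}
```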
## 3. Accessibility & offline requirements

- Console screens must pass WCAG 2.2 AA contrast and provide a focus order that matches the keyboard shortcuts planned for Advisory AI (see `docs/advisory-ai/overview.md`).
- All screenshots captured for this doc must come from sealed-mode bundles (no external fonts/CDNs). Store them under `docs/assets/advisory-ai/console/` with hashed filenames.
- Modal dialogs need `aria-describedby` attributes referencing the explanation text returned by the API; translation strings must live with existing locale packs.
### 3.1 Guardrail & inference status

- Display a guardrail ribbon at the top of the response panel with three states (a state-mapping sketch follows the mock capture below):
  - `Blocked` (red) when `guardrail.blocked = true` → show the blocked phrase count and require the operator to acknowledge before the response JSON is revealed.
  - `Warnings` (amber) when `guardrail.violations.Length > 0` but not blocked.
  - `Clean` (green) otherwise.
- If the executor falls back to sanitized prompts (`inference.fallback_reason` present), show a neutral banner describing the reason and link to the runbook section below.
- Surface `inference.model_id`, prompt/completion token counts, and the latency histogram from `advisory_ai_latency_seconds_bucket` next to the response so ops can correlate user impact with remote/local mode toggles (`ADVISORYAI__Inference__Mode`).
Mock capture showing plan inspector vs response panel; replace with live console screenshot once CONSOLE-VULN-29-001 lands.
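A minimal sketch of the ribbon state mapping described above, assuming the response exposes `guardrail.blocked` and a `guardrail.violations` list; the `GuardrailInfo` shape is illustrative.

```typescript
// Hedged sketch: map guardrail fields to the three ribbon states.
type RibbonState = "blocked" | "warnings" | "clean";

interface GuardrailInfo {
  blocked: boolean;
  violations: string[];
}

export function ribbonState(guardrail: GuardrailInfo): RibbonState {
  // Blocked takes priority: the operator must acknowledge before the
  // response JSON is revealed.
  if (guardrail.blocked) {
    return "blocked";
  }
  // Amber when violations exist but the response was not blocked.
  if (guardrail.violations.length > 0) {
    return "warnings";
  }
  return "clean";
}
```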
## 4. Copy-as-ticket guidance
- Operators select one or more VEX-backed findings.
- Console renders the sanitized payload (JSON) plus context summary for the receiving system.
- Users can download the payload or send it via webhook; both flows must log `console.ticket.export` events for audit (a hedged export sketch follows this list).
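A hedged sketch of the export flow against `POST /console/vuln/tickets` from section 2; the request/response fields and the audit-log call are assumptions for illustration, not the confirmed contract.

```typescript
// Assumed request/response shapes for the copy-as-ticket export flow.
interface TicketExportRequest {
  findingIds: string[];             // VEX-backed findings selected by the operator
  target: "jira" | "servicenow";
}

interface TicketExportResponse {
  signedPayload: string;            // sanitized JSON payload for the receiving system
  attachments: string[];
}

export async function exportTicket(
  request: TicketExportRequest,
): Promise<TicketExportResponse> {
  const response = await fetch("/console/vuln/tickets", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(request),
  });
  if (!response.ok) {
    throw new Error(`Ticket export failed: ${response.status}`);
  }
  const payload = (await response.json()) as TicketExportResponse;

  // Both download and webhook flows must emit the audit event; the logger
  // used here is a placeholder for the console telemetry helper.
  console.info("console.ticket.export", {
    findingCount: request.findingIds.length,
    target: request.target,
  });

  return payload;
}
```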
## 5. Offline & air-gapped console behaviour

- Volume readiness – confirm the RWX volume (`/var/lib/advisory-ai/{queue,plans,outputs}`) is mounted; the console should poll `/api/v1/advisory-ai/health` and surface “Queue not available” if the worker is offline (a polling sketch follows this list).
- Cached responses – when running air-gapped, highlight that only cached plans/responses are available by showing the `planFromCache` badge plus the `generatedAtUtc` timestamp.
- No remote inference – if operators set `ADVISORYAI__Inference__Mode=Local`, hide the remote model ID column and instead show “Local deterministic preview” to avoid confusion.
- Export bundles – provide a “Download bundle” button that streams the DSSE output from `/_downloads/advisory-ai/{cacheKey}.json` so operators can carry it into the Offline Kit workflows documented in `docs/24_OFFLINE_KIT.md`.
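A minimal polling sketch for the volume-readiness check, assuming `/api/v1/advisory-ai/health` returns JSON with a boolean worker status; the `workerOnline` field name is an assumption.

```typescript
// Hedged sketch: poll the health endpoint and surface "Queue not available"
// when the worker looks offline. The response field name is assumed.
export function pollAdvisoryAiHealth(
  onOffline: () => void,
  intervalMs = 30_000,
): () => void {
  const timer = setInterval(async () => {
    try {
      const response = await fetch("/api/v1/advisory-ai/health");
      const health = (await response.json()) as { workerOnline?: boolean };
      if (!response.ok || health.workerOnline === false) {
        onOffline();                // surface "Queue not available"
      }
    } catch {
      onOffline();                  // network failure counts as offline
    }
  }, intervalMs);

  // Return a disposer so the screen can stop polling on unmount.
  return () => clearInterval(timer);
}
```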
## 6. Open items before publication
- Replace placeholder API responses with captures from the first merged build of CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001.
- Capture at least two screenshots (list view + evidence drawer) once UI polish is complete.
- Verify copy-as-ticket instructions with Support to ensure the payload fields align with existing SOC runbooks.
- Add latency tooltip + remote/local badge screenshots after Grafana wiring is stable.
Tracking: DOCS-AIAI-31-004 (Docs Guild, Console Guild)
Reference: API contracts and sample payloads live in `docs/api/console/workspaces.md` (see the `/console/vuln/*` and `/console/vex/*` sections) plus the JSON fixtures under `docs/api/console/samples/`.