# Console Observability
This document describes Console observability expectations: what telemetry matters, how to correlate UI actions with backend traces, and what to surface in air-gapped deployments.
## What to Measure (UI)
Recommended UI metrics include the following; a recording sketch follows the list:
- Time-to-first-verdict (TTFV): from navigation to verdict banner rendered.
- Time-to-evidence: from clicking a fact/badge to evidence preview available.
- Export latency and success rate: evidence bundle generation time and failures.
- Mute/exception usage: how often operators suppress or escalate findings (counts, reversal rate).
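To make time-to-first-verdict comparable across sessions, record it from explicit marks rather than ad-hoc timers. The sketch below is a minimal illustration using the browser Performance API; the metric name and the `emit` sink are assumptions, not part of the Console codebase.

```ts
// Hypothetical metric name; align with the conventions in docs/observability/ui-telemetry.md.
const METRIC_TTFV = "console.ui.time_to_first_verdict_ms";

// Call when navigation to the verdict view begins.
export function markNavigationStart(): void {
  performance.mark("verdict:navigation-start");
}

// Call once the verdict banner has rendered; measures and emits the duration.
export function markVerdictRendered(emit: (name: string, valueMs: number) => void): void {
  performance.mark("verdict:banner-rendered");
  const measure = performance.measure(
    "verdict:time-to-first-verdict",
    "verdict:navigation-start",
    "verdict:banner-rendered",
  );
  emit(METRIC_TTFV, measure.duration);
}
```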
## What to Log (Structured)
Console logs should be structured and tenant-scoped; a sample entry shape follows the field list:
- tenantId, actor, actionType
- artifactId / image digest
- findingId / vulnerability identifiers (when relevant)
- traceId / correlation IDs that tie UI requests to backend traces
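As an illustration of the fields above, a tenant-scoped entry might be shaped as follows. This is a sketch only; the interface name and logger are assumptions, not the Console's actual schema.

```ts
// Illustrative shape for a structured, tenant-scoped Console log entry.
interface ConsoleActionLog {
  tenantId: string;
  actor: string;        // user or service identity performing the action
  actionType: string;   // e.g. "evidence.export", "finding.mute"
  artifactId?: string;  // or image digest, when the action targets an artifact
  findingId?: string;   // vulnerability identifiers, when relevant
  traceId: string;      // correlation ID tying the UI request to backend traces
  timestamp: string;    // ISO 8601
}

// Emit each entry as a single JSON line so downstream pipelines can index per tenant.
function logAction(entry: ConsoleActionLog): void {
  console.log(JSON.stringify(entry));
}
```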
## Error Surfaces
Operators need actionable error messaging; a minimal normalization sketch follows this list:
- Distinguish client validation errors from server failures.
- Provide a copyable correlation/trace ID for support.
- Avoid leaking stack traces or secrets into UI notifications.
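Below is a minimal sketch of how failures could be normalized before notifying the operator, assuming a fetch-style `Response`; the notice shape and messages are illustrative, not the Console's actual API.

```ts
// Operator-facing error notice: no stack traces or secrets, but a copyable trace ID.
interface UiErrorNotice {
  kind: "client-validation" | "server-failure";
  message: string;
  traceId?: string;
}

function toUiErrorNotice(response: Response, traceId?: string): UiErrorNotice {
  // 4xx responses are treated as client/validation problems the operator can fix.
  if (response.status >= 400 && response.status < 500) {
    return {
      kind: "client-validation",
      message: "Request was rejected; check the highlighted fields.",
      traceId,
    };
  }
  // Everything else is surfaced as a server failure with the correlation ID for support.
  return {
    kind: "server-failure",
    message: "The service could not complete the request.",
    traceId,
  };
}
```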
## Offline / Sealed Mode Telemetry
In sealed mode, surface the following; a staleness-labelling sketch follows the list:
- Snapshot identity and staleness budgets.
- Which data is stale vs fresh (policy pack version, VEX snapshot time, feed ages).
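A sketch of deriving a stale/fresh label from a snapshot timestamp and its staleness budget; the field names and labels are assumptions for illustration.

```ts
// Per-source snapshot metadata surfaced in sealed mode.
interface SnapshotStatus {
  name: string;              // e.g. "policy pack", "VEX snapshot", "advisory feed"
  capturedAt: Date;          // when the snapshot was taken
  stalenessBudgetMs: number; // maximum acceptable age before flagging
}

// Compare snapshot age against its budget to decide the badge shown in the UI.
function stalenessLabel(status: SnapshotStatus, now: Date = new Date()): "fresh" | "stale" {
  const ageMs = now.getTime() - status.capturedAt.getTime();
  return ageMs <= status.stalenessBudgetMs ? "fresh" : "stale";
}
```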
## References
- UI telemetry guidance: `docs/observability/ui-telemetry.md`
- Accessibility baseline: `docs/accessibility.md`