Here's a crisp product idea you can drop straight into Stella Ops: a **VEX "proof spine"**: an interactive, signed chain that shows exactly *why* a vuln is **not exploitable**, end-to-end.

---

# What it is (plain speak)

* A **proof spine** is a linear (but zoomable) chain of facts: *vuln → package → reachable symbol → guarded path → runtime context → policy verdict*.
* Each segment is **cryptographically signed** (DSSE, in-toto style) so users can audit who/what asserted it, with hashes for inputs and outputs.
* In the UI, the chain appears as **locked graph segments**. Users can expand a segment to see the evidence, but they can't alter it without breaking the signature.

---

# Why it's different

* **From "scanner says so" to "here's the evidence."** This is the leap that Trivy/Snyk-style static readouts don't fully deliver: deterministic reachability plus a proof-linked UX.
* **Time-to-Evidence (TtE)** drops: the path from alert → proof is one click, reducing back-and-forth with security and auditors.
* **Replayable & sovereign:** works offline, and every step is reproducible in air-gapped audits.

---

# Minimal UX spec (fast to ship)

1. **Evidence Rail (left side)**
   * Badges per segment: *SBOM*, *Match*, *Reachability*, *Guards*, *Runtime*, *Policy*.
   * Each badge shows status: ✅ verified, ⚠️ partial, ❌ missing, ⏳ pending.
2. **Chain Canvas (center)**
   * Segments render as locked pills connected by a line.
   * Clicking a pill opens an **Evidence Drawer** with:
     * Inputs (hashes, versions), Tool ID, Who signed (key ID), Signature, Timestamp.
     * "Reproduce" button → prefilled `stellaops scan --replay <spine_hash>`.
3. **Verdict Capsule (top-right)**
   * Final VEX statement (e.g., `not_affected: guarded-by-feature-flag`) with signer, expiry, and the policy that produced it.
4. **Audit Mode toggle**
   * Freezes the view, shows raw DSSE envelopes and canonical JSON for each step.

---

# Data model (lean)

* `ProofSegment`
  * `type`: `SBOM|Match|Reachability|Guard|Runtime|Policy`
  * `inputs`: array of `{name, hash, mediaType}`
  * `result`: JSON blob (canonicalized)
  * `attestation`: DSSE envelope
  * `tool_id`, `version`, `started_at`, `finished_at`
* `ProofSpine`
  * `vuln_id`, `artifact_id`, `segments[]`, `verdict`, `spine_hash`

---

# Deterministic pipeline (dev notes)

1. **SBOM lock** → hash the SBOM slice relevant to the package.
2. **Vuln match** → store matcher inputs (CPE/PURL rules) and result.
3. **Reachability pass** → static callgraph diff with symbol list; record the *exact* rule set and graph hash.
4. **Guard analysis** → record predicates (feature flags, config gates) and the satisfiability result.
5. **Runtime sampling (optional)** → link an eBPF trace or app telemetry digest.
6. **Policy evaluation** → lattice rule IDs + decision; emit the final VEX statement.
7. DSSE-sign each step; **link by previous segment hash** (spine = mini-Merkle chain).

---

# Quick .NET 10 implementation hints

* **Canonical JSON:** `System.Text.Json` with deterministic ordering; pre-normalize floats and timestamps.
* **DSSE:** wrap payloads, sign with your Authority service; store `key_id`, `sig`, `alg`.
* **Hashing:** SHA-256 of the canonical result; spine hash = hash(concat of segment hashes) — sketched in code after the UI contract below.
* **Replay manifests:** emit a single `scan.replay.json` containing feed versions, ruleset IDs, and all input hashes.

---

# Tiny UI contract for Angular

* Component: `ProofSpineComponent`
  * `@Input() spine: ProofSpine`
  * Emits: `replayRequested(spine_hash)`, `segmentOpened(segment_id)`
* Drawer shows: `inputs`, `result`, `attestation`, `reproduce` CTA.
* Badge colors map to the verification state from the backend (`verified/partial/missing/pending`).
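To make the canonicalization and hashing hints concrete, here is a minimal .NET sketch, assuming SHA-256 over the UTF-8 bytes of a canonically serialized payload; `CanonicalJson` and `SpineHashing` are illustrative names, not an existing Stella Ops API.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Illustrative only: one way to get deterministic bytes before hashing/signing.
public static class CanonicalJson
{
    private static readonly JsonSerializerOptions Options = new()
    {
        WriteIndented = false,                          // no cosmetic whitespace
        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
    };

    // Serialize, then re-emit with object keys sorted so byte output is stable.
    public static byte[] ToCanonicalBytes<T>(T value)
    {
        using var doc = JsonDocument.Parse(JsonSerializer.Serialize(value, Options));
        using var buffer = new MemoryStream();
        using (var writer = new Utf8JsonWriter(buffer))
        {
            WriteSorted(doc.RootElement, writer);
        }
        return buffer.ToArray();
    }

    private static void WriteSorted(JsonElement element, Utf8JsonWriter writer)
    {
        switch (element.ValueKind)
        {
            case JsonValueKind.Object:
                writer.WriteStartObject();
                // Ordinal sort of property names is the canonicalization rule here.
                foreach (var property in element.EnumerateObject()
                                                .OrderBy(p => p.Name, StringComparer.Ordinal))
                {
                    writer.WritePropertyName(property.Name);
                    WriteSorted(property.Value, writer);
                }
                writer.WriteEndObject();
                break;

            case JsonValueKind.Array:
                writer.WriteStartArray();
                foreach (var item in element.EnumerateArray())
                {
                    WriteSorted(item, writer);
                }
                writer.WriteEndArray();
                break;

            default:
                element.WriteTo(writer); // strings, numbers, booleans, null as-is
                break;
        }
    }
}

// Illustrative hashing helpers matching the "spine hash = hash(concat of segment hashes)" hint.
public static class SpineHashing
{
    public static string Sha256Hex(byte[] bytes) =>
        Convert.ToHexString(SHA256.HashData(bytes)).ToLowerInvariant();

    public static string SpineHash(IEnumerable<string> segmentHashesHex) =>
        Sha256Hex(Encoding.UTF8.GetBytes(string.Concat(segmentHashesHex)));
}
```

The design choice that matters is sorting object keys with an ordinal comparison before emitting bytes, so the same logical payload always hashes identically regardless of property declaration order.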
---

# How it lands value fast

* Gives customers a **credible "not exploitable"** stance with audit-ready proofs.
* Shortens investigations: SecOps, Dev, and Compliance all work from the same artifact.
* Creates a **moat**: deterministic, signed evidence chains are hard to copy with pure static lists.

If you want, I'll draft the C# models, the DSSE signer interface, and the Angular component skeleton next.

---

Good, let's turn the "proof spine" into something you can actually brief to devs, UX, and auditors as a concrete capability. I'll structure it around: domain model, lifecycle, storage, signing & trust, UX, and dev/testing guidelines.

---

## 1. Scope the Proof Spine precisely

### Core intent

A **Proof Spine** is the *minimal signed chain of reasoning* that justifies a VEX verdict for a given `(artifact, vulnerability)` pair. It must be:

* Deterministic: same inputs → bit-identical spine.
* Replayable: every step has enough context to re-run it.
* Verifiable: each step is DSSE-signed and chained by hashes.
* Decoupled: you can verify a spine even if Scanner/Vexer code changes later.

### Non-goals (so devs don't overextend)

* Not a general logging system.
* Not a full provenance graph (that's for your Proof-of-Integrity Graph).
* Not a data warehouse of all intermediate findings. It's a curated, compressed reasoning chain.

---

## 2. Domain model: from "nice idea" to strict contract

Think in terms of three primitives:

1. `ProofSpine`
2. `ProofSegment`
3. `ReplayManifest`

### 2.1 `ProofSpine` (aggregate root)

Per `(ArtifactId, VulnerabilityId, PolicyProfileId)` you have at most one **latest** active spine.

Key fields:

* `SpineId` (ULID/GUID): stable ID for references and URLs.
* `ArtifactId` (image digest, repo+tag, etc.).
* `VulnerabilityId` (CVE, GHSA, etc.).
* `PolicyProfileId` (which lattice/policy produced the verdict).
* `Segments[]` (ordered; see below).
* `Verdict` (`affected`, `not_affected`, `fixed`, `under_investigation`, etc.).
* `VerdictReason` (short machine code, e.g. `unreachable-code`, `guarded-runtime-config`).
* `RootHash` (hash of the concatenated segment hashes).
* `ScanRunId` (link back to the scan execution).
* `CreatedAt`, `SupersededBySpineId?`.

C# sketch:

```csharp
public sealed record ProofSpine(
    string SpineId,
    string ArtifactId,
    string VulnerabilityId,
    string PolicyProfileId,
    IReadOnlyList<ProofSegment> Segments,
    string Verdict,
    string VerdictReason,
    string RootHash,
    string ScanRunId,
    DateTimeOffset CreatedAt,
    string? SupersededBySpineId);
```
### 2.2 `ProofSegment` (atomic evidence step)

Each segment represents **one logical transformation**.

Examples of `SegmentType`:

* `SBOM_SLICE` – "Which components are relevant?"
* `MATCH` – "Which SBOM component matches this vuln feed record?"
* `REACHABILITY` – "Is the vulnerable symbol reachable in this build?"
* `GUARD_ANALYSIS` – "Is this path gated by a config or feature flag?"
* `RUNTIME_OBSERVATION` – "Was this code observed at runtime?"
* `POLICY_EVAL` – "How did the lattice/policy combine the evidence?"

Fields:

* `SegmentId`
* `SegmentType`
* `Index` (0-based position in the spine)
* `Inputs` (canonical JSON)
* `Result` (canonical JSON)
* `InputHash` (`SHA256(canonical(Inputs))`)
* `ResultHash`
* `PrevSegmentHash` (null for the first segment)
* `Envelope` (DSSE payload + signature)
* `ToolId`, `ToolVersion`
* `Status` (`verified`, `partial`, `invalid`, `unknown`)

C# sketch:

```csharp
public sealed record ProofSegment(
    string SegmentId,
    string SegmentType,
    int Index,
    string InputHash,
    string ResultHash,
    string? PrevSegmentHash,
    DsseEnvelope Envelope,
    string ToolId,
    string ToolVersion,
    string Status);

public sealed record DsseEnvelope(
    string PayloadType,
    string PayloadBase64,
    IReadOnlyList<DsseSignature> Signatures);

public sealed record DsseSignature(
    string KeyId,
    string SigBase64);
```

### 2.3 `ReplayManifest` (reproducibility anchor)

A `ReplayManifest` is emitted per scan run and referenced by multiple spines:

* `ReplayManifestId`
* `Feeds` (names + versions + digests)
* `Rulesets` (reachability rules version, lattice policy version)
* `Tools` (scanner, sbomer, vexer versions)
* `Environment` (OS, arch, container image digest where the scan ran)

This is what your CLI will take:

```bash
stellaops scan --replay <replay-manifest-id> --artifact <artifact-id> --vuln <vuln-id>
```

---

## 3. Lifecycle: where the spine is built in Stella Ops

### 3.1 Producer components

The following services contribute segments:

* `Sbomer` → `SBOM_SLICE`
* `Scanner` → `MATCH`, plus `RUNTIME_OBSERVATION` if it integrates runtime traces
* `Reachability Engine` (inside `Scanner` or a dedicated module) → `REACHABILITY`
* `Guard Analyzer` (config/feature-flag evaluator) → `GUARD_ANALYSIS`
* `Vexer/Excititor` → `POLICY_EVAL`, final verdict
* `Authority` → optional cross-signing / endorsement segment (`TRUST_ASSERTION`)

Important: each microservice **emits its own segments**, not a full spine. A small orchestrator (inside Vexer or a dedicated `ProofSpineBuilder`) collects, orders, and chains them.

### 3.2 Build sequence

Example for a "not affected due to guard" verdict:

1. `Sbomer` produces the `SBOM_SLICE` segment for `(Artifact, Vuln)` and DSSE-signs it.
2. `Scanner` takes the slice and produces the `MATCH` segment (component X → vuln Y).
3. `Reachability` produces the `REACHABILITY` segment (symbol reachable or not).
4. `Guard Analyzer` produces the `GUARD_ANALYSIS` segment (path is gated by `feature_x_enabled=false` under the current policy context).
5. `Vexer` evaluates the lattice and produces the `POLICY_EVAL` segment with the final VEX statement `not_affected`.
6. `ProofSpineBuilder` (see the chaining sketch below):
   * Sorts segments by a predetermined order.
   * Chains `PrevSegmentHash`.
   * Computes `RootHash`.
   * Stores the `ProofSpine` in the canonical store and exposes it via API/GraphQL.
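A hedged sketch of the chaining work in step 6, assuming the `ProofSegment` record from §2.2 and SHA-256 hex hashes; `ProofSpineBuilder.Chain` and the fixed `SegmentOrder` array are illustrative, not an existing Stella Ops component.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Illustrative builder: orders segments, links the hash chain, derives the root hash.
public static class ProofSpineBuilder
{
    // Assumed canonical ordering of segment types inside a spine.
    private static readonly string[] SegmentOrder =
    {
        "SBOM_SLICE", "MATCH", "REACHABILITY",
        "GUARD_ANALYSIS", "RUNTIME_OBSERVATION", "POLICY_EVAL",
    };

    public static (IReadOnlyList<ProofSegment> Chained, string RootHash) Chain(
        IEnumerable<ProofSegment> segments)
    {
        // Unknown segment types sort first (Array.IndexOf returns -1); fine for a sketch.
        var ordered = segments
            .OrderBy(s => Array.IndexOf(SegmentOrder, s.SegmentType))
            .ToList();

        var chained = new List<ProofSegment>(ordered.Count);
        string? prevHash = null;

        for (var i = 0; i < ordered.Count; i++)
        {
            // Re-index and link each segment to its predecessor's result hash.
            var segment = ordered[i] with { Index = i, PrevSegmentHash = prevHash };
            chained.Add(segment);
            prevHash = segment.ResultHash;
        }

        // Root hash = SHA-256 over the concatenated, ordered result hashes (mini-Merkle chain).
        var rootHash = Convert.ToHexString(
                SHA256.HashData(Encoding.UTF8.GetBytes(
                    string.Concat(chained.Select(s => s.ResultHash)))))
            .ToLowerInvariant();

        return (chained, rootHash);
    }
}
```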
---

## 4. Storage & PostgreSQL patterns

You are moving more to Postgres for canonical data, so think along these lines:

### 4.1 Tables (conceptual)

`proof_spines`:

* `spine_id` (PK)
* `artifact_id`
* `vuln_id`
* `policy_profile_id`
* `verdict`
* `verdict_reason`
* `root_hash`
* `scan_run_id`
* `created_at`
* `superseded_by_spine_id` (nullable)
* `segment_count`

Indexes:

* `(artifact_id, vuln_id, policy_profile_id)`
* `(scan_run_id)`
* `(root_hash)`

`proof_segments`:

* `segment_id` (PK)
* `spine_id` (FK)
* `idx`
* `segment_type`
* `input_hash`
* `result_hash`
* `prev_segment_hash`
* `envelope` (bytea or text)
* `tool_id`
* `tool_version`
* `status`
* `created_at`

Optional `proof_segment_payloads` if you want fast JSONB search on `inputs` / `result`:

* `segment_id` (PK, FK)
* `inputs_jsonb`
* `result_jsonb`

Guidelines:

* Use **append-only** semantics: never mutate segments; supersede with a new spine.
* Partition `proof_spines` and `proof_segments` by time or `scan_run_id` if volume is large.
* Keep envelopes as raw bytes; only parse/validate on demand or asynchronously for indexing.

---

## 5. Signing, keys, and trust model

### 5.1 Signers

At minimum:

* One keypair per *service* (Sbomer, Scanner, Reachability, Vexer).
* Optional: vendor keys for imported spines/segments.

Key management:

* Keys and key IDs are owned by the `Authority` service.
* Services obtain signing keys via short-lived tokens, or integrate with an HSM/key vault under Authority control.
* Key rotation:
  * Keys have validity intervals.
  * Spines keep the `KeyId` in each DSSE signature.
  * Authority maintains a trust table: which keys are trusted for which `SegmentType` and time window.

### 5.2 Verification flow

When the UI loads a spine:

1. Fetch the `ProofSpine` and its `ProofSegments`.
2. For each segment:
   * Verify the DSSE signature via the Authority API.
   * Validate `PrevSegmentHash` integrity.
3. Compute the `RootHash` and check it against the stored `RootHash`.
4. Expose a per-segment `status` to the UI: `verified`, `untrusted-key`, `signature-failed`, `hash-mismatch`.

This drives the badge colors in the UX (a minimal verification sketch follows §6.1 below).

---

## 6. UX: from "rail + pills" to full flows

Think of three primary UX contexts:

1. **Vulnerability detail → "Explain why not affected"**
2. **Audit view → "Show me all evidence behind this VEX statement"**
3. **Developer triage → "Where exactly did the reasoning go conservative?"**

### 6.1 Spine view patterns

For each `(artifact, vuln)`:

* **Top summary bar**
  * Verdict pill: `Not affected (guarded by runtime config)`.
  * Confidence / verification status: e.g. `Proof verified`, `Partial proof`.
  * Links:
    * "Download Proof Spine" (JSON/DSSE bundle).
    * "Replay this analysis" (CLI snippet).
* **Spine stepper**
  * Horizontal list of segments (SBOM → Match → Reachability → Guard → Policy).
  * Each segment displays:
    * Type
    * Service name
    * Status (icon + color)
  * On click: open the side drawer.
* **Side drawer (segment detail)**
  * `Who`: `ToolId`, `ToolVersion`, `KeyId`.
  * `When`: timestamps.
  * `Inputs`:
    * Pretty-printed subset with a "Show canonical JSON" toggle.
  * `Result`:
    * Human-oriented short explanation + raw JSON view.
  * `Attestation`:
    * Signature summary: `Signature verified / Key untrusted / Invalid`.
    * `PrevSegmentHash` & `ResultHash` (shortened, with copy icons).
  * "Run this step in isolation" button if you support it (nice-to-have).
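Backing the segment statuses above, here is a minimal sketch of the structural checks from §5.2 (chain integrity and root hash). DSSE signature verification is assumed to be delegated to an Authority client and is omitted; the `ProofSpineVerifier` name is illustrative.

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

// Illustrative verifier for the structural checks in section 5.2.
// Signature verification is assumed to happen via an Authority client and is not shown.
public static class ProofSpineVerifier
{
    public static string Verify(ProofSpine spine)
    {
        string? prevHash = null;

        foreach (var segment in spine.Segments.OrderBy(s => s.Index))
        {
            // Each segment must point at the hash of the previous segment's result;
            // the first segment must carry a null PrevSegmentHash.
            if (segment.PrevSegmentHash != prevHash)
                return "hash-mismatch";

            prevHash = segment.ResultHash;
        }

        // Recompute the root hash exactly as the builder did and compare.
        var recomputed = Convert.ToHexString(
                SHA256.HashData(Encoding.UTF8.GetBytes(
                    string.Concat(spine.Segments.OrderBy(s => s.Index).Select(s => s.ResultHash)))))
            .ToLowerInvariant();

        return recomputed == spine.RootHash ? "verified" : "hash-mismatch";
    }
}
```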
### 6.2 Time-to-Evidence (TtE) integration

You already asked for guidelines on "Tracking UX Health with Time-to-Evidence". Use the spine as the data source:

* Measure `TtE` as either:
  * `time_from_alert_opened_to_first_spine_view`, or
  * `time_from_alert_opened_to_verdict_understood`.
* Instrument events:
  * `spine_opened`, `segment_opened`, `segment_scrolled_to_end`, `replay_clicked`.
* Use this to spot UX bottlenecks:
  * Too many irrelevant segments.
  * Missing human explanations.
  * Overly verbose JSON.

### 6.3 Multiple paths and partial evidence

You might have:

* Static reachability: says "unreachable".
* Runtime traces: not collected.
* Policy: chooses the conservative path.

UI guidelines:

* Allow a small branching visualization if you ever model alternative reasoning paths, but for v1:
  * Treat missing segments as explicit `pending` / `unknown`.
  * Show them as grey pills: "Runtime observation: not available".

---

## 7. Replay & offline/air-gap story

For air-gapped Stella Ops this is one of your moats.

### 7.1 Manifest shape

`ReplayManifest` (JSON, canonicalized):

* `manifest_id`
* `generated_at`
* `tools`:
  * `{ "id": "Scanner", "version": "10.1.3", "image_digest": "..." }`
  * etc.
* `feeds`:
  * `{ "name": "nvd", "version": "2025-11-30T00:00:00Z", "hash": "..." }`
* `policies`:
  * `{ "policy_profile_id": "default-eu", "version": "3.4.0", "hash": "..." }`

CLI contract:

```bash
stellaops scan \
  --replay-manifest <manifest-id> \
  --artifact <artifact-id> \
  --vuln <vuln-id> \
  --explain
```

Replay guarantees:

* If the artifact and feeds are still available, replay reproduces:
  * identical segments,
  * identical `RootHash`,
  * identical verdict.
* If anything changed:
  * The CLI clearly marks the divergence: "Recomputed proof differs from stored spine (hash mismatch)."

### 7.2 Offline bundle integration

Your offline update kit should:

* Ship manifests alongside feed bundles.
* Keep a small index: "manifest_id → bundle file".
* Allow customers to verify that a spine produced six months ago used feed version X that they still have in their archive.

---

## 8. Performance, dedup, and scaling

### 8.1 Dedup segments

Many artifacts share partial reasoning, e.g.:

* The same base image SBOM slice.
* The same reachability result for a shared library.

You have options:

1. **Simple v1:** keep segments embedded in spines. Optimize later.
2. **Advanced:** deduplicate by `ResultHash` + `SegmentType` + `ToolId`:
   * Store unique "segment payloads" in a table keyed by that combination.
   * `ProofSegment` then references the payload via a foreign key.

Guideline for now: instruct devs to design with **possible dedup** in mind (segment payloads should be referable).

### 8.2 Retention strategy

* Keep full spines for:
  * Recent scans (e.g., the last 90 days) for triage.
  * Any spines that were exported to auditors or regulators.
* For older scans:
  * Option A: keep only `POLICY_EVAL` + `RootHash` + a short summary.
  * Option B: archive full spines to object storage (S3/MinIO) keyed by `RootHash`.

---

## 9. Security & multi-tenant boundaries

Stella Ops will likely serve many customers and environments. Guidelines:

* `SpineId` is globally unique, but all queries must be scope-checked (see the sketch below) by:
  * `TenantId`
  * `EnvironmentId`
* Authority verifies not only signatures, but also **key scopes**:
  * Key X is only allowed to sign for Tenant T / Environment E, or for system-wide tools.
* Never leak:
  * file paths,
  * internal IPs,
  * customer-specific configs
  in the human-friendly explanation. Those can stay in the canonical JSON, which is exposed only in advanced/audit mode.
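A small sketch of the scope check above, assuming hypothetical `IProofSpineStore` and `ScopeContext` types (and tenant/environment columns that the §4 schema would need to grow); the point is that filtering happens in the store query itself, so an out-of-scope spine is indistinguishable from a missing one.

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Illustrative scope types; not part of the §4 schema as written.
public sealed record ScopeContext(string TenantId, string EnvironmentId);

public interface IProofSpineStore
{
    // The store applies `WHERE tenant_id = @tenant AND environment_id = @env`
    // in addition to the spine_id lookup.
    Task<ProofSpine?> FindAsync(string spineId, ScopeContext scope, CancellationToken ct);
}

public sealed class ScopedProofSpineReader
{
    private readonly IProofSpineStore _store;

    public ScopedProofSpineReader(IProofSpineStore store) => _store = store;

    public async Task<ProofSpine> GetAsync(string spineId, ScopeContext scope, CancellationToken ct)
    {
        // A spine belonging to another tenant behaves exactly like a missing spine.
        return await _store.FindAsync(spineId, scope, ct)
               ?? throw new KeyNotFoundException($"Spine {spineId} not found in this scope.");
    }
}
```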
---

## 10. Developer & tester guidelines

### 10.1 For implementors (C# / .NET 10)

* Use a **single deterministic JSON serializer** (e.g. a wrapper around `System.Text.Json`) with:
  * Stable property order.
  * A standardized timestamp format (UTC ISO 8601).
  * Explicit numeric formats (no locale-dependent decimals).
* Before signing:
  * Canonicalize the JSON.
  * Hash the bytes directly.
* Never change canonicalization semantics in a minor version. If you must, bump a major version and record it in the `ReplayManifest`.

### 10.2 For test engineers

Build a curated suite of fixture scenarios:

1. "Straightforward not affected":
   * Unreachable symbol, no runtime data, conservative policy → still `not_affected` because the symbol is unreachable.
2. "Guarded at runtime":
   * Reachable symbol, but guarded by config → `not_affected`.
3. "Missing segment":
   * Remove the `REACHABILITY` segment → policy should downgrade to `affected` or `under_investigation`.
4. "Signature tampering":
   * Flip a byte in one DSSE payload → the UI must show `invalid` and mark the entire spine as compromised.
5. "Key revocation":
   * Mark a key untrusted → segments signed with it become `untrusted-key` and the spine is only partially verified.

Provide golden JSON for:

* The `ProofSpine` object.
* Each `ProofSegment` envelope.
* The expected `RootHash`.
* The expected UI status per segment.

---

## 11. How this ties into your moats

This Proof Spine is not just "nice UX":

* It is the **concrete substrate** for:
  * Trust Algebra Studio (the lattice engine acts on segments and outputs `POLICY_EVAL` segments).
  * Proof-Market Ledger (publish `RootHash` + minimal metadata).
  * Deterministic, replayable scans (spine + manifest).
* Competitors can show "reasons", but you are explicitly providing:
  * signed, chain-of-evidence reasoning,
  * with deterministic replay,
  * packaged for regulators and procurement.

---

If you want, as a next step I can draft:

* A proto/JSON schema for `ProofSpine` bundles for export/import.
* A minimal set of REST/GraphQL endpoints for querying spines from the UI and by external auditors.