## 1) Define the product primitive (non-negotiable)

### Directive (shared)

**The product’s primary output is not “findings.” It is a “Risk Verdict Attestation” (RVA).** Everything else (SBOMs, CVEs, VEX, reachability, reports) is *supporting evidence* referenced by the RVA.

### What “first-class artifact” means in practice

1. **The verdict is an OCI artifact “referrer” attached to a specific image/artifact digest** via OCI 1.1 `subject` and discoverable via the referrers API. ([opencontainers.org][1])
2. **The verdict is cryptographically signed** (at least one supported signing pathway).
   * DSSE is a standard approach for signing attestations, and cosign supports creating/verifying in-toto attestations signed with DSSE. ([Sigstore][2])
   * Notation is a widely deployed approach for signing/verifying OCI artifacts in enterprise environments. ([Microsoft Learn][3])

---

## 2) Directions for Product Managers (PM)

### A. Write the “Risk Verdict Attestation v1” product contract

**Deliverable:** A one-page contract + schema that product and customers can treat as an API.

Minimum fields the contract must standardize:

* **Subject binding:** exact OCI digest, repo/name, platform (if applicable)
* **Verdict:** `PASS | FAIL | PASS_WITH_EXCEPTIONS | INDETERMINATE`
* **Policy reference:** policy ID, policy digest, policy version, enforcement mode
* **Knowledge snapshot reference:** snapshot ID + digest (see replay semantics below)
* **Evidence references:** digests/pointers for SBOM, VEX inputs, vuln feed snapshot, reachability proof(s), config snapshot, and unknowns summary
* **Reason codes:** stable machine-readable codes (`RISK.CVE.REACHABLE`, `RISK.VEX.NOT_AFFECTED`, `RISK.UNKNOWN.INPUT_MISSING`, etc.)
* **Human explanation stub:** short rationale text plus links/IDs for deeper evidence

**Key PM rule:** the contract must be **stable and versioned**, with explicit deprecation rules. If you can’t maintain compatibility, ship a new version (v2); don’t silently mutate v1.

Why: OCI referrers create long-lived metadata chains. Breaking them is a customer trust failure.

### B. Define strict replay semantics as a product requirement (not “nice to have”)

PM must specify what “same inputs” means. At minimum, inputs include:

* artifact digest (subject)
* policy bundle digest
* vulnerability dataset snapshot digest(s)
* VEX bundle digest(s)
* SBOM digest(s) or SBOM generation recipe digest
* scoring rules version/digest
* engine version
* reachability configuration version/digest (if enabled)

**Product acceptance criterion:** When a user re-runs evaluation in “replay mode” using the same knowledge snapshot and policy digest, the **verdict and reason codes must match** (a byte-for-byte identical predicate is ideal; if not, the deterministic portion must match exactly).

OCI 1.1 and ORAS guidance also imply you should avoid shoving large evidence into annotations; store large evidence as blobs and reference it by digest. ([opencontainers.org][1])
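To make the replay contract concrete, here is a minimal Go sketch of one way to pin the inputs listed above behind a single content-addressed snapshot digest. The type and field names (`KnowledgeSnapshot`, `SnapshotInput`, `Kind`, and so on) are illustrative assumptions, not the shipped schema.

```go
// Minimal sketch (assumed names/fields): pin every evaluation input
// behind one content-addressed knowledge snapshot digest.
package snapshot

import (
	"crypto/sha256"
	"encoding/json"
	"fmt"
	"sort"
)

// SnapshotInput identifies one dataset that feeds evaluation
// (vuln feed, VEX bundle, SBOM recipe, scoring rules, ...).
type SnapshotInput struct {
	Kind    string `json:"kind"`    // e.g. "vuln-feed", "vex-bundle"
	Version string `json:"version"` // dataset version label
	Digest  string `json:"digest"`  // sha256 of the dataset content
}

// KnowledgeSnapshot is the manifest the RVA later references by digest.
type KnowledgeSnapshot struct {
	ID     string          `json:"id"`
	Inputs []SnapshotInput `json:"inputs"`
}

// Digest canonicalizes the manifest (stable input ordering, no timestamps)
// and hashes it, so the same inputs always yield the same snapshot digest.
func (ks KnowledgeSnapshot) Digest() (string, error) {
	sort.Slice(ks.Inputs, func(i, j int) bool {
		if ks.Inputs[i].Kind != ks.Inputs[j].Kind {
			return ks.Inputs[i].Kind < ks.Inputs[j].Kind
		}
		return ks.Inputs[i].Digest < ks.Inputs[j].Digest
	})
	raw, err := json.Marshal(ks) // struct fields marshal in a fixed order
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(raw)), nil
}
```

With a shape like this, replay mode only needs the snapshot digest (plus the policy and engine digests) embedded in the RVA to reconstruct and verify the exact input set offline.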
### C. Make “auditor evidence extraction” a first-order user journey

Define the auditor journey as a separate persona:

* Auditor wants: “Prove why you blocked/allowed artifact X at time Y.”
* They should be able to:
  1. Verify the signature chain
  2. Extract the decision + evidence package
  3. Replay the evaluation
  4. Produce a human-readable report without bespoke consulting

**PM feature requirements (v1)**

* `explain` experience that outputs:
  * decision summary
  * policy used
  * evidence references and hashes
  * top N reasons (with stable codes)
  * unknowns and assumptions
* `export-audit-package` experience:
  * exports a ZIP (or OCI bundle) containing the RVA, its referenced evidence artifacts, and a machine-readable manifest listing all digests
* `verify` experience:
  * verifies signature + policy expectations (who is trusted to sign; which predicate type(s) are acceptable)

Cosign explicitly supports creating/verifying in-toto attestations (DSSE-signed) and even validating custom predicates against policy languages like Rego/CUE; this is a strong PM anchor for ecosystem interoperability. ([Sigstore][2])

---

## 3) Directions for Development Managers (Dev/Eng)

### A. Implement OCI attachment correctly (artifact, referrer, fallback)

**Engineering decisions:**

1. Store the RVA as an OCI artifact manifest with:
   * `artifactType` set to your verdict media type
   * `subject` pointing to the exact image/artifact digest being evaluated

   OCI 1.1 introduced these fields for associating metadata artifacts and retrieving them via the referrers API. ([opencontainers.org][1])
2. Support discovery via:
   * the referrers API (`GET /v2/<name>/referrers/<digest>`) when the registry supports it
   * a **fallback “tagged index” strategy** for registries that don’t support referrers (OCI 1.1 guidance calls out a fallback tag approach and client responsibilities). ([opencontainers.org][1])

**Dev acceptance tests**

* Push the subject image → push the RVA artifact with `subject` → query referrers → the RVA appears.
* On a registry without referrers support: fallback retrieval still works.

### B. Use a standard attestation envelope and signing flow

For attestations, the lowest-friction pathway is:

* in-toto Statement + DSSE envelope
* sign/verify using cosign-compatible workflows (so customers can verify without you) ([Sigstore][2])

DSSE matters because it:

* authenticates message + type
* avoids canonicalization pitfalls
* supports arbitrary encodings ([GitHub][4])

**Engineering rule:** the signed payload must include enough data to replay and audit (policy + knowledge snapshot digests), but avoid embedding huge evidence blobs directly.

### C. Build determinism into the evaluation core (not bolted on)

**“Same inputs → same verdict” is a software architecture constraint.** It fails if any of these are non-deterministic:

* fetching the “latest” vulnerability DB at runtime
* unstable iteration order (maps/hashes)
* timestamps included as decision inputs
* concurrency races changing aggregation order
* floating-point scoring without canonical rounding

**Engineering requirements**

1. Create a **Knowledge Snapshot** object (content-addressed):
   * a manifest listing every dataset input by digest and version
2. The evaluation function becomes (see the sketch after the acceptance tests below):
   * `Verdict = Evaluate(subject_digest, policy_digest, knowledge_snapshot_digest, engine_version, options_digest)`
3. The RVA must embed those digests so replay is possible offline.

**Dev acceptance tests**

* Run Evaluate twice with the same snapshot/policy → verdict + reason codes identical.
* Run Evaluate with one dataset changed (snapshot digest differs) → the RVA must reflect the changed snapshot digest.
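The determinism requirements above collapse into one function boundary. The sketch below uses assumed types and a placeholder decision rule (it is not the real engine API); its point is the two guards that are easiest to miss: inputs restricted to content-addressed references, and canonical ordering before anything is emitted.

```go
// Minimal sketch (assumed types; placeholder rule): a deterministic
// evaluation boundary whose only inputs are content-addressed references.
package eval

import "sort"

// Inputs carries digests only: no "latest" feeds, no wall-clock time,
// no environment lookups.
type Inputs struct {
	SubjectDigest           string
	PolicyDigest            string
	KnowledgeSnapshotDigest string
	EngineVersion           string
	OptionsDigest           string
}

// Verdict is the deterministic portion that must be byte-stable on replay.
type Verdict struct {
	Decision    string   // PASS | FAIL | PASS_WITH_EXCEPTIONS | INDETERMINATE
	ReasonCodes []string // stable machine-readable codes
}

// Evaluate is a pure function of its inputs. The findings map stands in
// for whatever was resolved from the knowledge snapshot referenced by
// in.KnowledgeSnapshotDigest; the single rule below is a placeholder.
func Evaluate(in Inputs, findings map[string]string) Verdict {
	// Collect reason codes as a set, then emit them in sorted order.
	// Go map iteration order is randomized, so sorting before output
	// is what keeps the verdict byte-stable across runs.
	codes := map[string]bool{}
	for _, status := range findings {
		if status == "reachable" { // placeholder decision rule
			codes["RISK.CVE.REACHABLE"] = true
		}
	}

	v := Verdict{Decision: "PASS"}
	for c := range codes {
		v.ReasonCodes = append(v.ReasonCodes, c)
	}
	sort.Strings(v.ReasonCodes) // canonical ordering for byte-stable output
	if len(v.ReasonCodes) > 0 {
		v.Decision = "FAIL"
	}
	return v
}
```

Both dev acceptance tests above fall out of this shape: re-running with the same digests reproduces the verdict and reason codes, and changing any dataset changes the snapshot digest that the RVA must record.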
### D. Treat “evidence” as a graph of content-addressed artifacts

Implement evidence storage with these rules:

* Large evidence artifacts are stored as OCI blobs/artifacts (SBOM, VEX bundle, reachability proof graph, config snapshot).
* The RVA references evidence by digest and type.
* “Explain” traverses this graph and renders:
  * a machine-readable explanation JSON
  * a human-readable report

ORAS guidance highlights artifact typing via `artifactType` in OCI 1.1 and suggests keeping manifests manageable; don’t overload annotations. ([oras.land][5])

### E. Provide a verification and policy enforcement path

You want customers to be able to enforce “only run artifacts with an approved RVA predicate.” Two practical patterns:

* **Cosign verification of attestations** (customers can run `verify-attestation` and validate predicate structure; cosign supports validating attestations with policy languages like Rego/CUE). ([Sigstore][2])
* **Notation signatures** for organizations that standardize on Notary/Notation for OCI signing/verification workflows. ([Microsoft Learn][3])

Engineering should not hard-code one choice; implement an abstraction:

* signing backend: `cosign/DSSE` first
* optional: a Notation signature over the RVA artifact for environments that require it

---

## 4) Minimal “v1” spec by example (what your teams should build)

### A. OCI artifact requirements (registry-facing)

* The artifact is discoverable as a referrer via `subject` linkage and `artifactType` classification (OCI 1.1). ([opencontainers.org][1])

### B. Attestation payload structure (contract-facing)

In code terms (illustrative only), build on the in-toto Statement model:

```json
{
  "_type": "https://in-toto.io/Statement/v0.1",
  "subject": [
    {
      "name": "oci://registry.example.com/team/app",
      "digest": { "sha256": "<image-digest>" }
    }
  ],
  "predicateType": "https://stellaops.dev/attestations/risk-verdict/v1",
  "predicate": {
    "verdict": "FAIL",
    "reasonCodes": ["RISK.CVE.REACHABLE", "RISK.POLICY.THRESHOLD_EXCEEDED"],
    "policy": { "id": "prod-gate", "digest": "sha256:<policy-digest>" },
    "knowledgeSnapshot": { "id": "ks-2025-12-19", "digest": "sha256:<snapshot-digest>" },
    "evidence": {
      "sbom": { "digest": "sha256:<sbom-digest>", "format": "cyclonedx-json" },
      "vexBundle": { "digest": "sha256:<vex-digest>", "format": "openvex" },
      "vulnData": { "digest": "sha256:<vuln-data-digest>" },
      "reachability": { "digest": "sha256:<reachability-digest>" },
      "unknowns": { "count": 2, "digest": "sha256:<unknowns-digest>" }
    },
    "engine": { "name": "stella-eval", "version": "1.3.0" }
  }
}
```

Cosign supports creating and verifying in-toto attestations (DSSE-signed), which is exactly the interoperability you want for customer-side verification. ([Sigstore][2])

---

## 5) Definition of Done (use this to align PM/Eng and prevent scope drift)

### v1 must satisfy all of the following:

1. **OCI-attached:** the RVA is stored as an OCI artifact referrer to the subject digest and is discoverable (referrers API + fallback mode). ([opencontainers.org][1])
2. **Signed:** the RVA can be verified by a standard toolchain (cosign at minimum). ([Sigstore][2])
3. **Replayable:** given the embedded policy + knowledge snapshot digests, the evaluation can be replayed and produces the same verdict + reason codes.
4. **Auditor extractable:** one command produces an audit package containing:
   * the RVA attestation
   * the policy bundle
   * the knowledge snapshot manifest
   * the referenced evidence artifacts
   * an “explanation report” rendering the decision
5. **Stable contract:** the predicate schema is versioned and validated (strict JSON schema checks; backwards-compatibility rules).
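Criterion 5 is where contract drift usually creeps in, so it helps to pin down what “strict” validation means in code. The sketch below mirrors the illustrative payload from section 4B; every rule in it (reject unknown fields, unknown verdicts, malformed digests and reason codes) is an assumption for illustration, not the normative schema.

```go
// Minimal sketch (assumed rules; mirrors the illustrative v1 predicate):
// strict structural validation of the risk-verdict predicate.
package contract

import (
	"bytes"
	"encoding/json"
	"fmt"
	"regexp"
)

type ref struct {
	ID     string `json:"id"`
	Digest string `json:"digest"`
}

type Predicate struct {
	Verdict           string          `json:"verdict"`
	ReasonCodes       []string        `json:"reasonCodes"`
	Policy            ref             `json:"policy"`
	KnowledgeSnapshot ref             `json:"knowledgeSnapshot"`
	Evidence          json.RawMessage `json:"evidence"`
	Engine            json.RawMessage `json:"engine"`
}

var (
	digestRe = regexp.MustCompile(`^sha256:[a-f0-9]{64}$`)
	codeRe   = regexp.MustCompile(`^[A-Z]+(\.[A-Z_]+)+$`)
	verdicts = map[string]bool{
		"PASS": true, "FAIL": true,
		"PASS_WITH_EXCEPTIONS": true, "INDETERMINATE": true,
	}
)

// ValidateV1 rejects payloads that would break replay or audit:
// unknown top-level fields, unknown verdicts, malformed digests,
// or unstable reason codes.
func ValidateV1(raw []byte) (*Predicate, error) {
	dec := json.NewDecoder(bytes.NewReader(raw))
	dec.DisallowUnknownFields() // strictness: fail on fields v1 does not define
	var p Predicate
	if err := dec.Decode(&p); err != nil {
		return nil, err
	}
	if !verdicts[p.Verdict] {
		return nil, fmt.Errorf("unknown verdict %q", p.Verdict)
	}
	for _, d := range []string{p.Policy.Digest, p.KnowledgeSnapshot.Digest} {
		if !digestRe.MatchString(d) {
			return nil, fmt.Errorf("malformed digest %q", d)
		}
	}
	for _, c := range p.ReasonCodes {
		if !codeRe.MatchString(c) {
			return nil, fmt.Errorf("malformed reason code %q", c)
		}
	}
	return &p, nil
}
```

Backwards compatibility then becomes a concrete rule: v1 validators must keep accepting every payload that ever validated, and anything stricter or shape-changing ships as a v2 predicate type rather than a silent mutation of v1.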