house keeping work

This commit is contained in:
StellaOps Bot
2025-12-19 22:19:08 +02:00
parent 91f3610b9d
commit 5b57b04484
64 changed files with 4702 additions and 4 deletions

## 1) Anchor the differentiator in one sentence everyone repeats
**Positioning invariant:**
Stella Ops does not “consume VEX to suppress findings.” Stella Ops **verifies who made the claim, scores how much to trust it, deterministically applies it to a decision, and emits a signed, replayable verdict**.
Everything you ship should make that sentence more true.
---
## 2) Shared vocabulary PMs/DMs must standardize
If you don't align on these, you'll ship features that look similar to competitors' but do not compound into a moat.
### Core objects
- **VEX source**: a distribution channel and issuer identity (e.g., vendor feed, distro feed, OCI-attached attestation).
- **Issuer identity**: cryptographic identity used to sign/attest the VEX (key/cert/OIDC identity), not a string.
- **VEX statement**: one claim about one vulnerability status for one or more products; common statuses include *Not Affected, Affected, Fixed, Under Investigation* (terminology varies by format).
- **Verification result**: cryptographic + semantic verification facts about a VEX document/source.
- **Trust score**: deterministic numeric/ranked evaluation of the source and/or statement quality.
- **Decision**: a policy outcome (pass/fail/needs-review) for a specific artifact or release.
- **Attestation**: signed statement bound to an artifact (e.g., OCI artifact) that captures decision + evidence.
- **Knowledge snapshot**: frozen set of inputs (VEX docs, keys, policies, vulnerability DB versions, scoring code version) required for deterministic replay.
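A minimal sketch of how these objects might map onto a canonical internal model; class names, fields, and enum values are illustrative assumptions rather than Stella Ops' actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class VexStatus(Enum):
    NOT_AFFECTED = "not_affected"
    AFFECTED = "affected"
    FIXED = "fixed"
    UNDER_INVESTIGATION = "under_investigation"

@dataclass(frozen=True)
class IssuerIdentity:
    """Cryptographic identity, not a display string."""
    subject: str              # e.g. certificate subject or OIDC identity
    trust_anchor: str         # identifier of the root/anchor it chains to

@dataclass(frozen=True)
class VexStatement:
    vulnerability_id: str                 # CVE or alias
    products: tuple[str, ...]             # purls the claim applies to
    status: VexStatus
    justification: str | None = None      # required by policy for NOT_AFFECTED

@dataclass(frozen=True)
class KnowledgeSnapshot:
    """Frozen inputs needed to replay a decision deterministically."""
    vex_doc_digests: tuple[str, ...]
    policy_bundle_digest: str
    vuln_db_snapshot_id: str
    scoring_algorithm_version: str
    evaluation_timestamp: str             # explicit input, never "now()"
```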
---
## 3) Product Manager guidelines
### 3.1 Treat “VEX source onboarding” as a first-class product workflow
Your differentiator collapses if VEX is just “upload a file.”
**PM requirements:**
1. **VEX Source Registry UI/API**
- Add/edit a source: URL/feed/OCI pattern, update cadence, expected issuer(s), allowed formats.
- Define trust policy per source (thresholds, allowed statuses, expiry, overrides).
2. **Issuer enrollment & key lifecycle**
- Capture: issuer identity, trust anchor, rotation, revocation/deny-list, “break-glass disable.”
3. **Operational status**
- Source health: last fetch, last verified doc, signature failures, schema failures, drift.
**Why it matters:** customers will only operationalize VEX at scale if they can **govern it like a dependency feed**, not like a manual exception list.
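A sketch of a single registry entry capturing the onboarding fields above; the source, URLs, issuer identity, and field names are all hypothetical:

```python
# Hypothetical registry entry for one VEX source; every field name and value
# here is illustrative, including the feed URL and issuer identity.
vex_source = {
    "name": "ubuntu-security-vex",
    "channel": "https://vex.example.org/ubuntu/feed.json",   # feed URL or OCI reference pattern
    "formats": ["openvex"],                                   # allowed document formats
    "update_cadence": "daily",
    "expected_issuers": [
        {"subject": "security@ubuntu.example", "trust_anchor": "ubuntu-root"},
    ],
    "trust_policy": {
        "min_trust_score": 70,                                # gating threshold
        "allowed_statuses": ["not_affected", "affected", "fixed", "under_investigation"],
        "statement_max_age_days": 365,                        # expiry
    },
    "enabled": True,                                          # supports break-glass disable
}
```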
### 3.2 Make “verification” visible, not implied
If users can't see it, they won't trust it, and auditors won't accept it.
**Minimum UX per VEX document/statement:**
- Verification status: **Verified / Unverified / Failed**
- Issuer identity: who signed it (and via what trust anchor)
- Format + schema validation status (an OpenVEX JSON schema exists and is recommended for validation)
- Freshness: timestamp, last updated
- Product mapping coverage: “X of Y products matched to SBOM/components”
### 3.3 Provide “trust score explanations” as a primary UI primitive
Trust scoring must not feel like a magic number.
**UX requirement:** every trust score shows a **breakdown** (e.g., Identity 30/30, Authority 20/25, Freshness 8/10, Evidence quality 6/10…).
This is both:
- a user adoption requirement (security teams will challenge it), and
- a moat hardener (competitors rarely expose scoring mechanics).
### 3.4 Define policy experiences that force deterministic coupling
You are not building a “VEX viewer.” You are building **decisioning**.
Policies must allow:
- “Accept VEX only if verified AND trust score ≥ threshold”
- “Accept Not Affected only if justification/impact statement exists”
- “If conflicting VEX exists, resolve by trust-weighted precedence”
- “For unverified VEX, treat status as Under Investigation (or Unknown), not Not Affected”
This aligns with the CSAF VEX profile's expectation that *known_not_affected* carries an impact statement (machine-readable flag or human-readable justification).
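A sketch of how these rules could be expressed declaratively; the keys and values are illustrative, not a shipped policy language:

```python
# Illustrative policy bundle; the rule names and semantics are assumptions.
vex_acceptance_policy = {
    "accept_vex_only_if": {"verified": True, "min_trust_score": 70},
    "not_affected_requires": ["justification_or_impact_statement"],
    "conflict_resolution": "trust_weighted_precedence",
    "unverified_vex_status_downgrade": "under_investigation",   # never treated as not_affected
}
```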
### 3.5 Ship “audit export” as a product feature, not a report
Auditors want to know:
- which VEX claims were applied,
- who asserted them,
- what trust policy allowed them,
- and what was the resulting decision.
ENISA's SBOM guidance explicitly emphasizes “historical snapshots” and “evidence chain integrity” as success criteria for SBOM/VEX integration programs.
So your product needs:
- exportable evidence bundles (machine-readable)
- signed verdicts linked to the artifact
- replay semantics (“recompute this exact decision later”)
### 3.6 MVP scoping: start with sources that prove the model
For early product proof, prioritize sources that:
- are official,
- have consistent structure,
- publish frequently,
- contain configuration nuance.
Example: Ubuntu publishes VEX following OpenVEX, emphasizing exploitability in specific configurations and providing official distribution points (tarball + GitHub).
This gives you a clean first dataset for verification/trust scoring behaviors.
---
## 4) Development Manager guidelines
### 4.1 Architect it as a pipeline with hard boundaries
Do not mix verification, scoring, and decisioning in one component. You need isolatable, testable stages.
**Recommended pipeline stages:**
1. **Ingest**
- Fetch from registry/OCI
- Deduplicate by content hash
2. **Parse & normalize**
- Convert OpenVEX / CSAF VEX / CycloneDX VEX into a **canonical internal VEX model**
- Note: OpenVEX explicitly calls out that CycloneDX VEX uses different status/justification labels and may need translation.
3. **Verify (cryptographic + semantic)**
4. **Trust score (pure function)**
5. **Conflict resolve**
6. **Decision**
7. **Attest + persist snapshot**
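A minimal sketch of these stage boundaries, assuming each stage is a separate pure function with its own tests; names and signatures are illustrative, not an existing Stella Ops API:

```python
# Stage boundaries as independently testable functions; bodies are stubs because
# only the interfaces matter here. All names and types are illustrative.

def ingest(source_config) -> list[bytes]:
    """Fetch documents from a registered source and deduplicate by content digest."""
    ...

def normalize(raw_doc: bytes) -> list["VexStatement"]:
    """Parse OpenVEX / CSAF VEX / CycloneDX VEX into the canonical internal model."""
    ...

def verify(raw_doc: bytes, trust_anchors) -> "VerificationResult":
    """Cryptographic + semantic verification facts for one document."""
    ...

def trust_score(doc_digest: str, verification, source_config, algo_version: str,
                evaluation_timestamp: str) -> "TrustScore":
    """Pure function of explicit inputs; no hidden I/O (see 4.3)."""
    ...

def resolve_conflicts(statements, scores, policy) -> list["VexStatement"]:
    """Deterministic, policy-driven precedence across conflicting statements (see 4.5)."""
    ...

def decide(sbom, findings, statements, scores, policy, snapshot) -> "Decision":
    """The single deterministic engine that produces the verdict (see section 6)."""
    ...

def attest_and_persist(decision, snapshot) -> "Attestation":
    """Sign the verdict and store the content-addressed snapshot (see section 7)."""
    ...
```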
### 4.2 Verification must include both cryptography and semantics
#### Cryptographic verification (minimum bar)
- Verify signature/attestation against expected issuer identity.
- Validate certificate/identity chains per customer trust anchors.
- Support OCI-attached artifacts and “signature-of-signature” patterns (Sigstore describes countersigning: signature artifacts can themselves be signed).
#### Semantic verification (equally important)
- Schema validation (OpenVEX provides JSON schema guidance).
- Vulnerability identifier validity (CVE/aliases)
- Product reference validity (e.g., purl)
- Statement completeness rules:
- “Not affected” must include rationale; the CSAF VEX profile requires an impact statement for known_not_affected in flags or threats.
- Cross-check the statement scope to known SBOM/components:
- If the VEX references products that do not exist in the artifact SBOM, the claim should not affect the decision (or should reduce trust sharply).
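A hedged sketch of these semantic checks; it redefines a minimal `Statement` type so the example is self-contained, and the specific rule set is illustrative:

```python
import re
from dataclasses import dataclass

CVE_RE = re.compile(r"^CVE-\d{4}-\d{4,}$")

@dataclass
class Statement:
    vulnerability_id: str
    products: tuple[str, ...]        # purls
    status: str                      # "not_affected", "affected", ...
    justification: str | None

def semantic_issues(stmt: Statement, sbom_purls: set[str]) -> list[str]:
    """Return semantic problems; an empty list means the statement can feed the decision."""
    issues = []
    if not CVE_RE.match(stmt.vulnerability_id):
        issues.append("vulnerability identifier is not a well-formed CVE id")
    if stmt.status == "not_affected" and not stmt.justification:
        issues.append("not_affected without justification/impact statement")
    if not any(p in sbom_purls for p in stmt.products):
        issues.append("no referenced product matches the artifact SBOM")
    return issues
```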
### 4.3 Trust scoring must be deterministic by construction
If trust scoring varies between runs, you cannot produce replayable, attestable decisions.
**Rules for determinism:**
- Trust score is a **pure function** of:
- VEX document hash
- verification result
- source configuration (immutable version)
- scoring algorithm version
- evaluation timestamp (explicit input, included in snapshot)
- Never call external services during scoring unless responses are captured and hashed into the snapshot.
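A sketch of that contract, assuming dict-shaped inputs; the point is that every input is explicit and hashed into the result, so the score is reproducible by construction:

```python
import hashlib
import json

def trust_score(vex_doc_digest: str,
                verification_result: dict,
                source_config: dict,
                scoring_algorithm_version: str,
                evaluation_timestamp: str) -> dict:
    """Pure function of its explicit inputs: no clock reads, no network, no randomness."""
    # Hash every input so the score can be tied back to exactly what produced it.
    inputs_digest = hashlib.sha256(json.dumps({
        "doc": vex_doc_digest,
        "verification": verification_result,
        "source": source_config,
        "algo": scoring_algorithm_version,
        "at": evaluation_timestamp,
    }, sort_keys=True).encode()).hexdigest()
    # The actual rubric (see section 5) would be applied here; one placeholder term:
    score = 25 if verification_result.get("signature_verified") else 0
    return {"score": score,
            "inputs_digest": inputs_digest,
            "algorithm_version": scoring_algorithm_version,
            "evaluated_at": evaluation_timestamp}
```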
### 4.4 Implement two trust concepts: Source Trust and Statement Quality
Do not overload one score to do everything.
- **Source Trust**: “how much do we trust the issuer/channel?”
- **Statement Quality**: “how well-formed, specific, and justified is this statement?”
You can then combine them:
`TrustScore = f(SourceTrust, StatementQuality, Freshness, TrackRecord)`
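One possible `f`, under the assumption that the components are already expressed on compatible point scales (the additive form and the cap are illustrative):

```python
def combined_trust_score(source_trust: int, statement_quality: int,
                         freshness: int, track_record: int) -> int:
    """Illustrative combination: additive rubric terms, capped at 100."""
    return min(100, source_trust + statement_quality + freshness + track_record)
```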
### 4.5 Conflict resolution must be policy-driven, not hard-coded
Conflicting VEX is inevitable:
- vendor vs distro
- older vs newer
- internal vs external
Resolve via:
- deterministic precedence rules configured per tenant
- trust-weighted tie-breakers
- “newer statement wins” only when issuer is the same or within the same trust class
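A sketch of deterministic resolution over `(statement, score, issued_at, doc_digest)` tuples; the precedence classes, attribute names, and tie-breaker order are assumptions:

```python
def resolve(statements_with_scores, precedence: dict[str, int]) -> dict:
    """Pick one winning statement per vulnerability: precedence class first, then
    trust score, then newest timestamp, then document digest as a final
    deterministic tie-breaker so equal inputs can never flip between runs."""
    def sort_key(item):
        stmt, score, issued_at, doc_digest = item
        return (precedence.get(stmt.issuer_class, 0), score, issued_at, doc_digest)

    winners = {}
    for item in statements_with_scores:
        vuln = item[0].vulnerability_id
        if vuln not in winners or sort_key(item) > sort_key(winners[vuln]):
            winners[vuln] = item
    return winners
```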
### 4.6 Store VEX and decision inputs as content-addressed artifacts
If you want replayability, you must be able to reconstruct the “world state.”
**Persist:**
- VEX docs (by digest)
- verification artifacts (signature bundles, cert chains)
- normalized VEX statements (canonical form)
- trust score + breakdown + algorithm version
- policy bundle + version
- vulnerability DB snapshot identifiers
- decision output + evidence pointers
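A sketch of what the persisted snapshot manifest might look like; the structure, field names, and placeholder digests are all illustrative:

```python
# Everything the decision consumed, referenced by content digest so the exact
# "world state" can be reconstructed later. All values are placeholders.
snapshot_manifest = {
    "vex_documents": ["sha256:<doc-digest-1>", "sha256:<doc-digest-2>"],
    "verification_artifacts": ["sha256:<bundle-digest>"],      # signature bundles, cert chains
    "normalized_statements": "sha256:<canonical-statements-digest>",
    "trust_scores": "sha256:<scores-and-breakdown-digest>",    # includes algorithm version
    "policy_bundle": {"digest": "sha256:<policy-digest>", "version": "<policy-version>"},
    "vuln_db_snapshot": "<vuln-db-snapshot-id>",
    "decision": {
        "digest": "sha256:<decision-digest>",
        "evidence_refs": ["sha256:<doc-digest-1>"],
    },
}
```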
---
## 5) A practical trust scoring rubric you can hand to teams
Use a 0–100 score with defined buckets. The weights below are a starting point; what matters is consistency and explainability (a scoring sketch follows the buckets).
### 5.1 Source Trust (0–60)
1. **Issuer identity verified (0–25)**
- 0 if unsigned/unverifiable
- 25 if signature verified to a known trust anchor
2. **Issuer authority alignment (0–20)**
- 20 if issuer is the product supplier/distro maintainer for that component set
- lower if third party / aggregator
3. **Distribution integrity (0–15)**
- extra credit if the VEX is distributed as an attestation bound to an artifact and/or uses auditable signature patterns (e.g., countersigning).
### 5.2 Statement Quality (0–40)
1. **Scope specificity (0–15)**
- exact product IDs (purl), versions, architectures, etc.
2. **Justification/impact present and structured (0–15)**
- CSAF VEX expects an impact statement for known_not_affected; Ubuntu maps “not_affected” to justifications like `vulnerable_code_not_present`.
3. **Freshness (0–10)**
- based on statement/document timestamps (explicitly hashed into snapshot)
### Score buckets
- **90–100**: Verified + authoritative + high-quality → eligible for gating
- **70–89**: Verified but weaker evidence/scope → eligible with policy constraints
- **40–69**: Mixed/partial trust → informational, not gating by default
- **0–39**: Unverified/low quality → do not affect decisions
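A minimal sketch of the rubric as a pure function with a visible breakdown; the input fields are assumptions, and the weights simply follow the buckets above:

```python
def score_with_breakdown(verification: dict, statement: dict) -> dict:
    """Score one statement on the 0-100 rubric and keep the breakdown visible."""
    breakdown = {
        # Source Trust (0-60)
        "issuer_identity_verified": 25 if verification.get("signature_verified") else 0,
        "issuer_authority_alignment": 20 if verification.get("issuer_is_supplier") else 5,
        "distribution_integrity": 15 if verification.get("attested_to_artifact") else 0,
        # Statement Quality (0-40)
        "scope_specificity": 15 if statement.get("exact_purls") else 5,
        "justification_present": 15 if statement.get("justification") else 0,
        "freshness": 10 if statement.get("age_days", 10**6) <= 90 else 0,
    }
    total = sum(breakdown.values())
    if total >= 90:
        bucket = "eligible_for_gating"
    elif total >= 70:
        bucket = "eligible_with_policy_constraints"
    elif total >= 40:
        bucket = "informational_only"
    else:
        bucket = "no_effect_on_decisions"
    return {"total": total, "bucket": bucket, "breakdown": breakdown}
```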
---
## 6) Tight coupling to deterministic decisioning: what “coupling” means in practice
### 6.1 VEX must be an input to the same deterministic evaluation engine that produces the verdict
Do not build “VEX handling” as a sidecar that produces annotations.
**Decision engine inputs must include:**
- SBOM / component graph
- vulnerability findings
- normalized VEX statements
- verification results + trust scores
- tenant policy bundle
- evaluation timestamp + snapshot identifiers
The engine output must include:
- final status per vulnerability (affected/not affected/fixed/under investigation/unknown)
- **why** (evidence pointers)
- the policy rule(s) that caused it
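A sketch of the engine boundary implied by the lists above: one deterministic call over explicitly referenced inputs; type and field names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DecisionInput:
    sbom_digest: str
    findings_digest: str
    vex_statements_digest: str      # normalized statements, by digest
    trust_scores_digest: str
    policy_bundle_digest: str
    evaluation_timestamp: str
    snapshot_id: str

@dataclass(frozen=True)
class VulnVerdict:
    vulnerability_id: str
    status: str                      # affected / not_affected / fixed / under_investigation / unknown
    evidence_refs: tuple[str, ...]   # digests of the VEX docs and findings that drove the outcome
    policy_rules: tuple[str, ...]    # identifiers of the rule(s) that caused it

def decide(inputs: DecisionInput) -> tuple[VulnVerdict, ...]:
    """Deterministic: the same DecisionInput always yields the same verdicts."""
    ...
```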
### 6.2 Default posture: fail-safe, not fail-open
Recommended defaults:
- **Unverified VEX never suppresses vulnerabilities.**
- Trust score below threshold never suppresses.
- “Not affected” without justification/impact statement never suppresses.
This is aligned with CSAF VEX expectations and avoids the easiest suppression attack vector.
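These defaults reduce to a single guard that must pass before any statement is allowed to suppress a finding; a sketch, with the threshold and field names as assumptions:

```python
def may_suppress(statement: dict, verification: dict, trust: dict,
                 min_trust_score: int = 70) -> bool:
    """Fail-safe gate: any missing precondition keeps the finding visible.
    ("fixed" status would be handled by a separate, equally guarded rule.)"""
    return (
        verification.get("signature_verified", False)     # unverified never suppresses
        and trust.get("total", 0) >= min_trust_score       # low trust never suppresses
        and statement.get("status") == "not_affected"
        and bool(statement.get("justification"))           # no justification, no suppression
    )
```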
### 6.3 Make uncertainty explicit
If VEX conflicts or is low trust, your decisioning must produce explicit states like:
- “Unknown (insufficient trusted VEX)”
- “Under Investigation”
That is consistent with common VEX status vocabulary and avoids false certainty.
---
## 7) Tight coupling to attestations: what to attest, when, and why
### 7.1 Attest **decisions**, not just documents
Competitors already sign SBOMs. Your moat is signing the **verdict** with the evidence chain.
Each signed verdict should bind:
- subject artifact digest (container/image/package)
- decision output (pass/fail/etc.)
- hashes of:
- VEX docs used
- verification artifacts
- trust scoring breakdown
- policy bundle
- vulnerability DB snapshot identifiers
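A sketch of what the signed verdict could bind, shown as the statement payload before signing; the field names and the predicate type URI are illustrative, not a published schema:

```python
# Illustrative in-toto-style statement payload for a risk verdict; everything that
# influenced the decision is referenced by digest so the verdict is replayable.
verdict_statement = {
    "subject": [{"name": "registry.example/app", "digest": {"sha256": "<image-digest>"}}],
    "predicateType": "https://example.org/risk-verdict/v1",   # hypothetical URI
    "predicate": {
        "decision": "pass",
        "vexDocuments": ["sha256:<vex-doc-digest>"],
        "verificationArtifacts": ["sha256:<bundle-digest>"],
        "trustScoring": {"digest": "sha256:<breakdown-digest>", "algorithmVersion": "<version>"},
        "policyBundle": "sha256:<policy-digest>",
        "vulnDbSnapshot": "<snapshot-id>",
    },
}
```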
### 7.2 Make attestations replayable
Your attestation must contain enough references (digests) that the system can:
- re-run the decision in an air-gapped environment
- obtain the same outputs
This aligns with “historical snapshots” / “evidence chain integrity” expectations in modern SBOM programs.
### 7.3 Provide two attestations (recommended)
1. **VEX intake attestation** (optional but powerful)
- “We ingested and verified this VEX doc from issuer X under policy Y.”
2. **Risk verdict attestation** (core differentiator)
- “Given SBOM, vulnerabilities, verified VEX, and policy snapshot, the artifact is acceptable/unacceptable.”
Sigstore's countersigning concept illustrates that you can add layers of trust over artifacts/signatures; your verdict is the enterprise-grade layer.
---
## 8) “Definition of Done” checklists (use in roadmaps)
### PM DoD for VEX Trust (ship criteria)
- A customer can onboard a VEX source and see issuer identity + verification state.
- Trust score exists with a visible breakdown and policy thresholds.
- Policies can gate on trust score + verification.
- Audit export: per release, show which VEX claims affected the final decision.
### DM DoD for Deterministic + Attestable
- Same inputs → identical trust score and decision (golden tests).
- All inputs content-addressed and captured in a snapshot bundle.
- Attestation includes digests of all relevant inputs and a decision summary.
- No network dependency at evaluation time unless recorded in snapshot.
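The first two DM criteria are straightforward to pin down with golden tests; a sketch, assuming the illustrative `decide` engine from the earlier sketches and a hypothetical `load_inputs` helper that rehydrates a snapshot bundle:

```python
def test_decision_is_reproducible(snapshot_bundle, expected_verdict):
    """Golden test: replaying a recorded snapshot must reproduce the exact verdict."""
    first = decide(load_inputs(snapshot_bundle))    # illustrative helpers, see sections 4 and 6
    second = decide(load_inputs(snapshot_bundle))
    assert first == second                          # determinism across runs
    assert first == expected_verdict                # matches the golden / attested verdict
```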
---
## 9) Metrics that prove you differentiated
Track these from the first pilot:
1. **% of decisions backed by verified VEX** (not just present)
2. **% of “not affected” outcomes with cryptographic verification + justification**
3. **Replay success rate** (recompute verdict from snapshot)
4. **Time-to-audit** (minutes to produce evidence chain for a release)
5. **False suppression rate** (should be effectively zero with fail-safe defaults)