Remove global.json and add extensive documentation for SBOM-first supply chain spine, diff-aware releases, binary intelligence graph, reachability proofs, smart-diff evidence, risk budget visualization, and weighted confidence for VEX sources. Introduce solution file for Concelier web service project.

This commit is contained in:
StellaOps Bot
2025-12-26 11:27:18 +02:00
parent 4f6dd4de83
commit e59b5e257c
11 changed files with 695 additions and 143790 deletions


@@ -1,9 +0,0 @@
{
"solution": {
"path": "src/Concelier/StellaOps.Concelier.sln",
"projects": [
"StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj",
"__Tests/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj"
]
}
}


@@ -0,0 +1,175 @@
Here's a simple, practical way to think about an **SBOM-first, VEX-ready supply-chain spine** and the **evidence graph + smart-diff** you can build on top of it, starting from zero and ending with reproducible, signed decisions.
# SBOM-first spine (VEX-ready)
**Goal:** make the SBOM the canonical graph of "what's inside," then layer signed evidence (build, scans, policy) so every verdict is portable, replayable, and auditable across registries.
**Core choices**
* **Canonical graph:** treat **CycloneDX 1.6** and **SPDX 3.x** as first-class. Keep both in sync; normalize component IDs (PURL/CPE), hashes, licenses, and relationships.
* **Attestations:** use **in-toto + DSSE** for all lifecycle facts:
* build (SLSA provenance),
* scan results (vuln, secrets, IaC, reachability),
* policy evaluation (allow/deny, risk budgets, exceptions).
* **Storage/transport:** publish everything as **OCI-attached artifacts** via **OCI Referrers**:
`image:tag` → SBOM (spdx/cdx), VEX, SARIF, provenance, policy verdicts, exception notes, each a referrer with media type + signature.
* **Signatures:** cosign/sigstore (or your regional crypto: eIDAS/FIPS/GOST/SM) for **content-addressed** blobs.
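To make the attestation plumbing concrete, here is a minimal sketch of the DSSE envelope and in-toto Statement shapes as C# records; the `StellaOps.Evidence` namespace and record names are illustrative assumptions, not an existing module API.
```csharp
using System.Collections.Generic;
using System.Text.Json.Serialization;

namespace StellaOps.Evidence;

// DSSE envelope: the payload is a base64-encoded in-toto Statement, and each
// signature is computed over the PAE encoding of (payloadType, payload).
public sealed record DsseEnvelope(
    [property: JsonPropertyName("payloadType")] string PayloadType,   // e.g. "application/vnd.in-toto+json"
    [property: JsonPropertyName("payload")] string PayloadBase64,
    [property: JsonPropertyName("signatures")] IReadOnlyList<DsseSignature> Signatures);

public sealed record DsseSignature(
    [property: JsonPropertyName("keyid")] string KeyId,
    [property: JsonPropertyName("sig")] string SignatureBase64);

// in-toto Statement: binds one or more subjects (image / SBOM digests) to a typed
// predicate, which is where the build, scan, or policy facts live.
public sealed record Statement<TPredicate>(
    [property: JsonPropertyName("_type")] string Type,                 // "https://in-toto.io/Statement/v1"
    [property: JsonPropertyName("subject")] IReadOnlyList<Subject> Subjects,
    [property: JsonPropertyName("predicateType")] string PredicateType,
    [property: JsonPropertyName("predicate")] TPredicate Predicate);

public sealed record Subject(
    [property: JsonPropertyName("name")] string Name,
    [property: JsonPropertyName("digest")] IReadOnlyDictionary<string, string> Digest);
```
The same envelope type carries every lifecycle fact; only `predicateType` and the predicate payload change per step.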
**Minimum viable workflow**
1. **Build step**
* Produce identical SBOMs in CycloneDX and SPDX.
* Emit SLSA-style provenance attestation.
2. **Scan step(s)**
* OS + language deps + container layers; add **reachability proofs** where possible.
* Emit one **scan attestation per tool** (don't conflate).
3. **Policy step**
* Evaluate policies (e.g., OPA/Rego or your lattice rules) **against the SBOM graph + scan evidence**.
* Emit a **signed policy verdict attestation** (pass/fail + reasons + unknowns count).
4. **Publish**
* Push image, then push SBOMs, VEX, scan attestations, policy verdicts as **OCI referrers**.
5. **Verify / consume**
* Pull the image's **referrer set**; verify signatures; reconstruct graph locally; **replay** the policy evaluation deterministically.
**Data model tips**
* Stable identifiers: PURLs for packages, digests for layers, BuildID for binaries.
* Edges: `component→dependsOn`, `component→vulnerability`, `component→evidence(attestation)`, `component→policyClaim`.
* Keep **time (as-of)** and **source** on every node/edge for replay.
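One possible C# shape for that node/edge model, with identity, as-of time, and source on every edge; the names are assumptions, not an existing schema.
```csharp
using System;

namespace StellaOps.EvidenceGraph;

public enum EdgeKind { DependsOn, Vulnerability, Evidence, PolicyClaim }

// Nodes are addressed by stable identifiers: PURL for packages, digest for layers, BuildID for binaries.
public sealed record NodeRef(string Kind, string Id);   // e.g. ("package", "pkg:pypi/xyz@1.2.3")

// Every edge records when it was asserted and by which source, so the graph
// can be reconstructed "as of" any point in time for a deterministic replay.
public sealed record Edge(
    NodeRef From,
    NodeRef To,
    EdgeKind Kind,
    DateTimeOffset AsOf,
    string Source,             // tool, feed, or attestation that asserted the edge
    string? AttestationDigest  // sha256 of the backing DSSE envelope, if any
);
```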
# Evidence graph + smart-diff
**Goal:** persist an **explainability graph** (findings ↔ components ↔ provenance ↔ policies) and compute **signed delta-verdicts** on diffs to drive precise impact analysis and quiet the noise.
**What to store**
* **Provenance:** who built it, from what, when (commit, builder, materials).
* **Findings:** CVEs, misconfigs, secrets, license flags, each with source tool, version, rule, confidence, timestamp.
* **Policies & verdicts:** rule set version, inputs hashes, outcome, rationale.
* **Reachability subgraphs:** the minimal path proving exploitability (e.g., symbol → function → package → process start).
**Smart-diff algorithm (high level)**
* Compare two images (or SBOM graphs) **by component identity + version + hash**.
* For each change class:
* **Added/removed/changed component**
* **New/cleared/changed finding**
* **Changed reachability path**
* **Changed policy version/inputs**
* Re-evaluate only the affected subgraph; produce a **Delta Verdict**:
* `status`: safer / risk-equal / risk-higher
* `why`: list of net-new reachable vulns, removed reachable vulns, policy/exception impacts
* `evidenceRefs`: hashes of attestations used
* **Sign the delta verdict (DSSE)** and publish it as an **OCI referrer** too.
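A minimal C# sketch of the delta computation above, assuming the component sets and reachable-vuln sets have already been extracted from both SBOM graphs; all type and method names are illustrative.
```csharp
using System.Collections.Generic;
using System.Linq;

namespace StellaOps.SmartDiff;

public sealed record ComponentKey(string Purl, string Version, string Digest);

public sealed record DeltaVerdict(
    string Status,                        // "safer" | "risk-equal" | "risk-higher"
    IReadOnlyList<string> Why,
    IReadOnlyList<string> EvidenceRefs);  // hashes of the attestations used

public static class SmartDiff
{
    // Compare by component identity + version + hash and by reachable vulnerabilities,
    // then summarize the affected subgraph as a delta verdict ready to be signed.
    public static DeltaVerdict Compute(
        ISet<ComponentKey> oldComponents, ISet<ComponentKey> newComponents,
        ISet<string> oldReachableVulns, ISet<string> newReachableVulns,
        IEnumerable<string> evidenceRefs)
    {
        var vulnsAdded = newReachableVulns.Except(oldReachableVulns).ToList();
        var vulnsRemoved = oldReachableVulns.Except(newReachableVulns).ToList();

        var status = vulnsAdded.Count > 0 ? "risk-higher"
                   : vulnsRemoved.Count > 0 ? "safer"
                   : "risk-equal";

        var why = vulnsAdded.Select(v => $"net-new reachable {v}")
            .Concat(vulnsRemoved.Select(v => $"no longer reachable {v}"))
            .Concat(newComponents.Except(oldComponents).Select(c => $"component added {c.Purl}"))
            .ToList();

        return new DeltaVerdict(status, why, evidenceRefs.ToList());
    }
}
```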
**UX essentials**
* Artifact page shows: **"Evidence Stack"** (SBOM, scans, VEX, policy, provenance) with green checks for signatures.
* **Smart-diff view:** left vs right image → "net-new reachable CVEs (+3)", "downgraded risk (-1)", with drill-downs to the exact path/evidence.
* **Explain button:** expands to show **why** a CVE is (not) applicable (feature flag off, code path unreachable, kernel mitigation present, etc.).
* **Replay badge:** "Deterministic ✅" (input hashes match; verdict reproducible).
# Implementation checklist (team-ready)
**Pipelines**
* [ ] Build: emit SBOM (CDX + SPDX), SLSA provenance (in-toto/DSSE), sign all.
* [ ] Scan: OS + language + config + (optional) eBPF/runtime; one attestation per tool.
* [ ] Policy: evaluate rules → signed verdict attestation; include **unknowns count**.
* [ ] Publish: push all as OCI referrers; enable verification gate on pull/deploy.
**Schema & IDs**
* [ ] Normalize component IDs (PURL/CPE) + strong hashes; map binaries (BuildID → package).
* [ ] Evidence graph store: Postgres (authoritative) + cache (Valkey) for queries.
* [ ] Index by image digest; maintain **as-of** snapshots for time-travel.
**Determinism**
* [ ] Lock feeds, rule versions, tool versions; record all **input digests**.
* [ ] Provide a `replay.yaml` manifest capturing inputs → expected verdict hash.
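One way the `replay.yaml` contents could be modeled and checked, sketched as C# records; the field names are assumptions and the manifest format itself is yours to define.
```csharp
using System;
using System.Collections.Generic;

namespace StellaOps.Replay;

// Every input is pinned by digest; the expected verdict digest lets a verifier
// confirm a bit-for-bit reproduction of the decision.
public sealed record ReplayManifest(
    string SubjectDigest,                       // image digest the verdict applies to
    IReadOnlyDictionary<string, string> Feeds,  // feed name -> sha256 of the locked snapshot
    IReadOnlyDictionary<string, string> Tools,  // tool name -> pinned version
    string PolicyDigest,                        // sha256 of the rule bundle
    string ExpectedVerdictDigest);              // sha256 of the verdict attestation

public static class ReplayCheck
{
    // Same inputs -> same verdict: recompute the verdict from the pinned inputs
    // and compare digests; any mismatch is a bug, not noise.
    public static bool Verify(ReplayManifest manifest, string recomputedVerdictDigest) =>
        string.Equals(manifest.ExpectedVerdictDigest, recomputedVerdictDigest, StringComparison.Ordinal);
}
```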
**Security & sovereignty**
* [ ] Pluggable crypto: eIDAS/FIPS/GOST/SM; offline bundle export/import.
* [ ] Air-gapped profile: Postgres-only with documented trade-offs.
**APIs & types (suggested media types)**
* `application/vnd.cyclonedx+json`
* `application/spdx+json`
* `application/vnd.in-toto+json; statement=provenance|scan|policy`
* `application/vnd.stella.verdict+json` (your signed verdict/delta)
**Minimal object examples (sketches)**
*Attestation (scan)*
```json
{
"type": "https://in-toto.io/Statement/v1",
"predicateType": "https://stella.dev/scan/v1",
"subject": [{"name": "registry/app@sha256:…", "digest": {"sha256": "..."} }],
"predicate": {
"tool": {"name": "scannerX", "version": "1.4.2"},
"inputs": {"sbom": "sha256:…", "db": "sha256:…"},
"findings": [{"id": "CVE-2025-1234", "component": "pkg:pypi/xyz@1.2.3", "severity": "HIGH"}]
}
}
```
*Policy verdict (replayable)*
```json
{
"type": "https://in-toto.io/Statement/v1",
"predicateType": "https://stella.dev/verdict/v1",
"subject": [{"name": "registry/app@sha256:…"}],
"predicate": {
"policy": {"id": "prod.v1.7", "hash": "sha256:…"},
"inputs": {"sbom": "sha256:…", "scans": ["sha256:…","sha256:…"]},
"unknowns": 2,
"decision": "allow",
"reasons": [
"CVE-2025-1234 not reachable (path pruned)",
"License policy ok"
]
}
}
```
*Delta verdict (smartdiff)*
```json
{
"predicateType": "https://stella.dev/delta-verdict/v1",
"predicate": {
"from": "sha256:old", "to": "sha256:new",
"impact": "risk-higher",
"changes": {
"componentsAdded": ["pkg:apk/openssl@3.2.1-r1"],
"reachableVulnsAdded": ["CVE-2025-2222"]
},
"evidenceRefs": ["sha256:scanA", "sha256:policyV1"]
}
}
```
# Operating rules you can adopt today
* **Everything is evidence.** If it influenced a decision, it's an attestation you can sign and attach.
* **Same inputs → same verdict.** If not, treat it as a bug.
* **Unknowns budgeted by policy.** E.g., "fail prod if unknowns > 0; warn in dev."
* **Diffs decide deployments.** Gate on the **delta verdict**, not raw CVE counts.
* **Portable by default.** If you move registries, your decisions move with the image via referrers.
If you want, I can turn this into starter repos (SBOM/attestation schemas, OCI-referrer publish/verify CLI, and a smart-diff service stub in .NET 10) so your team can plug it into your current pipelines without a big rewrite.


@@ -0,0 +1,61 @@
Here's a tight, practical pattern you can lift for StellaOps: **make exceptions first-class, auditable objects** and **gate releases on risk deltas (diff-aware checks)**, mirroring what top scanners do, but with stronger evidence and auto-revalidation.
### 1) Exceptions as auditable objects
Competitor cues
* **Snyk** lets users ignore issues with a required reason and optional expiry (UI/CLI; `.snyk` policy). Ignored items can auto-resurface when a fix exists. ([Snyk User Docs][1])
* **Anchore** models **policy allowlists** (named sets of exceptions) applied during evaluation/mapping. ([Anchore Documentation][2])
* **Prisma Cloud** supports vulnerability rules/CVE exceptions to soften or block findings. ([Prisma Cloud][3])
What to ship (StellaOps)
* **Exception entity**: `{scope, subject(CVE/pkg/path), reason(text), evidenceRefs[], createdBy, createdAt, expiresAt?, policyBinding, signature}`
* **Signed rationale + evidence**: require a justification plus **linked proofs** (attestation IDs, VEX note, reachability subgraph slice). Store as an **OCI-attached attestation** to the SBOM/VEX artifact.
* **Auto-expiry & revalidation gates**: scheduler retests on expiry or when feeds mark "fix available / EPSS ↑ / reachability ↑"; on failure, **flip the gate to "needs re-review"** and notify.
* **Audit view**: timeline of exception lifecycle; show who/why, evidence, and rechecks; exportable as an "audit pack."
* **Policy hooks**: "allow only if: reason ∧ evidence present ∧ max TTL ≤ X ∧ owner = team Y."
* **Inheritance**: repo → image → env scoping with explicit shadowing (surface conflicts).
### 2) Diff-aware release gates ("delta verdicts")
Competitor cues
* **Snyk PR Checks** scan *changes* and gate merges with a severity threshold; results show issue diffs per PR. ([Snyk User Docs][4])
What to ship (StellaOps)
* **Graph deltas**: on each commit/image, compute `Δ(SBOM graph, reachability graph, VEX claims)`.
* **Delta verdict** (signed, replayable): `PASS | WARN | FAIL` + **proof links** to:
* attestation bundle (in-toto/DSSE),
* **reachability subgraph** showing new execution paths to vulnerable symbols,
* policy evaluation trace.
* **Side-by-side UI**: "before vs after" risks; highlight *newly reachable* vulns and *fixed/mitigated* ones; one-click **Create Exception** (enforces reason+evidence+TTL).
* **Enforcement knobs**: per-branch/env risk budgets; fail if `unknowns > N` or if any exception lacks evidence/TTL.
* **Supply chain scope**: run the same gate on base-image bumps and dependency updates.
### Minimal data model (sketch)
* `Exception`: id, scope, subject, reason, evidenceRefs[], ttl, status, sig.
* `DeltaVerdict`: id, baseRef, headRef, changes[], policyOutcome, proofs[], sig.
* `Proof`: type(`attestation|reachability|vex|log`), uri, hash.
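A C# rendering of that sketch; the record and field names are assumptions (`RiskException` is used only to avoid clashing with `System.Exception`).
```csharp
using System;
using System.Collections.Generic;

namespace StellaOps.Exceptions;

public enum ProofType { Attestation, Reachability, Vex, Log }

public sealed record Proof(ProofType Type, Uri Location, string Sha256);

// Exceptions are first-class, auditable objects: no reason + evidence + TTL, no exception.
public sealed record RiskException(
    Guid Id,
    string Scope,                     // e.g. "image:repo/app:tag"
    string Subject,                   // CVE / package / path
    string Reason,
    IReadOnlyList<Proof> EvidenceRefs,
    TimeSpan Ttl,
    string Status,                    // active | expired | needs-re-review
    byte[] Signature);

// Signed, replayable outcome of a diff-aware gate between two refs.
public sealed record DeltaVerdict(
    Guid Id,
    string BaseRef,
    string HeadRef,
    IReadOnlyList<string> Changes,
    string PolicyOutcome,             // PASS | WARN | FAIL
    IReadOnlyList<Proof> Proofs,
    byte[] Signature);
```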
### CLI / API ergonomics (examples)
* `stella exception create --cve CVE-2025-1234 --scope image:repo/app:tag --reason "Feature disabled" --evidence att:sha256:… --ttl 30d`
* `stella verify delta --from abc123 --to def456 --policy prod.json --print-proofs`
### Guardrails out of the box
* **No silent ignores**: exceptions are visible in results (action changes, not deletion)—same spirit as Anchore. ([Anchore Documentation][2])
* **Resurface on fix**: if a fix exists, force re-review (parity with Snyk behavior). ([Snyk User Docs][1])
* **Rule-based blocking**: allow "hard/soft fail" like Prisma enforcement. ([Prisma Cloud][5])
If you want, I can turn this into a short product spec (API + UI wireframe + policy snippets) tailored to your StellaOps modules (Policy Engine, Vexer, Attestor).
[1]: https://docs.snyk.io/manage-risk/prioritize-issues-for-fixing/ignore-issues?utm_source=chatgpt.com "Ignore issues | Snyk User Docs"
[2]: https://docs.anchore.com/current/docs/overview/concepts/policy/policies/?utm_source=chatgpt.com "Policies and Evaluation"
[3]: https://docs.prismacloud.io/en/compute-edition/22-12/admin-guide/vulnerability-management/configure-vuln-management-rules?utm_source=chatgpt.com "Vulnerability management rules - Prisma Cloud Documentation"
[4]: https://docs.snyk.io/scan-with-snyk/pull-requests/pull-request-checks?utm_source=chatgpt.com "Pull Request checks | Snyk User Docs"
[5]: https://docs.prismacloud.io/en/enterprise-edition/content-collections/application-security/risk-management/monitor-and-manage-code-build/enforcement?utm_source=chatgpt.com "Enforcement - Prisma Cloud Documentation"


@@ -0,0 +1,145 @@
Here's a compact blueprint for a **binary-level knowledge base** that maps ELF BuildIDs / PE signatures to vulnerable functions, patch lineage, and reachability hints, so your scanner can act like a provenance-aware "binary oracle," not just a CVE lookup.
---
# Why this matters (in plain terms)
* **Same version ≠ same risk.** Distros (and vendors) frequently **backport** fixes without bumping versions. Only the **binary** tells the truth.
* **Function-level matching** turns noisy "package has CVE" into precise "this exact function range is vulnerable in your binary."
* **Reachability hints** cut triage noise by ranking vulns the code path can actually hit at runtime.
---
# Minimal starter schema (MVP)
Keep it tiny so it grows with real evidence:
**artifacts**
* `id (pk)`
* `platform` (linux, windows)
* `format` (ELF, PE)
* `build_id` (ELF `.note.gnu.build-id`), `pdb_guid` / `pe_imphash` (Windows)
* `sha256` (whole-file)
* `compiler_fingerprint` (e.g., `gcc-13.2`, `msvc-19.39`)
* `source_hint` (optional: package name/version if known)
**symbols**
* `artifact_id (fk)`
* `symbol_name`
* `addr_start`, `addr_end` (or RVA for PE)
* `section`, `file_offset` (optional)
**vuln_segments**
* `id (pk)`
* `cve_id` (CVE-YYYY-NNNN)
* `function_signature` (normalized name + arity)
* `byte_sig` (short stable pattern around the vulnerable hunk)
* `patch_sig` (pattern from fixed hunk)
* `evidence_ref` (link to patch diff, commit, or NVD note)
* `backport_flag` (bool)
* `introduced_in`, `fixed_in` (semver-ish text; note "backport" when used)
**matches**
* `artifact_id (fk)`, `vuln_segment_id (fk)`
* `match_type` (`byte`, `range`, `symbol`)
* `confidence` (0-1)
* `explain` (why we think this matches)
**reachability_hints**
* `artifact_id (fk)`, `symbol_name`
* `hint_type` (`imported`, `exported`, `hot`, `ebpf_seen`, `graph_core`)
* `weight` (0-100)
---
# How the oracle answers "Am I affected?"
1. **Identify**: Look up by BuildID / PE signature; fall back to file hash.
2. **Locate**: Map symbols → address ranges; scan for `byte_sig`/`patch_sig`.
3. **Decide**:
* if `patch_sig` present ⇒ **Not affected (backported)**.
* if `byte_sig` present and reachable (weighted) ⇒ **Affected (prioritized)**.
* if only `byte_sig` present, unreachable ⇒ **Affected (low priority)**.
* if neither ⇒ **Unknown**.
4. **Explain**: Attach `evidence_ref`, the exact offsets, and the reason (match_type + reachability).
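A minimal C# sketch of just the decision step, assuming the lookup and signature scan have already produced the `patch_sig`/`byte_sig` hits and a weighted reachability score; all names are illustrative.
```csharp
namespace StellaOps.BinaryOracle;

public enum OracleVerdict { NotAffectedBackported, AffectedPrioritized, AffectedLowPriority, Unknown }

public sealed record OracleAnswer(OracleVerdict Verdict, string Explain);

public static class Oracle
{
    // Mirrors the decision table above: patch_sig beats byte_sig (backport wins),
    // and reachability only changes priority, never the affected/not-affected call.
    public static OracleAnswer Decide(bool patchSigFound, bool byteSigFound, double reachabilityWeight)
    {
        if (patchSigFound)
            return new OracleAnswer(OracleVerdict.NotAffectedBackported,
                "patch_sig present: the fix is in the binary (backported)");

        if (byteSigFound && reachabilityWeight > 0)
            return new OracleAnswer(OracleVerdict.AffectedPrioritized,
                $"byte_sig present and reachable (weight {reachabilityWeight:0.##})");

        if (byteSigFound)
            return new OracleAnswer(OracleVerdict.AffectedLowPriority,
                "byte_sig present but no reachability hint");

        return new OracleAnswer(OracleVerdict.Unknown, "no byte_sig or patch_sig matched");
    }
}
```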
---
# Ingestion pipeline (no humans in the loop)
* **Fingerprinting**: extract BuildID / PE GUID; compute `sha256`.
* **Symbol map**: parse DWARF/PDB if present; else fall back to heuristics (ELF `symtab`, PE exports).
* **Patch intelligence**: auto-diff upstream commits (plus major distros) → synthesize short **byte signatures** around changed hunks (stable across relocations).
* **Evidence links**: store URLs/commit IDs for cross-audit.
* **Noise control**: only accept a vuln signature if it hits N≥3 independent binaries across distros (tunable).
---
# Deterministic verdicts (fit to StellaOps)
* **Inputs**: `(artifact fingerprint, vuln_segments@version, reachability@policy)`
* **Output**: **Signed OCI attestation** "verdict.json" (same inputs → same verdict).
* **Replay**: keep rule bundle & feed hashes for audit.
* **Backport precedence**: `patch_sig` beats package version claims every time.
---
# Fast path to MVP (2 sprints)
* Add a **BuildID/PE indexer** to Scanner.
* Teach Feedser/Vexer to ingest `vuln_segments` (with `byte_sig`/`patch_sig`).
* Implement matching + verdict attestation; surface **"Backported & Safe"** vs **"Affected & Reachable"** badges in UI.
* Seed DB with 10 high-impact CVEs (OpenSSL, zlib, xz, glibc, libxml2, curl, musl, busybox, OpenSSH, sudo).
---
# Example: SQL skeleton (Postgres)
```sql
create table artifacts(
  id bigserial primary key,
  platform text, format text,
  build_id text, pdb_guid text, pe_imphash text,
  sha256 bytea not null unique,
  compiler_fingerprint text, source_hint text
);
create table symbols(
  artifact_id bigint references artifacts(id),
  symbol_name text, addr_start bigint, addr_end bigint,
  section text, file_offset bigint
);
create table vuln_segments(
  id bigserial primary key,
  cve_id text, function_signature text,
  byte_sig bytea, patch_sig bytea,
  evidence_ref text, backport_flag boolean,
  introduced_in text, fixed_in text
);
create table matches(
  artifact_id bigint references artifacts(id),
  vuln_segment_id bigint references vuln_segments(id),
  match_type text, confidence real, explain text
);
create table reachability_hints(
  artifact_id bigint references artifacts(id),
  symbol_name text, hint_type text, weight int
);
```
---
If you want, I can:
* drop in a tiny **.NET 10** matcher (ELF/PE parsers + byte-window scanner),
* wire verdicts as **OCI attestations** in your current pipeline,
* and prep the first **10 CVE byte/patch signatures** to seed the DB.


@@ -0,0 +1,71 @@
Here's a crisp way to think about "reachability" that makes triage sane and auditable: **treat it like a cryptographic proof**, a minimal, reproducible chain that shows *why* a vuln can (or cannot) hit runtime.
### The idea (plain English)
* **Reachability** asks: "Could data flow from an attacker to the vulnerable code path during real execution?"
* **Proof-carrying reachability** says: "Don't just say yes/no; hand me a *proof chain* I can re-run."
Think: the shortest, lossless breadcrumb trail from entrypoint → sinks, with the exact build + policy context that made it true.
### What the “proof” contains
1. **Scope hash**: content digests for artifact(s) (image layers, SBOM nodes, commit IDs, compiler flags).
2. **Policy hash**: the decision rules used (e.g., "prod disallows unknowns > 0"; "vendor VEX outranks distro unless backport tag present").
3. **Graph snippet**: the *minimal subgraph* (call/data/control edges) that connects:
* external entrypoint(s) → user-controlled sources → validators (if any) → vulnerable function(s)/sink(s).
4. **Conditions**: feature flags, env vars, platform guards, version ranges, eBPF-observed edges (if present).
5. **Verdict** (signed): one of {Affected | Not Affected | Under-Constrained}, with reason codes.
6. **Replay manifest**: the inputs needed to recompute the same verdict (feeds, rules, versions, hashes).
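As a sketch, that proof chain could be carried as a simple C# record before being serialized and wrapped in a DSSE envelope; the names and field choices are assumptions, not a fixed schema.
```csharp
using System.Collections.Generic;

namespace StellaOps.Reachability;

public enum ReachabilityVerdict { Affected, NotAffected, UnderConstrained }

// One step in the minimal entrypoint -> sink subgraph (call, data, or control edge).
public sealed record ProofEdge(string From, string To, string Kind);

// The proof-carrying reachability object: everything needed to defend and replay the verdict.
public sealed record ReachabilityProof(
    string ScopeHash,                               // digest over artifacts, SBOM nodes, commits, flags
    string PolicyHash,                              // digest of the decision rules in force
    IReadOnlyList<ProofEdge> GraphSnippet,          // minimal source -> sink chain
    IReadOnlyDictionary<string, string> Conditions, // feature flags, env vars, platform guards
    ReachabilityVerdict Verdict,
    IReadOnlyList<string> ReasonCodes,
    string ReplayManifestDigest);                   // inputs needed to recompute the same verdict
```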
### Why this helps
* **Auditable**: Every "Not Affected" is defensible (no hand-wavy "scanner says so").
* **Deterministic**: Same inputs → same verdict (great for change control and regulators).
* **Compact**: You store only the *minimal subgraph*, not the whole monolith.
### Minimal proof example (sketch)
* Artifact: `svc.payments:1.4.7` (image digest `sha256:…`)
* CVE: `CVE-2024-XYZ` in `libyaml 0.2.5`
* Entry: `POST /import`, body → `YamlDeserializer.Parse`
* Guards: none (no schema/whitelist prior to parse)
* Edge chain: `HttpBody → Parse(bytes) → LoadNode() → vulnerable_path()`
* Condition: feature flag `BULK_IMPORT=true`
* Verdict: **Affected**
* Signed DSSE envelope over {scope hash, policy hash, graph snippet JSON, conditions, verdict}.
### How to build it (practical checklist)
* **During build**
* Emit SBOM (source & binary) with function/file symbols where possible.
* Capture compiler/linker flags; normalize paths; include feature flags' default state.
* **During analysis**
* Static: slice the call graph to the *shortest* source→sink chain; attach type-state facts (e.g., "validated length").
* Deps: map CVEs to precise symbol/ABI surfaces (not just package names).
* Backports: require explicit evidence (patch IDs, symbol presence) before downgrading severity.
* **During runtime (optional but strong)**
* eBPF trace to confirm edges observed; store hashes of kprobes/uprobes programs and sampling window.
* **During decisioning**
* Apply merge policy (vendor VEX, distro notes, internal tests) deterministically; hash the policy.
* Emit one DSSE/attestation per verdict; include replay manifest.
### UI that won't overwhelm
* **Default card**: Verdict + "Why?" (one-line chain) + "Replay" button.
* **Expand**: shows the 5-10 edge subgraph, conditions, and signed envelope.
* **Compare builds**: side-by-side proof deltas (edges added/removed, policy change, backport flip).
### Operating modes
* **Strict** (prod): Unknowns → fail-closed; proofs required for Not Affected.
* **Lenient** (dev): Unknowns tolerated; proofs optional but encouraged; allow "Under-Constrained".
### What to measure
* Proof generation rate, median proof size (KB), replay success %, proof dedup ratio, and "unknowns" burn-down.
If you want, I can turn this into a ready-to-ship spec for StellaOps (attestation schema, JSON examples, API routes, and a tiny .NET verifier).


@@ -0,0 +1,86 @@
Here's a crisp idea you can put to work right away: **treat SBOM diffs as a first-class, signed evidence object**, covering not just "what components changed" but also **VEX claim deltas** and **attestation (in-toto/DSSE) deltas**. This makes vulnerability verdicts **deterministically replayable** and **audit-ready** across release gates.
### Why this matters (plain speak)
* **Less noise, faster go/no-go:** Only re-triage what truly changed (package, reachability, config, or vendor stance), not the whole universe.
* **Deterministic audits:** Same inputs → same verdict. Auditors can replay checks exactly.
* **Tighter release gates:** Policies evaluate the *delta verdict*, not raw scans.
### Evidence model (minimal but complete)
* **Subject:** OCI digest of image/artifact.
* **Baseline:** SBOM-G (graph hash), VEX set hash, policy + rules hash, feed snapshots (CVE JSON digests), toolchain + config hashes.
* **Delta:**
* `components_added/removed/updated` (with semver + source/distro origin)
* `reachability_delta` (edges added/removed in call/file/path graph)
* `settings_delta` (flags, env, CAPs, eBPF signals)
* `vex_delta` (per-CVE claim transitions: *affected → not_affected → fixed*, with reason codes)
* `attestation_delta` (build-provenance step or signer changes)
* **Verdict:** Signed “delta verdict” (allow/block/risk_budget_consume) with rationale pointers into the deltas.
* **Provenance:** DSSE envelope, in-toto link to baseline + new inputs.
### Deterministic replay contract
Pin and record:
* Feed snapshots (CVE/VEX advisories) + hashes
* Scanner versions + rule packs + lattice/policy version
* SBOM generator version + mode (CycloneDX 1.6 / SPDX 3.0.1)
* Reachability engine settings (language analyzers, eBPF taps)
* Merge semantics ID (see below)
Replayer rehydrates these **exact** inputs and must reproduce the same verdict bit-for-bit.
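One way to make that contract checkable, sketched in C#: canonicalize the pinned input digests and hash them, so both the original run and the replay can prove they started from the same inputs. The canonicalization scheme and names are assumptions; any stable canonical form works as long as both sides use it.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

namespace StellaOps.DeltaReplay;

public static class ReplayHash
{
    // Canonicalize pinned inputs as sorted "key=value" lines (ordinal order), then hash.
    // The replay recomputes this digest from its rehydrated inputs; a mismatch means
    // the verdicts are not comparable and the replay must be rejected.
    public static string OfPinnedInputs(IReadOnlyDictionary<string, string> pinnedDigests)
    {
        var canonical = string.Join("\n",
            pinnedDigests
                .OrderBy(kv => kv.Key, StringComparer.Ordinal)
                .Select(kv => $"{kv.Key}={kv.Value}"));

        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return "sha256:" + Convert.ToHexString(hash).ToLowerInvariant();
    }
}
```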
### Merge semantics (stop "vendor > distro > internal" naïveté)
Define a policy-controlled lattice for claims, e.g.:
* **Orderings:** `exploit_observed > affected > under_investigation > fixed > not_affected`
* **Source weights:** vendor, distro, internal SCA, runtime sensor, pentest
* **Conflict rules:** tie-breaks, quorum, freshness windows, required evidence hooks (e.g., "not_affected because feature flag X=off, proven by config attestation Y")
### Where it lives in the product
* **UI:** "Diff & Verdict" panel on each PR/build → shows SBOM/VEX/attestation deltas and the signed delta verdict; one-click export of the DSSE envelope.
* **API/Artifact:** Publish as an **OCI-attached attestation** (`application/vnd.stella.delta-verdict+json`) alongside SBOM + VEX.
* **Pipelines:** Release gate consumes only the delta verdict (fast path); full scan can run asynchronously for deep telemetry.
### Minimal schema sketch (JSON)
```json
{
"subject": {"ociDigest": "sha256:..."},
"inputs": {
"feeds": [{"type":"cve","digest":"sha256:..."},{"type":"vex","digest":"sha256:..."}],
"tools": {"sbomer":"1.6.3","reach":"0.9.0","policy":"lattice-2025.12"},
"baseline": {"sbomG":"sha256:...","vexSet":"sha256:..."}
},
"delta": {
"components": {"added":[...],"removed":[...],"updated":[...]},
"reachability": {"edgesAdded":[...],"edgesRemoved":[...]},
"settings": {"changed":[...]},
"vex": [{"cve":"CVE-2025-1234","from":"affected","to":"not_affected","reason":"config_flag_off","evidenceRef":"att#cfg-42"}],
"attestations": {"changed":[...]}
},
"verdict": {"decision":"allow","riskBudgetUsed":2,"policyId":"lattice-2025.12","explanationRefs":["vex[0]","reachability.edgesRemoved[3]"]},
"signing": {"dsse":"...","signer":"stella-authority"}
}
```
### Rollout checklist (StellaOps framing)
* **Sbomer:** emit a **graph hash** (stable canonicalization) and diff vs the previous SBOM-G.
* **Vexer:** compute VEX claim deltas + reason codes; apply lattice merge; expose `vexDelta[]`.
* **Attestor:** snapshot feed digests, tool/rule versions, and config; produce DSSE bundle.
* **Policy Engine:** evaluate deltas → produce **delta verdict** with strict replay semantics.
* **Router/Timeline:** store delta verdicts as auditable objects; enable "replay build N" button.
* **CLI/CI:** `stella delta-verify --subject <digest> --envelope delta.json.dsse` → must return identical verdict.
### Guardrails
* Canonicalize and sort everything before hashing.
* Record unknowns explicitly and let policy act on them (e.g., "fail if unknowns > N in prod").
* No network during replay except to fetch pinned digests.
If you want, I can draft the precise CycloneDX extension fields + an OCI media type registration, plus .NET 10 interfaces for Sbomer/Vexer/Attestor to emit/consume this today.


@@ -0,0 +1,58 @@
Here's a simple way to make "risk budget" feel like a real, live dashboard rather than a dusty policy, plus the one visualization that best explains "budget burn" to PMs.
### First, quick background (plain English)
* **Risk budget** = how much unresolved risk we're willing to carry for a release (e.g., 100 "risk points").
* **Burn** = how fast we consume that budget as unknowns/alerts pop up, minus how much we “pay back” by fixing/mitigating.
### What to show on the dashboard
1. **Heatmap of Unknowns (Where are we blind?)**
* Rows = components/services; columns = risk categories (vulns, compliance, perf, data, supply-chain).
* Cell value = *unknowns count × severity weight* (unknown ≠ unimportant; it's the most dangerous).
* Click-through reveals: last evidence timestamp, owners, next probe.
2. **Delta Table (Risk Decay per Release)**
* Each release row compares **Before vs After**: total risk, unknowns, known-high, accepted, deferred.
* Include a **"risk retired"** column (points dropped due to fixes/mitigations) and **"risk shifted"** (moved to exceptions).
3. **Exception Ledger (Auditable)**
* Every accepted risk has an ID, owner, expiry, evidence note, and auto-reminder.
### The best single chart for PMs: **Risk Budget Burn-Up**
*(This is the one slide they'll get immediately.)*
* **X-axis:** calendar dates up to code freeze.
* **Y-axis:** risk points.
* **Two lines:**
* **Budget (flat or stepped)** = allowable risk over time (e.g., 100 pts until T-2, then 60).
* **Actual Risk (cumulative)** = unknowns + knowns - mitigations (daily snapshot).
* **Shaded area** between lines = **Headroom** (green) or **Overrun** (red).
* Add **vertical markers** for major changes (feature freeze, pen-test start, dependency bump).
* Add **burn targets** (dotted) to show where you must be each week to land inside budget.
### How to compute the numbers (lightweight)
* **Risk points** = Σ(issue_severity_weight × exposure_factor × evidence_freshness_penalty).
* **Unknown penalty**: if there is no evidence from the last N days, apply a multiplier (e.g., ×1.5).
* **Decay**: when a fix lands *and* evidence is refreshed, subtract points that day.
* **Guardrail**: fail gate if **unknowns > K** *or* **Actual Risk > Budget** within T days of release.
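A small C# sketch of that computation and the gate; the weights, penalty multiplier, and names are assumptions to be tuned per org.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace StellaOps.RiskBudget;

public sealed record Issue(
    string Component, double SeverityWeight, double ExposureFactor,
    bool IsUnknown, DateOnly EvidenceDate);

public static class Burn
{
    // Risk points = sum(severity x exposure x freshness penalty); unknowns and stale
    // evidence (older than maxEvidenceAgeDays) pick up the penalty multiplier.
    public static double RiskPoints(IEnumerable<Issue> issues, DateOnly today,
        int maxEvidenceAgeDays = 14, double stalePenalty = 1.5) =>
        issues.Sum(i =>
        {
            var stale = today.DayNumber - i.EvidenceDate.DayNumber > maxEvidenceAgeDays;
            var penalty = (i.IsUnknown || stale) ? stalePenalty : 1.0;
            return i.SeverityWeight * i.ExposureFactor * penalty;
        });

    // Guardrail: fail the gate when unknowns exceed K or actual risk overruns the budget.
    public static bool GatePasses(IEnumerable<Issue> issues, DateOnly today,
        int maxUnknowns, double budget) =>
        issues.Count(i => i.IsUnknown) <= maxUnknowns
        && RiskPoints(issues, today) <= budget;
}
```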
### Minimal artifacts to ship
* **Schema:** `issue_id, component, category, severity, is_unknown, exposure, evidence_date, status, owner`.
* **Daily snapshot job:** materialize totals + unknowns + mitigations per component.
* **One chart, one table, one heatmap** (don't overdo it).
### Copy-paste labels for the board
* **Top-left KPI:** "Headroom: 28 pts (green)"
* **Badges:** "Unknowns ↑ +6 (24h)", "Risk retired -18 (7d)", "Exceptions expiring: 3"
* **Callout:** "At current burn, overrun in 5 days; pull forward the libX fix or scope-cut Y."
If you want, I can mock this with sample data (CSV → chart) so your team sees exactly how it looks.


@@ -0,0 +1,90 @@
Here's a compact, practical way to rank conflicting vulnerability evidence (VEX) by **freshness vs. confidence**, so your system picks the best truth without hand-holding.
---
# A scoring lattice for VEX sources
**Goal:** Given multiple signals (VEX statements, advisories, bug trackers, scanner detections), compute a single verdict with a transparent score and a proof trail.
## 1) Normalize inputs → "evidence atoms"
For every item, extract:
* **scope** (package@version, image@digest, file hash)
* **claim** (affected, not_affected, under_investigation, fixed)
* **reason** (reachable?, feature flag off, vulnerable code not present, platform not impacted)
* **provenance** (who said it, how it's signed)
* **when** (issued_at, observed_at, expires_at)
* **supporting artifacts** (SBOM ref, in-toto link, CVE IDs, PoC link)
## 2) Confidence (C) and Freshness (F)
**Confidence C (0-1)** (sum the factors that apply; cap at 1):
* **Signature strength:** DSSE + Sigstore/Rekor inclusion (0.35), plus hardware-backed key or org OIDC (0.15)
* **Source reputation:** NVD (0.20), major distro PSIRT (0.20), upstream vendor (0.20), reputable CERT (0.15), small vendor (0.10)
* **Evidence quality:** reachability proof / test (0.25), code diff linking (0.20), deterministic build link (0.15), "reason" present (0.10)
* **Consensus bonus:** ≥2 independent concurring sources (+0.10)
**Freshness F (0-1)** (monotone decay):
* F = exp(-Δdays / τ), with τ tuned per source class (e.g., **τ=30** vendor VEX, **τ=90** NVD, **τ=14** exploit-active feeds).
* **Update reset:** new attestation with same subject resets Δdays.
* **Expiry clamp:** if `now > expires_at`, set F=0.
## 3) Claim strength (S_claim)
Map claim → base weight:
* not_affected (0.9), fixed (0.8), affected (0.7), under_investigation (0.4)
* **Reason adjustments:** reachable? (+0.15 to "affected"), "feature flag off" (+0.10 to "not_affected"), platform mismatch (+0.10), backport patch note (+0.10 if patch commit hash provided)
## 4) Overall score & lattice merge
Per evidence `e`:
**Score(e) = C(e) × F(e) × S_claim(e)**
Then, merge in a **distributive lattice** ordered by:
1. **Claim precedence** (not_affected > fixed > affected > under_investigation)
2. Break ties by **Score(e)**
3. If competing top claims are within ε (e.g., 0.05), **escalate to "disputed"** and surface both with proofs.
**Policy hooks:** allow org-level overrides (e.g., "prod must treat under_investigation as affected unless a reachability=false proof is present").
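A tiny C# evaluator sketch of the scoring and merge. It reads the merge rules as score-dominant with claim precedence as the tie-break and an ε dispute band, which is what the worked example below exercises; all names, default values, and the exact tie-break order are assumptions.
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

namespace StellaOps.TrustAlgebra;

public enum Claim { UnderInvestigation = 0, Affected = 1, Fixed = 2, NotAffected = 3 }

public sealed record EvidenceAtom(string Source, Claim Claim, double Confidence, DateTimeOffset IssuedAt, double TauDays);

public static class Lattice
{
    // Freshness decay: F = exp(-ageDays / tau), clamped to [0, 1].
    public static double Freshness(EvidenceAtom e, DateTimeOffset now) =>
        Math.Clamp(Math.Exp(-(now - e.IssuedAt).TotalDays / e.TauDays), 0.0, 1.0);

    // Score(e) = C(e) x F(e) x S_claim(e)
    public static double Score(EvidenceAtom e, DateTimeOffset now, double claimStrength) =>
        Math.Clamp(e.Confidence, 0.0, 1.0) * Freshness(e, now) * claimStrength;

    // Pick the best-scoring claim; claim precedence breaks exact ties, and top claims
    // within epsilon of each other are surfaced as "disputed" instead of silently resolved.
    public static (Claim Winner, double Score, bool Disputed) Merge(
        IReadOnlyList<EvidenceAtom> atoms,
        IReadOnlyDictionary<Claim, double> claimStrength,
        DateTimeOffset now,
        double epsilon = 0.05)
    {
        var ranked = atoms
            .GroupBy(a => a.Claim)
            .Select(g => (Claim: g.Key, Score: g.Max(a => Score(a, now, claimStrength[a.Claim]))))
            .OrderByDescending(x => x.Score)
            .ThenByDescending(x => (int)x.Claim)   // not_affected > fixed > affected > under_investigation
            .ToList();

        var disputed = ranked.Count > 1 && ranked[0].Score - ranked[1].Score < epsilon;
        return (ranked[0].Claim, ranked[0].Score, disputed);
    }
}
```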
## 5) Worked example: small-vendor Sigstore VEX vs a 6-month-old NVD note
* **Small vendor VEX (signed, Sigstore, reason: code path unreachable, issued 7 days ago):**
C ≈ signature (0.35) + small-vendor source (0.10) + reason (0.10) + evidence (reachability +0.25) = 0.80
F = exp(-7/30) ≈ 0.79
S_claim (not_affected + reason) = 0.9 + 0.10 = 1.0 (cap at 1)
**Score ≈ 0.80 × 0.79 × 1.0 ≈ 0.63**
* **NVD entry (affected; no extra reasoning; last updated 180 days ago):**
C ≈ NVD (0.20) = 0.20
F = exp(-180/90) ≈ 0.14
S_claim (affected) = 0.7
**Score ≈ 0.20 × 0.14 × 0.7 = 0.02**
**Outcome:** vendor VEX decisively wins; lattice yields **not_affected** with linked proofs. If NVD updates tomorrow, its F jumps and the lattice may flip—deterministically.
## 6) Implementation notes (fits StellaOps modules)
* **Where:** run in **scanner.webservice** (per your standing rule), keep Concelier/Excitors as preserve-prune pipes.
* **Storage:** Postgres as SoR; Valkey as cache for score shards.
* **Inputs:** CycloneDX/SPDX IDs, in-toto attestations, Rekor proofs, feed timestamps.
* **Outputs:**
* **Signed "verdict attestation"** (OCI-attached) with input hashes + the chosen path in the lattice.
* **Delta verdicts** when any input changes (freshness decay counts as change).
* **UI:** "Trust Algebra" panel showing (C, F, S_claim), decay timeline, and "why this won."
## 7) Guardrails & ops
* **Replayability:** include τ values, weights, and source catalog in the attested policy so anyone can recompute the same score.
* **Backports:** add a "patch-aware" booster only if the commit hash maps to the shipped build (prove via diff or package changelog).
* **Air-gapped:** mirror Rekor; cache trust anchors; freeze decay at scan time but recompute at policy-evaluation time.
---
If you want, I can drop this into a ready-to-run JSON/YAML policy bundle (with τ/weights defaults) and a tiny C# evaluator stub you can wire into **Policy Engine → Vexer** right away.

File diff suppressed because it is too large


@@ -0,0 +1,9 @@
{
"solution": {
"path": "Concelier/StellaOps.Concelier.sln",
"projects": [
"StellaOps.Concelier.WebService\\StellaOps.Concelier.WebService.csproj",
"__Tests\\StellaOps.Concelier.WebService.Tests\\StellaOps.Concelier.WebService.Tests.csproj"
]
}
}


@@ -1,6 +0,0 @@
{
"sdk": {
"version": "10.0.100",
"rollForward": "latestMinor"
}
}