save progress
Here’s a practical way to make Stella Ops truly **offline‑ready** while upgrading provenance and compliance: combine **DSSE signing** with **in‑toto link attestations**, and cache entries in a **local Rekor‑style Merkle log** so builds remain provable even when air‑gapped—nicely aligned with the eIDAS 2.0 shift toward Qualified Trust Services. ([GitHub][1])

**Why these pieces**

* **DSSE**: a simple, envelope‑style signature that authenticates payload + type; widely used for attestations (e.g., cosign). Works well in offline flows. ([GitHub][1])
* **in‑toto**: records step‑by‑step build “link” metadata (materials, command, products) you can verify later. v1.0 attestation framework is stable. ([in-toto][2])
* **Rekor‑style transparency log**: append‑only Merkle tree with inclusion/consistency proofs; you can run it privately/offline and shard as it grows. ([Sigstore][3])
* **eIDAS 2.0 context**: new implementing acts and timelines tighten expectations on trust services and evidence—your local, verifiable log + signatures help map to those controls. ([Fabasoft][4])

**Minimal design (deterministic, air‑gapped)**

1. **Per‑step capture**

   * Wrap each build step with `in-toto-run` to emit **link** attestations (DSSE‑wrapped). Pin inputs/outputs by digest; record exact argv/env. ([in-toto][2])

2. **Deterministic DSSE**

   * Generate DSSE envelopes from normalized JSON (no clocks/paths). Sign with offline keys (HSM/PKCS#11 or file keys) and tag with predicate type (SLSA provenance, scan evidence, policy results). ([GitHub][1])

3. **Local transparency log**

   * Store every envelope (and SBOM/VEX) in a **local Rekor clone** (same API & Merkle proofs). Enable periodic **sharding** and snapshot the tree head into your evidence bundle. ([Sigstore][5])

4. **Proof bundle**

   * For each artifact, export: artifact digest, DSSE envelopes, in‑toto links, Rekor **inclusion + consistency proofs** (tree size, root). Verifiers can check integrity without the network. ([Su3][6])

5. **Online/Offline bridge (optional)**

   * When connected, mirror your local tree to a public/partner log; when fully air‑gapped, use a **witness** or transfer pack to sync later. ([Sigstore][7])
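Step 2’s determinism can be sketched as follows, assuming canonical JSON plus DSSE’s pre‑authentication encoding (PAE). The HMAC signer here is a stand‑in for the real HSM/PKCS#11 or file‑key signer, not cosign’s actual API:

```python
import base64
import hashlib
import hmac
import json

def canonical_json(obj) -> bytes:
    # Deterministic serialization: sorted keys, no whitespace, no timestamps/paths.
    return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode()

def pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE Pre-Authentication Encoding: "DSSEv1 <len> <type> <len> <payload>".
    t = payload_type.encode()
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

def dsse_envelope(payload_type: str, payload_obj, key: bytes, keyid: str = "offline-key"):
    # Placeholder signer: HMAC-SHA256 stands in for an HSM-backed Ed25519/ECDSA key.
    body = canonical_json(payload_obj)
    sig = hmac.new(key, pae(payload_type, body), hashlib.sha256).digest()
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(body).decode(),
        "signatures": [{"keyid": keyid, "sig": base64.b64encode(sig).decode()}],
    }
```

Because the payload is canonicalized and carries no clocks or paths, re-running the same step yields a byte‑identical envelope, which is what makes the log entries reproducible.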

**Developer checklist (Stella Ops modules)**

* **Attestor/Authority**: DSSE signers, key policy (offline HSM first, PQ‑ready later). ([GitHub][1])
* **Builder/Router**: inject `in-toto-run` wrappers; emit link predicates; stamp build IDs. ([in-toto][2])
* **Ledger**: private Rekor‑compatible service (API parity, Merkle proofs, sharding). ([GitHub][8])
* **Verifier/Policy Engine**: verify DSSE, validate in‑toto layout, enforce “must‑have links,” and check Rekor proofs before promotion. ([in-toto][9])
* **Compliance**: map evidence to eIDAS 2.0/QTSP expectations (immutability, auditability, key control, incident reporting). ([Fabasoft][4])

**CLI flow (sketch)**

* `stella build --record` → emits DSSE+in‑toto links
* `stella attestor sign --dsse file.json` → writes envelope
* `stella ledger put *.dsse` → returns entry IDs + proofs
* `stella verify --artifact <digest> --bundle <proofs.tgz>` → offline verify DSSE, in‑toto layout, Merkle proofs (inclusion/consistency)

**UX nudge**

* Show a **“Provenance Card”** on each artifact: green checks for DSSE, required links present, Rekor inclusion proof verified, and a “clipboard‑copy” of the tree head for audit packets. (When online, add “mirrored to public log” badge.)

If you want, I can draft the DSSE predicates we’ll use (build, scan, policy), the Rekor‑compatible schema for the local log, and a tiny verifier in C# to validate DSSE + Merkle proofs offline.

[1]: https://github.com/secure-systems-lab/dsse "DSSE: Dead Simple Signing Envelope"
[2]: https://in-toto.readthedocs.io/en/latest/command-line-tools/in-toto-run.html "in-toto-run — in-toto 3.0.0 documentation"
[3]: https://docs.sigstore.dev/about/faq/ "Frequently asked questions"
[4]: https://www.fabasoft.com/en/news/eidas-new-rules-digital-signatures "eIDAS 2.0: New rules for digital signatures"
[5]: https://docs.sigstore.dev/logging/sharding/ "Sharding"
[6]: https://su3.io/posts/witnessing-sigstore-from-ethereum "Witnessing Sigstore's transparency log from the Ethereum ..."
[7]: https://docs.sigstore.dev/logging/cli/ "CLI"
[8]: https://github.com/SigStore/rekor/blob/main/openapi.yaml "openapi.yaml - sigstore/rekor"
[9]: https://in-toto.io/docs/specs/ "Specifications"

---

Here’s a quick heads‑up that saves a *ton* of pain when sorting package versions on RHEL/Fedora/SUSE‑style systems: **never compare RPM versions as plain strings.** RPM compares **EVR** — `Epoch:Version-Release` — left‑to‑right, and if epochs differ, it stops right there. Missing epoch is treated as `0`. Backports (e.g., old Version with higher Release) and vendor epochs will break naive compares. Use an **rpmvercmp‑equivalent** and persist versions as a 3‑tuple `(epoch, version, release)`. ([RPM][1])

**Why this matters**

* `1:1.0-1` **>** `0:2.0-100` because `1` (epoch) beats everything after. ([RPM][1])
* Fedora/Red Hat guidelines explicitly say EVR ordering governs upgrade paths; epochs are the most significant input and shouldn’t be removed once added. ([Fedora Docs][2])

**Correct approach (any language)**

* Parse to **NEVRA** (Name, Epoch, Version, Release, Arch), then compare by **EVR** using rpm’s algorithm; don’t roll your own string logic. ([Docs.rs][3])
* If you can’t link against librpm, use a well‑known **rpmvercmp** implementation for your stack. Python and PHP have ready helpers. ([PyPI][4])

**Drop‑in options**

* **Python**: `rpm-vercmp` (pure Python) for EVR compares. Store `epoch` as int (default `0`), `version`/`release` as strings, and call the comparator. ([PyPI][4])
* **.NET/C#**: no official rpmvercmp, but mirror the spec: split EVR, compare epochs numerically; for `version`/`release`, compare segment‑by‑segment using rpm rules (alphanumeric runs; numeric segments compare as integers; tildes sort before anything, etc.). (Spec summary in rpm‑version(7).) ([RPM][1])
* **Rust/Go**: model NEVRA (existing crates/docs show structure) and wire a comparator consistent with rpmvercmp. ([Docs.rs][3])
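As a concrete reference for the “mirror the spec” route, here is the segment algorithm from rpm‑version(7) sketched in Python; porting it to C# is mechanical. (Names are ours, and the newer `^` caret rule is omitted for brevity.)

```python
import re

def rpmvercmp(a: str, b: str) -> int:
    """rpm's segment comparison: returns -1, 0, or 1."""
    while a or b:
        # Separator characters only delimit segments; drop them.
        a = re.sub(r"^[^a-zA-Z0-9~]+", "", a)
        b = re.sub(r"^[^a-zA-Z0-9~]+", "", b)
        # Tilde sorts before everything, including the end of the string.
        if a.startswith("~") or b.startswith("~"):
            if not a.startswith("~"):
                return 1
            if not b.startswith("~"):
                return -1
            a, b = a[1:], b[1:]
            continue
        if not a or not b:
            break
        if a[0].isdigit():
            # Numeric runs compare as integers; a numeric run beats an alpha run.
            ma, mb = re.match(r"[0-9]+", a), re.match(r"[0-9]*", b)
            if not mb.group():
                return 1
            na, nb = int(ma.group()), int(mb.group())
            if na != nb:
                return 1 if na > nb else -1
        else:
            # Alpha runs compare lexically; an alpha run loses to a numeric run.
            ma, mb = re.match(r"[a-zA-Z]+", a), re.match(r"[a-zA-Z]*", b)
            if not mb.group():
                return -1
            if ma.group() != mb.group():
                return 1 if ma.group() > mb.group() else -1
        a, b = a[len(ma.group()):], b[len(mb.group()):]
    if not a and not b:
        return 0
    return 1 if a else -1  # whichever side has segments left is newer

def compare_evr(x, y) -> int:
    """Compare (epoch:int, version:str, release:str) tuples."""
    if x[0] != y[0]:
        return 1 if x[0] > y[0] else -1
    return rpmvercmp(x[1], y[1]) or rpmvercmp(x[2], y[2])
```

Note how the epoch short‑circuits everything else, exactly as in the `1:1.0-1 > 0:2.0-100` example above.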

**Practical tips for your pipelines**

* **Persist EVR**, not strings like `1.2.3-4.el9`. Keep `epoch` explicitly; don’t drop `0`. ([Fedora Docs][2])

* **Normalize inputs** (e.g., from `rpm -q` vs `repoquery`) so missing epochs don’t cause mismatches. ([CPAN][5])

* **Backport‑aware sorting**: rely on EVR, *not* semver. Semver comparisons will misorder distro backports. (Fedora docs highlight EVR as authoritative.) ([Red Hat Docs][6])
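Normalization can be one small helper that always materializes the epoch; a sketch, assuming the usual `epoch:version-release` string shape:

```python
def parse_evr(evr: str):
    """Split 'epoch:version-release' into (epoch, version, release);
    a missing epoch is treated as 0, exactly as rpm does."""
    head, sep, rest = evr.partition(":")
    epoch, rest = (int(head), rest) if sep else (0, evr)
    version, _, release = rest.partition("-")
    return (epoch, version, release)
```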

If you want, I can sketch a tiny C# `RpmEvrComparer` tailored to your .NET 10 repos and wire it into your SBOM/VEX flows so Feedser/Vexer sort updates correctly.

[1]: https://rpm.org/docs/6.0.x/man/rpm-version.7 "rpm-version(7)"
[2]: https://docs.fedoraproject.org/en-US/packaging-guidelines/Versioning/ "Versioning Guidelines - Fedora Docs"
[3]: https://docs.rs/rpm/latest/rpm/struct.Nevra.html "Nevra in rpm - Rust"
[4]: https://pypi.org/project/rpm-vercmp/ "rpm-vercmp"
[5]: https://www.cpan.org/modules/by-module/RPM/RPM-NEVRA-v0.0.5.readme "RPM-NEVRA-v0.0.5.readme"
[6]: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/10/html/packaging_and_distributing_software/packaging-software "Chapter 6. Packaging software"

---

Here’s a practical way to make your vulnerability signals “point‑in‑time correct,” so a deployment on (say) 2025‑10‑17 is evaluated against exactly what vendors knew on 2025‑10‑17—not today’s retroactive data.

# Why this matters

Vendor feeds change: CVEs get split/merged, severities are re‑scored, ranges are corrected. If you don’t snapshot advisories with dates, your scanner can’t reproduce past results or pass audits.

# Core ideas (simple terms)

* **Immutable, dated snapshots:** store every advisory feed exactly as fetched, tagged by retrieval timestamp.
* **Point‑in‑time resolution:** when you ask “is v1.2.3 affected as of 2025‑10‑17?”, evaluate only the snapshots at or before that date.
* **Version‑aware schemas:** use formats that encode version ranges precisely so queries are deterministic.

# Feeds to mirror (daily or hourly)

* **OSV** (Open Source Vulnerabilities). Great for ecosystem packages; models affected ranges and fixed versions cleanly.
* **Vendor OVAL** (e.g., Red Hat, Debian, Ubuntu, SUSE). Machine‑readable OS advisories with package/build info.
* Optional: **NVD JSON**, **GitHub Advisories**, **Alpine secdb**, **Oracle ELSA**, etc.

# Minimal storage model (works well with Postgres + object store)

* Object store (e.g., S3 or MinIO):
  * `feeds/{provider}/{name}/YYYY/MM/DD/HH/{hash}.{json|xml}` (immutable blobs)
  * `feeds/{provider}/{name}/LATEST` → pointer to newest blob (for ops only)

* DB tables:

  * `feed_snapshot(id, provider, feed_name, fetched_at, blob_uri, sha256)`
  * `advisory_index(snapshot_id, advisory_id, ecosystem, package, introduced, fixed, last_modified, severity, cwe, cve)`
  * `affected_artifact(advisory_id, package, version_range_expr, fixed_version)`
  * `os_pkg_match(advisory_id, distro, arch, src_pkg, bin_pkg, evr_range)` (for RPM/DPKG EVR)
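The object‑store keys above can be derived from the blob content itself, so a path can never be overwritten with different bytes; a sketch (helper name is ours):

```python
import hashlib
from datetime import datetime, timezone

def blob_path(provider: str, feed_name: str, fetched_at: datetime,
              payload: bytes, ext: str = "json"):
    """Content-addressed key: feeds/{provider}/{name}/YYYY/MM/DD/HH/{hash}.{ext}."""
    digest = hashlib.sha256(payload).hexdigest()
    key = (f"feeds/{provider}/{feed_name}/"
           f"{fetched_at:%Y/%m/%d/%H}/{digest}.{ext}")
    return key, digest  # the digest also goes into feed_snapshot.sha256
```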
# Ingest (pseudo‑ops)

* Fetch → verify checksum → write blob → record `feed_snapshot`.
* Parse to normalized rows:

  * **OSV:** read `affected[].ranges`, `events` (`introduced`, `fixed`, `last_affected`) and `versions[]`.
  * **OVAL:** normalize EVR constraints (RPM `epoch:version-release`, DPKG `version`) to range predicates.

* Never mutate past snapshots; publish a new snapshot on each crawl.

# Point‑in‑time query (deterministic)

```
INPUT: package=name, version=v, ecosystem=e, as_of=DATE
1) S := latest feed_snapshot per provider where fetched_at <= as_of
2) A := advisories from S where package=name AND ecosystem=e
3) Return advisories where version ∈ union(version_range_expr) AND (fixed_version is null OR v < fixed_version)
```

For OS distros, evaluate EVR ranges using distro rules (RPM vs DPKG).
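Step 1 is the part that most often goes wrong in practice; a minimal in‑memory sketch, with field names mirroring the tables above:

```python
from datetime import datetime

def snapshots_as_of(snapshots, as_of):
    """Pick, per provider, the newest feed_snapshot with fetched_at <= as_of.
    snapshots: iterable of dicts with 'provider' and 'fetched_at' (UTC datetimes)."""
    latest = {}
    for snap in snapshots:
        if snap["fetched_at"] > as_of:
            continue  # knowledge from the future must never leak into the answer
        best = latest.get(snap["provider"])
        if best is None or snap["fetched_at"] > best["fetched_at"]:
            latest[snap["provider"]] = snap
    return latest
```

In production this is a `SELECT DISTINCT ON (provider) ... WHERE fetched_at <= :as_of ORDER BY provider, fetched_at DESC` over `feed_snapshot`, but the semantics are the same.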

# Practical commands (curl examples)

* **Mirror OSV (package‑scoped)**

```
curl -s https://api.osv.dev/v1/query \
  -H 'content-type: application/json' \
  -d '{"package":{"ecosystem":"PyPI","name":"requests"}}' \
  > feeds/osv/pypi/2026/01/01/00/requests.json
```

* **Mirror Red Hat OVAL (RHEL 9 example)**

```
curl -s https://www.redhat.com/security/data/oval/v2/RHEL9/oval.xml \
  > feeds/redhat/oval/RHEL9/2026/01/01/00/oval.xml
```

# Version‑range evaluation tips

* **SemVer packages (OSV):** build a small evaluator that applies `introduced/fixed/last_affected` events in order; treat pre‑releases carefully.

* **RPM (RHEL/Fedora):** compare EVR with rpmvercmp semantics; don’t string‑compare.

* **DPKG (Debian/Ubuntu):** implement dpkg version ordering (tilde `~`, epoch).
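The OSV event walk from the first tip can be sketched like this; the dotted‑integer version key is a deliberate placeholder, since real code needs proper SemVer pre‑release handling:

```python
def _key(v: str):
    # Placeholder version key: dotted integers only; swap in a real SemVer parser.
    return tuple(int(part) for part in v.split("."))

def is_affected(version: str, events: list) -> bool:
    """Apply OSV range events ('introduced'/'fixed'/'last_affected') in their
    sorted order and report whether `version` falls in an affected span."""
    vk, affected = _key(version), False
    for event in events:
        if "introduced" in event:
            iv = event["introduced"]
            if iv == "0" or _key(iv) <= vk:
                affected = True   # entering an affected span
        elif "fixed" in event:
            if _key(event["fixed"]) <= vk:
                affected = False  # fixed version is excluded
        elif "last_affected" in event:
            if _key(event["last_affected"]) < vk:
                affected = False  # last_affected is inclusive
    return affected
```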

# Reproducibility features to add in Stella Ops

* Record **crawl manifest** (URLs + hashes); include it in scan attestations (DSSE/in‑toto).

* Store **policy version** and **feed snapshot ids** alongside every scan result.

* Expose an “**As‑Of Date**” selector in UI/CLI:

  * `stella scan --as-of 2025-10-17 --distro rhel:9 --sbom sbom.cdx.json`

* Provide a **diff view**: “why today ≠ last month?” (new advisory added, severity change, range corrected).

# Lightweight retention policy

* Keep **all daily snapshots for 90 days**, then weekly for a year, then monthly afterward.
* Deduplicate blobs by SHA‑256 to save space.

# Failure modes & guardrails

* Vendor feed downtime → fall back to previous snapshot; mark crawl as **degraded**.

* Advisory withdrawals/merges → keep old snapshot; show status change in diffs.

* Timezone drift → store all `fetched_at` in UTC; accept only monotonic timestamps.

If you want, I can sketch:

* a Postgres schema (DDL),

* a tiny C# range evaluator for OSV + RPM/DPKG EVR,

* a cron/Actions workflow to mirror OSV + Red Hat OVAL with immutable paths.