save progress

This commit is contained in:
StellaOps Bot
2026-01-02 15:52:31 +02:00
parent 2dec7e6a04
commit f46bde5575
174 changed files with 20793 additions and 8307 deletions


@@ -0,0 +1,57 @@
Here's a practical way to make StellaOps truly **offline-ready** while upgrading provenance and compliance: combine **DSSE signing** with **in-toto link attestations**, and cache entries in a **local Rekor-style Merkle log** so builds remain provable even when air-gapped—nicely aligned with the eIDAS 2.0 shift toward Qualified Trust Services. ([GitHub][1])
**Why these pieces**
* **DSSE**: a simple, envelope-style signature that authenticates payload + type; widely used for attestations (e.g., cosign). Works well in offline flows. ([GitHub][1])
* **in-toto**: records step-by-step build “link” metadata (materials, command, products) you can verify later. The v1.0 attestation framework is stable. ([in-toto][2])
* **Rekor-style transparency log**: append-only Merkle tree with inclusion/consistency proofs; you can run it privately/offline and shard as it grows. ([Sigstore][3])
* **eIDAS 2.0 context**: new implementing acts and timelines tighten expectations on trust services and evidence—your local, verifiable log + signatures help map to those controls. ([Fabasoft][4])
**Minimal design (deterministic, air-gapped)**
1. **Per-step capture**
* Wrap each build step with `in-toto-run` to emit **link** attestations (DSSE-wrapped). Pin inputs/outputs by digest; record exact argv/env. ([in-toto][2])
2. **Deterministic DSSE**
* Generate DSSE envelopes from normalized JSON (no clocks/paths). Sign with offline keys (HSM/PKCS#11 or file keys) and tag with predicate type (SLSA provenance, scan evidence, policy results). ([GitHub][1])
3. **Local transparency log**
* Store every envelope (and SBOM/VEX) in a **local Rekor clone** (same API & Merkle proofs). Enable periodic **sharding** and snapshot the tree head into your evidence bundle. ([Sigstore][5])
4. **Proof bundle**
* For each artifact, export: artifact digest, DSSE envelopes, in-toto links, Rekor **inclusion + consistency proofs** (tree size, root). Verifiers can check integrity without the network. ([Su3][6])
5. **Online/Offline bridge (optional)**
* When connected, mirror your local tree to a public/partner log; when fully airgapped, use a **witness** or transfer pack to sync later. ([Sigstore][7])
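The deterministic-DSSE step above is small in practice. Here is an illustrative Python sketch (not StellaOps code): the HMAC “signer” is a placeholder for an offline HSM/PKCS#11 key, but the Pre-Authentication Encoding (PAE) and the normalized-JSON payload follow the DSSE spec.

```python
import base64, hashlib, hmac, json

def pae(payload_type: bytes, payload: bytes) -> bytes:
    """DSSE Pre-Authentication Encoding: these are the bytes that get signed."""
    return b"DSSEv1 %d %s %d %s" % (len(payload_type), payload_type,
                                    len(payload), payload)

def dsse_envelope(statement: dict, payload_type: str, key: bytes, keyid: str) -> dict:
    # Normalize JSON deterministically: sorted keys, no whitespace drift.
    payload = json.dumps(statement, sort_keys=True,
                         separators=(",", ":")).encode()
    # Placeholder signer (HMAC-SHA256); a real flow signs with an offline key.
    sig = hmac.new(key, pae(payload_type.encode(), payload), hashlib.sha256).digest()
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"keyid": keyid, "sig": base64.b64encode(sig).decode()}],
    }
```

Because the payload bytes depend only on the statement content (no clocks or paths), two air-gapped builders producing the same statement produce byte-identical envelopes.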
**Developer checklist (StellaOps modules)**
* **Attestor/Authority**: DSSE signers, key policy (offline HSM first, PQ-ready later). ([GitHub][1])
* **Builder/Router**: inject `in-toto-run` wrappers; emit link predicates; stamp build IDs. ([in-toto][2])
* **Ledger**: private Rekor-compatible service (API parity, Merkle proofs, sharding). ([GitHub][8])
* **Verifier/Policy Engine**: verify DSSE, validate the in-toto layout, enforce “must-have links,” and check Rekor proofs before promotion. ([in-toto][9])
* **Compliance**: map evidence to eIDAS 2.0/QTSP expectations (immutability, auditability, key control, incident reporting). ([Fabasoft][4])
**CLI flow (sketch)**
* `stella build --record` → emits DSSE + in-toto links
* `stella attestor sign --dsse file.json` → writes envelope
* `stella ledger put *.dsse` → returns entry IDs + proofs
* `stella verify --artifact <digest> --bundle <proofs.tgz>` → offline verify of DSSE, the in-toto layout, and Merkle proofs (inclusion/consistency)
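The offline Merkle check at the heart of `stella verify` needs no network at all. A minimal sketch of RFC 6962-style inclusion-proof verification (the hashing scheme Rekor uses); function names are illustrative, and consistency proofs follow the same recompute-the-root pattern:

```python
import hashlib

def leaf_hash(data: bytes) -> bytes:
    # RFC 6962 domain separation: 0x00 prefix for leaves, 0x01 for nodes.
    return hashlib.sha256(b"\x00" + data).digest()

def node_hash(left: bytes, right: bytes) -> bytes:
    return hashlib.sha256(b"\x01" + left + right).digest()

def verify_inclusion(index: int, tree_size: int, leaf: bytes,
                     path: list, root: bytes) -> bool:
    """Recompute the tree head from a leaf hash and its audit path."""
    if index >= tree_size:
        return False
    fn, sn, r = index, tree_size - 1, leaf
    for p in path:
        if sn == 0:
            return False
        if fn % 2 == 1 or fn == sn:
            r = node_hash(p, r)
            # When we merged a right-edge node, skip levels with no sibling.
            while fn % 2 == 0 and fn != 0:
                fn >>= 1
                sn >>= 1
        else:
            r = node_hash(r, p)
        fn >>= 1
        sn >>= 1
    return sn == 0 and r == root
```

A verifier only needs the artifact's leaf hash, the audit path, and the signed tree head from the proof bundle; any tampering changes the recomputed root.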
**UX nudge**
* Show a **“Provenance Card”** on each artifact: green checks for DSSE, required links present, Rekor inclusion proof verified, and a clipboard-copy of the tree head for audit packets. (When online, add a “mirrored to public log” badge.)
If you want, I can draft the DSSE predicates we'll use (build, scan, policy), the Rekor-compatible schema for the local log, and a tiny verifier in C# to validate DSSE + Merkle proofs offline.
[1]: https://github.com/secure-systems-lab/dsse?utm_source=chatgpt.com "DSSE: Dead Simple Signing Envelope"
[2]: https://in-toto.readthedocs.io/en/latest/command-line-tools/in-toto-run.html?utm_source=chatgpt.com "in-toto-run — in-toto 3.0.0 documentation"
[3]: https://docs.sigstore.dev/about/faq/?utm_source=chatgpt.com "Frequently asked questions"
[4]: https://www.fabasoft.com/en/news/eidas-new-rules-digital-signatures?utm_source=chatgpt.com "eIDAS 2.0: New rules for digital signatures"
[5]: https://docs.sigstore.dev/logging/sharding/?utm_source=chatgpt.com "Sharding"
[6]: https://su3.io/posts/witnessing-sigstore-from-ethereum?utm_source=chatgpt.com "Witnessing Sigstore's transparency log from the Ethereum ..."
[7]: https://docs.sigstore.dev/logging/cli/?utm_source=chatgpt.com "CLI"
[8]: https://github.com/SigStore/rekor/blob/main/openapi.yaml?utm_source=chatgpt.com "openapi.yaml - sigstore/rekor"
[9]: https://in-toto.io/docs/specs/?utm_source=chatgpt.com "Specifications"


@@ -0,0 +1,32 @@
Here's a quick heads-up that saves a *ton* of pain when sorting package versions on RHEL/Fedora/SUSE-style systems: **never compare RPM versions as plain strings.** RPM compares **EVR** (`Epoch:Version-Release`) — left-to-right, and if epochs differ, it stops right there. A missing epoch is treated as `0`. Backports (e.g., an old Version with a higher Release) and vendor epochs will break naive compares. Use an **rpmvercmp-equivalent** and persist versions as a 3-tuple `(epoch, version, release)`. ([RPM][1])
**Why this matters**
* `1:1.0-1` **>** `0:2.0-100` because `1` (epoch) beats everything after. ([RPM][1])
* Fedora/Red Hat guidelines explicitly say EVR ordering governs upgrade paths; epochs are the most significant input and shouldn't be removed once added. ([Fedora Docs][2])
**Correct approach (any language)**
* Parse to **NEVRA** (Name, Epoch, Version, Release, Arch), then compare by **EVR** using rpm's algorithm; don't roll your own string logic. ([Docs.rs][3])
* If you can't link against librpm, use a well-known **rpmvercmp** implementation for your stack. Python and PHP have ready helpers. ([PyPI][4])
**Drop-in options**
* **Python**: `rpm-vercmp` (pure Python) for EVR compares. Store `epoch` as an int (default `0`), `version`/`release` as strings, and call the comparator. ([PyPI][4])
* **.NET/C#**: no official rpmvercmp, but mirror the spec: split EVR, compare epochs numerically; for `version`/`release`, compare segment-by-segment using rpm rules (alphanumeric runs; numeric segments compare as integers; tildes sort before anything, etc.). (Spec summary in rpm-version(7).) ([RPM][1])
* **Rust/Go**: model NEVRA (existing crates/docs show structure) and wire a comparator consistent with rpmvercmp. ([Docs.rs][3])
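To make the “mirror the spec” advice concrete, here is the segment logic as a compact Python sketch that ports directly to C#. It handles tilde pre-releases and numeric-beats-alpha precedence; the caret operator (rpm ≥ 4.14) is omitted, so treat it as a starting point rather than a complete rpmvercmp:

```python
import re

def rpmvercmp(a: str, b: str) -> int:
    """Segment-wise RPM version compare; returns -1, 0, or 1."""
    if a == b:
        return 0
    ai = bi = 0
    while ai < len(a) or bi < len(b):
        # Skip separators (anything not alphanumeric or '~').
        while ai < len(a) and not (a[ai].isalnum() or a[ai] == "~"):
            ai += 1
        while bi < len(b) and not (b[bi].isalnum() or b[bi] == "~"):
            bi += 1
        # Tilde sorts before everything, including end of string.
        a_tilde = ai < len(a) and a[ai] == "~"
        b_tilde = bi < len(b) and b[bi] == "~"
        if a_tilde or b_tilde:
            if a_tilde and b_tilde:
                ai += 1; bi += 1
                continue
            return -1 if a_tilde else 1
        if ai >= len(a) or bi >= len(b):
            break
        # Take a maximal run of digits or letters from each side.
        if a[ai].isdigit():
            sa = re.match(r"\d+", a[ai:]).group()
            mb = re.match(r"\d+", b[bi:])
            if not mb:
                return 1            # numeric segment beats alphabetic
            sb = mb.group()
            c = (int(sa) > int(sb)) - (int(sa) < int(sb))
        else:
            sa = re.match(r"[A-Za-z]+", a[ai:]).group()
            mb = re.match(r"[A-Za-z]+", b[bi:])
            if not mb:
                return -1           # alphabetic segment loses to numeric
            sb = mb.group()
            c = (sa > sb) - (sa < sb)
        if c:
            return c
        ai += len(sa); bi += len(sb)
    if ai >= len(a) and bi >= len(b):
        return 0
    return 1 if ai < len(a) else -1  # remaining characters win

def evr_cmp(a, b) -> int:
    """Compare (epoch:int, version:str, release:str) tuples the RPM way."""
    if a[0] != b[0]:
        return 1 if a[0] > b[0] else -1   # epoch dominates everything
    return rpmvercmp(a[1], b[1]) or rpmvercmp(a[2], b[2])
```

Note how the epoch example from above falls out: `evr_cmp((1, "1.0", "1"), (0, "2.0", "100"))` is positive because the epoch comparison ends the decision before versions are even looked at.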
**Practical tips for your pipelines**
* **Persist EVR**, not strings like `"1.2.3-4.el9"`. Keep `epoch` explicit; don't drop `0`. ([Fedora Docs][2])
* **Normalize inputs** (e.g., from `rpm -q` vs `repoquery`) so missing epochs don't cause mismatches. ([CPAN][5])
* **Backport-aware sorting**: rely on EVR, *not* semver. Semver comparisons will misorder distro backports. (Fedora docs highlight EVR as authoritative.) ([Red Hat Docs][6])
If you want, I can sketch a tiny C# `RpmEvrComparer` tailored to your .NET 10 repos and wire it into your SBOM/VEX flows so Feedser/Vexer sort updates correctly.
[1]: https://rpm.org/docs/6.0.x/man/rpm-version.7?utm_source=chatgpt.com "rpm-version(7)"
[2]: https://docs.fedoraproject.org/en-US/packaging-guidelines/Versioning/?utm_source=chatgpt.com "Versioning Guidelines - Fedora Docs"
[3]: https://docs.rs/rpm/latest/rpm/struct.Nevra.html?utm_source=chatgpt.com "Nevra in rpm - Rust"
[4]: https://pypi.org/project/rpm-vercmp/?utm_source=chatgpt.com "rpm-vercmp"
[5]: https://www.cpan.org/modules/by-module/RPM/RPM-NEVRA-v0.0.5.readme?utm_source=chatgpt.com "RPM-NEVRA-v0.0.5.readme"
[6]: https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/10/html/packaging_and_distributing_software/packaging-software?utm_source=chatgpt.com "Chapter 6. Packaging software"


@@ -0,0 +1,99 @@
Here's a practical way to make your vulnerability signals “point-in-time correct,” so a deployment on (say) 2025-10-17 is evaluated against exactly what vendors knew on 2025-10-17—not today's retroactive data.
# Why this matters
Vendor feeds change: CVEs get split/merged, severities are rescored, ranges are corrected. If you don't snapshot advisories with dates, your scanner can't reproduce past results or pass audits.
# Core ideas (simple terms)
* **Immutable, dated snapshots:** store every advisory feed exactly as fetched, tagged by retrieval timestamp.
* **Point-in-time resolution:** when you ask “is v1.2.3 affected as of 2025-10-17?”, evaluate only the snapshots at or before that date.
* **Version-aware schemas:** use formats that encode version ranges precisely so queries are deterministic.
# Feeds to mirror (daily or hourly)
* **OSV** (Open Source Vulnerabilities). Great for ecosystem packages; models affected ranges and fixed versions cleanly.
* **Vendor OVAL** (e.g., Red Hat, Debian, Ubuntu, SUSE). Machine-readable OS advisories with package/build info.
* Optional: **NVD JSON**, **GitHub Advisories**, **Alpine secdb**, **Oracle ELSA**, etc.
# Minimal storage model (works well with Postgres + object store)
* Object store (e.g., S3 or MinIO):
* `feeds/{provider}/{name}/YYYY/MM/DD/HH/{hash}.{json|xml}` (immutable blobs)
* `feeds/{provider}/{name}/LATEST` → pointer to newest blob (for ops only)
* DB tables:
* `feed_snapshot(id, provider, feed_name, fetched_at, blob_uri, sha256)`
* `advisory_index(snapshot_id, advisory_id, ecosystem, package, introduced, fixed, last_modified, severity, cwe, cve)`
* `affected_artifact(advisory_id, package, version_range_expr, fixed_version)`
* `os_pkg_match(advisory_id, distro, arch, src_pkg, bin_pkg, evr_range)` (for RPM/DPKG EVR)
# Ingest (pseudoops)
* Fetch → verify checksum → write blob → record `feed_snapshot`.
* Parse to normalized rows:
* **OSV:** read `affected[].ranges`, `events` (`introduced`, `fixed`, `last_affected`) and `versions[]`.
* **OVAL:** normalize EVR constraints (RPM `epoch:version-release`, DPKG `version`) to range predicates.
* Never mutate past snapshots; publish a new snapshot on each crawl.
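The fetch-and-record step can be sketched in a few lines. `write_snapshot` below is a hypothetical helper, not part of any existing API; it content-addresses the blob under the immutable dated path from the storage model and returns a dict shaped like a `feed_snapshot` row:

```python
import hashlib, os
from datetime import datetime, timezone

def write_snapshot(root: str, provider: str, feed: str, raw: bytes) -> dict:
    """Store a fetched feed blob at an immutable, content-addressed path."""
    sha = hashlib.sha256(raw).hexdigest()
    now = datetime.now(timezone.utc)
    # feeds/{provider}/{name}/YYYY/MM/DD/HH/{hash}.json
    rel = now.strftime(f"feeds/{provider}/{feed}/%Y/%m/%d/%H/{sha}.json")
    path = os.path.join(root, rel)
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if not os.path.exists(path):      # blobs are immutable: never overwrite
        with open(path, "wb") as f:
            f.write(raw)
    # Mirrors the feed_snapshot row in the storage model above.
    return {"provider": provider, "feed_name": feed,
            "fetched_at": now.isoformat(), "blob_uri": rel, "sha256": sha}
```

Because the filename is the SHA-256, re-crawling identical content deduplicates automatically, and a changed feed always lands at a new path.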
# Point-in-time query (deterministic)
```
INPUT: package=name, version=v, ecosystem=e, as_of=DATE
1) S := latest feed_snapshot per provider where fetched_at <= as_of
2) A := advisories from S where package=name AND ecosystem=e
3) Return advisories where v ∈ union(version_range_expr) AND (fixed_version is null OR v < fixed_version)
```
For OS distros, evaluate EVR ranges using distro rules (RPM vs DPKG).
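Step 1 of the query — pinning each provider to its newest snapshot at or before the cutoff — is a single pass over the snapshot index. A sketch, assuming `fetched_at` and `as_of` are ISO-8601 strings at the same granularity so lexicographic comparison is chronological:

```python
def snapshots_as_of(snapshots: list, as_of: str) -> dict:
    """Newest feed_snapshot per provider with fetched_at <= as_of."""
    best = {}
    for s in snapshots:
        if s["fetched_at"] > as_of:   # ISO-8601 strings sort chronologically
            continue
        cur = best.get(s["provider"])
        if cur is None or s["fetched_at"] > cur["fetched_at"]:
            best[s["provider"]] = s
    return best
```

In Postgres the same selection is a `DISTINCT ON (provider) ... ORDER BY provider, fetched_at DESC` over rows with `fetched_at <= as_of`.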
# Practical commands (curl examples)
* **Mirror OSV (package-scoped)**
```
curl -s https://api.osv.dev/v1/query \
-H 'content-type: application/json' \
-d '{"package":{"ecosystem":"PyPI","name":"requests"}}' \
> feeds/osv/PyPI/2026/01/01/00/requests.json
```
* **Mirror Red Hat OVAL (RHEL 9 example)**
```
curl -s https://www.redhat.com/security/data/oval/v2/RHEL9/oval.xml \
> feeds/redhat/oval/RHEL9/2026/01/01/00/oval.xml
```
# Version-range evaluation tips
* **SemVer packages (OSV):** build a small evaluator that applies `introduced/fixed/last_affected` events in order; treat pre-releases carefully.
* **RPM (RHEL/Fedora):** compare EVR with rpmvercmp semantics; don't string-compare.
* **DPKG (Debian/Ubuntu):** implement dpkg version ordering (tilde `~`, epoch).
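The OSV evaluator reduces to an ordered sweep over `ranges[].events`. A minimal sketch with a naive dotted-numeric version key — real code needs full SemVer handling (pre-releases, build metadata), and `last_affected` is omitted here for brevity:

```python
def _key(v: str) -> tuple:
    # Naive dotted-numeric key; replace with a real SemVer parser in production.
    return tuple(int(p) for p in v.split("."))

def osv_affected(version: str, events: list) -> bool:
    """Apply OSV 'introduced'/'fixed' events in order to classify a version."""
    v, affected = _key(version), False
    for ev in events:  # events assumed sorted ascending by version
        if "introduced" in ev:
            bound = ev["introduced"]
            if bound == "0" or _key(bound) <= v:
                affected = True
        elif "fixed" in ev and _key(ev["fixed"]) <= v:
            affected = False
    return affected
```

For example, with events `[{"introduced": "0"}, {"fixed": "2.28.0"}]`, version `2.27.1` is affected and `2.28.0` is not — which is exactly the deterministic answer the point-in-time query needs for step 3.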
# Reproducibility features to add in StellaOps
* Record **crawl manifest** (URLs + hashes); include it in scan attestations (DSSE/intoto).
* Store **policy version** and **feed snapshot ids** alongside every scan result.
* Expose an “**As-Of Date**” selector in UI/CLI:
* `stella scan --as-of 2025-10-17 --distro rhel:9 --sbom sbom.cdx.json`
* Provide a **diff view**: “why today ≠ last month?” (new advisory added, severity change, range corrected).
# Lightweight retention policy
* Keep **all daily snapshots for 90 days**, then weekly for a year, then monthly afterward.
* Deduplicate blobs by SHA256 to save space.
# Failure modes & guardrails
* Vendor feed downtime → fall back to previous snapshot; mark crawl as **degraded**.
* Advisory withdrawals/merges → keep old snapshot; show status change in diffs.
* Timezone drift → store all `fetched_at` in UTC; accept only monotonic timestamps.
If you want, I can sketch:
* a Postgres schema (DDL),
* a tiny C# range evaluator for OSV + RPM/DPKG EVR,
* a cron/Actions workflow to mirror OSV + Red Hat OVAL with immutable paths.