feat: Implement distro-native version comparison for RPM, Debian, and Alpine packages

- Add RpmVersionComparer for RPM version comparison with epoch, version, and release handling.
- Introduce DebianVersion for parsing Debian EVR (Epoch:Version-Release) strings.
- Create ApkVersion for parsing Alpine APK version strings with suffix support.
- Define IVersionComparator interface for version comparison with proof-line generation.
- Implement VersionComparisonResult struct to encapsulate comparison results and proof lines.
- Add tests for Debian and RPM version comparers to ensure correct functionality and edge case handling.
- Create project files for the version comparison library and its tests.
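For orientation, a minimal sketch of the EVR split that `DebianVersion` performs (epoch before the first `:`, revision after the last `-`, per Debian policy); the struct and member names below are illustrative, not the committed API:

```csharp
// Minimal sketch of Debian EVR parsing: epoch before the first ':',
// revision after the *last* '-' (per Debian policy). Illustrative only.
public readonly struct DebianEvr
{
    public int Epoch { get; }
    public string UpstreamVersion { get; }
    public string Revision { get; }

    private DebianEvr(int epoch, string upstream, string revision)
    {
        Epoch = epoch;
        UpstreamVersion = upstream;
        Revision = revision;
    }

    public static DebianEvr Parse(string input)
    {
        int epoch = 0;
        string rest = input;

        // Epoch defaults to 0 when absent.
        int colon = rest.IndexOf(':');
        if (colon >= 0)
        {
            epoch = int.Parse(rest.Substring(0, colon));
            rest = rest.Substring(colon + 1);
        }

        // Upstream versions may themselves contain '-', so split on the last one.
        int dash = rest.LastIndexOf('-');
        return dash >= 0
            ? new DebianEvr(epoch, rest.Substring(0, dash), rest.Substring(dash + 1))
            : new DebianEvr(epoch, rest, string.Empty);
    }
}
```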
StellaOps Bot
2025-12-22 09:49:38 +02:00
parent aff0ceb2fe
commit 634233dfed
112 changed files with 31925 additions and 1813 deletions


@@ -0,0 +1,469 @@
I'm sharing a **competitive security-tool matrix** that you can immediately plug into StellaOps strategy discussions — it maps real, *comparable evidence* from public sources to categories where most current tools fall short. Below the CSV is a short Markdown commentary that highlights gaps & opportunities StellaOps can exploit.
---
## 🧠 Competitive Security Tool Matrix (CSV)
**Columns:**
`Tool,SBOM Fidelity,VEX Handling,Explainability,Smart-Diff,Call-Stack Reachability,Deterministic Scoring,Unknowns State,Ecosystem Integrations,Policy Engine,Offline/Air-Gapped,Provenance/Attestations,Public Evidence`
```
Tool,SBOM Fidelity,VEX Handling,Explainability,Smart-Diff,Call-Stack Reachability,Deterministic Scoring,Unknowns State,Ecosystem Integrations,Policy Engine,Offline/Air-Gapped,Provenance/Attestations,Public Evidence
Trivy (open),CycloneDX/SPDX support (basic),Partial* (SBOM ext refs),Low,No,No,Moderate,No,Strong CI/CD/K8s,Minimal,Unknown,SBOM-only evidence; VEX support request exists but unmerged (*),see links [1][2] below
Grype/Syft,Strong CycloneDX/SPDX (generator + scanner),None documented,Low,No,No,Moderate,No,Strong CI/CD/K8s,Policy minimal,Unknown,Syft can create signed SBOMs but not full attestations,see link [1] below
Snyk,SBOM export likely (platform),Unknown/limited,Vuln context explainability (reports),No,No,Proprietary risk scoring,No,Partial integrations,Strong black/white-list policies in UI,Unknown,Unknown (not focused on attestations),see link [3] below
Prisma Cloud,Enterprise SBOM + vuln scanning,Runtime exploitability contexts?*,Enterprise dashboards,No formal smart-diff,No,Risk prioritization,No,Supports multi-cloud integrations,Rich policy engines (CNAPP),Supports offline deployment?,Unknown attestations capabilities,see link [6] below
Aqua (enterprise),SBOM via Trivy,Unknown commercial VEX support,Some explainability in reports,No documented smart-diff,No,Risk prioritization,No,Comprehensive integrations (cloud/CI/CD/SIEM),Enterprise policy supports compliance,Air-gapped options in enterprise,Focus on compliance attestations?,see link [7] below
Anchore Enterprise,Strong SBOM mgmt + format support,Policy engine can ingest SBOM + vulnerability sources,Moderate (reports & SBOM insights),Potential policy diff,No explicit reachability analysis,Moderate policy scoring,Partial,Rich integrations (CI/CD/registry),Policy-as-code,Air-gapped deploy supported,SBOM provenance & signing via Syft/in-toto,see links [8][9] below
StellaOps,High fidelity SBOM (CycloneDX/SPDX) planned,Native VEX ingestion + decisioning,Explainability + proof extracts,Smart-diff tech planned,Call-stack reachability analysis,Deterministic scoring with proofs,Explicit unknowns state,Integrations with CI/CD/Sigstore,Declarative multimodal policy engine,Full offline/air-gapped support,Provenance/attestations via DSSE/in-toto,StellaOps internal vision
```
---
## 📌 Key Notes, Gaps & Opportunities (Markdown)
### **SBOM Fidelity**
* **Open tools (Trivy, Syft)** already support CycloneDX/SPDX output, but mostly as flat SBOM artifacts without long-term repositories or versioned diffing. ([Ox Security][1])
* **Opportunity:** Provide *repository + lineage + merge semantics* with proofs — not just generation.
### **VEX Handling**
* Trivy has an open feature request for dynamic VEX ingestion. ([GitHub][2])
* Most competitors either lack VEX support or have no *decisioning logic* based on exploitability.
* **Opportunity:** First-class VEX ingestion with evaluation rules + automated scoring.
### **Explainability**
* Commercial tools (Prisma/Snyk) offer UI report context and dev-oriented remediation guidance. ([Snyk][3])
* OSS tools provide flat scan outputs with minimal causal trace.
* **Opportunity:** Link vulnerability flags back to *proven code paths*, enriched with SBOM + call reachability.
### **Smart-Diff & Unknowns State**
* No major tool advertises *smart diffing* between SBOMs for incremental risk deltas across releases.
* **Opportunity:** Automate risk deltas between SBOMs with uncertainty margins.
### **Call-Stack Reachability**
* None of these tools publicly document call-stack-based exploit reachability analysis out of the box.
* **Opportunity:** Integrate dynamic/static reachability evidence that elevates scanning from surface report → *impact map*.
### **Deterministic Scoring**
* Snyk & Prisma offer proprietary scoring that blends severity + context. ([TrustRadius][4])
* But these aren't reproducible with *signed verdicts*.
* **Opportunity:** Provide *deterministic, attestable scoring proofs*.
### **Ecosystem Integrations**
* Trivy/Grype excel at lightweight CI/CD and Kubernetes. ([Echo][5])
* Enterprise products integrate deeply into cloud/registry. ([Palo Alto Networks][6])
* **Opportunity:** Expand *sigstore/notation* based pipelines and automated attestation flows.
### **Policy Engine**
* Prisma & Aqua have mature enterprise policies. ([Aqua][7])
* OSS tools have limited simple allow/deny.
* **Opportunity:** Provide *lattice/constraint policies* with proof outputs.
### **Offline/Air-Gapped**
* Anchore supports airgapped deployment in enterprise contexts. ([Anchore][8])
* Support across all open tools is ad hoc at best.
* **Opportunity:** Built-in deterministic offline modes with offline SBOM stores and VEX ingestion.
### **Provenance/Attestations**
* Syft supports SBOM output in various formats; also *in-toto* for attestations. ([Ox Security][1])
* Most competitors don't prominently advertise *attestation pipelines*.
* **Opportunity:** End-to-end DSSE/in-toto provenance with immutable proofs.
---
### 📌 Public Evidence Links
* **Trivy / Syft / Grype SBOM support & formats:** CycloneDX/SPDX; Syft as generator + Grype scanner. ([Ox Security][1])
* **Trivy VEX feature request:** exists but not mainstream yet. ([GitHub][2])
* **Snyk platform capability:** scans containers, IaC, dev-first prioritization. ([Snyk][3])
* **Prisma Cloud container security:** lifecycle scanning + policy. ([Palo Alto Networks][6])
* **Anchore Enterprise SBOM mgmt & policy:** central imports + CI/CD ecosystem. ([Anchore Docs][9])
---
If you'd like this in **Excel, JSON, or Mermaid graph form** next, I can generate it — just tell me the format you prefer.
[1]: https://www.ox.security/blog/sbom-tools/ "Top 5 SBOM Tools for Securing the Software Supply Chain"
[2]: https://github.com/aquasecurity/trivy/discussions/8047 "Support for dynamic VEX retrieval from \"External ..."
[3]: https://snyk.io/ "Snyk AI-powered Developer Security Platform | AI-powered ..."
[4]: https://www.trustradius.com/compare-products/palo-alto-networks-prisma-cloud-vs-snyk "Palo Alto Networks Prisma Cloud vs Snyk"
[5]: https://www.echohq.com/blog/best-container-scanning-tools "10 Best Container Scanning Tools for 2025"
[6]: https://www.paloaltonetworks.com/prisma/cloud/container-security "Container Security Solutions"
[7]: https://www.aquasec.com/integrations/ "Integrations - Aqua Security"
[8]: https://anchore.com/blog/effortless-sbom-analysis-how-anchore-enterprise-simplifies-integration/ "Effortless SBOM Analysis with Anchore Enterprise"
[9]: https://docs.anchore.com/current/docs/sbom_management/ "SBOM Management"
Below is a **deep, implementation-grade elaboration** that turns the comparison matrix into **actionable architecture and product differentiation** for Stella Ops. I will deliberately avoid marketing language and focus on **mechanics, evidence, and execution gaps**.
---
## 1. Why existing tools plateau (root cause analysis)
Across Trivy, Syft/Grype, Snyk, Prisma, Aqua, Anchore, there is a **structural ceiling** they all hit — regardless of OSS vs enterprise.
### Shared structural limitations
1. **SBOM is treated as a static artifact**
* Generated → stored → scanned.
* No concept of *evolving truth*, lineage, or replayability.
2. **Vulnerability scoring is probabilistic, not provable**
* CVSS + vendor heuristics.
* Cannot answer: *“Show me why this CVE is exploitable here.”*
3. **Exploitability ≠ reachability**
* “Runtime context” ≠ call-path proof.
4. **Diffing is file-level, not semantic**
* Image hash change ≠ security delta understanding.
5. **Offline support is operational, not epistemic**
* You can run it offline, but you cannot **prove** what knowledge state was used.
These are not accidental omissions. They arise from **tooling lineage**:
* Trivy/Syft grew from *package scanners*
* Snyk grew from *developer remediation UX*
* Prisma/Aqua grew from *policy & compliance platforms*
None were designed around **forensic reproducibility or trust algebra**.
---
## 2. SBOM fidelity: what “high fidelity” actually means
Most tools claim CycloneDX/SPDX support. That is **necessary but insufficient**.
### Current reality
| Dimension | Industry tools |
| ----------------------- | ---------------------- |
| Component identity | Package name + version |
| Binary provenance | Weak or absent |
| Build determinism | None |
| Dependency graph | Flat or shallow |
| Layer attribution | Partial |
| Rebuild reproducibility | Not supported |
### What Stella Ops must do differently
**SBOM must become a *stateful ledger*, not a document.**
Concrete requirements:
* **Component identity = (source + digest + build recipe hash)**
* **Binary → source mapping**
* ELF Build-ID / Mach-O UUID / PE timestamp+hash
* **Layer-aware dependency graphs**
* Not “package depends on X”
* But “binary symbol A resolves to shared object B via loader rule C”
* **Replay manifest**
* Exact feeds
* Exact policies
* Exact scoring rules
* Exact timestamps
* Hash of everything
This is the foundation for *deterministic replayable scans* — something none of the competitors even attempt.
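As a hedged sketch (names and the canonicalization scheme are assumptions, not a spec), a replay manifest can be as small as a record of input digests plus one deterministic hash over them:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical replay manifest: every input that influenced a scan,
// reduced to digests so the scan can be re-run bit-for-bit later.
public sealed record ReplayManifest(
    string FeedSnapshotDigest,     // e.g. sha256 of the advisory feed export
    string PolicyDigest,           // hash of the policy bundle in force
    string ScoringRulesDigest,     // hash of the scoring rule set
    DateTimeOffset CapturedAt)
{
    // Deterministic manifest hash: stable field order, stable encoding.
    public string ComputeDigest()
    {
        var canonical = string.Join("\n",
            FeedSnapshotDigest, PolicyDigest, ScoringRulesDigest,
            CapturedAt.ToUnixTimeSeconds().ToString());
        var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return Convert.ToHexString(bytes).ToLowerInvariant();
    }
}
```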
---
## 3. VEX handling: ingestion vs decisioning
Most vendors misunderstand VEX.
### What competitors do
* Accept VEX as:
* Metadata
* Annotation
* Suppression rule
* No **formal reasoning** over VEX statements.
### What Stella Ops must do
VEX is not a comment — it is a **logical claim**.
Each VEX statement:
```
IF
product == X
AND component == Y
AND version in range Z
THEN
status ∈ {not_affected, affected, fixed, under_investigation}
BECAUSE
justification J
WITH
evidence E
```
Stella Ops advantage:
* VEX statements become **inputs to a lattice merge**
* Conflicting VEX from:
* Vendor
* Distro
* Internal analysis
* Runtime evidence
* Are resolved **deterministically** via policy, not precedence hacks.
This unlocks:
* Vendor-supplied proofs
* Customer-supplied overrides
* Jurisdiction-specific trust rules
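A minimal sketch of what "resolved deterministically via policy" could mean in code, with illustrative types: per-source trust weights come from the policy, and every tie-break is a fixed, documented rule rather than implicit precedence:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum VexStatus { NotAffected, Affected, Fixed, UnderInvestigation }

public sealed record VexClaim(string Source, VexStatus Status, string Justification);

// Sketch of deterministic merging over conflicting VEX claims.
public static class VexMerge
{
    public static VexClaim Resolve(
        IEnumerable<VexClaim> claims,
        IReadOnlyDictionary<string, int> trustWeights)
    {
        return claims
            .OrderByDescending(c => trustWeights.GetValueOrDefault(c.Source, 0))
            .ThenByDescending(c => Caution(c.Status))      // more cautious wins ties
            .ThenBy(c => c.Source, StringComparer.Ordinal) // total order, no ambiguity
            .First();
    }

    private static int Caution(VexStatus s) => s switch
    {
        VexStatus.Affected => 3,
        VexStatus.UnderInvestigation => 2,
        VexStatus.Fixed => 1,
        _ => 0, // NotAffected
    };
}
```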
---
## 4. Explainability: reports vs proofs
### Industry “explainability”
* “This vulnerability is high because…”
* Screenshots, UI hints, remediation text.
### Required explainability
Security explainability must answer **four non-negotiable questions**:
1. **What exact evidence triggered this finding?**
2. **What code or binary path makes it reachable?**
3. **What assumptions are being made?**
4. **What would falsify this conclusion?**
No existing scanner answers #4.
### Stella Ops model
Each finding emits:
* Evidence bundle:
* SBOM nodes
* Call-graph edges
* Loader resolution
* Runtime symbol presence
* Assumption set:
* Compiler flags
* Runtime configuration
* Feature gates
* Confidence score **derived from evidence density**, not CVSS
This is explainability suitable for:
* Auditors
* Regulators
* Courts
* Defense procurement
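Concretely, a finding could carry this shape (all names are illustrative; the point is that evidence, assumptions, and falsifiers become structured fields rather than report prose):

```csharp
using System.Collections.Generic;

// Illustrative shape for an evidence-first finding.
public sealed record Finding(
    string CveId,
    IReadOnlyList<string> SbomNodes,       // component identities involved
    IReadOnlyList<string> CallGraphEdges,  // witness path, entrypoint -> vulnerable symbol
    IReadOnlyList<string> Assumptions,     // e.g. "feature flag X enabled"
    IReadOnlyList<string> Falsifiers,      // observations that would void this finding
    double EvidenceDensityScore);          // derived from evidence, not CVSS
```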
---
## 5. Smart-Diff: the missing primitive
All tools compare:
* Image A vs Image B
* Result: *"+3 CVEs, −1 CVE"*
This is **noise-centric diffing**.
### What Smart-Diff must mean
Diff not *artifacts*, but **security meaning**.
Examples:
* Same CVE remains, but:
* Call path removed → risk collapses
* New binary added, but:
* Dead code → no reachable risk
* Dependency upgraded, but:
* ABI unchanged → no exposure delta
Implementation direction:
* Diff **reachability graphs**
* Diff **policy outcomes**
* Diff **trust weights**
* Diff **unknowns**
Output:
> “This release reduces exploitability surface by 41%, despite +2 CVEs.”
No competitor does this.
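A sketch of the primitive, under assumed types: diff findings by (CVE, component) and report reachability flips instead of raw counts:

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed record RiskItem(string CveId, string Component, bool Reachable);

// Sketch of semantic diffing: compare *security meaning* (reachability per
// CVE/component), not raw artifact contents.
public static class SmartDiff
{
    public static IEnumerable<string> Deltas(
        IReadOnlyList<RiskItem> baseline, IReadOnlyList<RiskItem> head)
    {
        var before = baseline.ToDictionary(i => (i.CveId, i.Component));
        foreach (var item in head)
        {
            if (!before.TryGetValue((item.CveId, item.Component), out var prev))
                yield return $"NEW {item.CveId} in {item.Component} (reachable: {item.Reachable})";
            else if (prev.Reachable && !item.Reachable)
                yield return $"RISK COLLAPSED {item.CveId}: call path removed";
            else if (!prev.Reachable && item.Reachable)
                yield return $"RISK ESCALATED {item.CveId}: now reachable";
        }
        foreach (var gone in before.Keys.Except(head.Select(i => (i.CveId, i.Component))))
            yield return $"REMOVED {gone.CveId} from {gone.Component}";
    }
}
```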
---
## 6. Call-stack reachability: why runtime context isn't enough
### Current vendor claim
“Runtime exploitability analysis.”
Reality:
* Usually:
* Process exists
* Library loaded
* Port open
This is **coarse correlation**, not proof.
### Stella Ops reachability model
Reachability requires **three layers**:
1. **Static call graph**
* From entrypoints to vulnerable symbols
2. **Binary resolution**
* Dynamic loader rules
* Symbol versioning
3. **Runtime gating**
* Feature flags
* Configuration
* Environment
Only when **all three align** does exploitability exist.
This makes false positives *structurally impossible*, not heuristically reduced.
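In code, the model is a plain conjunction rather than a weighted heuristic; a sketch with hypothetical types:

```csharp
// The three-layer gate: exploitability is asserted only when static,
// binary-resolution, and runtime evidence all align. Names are illustrative.
public sealed record ReachabilityEvidence(
    bool StaticPathExists,       // entrypoint -> vulnerable symbol in call graph
    bool SymbolActuallyResolves, // loader rules + symbol versioning confirm it
    bool RuntimeGateOpen);       // flags/config/env do not fence it off

public static class Reachability
{
    // Conjunction, not scoring: any missing layer means "not proven
    // reachable", which routes to the unknowns state instead.
    public static bool IsExploitable(ReachabilityEvidence e)
        => e.StaticPathExists && e.SymbolActuallyResolves && e.RuntimeGateOpen;
}
```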
---
## 7. Deterministic scoring: replacing trust with math
Every competitor uses:
* CVSS
* EPSS
* Proprietary weighting
Problem:
* Scores are **non-reproducible**
* Cannot be attested
* Cannot be audited
### Stella Ops scoring
Score = deterministic function of:
* Evidence count
* Evidence strength
* Assumption penalties
* Trust source weights
* Policy constraints
Same inputs → same outputs → forever.
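A toy sketch of such a function (the weights and formula are placeholders): a pure function over explicit inputs, with no clock, network, or hidden vendor state:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal sketch of a deterministic score.
public static class DeterministicScore
{
    public static double Compute(
        IReadOnlyList<double> evidenceStrengths, // one entry per evidence item
        int assumptionCount,
        double trustSourceWeight,                // from the policy, in [0, 1]
        double policyCeiling)                    // policy-imposed maximum score
    {
        var evidence = evidenceStrengths.Sum();
        var penalty = 0.1 * assumptionCount;     // each assumption costs a fixed amount
        var raw = (evidence - penalty) * trustSourceWeight;
        return Math.Clamp(raw, 0.0, policyCeiling);
    }
}
```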
This enables:
* Signed risk decisions
* Cross-org verification
* Legal defensibility
---
## 8. Unknowns as a first-class state
Industry tools suppress uncertainty.
Stella Ops must **surface it**.
States:
* Known-safe
* Known-vulnerable
* **Unknown-reachable**
* **Unknown-unreachable**
Unknowns are **risk**, but different from vulnerabilities.
This is critical for:
* Air-gapped environments
* Novel exploits
* Zero-day windows
No competitor models this explicitly.
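A minimal sketch of the state model, assuming a nullable "vulnerable" signal where null means "no data":

```csharp
// The four dispositions as an explicit state, so "we don't know" is
// recorded rather than silently dropped. Names are illustrative.
public enum ComponentDisposition
{
    KnownSafe,
    KnownVulnerable,
    UnknownReachable,   // no vuln data, but the code sits on a live path
    UnknownUnreachable, // no vuln data, and no path reaches it
}

public static class Disposition
{
    public static ComponentDisposition Classify(bool? vulnerable, bool reachable)
        => (vulnerable, reachable) switch
        {
            (false, _)    => ComponentDisposition.KnownSafe,
            (true, _)     => ComponentDisposition.KnownVulnerable,
            (null, true)  => ComponentDisposition.UnknownReachable,
            (null, false) => ComponentDisposition.UnknownUnreachable,
        };
}
```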
---
## 9. Offline / air-gapped: epistemic completeness
Competitors:
* “Yes, we can run offline.”
But cannot answer:
> “Exactly what knowledge did you have when you made this decision?”
Stella Ops must:
* Snapshot feeds
* Snapshot policies
* Snapshot scoring rules
* Snapshot trust anchors
Then bind them cryptographically to scan results.
This creates **forensic replayability**, not just offline execution.
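A sketch of the binding step, complementing the replay-manifest sketch earlier; the canonicalization is an assumption:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Bind the exact knowledge state (feeds, policies, scoring rules, trust
// anchors) to a scan result by hashing their digests together. A verifier
// can later confirm the decision was made under precisely this state.
public static class KnowledgeBinding
{
    public static string Bind(string scanResultDigest, params string[] snapshotDigests)
    {
        var canonical = scanResultDigest + "\n" + string.Join("\n", snapshotDigests);
        return Convert.ToHexString(
            SHA256.HashData(Encoding.UTF8.GetBytes(canonical))).ToLowerInvariant();
    }
}
```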
---
## 10. Strategic conclusion (no hype)
### Where Stella Ops can **objectively dominate**
1. Deterministic replayable scans
2. Formal VEX reasoning
3. Reachability-backed exploitability
4. Semantic smart-diff
5. Evidence-first explainability
6. Unknowns modeling
7. Jurisdiction-ready offline trust
### Where competitors remain stronger (for now)
* Mass-market UX polish
* SaaS onboarding friction
* Marketplace integrations
These are solvable later. The **hard parts** above are not.
---
If you want, next I can:
* Convert this into **developer implementation milestones**
* Map features → **moat strength vs competitors**
* Design the **Trust Algebra / Lattice Engine spec**
* Produce a **regulator-grade threat & evidence model**
Tell me which vector you want next.


@@ -0,0 +1,247 @@
Yes — you need **dedicated UI affordances** for “Verdict + DeltaVerdict + Evidence” because the interaction model is fundamentally different from a classic “vulnerability list” UI.
But you do **not** necessarily need a whole new top-level product area on day one.
The right approach is usually:
1. **Embed the experience where decisions happen** (build/release/PR gates).
2. Add **one dedicated “Compare / Delta” screen** (a focused view) reachable from those contexts.
3. Introduce a **top-level “Assurance/Audit” workspace only if you have compliance-heavy users** who need cross-project oversight.
Below is a concrete way to implement both options and when to choose each.
---
## When a dedicated UI is warranted
A dedicated UI is justified if at least **two** of these are true:
* You have **multiple repos/services** and security/compliance need to see **fleet-wide deltas**, not just per build.
* You need **approval workflows** (exceptions, risk acceptance, “ship with waiver”).
* You need **auditor-grade artifact browsing**: signatures, provenance, replay, evidence packs.
* Developers complain about “scan noise” and need **diff-first triage** to be fast.
* You have separate personas: **Dev**, **Security**, **Compliance/Audit** — each needs different default views.
If those aren't true, keep it embedded and light.
---
## Recommended approach (most teams): Dedicated “Compare view” + embedded panels
### Where it belongs in the existing UI
Assuming your current navigation is something like:
**Projects → Repos → Builds/Releases → Findings/Vulnerabilities**
Then “DeltaVerdict” belongs primarily in **Build/Release details**, not in the global vulnerability list.
**Add two key entry points:**
1. A **status + delta summary** on every Build/Release page (above the fold).
2. A **Compare** action that opens a dedicated comparison screen (or tab).
### Information architecture (practical, minimal)
On the **Build/Release details page**, add a header section:
* **Verdict chip**: Allowed / Blocked / Warn
* **Delta chip**: “+2 new exploitable highs”, “Reachability flip: yes/no”, “Unknowns: +3”
* **Baseline**: “Compared to: v1.4.2 (last green in prod)”
* **Actions**:
* **Compare** (opens dedicated delta view)
* **Download Evidence Pack**
* **Verify Signatures**
* **Replay** (copy command / show determinism hash)
Then add a tab set:
* **Delta (default)**
* Components (SBOM)
* Vulnerabilities
* Reachability
* VEX / Claims
* Attestations (hashes, signatures, provenance)
#### Why “Delta” should be the default tab
The user's first question in a release is: *What changed that affects risk?*
If you make them start in a full vuln list, you rebuild the noise problem.
---
## How the dedicated “Compare / Delta” view should work
Think of it as a “git diff”, but for risk and provenance.
### 1) Baseline selection (must be explicit and explainable)
Top of the Compare view:
* **Base** selector (default chosen by system):
* “Last green verdict in same environment”
* “Previous release tag”
* “Parent commit / merge-base”
* **Head** selector:
* Current build/release
* Show **why** the baseline was chosen (small text):
“Selected last prod release with Allowed verdict under policy P123.”
This matters because auditors will ask “why did you compare against *that*?”
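A sketch of explainable baseline selection with hypothetical types: try the documented rules in order and return the reason alongside the pick, so the UI can render it verbatim:

```csharp
using System.Collections.Generic;
using System.Linq;

public sealed record Release(string Tag, string Environment, string Verdict, string PolicyId);

// Sketch of "explainable baseline": the chooser reports *which* rule matched.
public static class BaselinePicker
{
    public static (Release? Baseline, string Reason) Pick(
        IReadOnlyList<Release> history, string environment, string policyId)
    {
        // Rule 1: last green verdict in the same environment under the same policy.
        var lastGreen = history.LastOrDefault(r =>
            r.Environment == environment && r.Verdict == "Allowed" && r.PolicyId == policyId);
        if (lastGreen is not null)
            return (lastGreen, $"Last release with Allowed verdict in {environment} under policy {policyId}");

        // Rule 2: fall back to the previous release tag.
        var previous = history.Count > 1 ? history[^2] : null;
        return previous is not null
            ? (previous, "Previous release tag (no green baseline found)")
            : (null, "No baseline available; showing absolute view");
    }
}
```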
### 2) Delta summary strip (fast triage)
A horizontal strip with only the key deltas:
* **New exploitable vulns:** N (by severity)
* **Reachability flips:** N (new reachable / newly unreachable)
* **Component changes:** +A / −R / ~C
* **VEX claim flips:** N
* **Policy/feed drift:** policy changed? feed snapshot changed? stale?
### 3) Three-pane layout (best for speed)
Left: **Delta categories** (counts)
* New exploitable vulns
* Newly reachable
* Component adds/removes
* Changed versions
* Claim changes
* Unknowns / missing data
Middle: **List of changed items** (sorted by risk)
* Each item shows: component, version, CVE (if applicable), exploitability, reachability, current disposition (VEX), gating rule triggered
Right: **Proof / explanation panel**
* “Why is it blocked?”
* Shows:
* the **policy rule** that fired (with rule ID)
* the **witness path** for reachability (minimal path)
* the **claim sources** for VEX (vendor/distro/internal) and merge explanation
* links to the exact **envelope hashes** involved
This is where “proof-carrying” becomes usable.
### 4) Actionables output (make it operational)
At the top of the item list include a “What to do next” section:
* Upgrade component X → version Y
* Patch CVE Z
* Add/confirm VEX claim with evidence
* Reduce reachability (feature flag, build config)
* Resolve unknowns (SBOM missing for module A)
This prevents the compare screen from becoming yet another “informational dashboard.”
---
## If you do NOT create any new dedicated view
If you strongly want zero new screens, the minimum acceptable integration is:
* Add a **Delta toggle** on the existing Vulnerabilities page:
* “All findings” vs “Changes since baseline”
* Add a **baseline selector** on that page.
* Add an **Attestations panel** on the Build/Release page for evidence pack + signature verification.
This can work, but it tends to fail as the system grows because:
* Vulnerability list UIs are optimized for volume browsing, not causal proof
* Reachability and VEX explanation become buried
* Auditors still need a coherent “verdict story”
If you go this route, at least add a **“Compare drawer”** (modal) that shows the delta summary and links into filtered views.
---
## When you SHOULD add a top-level dedicated UI (“Assurance” workspace)
Create a dedicated left-nav item only when you have these needs:
1. **Cross-project oversight**: “show me all new exploitable highs introduced this week across org.”
2. **Audit operations**: evidence pack management, replay logs, signature verification at scale.
3. **Policy governance**: browse policy versions, rollout status, exceptions, owners.
4. **Release approvals**: security sign-off steps, waivers, expiry dates.
### What that workspace would contain
* **Overview dashboard**
* blocked releases (by reason)
* new risk deltas by team/repo
* unknowns trend
* stale feed snapshot alerts
* **Comparisons**
* search by repo/build/tag and compare any two artifacts
* **Attestations & Evidence**
* list of verdicts/delta verdicts with verification status
* evidence pack download and replay
* **Policies & Exceptions**
* policy versions, diffs, who changed what
* exceptions with expiry and justification
This becomes the home for Security/Compliance, while Devs stay in the build/release context.
---
## Implementation details that make the UI “work” (avoid common failures)
### 1) Idempotent “Compute delta” behavior
When user opens Compare view:
* UI requests DeltaVerdict by `{base_verdict_hash, head_verdict_hash, policy_hash}`.
* If not present, backend computes it.
* UI shows deterministic progress (“pending”), not “scanning…”.
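A sketch of that behavior with placeholder types: keying the cache by the input hashes makes repeated requests idempotent by construction:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public sealed record DeltaKey(string BaseVerdictHash, string HeadVerdictHash, string PolicyHash);

// Sketch of the idempotent lookup-or-compute behavior: the delta is keyed
// by the hashes of its inputs, so repeated requests return the same cached
// object and never trigger a rescan. Storage and compute are placeholders.
public sealed class DeltaVerdictService
{
    private readonly ConcurrentDictionary<DeltaKey, Task<string>> _cache = new();

    public Task<string> GetOrComputeAsync(DeltaKey key)
        // Records compare by value, so identical requests share one task.
        => _cache.GetOrAdd(key, ComputeAsync);

    private static Task<string> ComputeAsync(DeltaKey key)
        => Task.FromResult($"delta({key.BaseVerdictHash}..{key.HeadVerdictHash})");
}
```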
### 2) Determinism and trust indicators
Every compare screen should surface:
* Determinism hash
* Policy version/hash
* Feed snapshot timestamp/age
* Signature verification status
If verification fails, the UI must degrade clearly (red banner, disable “Approved” actions).
### 3) Baseline rules must be visible
Auditors hate “magic.”
Show baseline selection logic and allow override.
### 4) Don't show full graphs by default
Default to:
* minimal witness path(s)
* minimal changed subgraph
* expand-on-demand for deep investigation
### 5) Role-based access
* Developers: see deltas, actionables, witness paths
* Security: see claims sources, merge rationale, policy reasoning
* Audit: see signatures, replay, evidence pack
---
## Decision recommendation (most likely correct)
* Build **embedded panels** + a **dedicated Compare/Delta view** reachable from Build/Release and PR checks.
* Delay a top-level “Assurance” workspace until you see real demand from security/compliance for cross-project oversight and approvals.
This gives you the usability benefits of “diff-first” without fragmenting navigation or building a parallel UI too early.
If you share (even roughly) your existing nav structure (what pages exist today), I can map the exact placements and propose a concrete IA tree and page wireframe outline aligned to your current UI.