docs consolidation and others

This commit is contained in:
master
2026-01-06 19:02:21 +02:00
parent d7bdca6d97
commit 4789027317
849 changed files with 16551 additions and 66770 deletions

docs/product/README.md Normal file

@@ -0,0 +1,22 @@
# Product Strategy & Positioning
Product strategy, competitive analysis, and marketing bridge documents.
## Contents
| Document | Purpose |
|----------|---------|
| [competitive-landscape.md](competitive-landscape.md) | 15-vendor competitive analysis with structural moat explanation |
| [claims-citation-index.md](claims-citation-index.md) | Evidence citations backing product claims |
| [moat-strategy-summary.md](moat-strategy-summary.md) | Strategic positioning and defensibility |
| [decision-capsules.md](decision-capsules.md) | Decision Capsules concept (audit-grade evidence bundles) |
| [evidence-linked-vex.md](evidence-linked-vex.md) | Evidence-linked VEX technical bridge |
| [hybrid-reachability.md](hybrid-reachability.md) | Hybrid reachability feature positioning |
| [reachability-benchmark-launch.md](reachability-benchmark-launch.md) | Reachability benchmark launch materials |
## Audience
- Product management
- Sales engineering
- Technical marketing
- Engineering prioritization

docs/product/checklist.md Normal file

@@ -0,0 +1,38 @@
# Evaluation Checklist: 30-Day Adoption Plan
## Day 0-1: Kick the Tires
- [ ] Follow the [Quickstart](../quickstart.md) to run the first scan and confirm quota headers (`X-Stella-Quota-Remaining`).
- [ ] Capture the deterministic replay bundle (`stella replay export`) to verify SRM evidence.
- [ ] Log into the Console, review the explain trace for the latest scan, and test policy waiver creation.
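The quota-header check above can be scripted against a raw HTTP response. A minimal sketch; the `quota_remaining` helper name and the commented endpoint URL are illustrative assumptions, only the `X-Stella-Quota-Remaining` header name comes from this checklist:

```shell
# quota_remaining: print the X-Stella-Quota-Remaining value from raw
# HTTP response headers supplied on stdin (case-insensitive match,
# stripping any trailing carriage return).
quota_remaining() {
  awk -F': ' 'tolower($1) == "x-stella-quota-remaining" { gsub(/\r/, "", $2); print $2 }'
}

# Hypothetical usage against your deployment:
#   curl -sI "https://stella.example.internal/api/v1/scans" | quota_remaining
```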
## Day 2-7: Prove Fit
- [ ] Import the [Offline Update Kit](../OFFLINE_KIT.md) and confirm feeds refresh with no Internet access.
- [ ] Apply a sovereign CryptoProfile matching your regulatory environment (FIPS, eIDAS, GOST, SM).
- [ ] Run policy simulations with your SBOMs using `stella policy simulate --input <sbom>`; log explain outcomes for review.
- [ ] Validate attestation workflows by exporting DSSE bundles and replaying them on a secondary host.
## Day 8-14: Integrate
- [ ] Wire the CLI into CI/CD to gate images using exit codes and `X-Stella-Quota-Remaining` telemetry.
- [ ] Configure `StellaOps.Notify` with at least one channel (email/webhook) and confirm digest delivery.
- [ ] Map existing advisory/VEX sources to Concelier connectors; note any feeds requiring custom plug-ins.
- [ ] Review `StellaOps.Policy.Engine` audit logs to ensure waiver ownership and expiry meet governance needs.
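The CI/CD gating step above reduces to acting on the scanner's exit code. A minimal sketch; the `gate_on_scan` helper is hypothetical, and the commented `stella scan` invocation is an assumed shape, not confirmed CLI syntax:

```shell
# gate_on_scan <image> <scan-exit-code>: pass or fail a pipeline stage
# based on the exit code a scan returned for the given image.
gate_on_scan() {
  image="$1"
  code="$2"
  if [ "$code" -eq 0 ]; then
    echo "gate: pass for $image"
  else
    echo "gate: fail for $image (scan exit $code)" >&2
    return "$code"
  fi
}

# Hypothetical pipeline usage:
#   stella scan --image "$IMAGE"; gate_on_scan "$IMAGE" "$?"
```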
## Day 15-30: Harden & Measure
- [ ] Follow the [Security Hardening Guide](../SECURITY_HARDENING_GUIDE.md) to rotate keys and enable mTLS across modules.
- [ ] Enable observability pipelines (metrics + OpenTelemetry) to capture scan throughput and policy outcomes.
- [ ] Run performance checks against the [Performance Workbook](../PERFORMANCE_WORKBOOK.md) targets; note P95 latencies.
- [ ] Document operational runbooks (install, upgrade, rollback) referencing [Release Engineering Playbook](../RELEASE_ENGINEERING_PLAYBOOK.md).
## Decision Gates
| Question | Evidence to collect | Source |
|----------|--------------------|--------|
| Can we operate fully offline? | Offline kit import logs, quota JWT validation without Internet | Quickstart, Offline Kit guide |
| Are findings explainable and reproducible? | SRM replay results, policy explain traces | Key features, Policy Engine UI |
| Does it meet regional compliance? | CryptoProfile application, Attestor/Rekor mirror configuration | Sovereign crypto docs, Attestor guide |
**Next step:** once the checklist is green, plan production rollout with module-specific architecture docs under `docs/modules/`.

docs/product/claims-citation-index.md Normal file

@@ -0,0 +1,219 @@
# Competitive Claims Citation Index
## Purpose
This document is the **authoritative source** for all competitive positioning claims made by StellaOps. All marketing materials, sales collateral, and documentation must reference claims from this index to ensure accuracy and consistency.
**Last Updated:** 2025-12-20
**Next Review:** 2026-03-20
---
## Claim Categories
### 1. Determinism Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| DET-001 | "StellaOps produces bit-identical scan outputs given identical inputs" | `tests/determinism/` golden fixtures; CI workflow `scanner-determinism.yml` | High | 2025-12-14 | 2026-03-14 |
| DET-002 | "All CVSS scoring decisions are receipted with cryptographic InputHash" | `ReceiptBuilder.cs:164-190`; InputHash computation implementation | High | 2025-12-14 | 2026-03-14 |
| DET-003 | "No competitor offers deterministic replay manifests for audit-grade reproducibility" | Source audit: Trivy v0.55, Grype v0.80, Snyk CLI v1.1292 | High | 2025-12-14 | 2026-03-14 |
| DET-004 | "Content-addressed proof bundles with Merkle roots enable cryptographic score verification" | `docs/db/SPECIFICATION.md` Section 5.7 (scanner.proof_bundle); `scanner scan replay --verify-proof` | High | 2025-12-20 | 2026-03-20 |
### 2. Reachability Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| REACH-001 | "Hybrid static + runtime reachability analysis reduces noise by 60-85%" | `docs/product-advisories/14-Dec-2025 - Reachability Analysis Technical Reference.md` | High | 2025-12-14 | 2026-03-14 |
| REACH-002 | "Signed reachability graphs with DSSE attestation" | `src/Attestor/` module; DSSE envelope implementation | High | 2025-12-14 | 2026-03-14 |
| REACH-003 | "~85% of critical vulnerabilities in containers are in inactive code" | Sysdig 2024 Container Security Report (external) | Medium | 2025-11-01 | 2026-02-01 |
| REACH-004 | "Multi-language support: Java, C#, Go, JavaScript, TypeScript, Python" | Language analyzer implementations in `src/Scanner/Analyzers/` | High | 2025-12-14 | 2026-03-14 |
### 3. VEX & Lattice Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| VEX-001 | "OpenVEX lattice semantics with deterministic state transitions" | `src/Excititor/` VEX engine; lattice documentation | High | 2025-12-14 | 2026-03-14 |
| VEX-002 | "VEX consensus from multiple sources (vendor, tool, analyst)" | `VexConsensusRefreshService.cs`; consensus algorithm | High | 2025-12-14 | 2026-03-14 |
| VEX-003 | "Seven-state lattice: CR, SR, SU, DT, DV, DA, U" | `docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md` | High | 2025-12-14 | 2026-03-14 |
### 3a. Unknowns & Ambiguity Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| UNKNOWNS-001 | "Two-factor unknowns ranking: uncertainty + exploit pressure (defer centrality)" | `docs/db/SPECIFICATION.md` Section 5.6 (policy.unknowns); `SPRINT_3500_0001_0001_deeper_moat_master.md` | High | 2025-12-20 | 2026-03-20 |
| UNKNOWNS-002 | "Band-based prioritization: HOT/WARM/COLD/RESOLVED for triage queues" | `policy.unknowns.band` column; band CHECK constraint | High | 2025-12-20 | 2026-03-20 |
| UNKNOWNS-003 | "No competitor offers systematic unknowns tracking with escalation workflows" | Source audit: Trivy v0.55, Grype v0.80, Snyk CLI v1.1292 | High | 2025-12-20 | 2026-03-20 |
### 4. Attestation Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| ATT-001 | "DSSE-signed attestations for all evidence artifacts" | `src/Attestor/StellaOps.Attestor.Envelope/` | High | 2025-12-14 | 2026-03-14 |
| ATT-002 | "Optional Sigstore Rekor transparency logging" | `src/Attestor/StellaOps.Attestor.Rekor/` integration | High | 2025-12-14 | 2026-03-14 |
| ATT-003 | "in-toto attestation format support" | in-toto predicates in attestation module | High | 2025-12-14 | 2026-03-14 |
| ATT-004 | "Regional crypto support: eIDAS, FIPS, GOST, SM" | `StellaOps.Cryptography` with plugin architecture | Medium | 2025-12-14 | 2026-03-14 |
### 4a. Proof & Evidence Chain Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| PROOF-001 | "Deterministic proof ledgers with canonical JSON and CBOR serialization" | `docs/db/SPECIFICATION.md` Section 5.6-5.7 (policy.proof_segments, scanner.proof_bundle) | High | 2025-12-20 | 2026-03-20 |
| PROOF-002 | "Cryptographic proof chains link scans to frozen feed state via Merkle roots" | `scanner.scan_manifest` (concelier_snapshot_hash, excititor_snapshot_hash) | High | 2025-12-20 | 2026-03-20 |
| PROOF-003 | "Score replay command verifies proof integrity against original calculation" | `stella score replay --scan <id> --verify-proof`; `docs/OFFLINE_KIT.md` Section 2.2 | High | 2025-12-20 | 2026-03-20 |
### 5. Offline & Air-Gap Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| OFF-001 | "Full offline/air-gap operation capability" | `docs/airgap/`; offline kit implementation | High | 2025-12-14 | 2026-03-14 |
| OFF-002 | "Offline scans produce identical results to online (same advisory date)" | `docs/airgap/offline-parity-verification.md` (pending) | Medium | TBD | TBD |
| OFF-003 | "Risk bundles include NVD, KEV, EPSS data" | `docs/airgap/risk-bundles.md`; bundle manifest schema | High | 2025-12-14 | 2026-03-14 |
| OFF-004 | "DSSE-signed offline bundles for integrity verification" | Bundle signing implementation | High | 2025-12-14 | 2026-03-14 |
### 6. CVSS & Risk Scoring Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| CVSS-001 | "Full CVSS v4.0 MacroVector-based scoring with 324 lookup combinations" | `MacroVectorLookup.cs` | High | 2025-12-14 | 2026-03-14 |
| CVSS-002 | "Support for CVSS v2.0, v3.0, v3.1, and v4.0 vectors" | `CvssV2Engine.cs`, `CvssV3Engine.cs`, `CvssEngineFactory.cs` | High | 2025-12-14 | 2026-03-14 |
| CVSS-003 | "Threat Metrics (Exploit Maturity) integration per v4.0 spec" | `CvssV4Engine.cs:365-375` | High | 2025-12-14 | 2026-03-14 |
| CVSS-004 | "EPSS percentile-based risk bonuses (99th=+10%, 90th=+5%, 50th=+2%)" | `CvssKevEpssProvider.cs` | High | 2025-12-14 | 2026-03-14 |
| CVSS-005 | "KEV (Known Exploited Vulnerabilities) +20% risk bonus" | `CvssKevProvider.cs:33` | High | 2025-12-14 | 2026-03-14 |
### 7. SBOM Claims
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| SBOM-001 | "SPDX 3.0.1 and CycloneDX 1.6 output formats" | SBOM generator implementations | High | 2025-12-14 | 2026-03-14 |
| SBOM-002 | "Multi-ecosystem support: APK, DEB, RPM, npm, Maven, NuGet, PyPI, Go, Cargo" | Ecosystem analyzers in `src/Scanner/` | High | 2025-12-14 | 2026-03-14 |
| SBOM-003 | "Deterministic SBOM generation (same image = same SBOM)" | SBOM determinism tests | High | 2025-12-14 | 2026-03-14 |
---
## Competitive Comparison Claims
### vs. Trivy
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| COMP-TRIVY-001 | "Trivy lacks lattice VEX semantics (boolean only)" | Trivy v0.55.0 source: `pkg/vex/` | High | 2025-12-14 | 2026-03-14 |
| COMP-TRIVY-002 | "Trivy lacks deterministic replay manifests" | Trivy v0.55.0 source audit | High | 2025-12-14 | 2026-03-14 |
| COMP-TRIVY-003 | "Trivy lacks native reachability analysis" | Trivy v0.55.0 feature matrix | High | 2025-12-14 | 2026-03-14 |
### vs. Grype
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| COMP-GRYPE-001 | "Grype lacks DSSE attestation signing" | Grype v0.80.0 source audit | High | 2025-12-14 | 2026-03-14 |
| COMP-GRYPE-002 | "Grype lacks VEX state lattice (affected/not_affected only)" | Grype v0.80.0 VEX implementation | High | 2025-12-14 | 2026-03-14 |
| COMP-GRYPE-003 | "Grype lacks CVSS v4.0 scoring" | Grype v0.80.0 feature matrix | Medium | 2025-12-14 | 2026-03-14 |
### vs. Snyk
| ID | Claim | Evidence | Confidence | Verified | Next Review |
|----|-------|----------|------------|----------|-------------|
| COMP-SNYK-001 | "Snyk lacks deterministic replay manifests" | Snyk CLI v1.1292 audit | High | 2025-12-14 | 2026-03-14 |
| COMP-SNYK-002 | "Snyk's reachability is limited to specific languages" | Snyk documentation review | Medium | 2025-12-14 | 2026-03-14 |
| COMP-SNYK-003 | "Snyk lacks offline/air-gap capability" | Snyk architecture documentation | High | 2025-12-14 | 2026-03-14 |
---
## Confidence Levels
| Level | Percentage | Definition |
|-------|------------|------------|
| **High** | 80-100% | Verified against source code or authoritative documentation |
| **Medium** | 50-80% | Based on documentation or limited testing; needs deeper verification |
| **Low** | <50% | Unverified or based on indirect evidence; requires validation |
---
## Update Process
### Verification Schedule
1. **Quarterly Review**: All claims reviewed every 90 days
2. **Major Version Triggers**: Re-verify when competitors release major versions
3. **Market Events**: Re-verify after significant market announcements
### Verification Steps
1. **Source Audit**: Review competitor source code (if open source)
2. **Documentation Review**: Check official documentation
3. **Feature Testing**: Test specific features when possible
4. **Third-Party Sources**: Cross-reference analyst reports
### Update Workflow
```
1. Identify claim requiring update
2. Conduct verification per type
3. Update evidence column
4. Update confidence level if changed
5. Set new verified date
6. Set next review date
7. Document changes in execution log
```
---
## Deprecation Policy
### Stale Claims
Claims older than **6 months** without verification are marked **STALE**:
- STALE claims must NOT be used in external communications
- STALE claims require immediate re-verification or removal
- Marketing team notified of all STALE claims
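The six-month cutoff can be checked mechanically. A small sketch; the `is_stale` helper is hypothetical and assumes GNU `date -d` plus the `YYYY-MM-DD` dates used in the claim tables above:

```shell
# is_stale <verified-date> [as-of-date]: succeed when the claim's last
# verification is more than ~6 months (183 days) before the as-of date
# (defaulting to today). Dates are YYYY-MM-DD; GNU date is assumed.
is_stale() {
  verified_s="$(date -d "$1" +%s)"
  as_of_s="$(date -d "${2:-$(date +%F)}" +%s)"
  [ $(( (as_of_s - verified_s) / 86400 )) -gt 183 ]
}
```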
### Invalidated Claims
When a claim becomes false (e.g., competitor adds feature):
1. Mark claim as **INVALID**
2. Remove from all active materials within 7 days
3. Update competitive documentation
4. Notify stakeholders
---
## Usage Guidelines
### For Marketing
- Reference claims by ID (e.g., "Per DET-001...")
- Include verification date in footnotes
- Do not paraphrase claims without SME review
### For Sales
- Use claims matrix for competitive conversations
- Check confidence levels before customer commitments
- Report feedback on claim accuracy
### For Documentation
- Link to this index for competitive statements
- Update cross-references when claims change
- Flag questionable claims to Docs Guild
---
## Execution Log
| Date | Update | Owner |
|------|--------|-------|
| 2025-12-14 | Initial claims index created | Docs Guild |
| 2025-12-14 | Added CVSS v2/v3 engine claims (CVSS-002) | AI Implementation |
| 2025-12-14 | Added EPSS integration claims (CVSS-004) | AI Implementation |
| 2025-12-20 | Added DET-004 (content-addressed proof bundles) | Agent |
| 2025-12-20 | Added PROOF-001/002/003 (deterministic proof ledgers, proof chains, score replay) | Agent |
| 2025-12-20 | Added UNKNOWNS-001/002/003 (two-factor ranking, band prioritization, competitor gap) | Agent |
---
## References
- `docs/product-advisories/14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md`
- `docs/product/competitive-landscape.md`
- `docs/benchmarks/accuracy-metrics-framework.md`

docs/product/competitive-landscape.md Normal file

@@ -0,0 +1,194 @@
# Competitive Landscape
> **TL;DR:** Stella Ops isn't a scanner that outputs findings. It's a platform that outputs **attestable decisions that can be replayed**. That difference survives auditors, regulators, and supply-chain propagation.
Source: internal advisory "23-Nov-2025 - Stella Ops vs Competitors", updated Jan 2026. This summary distils a 15-vendor comparison into actionable positioning notes for sales/PMM and engineering prioritization.
---
## Verification Metadata
| Field | Value |
|-------|-------|
| **Last Updated** | 2026-01-03 |
| **Last Verified** | 2025-12-14 |
| **Next Review** | 2026-03-14 |
| **Claims Index** | [`docs/product/claims-citation-index.md`](claims-citation-index.md) |
| **Verification Method** | Source code audit (OSS), documentation review, feature testing |
**Confidence Levels:**
- **High (80-100%)**: Verified against source code or authoritative documentation
- **Medium (50-80%)**: Based on documentation or limited testing; needs deeper verification
- **Low (<50%)**: Unverified or based on indirect evidence; requires validation
---
## Why Competitors Plateau (Structural Analysis)
The scanner market evolved from three distinct origins. Each origin created architectural assumptions that make Stella Ops' capabilities structurally difficult to retrofit.
| Origin | Representatives | What They Optimized For | Why They Can't Easily Catch Up |
|--------|----------------|------------------------|-------------------------------|
| **Package Scanners** | Trivy, Syft/Grype | Fast CLI, broad ecosystem coverage | No forensic reproducibility in architecture; VEX is boolean, not lattice; no DSSE for reachability graphs |
| **Developer UX** | Snyk | IDE integration, fix PRs, onboarding | SaaS-only (offline impossible); no attestation infrastructure; reachability limited to specific languages |
| **Policy/Compliance** | Prisma Cloud, Aqua | Runtime protection, CNAPP breadth | No deterministic replay; no cryptographic provenance for verdicts; no semantic diff |
| **SBOM Operations** | Anchore | SBOM storage, lifecycle | No lattice VEX reasoning; no signed reachability graphs; no regional crypto profiles |
### The Core Problem
**Scanners output findings. Stella Ops outputs decisions.**
A finding says "CVE-2024-1234 exists in this package." A decision says "CVE-2024-1234 is reachable via this call path, vendor VEX says not_affected but our runtime disagrees, creating a conflict that policy must resolve, and here's the signed proof chain."
This isn't a feature gap; it's a category difference. Retrofitting it requires:
- Rearchitecting the evidence model (content-addressed, not row-based)
- Adding lattice logic to VEX handling (not just filtering)
- Instrumenting reachability at three layers (static, binary, runtime)
- Building deterministic replay infrastructure (frozen feeds, manifests, seeds)
- Implementing regional crypto profiles (not just "signing")
---
## Stella Ops moats (why we win)
| Moat | Description | Claim IDs | Confidence |
|------|-------------|-----------|------------|
| **Deterministic replay** | Feed+rules snapshotting; graph/SBOM/VEX re-run bit-for-bit with manifest hashes | DET-001, DET-002, DET-003 | High |
| **Hybrid reachability attestations** | Graph-level DSSE always; optional edge-bundle DSSE for runtime/init/contested edges; Rekor-backed | REACH-001, REACH-002, ATT-001, ATT-002 | High |
| **Lattice-based VEX engine** | Merges advisories, runtime hits, reachability, waivers with explainable paths | VEX-001, VEX-002, VEX-003 | High |
| **Crypto sovereignty** | FIPS/eIDAS/GOST/SM/PQC profiles and offline mirrors as first-class knobs | ATT-004 | Medium |
| **Proof graph** | DSSE + transparency across SBOM, call-graph, VEX, replay manifests | ATT-001, ATT-002, ATT-003 | High |
## Top takeaways (sales-ready)
### The Five One-Liners
| # | One-Liner | What It Means | Claim IDs |
|---|-----------|---------------|-----------|
| 1 | "We don't output findings; we output attestable decisions that can be replayed." | Given identical inputs, Stella produces identical outputs. Any verdict from 6 months ago can be re-verified today with `stella replay srm.yaml`. | DET-001, DET-003 |
| 2 | "We treat VEX as a logical claim system, not a suppression file." | K4 lattice logic aggregates multiple VEX sources, detects conflicts, and produces explainable dispositions with proof links. | VEX-001, VEX-002 |
| 3 | "We provide proof of exploitability in *this* artifact, not just a badge." | Three-layer reachability (static graph + binary + runtime) with DSSE-signed call paths. Not "potentially reachable" but "here's the exact path." | REACH-001, REACH-002 |
| 4 | "We explain what changed in exploitable surface area, not what changed in CVE count." | Smart-Diff emits semantic risk deltas such as "This release reduces exploitability by 41% despite +2 CVEs", not raw CVE counts. | |
| 5 | "We quantify uncertainty and gate on it." | Unknowns are first-class state with bands (HOT/WARM/COLD), decay algorithms, and policy budgets. Uncertainty is risk; we surface and score it. | UNKNOWNS-001, UNKNOWNS-002 |
### Verified Gaps (High Confidence)
| # | Gap | Evidence | Claim IDs |
|---|-----|----------|-----------|
| 1 | No competitor offers deterministic replay with frozen feeds | Source audit: Trivy v0.55, Grype v0.80, Snyk CLI v1.1292 | DET-003 |
| 2 | None sign reachability graphs; we sign graphs and (optionally) edge bundles | Feature matrix analysis | REACH-002 |
| 3 | Sovereign crypto profiles (FIPS/eIDAS/GOST/SM/PQC) are unique to Stella Ops | Architecture review | ATT-004 |
| 4 | Lattice VEX with conflict detection is unmatched; others ship boolean VEX or none | Trivy pkg/vex source; Grype VEX implementation | VEX-001, COMP-TRIVY-001, COMP-GRYPE-002 |
| 5 | Offline/air-gap with mirrored transparency is rare; we ship it by default | Documentation and feature testing | OFF-001, OFF-004 |
## Where others fall short (detailed)
### Capability Gap Matrix
| Capability | Trivy | Grype | Snyk | Prisma | Aqua | Anchore | Stella Ops |
|-----------|-------|-------|------|--------|------|---------|------------|
| **Deterministic replay** | No | No | No | No | No | No | Yes |
| **VEX lattice (K4 logic)** | Boolean only | Boolean only | None | None | Limited | Limited | Full K4 |
| **Signed reachability graphs** | No | No | No | No | No | No | Yes (DSSE) |
| **Binary-level backport detection** | No | No | No | No | No | No | Tier 1-4 |
| **Semantic risk diff** | No | No | No | No | No | No | Yes |
| **Unknowns as state** | Hidden | Hidden | Hidden | Hidden | Hidden | Hidden | First-class |
| **Regional crypto (GOST/SM)** | No | No | No | No | No | No | Yes |
| **Offline parity** | Medium | Medium | No | Strong | Medium | Good | Full |
### Specific Gaps by Competitor
| Gap | What This Means | Related Claims | Verified |
|-----|-----------------|----------------|----------|
| **No deterministic replay** | A scan from last month cannot be re-run to produce identical results. Feed drift, analyzer changes, and non-deterministic ordering break reproducibility. Auditors cannot verify past decisions. | DET-003, COMP-TRIVY-002, COMP-GRYPE-001, COMP-SNYK-001 | 2025-12-14 |
| **No lattice/VEX merge** | VEX is either absent or treated as a suppression filter. When vendor says "not_affected" but runtime shows the function was called, these tools can't represent the conflict; they pick one or the other. | COMP-TRIVY-001, COMP-GRYPE-002 | 2025-12-14 |
| **No signed reachability** | Reachability claims are assertions, not proofs. There's no cryptographic binding between "this CVE is reachable" and the call path that proves it. | COMP-GRYPE-001, REACH-002 | 2025-12-14 |
| **No semantic diff** | Tools report "+3 CVEs" without context. They can't say "exploitable surface decreased despite new CVEs" because they don't track reachability deltas. | | 2025-12-14 |
| **Offline/sovereign gaps** | Snyk is SaaS-only. Others have partial offline support but no regional crypto (GOST, SM2, eIDAS) and no sealed knowledge snapshots for air-gapped reproducibility. | COMP-SNYK-003, ATT-004 | 2025-12-14 |
## Snapshot table (condensed)
| Vendor | SBOM Gen | SBOM Ingest | Attest (DSSE) | Rekor | Offline | Primary gaps vs Stella | Related Claims |
|--------|----------|-------------|---------------|-------|---------|------------------------|----------------|
| Trivy | Yes | Yes | Cosign | Query | Strong | No replay, no lattice | COMP-TRIVY-001, COMP-TRIVY-002, COMP-TRIVY-003 |
| Syft/Grype | Yes | Yes | Cosign-only | Indir | Medium | No replay, no lattice | COMP-GRYPE-001, COMP-GRYPE-002, COMP-GRYPE-003 |
| Snyk | Yes | Limited | No | No | Weak | No attest/VEX/replay | COMP-SNYK-001, COMP-SNYK-002, COMP-SNYK-003 |
| Prisma | Yes | Limited | No | No | Strong | No attest/replay | |
| AWS (Inspector/Signer) | Partial | Partial | Notary v2 | No | Weak | Closed, no replay | |
| Google | Yes | Yes | Yes | Opt | Weak | No offline/lattice | |
| GitHub | Yes | Partial | Yes | Yes | No | No replay/crypto opts | |
| GitLab | Yes | Limited | Partial | No | Medium | No replay/lattice | |
| Microsoft Defender | Partial | Partial | No | No | Weak | No attest/reachability | |
| Anchore Enterprise | Yes | Yes | Some | No | Good | No sovereign crypto | |
| JFrog Xray | Yes | Yes | No | No | Medium | No attest/lattice | |
| Tenable | Partial | Limited | No | No | Weak | Not SBOM/VEX-focused | |
| Qualys | Limited | Limited | No | No | Medium | No attest/lattice | |
| Rezilion | Yes | Yes | No | No | Medium | Runtime-only; no DSSE | |
| Chainguard | Yes | Yes | Yes | Yes | Medium | No replay/lattice | |
## How to use this doc
- Sales/PMM: pull talking points and the gap list when building battlecards.
- Product: map gaps to roadmap; keep replay/lattice/sovereign as primary differentiators.
- Engineering: ensure new features keep determinism + sovereign crypto front-and-center; link reachability attestations into proof graph.
## Cross-links
- Vision: `docs/VISION.md` (Moats section)
- Architecture: `docs/ARCHITECTURE_REFERENCE.md`
- Reachability moat details: `docs/modules/reach-graph/guides/lead.md`
- Source advisory: `docs/product-advisories/23-Nov-2025 - Stella Ops vs Competitors.md`
- **Claims Citation Index**: [`docs/product/claims-citation-index.md`](claims-citation-index.md)
---
## Battlecard Appendix (snippet-ready)
### Elevator Pitches (by Audience)
| Audience | Pitch |
|----------|-------|
| **CISO/Security Leader** | "Stella Ops turns vulnerability noise into auditable decisions. Every verdict is signed, replayable, and proves *why* something is or isn't exploitable." |
| **Compliance/Audit** | "Unlike scanners that output findings, we output decisions with proof chains. Six months from now, you can replay any verdict bit-for-bit to prove what you knew and when." |
| **DevSecOps Engineer** | "Tired of triaging the same CVE across 50 images? Stella deduplicates by root cause, shows reachability proofs, and explains exactly what to fix and why." |
| **Air-gap/Regulated** | "Full offline parity with regional crypto (FIPS/GOST/SM/eIDAS). Sealed knowledge snapshots ensure your air-gapped environment produces identical results to connected." |
### One-Liners with Proof Points
| One-Liner | Proof Point | Claims |
|-----------|-------------|--------|
| *Replay or it's noise* | `stella replay srm.yaml --assert-digest <sha>` reproduces any past scan bit-for-bit | DET-001, DET-003 |
| *Signed reachability, not guesses* | Graph-level DSSE always; edge-bundle DSSE for contested paths; Rekor-backed | REACH-001, REACH-002 |
| *Sovereign-first* | FIPS/eIDAS/GOST/SM/PQC profiles as config; multi-sig with regional roots | ATT-004 |
| *Trust algebra, not suppression files* | K4 lattice merges advisories, runtime, reachability, waivers; conflicts are explicit state | VEX-001, VEX-002 |
| *Semantic risk deltas* | "Exploitability dropped 41% despite +2 CVEs", not just CVE counts | |
### Objection Handlers
| Objection | Response | Supporting Claims |
|-----------|----------|-------------------|
| "We already sign SBOMs." | Great start. But do you sign call-graphs and VEX decisions? Can you replay a scan from 6 months ago and get identical results? We do both. | DET-001, REACH-002 |
| "Cosign/Rekor is enough." | Cosign signs artifacts. We sign *decisions*. Without deterministic manifests and reachability proofs, you can sign findings but can't audit *why* a vuln was reachable. | DET-003, REACH-002 |
| "Our runtime traces show reachability." | Runtime is one signal. We fuse it with static call graphs and VEX lattice into a signed, replayable verdict. You can quarantine or dispute individual edges, not just all-or-nothing. | REACH-001, VEX-002 |
| "Snyk does reachability." | Snyk's reachability is language-limited (Java, JavaScript), SaaS-only, and unsigned. We support 6+ languages, work offline, and sign every call path with DSSE. | COMP-SNYK-002, COMP-SNYK-003, REACH-002 |
| "We use Trivy and it's free." | Trivy is excellent for broad coverage. We're for organizations that need audit-grade reproducibility, VEX reasoning, and signed proofs. Different use cases. | COMP-TRIVY-001, COMP-TRIVY-002 |
| "Can't you just add this to Trivy?" | Trivy's architecture assumes findings, not decisions. Retrofitting deterministic replay, lattice VEX, and proof chains would require fundamental rearchitecture, not just features. | |
### Demo Scenarios
| Scenario | What to Show | Command |
|----------|-------------|---------|
| **Determinism** | Run scan twice, show identical digests | `stella scan --image <img> --srm-out a.yaml && stella scan --image <img> --srm-out b.yaml && diff a.yaml b.yaml` |
| **Replay** | Replay a week-old scan, verify identical output | `stella replay srm.yaml --assert-digest <sha>` |
| **Reachability proof** | Show signed call path from entrypoint to vulnerable symbol | `stella graph show --cve CVE-XXXX-YYYY --artifact <digest>` |
| **VEX conflict** | Show lattice handling vendor vs runtime disagreement | Trust Algebra Studio UI or `stella vex evaluate --artifact <digest>` |
| **Offline parity** | Import sealed bundle, scan, compare to online result | `stella rootpack import bundle.tar.gz && stella scan --offline ...` |
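The determinism demo ultimately reduces to a digest comparison. A minimal sketch; the `same_digest` helper is hypothetical, and the commented `stella scan` flags mirror the demo table rather than verified CLI syntax:

```shell
# same_digest <file-a> <file-b>: succeed iff both files have identical
# SHA-256 digests, i.e. the two scan outputs are bit-for-bit identical.
same_digest() {
  a="$(sha256sum "$1" | cut -d' ' -f1)"
  b="$(sha256sum "$2" | cut -d' ' -f1)"
  [ -n "$a" ] && [ "$a" = "$b" ]
}

# Mirrors the determinism demo row:
#   stella scan --image "$IMG" --srm-out a.yaml
#   stella scan --image "$IMG" --srm-out b.yaml
#   same_digest a.yaml b.yaml && echo "bit-identical"
```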
### Leave-Behind Materials
- **Reachability deep-dive:** `docs/modules/reach-graph/guides/lead.md`
- **Competitive landscape:** This document
- **Proof architecture:** `docs/modules/platform/proof-driven-moats-architecture.md`
- **Key features:** `docs/key-features.md`
## Sources
- Full advisory: `docs/product-advisories/23-Nov-2025 - Stella Ops vs Competitors.md`
- Claims Citation Index: `docs/product/claims-citation-index.md`

docs/product/decision-capsules.md Normal file

@@ -0,0 +1,170 @@
# Decision Capsules — Audit-Grade Evidence Bundles
> Status: Marketing Bridge Document · December 2025
> Audience: Technical buyers, security architects, compliance teams
<!-- TODO: Review for separate approval - new marketing bridge doc -->
## Executive Summary
Stella Ops isn't just another scanner—it's a different product category: **deterministic, evidence-linked vulnerability decisions** that survive auditors, regulators, and supply-chain propagation.
**Decision Capsules** are the mechanism that makes this possible: content-addressed bundles that seal every scan result with all inputs, outputs, and evidence needed to reproduce and verify vulnerability decisions. This is the heart of audit-grade assurance—every decision becomes a provable, replayable fact.
**Key message**: "Prove every fix, audit every finding."
---
## What is a Decision Capsule?
A Decision Capsule is a signed, immutable bundle containing:
| Component | Description | Purpose |
|-----------|-------------|---------|
| **Exact SBOM** | The precise software bill of materials used for the scan | Reproducibility |
| **Vuln feed snapshots** | Frozen advisory data (NVD, OSV, GHSA, etc.) at scan time | Consistency |
| **Reachability evidence** | Static call-graph artifacts + runtime traces | Proof of analysis |
| **Policy version** | Lattice rules and threshold configuration | Explainability |
| **Derived VEX** | The vulnerability status decision with justification | Outcome |
| **DSSE signatures** | Cryptographic signatures over all contents | Integrity |
```
┌─────────────────────────────────────────────────────────────┐
│ Decision Capsule │
├─────────────────────────────────────────────────────────────┤
│ ┌─────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │ SBOM │ │ Vuln Feeds │ │ Reachability │ │
│ │ (exact) │ │ (snapshots) │ │ Evidence │ │
│ └─────────┘ └─────────────┘ └──────────────────┘ │
│ │
│ ┌─────────────┐ ┌─────────────┐ ┌──────────────────┐ │
│ │ Policy Ver │ │ Derived VEX │ │ DSSE Signatures │ │
│ │ + Lattice │ │ + Justify. │ │ (integrity) │ │
│ └─────────────┘ └─────────────┘ └──────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
---
## Why Decision Capsules Matter
### For Security Teams
- **Reproduce any finding**: Re-run a scan from 6 months ago with identical results
- **Trust the evidence**: Every decision has cryptographic proof
- **Explain to stakeholders**: Clear justification for every block/allow decision
### For Compliance Teams
- **Audit-ready artifacts**: Evidence bundles meet regulatory requirements
- **Chain of custody**: Full provenance from scan to decision
- **Tamper-evident**: Any modification breaks the signature
### For Developers
- **No "works on my machine"**: Reproducible results across environments
- **Fast debugging**: Trace exactly why a vulnerability was flagged
- **CI/CD integration**: Capsules fit into existing pipelines
---
## Competitive Differentiation
| Capability | Stella Ops | Competitors |
|------------|------------|-------------|
| **Sealed evidence** | Decision Capsules with DSSE signatures | Scan reports (mutable) |
| **Reproducibility** | Bit-for-bit replay from frozen feeds | "Re-scan" with current data |
| **Evidence linking** | Every VEX decision has proof pointers | VEX statements without proof |
| **Offline verification** | Full verification without network | Requires SaaS connection |
**Battlecard one-liner**: "Prove every fix, audit every finding—Decision Capsules seal evidence so you can replay scans bit-for-bit."
---
## Technical Details
### Capsule Format
```yaml
apiVersion: capsule.stellaops.dev/v1
metadata:
  id: "cap-2025-12-11-abc123"
  timestamp: "2025-12-11T14:30:00Z"
  scan_id: "scan-xyz789"
inputs:
  sbom:
    format: "cyclonedx@1.6"
    digest: "sha256:..."
  feeds:
    - name: "nvd"
      snapshot: "2025-12-11"
      digest: "sha256:..."
    - name: "osv"
      snapshot: "2025-12-11"
      digest: "sha256:..."
  policy:
    version: "corp-policy@2025-12-01"
    digest: "sha256:..."
  reachability:
    graph_hash: "blake3:..."
    edge_bundles: ["bundle:001", "bundle:002"]
outputs:
  vex:
    format: "openvex"
    digest: "sha256:..."
  findings:
    digest: "sha256:..."
signatures:
  - scheme: "DSSE"
    profile: "FIPS-140-3"
    signer: "build-ca@corp"
```
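To make "content-addressed" concrete: a capsule id can be derived by hashing a canonical serialization of the input digests, so any change to any sealed input yields a different capsule id. This is an illustrative Python sketch, not the shipped algorithm; the field names and digest values below are placeholders.

```python
import hashlib
import json

def capsule_digest(inputs: dict) -> str:
    """Content-address a capsule: hash a canonical (sorted-key, compact)
    JSON serialization of its input digests."""
    canonical = json.dumps(inputs, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical input digests (placeholders, not real artifacts).
inputs = {
    "sbom": "sha256:aaa...",
    "feeds": {"nvd": "sha256:bbb...", "osv": "sha256:ccc..."},
    "policy": "sha256:ddd...",
    "reachability": "blake3:eee...",
}

digest_a = capsule_digest(inputs)
# Key order is irrelevant: canonicalization makes the id deterministic.
digest_b = capsule_digest(dict(reversed(list(inputs.items()))))
assert digest_a == digest_b
```

Because the id covers every input, swapping in a newer advisory snapshot or a different policy version produces a different capsule, which is what makes tampering evident.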
### CLI Commands
```bash
# Create a capsule during scan
stella scan --image reg/app@sha256:... --capsule-out capsule.yaml
# Replay a capsule
stella replay capsule.yaml --assert-digest sha256:...
# Verify capsule integrity
stella capsule verify capsule.yaml
# Extract evidence for audit
stella capsule export capsule.yaml --format audit-bundle
```
---
## Integration with Four Capabilities
Decision Capsules connect all four capabilities:
1. **Signed Reachability** → Reachability evidence sealed in capsule
2. **Deterministic Replay** → Capsule enables bit-for-bit replay
3. **Explainable Policy** → Policy version + derived VEX in capsule
4. **Sovereign Offline** → Capsule verifiable without network
---
## Customer Scenarios
### Scenario 1: Regulatory Audit
"Show me the evidence for this CVE decision from 6 months ago."
→ Replay the Decision Capsule, get identical results, provide the signed evidence bundle.
### Scenario 2: Incident Response
"This vulnerability was marked not_affected—prove it."
→ Extract the reachability evidence from the capsule showing the vulnerable code path is not reachable.
### Scenario 3: Supply Chain Attestation
"Provide proof that this image was scanned and passed policy."
→ Share the Decision Capsule; downstream consumers can verify the signature independently.
---
## Related Documentation
- `docs/key-features.md` — Feature overview
- `docs/VISION.md` — Product vision and moats
- `docs/modules/reach-graph/guides/lattice.md` — Reachability scoring
- `docs/VEX_CONSENSUS_GUIDE.md` — VEX consensus and issuer trust
# Evidence-Linked VEX — Proof-Backed Vulnerability Decisions
> Status: Marketing Bridge Document · December 2025
> Audience: Technical buyers, security architects, compliance teams
<!-- TODO: Review for separate approval - new marketing bridge doc -->
## Executive Summary
Stella Ops isn't just another scanner—it's a different product category: **deterministic, evidence-linked vulnerability decisions** that survive auditors, regulators, and supply-chain propagation.
**Evidence-Linked VEX** is how those decisions are structured: every vulnerability status assertion includes pointers to the underlying proof. Unlike traditional VEX that simply states "not_affected" without explanation, Stella Ops provides a complete evidence graph connecting the decision to its inputs.
**Key message**: "VEX you can prove."
---
## What is Evidence-Linked VEX?
Every VEX decision in Stella Ops links to an **evidence graph** containing:
| Evidence Type | Description | Link Format |
|---------------|-------------|-------------|
| **SBOM match** | Component identity confirmation | `sbom_hash`, `purl` |
| **Vuln snapshot** | Exact advisory data at decision time | `advisory_snapshot_id` |
| **Reachability proof** | Static/runtime analysis artifacts | `reach_decision_id`, `graph_hash` |
| **Runtime observation** | Process traces, method hits | `runtime_trace_id` |
| **Mitigation evidence** | WAF rules, config flags, patches | `mitigation_id` |
```
┌────────────────────────────────────────────────────────────┐
│ VEX Decision: NOT_AFFECTED │
├────────────────────────────────────────────────────────────┤
│ │
│ "CVE-2025-1234 does not affect pkg:nuget/Example@1.2.3" │
│ │
│ Evidence Graph: │
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────┐ │
│ │ SBOM Match │───▶│ Vuln Record │───▶│ Reach Proof │ │
│ │ sha256:abc │ │ nvd-snap-01 │ │ graph:xyz │ │
│ └─────────────┘ └─────────────┘ └─────────────┘ │
│ │ │
│ ▼ │
│ ┌─────────────┐ │
│ │ Runtime Obs │ │
│ │ trace:456 │ │
│ └─────────────┘ │
└────────────────────────────────────────────────────────────┘
```
---
## Why Evidence-Linked VEX Matters
### The Problem with Traditional VEX
Traditional VEX statements are **assertions without proof**:
```json
{
  "vulnerability": "CVE-2025-1234",
  "status": "not_affected",
  "justification": "vulnerable_code_not_present"
}
```
Questions this doesn't answer:
- How do you know the vulnerable code isn't present?
- What analysis was performed?
- Can this decision be independently verified?
- What happens when the advisory changes?
### The Evidence-Linked Solution
Stella Ops VEX includes **proof pointers**:
```json
{
  "vulnerability": "CVE-2025-1234",
  "status": "not_affected",
  "justification": "vulnerable_code_not_present",
  "evidence_refs": {
    "sbom": "sha256:abc123...",
    "advisory_snapshot": "nvd-2025-12-01",
    "reachability": {
      "decision_id": "reach:xyz789",
      "graph_hash": "blake3:...",
      "score": 22,
      "state": "POSSIBLE"
    },
    "mitigations": ["mit:waf-rule-123"]
  },
  "replay_bundle": "capsule:2025-12-11-abc"
}
```
This enables:
- **Independent verification**: Anyone can follow the proof chain
- **Deterministic replay**: Re-run the exact analysis
- **Audit compliance**: Evidence meets regulatory requirements
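The verification step can be sketched as resolving each proof pointer against a content-addressed store and failing closed on any mismatch. This is a minimal Python illustration with flattened references and sha256 only; the real evidence graph also carries nested reachability and mitigation pointers.

```python
import hashlib

def sha256_ref(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

def verify_refs(evidence_refs: dict, store: dict) -> list:
    """Return the names of dangling or mismatched references; an empty
    list means the whole proof chain resolves. `store` maps digest -> bytes."""
    failures = []
    for name, digest in evidence_refs.items():
        blob = store.get(digest)
        if blob is None or sha256_ref(blob) != digest:
            failures.append(name)
    return failures

# Hypothetical artifacts standing in for a real SBOM and advisory snapshot.
sbom = b'{"components": []}'
advisory = b'{"cve": "CVE-2025-1234"}'
store = {sha256_ref(sbom): sbom, sha256_ref(advisory): advisory}
refs = {"sbom": sha256_ref(sbom), "advisory_snapshot": sha256_ref(advisory)}

assert verify_refs(refs, store) == []          # chain verifies
refs["advisory_snapshot"] = "sha256:deadbeef"  # tamper with one pointer
assert verify_refs(refs, store) == ["advisory_snapshot"]
```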
---
## Explicit "Unknown" State Handling
A key differentiator: Stella Ops explicitly handles **incomplete data**.
| Traditional Approach | Stella Ops Approach |
|---------------------|---------------------|
| Binary: affected/not_affected | Three states: affected/not_affected/under_investigation |
| Missing data → assume safe | Missing data → mark "Unknown" |
| False negatives possible | Incomplete data surfaced explicitly |
**Why this matters**: Incomplete data never leads to false safety. If reachability analysis is inconclusive, the decision stays `under_investigation` until sufficient evidence is gathered.
---
## Competitive Differentiation
| Capability | Stella Ops | Competitors |
|------------|------------|-------------|
| **VEX output** | Evidence-linked with proof graph | Simple VEX statements |
| **Verification** | Independent proof chain verification | Trust the vendor |
| **Unknown handling** | Explicit `under_investigation` state | Binary yes/no |
| **Replay** | Bit-for-bit from Decision Capsules | Not possible |
**Battlecard one-liner**: "Competitors export VEX formats; Stella provides VEX you can prove."
---
## Evidence Graph Structure
### Full Evidence Chain
```
Component (PURL)
├──▶ SBOM Document
│ └── digest: sha256:...
├──▶ Vulnerability Record
│ ├── source: NVD
│ ├── snapshot_id: nvd-2025-12-01
│ └── digest: sha256:...
├──▶ Reachability Analysis
│ ├── static_graph: blake3:...
│ ├── runtime_traces: [trace:001, trace:002]
│ ├── score: 22 (POSSIBLE)
│ └── evidence: [edge:abc, edge:def]
├──▶ Mitigations
│ ├── waf_rule: rule:xyz
│ └── config_flag: flag:disabled
└──▶ Policy Decision
├── version: corp-policy@2025-12-01
├── digest: sha256:...
└── threshold: score < 25 → not_affected
```
### Evidence Types
| Evidence Kind | Confidence Impact | Source |
|---------------|-------------------|--------|
| `StaticCallEdge` | +30 base score | IL/bytecode analysis |
| `RuntimeMethodHit` | +60 base score | EventPipe/JFR |
| `UserInputSource` | +80 base score | Taint analysis |
| `WafRulePresent` | -20 mitigation | WAF connector |
| `PatchLevel` | -40 mitigation | Patch diff |
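Assuming the weights above combine additively and clamp to the 0-100 lattice range, the scoring idea can be sketched as follows. This is a simplification: the production engine applies richer combination rules (caps, decay, corroboration), so treat the arithmetic as illustrative.

```python
# Weights taken from the table above; combination rule is an assumption.
EVIDENCE_WEIGHTS = {
    "StaticCallEdge": +30,
    "RuntimeMethodHit": +60,
    "UserInputSource": +80,
    "WafRulePresent": -20,
    "PatchLevel": -40,
}

def reachability_score(evidence_kinds: list) -> int:
    """Sum evidence contributions, clamped to the 0-100 range."""
    raw = sum(EVIDENCE_WEIGHTS.get(kind, 0) for kind in evidence_kinds)
    return max(0, min(100, raw))

assert reachability_score(["StaticCallEdge"]) == 30
assert reachability_score(["RuntimeMethodHit", "WafRulePresent"]) == 40
assert reachability_score(["PatchLevel"]) == 0  # mitigations never go negative
```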
---
## VEX Propagation
Evidence-Linked VEX enables **scalable supply chain sharing**:
1. **Generate**: Create evidence-linked VEX from scan results
2. **Sign**: Apply DSSE signatures with your trust root
3. **Share**: Propagate to downstream consumers
4. **Verify**: Consumers verify independently using proof pointers
```bash
# Generate evidence-linked VEX
stella vex generate --scan scan-123 --format openvex --include-evidence
# Export for downstream
stella vex export --decisions "decision:abc123" --bundle evidence-bundle.tar
# Downstream verification
stella vex verify evidence-bundle.tar --trust-root downstream-ca
```
---
## Integration with Four Capabilities
Evidence-Linked VEX connects to the four capabilities:
1. **Signed Reachability** → Reachability proof in evidence graph
2. **Deterministic Replay** → Evidence reproducible via Decision Capsules
3. **Explainable Policy** → Policy version and thresholds traced
4. **Sovereign Offline** → Evidence verifiable without network
---
## Customer Scenarios
### Scenario 1: Vendor VEX Verification
"The vendor says this CVE doesn't affect us—can we trust it?"
→ Check their evidence graph; verify the reachability analysis matches your deployment.
### Scenario 2: Compliance Audit
"Prove this CVE was properly analyzed."
→ Show the evidence chain from SBOM → advisory → reachability → decision.
### Scenario 3: Supply Chain Propagation
"Pass our VEX decisions to downstream consumers."
→ Export evidence-linked VEX; consumers can independently verify.
---
## Related Documentation
- `docs/VEX_CONSENSUS_GUIDE.md` — VEX consensus and issuer trust
- `docs/modules/reach-graph/guides/lattice.md` — Reachability scoring model
- `docs/product/decision-capsules.md` — Decision Capsules overview
- `docs/product/hybrid-reachability.md` — Hybrid analysis
# Hybrid Reachability — Static + Runtime Analysis
> Status: Marketing Bridge Document · December 2025
> Audience: Technical buyers, security architects, compliance teams
<!-- TODO: Review for separate approval - new marketing bridge doc -->
## Executive Summary
Stella Ops isn't just another scanner—it's a different product category: **deterministic, evidence-linked vulnerability decisions** that survive auditors, regulators, and supply-chain propagation.
**Hybrid Reachability** is how we achieve accurate impact analysis: combining static call-graph analysis with runtime process tracing to determine whether vulnerable code is actually reachable. Both edge types are separately attestable with DSSE signatures, providing true hybrid analysis with cryptographic proof.
**Key message**: "True hybrid reachability—static and runtime signals share one verdict."
---
## What is Hybrid Reachability?
Traditional reachability analysis uses either:
- **Static analysis**: Examines code without executing it (call graphs, data flow)
- **Runtime analysis**: Observes actual execution (method hits, stack traces)
Stella Ops uses **both** and reconciles them into a unified reachability decision:
```
┌─────────────────────────────────────────────────────────────┐
│ Hybrid Reachability │
├────────────────────────┬────────────────────────────────────┤
│ Static Analysis │ Runtime Analysis │
├────────────────────────┼────────────────────────────────────┤
│ • IL/bytecode walkers │ • .NET EventPipe │
│ • ASP.NET routing │ • JVM JFR │
│ • Call-graph edges │ • Node inspector │
│ • Entry-point prox. │ • Go/Rust probes │
├────────────────────────┴────────────────────────────────────┤
│ │
│ Lattice Engine │
│ ┌─────────────────────────────────────┐ │
│ │ Merge signals → Score → VEX status │ │
│ └─────────────────────────────────────┘ │
│ │
│ ┌─────────────────────────────────────┐ │
│ │ DSSE Attestation (Graph + Edges) │ │
│ └─────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
```
---
## Why Hybrid Matters
### Limitations of Static-Only Analysis
| Issue | Description | Impact |
|-------|-------------|--------|
| **Over-reporting** | Reports paths that never execute | Alert fatigue |
| **Dynamic dispatch** | Misses reflection, DI, runtime binding | False negatives |
| **Framework magic** | ASP.NET/Spring routing not fully modeled | Incomplete coverage |
| **Configuration** | Doesn't account for runtime config | Context-blind |
### Limitations of Runtime-Only Analysis
| Issue | Description | Impact |
|-------|-------------|--------|
| **Coverage gaps** | Only sees executed paths | Misses rare paths |
| **Environment-specific** | Results vary by test coverage | Non-deterministic |
| **No proactive detection** | Requires traffic to observe | Reactive, not preventive |
| **Attack surface** | May miss dormant vulnerabilities | Security risk |
### Hybrid Solution
| Signal Type | Strength | Weakness | Hybrid Benefit |
|-------------|----------|----------|----------------|
| Static | Comprehensive coverage | Over-reports | Runtime filters false positives |
| Runtime | Ground truth | Incomplete | Static catches unexercised paths |
**Result**: Higher confidence with lower false positive/negative rates.
---
## Reachability Lattice
Stella Ops uses a **confidence lattice** with explicit states:
```
UNOBSERVED (0-9)
  < POSSIBLE (10-29)
  < STATIC_PATH (30-59)
  < DYNAMIC_SEEN (60-79)
  < DYNAMIC_USER_TAINTED (80-99)
  < EXPLOIT_CONSTRAINTS_REMOVED (100)
```
| State | Evidence Required | VEX Mapping |
|-------|-------------------|-------------|
| UNOBSERVED | None | under_investigation |
| POSSIBLE | Lockfile-only | under_investigation |
| STATIC_PATH | Static call-graph edge | under_investigation |
| DYNAMIC_SEEN | Runtime method hit | affected |
| DYNAMIC_USER_TAINTED | User input reaches vuln | affected |
| EXPLOIT_CONSTRAINTS_REMOVED | Full exploit chain | affected |
**Key feature**: The `under_investigation` state explicitly handles incomplete data—Stella never marks something "safe" without sufficient evidence.
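The score-to-state-to-VEX mapping above can be sketched in a few lines. The band boundaries (0-9, 10-29, 30-59, 60-79, 80-99, 100) are read off the lattice; the code is illustrative, not the engine's implementation.

```python
# (low, high, lattice state, VEX status) per the lattice and table above.
BANDS = [
    (0, 9, "UNOBSERVED", "under_investigation"),
    (10, 29, "POSSIBLE", "under_investigation"),
    (30, 59, "STATIC_PATH", "under_investigation"),
    (60, 79, "DYNAMIC_SEEN", "affected"),
    (80, 99, "DYNAMIC_USER_TAINTED", "affected"),
    (100, 100, "EXPLOIT_CONSTRAINTS_REMOVED", "affected"),
]

def classify(score: int) -> tuple:
    """Map a 0-100 reachability score to its lattice state and VEX status."""
    for low, high, state, vex in BANDS:
        if low <= score <= high:
            return state, vex
    raise ValueError(f"score out of lattice range: {score}")

assert classify(22) == ("POSSIBLE", "under_investigation")
assert classify(60) == ("DYNAMIC_SEEN", "affected")
```

Note that every band below the runtime threshold maps to `under_investigation`, never to `not_affected`: that is the "no false safety" rule in executable form.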
---
## Attestation Model
Both static and runtime edges are attestable:
### Graph-Level Attestation (Required)
```yaml
level: 0
payload: richgraph-v1
signature: DSSE
storage: cas://reachability/graphs/{blake3}
rekor: always
```
### Edge-Bundle Attestation (Selective)
```yaml
level: 1
payload: edge-bundle (≤512 edges)
criteria:
- source: runtime
- source: init_array/constructors
- status: contested/quarantined
signature: DSSE
storage: cas://reachability/edges/{graph_hash}/{bundle_id}
rekor: configurable
```
This enables:
- **Prove specific paths**: Attest individual runtime-observed edges
- **Dispute resolution**: Quarantine/revoke specific edges
- **Offline verification**: Verify without network access
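For readers unfamiliar with DSSE: the signature covers a pre-authentication encoding (PAE) of the payload type and payload, which length-prefixes both so bytes cannot be shifted between them. The sketch below uses HMAC as a stand-in for the real asymmetric signature, and the payload type string is a hypothetical example, not a documented Stella Ops media type.

```python
import hashlib
import hmac

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE pre-authentication encoding: 'DSSEv1' plus length-prefixed
    payload type and payload."""
    t = payload_type.encode("utf-8")
    return b"DSSEv1 %d %s %d %b" % (len(t), t, len(payload), payload)

def sign(key: bytes, payload_type: str, payload: bytes) -> str:
    # HMAC stands in for the real asymmetric DSSE signature (illustration).
    return hmac.new(key, pae(payload_type, payload), hashlib.sha256).hexdigest()

key = b"demo-key"
ptype = "application/vnd.example.edge-bundle+json"  # hypothetical media type
sig = sign(key, ptype, b'{"edges": []}')
sig2 = sign(key, ptype, b'{"edges": [1]}')
assert sig != sig2  # any change to the payload invalidates the signature
```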
---
## Competitive Differentiation
| Capability | Stella Ops | Competitors |
|------------|------------|-------------|
| **Analysis type** | Hybrid (static + runtime) | Usually one or the other |
| **Attestation** | DSSE for both edge types | None or graph-only |
| **Unknown handling** | Explicit `under_investigation` | Binary yes/no |
| **Edge-level proof** | Selective edge-bundle DSSE | Not available |
**Battlecard one-liner**: "Static analysis sees code paths; runtime sees execution. Stella sees both—and proves it."
---
## Signal Sources
### Static Analysis Signals
| Signal | Source | Evidence Kind |
|--------|--------|---------------|
| Call-graph edges | Roslyn/IL walkers | `StaticCallEdge` |
| Entry-point proximity | Framework routing models | `StaticEntryPointProximity` |
| Package declarations | Lockfile/manifest | `StaticPackageDeclaredOnly` |
### Runtime Analysis Signals
| Signal | Source | Evidence Kind |
|--------|--------|---------------|
| Method hits | .NET EventPipe | `RuntimeMethodHit` |
| Stack samples | JVM JFR | `RuntimeStackSample` |
| HTTP routes | ASP.NET/Spring routing | `RuntimeHttpRouteHit` |
| User input | Taint analysis | `UserInputSource` |
### Mitigation Signals
| Signal | Source | Effect |
|--------|--------|--------|
| WAF rules | WAF connectors | Score reduction |
| Config flags | Config snapshot | Score reduction |
| Network isolation | Container policy | Score reduction |
---
## Integration with Four Capabilities
Hybrid Reachability is **Capability #1** of four:
1. **Signed Reachability** ← This document
2. **Deterministic Replay** → Reachability evidence in Decision Capsules
3. **Explainable Policy** → Reachability feeds the lattice VEX engine
4. **Sovereign Offline** → All analysis verifiable without network
---
## Customer Scenarios
### Scenario 1: False Positive Reduction
"We're drowning in vulnerability alerts."
→ Hybrid analysis shows 70% of reported CVEs have no reachable path; focus on the 30% that matter.
### Scenario 2: Runtime Validation
"Static analysis says this is reachable—is it really?"
→ Runtime probes observed 0 hits over 30 days; downgrade to `under_investigation`.
### Scenario 3: Audit Proof
"Prove the vulnerable code path is not reachable."
→ Show the signed reachability graph with static call-graph (no path) + runtime traces (no hits).
### Scenario 4: Contested Edge
"We disagree with this reachability finding."
→ Mark the edge as disputed; policy excludes it; recompute reachability; surface the delta.
---
## CLI Integration
```bash
# Scan with hybrid reachability
stella scan --image reg/app@sha256:... --reachability hybrid
# Verify reachability graph
stella graph verify --graph blake3:abc123
# Show reachability decision for a CVE
stella reach show --cve CVE-2025-1234 --component pkg:nuget/Example@1.2.3
# Export edge bundles for audit
stella reach export --graph blake3:abc123 --bundles-only
```
---
## Related Documentation
- `docs/modules/reach-graph/guides/hybrid-attestation.md` — Attestation technical details
- `docs/modules/reach-graph/guides/lattice.md` — Scoring model
- `docs/product/decision-capsules.md` — Decision Capsules overview
- `docs/product/evidence-linked-vex.md` — Evidence-linked VEX
# StellaOps Moat Strategy Summary
**Date**: 2026-01-03
**Source**: Product Advisories (19-Dec-2025 Moat Series), Competitive Analysis (Jan 2026)
**Status**: DOCUMENTED
---
## Executive Summary
> **Core Thesis:** Stella Ops isn't a scanner that outputs findings. It's a platform that outputs **attestable decisions that can be replayed**.
StellaOps' competitive moats are built on **decision integrity**—deterministic, attestable, replayable security verdicts—rather than on individual scanner features. This is a category difference, not a feature gap.
### The Category Shift
| Traditional Scanners | Stella Ops |
|---------------------|------------|
| Output findings | Output decisions |
| VEX as suppression | VEX as logical claims |
| Reachability as badge | Reachability as proof |
| CVE counts | Semantic risk deltas |
| Hide unknowns | Surface and score unknowns |
| Online-first | Offline-first with parity |
## Moat Strength Rankings
### Understanding the Scale
| Level | Definition | Defensibility |
|-------|------------|---------------|
| **5** | Structural moat | New primitives, strong defensibility, durable switching cost. Requires fundamental rearchitecture to replicate. |
| **4** | Strong moat | Difficult multi-domain engineering. Incumbents have partial analogs but retrofitting is expensive. |
| **3** | Moderate moat | Others can build. Differentiation is execution + packaging. |
| **2** | Weak moat | Table-stakes soon. Limited defensibility. |
| **1** | Commodity | Widely available in OSS or easy to replicate. |
### Ranked Capabilities
| Level | Capability | Why It's Defensible | Module(s) | Status |
|-------|-----------|---------------------|-----------|--------|
| **5** | Signed, replayable risk verdicts | Requires deterministic eval + proof schema + knowledge snapshots + frozen feeds. No competitor has this architecture. | `Attestor`, `ReplayVerifier`, `Scanner` | Implemented |
| **4** | VEX decisioning (K4 lattice) | Formal conflict resolution using Belnap logic. Requires rethinking VEX from suppression to claims. | `VexLens`, `TrustLatticeEngine`, `Excititor` | Implemented |
| **4** | Reachability with proofs | Three-layer (static + binary + runtime) with DSSE-signed call paths. Not "potentially reachable" but "here's the proof." | `ReachGraph`, `Scanner.VulnSurfaces`, `PathWitnessBuilder` | Implemented |
| **4** | Smart-Diff (semantic risk delta) | Graph-based diff over reachability + VEX. Outputs meaning ("exploitability dropped 41%"), not numbers ("+3 CVEs"). | `MaterialRiskChangeDetector`, `Scanner.ReachabilityDrift` | Implemented |
| **4** | Unknowns as first-class state | Uncertainty budgets, bands (HOT/WARM/COLD), decay algorithms, policy gates. | `Policy`, `Signals`, `UnknownStateLedger` | Implemented |
| **4** | Air-gapped epistemic mode | Sealed knowledge snapshots, offline reproducibility, regional crypto (GOST/SM/eIDAS). | `AirGap.Controller`, `CryptoProfile`, `RootPack` | Implemented |
| **3** | SBOM ledger + lineage | Table stakes; differentiated via semantic diff + evidence joins + deterministic generation. | `SbomService`, `BinaryIndex` | Implemented |
| **3** | Policy engine with proofs | Common; moat is proof output + deterministic replay + K4 integration. | `Policy`, `TrustLatticeEngine` | Implemented |
| **1-2** | Integrations | Necessary but not defensible. Anyone can build CI/CD plugins. | Various | Ongoing |
## Core Moat Thesis (One-Liners)
Use these in sales conversations, marketing materials, and internal alignment.
| Capability | One-Liner | What It Actually Means |
|-----------|-----------|------------------------|
| **Deterministic verdicts** | "We don't output findings; we output attestable decisions that can be replayed." | Given identical inputs, Stella produces identical outputs. `stella replay srm.yaml` reproduces any past scan bit-for-bit. |
| **VEX decisioning** | "We treat VEX as a logical claim system, not a suppression file." | K4 lattice (Unknown/True/False/Conflict) aggregates multiple VEX sources. Conflicts are explicit state, not hidden. |
| **Reachability proofs** | "We provide proof of exploitability in *this* artifact, not just a badge." | Three-layer reachability with DSSE-signed call paths. Not "potentially reachable" but "here's the exact path from entrypoint to vuln." |
| **Smart-Diff** | "We explain what changed in exploitable surface area, not what changed in CVE count." | Output: "Exploitability dropped 41% despite +2 CVEs." Semantic meaning, not raw numbers. |
| **Unknowns modeling** | "We quantify uncertainty and gate on it." | Unknowns have bands (HOT/WARM/COLD), decay algorithms, and policy budgets. Uncertainty is risk—we surface and score it. |
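The K4 claim aggregation described above can be sketched as Belnap's join in the knowledge ordering: unknown contributes nothing, agreement sticks, and disagreement surfaces as an explicit conflict. A minimal illustration (not the TrustLatticeEngine implementation, which also weighs issuer trust):

```python
UNKNOWN, TRUE, FALSE, CONFLICT = "unknown", "true", "false", "conflict"

def k4_join(a: str, b: str) -> str:
    """Belnap knowledge-order join over the four K4 values."""
    if a == b:
        return a
    if a == UNKNOWN:
        return b
    if b == UNKNOWN:
        return a
    return CONFLICT  # true vs. false (or anything vs. conflict) is conflict

def aggregate(claims: list) -> str:
    """Fold multiple VEX source claims into one lattice state."""
    state = UNKNOWN
    for claim in claims:
        state = k4_join(state, claim)
    return state

assert aggregate([]) == UNKNOWN
assert aggregate([TRUE, UNKNOWN, TRUE]) == TRUE
assert aggregate([TRUE, FALSE]) == CONFLICT  # explicit state, never hidden
```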
## Implementation Status
### Core Moats (All Implemented)
| Capability | Key Modules | Evidence |
|-----------|-------------|----------|
| **Signed verdicts** | `Attestor`, `Signer`, `ReplayVerifier` | DSSE envelopes, SRM manifests, bit-for-bit replay |
| **VEX decisioning (K4)** | `VexLens`, `TrustLatticeEngine` | 110+ tests passing; CycloneDX/OpenVEX/CSAF normalizers |
| **Reachability proofs** | `ReachGraph`, `PathWitnessBuilder` | DSSE-signed graphs; edge-bundle attestations |
| **Smart-Diff** | `MaterialRiskChangeDetector`, `RiskStateSnapshot` | R1-R4 rules; priority scoring; SARIF output |
| **Unknowns modeling** | `UnknownStateLedger`, `Policy` | Bands (HOT/WARM/COLD); decay algorithms |
| **Air-gapped mode** | `AirGap.Controller`, `RootPack` | Sealed snapshots; regional crypto |
| **Binary backport** | `Feedser`, `BinaryIndex`, `SourceIntel` | Tier 1-3 complete; Tier 4 (binary fingerprinting) in progress |
### Moat Enhancement Roadmap
| Enhancement | Priority | Sprint Coverage |
|-------------|----------|-----------------|
| OCI-attached verdict attestations | P0 | 4300_0001_0001 |
| One-command audit replay CLI | P0 | 4300_0001_0002 |
| VEX Hub aggregation layer | P1 | 4500_0001_* |
| Trust scoring of VEX sources | P1 | 4500_0001_0002 |
| Tier 4 binary fingerprinting | P1 | 7204-7206 |
| SBOM historical lineage | P2 | 4600_0001_* |
## Competitor Positioning
### Where to Compete (and How)
| Competitor | Their Strength | Don't Compete On | Win With |
|-----------|----------------|------------------|----------|
| **Snyk** | Developer UX, fix PRs, onboarding | Adoption velocity | Proof-carrying reachability, offline capability, attestation chain |
| **Prisma Cloud** | CNAPP breadth, graph investigation | Platform completeness | Decision integrity, deterministic replay, semantic diff |
| **Anchore** | SBOM operations maturity | SBOM storage | Lattice VEX, signed reachability, proof chains |
| **Aqua/Trivy** | Runtime protection, broad coverage | Ecosystem breadth | Forensic reproducibility, K4 logic, regional crypto |
### Our Winning Positions
| Position | What It Means | Proof Point |
|----------|--------------|-------------|
| **Decision integrity** | Every verdict is deterministic, attestable, and replayable | `stella replay srm.yaml --assert-digest <sha>` |
| **Proof portability** | Evidence bundles work offline and survive audits | Decision Capsules with sealed SBOM/VEX/reachability/policy |
| **Semantic change control** | Risk deltas show meaning, not numbers | "Exploitability dropped 41% despite +2 CVEs" |
| **Sovereign deployment** | Self-hosted, regional crypto, air-gap parity | GOST/SM/eIDAS profiles; RootPack bundles |
### Where We're Ahead
1. **VEX decisioning** — K4 lattice with conflict detection; no competitor has this
2. **Smart-Diff** — Semantic risk deltas with priority scoring; unique
3. **Signed reachability** — DSSE graphs + edge bundles; unique
4. **Deterministic replay** — Bit-for-bit reproducibility; unique
5. **Regional crypto** — FIPS/eIDAS/GOST/SM/PQC; unique
### Where Competitors Lead (For Now)
| Area | Competitor Lead | Our Response |
|------|-----------------|--------------|
| Mass-market UX polish | Snyk | Focus on power users who need proofs |
| SaaS onboarding friction | Snyk, Prisma | Offer both SaaS and self-hosted |
| Marketplace integrations | All major players | Prioritize based on customer demand |
| Ecosystem breadth | Trivy | Focus on depth over breadth |
---
## Quick Reference
### Key Documents
- **Competitive Landscape**: `docs/product/competitive-landscape.md`
- **Claims Index**: `docs/product/claims-citation-index.md`
- **Proof Architecture**: `docs/modules/platform/proof-driven-moats-architecture.md`
- **Key Features**: `docs/key-features.md`
- **Moat Gap Analysis**: `docs/modules/platform/moat-gap-analysis.md`
### Key Commands (Demo-Ready)
```bash
# Determinism proof
stella scan --image <img> --srm-out a.yaml
stella scan --image <img> --srm-out b.yaml
diff a.yaml b.yaml # Identical
# Replay proof
stella replay srm.yaml --assert-digest <sha>
# Reachability proof
stella graph show --cve CVE-XXXX-YYYY --artifact <digest>
# VEX evaluation
stella vex evaluate --artifact <digest>
# Offline scan
stella rootpack import bundle.tar.gz
stella scan --offline --image <digest>
```
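What the determinism and `--assert-digest` checks rely on can be sketched as canonical serialization plus a digest comparison: identical inputs must serialize to identical bytes regardless of in-memory ordering. The field names below are illustrative; the real SRM layout differs.

```python
import hashlib
import json

def srm_digest(manifest: dict) -> str:
    """Digest a result manifest after canonicalization (sorted keys, no
    incidental whitespace), so two runs compare bit-for-bit."""
    blob = json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()

# Two runs with the same content but different key ordering.
run_a = {"findings": [{"cve": "CVE-2025-1234", "state": "POSSIBLE"}],
         "sbom": "sha256:aaa"}
run_b = {"sbom": "sha256:aaa",
         "findings": [{"state": "POSSIBLE", "cve": "CVE-2025-1234"}]}
assert srm_digest(run_a) == srm_digest(run_b)

def assert_digest(manifest: dict, expected: str) -> None:
    """Fail loudly on replay drift, as --assert-digest would."""
    actual = srm_digest(manifest)
    if actual != expected:
        raise AssertionError(f"replay drift: {actual} != {expected}")

assert_digest(run_a, srm_digest(run_b))  # replay matches: no exception
```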
---
**Last Updated**: 2026-01-03
# Reachability Benchmark Launch (BENCH-LAUNCH-513-017)
## Audience
- Security engineering and platform teams evaluating reachability analysis tools.
- Benchmark participants (vendors, OSS maintainers) who need deterministic scoring.
## Positioning
- **Deterministic by default:** fixed seeds, SOURCE_DATE_EPOCH builds, sorted outputs.
- **Offline ready:** no registry pulls or telemetry; baselines run without network.
- **Explainable:** truth sets include static/dynamic evidence; scorer rewards path + guards.
- **Vendor-neutral:** Semgrep / CodeQL / Stella baselines provided for comparison.
## What's included
- Cases across JS, Python, C (Java pending JDK availability).
- Schemas for cases, entrypoints, truth, and submissions.
- Baselines: Semgrep, CodeQL, Stella (offline).
- Tooling: scorer (`rb-score`), leaderboard (`rb-compare`), deterministic CI script (`ci/run-ci.sh`).
- Static site (`website/`) for quick start + leaderboard view.
## How to try it
```bash
# Build and validate
python tools/build/build_all.py --cases cases
python tools/validate.py --schemas schemas
# Run baselines (offline)
bash baselines/semgrep/run_all.sh cases /tmp/semgrep
bash baselines/stella/run_all.sh cases /tmp/stella
bash baselines/codeql/run_all.sh cases /tmp/codeql
# Score your submission
tools/scorer/rb_score.py --truth benchmark/truth/<aggregate>.json --submission submission.json --format json
```
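A toy version of what a scorer like `rb-score` computes is shown below: compare claimed-reachable findings against the truth set and emit deterministic metrics (sorted iteration, pure arithmetic, no randomness). The case ids are hypothetical, and the real scorer additionally rewards path and guard evidence.

```python
def score_submission(truth: dict, submission: dict) -> dict:
    """Precision/recall over reachable-vs-not claims; fully deterministic."""
    truth_pos = {k for k, reachable in sorted(truth.items()) if reachable}
    claimed = {k for k, reachable in sorted(submission.items()) if reachable}
    tp = len(truth_pos & claimed)   # correctly claimed reachable
    fp = len(claimed - truth_pos)   # claimed reachable, actually not
    fn = len(truth_pos - claimed)   # missed reachable cases
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}

# Hypothetical case ids for illustration.
truth = {"case-js-001": True, "case-py-002": False, "case-c-003": True}
submission = {"case-js-001": True, "case-py-002": True, "case-c-003": False}
result = score_submission(truth, submission)
assert result == {"tp": 1, "fp": 1, "fn": 1, "precision": 0.5, "recall": 0.5}
```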
## Key dates
- 2025-12-01: Public beta (v1.0.0 schemas, JS/PY/C cases, offline baselines).
- 2025-12-15 (target): Add Java track once JDK available in CI.
- Quarterly: hidden set rotation + leaderboard refresh.
## Calls to action
- Vendors: submit offline-reproducible `submission.json` for inclusion on the public leaderboard.
- Practitioners: run baselines locally to benchmark internal pipelines.
- OSS: propose new cases via PR; follow determinism checklist in `docs/submission-guide.md`.
## Risks & mitigations
- **Java track blocked (JDK)** — mitigated by providing a runner with JDK >= 17; until then, Java is excluded from CI.
- **Hidden set leakage** — governed by rotation policy in `docs/governance.md`; no public release of hidden cases.
- **Telemetry drift** — all runner scripts disable telemetry by env; reviewers verify no network calls.
# Roadmap (detailed)
This folder expands `docs/ROADMAP.md` into evidence-oriented guidance that stays valid even when timelines shift.
Scheduling and staffing live outside the documentation layer; this roadmap stays date-free on purpose.
## Documents
- `docs/roadmap/maturity-model.md` — Capability maturity levels and the evidence expected at each level.
## Canonical references by area
- Architecture overview: `docs/ARCHITECTURE_OVERVIEW.md`
- Offline posture and workflows: `docs/OFFLINE_KIT.md`, `docs/modules/airgap/guides/overview.md`
- Determinism principles: `docs/key-features.md`, `docs/testing/connector-fixture-discipline.md`
- Security boundaries and roles: `docs/security/scopes-and-roles.md`, `docs/security/tenancy-overview.md`
# Capability maturity model
This document defines what "shipped" means for StellaOps capabilities. Each area progresses through the same maturity levels; the concrete evidence differs by domain.
## Maturity levels
| Level | Meaning | Evidence posture |
| --- | --- | --- |
| **Foundation** | Works end-to-end with deterministic outputs. | Golden fixtures, stable ordering, replay-friendly artifacts. |
| **Hardened** | Safe for regulated environments. | Isolation boundaries, audit trail, reproducible upgrades, operational runbooks. |
| **Sovereign** | Crypto + operations are independent by default. | Bring-your-own trust roots, offline bundles, configurable crypto profiles. |
| **Ecosystem** | Extensible and integrable without losing determinism. | Stable plugin/SDK contracts, compatibility suites, offline distribution story. |
## Scanning & SBOM
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | Deterministic SBOM generation and stable identifiers. | Fixture-backed scans producing byte-stable SBOMs and normalized findings. |
| Hardened | Deterministic "replay" of scans and decisions. | Replay test vectors and a documented, versioned artifact layout. |
| Sovereign | Offline-ready feeds and trust roots. | Fully air-gapped scan runbook and importer/controller workflows. |
| Ecosystem | Extensible analyzers and outputs. | Compatibility tests for plugins and exporters; no network required. |
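"Byte-stable SBOMs" at the Foundation level means two scans of the same input hash to identical bytes regardless of discovery order. A minimal sketch of how that property is achieved and verified — the field names and `canonical_sbom_bytes` helper are illustrative, not the StellaOps artifact layout:

```python
import hashlib
import json

def canonical_sbom_bytes(components):
    """Serialize an SBOM-like document deterministically: components
    sorted by a stable key, dict keys sorted, compact separators, so
    the bytes never depend on discovery or insertion order."""
    doc = {
        "components": sorted(components, key=lambda c: (c["purl"], c["version"])),
    }
    return json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()

run_a = [{"purl": "pkg:npm/left-pad", "version": "1.3.0"},
         {"purl": "pkg:npm/acorn", "version": "8.0.0"}]
run_b = list(reversed(run_a))  # same scan result, different walk order

digest_a = hashlib.sha256(canonical_sbom_bytes(run_a)).hexdigest()
digest_b = hashlib.sha256(canonical_sbom_bytes(run_b)).hexdigest()
```

Golden fixtures then pin the expected digest, turning "stable ordering" from a convention into a failing test.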
## Advisory ingestion
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | Normalizers and deterministic merges into canonical stores. | Repeatable ingestion runs with stable IDs and ordering. |
| Hardened | Schema validation and drift controls. | Locked schemas, test fixtures, and failure modes documented. |
| Sovereign | Mirror-first and offline bundle imports. | Offline bundle format documented; import determinism verified. |
| Ecosystem | Connector library growth without regressions. | Connector conformance suite and fixture discipline. |
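The "stable IDs" requirement above implies identifiers derived purely from canonical advisory fields, so re-ingesting the same advisory always yields the same ID. A sketch under assumed field names (the real canonical store schema may differ):

```python
import hashlib
import json

def stable_advisory_id(source, upstream_id, schema_version="1"):
    """Derive a deterministic identifier from canonical fields only.
    Including a schema version lets the ID space rotate intentionally
    instead of drifting silently."""
    canon = json.dumps(
        {"source": source, "upstream_id": upstream_id, "v": schema_version},
        sort_keys=True, separators=(",", ":"),
    )
    return "adv-" + hashlib.sha256(canon.encode()).hexdigest()[:16]
```

Because the ID is a pure function of its inputs, repeatable ingestion runs can be asserted byte-for-byte in fixtures.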
## VEX & verdicts
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | OpenVEX ingestion and stable verdict outcomes. | Deterministic merges, explainable reasoning, stable verdict IDs. |
| Hardened | Trust model and audit trail. | Trust lattice rules documented; replay tests for merges/verdicts. |
| Sovereign | Bring-your-own trust roots and issuer governance. | Offline trust root provisioning and rotation procedures. |
| Ecosystem | Multiple issuer ecosystems and integrations. | Compatibility tests and validated importer adapters. |
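"Deterministic merges" of VEX statements means the winning verdict per (vulnerability, product) pair is independent of input order. A toy trust-lattice merge, assuming hypothetical statement fields and a simple issuer ranking — the actual trust model is defined in the trust lattice docs:

```python
from collections import defaultdict

def merge_vex(statements, trust_rank):
    """Pick one verdict per (vuln, product): highest-trust issuer wins,
    ties broken by newest timestamp and then issuer name, so the result
    never depends on the order statements arrive in."""
    groups = defaultdict(list)
    for s in statements:
        groups[(s["vuln"], s["product"])].append(s)
    merged = {}
    for key, group in groups.items():
        winner = max(group, key=lambda s: (trust_rank[s["issuer"]],
                                           s["timestamp"], s["issuer"]))
        merged[key] = winner["status"]
    return merged
```

The total ordering in the `max` key is what makes the verdict replayable: every tie-break is explicit.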
## Policy engine
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | Deterministic policy evaluation with consistent precedence. | Policy packs + golden decisions with stable ordering. |
| Hardened | Audit-grade policy traces. | Decision trace artifacts and replay tests for policy outputs. |
| Sovereign | Operator-controlled policy distribution. | Offline pack distribution and verification story. |
| Ecosystem | Policy contracts for third parties. | Compatibility suite and safe upgrade policy guarantees. |
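"Consistent precedence" at the Foundation level means rule order is explicit and the first match decides, so adding unrelated rules cannot flip earlier decisions. A minimal sketch with hypothetical rule and finding shapes, not the StellaOps policy language:

```python
def evaluate(finding, rules):
    """Evaluate rules in explicit precedence order (lowest number first,
    rule id as a deterministic tie-break); the first match decides."""
    for rule in sorted(rules, key=lambda r: (r["precedence"], r["id"])):
        if rule["match"](finding):
            return rule["decision"], rule["id"]
    return "default-allow", None
```

Returning the winning rule id alongside the decision is what makes the audit-grade trace at the Hardened level cheap: the explanation falls out of evaluation.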
## Offline kit & air-gap workflows
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | Documented offline concepts and supported workflows. | `docs/OFFLINE_KIT.md` plus importer/controller docs and examples. |
| Hardened | Deterministic imports and verified indexes. | Byte-stable indexes with reproducible hash outputs across machines. |
| Sovereign | Independent trust anchors and mirrors. | Trust-root provisioning docs and an air-gapped "day-2 ops" runbook. |
| Ecosystem | Third-party bundles and toolchain integrations. | Conformance tests and offline bundle validation tooling. |
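"Reproducible hash outputs across machines" for bundle indexes requires hashing entries in a sorted, filesystem-independent order. A sketch over an in-memory path-to-bytes mapping; the real index format and digest framing are assumptions here:

```python
import hashlib

def index_digest(files):
    """files: mapping of relative path -> bytes. Hash paths and contents
    in sorted path order, with separators, so the digest is byte-stable
    regardless of filesystem enumeration order."""
    h = hashlib.sha256()
    for path in sorted(files):
        h.update(path.encode())
        h.update(b"\0")
        h.update(files[path])
        h.update(b"\0")
    return h.hexdigest()
```

Two operators importing the same bundle on different machines can then compare a single digest instead of diffing trees.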
## Operations, observability, and security
| Level | What exists | Minimum evidence |
| --- | --- | --- |
| Foundation | Clear service boundaries and deployment profiles. | Compose profiles and documented defaults. |
| Hardened | Runbooks, dashboards, and incident workflows. | Offline-importable dashboards and operational checklists. |
| Sovereign | Crypto agility and least-privilege by default. | Configurable crypto profiles and role/scopes documentation. |
| Ecosystem | Stable operator and SDK surfaces. | Versioned APIs and compatibility guarantees. |