git.stella-ops.org/docs/product/competitive-landscape.md
2026-01-17 12:32:01 +02:00
# Competitive Landscape
> **TL;DR:** Stella Ops Suite isn't a scanner or a deployment tool—it's a **release control plane** that gates releases using reachability-aware security and produces **attestable decisions that can be replayed**. Non-Kubernetes container estates finally get a central release authority.
Source: internal advisories "23-Nov-2025 - Stella Ops vs Competitors" and "09-Jan-2026 - Stella Ops Pivot", updated Jan 2026. This summary covers both release orchestration and security positioning.
---
## The New Category: Release Control Plane
**Stella Ops Suite** occupies a unique position by combining:
- Release orchestration (promotions, approvals, workflows)
- Security decisioning as a gate (not a blocker)
- Non-Kubernetes target specialization
- Evidence-linked decisions with deterministic replay
### Why Competitors Can't Easily Catch Up (Release Orchestration)
| Category | Representatives | What They Optimized For | Why They Can't Easily Catch Up |
|----------|----------------|------------------------|-------------------------------|
| **CI/CD Tools** | GitHub Actions, Jenkins, GitLab CI | Running pipelines, build automation | No central release authority; no audit-grade evidence; deployment is afterthought |
| **CD Orchestrators** | Octopus, Harness, Spinnaker | Deployment automation, Kubernetes | Security is bolt-on; non-K8s is second-class; pricing punishes automation |
| **Registries** | Harbor, JFrog Artifactory | Artifact storage, scanning | No release governance; no promotion workflows; no deployment execution |
| **Scanners/CNAPP** | Trivy, Snyk, Aqua | Vulnerability detection | No release orchestration; findings don't integrate with promotion gates |
### Stella Ops Suite Positioning
| vs. Category | Why Stella Wins |
|--------------|-----------------|
| **vs. CI/CD tools** | They run pipelines; we provide central release authority with audit-grade evidence |
| **vs. CD orchestrators** | They bolt on security; we integrate it as gates. They punish automation with per-project pricing; we don't |
| **vs. Registries** | They store and scan; we govern releases and orchestrate deployments |
| **vs. Scanners** | They output findings; we output release decisions with evidence packets |
### Unique Differentiators (Release Orchestration)
| Differentiator | What It Means |
|----------------|---------------|
| **Non-Kubernetes Specialization** | Docker hosts, Compose, ECS, Nomad are first-class—not afterthoughts |
| **Digest-First Release Identity** | Releases are immutable OCI digests, not mutable tags |
| **Security Gates in Promotion** | Scan on build, evaluate on release, re-evaluate on CVE updates |
| **Evidence Packets** | Every release decision is cryptographically signed and replayable |
| **Cost Model** | No per-seat, per-project, per-deployment tax. Environments + new digests/day |
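The digest-first identity rule above can be made concrete. A minimal sketch (hypothetical helper, not the actual Stella Ops identity model) that accepts only digest-pinned OCI references and rejects mutable tags:

```python
import re

# Matches an OCI reference pinned by digest, e.g.
# registry.example.com/app@sha256:<64 hex chars>.
# Illustrative only; the real identity rules may be stricter.
DIGEST_REF = re.compile(r"^[^@]+@sha256:[0-9a-f]{64}$")

def is_release_identity(ref: str) -> bool:
    """A release identity must be an immutable digest, never a mutable tag."""
    return DIGEST_REF.match(ref) is not None

print(is_release_identity("registry.example.com/app@sha256:" + "a" * 64))  # True
print(is_release_identity("registry.example.com/app:v1.2.3"))              # False
```

The point of the check: a tag like `:v1.2.3` can be repointed after approval, while a digest cannot change without changing the identity itself.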
---
## Direct Comparisons vs CD Tools
These comparisons focus on where release governance, evidence export, and audit replay are required in addition to pipeline automation.
### Stella Ops Suite vs GitLab CI/CD
**Where GitLab excels:** pipeline automation, source control integration, developer workflow.
**Where Stella Ops Suite differs:**
- **Release authority** is centralized and environment-aware; not just a pipeline stage.
- **Evidence export** (Decision Capsules) is built-in and replayable months later.
- **Non-K8s estates** are first-class (Compose, VM/SSH targets, air-gapped deployments).
**Bottom line:** GitLab runs pipelines; Stella Ops governs promotions with proof.
### Stella Ops Suite vs GitHub Actions
**Where GitHub excels:** PR automation, CI visibility, marketplace actions.
**Where Stella Ops Suite differs:**
- **Promotion rules** and approvals are explicit, audited, and bound to artifact digests.
- **Deterministic replay** lets auditors re-verify release decisions.
- **Offline/sovereign** operation is supported without external SaaS dependencies.
**Bottom line:** Actions automate builds; Stella Ops enforces release decisions with audit-grade evidence.
### Stella Ops Suite vs Jenkins
**Where Jenkins excels:** flexible CI, on-prem extensibility.
**Where Stella Ops Suite differs:**
- **Release orchestration** includes environment graphs, approvals, and rollback semantics.
- **Evidence-grade gating** ties reachability, VEX, and policy to each promotion.
- **Exportable proof** makes compliance verification deterministic.
**Bottom line:** Jenkins executes pipelines; Stella Ops provides release governance with proof.
### Stella Ops Suite vs Harness
**Where Harness excels:** deployment automation, feature flags, multi-cloud rollout UX.
**Where Stella Ops Suite differs:**
- **Security evidence is a gate**, not an afterthought, and is bound to the artifact digest.
- **Decision Capsules** provide verifiable, portable audit packets.
- **Non-K8s container estates** are a primary target, not a secondary path.
**Bottom line:** Harness automates delivery; Stella Ops governs releases and their evidence trail.
## Security Positioning (Original Analysis)
---
## Verification Metadata
| Field | Value |
|-------|-------|
| **Last Updated** | 2026-01-03 |
| **Last Verified** | 2025-12-14 |
| **Next Review** | 2026-03-14 |
| **Claims Index** | [`docs/product/claims-citation-index.md`](claims-citation-index.md) |
| **Verification Method** | Source code audit (OSS), documentation review, feature testing |
**Confidence Levels:**
- **High (80-100%)**: Verified against source code or authoritative documentation
- **Medium (50-80%)**: Based on documentation or limited testing; needs deeper verification
- **Low (<50%)**: Unverified or based on indirect evidence; requires validation
---
## Why Competitors Plateau (Structural Analysis)
The scanner market evolved from three distinct origins. Each origin created architectural assumptions that make Stella Ops' capabilities structurally difficult to retrofit.
| Origin | Representatives | What They Optimized For | Why They Can't Easily Catch Up |
|--------|----------------|------------------------|-------------------------------|
| **Package Scanners** | Trivy, Syft/Grype | Fast CLI, broad ecosystem coverage | No forensic reproducibility in architecture; VEX is boolean, not lattice; no DSSE for reachability graphs |
| **Developer UX** | Snyk | IDE integration, fix PRs, onboarding | SaaS-only (offline impossible); no attestation infrastructure; reachability limited to specific languages |
| **Policy/Compliance** | Prisma Cloud, Aqua | Runtime protection, CNAPP breadth | No deterministic replay; no cryptographic provenance for verdicts; no semantic diff |
| **SBOM Operations** | Anchore | SBOM storage, lifecycle | No lattice VEX reasoning; no signed reachability graphs; no regional crypto profiles |
### The Core Problem
**Scanners output findings. Stella Ops outputs decisions.**
A finding says "CVE-2024-1234 exists in this package." A decision says "CVE-2024-1234 is reachable via this call path, vendor VEX says not_affected but our runtime disagrees, creating a conflict that policy must resolve, and here's the signed proof chain."
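The finding/decision distinction can be sketched in a few lines. These dataclasses are hypothetical shapes for illustration, not the actual Stella Ops schemas:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """What a scanner emits: a CVE observed in a package."""
    cve: str
    package: str

@dataclass
class Decision:
    """What a release gate needs: the finding plus the evidence that
    justifies acting on it, bound together so it can be replayed."""
    finding: Finding
    call_path: list        # path from entrypoint to the vulnerable symbol
    vex_statements: dict   # e.g. {"vendor": "not_affected", "runtime": "affected"}
    conflict: bool         # True when VEX sources disagree
    proof_digest: str      # hash anchoring the signed evidence chain

d = Decision(
    finding=Finding("CVE-2024-1234", "libexample"),
    call_path=["main", "handler", "libexample.parse"],
    vex_statements={"vendor": "not_affected", "runtime": "affected"},
    conflict=True,
    proof_digest="sha256:<evidence-bundle-hash>",
)
print(d.conflict)  # True: policy must resolve the vendor/runtime disagreement
```

A `Finding` is a row; a `Decision` carries everything an auditor needs to re-derive the verdict.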
This isn't a feature gap; it's a category difference. Retrofitting it requires:
- Rearchitecting the evidence model (content-addressed, not row-based)
- Adding lattice logic to VEX handling (not just filtering)
- Instrumenting reachability at three layers (static, binary, runtime)
- Building deterministic replay infrastructure (frozen feeds, manifests, seeds)
- Implementing regional crypto profiles (not just "signing")
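The first requirement, a content-addressed evidence model, is worth a sketch. In a content-addressed store the key *is* the hash of the canonical bytes, so identical evidence always lands at the same address and any mutation changes the address. This is an illustration of the idea, not the Stella Ops store:

```python
import hashlib
import json

def store(blobs: dict, evidence: dict) -> str:
    """Canonicalize, hash, and file the evidence under its own digest."""
    canonical = json.dumps(evidence, sort_keys=True, separators=(",", ":")).encode()
    digest = "sha256:" + hashlib.sha256(canonical).hexdigest()
    blobs[digest] = canonical
    return digest

store_db: dict = {}
a = store(store_db, {"cve": "CVE-2024-1234", "status": "reachable"})
b = store(store_db, {"status": "reachable", "cve": "CVE-2024-1234"})
print(a == b)  # True: key order doesn't matter after canonicalization
```

Contrast with a row-based model, where the same evidence inserted twice gets two different primary keys and nothing detects tampering.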
---
## Stella Ops moats (why we win)
| Moat | Description | Claim IDs | Confidence |
|------|-------------|-----------|------------|
| **Deterministic replay** | Feed+rules snapshotting; graph/SBOM/VEX re-run bit-for-bit with manifest hashes | DET-001, DET-002, DET-003 | High |
| **Hybrid reachability attestations** | Graph-level DSSE always; optional edge-bundle DSSE for runtime/init/contested edges; Rekor-backed | REACH-001, REACH-002, ATT-001, ATT-002 | High |
| **Lattice-based VEX engine** | Merges advisories, runtime hits, reachability, waivers with explainable paths | VEX-001, VEX-002, VEX-003 | High |
| **Crypto sovereignty** | FIPS/eIDAS/GOST/SM/PQC profiles and offline mirrors as first-class knobs | ATT-004 | Medium |
| **Proof graph** | DSSE + transparency across SBOM, call-graph, VEX, replay manifests | ATT-001, ATT-002, ATT-003 | High |
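The deterministic-replay moat rests on one mechanism: every input that can influence a verdict is pinned in a manifest, and the manifest hashes identically across runs. A minimal sketch (the field names are illustrative, not the real SRM schema):

```python
import hashlib
import json

def manifest_digest(feed_snapshot: str, rules_version: str,
                    sbom_digest: str, seed: int) -> str:
    """Pin every verdict-influencing input; replaying with the same
    manifest must reproduce the same output bit-for-bit."""
    manifest = {
        "feed": feed_snapshot,
        "rules": rules_version,
        "sbom": sbom_digest,
        "seed": seed,
    }
    canonical = json.dumps(manifest, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()

run1 = manifest_digest("feeds-2026-01-10", "rules-v42", "sha256:abc", 7)
run2 = manifest_digest("feeds-2026-01-10", "rules-v42", "sha256:abc", 7)
print(run1 == run2)  # True: identical inputs, identical digest
```

Feed drift is what breaks reproducibility in conventional scanners: without a frozen `feed_snapshot`, yesterday's scan and today's scan silently consume different databases.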
## Top takeaways (sales-ready)
### The Five One-Liners
| # | One-Liner | What It Means | Claim IDs |
|---|-----------|---------------|-----------|
| 1 | "We don't output findings; we output attestable decisions that can be replayed." | Given identical inputs, Stella produces identical outputs. Any verdict from 6 months ago can be re-verified today with `stella replay srm.yaml`. | DET-001, DET-003 |
| 2 | "We treat VEX as a logical claim system, not a suppression file." | K4 lattice logic aggregates multiple VEX sources, detects conflicts, and produces explainable dispositions with proof links. | VEX-001, VEX-002 |
| 3 | "We provide proof of exploitability in *this* artifact, not just a badge." | Three-layer reachability (static graph + binary + runtime) with DSSE-signed call paths. Not "potentially reachable" but "here's the exact path." | REACH-001, REACH-002 |
| 4 | "We explain what changed in exploitable surface area, not what changed in CVE count." | Smart-Diff outputs semantic risk deltas such as "This release reduces exploitability by 41% despite +2 CVEs," not raw counts. | |
| 5 | "We quantify uncertainty and gate on it." | Unknowns are first-class state with bands (HOT/WARM/COLD), decay algorithms, and policy budgets. Uncertainty is risk; we surface and score it. | UNKNOWNS-001, UNKNOWNS-002 |
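One-liner #2 deserves a concrete illustration. A Belnap-style four-valued (K4) merge keeps disagreement as explicit state instead of letting one source silently suppress another. This is a simplified sketch of the idea; the real Stella Ops lattice carries more structure (provenance, explanations, proof links):

```python
from enum import Enum

class K4(Enum):
    """Four-valued answer to 'is this CVE exploitable here?'."""
    UNKNOWN = "unknown"            # no evidence either way
    AFFECTED = "affected"          # evidence it is exploitable
    NOT_AFFECTED = "not_affected"  # evidence it is not
    CONFLICT = "conflict"          # credible evidence on both sides

def merge(a: K4, b: K4) -> K4:
    """Join in the knowledge ordering: new evidence never silently
    overwrites prior evidence; disagreement becomes explicit CONFLICT."""
    if a == b or b == K4.UNKNOWN:
        return a
    if a == K4.UNKNOWN:
        return b
    return K4.CONFLICT

# Vendor says not_affected, runtime saw the vulnerable function execute:
verdict = merge(K4.NOT_AFFECTED, K4.AFFECTED)
print(verdict)  # K4.CONFLICT — surfaced for policy, not suppressed
```

A boolean VEX filter would have to pick one side here; the lattice makes the conflict itself the verdict that policy must resolve.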
### Verified Gaps (High Confidence)
| # | Gap | Evidence | Claim IDs |
|---|-----|----------|-----------|
| 1 | No competitor offers deterministic replay with frozen feeds | Source audit: Trivy v0.55, Grype v0.80, Snyk CLI v1.1292 | DET-003 |
| 2 | None sign reachability graphs; we sign graphs and (optionally) edge bundles | Feature matrix analysis | REACH-002 |
| 3 | Sovereign crypto profiles (FIPS/eIDAS/GOST/SM/PQC) are unique to Stella Ops | Architecture review | ATT-004 |
| 4 | Lattice VEX with conflict detection is unmatched; others ship boolean VEX or none | Trivy pkg/vex source; Grype VEX implementation | VEX-001, COMP-TRIVY-001, COMP-GRYPE-002 |
| 5 | Offline/air-gap with mirrored transparency is rare; we ship it by default | Documentation and feature testing | OFF-001, OFF-004 |
## Where others fall short (detailed)
### Capability Gap Matrix
| Capability | Trivy | Grype | Snyk | Prisma | Aqua | Anchore | Stella Ops |
|-----------|-------|-------|------|--------|------|---------|------------|
| **Deterministic replay** | No | No | No | No | No | No | Yes |
| **VEX lattice (K4 logic)** | Boolean only | Boolean only | None | None | Limited | Limited | Full K4 |
| **Signed reachability graphs** | No | No | No | No | No | No | Yes (DSSE) |
| **Binary-level backport detection** | No | No | No | No | No | No | Tier 1-4 |
| **Semantic risk diff** | No | No | No | No | No | No | Yes |
| **Unknowns as state** | Hidden | Hidden | Hidden | Hidden | Hidden | Hidden | First-class |
| **Regional crypto (GOST/SM)** | No | No | No | No | No | No | Yes |
| **Offline parity** | Medium | Medium | No | Strong | Medium | Good | Full |
### Specific Gaps by Competitor
| Gap | What This Means | Related Claims | Verified |
|-----|-----------------|----------------|----------|
| **No deterministic replay** | A scan from last month cannot be re-run to produce identical results. Feed drift, analyzer changes, and non-deterministic ordering break reproducibility. Auditors cannot verify past decisions. | DET-003, COMP-TRIVY-002, COMP-GRYPE-001, COMP-SNYK-001 | 2025-12-14 |
| **No lattice/VEX merge** | VEX is either absent or treated as a suppression filter. When vendor says "not_affected" but runtime shows the function was called, these tools can't represent the conflict; they pick one or the other. | COMP-TRIVY-001, COMP-GRYPE-002 | 2025-12-14 |
| **No signed reachability** | Reachability claims are assertions, not proofs. There's no cryptographic binding between "this CVE is reachable" and the call path that proves it. | COMP-GRYPE-001, REACH-002 | 2025-12-14 |
| **No semantic diff** | Tools report "+3 CVEs" without context. They can't say "exploitable surface decreased despite new CVEs" because they don't track reachability deltas. | | 2025-12-14 |
| **Offline/sovereign gaps** | Snyk is SaaS-only. Others have partial offline support but no regional crypto (GOST, SM2, eIDAS) and no sealed knowledge snapshots for air-gapped reproducibility. | COMP-SNYK-003, ATT-004 | 2025-12-14 |
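The "no semantic diff" gap comes down to what gets compared. A sketch of the idea behind Smart-Diff, comparing *reachable* exploitability between releases rather than raw CVE counts (hypothetical helper, not the actual Smart-Diff implementation):

```python
def semantic_diff(old: dict, new: dict) -> str:
    """Each input maps CVE id -> bool (reachable in this artifact).
    Report the CVE-count delta AND the change in reachable surface."""
    old_reachable = {c for c, r in old.items() if r}
    new_reachable = {c for c, r in new.items() if r}
    cve_delta = len(new) - len(old)
    if old_reachable:
        change = (len(new_reachable) - len(old_reachable)) / len(old_reachable)
    else:
        change = 0.0
    return f"CVE count {cve_delta:+d}; reachable surface {change:+.0%}"

old = {"CVE-1": True, "CVE-2": True, "CVE-3": True}    # 3 CVEs, all reachable
new = {"CVE-1": False, "CVE-2": True, "CVE-3": False,
       "CVE-4": False, "CVE-5": False}                 # 5 CVEs, 1 reachable
print(semantic_diff(old, new))  # CVE count +2; reachable surface -67%
```

A count-only tool reports this release as "+2 CVEs" and blocks it; tracking reachability deltas shows the exploitable surface actually shrank.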
## Snapshot table (condensed)
| Vendor | SBOM Gen | SBOM Ingest | Attest (DSSE) | Rekor | Offline | Primary gaps vs Stella | Related Claims |
|--------|----------|-------------|---------------|-------|---------|------------------------|----------------|
| Trivy | Yes | Yes | Cosign | Query | Strong | No replay, no lattice | COMP-TRIVY-001, COMP-TRIVY-002, COMP-TRIVY-003 |
| Syft/Grype | Yes | Yes | Cosign-only | Indirect | Medium | No replay, no lattice | COMP-GRYPE-001, COMP-GRYPE-002, COMP-GRYPE-003 |
| Snyk | Yes | Limited | No | No | Weak | No attest/VEX/replay | COMP-SNYK-001, COMP-SNYK-002, COMP-SNYK-003 |
| Prisma | Yes | Limited | No | No | Strong | No attest/replay | |
| AWS (Inspector/Signer) | Partial | Partial | Notary v2 | No | Weak | Closed, no replay | |
| Google | Yes | Yes | Yes | Optional | Weak | No offline/lattice | |
| GitHub | Yes | Partial | Yes | Yes | No | No replay/crypto opts | |
| GitLab | Yes | Limited | Partial | No | Medium | No replay/lattice | |
| Microsoft Defender | Partial | Partial | No | No | Weak | No attest/reachability | |
| Anchore Enterprise | Yes | Yes | Some | No | Good | No sovereign crypto | |
| JFrog Xray | Yes | Yes | No | No | Medium | No attest/lattice | |
| Tenable | Partial | Limited | No | No | Weak | Not SBOM/VEX-focused | |
| Qualys | Limited | Limited | No | No | Medium | No attest/lattice | |
| Rezilion | Yes | Yes | No | No | Medium | Runtime-only; no DSSE | |
| Chainguard | Yes | Yes | Yes | Yes | Medium | No replay/lattice | |
## How to use this doc
- Sales/PMM: pull talking points and the gap list when building battlecards.
- Product: map gaps to roadmap; keep replay/lattice/sovereign as primary differentiators.
- Engineering: ensure new features keep determinism + sovereign crypto front-and-center; link reachability attestations into proof graph.
## Cross-links
- Vision: `docs/VISION.md` (Moats section)
- Architecture: `docs/ARCHITECTURE_REFERENCE.md`
- Reachability moat details: `docs/modules/reach-graph/guides/lead.md`
- Source advisory: `docs/product/advisories/23-Nov-2025 - Stella Ops vs Competitors.md`
- **Claims Citation Index**: [`docs/product/claims-citation-index.md`](claims-citation-index.md)
---
## Battlecard Appendix (snippet-ready)
### Elevator Pitches (by Audience)
| Audience | Pitch |
|----------|-------|
| **CISO/Security Leader** | "Stella Ops turns vulnerability noise into auditable decisions. Every verdict is signed, replayable, and proves *why* something is or isn't exploitable." |
| **Compliance/Audit** | "Unlike scanners that output findings, we output decisions with proof chains. Six months from now, you can replay any verdict bit-for-bit to prove what you knew and when." |
| **DevSecOps Engineer** | "Tired of triaging the same CVE across 50 images? Stella deduplicates by root cause, shows reachability proofs, and explains exactly what to fix and why." |
| **Air-gap/Regulated** | "Full offline parity with regional crypto (FIPS/GOST/SM/eIDAS). Sealed knowledge snapshots ensure your air-gapped environment produces identical results to connected." |
### One-Liners with Proof Points
| One-Liner | Proof Point | Claims |
|-----------|-------------|--------|
| *Replay or it's noise* | `stella replay srm.yaml --assert-digest <sha>` reproduces any past scan bit-for-bit | DET-001, DET-003 |
| *Signed reachability, not guesses* | Graph-level DSSE always; edge-bundle DSSE for contested paths; Rekor-backed | REACH-001, REACH-002 |
| *Sovereign-first* | FIPS/eIDAS/GOST/SM/PQC profiles as config; multi-sig with regional roots | ATT-004 |
| *Trust algebra, not suppression files* | K4 lattice merges advisories, runtime, reachability, waivers; conflicts are explicit state | VEX-001, VEX-002 |
| *Semantic risk deltas* | "Exploitability dropped 41% despite +2 CVEs" not just CVE counts | |
### Objection Handlers
| Objection | Response | Supporting Claims |
|-----------|----------|-------------------|
| "We already sign SBOMs." | Great start. But do you sign call-graphs and VEX decisions? Can you replay a scan from 6 months ago and get identical results? We do both. | DET-001, REACH-002 |
| "Cosign/Rekor is enough." | Cosign signs artifacts. We sign *decisions*. Without deterministic manifests and reachability proofs, you can sign findings but can't audit *why* a vuln was reachable. | DET-003, REACH-002 |
| "Our runtime traces show reachability." | Runtime is one signal. We fuse it with static call graphs and VEX lattice into a signed, replayable verdict. You can quarantine or dispute individual edges, not just all-or-nothing. | REACH-001, VEX-002 |
| "Snyk does reachability." | Snyk's reachability is language-limited (Java, JavaScript), SaaS-only, and unsigned. We support 6+ languages, work offline, and sign every call path with DSSE. | COMP-SNYK-002, COMP-SNYK-003, REACH-002 |
| "We use Trivy and it's free." | Trivy is excellent for broad coverage. We're for organizations that need audit-grade reproducibility, VEX reasoning, and signed proofs. Different use cases. | COMP-TRIVY-001, COMP-TRIVY-002 |
| "Can't you just add this to Trivy?" | Trivy's architecture assumes findings, not decisions. Retrofitting deterministic replay, lattice VEX, and proof chains would require fundamental rearchitecture, not just features. | |
### Demo Scenarios
| Scenario | What to Show | Command |
|----------|-------------|---------|
| **Determinism** | Run scan twice, show identical digests | `stella scan --image <img> --srm-out a.yaml && stella scan --image <img> --srm-out b.yaml && diff a.yaml b.yaml` |
| **Replay** | Replay a week-old scan, verify identical output | `stella replay srm.yaml --assert-digest <sha>` |
| **Reachability proof** | Show signed call path from entrypoint to vulnerable symbol | `stella graph show --cve CVE-XXXX-YYYY --artifact <digest>` |
| **VEX conflict** | Show lattice handling vendor vs runtime disagreement | Trust Algebra Studio UI or `stella vex evaluate --artifact <digest>` |
| **Offline parity** | Import sealed bundle, scan, compare to online result | `stella rootpack import bundle.tar.gz && stella scan --offline ...` |
### Leave-Behind Materials
- **Reachability deep-dive:** `docs/modules/reach-graph/guides/lead.md`
- **Competitive landscape:** This document
- **Proof architecture:** `docs/modules/platform/proof-driven-moats-architecture.md`
- **Key features:** `docs/key-features.md`
## Sources
- Full advisory: `docs/product/advisories/23-Nov-2025 - Stella Ops vs Competitors.md`
- Claims Citation Index: `docs/product/claims-citation-index.md`