CVSS and Competitive Analysis Technical Reference
Source Advisories:
- 29-Nov-2025 - CVSS v4.0 Momentum in Vulnerability Management
- 30-Nov-2025 - Comparative Evidence Patterns for Stella Ops
- 03-Dec-2025 - Next-Gen Scanner Differentiators and Evidence Moat
Last Updated: 2025-12-14
1. CVSS V4.0 INTEGRATION
1.1 Requirements
- Ecosystem sources (NVD, GitHub, Microsoft, Snyk) are shipping CVSS v4.0 signals
- Receipt schemas, reporting, and the UI must be aligned to carry v4.0 vectors and scores
1.2 Determinism & Offline
- Keep CVSS vector parsing deterministic (see the sketch after 1.3)
- Pin scoring-library versions in receipts
- Avoid live API dependencies; rely on mirrored NVD feeds or frozen samples
1.3 Schema Mapping
- Map CVSS v4.0 impact metrics to receipt schema fields
- Identify UI/reporting deltas needed for transparency
- Record CVSS-receipt impacts in the sprint's Decisions & Risks log
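A minimal Python sketch of the parsing-and-receipt pattern above, assuming a hypothetical receipt layout; the field names and the pinned library identifier are illustrative, not a fixed StellaOps schema:

# Minimal sketch: deterministic CVSS v4.0 vector parsing pinned to a library version.
# Receipt field names (scoring_lib, scoring_lib_version) are illustrative.
from collections import OrderedDict

SCORING_LIB = ("cvss-parser", "x.y.z")  # hypothetical pinned parser recorded in the receipt

def parse_cvss_v4(vector: str) -> OrderedDict:
    """Parse 'CVSS:4.0/AV:N/AC:L/...' into an ordered metric map; preserving order
    means the same vector always serializes to the same receipt bytes."""
    prefix, _, rest = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"unsupported CVSS version: {prefix}")
    metrics = OrderedDict()
    for part in rest.split("/"):
        key, _, value = part.partition(":")
        if not key or not value:
            raise ValueError(f"malformed metric: {part!r}")
        metrics[key] = value
    return metrics

def receipt_fragment(vector: str) -> dict:
    return {
        "cvss_v4_vector": vector,
        "cvss_v4_metrics": parse_cvss_v4(vector),
        "scoring_lib": SCORING_LIB[0],
        "scoring_lib_version": SCORING_LIB[1],
    }

Pinning the parser version inside the receipt is what lets a later replay detect that a score changed because the library changed, not the input.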
2. SCANNER DISCREPANCIES ANALYSIS
2.1 Trivy vs Grype Comparative Study (927 images)
Findings:
- Tools disagreed on total vulnerability counts and specific CVE IDs
- Grype: 603,259 vulnerabilities; Trivy: 473,661
- Exact match in only 9.2% of cases (80 of 865 vulnerable images)
- Even when totals matched, the specific vulnerability IDs often differed
Root Causes:
- Divergent vulnerability databases
- Differing matching logic
- Incomplete visibility into image contents
2.2 VEX Tools Consistency Study (2025)
Tools Tested:
- Trivy
- Grype
- OWASP DepScan
- Docker Scout
- Snyk CLI
- OSV-Scanner
- Vexy
Results:
- Low consistency/similarity across container scanners
- DepScan: 18,680 vulnerabilities; Vexy: 191 (a two-order-of-magnitude gap)
- Pairwise Jaccard indices very low (near 0; see the sketch below)
- The four most consistent tools shared only ~18% of vulnerabilities in common
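The pairwise Jaccard comparison is easy to reproduce; a minimal Python sketch with placeholder CVE sets (not the study's data):

# Minimal sketch: pairwise Jaccard similarity between scanners' CVE sets.
# The CVE IDs below are placeholders, not results from the 2025 study.
from itertools import combinations

results = {
    "trivy":   {"CVE-2023-0001", "CVE-2023-0002", "CVE-2023-0003"},
    "grype":   {"CVE-2023-0002", "CVE-2023-0003", "CVE-2023-0004"},
    "depscan": {"CVE-2023-0005"},
}

def jaccard(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B|: 1.0 means identical findings, 0.0 means disjoint."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

for (name_a, set_a), (name_b, set_b) in combinations(results.items(), 2):
    print(f"{name_a} vs {name_b}: J = {jaccard(set_a, set_b):.2f}")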
2.3 Implications for StellaOps
Moats Needed:
- Golden-fixture benchmarks (container images with known, audited vulnerabilities)
- Deterministic, replayable scans
- Cryptographic integrity
- VEX/SBOM proofs
Metrics:
- Closure rate: time from a finding being flagged to its exploitability being confirmed
- Proof coverage: % of dependencies with valid SBOM/VEX proofs
- Differential-closure: impact of database updates or policy changes on prior scan results
3. RUNTIME REACHABILITY APPROACHES
3.1 Runtime-Aware Vulnerability Prioritization
Approach:
- Monitor container workloads at runtime to determine which vulnerable components are actually exercised
- Use eBPF-based monitors, dynamic tracers, or built-in profiling
- Construct a runtime call graph or dependency graph
- Map vulnerabilities to code entities (functions/modules)
- If the execution trace covers an entity, its vulnerability is "reachable" (see the sketch below)
Finding (Sysdig): ~85% of critical vulnerabilities in containers sit in code that is never active at runtime
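A minimal Python sketch of the trace-coverage check described above; the entity identifiers and the probe source are illustrative:

# Minimal sketch: mark a vulnerability "reachable" when the runtime trace covers
# the code entity it maps to. All identifiers below are placeholders.
observed_entities = {                # e.g., collected by an eBPF probe or profiler
    "openssl:EVP_DecryptUpdate",
    "zlib:inflate",
}

vuln_to_entities = {                 # vulnerability -> code entities it lives in
    "CVE-2024-1111": {"openssl:EVP_DecryptUpdate"},
    "CVE-2024-2222": {"libxml2:xmlParseFile"},
}

def classify(cve: str) -> str:
    entities = vuln_to_entities.get(cve, set())
    return "reachable" if entities & observed_entities else "not-observed"

for cve in vuln_to_entities:
    print(cve, classify(cve))
# CVE-2024-1111 is reachable; CVE-2024-2222 is present but never executed.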
3.2 Reachability Analysis Techniques
Static:
- Call-graph analysis (Snyk reachability, CodeQL)
- All possible paths
Dynamic:
- Runtime observation (loaded modules, invoked functions)
- Actual runtime paths
Granularity Levels:
- Function-level (precise, but limited to a few languages, e.g., Java, .NET)
- Package/module-level (broader language coverage, coarser precision)
Hybrid Approach: combine static (all possible paths) with dynamic (actual runtime paths); see the sketch below
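A minimal sketch of the hybrid merge, assuming an illustrative three-tier verdict; a real tiering (e.g., the R0-R3 classes in 8.5) would carry more nuance:

# Minimal sketch: combine static (all possible paths) and dynamic (observed paths)
# reachability into a conservative verdict. Tier names are illustrative.
def hybrid_verdict(statically_reachable: bool, dynamically_observed: bool) -> str:
    if dynamically_observed:
        return "confirmed-reachable"      # executed at runtime: highest priority
    if statically_reachable:
        return "potentially-reachable"    # a call path exists but was never taken
    return "unreachable"                  # no static path and never observed

assert hybrid_verdict(True, True) == "confirmed-reachable"
assert hybrid_verdict(True, False) == "potentially-reachable"
assert hybrid_verdict(False, False) == "unreachable"

Dynamic evidence deliberately overrides the static answer: an observed execution trumps an incomplete call graph.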
4. CONTAINER PROVENANCE & SUPPLY CHAIN
4.1 In-Toto/DSSE Framework (NDSS 2024)
Purpose:
- Track chain of custody in software builds
- Signed metadata (attestations) for each step
- DSSE: Dead Simple Signing Envelope for standardized signing
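A minimal Python sketch of the DSSE envelope shape: per the DSSE spec, the signature covers PAE(payloadType, payload); HMAC stands in here for a real signature scheme such as Ed25519:

# Minimal sketch of a DSSE envelope. HMAC is a stand-in for a real signature.
import base64, hmac, hashlib

def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE Pre-Authentication Encoding: 'DSSEv1' SP len SP type SP len SP body."""
    t = payload_type.encode()
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

def envelope(payload_type: str, payload: bytes, key: bytes) -> dict:
    sig = hmac.new(key, pae(payload_type, payload), hashlib.sha256).digest()
    return {
        "payloadType": payload_type,
        "payload": base64.b64encode(payload).decode(),
        "signatures": [{"keyid": "demo-key", "sig": base64.b64encode(sig).decode()}],
    }

env = envelope("application/vnd.in-toto+json", b'{"_type":"link","name":"build"}', b"secret")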
4.2 Scudo System
Features:
- Combines in-toto with Uptane
- Verifies both the build process and the final image
- Full in-toto verification on every client is inefficient; verification happens upstream and clients trust a signed summary
- The client checks only the final signature and hash
4.3 Supply Chain Verification
Signers:
- Developer key signs code commit
- CI key signs build attestation
- Scanner key signs vulnerability attestation
- Release key signs container image
Verification Optimization: the repository verifies the full set of in-toto attestations; the client verifies only the final metadata (sketched below)
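A minimal sketch of that split, with illustrative step names and caller-supplied verification functions:

# Minimal sketch: the repository verifies every step's attestation; the client
# verifies only the release signature plus the image hash. Steps are illustrative.
import hashlib

EXPECTED_STEPS = ["commit", "build", "scan", "release"]

def repo_verify(attestations: dict, verify_sig) -> bool:
    """Server-side: every expected step must have a valid signed attestation."""
    return all(step in attestations and verify_sig(attestations[step])
               for step in EXPECTED_STEPS)

def client_verify(image_bytes: bytes, release_meta: dict, verify_release_sig) -> bool:
    """Client-side: final signature + artifact hash only (the Scudo-style shortcut)."""
    return (verify_release_sig(release_meta)
            and hashlib.sha256(image_bytes).hexdigest() == release_meta["image_sha256"])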
5. VENDOR EVIDENCE PATTERNS
5.1 Snyk
Evidence Handling:
- Runtime insights integration (Nov 2025)
- Evolution from static-scan noise to prioritized workflow
- Deployment context awareness
VEX Support:
- CycloneDX VEX format
- Reachability-aware suppression
5.2 GitHub Advanced Security
Features:
- CodeQL for static analysis
- Dependency graph
- Dependabot alerts
- Security advisories
Evidence:
- SARIF output
- SBOM generation (SPDX)
5.3 Aqua Security
Approach:
- Runtime protection
- Image scanning
- Kubernetes security
Evidence:
- Dynamic runtime traces
- Network policy violations
5.4 Anchore/Grype
Features:
- Open-source scanner
- Policy-based compliance
- SBOM generation
Evidence:
- CycloneDX/SPDX SBOM
- Vulnerability reports (JSON)
5.5 Prisma Cloud
Features:
- Cloud-native security
- Runtime defense
- Compliance monitoring
Evidence:
- Multi-cloud attestations
- Compliance reports
6. STELLAOPS DIFFERENTIATORS
6.1 Reachability-with-Evidence
Why it Matters:
- Snyk Container integrating runtime insights as "signal" (Nov 2025)
- Evolution from static-scan noise to prioritized, actionable workflow
- Deployment context: what's running, what's reachable, what's exploitable
Implication: container-security triage increasingly relies on runtime/context signals
6.2 Proof-First Architecture
Advantages:
- Every claim backed by DSSE-signed attestations
- Cryptographic integrity
- Audit trail
- Offline verification
6.3 Deterministic Scanning
Advantages:
- Reproducible results
- Bit-identical outputs given same inputs
- Replay manifests
- Golden fixture benchmarks
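A minimal sketch of what a replay manifest might pin, expressed as a Python dict; every field name and placeholder value is illustrative:

# Minimal sketch of a replay manifest: pin every input so a scan can be re-run
# bit-identically later. All fields are illustrative placeholders.
replay_manifest = {
    "scanner_version": "x.y.z",                  # exact tool build
    "image_digest": "sha256:<image-digest>",     # content-addressed scan target
    "feed_snapshots": {"nvd": "sha256:<feed-digest>"},  # frozen advisory feeds
    "policy_digest": "sha256:<policy-digest>",   # policy bundle in effect
    "expected_output_sha256": "<result-hash>",   # hash of the canonical result
}
# Replaying with these exact inputs must reproduce expected_output_sha256.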
6.4 VEX-First Decisioning
Advantages:
- Exploitability modeled in OpenVEX
- Lattice logic for stable outcomes
- Evidence-linked justifications
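A minimal sketch of order-independent status merging, assuming an illustrative precedence; a production lattice would also weigh source trust and justifications:

# Minimal sketch: merge OpenVEX statuses with a fixed lattice so the outcome is
# stable regardless of statement arrival order. Precedence here is illustrative.
PRECEDENCE = {                       # higher value wins when statements conflict
    "under_investigation": 0,
    "affected": 1,
    "fixed": 2,
    "not_affected": 3,
}

def merge(statuses: list[str]) -> str:
    """Deterministic join: sorting first makes the result input-order independent."""
    if not statuses:
        return "under_investigation"
    return max(sorted(statuses), key=lambda s: PRECEDENCE[s])

assert merge(["affected", "not_affected"]) == merge(["not_affected", "affected"])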
6.5 Offline/Air-Gap First
Advantages:
- No hidden network dependencies
- Bundled feeds, keys, Rekor snapshots
- Verifiable without internet access
7. COMPETITIVE POSITIONING
7.1 Market Segments
| Vendor | Strength | Weakness vs StellaOps |
|---|---|---|
| Snyk | Developer experience | Less deterministic, SaaS-only |
| Aqua | Runtime protection | Less reachability precision |
| Anchore | Open-source, SBOM | Less proof infrastructure |
| Prisma Cloud | Cloud-native breadth | Less offline/air-gap support |
| GitHub | Integration with dev workflow | Less cryptographic proof chain |
7.2 StellaOps Unique Value
- Deterministic + Provable: Bit-identical scans with cryptographic proofs
- Reachability + Runtime: Hybrid static/dynamic analysis
- Offline/Sovereign: Air-gap operation with regional crypto (FIPS/GOST/eIDAS/SM)
- VEX-First: Evidence-backed decisioning, not just alerting
- AGPL-3.0: Self-hostable, no vendor lock-in
8. MOAT METRICS
8.1 Proof Coverage
proof_coverage = findings_with_valid_receipts / total_findings
Target: ≥95%
8.2 Closure Rate
closure_rate = time_from_flagged_to_confirmed_exploitable
Target: P95 < 24 hours
8.3 Differential-Closure Impact
differential_impact = findings_changed_after_db_update / total_findings
Target: <5% (non-code changes)
8.4 False Positive Reduction
fp_reduction = (baseline_fp_rate - stella_fp_rate) / baseline_fp_rate
Target: ≥50% vs baseline scanner
8.5 Reachability Accuracy
reachability_accuracy = correct_r0_r1_r2_r3_classifications / total_classifications
Target: ≥90%
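The ratio metrics above reduce to a few lines; a minimal Python sketch with the stated targets as assertions (closure rate is a latency, so it is tracked separately as a P95):

# Minimal sketch: compute the moat metrics defined above from raw counts.
# Variable names mirror the formulas; example counts are illustrative.
def proof_coverage(with_receipts: int, total: int) -> float:
    return with_receipts / total if total else 1.0

def differential_impact(changed_after_update: int, total: int) -> float:
    return changed_after_update / total if total else 0.0

def fp_reduction(baseline_fp_rate: float, stella_fp_rate: float) -> float:
    return (baseline_fp_rate - stella_fp_rate) / baseline_fp_rate

def reachability_accuracy(correct: int, total: int) -> float:
    return correct / total if total else 1.0

assert proof_coverage(96, 100) >= 0.95           # 8.1 target
assert differential_impact(3, 100) < 0.05        # 8.3 target
assert fp_reduction(0.40, 0.15) >= 0.50          # 8.4 target
assert reachability_accuracy(92, 100) >= 0.90    # 8.5 target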
9. COMPETITIVE INTELLIGENCE TRACKING
9.1 Feature Parity Matrix
| Feature | Snyk | Aqua | Anchore | Prisma | StellaOps |
|---|---|---|---|---|---|
| SBOM Generation | ✓ | ✓ | ✓ | ✓ | ✓ |
| VEX Support | ✓ | ✗ | Partial | ✗ | ✓ |
| Reachability Analysis | ✓ | ✗ | ✗ | ✗ | ✓ |
| Runtime Evidence | ✓ | ✓ | ✗ | ✓ | ✓ |
| Cryptographic Proofs | ✗ | ✗ | ✗ | ✗ | ✓ |
| Deterministic Scans | ✗ | ✗ | ✗ | ✗ | ✓ |
| Offline/Air-Gap | ✗ | Partial | ✗ | ✗ | ✓ |
| Regional Crypto | ✗ | ✗ | ✗ | ✗ | ✓ |
9.2 Monitoring Strategy
- Track vendor release notes
- Monitor GitHub repos for feature announcements
- Participate in security conferences
- Engage with customer feedback
- Update competitive matrix quarterly
10. MESSAGING FRAMEWORK
10.1 Core Message
"StellaOps provides deterministic, proof-backed vulnerability management with reachability analysis for offline/air-gapped environments."
10.2 Key Differentiators (Elevator Pitch)
- Deterministic: Same inputs → same outputs, every time
- Provable: Cryptographic proof chains for every decision
- Reachable: Static + runtime analysis, not just presence
- Sovereign: Offline operation, regional crypto compliance
- Open: AGPL-3.0, self-hostable, no lock-in
10.3 Target Personas
- Security Engineers: Need proof-backed decisions for audits
- DevOps Teams: Need deterministic scans in CI/CD
- Compliance Officers: Need offline/air-gap for regulated environments
- Platform Engineers: Need self-hostable, sovereign solution
11. BENCHMARKING PROTOCOL
11.1 Comparative Test Suite
Images:
- 50 representative production images
- Known vulnerabilities labeled
- Reachability ground truth established
Metrics:
- Precision (TP / (TP + FP))
- Recall (TP / (TP + FN))
- F1 score
- Scan time (P50, P95)
- Determinism (identical outputs over 10 runs)
11.2 Test Execution
# Run StellaOps scan
stellaops scan --image test-image:v1 --output stella-results.json
# Run competitor scans
trivy image --format json test-image:v1 > trivy-results.json
grype test-image:v1 -o json > grype-results.json
snyk container test test-image:v1 --json > snyk-results.json
# Compare results
stellaops benchmark compare \
--ground-truth ground-truth.json \
--stella stella-results.json \
--trivy trivy-results.json \
--grype grype-results.json \
--snyk snyk-results.json
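A minimal Python sketch of the scoring and determinism checks from 11.1, assuming each scanner's output has been normalized to a set of findings:

# Minimal sketch: score one scanner's output against labeled ground truth, and
# check determinism across repeated runs by hashing canonical JSON outputs.
import hashlib, json

def prf1(reported: set, truth: set) -> tuple[float, float, float]:
    tp = len(reported & truth)
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def deterministic(run_outputs: list[dict]) -> bool:
    """True when all runs (e.g., 10 of them) serialize to byte-identical JSON."""
    digests = {hashlib.sha256(json.dumps(o, sort_keys=True).encode()).hexdigest()
               for o in run_outputs}
    return len(digests) == 1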
11.3 Results Publication
- Publish benchmarks quarterly
- Open-source test images and ground truth
- Invite community contributions
- Document methodology transparently
Document Version: 1.0
Target Platform: .NET 10, PostgreSQL ≥16, Angular v17