# Epic 3500: Score Proofs & Reachability FAQ
**Sprint:** SPRINT_3500_0004_0004
**Last Updated:** 2025-12-20
This FAQ covers the most common questions about Score Proofs, Reachability Analysis, and Unknowns Management features introduced in Epic 3500.
---
## Table of Contents
1. [General Questions](#general-questions)
2. [Score Proofs](#score-proofs)
3. [Reachability Analysis](#reachability-analysis)
4. [Unknowns Queue](#unknowns-queue)
5. [Integration & Deployment](#integration--deployment)
6. [Performance](#performance)
7. [Troubleshooting](#troubleshooting)
---
## General Questions
### Q: What is Epic 3500?
**A:** Epic 3500 introduces three major capabilities to StellaOps:
1. **Score Proofs**: Cryptographically verifiable attestations proving that vulnerability scores are deterministic and reproducible
2. **Reachability Analysis**: Static analysis determining whether vulnerable code paths are actually reachable from your application
3. **Unknowns Management**: Tracking and triaging components that cannot be fully analyzed
### Q: Do I need all three features?
**A:** The features work independently but provide the most value together:
- **Score Proofs alone**: Useful for compliance and audit trails
- **Reachability alone**: Useful for prioritizing remediation
- **Together**: Full attack surface context with cryptographic proof
### Q: What's the minimum version required?
**A:** Epic 3500 features require:
- StellaOps Scanner v2.5.0+
- StellaOps CLI v2.5.0+
- .NET 10 runtime (for self-hosted deployments)
### Q: Are these features available in air-gapped environments?
**A:** Yes. All Epic 3500 features support air-gap operation:
- Score Proofs can be generated and verified offline
- Reachability analysis requires no network connectivity
- Unknowns data persists locally
See the [Air-Gap Operations Guide](../operations/airgap-operations-runbook.md) for details.
---
## Score Proofs
### Q: What exactly is a Score Proof?
**A:** A Score Proof is a DSSE-signed attestation bundle containing:
```
Score Proof Bundle
├── scan_manifest.json # Content-addressed inputs (SBOM, feeds)
├── proof.dsse # DSSE-signed attestation
├── merkle_proof.json # Merkle tree proof for individual findings
└── replay_instructions.md # How to reproduce the scan
```
The proof guarantees that given the same inputs, anyone can reproduce the exact same vulnerability scores.
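A quick way to see this property in practice (a sketch; the output directories are illustrative) is to run the same scan twice and compare the content-addressed manifests:
```bash
# Run the identical scan twice; deterministic scoring should produce
# byte-identical manifests. Output directories are illustrative.
stella scan --sbom ./sbom.json --generate-proof --output ./run1/
stella scan --sbom ./sbom.json --generate-proof --output ./run2/

# Matching digests confirm the runs were reproducible
sha256sum ./run1/scan_manifest.json ./run2/scan_manifest.json
```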
### Q: Why do I need Score Proofs?
**A:** Score Proofs solve several problems:
| Problem | Solution |
|---------|----------|
| "Scanner gave different results yesterday" | Manifest captures exact inputs |
| "How do I prove to auditors this was the score?" | DSSE signatures provide non-repudiation |
| "Can I trust this third-party scan?" | Independent verification possible |
| "Which advisory version was used?" | All feed digests recorded |
### Q: How do I generate a Score Proof?
**A:** Enable proofs during scanning:
```bash
# CLI
stella scan --sbom ./sbom.json --generate-proof --output ./scan-with-proof/
```

```http
POST /api/v1/scans
{
  "sbomDigest": "sha256:abc...",
  "options": {
    "generateProof": true
  }
}
```
### Q: How do I verify a Score Proof?
**A:** Use the verify command:
```bash
# Verify signature and Merkle root
stella proof verify ./scan-with-proof/proof.dsse
# Full replay verification (re-runs scan)
stella score replay ./scan-with-proof/ --verify
```
### Q: Can I verify proofs without network access?
**A:** Yes. Verification only requires:
- The proof bundle
- A trusted public key
- Optionally, the original inputs (for replay)
See: [Proof Verification Runbook](../operations/proof-verification-runbook.md)
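A minimal offline invocation might look like this (a sketch; the `--key` flag for supplying a locally staged public key is an assumption — consult the runbook for the exact option):
```bash
# Offline verification: proof bundle plus a locally staged trust anchor.
# The --key flag is an assumption; see the runbook for exact options.
stella proof verify ./scan-with-proof/proof.dsse \
  --offline \
  --key ./trust/stellaops-signing.pub
```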
### Q: What signing algorithms are supported?
**A:** Current support includes:
| Algorithm | Status | Use Case |
|-----------|--------|----------|
| ECDSA P-256 | ✅ Supported | Default, widely compatible |
| ECDSA P-384 | ✅ Supported | Higher security |
| RSA-2048 | ✅ Supported | Legacy compatibility |
| Ed25519 | ✅ Supported | Modern, fast |
| PQC (ML-DSA) | 🔜 Roadmap | Post-quantum ready |
### Q: How long are proofs valid?
**A:** Proofs don't expire, but their trust depends on:
- Key validity at signing time
- Certificate chain validity (if using X.509)
- Rekor timestamp (if transparency log enabled)
Best practice: Archive proofs with their verification materials.
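For example, a simple archival step (plain shell; paths are illustrative):
```bash
# Keep the proof and the key material that verifies it together,
# so verification still works after key rotation.
tar czf ./archive/scan-2025-12-20-proof.tgz \
  ./scan-with-proof/ \
  ./trust/stellaops-signing.pub
```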
### Q: What's the overhead of generating proofs?
**A:** Typical overhead:
- **Time**: +5-15% scan duration
- **Storage**: +10-20KB per scan (proof bundle)
- **CPU**: Minimal (signing is fast)
For detailed benchmarks, see [Performance Workbook](../PERFORMANCE_WORKBOOK.md).
---
## Reachability Analysis
### Q: What is reachability analysis?
**A:** Reachability analysis answers: "Can vulnerable code actually be executed?"
It analyzes your application's call graph to determine if vulnerable functions in dependencies are reachable from your entry points.
### Q: What are the reachability verdicts?
**A:**
| Verdict | Meaning | Action |
|---------|---------|--------|
| `REACHABLE_STATIC` | Vulnerable code is on an executable path | **Prioritize fix** |
| `POSSIBLY_REACHABLE` | May be reachable under certain conditions | **Review** |
| `NOT_REACHABLE` | No path from entry points to vulnerable code | **Lower priority** |
| `UNKNOWN` | Analysis couldn't determine reachability | **Manual review** |
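Verdicts can be queried directly from the CLI, using the same filter syntax as the CI example in the Integration section:
```bash
# Count findings that should be prioritized
stella query --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" --count

# Count findings needing manual review
stella query --filter "reachability=POSSIBLY_REACHABLE" --count
```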
### Q: What's the difference from EPSS?
**A:**
| Metric | What It Measures | Data Source |
|--------|------------------|-------------|
| **EPSS** | Probability of exploitation in the wild | Threat intelligence |
| **Reachability** | Whether your code can trigger the vuln | Your application's call graph |
They're complementary: EPSS estimates how likely a vulnerability is to be exploited globally, while reachability tells you whether it can affect your application specifically.
### Q: How do I enable reachability analysis?
**A:**
```bash
# CLI - analyze SBOM with reachability
stella scan --sbom ./sbom.json --reachability
# With call graph input
stella scan --sbom ./sbom.json --call-graph ./callgraph.json
# Generate call graph first
stella scan graph ./src --output ./callgraph.json
```
### Q: What languages are supported for call graph analysis?
**A:**
| Language | Support Level | Notes |
|----------|---------------|-------|
| Java | Full | Requires bytecode |
| JavaScript/TS | Full | Requires source |
| Python | Full | Requires source |
| Go | Full | Requires source or binary |
| C# | Partial | Basic support |
| C/C++ | Limited | Best-effort |
### Q: What if my language isn't supported?
**A:** You can:
1. Provide a custom call graph in standard format (see the sketch below)
2. Use `UNKNOWN` verdict handling
3. Combine with other prioritization signals (EPSS, VEX)
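A sketch of option 1, reusing the `--call-graph` flag shown above (the file name, and the graph's contents, are whatever your own tooling produces):
```bash
# Feed a call graph produced by your own tooling instead of
# `stella scan graph`. The file name is illustrative.
stella scan --sbom ./sbom.json --call-graph ./custom-callgraph.json
```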
### Q: How accurate is reachability analysis?
**A:** Accuracy depends on several factors:
| Factor | Impact |
|--------|--------|
| Call graph completeness | Higher completeness = better accuracy |
| Dynamic dispatch | May cause under-reporting (conservative) |
| Reflection/eval | May cause under-reporting |
| Language support | Full support = more accurate |
StellaOps errs on the side of caution: when uncertain, it reports `POSSIBLY_REACHABLE` rather than risking a false negative.
### Q: What's the performance impact?
**A:** Reachability analysis adds:
- **Call graph generation**: 10-60 seconds depending on codebase size
- **Reachability computation**: 1-10 seconds per scan
- **Memory**: Call graph size varies (typically 10-100MB)
### Q: Can I cache call graphs?
**A:** Yes. If your code hasn't changed, reuse the call graph:
```bash
# Cache the call graph
stella scan graph ./src --output ./callgraph.json
# Reuse in subsequent scans
stella scan --sbom ./sbom.json --call-graph ./callgraph.json
```
---
## Unknowns Queue
### Q: What is the unknowns queue?
**A:** The unknowns queue tracks components that couldn't be fully analyzed:
- Packages without advisory mappings
- Unrecognized file formats
- Resolution failures
- Unsupported ecosystems
### Q: Why should I care about unknowns?
**A:** Unknowns represent blind spots:
```
❌ 5% unknown = 5% of your attack surface is invisible
```
Unmanaged unknowns can hide critical vulnerabilities.
### Q: How do I view unknowns?
**A:**
```bash
# List pending unknowns
stella unknowns list
# Get statistics
stella unknowns stats
# Export for analysis
stella unknowns list --format csv > unknowns.csv
```
### Q: How do I resolve unknowns?
**A:** Common resolution paths:
| Unknown Type | Resolution |
|--------------|------------|
| Internal package | Mark as `internal_package` |
| Missing mapping | Submit CPE/PURL mapping |
| Feed delay | Update feeds, reprocess |
| Unsupported format | Convert to supported format |
```bash
# Resolve as internal package
stella unknowns resolve <id> --resolution internal_package
# After feed update, reprocess
stella feeds update --all
stella unknowns reprocess
```
### Q: What's a good unknowns rate?
**A:** Target metrics:
| Metric | Good | Warning | Critical |
|--------|------|---------|----------|
| Unknown package % | < 5% | 5-10% | > 10% |
| Pending queue depth | < 50 | 50-100 | > 100 |
| Avg resolution time | < 7d | 7-14d | > 14d |
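These thresholds can be checked in a script; a sketch (the flags match those used for pipeline gating elsewhere in these FAQs, the threshold mirrors the table):
```bash
# Warn when pending queue depth exceeds the target from the table above.
PENDING=$(stella unknowns list --status pending --output-format json | jq 'length')
if [ "$PENDING" -gt 50 ]; then
  echo "Pending unknowns: ${PENDING} (target < 50)"
fi
```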
### Q: Can I automate unknown handling?
**A:** Yes, using patterns:
```yaml
# config/unknowns.yaml
internalPatterns:
- "@mycompany/*"
- "internal-*"
autoResolution:
- match: "reason = NO_ADVISORY_MATCH AND ecosystem = internal"
resolution: internal_package
```
---
## Integration & Deployment
### Q: How do I integrate with CI/CD?
**A:** Example GitHub Actions workflow:
```yaml
- name: Scan with proofs and reachability
run: |
stella scan \
--sbom ./sbom.json \
--generate-proof \
--reachability \
--output ./results/
- name: Fail on reachable criticals
run: |
REACHABLE=$(stella query --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" --count)
if [ "$REACHABLE" -gt 0 ]; then
exit 1
fi
```
### Q: Can I use these features in GitLab CI?
**A:** Yes. The same commands work in any CI system. See the [CI Integration Guide](../ci/).
### Q: What API endpoints are available?
**A:** Key endpoints:
| Feature | Endpoint | Method |
|---------|----------|--------|
| Generate proof | `/api/v1/scans/{id}/proof` | GET |
| Verify proof | `/api/v1/proofs/verify` | POST |
| Reachability | `/api/v1/scans/{id}/reachability` | GET |
| Unknowns | `/api/v1/unknowns` | GET/POST/DELETE |
Full API reference: [API Documentation](../api/)
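For example, fetching reachability results over the REST API (the endpoint comes from the table above; host and auth scheme are placeholders for your deployment):
```bash
# GET reachability results for a completed scan.
# Base URL and token are placeholders for your deployment.
curl -s \
  -H "Authorization: Bearer ${STELLA_TOKEN}" \
  "https://stellaops.example.com/api/v1/scans/${SCAN_ID}/reachability"
```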
### Q: How do I configure retention?
**A:** Configure in `appsettings.json`:
```json
{
"Retention": {
"Proofs": {
"DefaultDays": 365,
"MaxDays": 1825
},
"Reachability": {
"CacheHours": 24
},
"Unknowns": {
"ArchiveAfterDays": 90
}
}
}
```
---
## Performance
### Q: What's the performance impact of enabling all features?
**A:** Typical combined overhead:
| Feature | Time Overhead | Storage Overhead |
|---------|---------------|------------------|
| Base scan | Baseline | Baseline |
| + Proof generation | +5-15% | +20KB |
| + Reachability | +50-200% | +50KB |
| + All features | +60-220% | +70KB |
### Q: How do I optimize for large codebases?
**A:**
1. **Cache call graphs** for incremental scans
2. **Use parallel processing** for multi-repo scans
3. **Limit reachability depth** for very large graphs
4. **Use deterministic mode** only when needed
```bash
# Limit BFS depth for large graphs
stella scan --reachability --reachability-depth 10
```
### Q: Are there size limits?
**A:**
| Resource | Default Limit | Configurable |
|----------|---------------|--------------|
| SBOM size | 50MB | Yes |
| Call graph nodes | 1M | Yes |
| Proof bundle size | 10MB | Yes |
| Unknowns queue | 10K items | Yes |
---
## Troubleshooting
### Q: Score replay gives different results
**A:** Check:
1. Feed versions match (`stella feeds status`)
2. Algorithm version matches
3. No clock skew (UTC timestamps)
4. Same configuration settings
```bash
# Verify manifest
stella proof verify --manifest-only ./proof.dsse
```
### Q: Reachability shows everything as UNKNOWN
**A:** This usually means:
1. Call graph wasn't generated
2. Language not supported
3. Source code not available
```bash
# Check call graph status
stella scan graph status ./callgraph.json
```
### Q: Unknowns queue growing rapidly
**A:** Common causes:
1. **Feed staleness**: Update feeds
2. **Internal packages**: Configure internal patterns
3. **New ecosystems**: Check language support
```bash
# Diagnose
stella unknowns stats --by-reason
```
### Q: Proof verification fails
**A:** Check:
1. Public key matches signing key
2. Certificate not expired
3. Bundle not corrupted
```bash
# Detailed verification
stella proof verify ./proof.dsse --verbose
```
### Q: Where do I get more help?
**A:**
- [Operations Runbooks](../operations/)
- [Troubleshooting Guide](troubleshooting-guide.md)
- [Architecture Documentation](../ARCHITECTURE_OVERVIEW.md)
- Support: support@stellaops.example.com
---
## Quick Reference
### Commands Cheat Sheet
```bash
# Score Proofs
stella scan --generate-proof # Generate proof
stella proof verify ./proof.dsse # Verify proof
stella score replay ./bundle/ # Replay scan
# Reachability
stella scan graph ./src # Generate call graph
stella scan --reachability # Scan with reachability
stella reachability query --filter "<expr>" # Query reachability data
# Unknowns
stella unknowns list # List unknowns
stella unknowns resolve <id> # Resolve unknown
stella unknowns stats # Statistics
```
### Configuration Quick Reference
```json
{
"ScoreProofs": {
"Enabled": true,
"SigningAlgorithm": "ECDSA-P256"
},
"Reachability": {
"Enabled": true,
"MaxDepth": 50,
"CacheEnabled": true
},
"Unknowns": {
"AutoResolveInternal": true,
"InternalPatterns": ["@company/*"]
}
}
```
---
**Feedback?** Submit issues or suggestions via the project's issue tracker.
# Score Proofs & Reachability FAQ
**Sprint:** SPRINT_3500_0004_0004
**Audience:** All Users
---
## General Questions
### Q: What is the difference between Score Proofs and traditional scanning?
**A:** Traditional scanners produce results that may vary between runs and lack auditability. Score Proofs provide:
- **Reproducibility**: Same inputs always produce same outputs
- **Verifiability**: Cryptographic proof of how scores were computed
- **Traceability**: Complete audit trail from inputs to findings
- **Transparency**: Optional anchoring to public transparency logs
### Q: Do I need Score Proofs for compliance?
**A:** No framework requires Score Proofs by name, but they provide strong evidence for:
- **SOC 2**: Evidence of security scanning processes
- **PCI DSS**: Proof of vulnerability assessments
- **HIPAA**: Documentation of security controls
- **ISO 27001**: Audit trail for security activities
### Q: Can I use reachability without Score Proofs?
**A:** Yes, the features are independent. You can:
- Use reachability alone to prioritize vulnerabilities
- Use Score Proofs alone for audit trails
- Use both for maximum value
---
## Score Proofs
### Q: What's included in a proof bundle?
**A:** A proof bundle contains:
```
proof-bundle/
├── manifest.json # All input digests and configuration
├── attestations/ # DSSE signatures
├── inputs/ # (Optional) Actual input data
└── bundle.sig # Bundle signature
```
### Q: How long should I retain proof bundles?
**A:** Recommended retention:
- **Active releases**: Forever (or product lifetime)
- **Previous releases**: 3 years minimum
- **Development builds**: 90 days
### Q: Can I verify proofs offline?
**A:** Yes, with preparation:
1. Export the proof bundle with inputs
2. Ensure trust anchors (public keys) are available
3. Use `stella proof verify --offline` (see the sketch below)
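Putting the steps together (a sketch; paths are illustrative and assume the bundle and keys were staged out-of-band):
```bash
# Trust anchors staged out-of-band (step 2), then offline verification (step 3).
ls ./trust/                      # local public keys / trust anchors
stella proof verify --offline ./proof-bundle/proof.dsse
```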
### Q: What happens if advisory data changes?
**A:** The original proof remains valid because it references the advisory data **at the time of scan** (by digest). Replaying with new advisory data will produce a different manifest.
### Q: How do I compare scans over time?
**A:** Use the diff command:
```bash
stella score diff --scan-id $SCAN1 --compare $SCAN2
```
This shows:
- New vulnerabilities
- Resolved vulnerabilities
- Score changes
- Input differences
---
## Reachability
### Q: How accurate is reachability analysis?
**A:** Accuracy depends on call graph quality:
- **Complete call graphs**: 85-95% accuracy
- **Partial call graphs**: 60-80% accuracy
- **No call graph**: No reachability (all UNKNOWN)
### Q: Why is my finding marked UNKNOWN?
**A:** Common causes:
1. No call graph uploaded
2. Call graph doesn't include the affected package
3. Vulnerable function symbol couldn't be resolved
**Solution**: Check call graph coverage:
```bash
stella scan graph summary --scan-id $SCAN_ID
```
### Q: Can reflection-based calls be detected?
**A:** Partially. The system:
- Detects common reflection patterns in supported frameworks
- Marks reflection-based paths as `POSSIBLY_REACHABLE`
- Allows manual hints for custom reflection
### Q: What's the difference between POSSIBLY_REACHABLE and REACHABLE_STATIC?
**A:**
- **POSSIBLY_REACHABLE**: Path exists but involves heuristic edges (reflection, dynamic dispatch). Confidence is lower.
- **REACHABLE_STATIC**: All edges in the path are statically proven. High confidence.
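In triage the two verdicts are usually handled separately; a sketch using `stella reachability query` (the filter expression syntax here is an assumption):
```bash
# High confidence: fix first (filter syntax is an assumption)
stella reachability query --filter "verdict=REACHABLE_STATIC AND severity=CRITICAL"

# Lower confidence: route to manual review
stella reachability query --filter "verdict=POSSIBLY_REACHABLE"
```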
### Q: How do I improve call graph coverage?
**A:**
1. **Enable whole-program analysis** during build
2. **Include all modules** in the build
3. **Add framework hints** for DI/AOP
4. **Upload runtime traces** for dynamic evidence
### Q: Does reachability work for interpreted languages?
**A:** Yes, but with caveats:
- **Python/JS**: Static analysis provides best-effort call graphs
- **Ruby**: Limited support, many edges are heuristic
- **Runtime traces**: Significantly improve accuracy for all interpreted languages
---
## Unknowns
### Q: Should I be worried about unknowns?
**A:** Unknowns represent blind spots. High-priority unknowns (score ≥12) should be investigated. Low-priority unknowns can be tracked but don't require immediate action.
### Q: How do I reduce the number of unknowns?
**A:**
1. **Add mappings**: Contribute CPE mappings to public databases
2. **Use supported packages**: Replace unmappable dependencies
3. **Contact vendors**: Request CVE IDs for security issues
4. **Build internal registry**: Map internal packages to advisories
### Q: What's the difference between suppress and resolve?
**A:**
| Action | Use When | Duration | Audit |
|--------|----------|----------|-------|
| Suppress | Accept risk temporarily | Has expiration | Reviewed periodically |
| Resolve | Issue is addressed | Permanent | Closed with evidence |
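For example (the `resolve` invocation matches the commands shown elsewhere in these FAQs; the `suppress` invocation and its flags are assumptions illustrating the distinction):
```bash
# Resolve: permanent, closed with evidence
stella unknowns resolve 1234 --resolution internal_package

# Suppress: temporary risk acceptance with an expiry (flags are assumptions)
stella unknowns suppress 1234 --expires 2026-03-31 --reason "vendor fix pending"
```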
### Q: Can unknowns block my pipeline?
**A:** Yes. You can gate a pipeline on unknowns, for example by failing when high-score unknowns are still pending:
```bash
# Block on critical unknowns (score >= 20)
CRITICAL=$(stella unknowns list --min-score 20 --status pending --output-format json | jq 'length')
if [ "$CRITICAL" -gt 0 ]; then
  echo "Pending critical unknowns: ${CRITICAL}"
  exit 1
fi
```
---
## Air-Gap / Offline
### Q: Can I run fully offline?
**A:** Yes, with an offline kit containing:
- Frozen advisory feeds
- Trust anchors (public keys)
- Time anchor (trusted timestamp)
- Configuration files
### Q: How fresh is offline advisory data?
**A:** As fresh as when the offline kit was created. Update kits regularly:
```bash
# On connected system
stella airgap prepare --feeds nvd,ghsa --output offline-kit/
```
### Q: How do I handle transparency logs offline?
**A:** Offline mode uses a local proof ledger instead of Sigstore Rekor. The local ledger provides:
- Chain integrity (hash links)
- Tamper evidence
- Export capability for later anchoring
---
## Performance
### Q: How long does reachability computation take?
**A:** Depends on graph size:
| Graph Size | Typical Duration |
|------------|------------------|
| <10K nodes | <30 seconds |
| 10K-100K nodes | 30s - 3 minutes |
| 100K-1M nodes | 3-15 minutes |
| >1M nodes | 15+ minutes |
### Q: Can I speed up scans?
**A:** Yes:
1. **Enable caching**: `--cache enabled`
2. **Limit depth**: `--max-depth 15`
3. **Partition analysis**: `--partition-by artifact`
4. **Enable parallelism**: `--parallel true` (combined example below)
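All four combined (a sketch; the flags come from the list above, their combination on one invocation is assumed):
```bash
# A scan invocation combining the optimizations above
stella scan --sbom ./sbom.json --reachability \
  --cache enabled \
  --max-depth 15 \
  --partition-by artifact \
  --parallel true
```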
### Q: How much storage do proofs require?
**A:**
- **Manifest only**: ~50-100 KB per scan
- **With inputs**: 10-500 MB depending on SBOM/call graph size
- **Full bundle**: Add ~50% for signatures and metadata
---
## Integration
### Q: Which CI/CD systems are supported?
**A:** Any CI/CD system that can run CLI commands:
- GitHub Actions
- GitLab CI
- Jenkins
- Azure Pipelines
- CircleCI
- Buildkite
### Q: How do I integrate with SIEM?
**A:** Export findings and unknowns as NDJSON:
```bash
# Findings
stella scan findings --scan-id $SCAN_ID --output-format ndjson > /var/log/stella/findings.ndjson
# Unknowns
stella unknowns export --workspace-id $WS_ID --format ndjson > /var/log/stella/unknowns.ndjson
```
### Q: Can I generate SARIF for code scanning?
**A:** Yes:
```bash
stella reachability findings --scan-id $SCAN_ID --output-format sarif > results.sarif
```
### Q: Is there an API for everything?
**A:** Yes, the CLI wraps the REST API. See [API Reference](../api/score-proofs-reachability-api-reference.md) for endpoints.
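For example, the proof-verification endpoint from the endpoints table earlier in these FAQs can be called directly (the request body shape is an assumption):
```bash
# Server-side proof verification over REST; body shape is an assumption.
curl -s -X POST \
  -H "Authorization: Bearer ${STELLA_TOKEN}" \
  -H "Content-Type: application/json" \
  --data @./verify-request.json \
  "https://stellaops.example.com/api/v1/proofs/verify"
```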
---
## Troubleshooting Quick Reference
| Issue | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| Replay produces different results | Missing inputs | `stella proof inspect --check-inputs` |
| Too many UNKNOWN reachability | Incomplete call graph | `stella scan graph summary` |
| Signature verification fails | Key rotation | `stella trust list` |
| Computation timeout | Large graph | Increase `--timeout` |
| Many unmapped_purl unknowns | Internal packages | Add internal registry mappings |
---
## Related Documentation
- [Score Proofs Concept Guide](./score-proofs-concept-guide.md)
- [Reachability Concept Guide](./reachability-concept-guide.md)
- [Unknowns Management Guide](./unknowns-management-guide.md)
- [Troubleshooting Guide](./troubleshooting-guide.md)
---
**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004