# Score Proofs & Reachability FAQ

**Sprint:** SPRINT_3500_0004_0004
**Audience:** All Users

---

## General Questions

### Q: What is the difference between Score Proofs and traditional scanning?

**A:** Traditional scanners produce results that may vary between runs and lack auditability. Score Proofs provide:

- **Reproducibility**: Same inputs always produce same outputs
- **Verifiability**: Cryptographic proof of how scores were computed
- **Traceability**: Complete audit trail from inputs to findings
- **Transparency**: Optional anchoring to public transparency logs

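As an illustration of the reproducibility property, the sketch below runs the same scan twice and compares the resulting manifest digests. The `stella scan run` and `stella proof show` subcommands and their JSON fields are assumptions for illustration, not documented commands; substitute the equivalents in your deployment.

```bash
# Hypothetical sketch: verify that two runs over identical inputs
# yield identical proof manifests (subcommands/fields are illustrative).
SCAN_A=$(stella scan run --sbom app.cdx.json --output-format json | jq -r '.scanId')
SCAN_B=$(stella scan run --sbom app.cdx.json --output-format json | jq -r '.scanId')

DIGEST_A=$(stella proof show --scan-id "$SCAN_A" --output-format json | jq -r '.manifestDigest')
DIGEST_B=$(stella proof show --scan-id "$SCAN_B" --output-format json | jq -r '.manifestDigest')

# Identical digests demonstrate deterministic scoring.
[ "$DIGEST_A" = "$DIGEST_B" ] && echo "reproducible" || echo "MISMATCH"
```
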
### Q: Do I need Score Proofs for compliance?

**A:** No framework mandates Score Proofs specifically, but they provide useful evidence for:

- **SOC 2**: Evidence of security scanning processes
- **PCI DSS**: Proof of vulnerability assessments
- **HIPAA**: Documentation of security controls
- **ISO 27001**: Audit trail for security activities

### Q: Can I use reachability without Score Proofs?

**A:** Yes, the features are independent. You can:

- Use reachability alone to prioritize vulnerabilities
- Use Score Proofs alone for audit trails
- Use both for maximum value

---

## Score Proofs

### Q: What's included in a proof bundle?

**A:** A proof bundle contains:

```
proof-bundle/
├── manifest.json    # All input digests and configuration
├── attestations/    # DSSE signatures
├── inputs/          # (Optional) Actual input data
└── bundle.sig       # Bundle signature
```

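A quick way to inspect a bundle on disk, using only the paths shown in the layout above (the manifest schema varies by version, so look before you script against it):

```bash
# Pretty-print the manifest; the schema depends on your version.
jq '.' proof-bundle/manifest.json | head -50

# Record a digest of the whole bundle for your own inventory.
tar -cf - proof-bundle/ | sha256sum
```
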
### Q: How long should I retain proof bundles?

**A:** Recommended retention:

- **Active releases**: Forever (or product lifetime)
- **Previous releases**: 3 years minimum
- **Development builds**: 90 days

### Q: Can I verify proofs offline?

**A:** Yes, with preparation:

1. Export the proof bundle with inputs
2. Ensure trust anchors (public keys) are available
3. Use `stella proof verify --offline`

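A sketch of the full sequence. `stella proof verify --offline` comes from step 3; the export subcommand and the extra flags are illustrative assumptions, not documented options:

```bash
# On the connected side: export a bundle with inputs included
# (subcommand and flags are illustrative).
stella proof export --scan-id "$SCAN_ID" --include-inputs --output proof-bundle/

# Transfer proof-bundle/ and your trust anchors to the offline host, then:
stella proof verify --offline \
  --bundle proof-bundle/ \
  --trust-anchors /etc/stella/trust-anchors/   # hypothetical flag
```
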
### Q: What happens if advisory data changes?

**A:** The original proof remains valid because it references the advisory data **at the time of scan** (by digest). Replaying with new advisory data will produce a different manifest.

### Q: How do I compare scans over time?

**A:** Use the diff command:

```bash
stella score diff --scan-id $SCAN1 --compare $SCAN2
```

This shows:

- New vulnerabilities
- Resolved vulnerabilities
- Score changes
- Input differences

---

## Reachability

### Q: How accurate is reachability analysis?

**A:** Accuracy depends on call graph quality:

- **Complete call graphs**: 85-95% accuracy
- **Partial call graphs**: 60-80% accuracy
- **No call graph**: No reachability (all UNKNOWN)

### Q: Why is my finding marked UNKNOWN?

**A:** Common causes:

1. No call graph uploaded
2. Call graph doesn't include the affected package
3. Vulnerable function symbol couldn't be resolved

**Solution**: Check call graph coverage:

```bash
stella scan graph summary --scan-id $SCAN_ID
```

### Q: Can reflection-based calls be detected?

**A:** Partially. The system:

- Detects common reflection patterns in supported frameworks
- Marks reflection-based paths as `POSSIBLY_REACHABLE`
- Allows manual hints for custom reflection

### Q: What's the difference between POSSIBLY_REACHABLE and REACHABLE_STATIC?

**A:**

- **POSSIBLY_REACHABLE**: Path exists but involves heuristic edges (reflection, dynamic dispatch). Confidence is lower.
- **REACHABLE_STATIC**: All edges in the path are statically proven. High confidence.

### Q: How do I improve call graph coverage?

**A:**

1. **Enable whole-program analysis** during build
2. **Include all modules** in the build
3. **Add framework hints** for DI/AOP
4. **Upload runtime traces** for dynamic evidence (see the sketch below)

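A sketch of steps 3 and 4 as CLI calls. The subcommands and flags here are illustrative assumptions, not documented commands; adapt them to your deployment:

```bash
# Step 3 (hypothetical): supply DI/AOP framework hints so injected
# edges resolve instead of dead-ending.
stella scan graph hints --scan-id "$SCAN_ID" --hints-file di-hints.json

# Step 4 (hypothetical): attach runtime traces as dynamic evidence.
stella scan graph upload --scan-id "$SCAN_ID" --runtime-trace traces/prod-sample.ndjson
```
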
### Q: Does reachability work for interpreted languages?

**A:** Yes, but with caveats:

- **Python/JS**: Static analysis provides best-effort call graphs
- **Ruby**: Limited support; many edges are heuristic
- **Runtime traces**: Significantly improve accuracy for all interpreted languages

---

## Unknowns

### Q: Should I be worried about unknowns?

**A:** Unknowns represent blind spots. High-priority unknowns (score ≥ 12) should be investigated. Low-priority unknowns can be tracked but don't require immediate action.

### Q: How do I reduce the number of unknowns?

**A:**

1. **Add mappings**: Contribute CPE mappings to public databases
2. **Use supported packages**: Replace unmappable dependencies
3. **Contact vendors**: Request CVE IDs for security issues
4. **Build internal registry**: Map internal packages to advisories

### Q: What's the difference between suppress and resolve?

**A:**

| Action | Use When | Duration | Audit |
|--------|----------|----------|-------|
| Suppress | Accept risk temporarily | Has expiration | Reviewed periodically |
| Resolve | Issue is addressed | Permanent | Closed with evidence |

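For example, a suppression with an expiry might look like the following; the `suppress` subcommand and its flags are illustrative assumptions:

```bash
# Hypothetical: accept the risk for a bounded window, with a recorded reason.
stella unknowns suppress --id "$UNKNOWN_ID" \
  --reason "vendor fix scheduled for Q2" \
  --expires 2026-03-31
```
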
### Q: Can unknowns block my pipeline?

**A:** Yes, you can configure policies:

```bash
# Block on critical unknowns
stella unknowns list --min-score 20 --status pending --output-format json | jq 'length'
```

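The command above only counts pending critical unknowns. To actually fail a CI job on a nonzero count, wrap it in a gate like this sketch (the score threshold of 20 comes from the example above; tune it to your policy):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Count pending unknowns at or above the critical threshold.
count=$(stella unknowns list --min-score 20 --status pending --output-format json | jq 'length')

if [ "$count" -gt 0 ]; then
  echo "Blocking: $count critical unknowns pending review" >&2
  exit 1
fi
```
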
---

## Air-Gap / Offline

### Q: Can I run fully offline?

**A:** Yes, with an offline kit containing:

- Frozen advisory feeds
- Trust anchors (public keys)
- Time anchor (trusted timestamp)
- Configuration files

### Q: How fresh is offline advisory data?

**A:** As fresh as when the offline kit was created, so update kits regularly:

```bash
# On connected system
stella airgap prepare --feeds nvd,ghsa --output offline-kit/
```

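On the air-gapped side, the kit then has to be loaded before scanning; the `import` subcommand below is an illustrative assumption mirroring `airgap prepare`:

```bash
# On the air-gapped system (hypothetical subcommand).
stella airgap import --kit offline-kit/
```
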
### Q: How do I handle transparency logs offline?

**A:** Offline mode uses a local proof ledger instead of Sigstore Rekor. The local ledger provides:

- Chain integrity (hash links)
- Tamper evidence
- Export capability for later anchoring

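When connectivity returns, the export capability lets you anchor the accumulated ledger entries publicly. Both subcommands below are illustrative assumptions, not documented commands:

```bash
# Hypothetical: export the local ledger, then anchor it once online.
stella ledger export --output ledger-export.ndjson
stella ledger anchor --input ledger-export.ndjson --backend rekor
```
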
---

## Performance

### Q: How long does reachability computation take?

**A:** Depends on graph size:

| Graph Size | Typical Duration |
|------------|------------------|
| <10K nodes | <30 seconds |
| 10K-100K nodes | 30s - 3 minutes |
| 100K-1M nodes | 3-15 minutes |
| >1M nodes | 15+ minutes |

### Q: Can I speed up scans?

**A:** Yes:

1. **Enable caching**: `--cache enabled`
2. **Limit depth**: `--max-depth 15`
3. **Partition analysis**: `--partition-by artifact`
4. **Enable parallelism**: `--parallel true`

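Combined into a single invocation. The flags come from the list above; the `stella reachability compute` base subcommand is an assumption, so attach the flags to whichever command runs your analysis:

```bash
# All four speed-ups together (base subcommand is illustrative).
stella reachability compute --scan-id "$SCAN_ID" \
  --cache enabled \
  --max-depth 15 \
  --partition-by artifact \
  --parallel true
```
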
### Q: How much storage do proofs require?

**A:**

- **Manifest only**: ~50-100 KB per scan
- **With inputs**: 10-500 MB depending on SBOM/call graph size
- **Full bundle**: Add ~50% for signatures and metadata

---

## Integration

### Q: Which CI/CD systems are supported?

**A:** Any CI/CD system that can run CLI commands:

- GitHub Actions
- GitLab CI
- Jenkins
- Azure Pipelines
- CircleCI
- Buildkite

### Q: How do I integrate with SIEM?

**A:** Export findings and unknowns as NDJSON:

```bash
# Findings
stella scan findings --scan-id $SCAN_ID --output-format ndjson > /var/log/stella/findings.ndjson

# Unknowns
stella unknowns export --workspace-id $WS_ID --format ndjson > /var/log/stella/unknowns.ndjson
```

### Q: Can I generate SARIF for code scanning?

**A:** Yes:

```bash
stella reachability findings --scan-id $SCAN_ID --output-format sarif > results.sarif
```

### Q: Is there an API for everything?

**A:** Yes, the CLI wraps the REST API. See the [API Reference](../api/score-proofs-reachability-api-reference.md) for endpoints.

---

## Troubleshooting Quick Reference

| Issue | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| Replay produces different results | Missing inputs | `stella proof inspect --check-inputs` |
| Too many UNKNOWN reachability results | Incomplete call graph | `stella scan graph summary` |
| Signature verification fails | Key rotation | `stella trust list` |
| Computation timeout | Large graph | Increase `--timeout` |
| Many unmapped_purl unknowns | Internal packages | Add internal registry mappings |

---

## Related Documentation

- [Score Proofs Concept Guide](./score-proofs-concept-guide.md)
- [Reachability Concept Guide](./reachability-concept-guide.md)
- [Unknowns Management Guide](./unknowns-management-guide.md)
- [Troubleshooting Guide](./troubleshooting-guide.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004