docs(sprint-3500.0004.0004): Complete documentation handoff
Sprint 3500.0004.0004 (Documentation & Handoff) - COMPLETE

Training Materials (T5 DONE):
- epic-3500-faq.md: Comprehensive FAQ for Score Proofs/Reachability
- video-tutorial-scripts.md: 6 video tutorial scripts
- Training guides already existed from prior work

Release Notes (T6 DONE):
- v2.5.0-release-notes.md: Full release notes with breaking changes, upgrade instructions, and performance benchmarks

OpenAPI Specs (T7 DONE):
- Scanner OpenAPI already comprehensive with ProofSpines, Unknowns, CallGraphs, Reachability endpoints and schemas

Handoff Checklist (T8 DONE):
- epic-3500-handoff-checklist.md: Complete handoff documentation including sign-off tracking, escalation paths, monitoring config

All 8/8 tasks complete. Sprint DONE. Epic 3500 documentation deliverables complete.
docs/training/video-tutorial-scripts.md | 505 (new file)
@@ -0,0 +1,505 @@
# Video Tutorial Scripts

**Sprint:** SPRINT_3500_0004_0004
**Format:** Tutorial scripts for screen recording

This document contains scripts for video tutorials covering Score Proofs, Reachability Analysis, and Unknowns Management.

---

## Video 1: Introduction to Score Proofs (5 min)

### Script

**[Opening - Logo/Title Card]**

> Welcome to StellaOps. In this tutorial, you'll learn how Score Proofs provide cryptographic guarantees that your vulnerability scores are reproducible and verifiable.

**[Screen: Terminal]**

> Let's start with a typical vulnerability scan. Here I have an SBOM for my application.

```bash
stella scan --sbom ./sbom.json
```

> The scan produces findings, but how do we know these results are accurate? Can we prove they're reproducible?

**[Screen: Terminal - enable proofs]**

> Let's run the same scan with proof generation enabled.

```bash
stella scan --sbom ./sbom.json --generate-proof --output ./scan-results/
```

> Notice the `--generate-proof` flag. StellaOps now creates a cryptographic attestation alongside the findings.

**[Screen: File browser showing proof bundle]**

> Here's what got generated:
> - `manifest.json` contains content-addressed references to every input
> - `proof.dsse` is the signed attestation
> - `findings.json` has the actual vulnerability results
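
For the reader following along (not part of the narration), a quick way to show that the bundle really is content-addressed is to re-hash the SBOM and compare it with the digest recorded in the manifest. This is a minimal sketch; the `.inputs.sbom.digest` path mirrors the manifest excerpt in Video 6 and should be treated as an assumption about the exact layout.

```bash
# Sketch: re-hash the SBOM and compare it with the digest recorded in the manifest.
# The .inputs.sbom.digest path follows the manifest excerpt shown in Video 6 (assumed layout).
recorded=$(jq -r '.inputs.sbom.digest' ./scan-results/manifest.json)
actual="sha256:$(sha256sum ./sbom.json | cut -d' ' -f1)"

if [ "$recorded" = "$actual" ]; then
  echo "SBOM digest matches the manifest: $actual"
else
  echo "Digest mismatch: manifest=$recorded, recomputed=$actual" >&2
  exit 1
fi
```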

**[Screen: Terminal - verify]**

> Anyone with this bundle can verify the proof:

```bash
stella proof verify ./scan-results/proof.dsse
```

> The signature is valid, and we can see exactly which inputs produced these results.
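
If viewers ask what the `.dsse` file actually contains, it can help to open it up. The sketch below assumes `proof.dsse` is a standard DSSE JSON envelope (`payloadType`, base64 `payload`, `signatures[]`); the exact envelope layout StellaOps emits is not confirmed here.

```bash
# Sketch: inspect the envelope, assuming proof.dsse is a standard DSSE JSON envelope.
jq -r '.payloadType' ./scan-results/proof.dsse         # statement type of the attestation
jq -r '.signatures[].keyid' ./scan-results/proof.dsse  # which keys signed it

# Decode the payload to see the statement the signature actually covers.
jq -r '.payload' ./scan-results/proof.dsse | base64 -d | jq .
```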

**[Screen: Terminal - replay]**

> We can even replay the scan to prove determinism:

```bash
stella score replay ./scan-results/ --verify
```

> Same inputs produce the exact same outputs. This is powerful for audits and compliance.
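
A simple on-camera way to reinforce the determinism claim is to run the scan twice and compare the outputs byte for byte. This sketch reuses only flags already shown in this script; the `findings.json` filename comes from the bundle listing above.

```bash
# Sketch: run the same scan twice and show the outputs are byte-identical.
stella scan --sbom ./sbom.json --generate-proof --output ./run-a/
stella scan --sbom ./sbom.json --generate-proof --output ./run-b/

sha256sum ./run-a/findings.json ./run-b/findings.json
diff ./run-a/findings.json ./run-b/findings.json && echo "Identical findings: the scan is deterministic"
```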

**[Closing]**

> That's Score Proofs in action. For more details, check the documentation or watch our deep-dive tutorials.

---

## Video 2: Understanding Reachability Analysis (7 min)

### Script

**[Opening]**

> Vulnerability scanners often overwhelm teams with alerts. Reachability analysis helps you focus on what actually matters.

**[Screen: Slide showing vulnerability count]**

> Imagine your scanner finds 200 vulnerabilities. In practice, the large majority of findings, often 80-90%, are unreachable: the vulnerable code path never executes in your application.

**[Screen: Terminal]**

> Let's see reachability in action. First, we generate a call graph:

```bash
stella scan graph ./src --output ./callgraph.json
```

> This analyzes your source code to map function calls.

**[Screen: Visualization of call graph]**

> The call graph shows which functions call which. Starting from your entry points, we can trace all possible execution paths.
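
For presenters who want to show the raw artifact rather than a visualization, a couple of `jq` queries over the call graph can stand in. The `nodes`/`edges` layout and the `Main.main` entry-point id below are placeholders; the real `callgraph.json` schema may differ.

```bash
# Sketch: explore the call graph, assuming a simple {nodes: [...], edges: [{from, to}]} layout.
# Field names and the entry-point id are placeholders, not the confirmed schema.
jq '.nodes | length' ./callgraph.json                                      # functions discovered
jq '.edges | length' ./callgraph.json                                      # call relationships
jq '[.edges[] | select(.from == "Main.main")] | length' ./callgraph.json   # calls made directly from an entry point
```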

**[Screen: Terminal - scan with reachability]**

> Now let's scan with reachability enabled:

```bash
stella scan --sbom ./sbom.json --call-graph ./callgraph.json --reachability
```

**[Screen: Results showing reachability verdicts]**

> Look at the results:
> - CVE-2024-1234: `REACHABLE_STATIC` - this IS on an executable path
> - CVE-2024-5678: `NOT_REACHABLE` - safely ignore this one
> - CVE-2024-9012: `POSSIBLY_REACHABLE` - needs review
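
To tally verdicts across a whole result set rather than reading them one by one, a small `jq` pass works. The `reachability` field name mirrors the query filter used just below; the `findings.json` path and its `.findings[]` shape are assumptions.

```bash
# Sketch: count findings per reachability verdict.
# Assumes findings.json exposes .findings[].reachability (field name mirrors the filter below).
jq -r '.findings[].reachability' ./findings.json | sort | uniq -c | sort -rn
```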

**[Screen: Terminal - filter]**

> We can filter to only actionable items:

```bash
stella query --filter "reachability=REACHABLE_STATIC"
```

> From 200 findings, we're now focused on 20 that actually matter.

**[Screen: Diagram showing path]**

> For reachable vulnerabilities, we can see the exact path:

```bash
stella reachability explain --cve CVE-2024-1234
```

> `Main.java:15` calls `Service.java:42`, which calls the vulnerable function.

**[Closing]**

> Reachability analysis transforms alert fatigue into focused action. Enable it today.

---

## Video 3: Managing the Unknowns Queue (5 min)

### Script

**[Opening]**

> Sometimes StellaOps encounters packages it can't fully analyze. The Unknowns Queue helps you track and resolve these blind spots.

**[Screen: Terminal - scan with unknowns]**

> When we scan, some packages may be flagged as unknown:

```bash
stella scan --sbom ./sbom.json
# ...
# ⚠️ 12 unknowns detected
```

**[Screen: Terminal - list unknowns]**

> Let's see what's in the queue:

```bash
stella unknowns list
```

> We have:
> - `internal-auth-lib@1.0.0` - no advisory match
> - `custom-logger@2.1.0` - checksum mismatch
> - A couple more...

**[Screen: Terminal - show details]**

> Let's look at one in detail:

```bash
stella unknowns show UNK-001
```

> This is `internal-auth-lib`—it's our internal package, so of course there's no public advisory!

**[Screen: Terminal - resolve]**

> We can resolve it:

```bash
stella unknowns resolve UNK-001 --resolution internal_package --note "Our internal auth library"
```

> Now it won't keep reappearing in future scans.
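
When several unknowns share the same explanation, resolving them in a loop saves repeated typing. This sketch only reuses the `resolve` form shown above; `internal-unknowns.txt` is a hand-maintained file of IDs, not something the CLI produces.

```bash
# Sketch: resolve a hand-picked batch of unknowns with a shared note.
# internal-unknowns.txt is maintained by you, one ID per line (e.g. UNK-001).
while read -r id; do
  stella unknowns resolve "$id" --resolution internal_package --note "Internal package, no public advisory expected"
done < internal-unknowns.txt
```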

**[Screen: Terminal - patterns]**

> For bulk handling, configure patterns:

```bash
stella config set unknowns.internalPatterns '@mycompany/*,internal-*'
```

> All packages matching these patterns are auto-classified.

**[Screen: Terminal - stats]**

> Track your queue health:

```bash
stella unknowns stats
```

> Target: less than 5% unknown packages.
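
Teams that want to enforce the 5% target in CI can approximate the ratio from the unknowns list and the SBOM. This is a rough sketch: it assumes `stella unknowns list` prints one unknown per line and that the SBOM is CycloneDX with a `components` array.

```bash
# Sketch: rough unknown-ratio gate. Assumes one unknown per output line and a CycloneDX SBOM.
unknowns=$(stella unknowns list | wc -l)
components=$(jq '.components | length' ./sbom.json)

ratio=$(( unknowns * 100 / components ))
echo "Unknown ratio: ${ratio}% (${unknowns}/${components})"
if [ "$ratio" -ge 5 ]; then
  echo "Unknowns exceed the 5% target" >&2
  exit 1
fi
```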

**[Closing]**

> Regular unknown management keeps your security coverage complete. Don't let blind spots accumulate.

---

## Video 4: Integrating with CI/CD (8 min)

### Script

**[Opening]**

> Let's add Score Proofs and Reachability to your CI/CD pipeline.

**[Screen: GitHub Actions YAML]**

> Here's a GitHub Actions workflow:

```yaml
name: Security Scan
on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate SBOM
        run: syft . -o cyclonedx-json > sbom.json

      - name: Generate Call Graph
        run: stella scan graph ./src --output callgraph.json

      - name: Scan with Proofs and Reachability
        run: |
          stella scan \
            --sbom sbom.json \
            --call-graph callgraph.json \
            --generate-proof \
            --reachability \
            --output ./results/

      - name: Check for Reachable Criticals
        run: |
          COUNT=$(stella query \
            --input ./results/ \
            --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" \
            --count)
          if [ "$COUNT" -gt 0 ]; then
            echo "Found $COUNT reachable critical vulnerabilities"
            exit 1
          fi

      - name: Upload Proof Bundle
        uses: actions/upload-artifact@v4
        with:
          name: security-proof
          path: ./results/
```

**[Screen: Pipeline running]**

> Let's trigger the pipeline... and watch it run.

**[Screen: Pipeline success/failure]**

> The pipeline:
> 1. Generates an SBOM
> 2. Creates a call graph
> 3. Scans with proofs and reachability
> 4. Fails if reachable criticals exist
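
For teams not on GitHub Actions, the same gate can live in a standalone script that any CI system calls. This sketch reuses only the commands and flags already shown in the workflow above.

```bash
#!/usr/bin/env bash
# Sketch: a CI-agnostic gate using the same commands as the workflow above.
set -euo pipefail

syft . -o cyclonedx-json > sbom.json
stella scan graph ./src --output callgraph.json
stella scan \
  --sbom sbom.json \
  --call-graph callgraph.json \
  --generate-proof \
  --reachability \
  --output ./results/

count=$(stella query \
  --input ./results/ \
  --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" \
  --count)

if [ "$count" -gt 0 ]; then
  echo "Found $count reachable critical vulnerabilities" >&2
  exit 1
fi
```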

**[Screen: Artifacts]**

> The proof bundle is saved as an artifact for auditing.

**[Screen: PR comment]**

> You can even post results to PRs:

```yaml
- name: Comment Results
  run: |
    stella report --format markdown > report.md
    gh pr comment --body-file report.md
```

**[Closing]**

> Automated security with proof. Check out our GitLab and Jenkins templates too.

---

## Video 5: Air-Gap Operations (6 min)

### Script

**[Opening]**

> For classified or restricted environments, StellaOps works fully offline. Let's see how.

**[Screen: Diagram - Air-gapped setup]**

> In an air-gapped environment, there's no internet access. We use offline kits to bring in vulnerability data.

**[Screen: Terminal - connected side]**

> On a connected machine, prepare the offline kit:

```bash
stella offline-kit create \
  --include-feeds nvd,github,osv \
  --include-trust-bundle \
  --output ./offline-kit.tar.gz
```

**[Screen: USB/secure transfer]**

> Transfer this securely to the air-gapped environment.

**[Screen: Terminal - air-gapped side]**

> On the air-gapped machine, import:

```bash
stella offline-kit import ./offline-kit.tar.gz
```

**[Screen: Terminal - verify]**

> Verify the kit integrity:

```bash
stella offline-kit verify
```

> Signatures match—we can trust this data.
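
Many air-gap transfer procedures also ship a plain checksum file next to the kit so the receiving side can check the archive before importing it. The `.sha256` file below is part of your own transfer process, not something the `offline-kit` commands are documented to produce.

```bash
# Sketch: checksum handoff around the secure transfer (the .sha256 file is your own artifact).
# On the connected machine, next to the kit:
sha256sum ./offline-kit.tar.gz > offline-kit.tar.gz.sha256

# On the air-gapped machine, before importing:
sha256sum -c ./offline-kit.tar.gz.sha256
```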

**[Screen: Terminal - scan]**

> Now scan as usual:

```bash
stella scan --sbom ./sbom.json --offline
```

> Results use the imported feeds. No network required.

**[Screen: Terminal - generate proof]**

> Proofs work offline too:

```bash
stella scan --sbom ./sbom.json --offline --generate-proof
```

**[Screen: Terminal - verify offline]**

> And verification:

```bash
stella proof verify ./proof.dsse --offline
```

> Everything works without connectivity.

**[Closing]**

> StellaOps is sovereign by design. Your security doesn't depend on cloud availability.

---

## Video 6: Deep Dive - Deterministic Replay (10 min)

### Script

**[Opening]**

> Score replay is a powerful auditing feature. Let's explore how it works under the hood.

**[Screen: Diagram - Replay architecture]**

> When you generate a proof, StellaOps captures:
> - Content-addressed SBOM (sha256 hash)
> - Feed snapshots at exact timestamps
> - Algorithm version
> - Configuration state

**[Screen: JSON manifest]**

> Here's a manifest:

```json
{
  "scanId": "scan-12345",
  "inputs": {
    "sbom": {
      "digest": "sha256:a1b2c3...",
      "format": "cyclonedx-1.6"
    },
    "feeds": {
      "nvd": {
        "digest": "sha256:d4e5f6...",
        "asOf": "2025-01-15T00:00:00Z"
      }
    }
  },
  "environment": {
    "algorithmVersion": "2.5.0"
  }
}
```

**[Screen: Terminal - replay]**

> To replay, StellaOps:
> 1. Retrieves inputs by their digests
> 2. Restores configuration
> 3. Re-runs the exact computation

```bash
stella score replay ./proof-bundle/ --verbose
```

**[Screen: Output comparison]**

> The output shows:
> - ✅ Finding 1 matches
> - ✅ Finding 2 matches
> - ✅ All scores identical
> - ✅ Merkle root matches

**[Screen: Diagram - Merkle tree]**

> Individual findings can be verified using Merkle proofs—without replaying the entire scan.

```bash
stella proof verify-finding --proof ./proof.dsse --finding-id F-001
```
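
For viewers new to Merkle trees, it can help to show the idea behind an inclusion proof by hand: hash the finding, then repeatedly hash it together with the supplied sibling digests until you reach the root. This is purely conceptual; `finding-F-001.json` and `siblings.txt` are hypothetical inputs, and the concatenation order and serialization StellaOps actually uses are assumptions.

```bash
# Sketch of a Merkle inclusion check, for intuition only (hypothetical inputs and encoding).
# siblings.txt: one "<left|right> <hex-digest>" pair per line, leaf-to-root order.
hash=$(sha256sum finding-F-001.json | cut -d' ' -f1)

while read -r side sibling; do
  if [ "$side" = "left" ]; then
    hash=$(printf '%s%s' "$sibling" "$hash" | sha256sum | cut -d' ' -f1)
  else
    hash=$(printf '%s%s' "$hash" "$sibling" | sha256sum | cut -d' ' -f1)
  fi
done < siblings.txt

echo "Computed root: $hash"   # should equal the Merkle root recorded in the proof bundle
```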

**[Screen: Terminal - handling drift]**

> What if the feeds have been updated since the original scan?

```bash
stella score replay ./proof-bundle/
# ⚠️ Feed digest mismatch: NVD
# Expected: sha256:d4e5f6...
# Current: sha256:x7y8z9...
```

> Use feed freezing:

```bash
stella feeds freeze --from-manifest ./manifest.json
stella score replay ./proof-bundle/ --frozen
```

**[Closing]**

> Deterministic replay provides true auditability. Every score can be proven, every time.

---

## Recording Notes

### Equipment
- Screen recording: OBS Studio or Camtasia
- Resolution: 1920x1080
- Terminal font size: 16-18pt
- Use a dark terminal theme

### Preparation
- Clean terminal history
- Pre-stage all files
- Have backup recordings of long-running commands
- Test all commands before recording

### Post-Production
- Add chapter markers
- Include closed captions
- Export to MP4 (H.264) for compatibility
- Upload to the internal video platform

### Duration Targets
- Introduction videos: 5-7 minutes
- Deep dives: 8-12 minutes
- Quick tips: 2-3 minutes

---

## Revision History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2025-12-20 | Agent | Initial scripts created |