audit work, fixed StellaOps.sln warnings/errors, fixed tests, sprint work, new advisories
docs/dev/contributing/api-contracts.md (new file, +58 lines)
# Contributing to API Contracts

Last updated: 2025-11-25

## Scope

Guidelines for editing service OpenAPI specs, lint rules, compatibility checks, and release artefacts across StellaOps services.

## Required tools

- Node.js 20.x + pnpm 9.x
- Spectral CLI (invoked via `pnpm api:lint` in `src/Api/StellaOps.Api.OpenApi`)
- `diff2html` (optional) for human-friendly compat reports

## Workflow (per change)

1) Edit service OAS under `src/Api/StellaOps.Api.OpenApi/services/<service>/*.yaml`.
2) Run lint + compose + compat + changelog from repo root:
```bash
pnpm --filter @stella/api-openapi api:lint       # spectral
pnpm --filter @stella/api-openapi api:compose    # build stella.yaml
pnpm --filter @stella/api-openapi api:compat     # compare against baseline
pnpm --filter @stella/api-openapi api:changelog  # generate digest/signature
```

3) Review outputs:
   - `out/api/stella.yaml` (composed spec)
   - `out/api/compat-report.json` (+ `.html` if generated)
   - `out/api/changelog/` (digest + signature for SDK pipeline)
4) Update examples: ensure request/response examples exist for changed endpoints; add them to the `examples/` directories.
5) Commit changes with notes on breaking/additive results; attach compat report paths.

## Lint rules (Spectral)

- House rules live in `.spectral.yaml` under `src/Api/StellaOps.Api.OpenApi`.
- Enforced: tagged operations, error envelope shape, schema ref reuse, pagination tokens, RBAC scopes, standard headers (`traceparent`, `x-correlation-id`).
## Compatibility checks

- Baseline file: `stella-baseline.yaml` (kept in the repo under `out/api/` and updated per release).
- `api:compat` flags additive/breaking/unchanged deltas. Breaking changes require approval from the API Governance Guild and the affected service guilds.

## Examples & error envelopes

- Every operation must have at least one request + response example.
- Error responses must use the standard envelope (`error.code`, `error.message`, `trace_id`).

## Offline/air-gap

- Keep `pnpm-lock.yaml` pinned; store the `node_modules/.pnpm` cache in the Offline Kit when needed.
- Include the composed spec, compat report, and changelog artefacts in offline bundles under `offline/api/` with checksums.

## Release handoff

- Deliverables for each release:
  - `out/api/stella.yaml`
  - `out/api/compat-report.json` (+ `.html` if produced)
  - `out/api/changelog/*` (digest, signature, manifest)
- Provide SHA256 checksums for artefacts and note `source_commit` + `timestamp` in release notes.
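The checksum step can be scripted; a minimal Python sketch follows (the artefact paths are illustrative, and the output mirrors the common `SHA256SUMS` format):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 and return the hex digest."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Illustrative artefact list; emit a SHA256SUMS-style manifest.
artefacts = [Path("out/api/stella.yaml"), Path("out/api/compat-report.json")]
for artefact in artefacts:
    if artefact.exists():
        print(f"{sha256_of(artefact)}  {artefact}")
```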
## Review checklist

- [ ] Lint clean (`api:lint`)
- [ ] Examples present for changed endpoints
- [ ] Compat report reviewed (additive/breaking noted)
- [ ] Changelog artefacts generated + checksums
- [ ] Offline bundle artefacts staged (if required)
- [ ] Docs/UI/SDK owners notified of breaking changes
docs/dev/contributing/canonicalization-determinism.md (new file, +336 lines)
# Canonicalization & Determinism Patterns

**Version:** 1.0
**Date:** December 2025
**Sprint:** SPRINT_20251226_007_BE_determinism_gaps (DET-GAP-20)

> **Audience:** All StellaOps contributors working on code that produces digests, attestations, or replayable outputs.
> **Goal:** Ensure byte-identical outputs for identical inputs across platforms, time, and Rust/Go/Node re-implementations.

---

## 1. Why Determinism Matters

StellaOps is built on **proof-of-state**: every verdict, attestation, and replay must be reproducible. Non-determinism breaks:

- **Signature verification:** Different serialization → different digest → invalid signature.
- **Replay guarantees:** Feed snapshots that produce different hashes cannot be replayed.
- **Audit trails:** Compliance teams require bit-exact reproduction of historical scans.
- **Cross-platform compatibility:** Windows/Linux/macOS must produce identical outputs.

---
## 2. RFC 8785 JSON Canonicalization Scheme (JCS)

All JSON that participates in digest computation **must** use RFC 8785 JCS. This includes:

- Attestation payloads (DSSE)
- Verdict JSON
- Policy evaluation results
- Feed snapshot manifests
- Proof bundles

### 2.1 The Rfc8785JsonCanonicalizer

Use the `Rfc8785JsonCanonicalizer` class for all canonical JSON operations:

```csharp
using StellaOps.Attestor.ProofChain.Json;

// Create canonicalizer (optionally with NFC normalization)
var canonicalizer = new Rfc8785JsonCanonicalizer(enableNfcNormalization: true);

// Canonicalize JSON from a string
string canonical = canonicalizer.Canonicalize(jsonString);

// Or from a JsonElement
string canonicalFromElement = canonicalizer.Canonicalize(jsonElement);
```

### 2.2 JCS Rules Summary

RFC 8785 requires:

1. **No whitespace** between tokens.
2. **Lexicographic key ordering** within objects.
3. **Number serialization:** No leading zeros, no trailing zeros after the decimal point, integers without a decimal point.
4. **String escaping:** Minimal escaping (only `"`, `\`, and control characters).
5. **UTF-8 encoding** without BOM.
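The key-ordering and whitespace rules can be approximated in a few lines of Python. Note this is an illustrative sketch only: `json.dumps` does not implement RFC 8785 number serialization or its key-ordering rules for non-ASCII keys, so production code must use a real JCS implementation such as the canonicalizer above.

```python
import hashlib
import json

def approximate_jcs(obj) -> str:
    """Rough JCS approximation: sorted keys, no whitespace.

    NOTE: illustrative only -- not a full RFC 8785 implementation.
    """
    return json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False)

payload = {"b": 1, "a": {"z": True, "y": [3, 2, 1]}}
canonical = approximate_jcs(payload)
print(canonical)  # {"a":{"y":[3,2,1],"z":true},"b":1}

# Hash the canonical form, never the ad-hoc serialization.
digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```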
### 2.3 Common Mistakes

❌ **Wrong:** Using `JsonSerializer.Serialize()` directly for digest input.

```csharp
// WRONG - non-deterministic ordering
var json = JsonSerializer.Serialize(obj);
var hash = SHA256.HashData(Encoding.UTF8.GetBytes(json));
```

✅ **Correct:** Canonicalize before hashing.

```csharp
// CORRECT - deterministic
var canonicalizer = new Rfc8785JsonCanonicalizer();
var canonical = canonicalizer.Canonicalize(obj);
var hash = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
```

---
## 3. Unicode NFC Normalization

Different platforms may store the same string in different Unicode normalization forms. Enable NFC normalization when:

- Processing user-supplied strings
- Aggregating data from multiple sources
- Working with file paths or identifiers from different systems

```csharp
// Enable NFC for cross-platform string stability
var canonicalizer = new Rfc8785JsonCanonicalizer(enableNfcNormalization: true);
```

When NFC is enabled, all strings are normalized via `string.Normalize(NormalizationForm.FormC)` before serialization.
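The problem NFC solves is easy to see in a short Python sketch; the C# canonicalizer performs the equivalent normalization:

```python
import unicodedata

# "é" can be one composed code point (U+00E9) or "e" + combining acute (U+0301).
composed = "\u00e9"
decomposed = "e\u0301"
assert composed != decomposed  # different byte sequences, same rendered text

# Normalizing to NFC makes the representations (and therefore digests) identical.
assert unicodedata.normalize("NFC", decomposed) == composed
```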
---
## 4. Resolver Boundary Pattern

**Key principle:** All data entering or leaving a "resolver" (a service that produces verdicts, attestations, or replayable state) must be canonicalized.

### 4.1 What Is a Resolver Boundary?

A resolver boundary is any point where:

- Data is **serialized** for storage, transmission, or signing
- Data is **hashed** to produce a digest
- Data is **compared** for equality in replay validation

### 4.2 Boundary Enforcement

At resolver boundaries:

1. **Canonicalize** all JSON payloads using `Rfc8785JsonCanonicalizer`.
2. **Sort** collections deterministically (alphabetically by key or ID).
3. **Normalize** timestamps to ISO 8601 UTC with `Z` suffix.
4. **Freeze** dictionaries using `FrozenDictionary` for stable iteration order.
### 4.3 Example: Feed Snapshot Coordinator

```csharp
public sealed class FeedSnapshotCoordinatorService : IFeedSnapshotCoordinator
{
    private readonly FrozenDictionary<string, IFeedSourceProvider> _providers;

    public FeedSnapshotCoordinatorService(IEnumerable<IFeedSourceProvider> providers, ...)
    {
        // Sort providers alphabetically for deterministic digest computation
        _providers = providers
            .OrderBy(p => p.SourceId, StringComparer.Ordinal)
            .ToFrozenDictionary(p => p.SourceId, p => p, StringComparer.OrdinalIgnoreCase);
    }

    private string ComputeCompositeDigest(IReadOnlyList<SourceSnapshot> sources)
    {
        // Defensive re-sort by SourceId: never rely on caller-supplied ordering
        using var sha256 = SHA256.Create();
        foreach (var source in sources.OrderBy(s => s.SourceId, StringComparer.Ordinal))
        {
            // Append each source digest to the hash computation
            var digestBytes = Encoding.UTF8.GetBytes(source.Digest);
            sha256.TransformBlock(digestBytes, 0, digestBytes.Length, null, 0);
        }

        sha256.TransformFinalBlock([], 0, 0);
        return $"sha256:{Convert.ToHexString(sha256.Hash!).ToLowerInvariant()}";
    }
}
```

---
## 5. Timestamp Handling

### 5.1 Rules

1. **Always use UTC** - never local time.
2. **ISO 8601 format** with `Z` suffix: `2025-12-27T14:30:00Z`
3. **Consistent precision** - truncate to seconds unless milliseconds are required.
4. **Use `TimeProvider`** for testability.

### 5.2 Example

```csharp
// CORRECT - UTC with Z suffix
var timestamp = timeProvider.GetUtcNow().ToString("yyyy-MM-ddTHH:mm:ssZ");

// WRONG - local time
var wrong = DateTime.Now.ToString("o");

// WRONG - inconsistent format
var wrong2 = DateTimeOffset.UtcNow.ToString();
```

---
## 6. Numeric Stability

### 6.1 Avoid Floating Point for Determinism

Floating-point arithmetic can produce different results on different platforms. For deterministic values:

- Use `decimal` for scores, percentages, and monetary values.
- Use `int` or `long` for counts and identifiers.
- If floating-point is unavoidable, document the acceptable epsilon and rounding rules.

### 6.2 Number Serialization

RFC 8785 requires specific number formatting:

- Integers: no decimal point (`42`, not `42.0`)
- Decimals: no trailing zeros (`3.14`, not `3.140`)
- No leading zeros (`0.5`, not `00.5`)

The `Rfc8785JsonCanonicalizer` handles this automatically.
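A two-line sketch (here in Python, but the same holds for C# `double` vs `decimal`) shows why the guidance prefers decimal types for values that feed digests:

```python
from decimal import Decimal

# Binary floating point cannot represent 0.1 exactly, so accumulated sums
# drift and equality checks become order- and platform-sensitive.
assert 0.1 + 0.2 != 0.3

# Decimal arithmetic is exact for decimal literals, so digests over
# score values stay stable.
assert Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```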
---
## 7. Collection Ordering

### 7.1 Rule

All collections that participate in digest computation must have **deterministic order**.

### 7.2 Implementation

```csharp
// CORRECT - use FrozenDictionary for stable iteration
var orderedDict = items
    .OrderBy(x => x.Key, StringComparer.Ordinal)
    .ToFrozenDictionary(x => x.Key, x => x.Value);

// CORRECT - sort before iteration
foreach (var item in items.OrderBy(x => x.Id, StringComparer.Ordinal))
{
    // ...
}

// WRONG - iteration order is undefined
foreach (var item in dictionary)
{
    // Order may vary between runs
}
```

---
## 8. Audit Hash Logging

For debugging determinism issues, use the `AuditHashLogger`:

```csharp
using StellaOps.Attestor.ProofChain.Audit;

var auditLogger = new AuditHashLogger(logger);

// Log both raw and canonical hashes
auditLogger.LogHashAudit(
    rawContent,
    canonicalContent,
    "sha256:abc...",
    "verdict",
    "scan-123",
    metadata);
```

This enables post-mortem analysis of canonicalization issues.

---
## 9. Testing Determinism

### 9.1 Required Tests

Every component that produces digests must have tests verifying:

1. **Idempotency:** Same input → same digest (multiple calls).
2. **Permutation invariance:** Reordering input collections → same digest.
3. **Cross-platform:** Windows/Linux/macOS produce identical outputs.

### 9.2 Example Test

```csharp
[Fact]
public async Task CreateSnapshot_ProducesDeterministicDigest()
{
    // Arrange
    var sources = CreateTestSources();

    // Act - create multiple snapshots with the same data
    var bundle1 = await coordinator.CreateSnapshotAsync();
    var bundle2 = await coordinator.CreateSnapshotAsync();

    // Assert - digests must be identical
    Assert.Equal(bundle1.CompositeDigest, bundle2.CompositeDigest);
}

[Fact]
public async Task CreateSnapshot_OrderIndependent()
{
    // Arrange - same sources in different orders
    var sourcesAscending = sources.OrderBy(s => s.Id);
    var sourcesDescending = sources.OrderByDescending(s => s.Id);

    // Act
    var bundle1 = await CreateWithSources(sourcesAscending);
    var bundle2 = await CreateWithSources(sourcesDescending);

    // Assert - digest must be identical regardless of input order
    Assert.Equal(bundle1.CompositeDigest, bundle2.CompositeDigest);
}
```

---
## 10. Determinism Manifest Schema

All replayable artifacts must include a determinism manifest conforming to the JSON Schema at:

`docs/technical/testing/schemas/determinism-manifest.schema.json`

Key fields:

- `schemaVersion`: Must be `"1.0"`.
- `artifactType`: One of `verdict`, `attestation`, `snapshot`, `proof`, `sbom`, `vex`.
- `hashAlgorithm`: One of `sha256`, `sha384`, `sha512`.
- `ordering`: One of `alphabetical`, `timestamp`, `insertion`, `canonical`.
- `determinismGuarantee`: One of `strict`, `relaxed`, `best_effort`.
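A minimal manifest using the fields above (the values are illustrative picks from the listed enums; consult the schema for required fields and exact shapes):

```json
{
  "schemaVersion": "1.0",
  "artifactType": "snapshot",
  "hashAlgorithm": "sha256",
  "ordering": "alphabetical",
  "determinismGuarantee": "strict"
}
```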
---
## 11. Checklist for Contributors

Before submitting a PR that involves digests or attestations:

- [ ] JSON is canonicalized via `Rfc8785JsonCanonicalizer` before hashing.
- [ ] NFC normalization is enabled if user-supplied strings are involved.
- [ ] Collections are sorted deterministically before iteration.
- [ ] Timestamps are UTC with ISO 8601 format and `Z` suffix.
- [ ] Numeric values avoid floating-point where possible.
- [ ] Unit tests verify digest idempotency and permutation invariance.
- [ ] Determinism manifest schema is validated for new artifact types.

---

## 12. Related Documents

- [docs/technical/testing/schemas/determinism-manifest.schema.json](../technical/testing/schemas/determinism-manifest.schema.json) - JSON Schema for manifests
- [docs/modules/policy/design/policy-determinism-tests.md](../modules/policy/design/policy-determinism-tests.md) - Policy engine determinism
- [docs/technical/testing/TEST_SUITE_OVERVIEW.md](../technical/testing/TEST_SUITE_OVERVIEW.md) - Testing strategy

---

## 13. Change Log

| Version | Date       | Notes                           |
|---------|------------|---------------------------------|
| 1.0     | 2025-12-27 | Initial version per DET-GAP-20. |
docs/dev/contributing/corpus-contribution-guide.md (new file, +301 lines)
# Corpus Contribution Guide

**Sprint:** SPRINT_3500_0003_0001
**Task:** CORPUS-014 - Document corpus contribution guide

## Overview

The Ground-Truth Corpus is a collection of validated test samples used to measure scanner accuracy. Each sample has a known reachability status and expected findings, enabling deterministic quality metrics.

## Corpus Structure

```
datasets/reachability/
├── corpus.json                  # Index of all samples
├── schemas/
│   └── corpus-sample.v1.json    # JSON schema for samples
├── samples/
│   ├── gt-0001/                 # Sample directory
│   │   ├── sample.json          # Sample metadata
│   │   ├── expected.json        # Expected findings
│   │   ├── sbom.json            # Input SBOM
│   │   └── source/              # Optional source files
│   └── ...
└── baselines/
    └── v1.0.0.json              # Baseline metrics
```
## Sample Format

### sample.json

```json
{
  "id": "gt-0001",
  "name": "Python SQL Injection - Reachable",
  "description": "Flask app with reachable SQL injection via user input",
  "language": "python",
  "ecosystem": "pypi",
  "scenario": "webapi",
  "entrypoints": ["app.py:main"],
  "reachability_tier": "tainted_sink",
  "created_at": "2025-01-15T00:00:00Z",
  "author": "security-team",
  "tags": ["sql-injection", "flask", "reachable"]
}
```

### expected.json

```json
{
  "findings": [
    {
      "vuln_key": "CVE-2024-1234:pkg:pypi/sqlalchemy@1.4.0",
      "tier": "tainted_sink",
      "rule_key": "py.sql.injection.param_concat",
      "sink_class": "sql",
      "location_hint": "app.py:42"
    }
  ]
}
```
## Contributing a Sample

### Step 1: Choose a Scenario

Select a scenario that is not well-covered in the corpus:

| Scenario | Description | Example |
|----------|-------------|---------|
| `webapi` | Web application endpoint | Flask, FastAPI, Express |
| `cli` | Command-line tool | argparse, click, commander |
| `job` | Background/scheduled job | Celery, cron script |
| `lib` | Library code | Reusable package |

### Step 2: Create Sample Directory

```bash
cd datasets/reachability/samples
mkdir gt-NNNN
cd gt-NNNN
```

Use the next available sample ID (check `corpus.json` for the highest).
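Finding the next free ID can be scripted; this sketch assumes `corpus.json` is a top-level JSON array of entries with an `"id"` field shaped like the index snippet later in this guide (adjust if the index wraps entries in an object):

```python
import json
from pathlib import Path

def next_sample_id(corpus_path: Path) -> str:
    """Return the next free gt-NNNN id based on the corpus index.

    ASSUMPTION: corpus.json is a JSON array of entries with an "id"
    field like "gt-0042".
    """
    entries = json.loads(corpus_path.read_text())
    highest = max((int(e["id"].split("-")[1]) for e in entries), default=0)
    return f"gt-{highest + 1:04d}"
```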
### Step 3: Create Minimal Reproducible Case

**Requirements:**

- Smallest possible code that demonstrates the vulnerability
- Real or realistic vulnerability (use a CVE when possible)
- Clear entrypoint definition
- Deterministic behavior (no network, no randomness)

**Example Python Sample:**

```python
# app.py - gt-0001
from flask import Flask, request
import sqlite3

app = Flask(__name__)

@app.route("/user")
def get_user():
    user_id = request.args.get("id")  # Taint source
    conn = sqlite3.connect(":memory:")
    # SQL injection: user_id flows to the query without sanitization
    result = conn.execute(f"SELECT * FROM users WHERE id = {user_id}")  # Taint sink
    return str(result.fetchall())

if __name__ == "__main__":
    app.run()
```
### Step 4: Define Expected Findings

Create `expected.json` with all expected findings:

```json
{
  "findings": [
    {
      "vuln_key": "CWE-89:pkg:pypi/flask@2.0.0",
      "tier": "tainted_sink",
      "rule_key": "py.sql.injection",
      "sink_class": "sql",
      "location_hint": "app.py:13",
      "notes": "User input from request.args flows to sqlite3.execute"
    }
  ]
}
```

### Step 5: Create SBOM

Generate or create an SBOM for the sample:

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.6",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "flask",
      "version": "2.0.0",
      "purl": "pkg:pypi/flask@2.0.0"
    },
    {
      "type": "library",
      "name": "sqlite3",
      "version": "3.39.0",
      "purl": "pkg:pypi/sqlite3@3.39.0"
    }
  ]
}
```
### Step 6: Update Corpus Index

Add an entry to `corpus.json`:

```json
{
  "id": "gt-0001",
  "path": "samples/gt-0001",
  "language": "python",
  "tier": "tainted_sink",
  "scenario": "webapi",
  "expected_count": 1
}
```

### Step 7: Validate Locally

```bash
# Run corpus validation
dotnet test tests/reachability/StellaOps.Reachability.FixtureTests \
  --filter "FullyQualifiedName~CorpusFixtureTests"

# Run benchmark
stellaops bench corpus run --sample gt-0001 --verbose
```
## Tier Guidelines

### Imported Tier Samples

For `imported` tier samples:

- Vulnerability is in a dependency
- No execution path reaches the vulnerable code
- Package is in the lockfile but never called

**Example:** Unused dependency with a known CVE.

### Executed Tier Samples

For `executed` tier samples:

- Vulnerable code is called from an entrypoint
- No user-controlled data reaches the vulnerability
- Static or coverage analysis proves execution

**Example:** Hardcoded SQL query (no injection).

### Tainted→Sink Tier Samples

For `tainted_sink` tier samples:

- User-controlled input reaches the vulnerable code
- Clear source → sink data flow
- Include the sink class taxonomy

**Example:** User input flowing to a SQL query, command execution, etc.

## Sink Classes

When contributing `tainted_sink` samples, specify the sink class:

| Sink Class | Description | Examples |
|------------|-------------|----------|
| `sql` | SQL injection | sqlite3.execute, cursor.execute |
| `command` | Command injection | os.system, subprocess.run |
| `ssrf` | Server-side request forgery | requests.get, urllib.urlopen |
| `path` | Path traversal | open(), os.path.join |
| `deser` | Deserialization | pickle.loads, yaml.load |
| `eval` | Code evaluation | eval(), exec() |
| `xxe` | XML external entity | lxml.parse, ET.parse |
| `xss` | Cross-site scripting | innerHTML, document.write |
## Quality Criteria

Samples must meet these criteria:

- [ ] **Deterministic**: Same input → same output
- [ ] **Minimal**: Smallest code that demonstrates the finding
- [ ] **Documented**: Clear description and notes
- [ ] **Validated**: Passes local tests
- [ ] **Realistic**: Based on real vulnerability patterns
- [ ] **Self-contained**: No external network calls

## Negative Samples

Include "negative" samples where the scanner should NOT find vulnerabilities:

```json
{
  "id": "gt-0050",
  "name": "Python SQL - Properly Sanitized",
  "tier": "imported",
  "expected_count": 0,
  "notes": "Uses parameterized queries, no injection possible"
}
```
## Review Process

1. Create a PR with the new sample(s)
2. CI runs validation tests
3. Security team reviews expected findings
4. QA team verifies determinism
5. Merge and update the baseline

## Updating Baselines

After adding samples, update the baseline metrics:

```bash
# Generate new baseline
stellaops bench corpus run --all --output baselines/v1.1.0.json

# Compare to previous
stellaops bench corpus compare baselines/v1.0.0.json baselines/v1.1.0.json
```

## FAQ

### How many samples should I contribute?

Start with 2-3 high-quality samples covering different aspects of the same vulnerability class.

### Can I use synthetic vulnerabilities?

Yes, but prefer real CVE patterns when possible. Synthetic samples should document the vulnerability pattern clearly.

### What if my sample has multiple findings?

Include all expected findings in `expected.json`. Multi-finding samples are valuable for testing.

### How do I test tier classification?

Run with verbose output:

```bash
stellaops bench corpus run --sample gt-NNNN --verbose --show-evidence
```

## Related Documentation

- [Tiered Precision Curves](../benchmarks/tiered-precision-curves.md)
- [Reachability Analysis](../product-advisories/14-Dec-2025%20-%20Reachability%20Analysis%20Technical%20Reference.md)
- [Corpus Index Schema](../../datasets/reachability/schemas/corpus-sample.v1.json)
docs/dev/onboarding/FAQ_MATRIX.md (new executable file, +25 lines)
# FAQ (stakeholder matrix)

## Quick answers

| Question | Short answer |
| --- | --- |
| What is StellaOps? | A sovereign, offline-first container-security platform focused on deterministic, replayable evidence: SBOMs, advisories, VEX, policy decisions, and attestations bound to image digests. |
| What makes it "deterministic"? | The same inputs produce the same outputs (stable ordering, stable IDs, replayable artifacts). Determinism is treated as a product feature and enforced by tests and fixtures. |
| Does it run fully offline? | Yes. Offline operation is a first-class workflow (bundles, mirrors, importer/controller). See `docs/OFFLINE_KIT.md` and `docs/modules/airgap/guides/overview.md`. |
| Which formats are supported? | SBOMs: SPDX 3.0.1 and CycloneDX 1.7 (1.6 backward compatible). VEX: OpenVEX-first decisioning with issuer trust and consensus. Attestations: in-toto/DSSE where enabled. |
| How do I deploy it? | Use deterministic bundles under `deploy/` (Compose/Helm) with digests sourced from `deploy/releases/`. Start with `docs/INSTALL_GUIDE.md`. |
| How do policy gates work? | Policy combines VEX-first inputs with lattice/precedence rules so outcomes are stable and explainable. See `docs/modules/policy/guides/vex-trust-model.md`. |
| Is multi-tenancy supported? | Yes; tenancy boundaries and roles/scopes are documented and designed to support regulated environments. See `docs/security/tenancy-overview.md` and `docs/security/scopes-and-roles.md`. |
| Can I extend it? | Yes: connectors, plugins, and policy packs are designed to be composable without losing determinism. Start with the module dossiers under `docs/modules/`. |
| Where is the roadmap? | `docs/ROADMAP.md` (priority bands + definition of "done"). |
| Where do I find deeper docs? | `docs/technical/README.md` is the detailed index; `docs/modules/` contains per-module dossiers. |

## Further reading

- Vision: `docs/VISION.md`
- Feature matrix: `docs/FEATURE_MATRIX.md`
- Architecture overview: `docs/ARCHITECTURE_OVERVIEW.md`
- High-level architecture: `docs/ARCHITECTURE_REFERENCE.md`
- Offline kit: `docs/OFFLINE_KIT.md`
- Install guide: `docs/INSTALL_GUIDE.md`
- Quickstart: `docs/quickstart.md`
docs/dev/onboarding/concepts/reachability-concept-guide.md (new file, +503 lines)
# Reachability Analysis Concept Guide

**Sprint:** SPRINT_3500_0004_0004
**Audience:** Developers, Security Engineers, DevOps

## Introduction

Reachability Analysis determines whether vulnerable code can actually be reached during program execution. This guide explains how StellaOps uses call graphs, BFS traversal, and confidence scoring to separate actionable vulnerabilities from noise.

---

## The Problem: Alert Fatigue

Traditional vulnerability scanners report every known CVE in your dependencies:

```
❌ CVE-2024-1234 in lodash@4.17.20 (CRITICAL)
❌ CVE-2024-5678 in express@4.18.0 (HIGH)
❌ CVE-2024-9012 in moment@2.29.0 (MEDIUM)
... 247 more findings
```

**The reality:**

- 80-90% of reported vulnerabilities are **unreachable**
- Teams waste time investigating false positives
- Real risks get lost in the noise
- Security fatigue leads to ignored alerts

---

## The Solution: Reachability Analysis

StellaOps analyzes your application's **call graph** to determine if vulnerable functions are actually invoked:

```
✅ CVE-2024-1234 in lodash@4.17.20 - UNREACHABLE (safe to ignore)
⚠️ CVE-2024-5678 in express@4.18.0 - POSSIBLY_REACHABLE (review)
🔴 CVE-2024-9012 in moment@2.29.0 - REACHABLE_STATIC (fix required)
```

Result: Focus on the 10-20% that actually matter.

---
## Core Concepts

### 1. Call Graph

A **call graph** represents function calls in your application:

```
┌───────────────────────────────────────────────┐
│ Your Application                              │
├───────────────────────────────────────────────┤
│                                               │
│   ┌────────────────┐                          │
│   │ HTTP Endpoint  │ ← Entrypoint             │
│   │ /api/orders    │                          │
│   └───────┬────────┘                          │
│           │ calls                             │
│           ▼                                   │
│   ┌────────────────┐     ┌────────────────┐   │
│   │ OrderService   │────▶│ PaymentService │   │
│   │ .processOrder()│     │ .charge()      │   │
│   └───────┬────────┘     └────────────────┘   │
│           │ calls                             │
│           ▼                                   │
│   ┌────────────────┐                          │
│   │ lodash.merge() │ ← Vulnerable function    │
│   │ (CVE-2024-1234)│                          │
│   └────────────────┘                          │
│                                               │
└───────────────────────────────────────────────┘
```

**Components:**

- **Nodes**: Functions, methods, classes
- **Edges**: Call relationships
- **Entrypoints**: Where execution begins (HTTP routes, CLI commands, etc.)
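The traversal idea behind reachability can be sketched with a toy graph mirroring the diagram above (illustrative names and a plain adjacency list, not the production engine's data model):

```python
from collections import deque

# Toy call graph: adjacency list of caller -> callees.
CALL_GRAPH = {
    "GET /api/orders": ["OrderService.processOrder"],
    "OrderService.processOrder": ["PaymentService.charge", "lodash.merge"],
    "PaymentService.charge": [],
    "lodash.merge": [],
}
ENTRYPOINTS = ["GET /api/orders"]

def reachable(graph, entrypoints):
    """BFS from every entrypoint; returns the set of reachable nodes."""
    seen = set(entrypoints)
    queue = deque(entrypoints)
    while queue:
        node = queue.popleft()
        for callee in graph.get(node, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

# The vulnerable function is reachable from the HTTP entrypoint.
assert "lodash.merge" in reachable(CALL_GRAPH, ENTRYPOINTS)
```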
### 2. Entrypoints

**Entrypoints** are where external input enters your application:

| Kind | Examples |
|------|----------|
| HTTP | `GET /api/orders`, `POST /users` |
| gRPC | `OrderService.GetOrder` |
| Message Queue | `orders.created` consumer |
| CLI | `./app --process-file` |
| Scheduled | Cron job, background worker |

### 3. Reachability Status

Each vulnerability gets one of these statuses:

| Status | Meaning | Action |
|--------|---------|--------|
| `UNREACHABLE` | No path from any entrypoint | Safe to ignore |
| `POSSIBLY_REACHABLE` | Path exists via indirect/heuristic edges | Review |
| `REACHABLE_STATIC` | Direct static path exists | Prioritize fix |
| `REACHABLE_PROVEN` | Runtime trace confirms execution | Fix immediately |
| `UNKNOWN` | Insufficient call graph data | Investigate |
### 4. Edge Types

Call graph edges have different confidence levels:

| Edge Type | Confidence | Description |
|-----------|------------|-------------|
| `direct_call` | High | Static function call |
| `virtual_dispatch` | Medium | Interface/virtual method |
| `reflection` | Low | Reflection-based call |
| `dynamic` | Low | Dynamic dispatch |
| `heuristic` | Very Low | Inferred relationship |

### 5. Confidence Score

**Confidence** quantifies how certain we are about reachability (0.0 to 1.0):

```
Confidence = weighted_sum([
  staticPathExists  × 0.50,
  allEdgesStatic    × 0.20,
  noReflection      × 0.10,
  runtimeConfirmed  × 0.15,
  symbolResolved    × 0.05
])
```

Example:
- Static path exists: +0.50
- All edges are direct calls: +0.20
- No reflection: +0.10
- Not runtime confirmed: +0.00
- All symbols resolved: +0.05
- **Total: 0.85**

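The worked example above can be sketched as a plain weighted sum. This is an illustrative Python helper, not the production scoring code; the factor names and weights are the ones from the table:

```python
# Factor weights mirror the confidence table above.
WEIGHTS = {
    "staticPathExists": 0.50,
    "allEdgesStatic": 0.20,
    "noReflection": 0.10,
    "runtimeConfirmed": 0.15,
    "symbolResolved": 0.05,
}

def confidence(factors):
    """Sum the weight of every factor that holds for this finding."""
    return round(sum(w for name, w in WEIGHTS.items() if factors.get(name)), 2)

# The worked example: static path, all-static edges, no reflection,
# no runtime confirmation, symbols resolved.
score = confidence({
    "staticPathExists": True,
    "allEdgesStatic": True,
    "noReflection": True,
    "runtimeConfirmed": False,
    "symbolResolved": True,
})
print(score)  # 0.85
```

When every factor holds (including runtime confirmation), the weights sum to 1.0.
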
---

## How It Works

### Step 1: Call Graph Generation

Your build system generates a call graph using one of these approaches:

**Build-time extraction** (most accurate):
```bash
# .NET (roslyn)
dotnet build --generate-call-graph

# Java (gradle plugin)
./gradlew generateCallGraph

# Node.js (static analysis)
npx @stellaops/callgraph-generator .
```

**Upload to StellaOps**:
```bash
stella scan graph upload --scan-id $SCAN_ID --file callgraph.json
```

### Step 2: Entrypoint Detection

StellaOps identifies entrypoints automatically:

```json
{
  "entrypoints": [
    {
      "kind": "http",
      "route": "GET /api/orders/{id}",
      "method": "MyApp.Controllers.OrdersController::Get",
      "framework": "aspnetcore"
    },
    {
      "kind": "grpc",
      "service": "OrderService",
      "method": "MyApp.Services.OrderGrpcService::GetOrder",
      "framework": "grpc-dotnet"
    }
  ]
}
```

### Step 3: BFS Traversal

For each vulnerability, BFS finds paths from entrypoints:

```
Queue: [HTTP /api/orders → OrdersController::Get]

Step 1: Visit OrdersController::Get
  → Neighbors: [OrderService::Process, Logger::Log]
  → Add to queue: OrderService::Process, Logger::Log

Step 2: Visit OrderService::Process
  → Neighbors: [Lodash::merge (VULNERABLE!)]
  → PATH FOUND! Depth = 2

Result: REACHABLE_STATIC
Path: /api/orders → OrdersController::Get → OrderService::Process → Lodash::merge
```

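The traversal above is a standard breadth-first search over the edge list. A minimal sketch, using the node names from the example (the adjacency list and helper are illustrative, not the engine's internals):

```python
from collections import deque

# Adjacency list for the example graph (illustrative).
EDGES = {
    "OrdersController::Get": ["OrderService::Process", "Logger::Log"],
    "OrderService::Process": ["Lodash::merge"],
    "Logger::Log": [],
    "Lodash::merge": [],
}

def shortest_path(entrypoint, target):
    """BFS from the entrypoint; returns the first (shortest) path to target, or None."""
    queue = deque([[entrypoint]])
    visited = {entrypoint}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == target:
            return path
        for neighbor in EDGES.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(path + [neighbor])
    return None  # UNREACHABLE from this entrypoint

path = shortest_path("OrdersController::Get", "Lodash::merge")
print(" → ".join(path))  # OrdersController::Get → OrderService::Process → Lodash::merge
print(len(path) - 1)     # depth = 2
```

Because BFS visits nodes level by level, the first path found is guaranteed to be the shortest, which is what the explain output reports.
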
### Step 4: Confidence Calculation

Based on the path quality:

```yaml
path:
  - node: OrdersController::Get
    edge_type: entrypoint
  - node: OrderService::Process
    edge_type: direct_call   # +0.50 static
  - node: Lodash::merge
    edge_type: direct_call   # +0.20 all static

factors:
  staticPathExists: 0.50
  allEdgesStatic: 0.20
  noReflection: 0.10
  runtimeConfirmed: 0.00
  symbolResolved: 0.05

confidence: 0.85
```

---

## Understanding Results

### Explain Query

Get a detailed explanation for any finding:

```bash
stella reachability explain \
  --scan-id $SCAN_ID \
  --cve CVE-2024-1234 \
  --purl "pkg:npm/lodash@4.17.20"
```

**Output:**
```
Status: REACHABLE_STATIC
Confidence: 0.85

Shortest Path (depth=2):
  [0] MyApp.Controllers.OrdersController::Get(Guid)
      Entrypoint: HTTP GET /api/orders/{id}
  [1] MyApp.Services.OrderService::Process(Order)
      Edge: static (direct_call)
  [2] Lodash.merge(Object, Object) [VULNERABLE]
      Edge: static (direct_call)

Why Reachable:
  - Static call path exists from HTTP entrypoint
  - All edges are statically proven (no heuristics)
  - Vulnerable function is directly invoked

Confidence Factors:
  staticPathExists:  +0.50
  allEdgesStatic:    +0.20
  noReflection:      +0.10
  runtimeConfirmed:  +0.00
  symbolResolved:    +0.05
```

### Interpreting Status

| Status | What it means | What to do |
|--------|---------------|------------|
| `UNREACHABLE` | No code path calls the vulnerable function | Safe to deprioritize; track for visibility |
| `POSSIBLY_REACHABLE` | Path exists but involves heuristics | Review the path; add call graph data if missing |
| `REACHABLE_STATIC` | Static analysis proves reachability | Prioritize remediation |
| `REACHABLE_PROVEN` | Runtime data confirms execution | Fix immediately; exploitability confirmed |
| `UNKNOWN` | Call graph incomplete | Improve call graph coverage |

---

## Best Practices

### 1. Generate Complete Call Graphs

Incomplete call graphs lead to `UNKNOWN` status:

```bash
# Check call graph completeness
stella scan graph summary --scan-id $SCAN_ID

# Output:
#   Nodes: 12,345 (expected: ~15,000 for project size)
#   Coverage: 82%
#   Orphan nodes: 234
```

**Tips for better coverage:**
- Include all modules in build
- Enable whole-program analysis
- Include test code (may reveal paths)

### 2. Review `POSSIBLY_REACHABLE` Findings

These often indicate:
- Reflection use
- Dynamic dispatch
- Framework magic (DI, AOP)

```bash
# Get details
stella reachability explain \
  --scan-id $SCAN_ID \
  --cve CVE-2024-5678 \
  --all-paths
```

### 3. Add Runtime Evidence

Runtime traces increase confidence:

```bash
# Enable runtime instrumentation
stella scan run \
  --image $IMAGE \
  --include-runtime \
  --runtime-profile production-traces.json
```

### 4. Handle `UNKNOWN` Appropriately

Don't ignore unknowns—they represent gaps:

```bash
# List unknowns
stella reachability findings --scan-id $SCAN_ID --status UNKNOWN

# Common causes:
# - External library without call graph
# - Native code (FFI)
# - Dynamic languages without type info
```

### 5. Integrate with CI/CD

```yaml
# Example GitHub Actions
- name: Run reachability scan
  run: |
    stella scan run --image $IMAGE --reachability enabled

- name: Check for reachable vulnerabilities
  run: |
    # Fail if any HIGH+ CVE is reachable
    REACHABLE=$(stella reachability findings \
      --scan-id $SCAN_ID \
      --status REACHABLE_STATIC,REACHABLE_PROVEN \
      --output-format json | jq 'length')

    if [ "$REACHABLE" -gt 0 ]; then
      echo "Found $REACHABLE reachable vulnerabilities!"
      exit 1
    fi
```

---

## Call Graph Formats

### Supported Formats

| Format | Extension | Use Case |
|--------|-----------|----------|
| JSON | `.json` | Standard interchange |
| NDJSON | `.ndjson` | Large graphs (streaming) |
| DOT | `.dot` | Visualization |
| Custom | `.cg` | StellaOps native |

### JSON Schema

```json
{
  "version": "1.0",
  "language": "dotnet",
  "nodes": [
    {
      "id": "sha256:abc123...",
      "symbol": "MyApp.Services.OrderService::Process",
      "kind": "method",
      "location": {
        "file": "Services/OrderService.cs",
        "line": 42
      }
    }
  ],
  "edges": [
    {
      "source": "sha256:abc123...",
      "target": "sha256:def456...",
      "type": "direct_call",
      "location": {
        "file": "Services/OrderService.cs",
        "line": 55
      }
    }
  ],
  "entrypoints": [
    {
      "nodeId": "sha256:ghi789...",
      "kind": "http",
      "route": "GET /api/orders/{id}"
    }
  ]
}
```

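Before uploading a graph in this shape, it is cheap to sanity-check it yourself. The sketch below is illustrative (the CLI performs its own validation): it verifies that every edge endpoint resolves to a known node and lists orphan nodes, which drive the "Orphan nodes" count in the graph summary.

```python
import json

# A tiny graph in the schema above (shortened digests, illustrative data).
graph = json.loads("""{
  "version": "1.0",
  "language": "dotnet",
  "nodes": [
    {"id": "sha256:abc", "symbol": "A::Main", "kind": "method"},
    {"id": "sha256:def", "symbol": "B::Helper", "kind": "method"},
    {"id": "sha256:ghi", "symbol": "C::Unused", "kind": "method"}
  ],
  "edges": [{"source": "sha256:abc", "target": "sha256:def", "type": "direct_call"}],
  "entrypoints": [{"nodeId": "sha256:abc", "kind": "http", "route": "GET /"}]
}""")

node_ids = {n["id"] for n in graph["nodes"]}

# Every edge must reference known nodes, or reachability results are unreliable.
dangling = [e for e in graph["edges"]
            if e["source"] not in node_ids or e["target"] not in node_ids]

# Orphans: nodes that are neither entrypoints nor touched by any edge.
connected = ({e["source"] for e in graph["edges"]}
             | {e["target"] for e in graph["edges"]}
             | {ep["nodeId"] for ep in graph["entrypoints"]})
orphans = sorted(node_ids - connected)

print(len(dangling), orphans)  # 0 ['sha256:ghi']
```

Dangling edges usually point to a truncated export; orphans usually mean modules were left out of the build.
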
---

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                  Reachability Analysis System                   │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ Call Graph  │────▶│ Entrypoint  │────▶│ Reachability│       │
│   │ Parser      │     │ Detector    │     │ Engine      │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│          │                   │                   │              │
│          ▼                   ▼                   ▼              │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ Graph       │     │ Framework   │     │ Path        │       │
│   │ Store       │     │ Adapters    │     │ Cache       │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│                                                                 │
│   Symbol Resolution                                             │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ CVE →       │────▶│ Symbol      │────▶│ Node        │       │
│   │ Function    │     │ Matcher     │     │ Lookup      │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

---

## Troubleshooting

### "Too many UNKNOWN findings"

**Cause**: Incomplete call graph

**Solution**:
```bash
# Check coverage
stella scan graph summary --scan-id $SCAN_ID

# Regenerate with more options
# For .NET:
dotnet build --generate-call-graph --whole-program
```

### "False UNREACHABLE"

**Cause**: Missing edge (reflection, dynamic dispatch)

**Solution**:
```bash
# Check for known patterns
stella scan graph validate --scan-id $SCAN_ID

# Add hints for reflection patterns
stella scan run --reflection-hints reflection-config.json
```

### "Computation timeout"

**Cause**: Large graph, deep paths

**Solution**:
```bash
# Increase timeout
stella reachability compute --scan-id $SCAN_ID --timeout 600s

# Or limit depth
stella reachability compute --scan-id $SCAN_ID --max-depth 15
```

---

## Related Documentation

- [Reachability CLI Reference](../cli/reachability-cli-reference.md)
- [Reachability API Reference](../api/score-proofs-reachability-api-reference.md)
- [Reachability Runbook](../operations/reachability-runbook.md)
- [Score Proofs Concept Guide](./score-proofs-concept-guide.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004
378
docs/dev/onboarding/concepts/score-proofs-concept-guide.md
Normal file
@@ -0,0 +1,378 @@
# Score Proofs Concept Guide

**Sprint:** SPRINT_3500_0004_0004
**Audience:** Developers, Security Engineers, DevOps

## Introduction

Score Proofs provide cryptographic evidence that vulnerability scores can be independently verified and reproduced. This guide explains the concepts, architecture, and use cases for Score Proofs in StellaOps.

---

## What are Score Proofs?

### The Problem

Traditional vulnerability scanners produce scores, but:
- **Non-reproducible**: Re-running a scan may yield different results
- **Opaque**: No visibility into how scores were computed
- **Untraceable**: No audit trail linking scores to inputs
- **Time-sensitive**: Advisory data changes constantly

### The Solution

Score Proofs address these issues by:

1. **Content-addressing all inputs**: Every piece of data is identified by its cryptographic hash
2. **Recording computation parameters**: Algorithm versions, timestamps, configuration
3. **Creating verifiable attestations**: DSSE-signed bundles that can be independently verified
4. **Enabling deterministic replay**: Same inputs always produce same outputs

---

## Core Concepts

### 1. Scan Manifest

A **Scan Manifest** is the immutable record of everything that went into a scan:

```yaml
manifest:
  scanId: "scan-12345"
  digest: "sha256:abc123..."          # Content hash of the manifest itself
  timestamp: "2025-12-20T10:00:00Z"
  inputs:
    sbom:
      digest: "sha256:def456..."
      format: "cyclonedx-1.6"
    advisoryFeeds:
      - feedId: "nvd"
        digest: "sha256:ghi789..."
        asOf: "2025-12-20T00:00:00Z"
      - feedId: "ghsa"
        digest: "sha256:jkl012..."
        asOf: "2025-12-20T00:00:00Z"
    callGraph:
      digest: "sha256:mno345..."
      nodes: 12345
    vexDocuments:
      - digest: "sha256:pqr678..."

  configuration:
    scoringAlgorithm: "cvss-4.0"
    reachabilityEnabled: true
    unknownsHandling: "flag"

  environment:
    scannerVersion: "1.0.0"
    feedVersions:
      nvd: "2025.12.20"
      ghsa: "2025.12.20"
```

**Key Properties:**
- Every input has a content hash (digest)
- Configuration is explicitly recorded
- Environment versions are captured
- The manifest itself has a digest

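Content addressing of this kind can be sketched with canonical JSON plus SHA-256: serialize with sorted keys and no insignificant whitespace so that logically equal manifests always hash the same. This is a simplified model, not the exact StellaOps canonicalization:

```python
import hashlib
import json

def digest(obj):
    """sha256 over canonical JSON: sorted keys, compact separators, UTF-8."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return "sha256:" + hashlib.sha256(canonical).hexdigest()

a = {"scanId": "scan-12345", "inputs": {"sbom": "sha256:def456"}}
b = {"inputs": {"sbom": "sha256:def456"}, "scanId": "scan-12345"}  # same data, new order

print(digest(a) == digest(b))  # True: key order does not affect the digest
```

Any change to any field yields a different digest, which is why a manifest digest pins the entire input set.
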
### 2. Proof Bundle

A **Proof Bundle** packages the manifest with cryptographic attestations:

```
proof-bundle/
├── manifest.json          # The scan manifest
├── attestations/
│   ├── manifest.dsse      # DSSE signature over manifest
│   ├── sbom.dsse          # SBOM attestation
│   └── findings.dsse      # Findings attestation
├── inputs/                # Optional: actual input data
│   ├── sbom.json
│   └── callgraph.ndjson
└── bundle.sig             # Bundle signature
```

### 3. Deterministic Replay

**Replay** is the process of re-executing a scan using the exact inputs from a manifest:

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────────┐
│  Proof Bundle   │────▶│  Replay Engine   │────▶│  New Findings   │
│  (manifest +    │     │  (same algo)     │     │  (must match)   │
│   inputs)       │     │                  │     │                 │
└─────────────────┘     └──────────────────┘     └─────────────────┘
```

**Guarantees:**
- Same inputs → Same outputs (byte-identical)
- Different inputs → Different outputs (with diff report)
- Missing inputs → Validation failure

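The replay guarantee reduces to a byte-level comparison: re-run the scoring step on the recorded inputs and compare output digests. A sketch under stated assumptions — `score` here is a stand-in for the deterministic scoring algorithm, and the real engine also validates signatures and input completeness:

```python
import hashlib
import json

def score(inputs):
    """Stand-in for the deterministic scoring algorithm (illustrative)."""
    findings = sorted(inputs["advisories"])  # stable ordering is essential
    return {"findings": findings, "count": len(findings)}

def output_digest(result):
    raw = json.dumps(result, sort_keys=True, separators=(",", ":")).encode("utf-8")
    return hashlib.sha256(raw).hexdigest()

recorded_inputs = {"advisories": ["CVE-2024-1234", "CVE-2024-5678"]}
original = output_digest(score(recorded_inputs))
replayed = output_digest(score(recorded_inputs))
print(original == replayed)  # True: same inputs → byte-identical outputs

changed_inputs = {"advisories": ["CVE-2024-1234"]}  # different input set
print(output_digest(score(changed_inputs)) == original)  # False: a diff report is due
```

Note the `sorted(...)` call: any source of nondeterminism in the scoring step (unordered collections, timestamps, locale) would break the byte-identical guarantee.
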
### 4. Proof Ledger

The **Proof Ledger** is an append-only chain of proof records:

```
┌──────────────┐     ┌──────────────┐     ┌──────────────┐
│   Proof 1    │────▶│   Proof 2    │────▶│   Proof 3    │
│              │     │              │     │              │
│ prevHash: ∅  │     │ prevHash: P1 │     │ prevHash: P2 │
│ manifest: M1 │     │ manifest: M2 │     │ manifest: M3 │
│ hash: P1     │     │ hash: P2     │     │ hash: P3     │
└──────────────┘     └──────────────┘     └──────────────┘
```

This creates an **audit trail** where:
- Each proof links to its predecessor
- Tampering breaks the chain
- History is verifiable

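The chain structure above can be sketched in a few lines: each record's hash commits to both its manifest digest and the previous record's hash, so altering any historical record invalidates every hash after it. The record layout and hashing scheme below are illustrative, not the ledger's wire format:

```python
import hashlib

def record_hash(prev_hash, manifest_digest):
    """Each ledger record commits to its predecessor and its manifest."""
    return hashlib.sha256(f"{prev_hash}|{manifest_digest}".encode("utf-8")).hexdigest()

def build_ledger(manifests):
    ledger, prev = [], ""  # the genesis record has an empty prevHash
    for m in manifests:
        h = record_hash(prev, m)
        ledger.append({"prevHash": prev, "manifest": m, "hash": h})
        prev = h
    return ledger

def verify(ledger):
    """Walk the chain and recompute every hash; any mismatch means tampering."""
    prev = ""
    for rec in ledger:
        if rec["prevHash"] != prev or rec["hash"] != record_hash(prev, rec["manifest"]):
            return False
        prev = rec["hash"]
    return True

ledger = build_ledger(["M1", "M2", "M3"])
print(verify(ledger))          # True
ledger[1]["manifest"] = "M2'"  # tamper with history
print(verify(ledger))          # False: the chain breaks at the altered record
```

This is the same property that makes anchoring the head hash in an external transparency log (such as Rekor) sufficient to protect the whole history.
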
---

## How It Works

### Step 1: Scan Execution

During a scan, the system:
1. Collects all inputs (SBOM, advisories, call graph, VEX)
2. Computes digests for each input
3. Records configuration and environment
4. Executes the scoring algorithm
5. Generates findings with provenance

### Step 2: Manifest Creation

After scoring:
1. All input digests are assembled into a manifest
2. Findings are attached with their own digests
3. The manifest is signed using DSSE

### Step 3: Proof Registration

The proof is registered:
1. Appended to the proof ledger (with chain link)
2. Optionally anchored to Sigstore Rekor (transparency log)
3. Stored in content-addressed storage

### Step 4: Verification

To verify a proof:
1. Retrieve the proof bundle
2. Verify all signatures
3. Check chain integrity (prev_hash)
4. Optionally replay the computation
5. Compare outputs

---

## Use Cases

### 1. Audit Compliance

**Scenario**: An auditor asks "How did you arrive at this vulnerability score?"

**With Score Proofs**:
```bash
# Show the proof
stella proof inspect --scan-id $SCAN_ID

# Auditor can verify independently
stella proof verify --scan-id $SCAN_ID
```

The auditor sees exactly which advisories, SBOM version, and VEX documents were used.

### 2. Dispute Resolution

**Scenario**: A vendor disputes a finding, claiming it was fixed.

**With Score Proofs**:
```bash
# Replay with the original inputs
stella score replay --scan-id $SCAN_ID

# Compare with current data
stella score diff --scan-id $SCAN_ID --compare-latest
```

The diff shows what changed and why.

### 3. Regulatory Evidence

**Scenario**: A regulator requires proof that security scans were performed.

**With Score Proofs**:
```bash
# Export evidence bundle
stella proof export --scan-id $SCAN_ID --output evidence.zip

# Contains signed attestations, timestamps, and chain links
```

### 4. CI/CD Integration

**Scenario**: Ensure pipeline decisions are traceable.

**With Score Proofs**:
```yaml
# In CI pipeline
- name: Scan with proofs
  run: |
    stella scan run --image $IMAGE --proof-mode full
    stella proof export --scan-id $SCAN_ID --output proof.zip

- name: Upload evidence
  uses: actions/upload-artifact@v3
  with:
    name: security-proof
    path: proof.zip
```

---

## Air-Gap Considerations

Score Proofs work offline with some preparation:

### Offline Kit Contents

```
offline-kit/
├── feeds/         # Frozen advisory feeds
├── trust/         # Trust anchors (public keys)
├── time-anchor/   # Trusted timestamp
└── config/        # Offline configuration
```

### Key Differences

| Feature | Online Mode | Offline Mode |
|---------|-------------|--------------|
| Advisory feeds | Real-time | Frozen snapshot |
| Time source | NTP/Sigstore | Time anchor |
| Transparency | Rekor | Local ledger |
| Key rotation | Dynamic | Pre-provisioned |

### Offline Workflow

```bash
# Prepare offline kit
stella airgap prepare --feeds nvd,ghsa --output offline-kit/

# Transfer to air-gapped system
# ... (physical media transfer)

# Run offline scan
stella scan run --offline --kit offline-kit/ --image $IMAGE
```

---

## Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                      Score Proofs System                        │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ Scanner     │────▶│ Manifest    │────▶│ Signer      │       │
│   │ Engine      │     │ Generator   │     │ (DSSE)      │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│          │                   │                   │              │
│          ▼                   ▼                   ▼              │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ Input       │     │ Proof       │     │ Transparency│       │
│   │ Store       │     │ Ledger      │     │ Log         │       │
│   │ (CAS)       │     │ (Append)    │     │ (Rekor)     │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│                                                                 │
│   Replay Engine                                                 │
│   ┌─────────────┐     ┌─────────────┐     ┌─────────────┐       │
│   │ Bundle      │────▶│ Replay      │────▶│ Diff        │       │
│   │ Loader      │     │ Executor    │     │ Engine      │       │
│   └─────────────┘     └─────────────┘     └─────────────┘       │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

---

## Best Practices

### 1. Always Enable Proofs

```bash
# In scanner configuration
stella config set proof.mode full
stella config set proof.transparency enabled
```

### 2. Archive Proof Bundles

Store proof bundles alongside your build artifacts:
- Same retention period as releases
- Include in backup procedures
- Index for searchability

### 3. Verify Periodically

Don't just create proofs—verify them:
```bash
# Weekly verification job
stella proof verify --since "7 days ago" --output report.json
```

### 4. Plan for Offline Scenarios

Even if you operate online, prepare offline capability:
- Maintain offline kits
- Test offline workflows quarterly
- Document offline procedures

---

## Troubleshooting

### "Replay produces different results"

**Possible causes:**
1. Missing input data (check bundle completeness)
2. Algorithm version mismatch
3. Non-deterministic configuration

**Resolution:**
```bash
stella proof inspect --scan-id $SCAN_ID --check-inputs
```

### "Signature verification failed"

**Possible causes:**
1. Key rotation (old key not trusted)
2. Bundle tampering
3. Corrupted download

**Resolution:**
```bash
stella proof verify --scan-id $SCAN_ID --verbose
# Check trust anchors
stella trust list
```

---

## Related Documentation

- [Score Proofs CLI Reference](../cli/score-proofs-cli-reference.md)
- [Score Proofs API Reference](../api/score-proofs-reachability-api-reference.md)
- [Score Proofs Runbook](../operations/score-proofs-runbook.md)
- [Air-Gap Runbook](../airgap/score-proofs-reachability-airgap-runbook.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004
12
docs/dev/onboarding/contribution-checklist.md
Normal file
@@ -0,0 +1,12 @@
# Contribution Checklist (Stub)

Use with ONBOARD-GAPS-300-015.

- [ ] Confirm `docs:` trailer in commits (value or `docs: n/a`).
- [ ] Run `dotnet test --blame-crash --blame-hang --results-directory artifacts/test-results`.
- [ ] Keep seeds fixed (default 1337) and `TZ=UTC` when running tests.
- [ ] Update or create `inputs.lock` when adding fixtures or acceptance packs.
- [ ] For DSSE changes: include signer IDs and offline verification steps.
- [ ] Secret handling: no secrets in repo; use `.env.sample` patterns.
- [ ] Rekor/mirror workflow: prefer mirrored bundle; never live-log in CI.
- [ ] Cross-link docs changes in sprint/AGENTS when applicable.
358
docs/dev/onboarding/dev-quickstart.md
Normal file
@@ -0,0 +1,358 @@
# StellaOps Developer Quickstart

> **Audience:** Mid-level .NET developers
> **Goal:** Get you productive on StellaOps in 1–2 days, with special focus on determinism, cryptographic attestations, and the canonical data model.

---

This quickstart mirrors the 29-Nov-2025 Developer Onboarding advisory (`docs/product-advisories/29-Nov-2025 - StellaOps – Mid-Level .NET Onboarding (Quick Start).md`) and keeps the determinism-first guidance in sync with that release note.

## 1. What You’re Building (Context)

StellaOps is a sovereign, air-gap-friendly platform that turns **SBOMs → VEX** with a fully **replayable, deterministic trust graph**.

Core concepts:

- **Deterministic scans:** Same inputs → same graph, hashes, and verdicts.
- **Cryptographic attestations:** DSSE/in-toto envelopes, optional PQC.
- **Trust lattice:** Merges vendor VEX, runtime signals, configs, etc. into a single deterministic verdict.
- **Audit trail:** Every decision is reproducible from stored inputs and proofs.

**Offline/determinism essentials (read first):**

- Install from the curated offline kit (no network); pin SDK + tool versions in `inputs.lock`.
- Use DSSE-signed configs and keep signing keys in offline `~/.stellaops/keys` with short-lived tokens.
- Run `dotnet format` / `dotnet test` with `--blame-crash --blame-hang` using fixed seeds (`Random(1337)`) to avoid flakiness.
- Capture DB/queue matrix upfront: PostgreSQL (pinned version) and local cache paths; set `TZ=UTC` for all runs.

If you think “content-addressed trust pipeline for SBOMs + VEX,” you’re in the right mental model.

---

## 2. Repository & Docs Map

Start by opening these projects **in order**:

1. `src/StellaOps.Scanner.WebService/`
   Scanning endpoints, rule plumbing, and calls into the trust lattice.
2. `src/StellaOps.Vexer/` (a.k.a. *Excititor*)
   VEX verdict engine and trust-merge logic.
3. `src/StellaOps.Sbomer/`
   SBOM ingest / normalize (CycloneDX, SPDX).
4. `src/StellaOps.Authority/`
   Key management, DSSE/in-toto attestations, license tokens, Rekor integration.
5. `src/StellaOps.Scheduler/`
   Batch processing, replay orchestration.
6. `src/StellaOps.Shared/CanonicalModel/`
   Canonical entities & graph IDs. **Read this carefully** – it underpins determinism.

Starter issues to grab on day 1 (all offline-friendly):

- Add DSSE verification to a small CLI path (`stella verify --local-only`).
- Extend `inputs.lock` examples with a pinned scanner/DB matrix.
- Write a deterministic unit test for canonical ID ordering.
- Improve `docs/` cross-links (Developer Quickstart ↔ platform architecture) and ensure `docs:` trailer appears in commits.

UI note: Console remains in flux; focus on backend determinism first, then follow UI sprints 0209/0215 for micro-interactions and proof-linked VEX updates.

### 2.1 Environment & DB matrix

- PostgreSQL: 16.x (pin in `inputs.lock`).
- Offline feeds: `offline-cache-2025-11-30` (scanner, advisories, VEX).
- Timezone: `TZ=UTC` for all tests and tooling.

### 2.2 Secrets & signing

- Store short-lived signing keys in `~/.stellaops/keys` (gitignored); never commit secrets.
- Use DSSE for pack manifests and fixtures; include signer IDs.
- For Rekor: use mirrored bundle (no live log writes); verify receipts offline.

### 2.3 Contribution checklist

See `docs/dev/onboarding/contribution-checklist.md` for the minimal gates (docs trailer, seeds, inputs.lock, DSSE, secrets).

Helpful docs:

- `docs/modules/platform/*` – protocols (DSSE envelopes, lattice terms, trust receipts).
- `docs/technical/architecture/*` – high-level diagrams and flows.

---

## 3. Local Dev Setup

### 3.1 Prerequisites

- **.NET 10 SDK** (preview as specified in repo).
- **Docker** (for DB, queues, object storage).
- **Node.js** (for Angular UI, if you’re touching the frontend).
- **WSL2** (optional but convenient on Windows).

### 3.2 Bring Up Infra

From the repo root:

```bash
# Bring up core infra for offline / air-gap friendly dev
docker compose -f compose/offline-kit.yml up -d
```

This usually includes:

- **PostgreSQL** (v16+) - Primary database for all services.
- **Valkey** (v8.0) - Redis-compatible cache, event streams, and queues.
- **RustFS** - S3-compatible object storage for SBOMs and artifacts.

### 3.3 Configure Environment

```bash
cp env/example.local.env .env
```

Key settings:

- `STELLAOPS_DB=Postgres`.
- `AUTHORITY_*` – key material and config (see comments in `example.local.env`).
- Optional: `AUTHORITY_PQC=on` to enable post-quantum keys (Dilithium).

### 3.4 Build & Run Backend

```bash
# Restore & build everything
dotnet restore
dotnet build -c Debug

# Run a focused slice for development
dotnet run --project src/StellaOps.Authority/StellaOps.Authority.csproj
dotnet run --project src/StellaOps.Scanner.WebService/StellaOps.Scanner.WebService.csproj
```

Health checks (adjust ports if needed):

```bash
curl -s http://localhost:5080/health   # Authority
curl -s http://localhost:5081/health   # Scanner
```

---

## 4. Deterministic Sanity Tests

These tests prove your local environment is configured correctly for **determinism**. If any of these fail due to snapshot mismatch, fix your environment before writing new features.

### 4.1 SBOM → VEX “Not Affected” (Reachability False)

```bash
dotnet test tests/Determinism/Det_SbomToVex_NotAffected.csproj
```

**What it checks:**

- Two consecutive runs with the same SBOM produce identical `GraphRevisionID` and DSSE payload hashes.

If they differ, inspect:

- JSON canonicalization.
- Locale / culture.
- Line endings.

### 4.2 In-toto Chain: Source → Build → Image Attestation

```bash
dotnet test tests/Attestations/Att_InToto_Chain.csproj
```

**What it checks:**

- DSSE envelope canonicalization is stable.
- Signature over CBOR-canonical JSON matches the stored hash.
- Full in-toto chain can be replayed deterministically.

### 4.3 Lattice Merge: Vendor VEX + Runtime Signal

```bash
dotnet test tests/Lattice/Lattice_VendorPlusRuntime.csproj
```

**What it checks:**

- Merge verdict is stable regardless of input set order.
- Resulting `TrustReceipt` is byte-for-byte identical between runs.

If any “golden” snapshots differ, you likely have:

- Non-canonical JSON.
- Unstable enumeration (e.g., iterating `Dictionary<>` directly).
- Locale or newline drift.

---

## 5. Coding Conventions (Determinism & Crypto)
|
||||
|
||||
These are **non-negotiable** in code that affects trust graphs, proofs, or attestations.
|
||||
|
||||
### 5.1 JSON & Canonicalization
|
||||
|
||||
- Use the **`CanonicalJson`** helper whenever a payload is hashed, signed, or used for IDs.
|
||||
- Rules: UTF-8, sorted keys, no insignificant whitespace, `\n` line endings.
|
||||
|
||||
### 5.2 DSSE Envelopes

- `payloadType` must always be `application/vnd.stellaops.trust+json`.
- Sign over the canonicalized bytes.

```csharp
var payload = CanonicalJson.Serialize(trustDoc);
var env = DsseEnvelope.Create("application/vnd.stellaops.trust+json", payload);
var signed = await keyRing.SignAsync(env.CanonicalizeBytes());
await rekor.SubmitAsync(signed, RekorMode.OfflineMirrorIfAirgapped);
```

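For reference, the bytes a DSSE implementation signs are defined by the spec's Pre-Authentication Encoding (PAE). A minimal sketch, independent of the in-house `CanonicalizeBytes` helper:

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE v1 PAE: "DSSEv1 SP len(type) SP type SP len(payload) SP payload",
    # with lengths as ASCII decimal. This is what the signature covers.
    t = payload_type.encode("utf-8")
    return b"DSSEv1 %d %s %d %s" % (len(t), t, len(payload), payload)

assert pae("application/vnd.stellaops.trust+json", b"{}") == (
    b"DSSEv1 36 application/vnd.stellaops.trust+json 2 {}"
)
```
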
### 5.3 Hashing

- **BLAKE3** for internal content addressing.
- **SHA-256** where interop demands it.
- Never mix algorithms within the same ID type.

### 5.4 Keys & Algorithms

- Default signatures: **Ed25519** via `Authority.KeyRing`.
- Optional PQC: **Dilithium** when `AUTHORITY_PQC=on`.
- Always go through the keyring abstraction; never manage raw keys manually.

### 5.5 Time & Clocks

- Use `Instant`/`DateTimeOffset` (UTC), truncated to milliseconds.
- Never use `DateTime.Now` or local clocks in canonical data.

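The truncation rule, sketched in Python for illustration (the real code uses the `Instant`/`DateTimeOffset` helpers):

```python
from datetime import datetime, timezone

def canonical_timestamp(dt: datetime) -> str:
    # Normalize to UTC and truncate to millisecond precision so two
    # machines observing the same instant emit identical strings.
    utc = dt.astimezone(timezone.utc)
    ms = utc.microsecond // 1000
    return utc.strftime("%Y-%m-%dT%H:%M:%S") + f".{ms:03d}Z"

t = datetime(2025, 11, 25, 12, 0, 0, 123456, tzinfo=timezone.utc)
assert canonical_timestamp(t) == "2025-11-25T12:00:00.123Z"
```
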
### 5.6 IDs & Graph Nodes

- Canonical/public IDs derive from hashes of canonical bytes.
- DB primary keys are implementation details.
- Do not depend on DB auto-increment or implicit sort order when hashing.

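Combining 5.1 and 5.3, ID derivation looks roughly like this sketch (Python; SHA-256 stands in for BLAKE3, which needs a third-party package, and the `kind:` prefix is illustrative, not the actual ID scheme):

```python
import hashlib
import json

def canonical_id(kind: str, obj) -> str:
    # Hash canonical bytes; any DB key or insertion order is irrelevant.
    payload = json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")
    digest = hashlib.sha256(payload).hexdigest()
    return f"{kind}:{digest[:16]}"

# Field order does not change the ID; only content does.
assert canonical_id("pkg", {"name": "zlib", "v": "1.3"}) == \
       canonical_id("pkg", {"v": "1.3", "name": "zlib"})
```
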
### 5.7 VEX Verdicts

Every VEX verdict must:

- Carry `proofs[]` (reachability, config guards, runtime paths).
- Emit a `receipt` signed by Authority, covering verdict, proof hashes, and context.

---

## 6. Daily Workflow

1. Pick a focused issue (see starter tasks below).
2. Write tests first, especially determinism scenarios.
3. Implement changes with canonicalization boundaries explicit and signing centralized.
4. Run `dotnet test --filter Category=Determinism`.
5. Commit with the appropriate prefix (`feat(scanner):`, `feat(vexer):`, `feat(authority):`) and mention the affected `GraphRevisionID` if your change alters the trust graph.

---

## 7. Suggested Starter Tasks

These introduce the canonical data model and determinism mindset.

### 7.1 Normalize CycloneDX Components → Canonical Packages

**Area:** `StellaOps.Sbomer`

**Tests:** `tests/Determinism/Det_SbomMapping`

**Definition of done:**

- Equivalent SBOMs (even if fields shuffle) yield identical package sets and canonical IDs.
- `CanonicalPackageSet.hash` is stable.
- Edge cases covered: missing `purl`, duplicate components, case variation.

### 7.2 Implement “Not-Affected by Configuration” Proof

**Area:** `StellaOps.Vexer/Proofs/ConfigSwitchProof.cs`

**Definition of done:**

- With `FeatureX=false`, CVE-1234 reports `status = not_affected` and the proof records `configPath` + `observed=false`.
- Proof hash is deterministic and included in the DSSE receipt.
- Lattice merge flips the verdict to `not_affected` when the runtime/config proof weight crosses the threshold, even if the vendor says `affected`.

### 7.3 Authority Offline Rekor Mirror Submitter

**Area:** `StellaOps.Authority/Rekor/RekorMirrorClient.cs`

**Definition of done:**

- `RekorMode.OfflineMirrorIfAirgapped` records canonical entries (JSON + hash path) locally.
- `rekor sync` replays entries in order, preserving entry IDs.
- Golden test ensures the same input sequence → same mirror tree hash.

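The "same input sequence → same mirror tree hash" property from 7.3 can be prototyped as a simple hash fold (illustrative Python; the seed constant and chaining scheme are hypothetical, not the actual mirror format):

```python
import hashlib

def mirror_tree_hash(entries: list) -> str:
    # Fold each canonical entry into a running hash chain; identical
    # input sequences always produce the same final tree hash, and
    # reordering the sequence changes it.
    chain = hashlib.sha256(b"stellaops-mirror-v0").digest()
    for entry in entries:
        chain = hashlib.sha256(chain + hashlib.sha256(entry).digest()).digest()
    return chain.hex()

a = mirror_tree_hash([b'{"id":1}', b'{"id":2}'])
assert a == mirror_tree_hash([b'{"id":1}', b'{"id":2}'])  # replayable
assert a != mirror_tree_hash([b'{"id":2}', b'{"id":1}'])  # order matters
```
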
---

## 8. Database Notes (PostgreSQL)

- Use `StellaOps.Shared.Persistence` repository interfaces.
- Canonical/public IDs are hash-derived; DB keys are internal details.
- Never rely on DB sort order for anything that affects hashes or verdicts; re-canonicalize before hashing and apply deterministic ordering afterwards.

---

## 9. Common Pitfalls

1. Non-canonical JSON (unsorted keys, extra whitespace, mixed `\r\n`).
2. Local time creeping into proofs (`DateTime.Now`).
3. Unstable GUIDs in tests or canonical entities.
4. Unordered collections (`Dictionary<>` iterations, LINQ without `OrderBy`) while hashing or serializing.
5. Platform drift (Windows vs Linux newline/culture differences) – always use invariant culture and `\n` in canonical data.

---

## 10. Useful Commands

### 10.1 Determinism Pack

```bash
# Run determinism-tagged fixtures
dotnet test --filter Category=Determinism
```

Update golden snapshots deliberately:

```bash
dotnet test --filter Category=Determinism -- \
  TestRunParameters.Parameter(name="UpdateSnapshots", value="true")
```

### 10.2 Quick API Smoke

```bash
curl -s http://localhost:5080/health

curl -s -X POST \
  http://localhost:5081/scan \
  -H "Content-Type: application/json" \
  -d @samples/nginx.sbom.json
```

### 10.3 Verify DSSE Signature Locally

```bash
dotnet run --project tools/StellaOps.Tools.Verify -- file trust.receipt.json
```

---

## 11. Glossary (Ask-Once)

- **SBOM** – Software Bill of Materials (CycloneDX/SPDX).
- **VEX** – Vulnerability Exploitability eXchange: verdicts include `affected`, `not_affected`, `under_investigation`.
- **DSSE** – Dead Simple Signing Envelope; we sign canonical bytes.
- **In-toto** – Supply-chain attestation framework for source → build → artifact chains.
- **Lattice** – Rule system merging multiple verdicts/proofs into deterministic outcomes.
- **GraphRevisionID** – Hash of the canonical trust graph; acts like a build number for audits.

Welcome aboard. Your best “map” is:

1. Read the CanonicalModel types.
2. Run the determinism tests.
3. Ship one of the starter tasks with deterministic, test-covered changes.

Keep everything **canonical, hashable, and replayable** and you’ll fit right in.

543
docs/dev/onboarding/faq/epic-3500-faq.md
Normal file
@@ -0,0 +1,543 @@

# Epic 3500: Score Proofs & Reachability FAQ

**Sprint:** SPRINT_3500_0004_0004
**Last Updated:** 2025-12-20

This FAQ covers the most common questions about Score Proofs, Reachability Analysis, and Unknowns Management features introduced in Epic 3500.

---

## Table of Contents

1. [General Questions](#general-questions)
2. [Score Proofs](#score-proofs)
3. [Reachability Analysis](#reachability-analysis)
4. [Unknowns Queue](#unknowns-queue)
5. [Integration & Deployment](#integration--deployment)
6. [Performance](#performance)
7. [Troubleshooting](#troubleshooting)

---

## General Questions

### Q: What is Epic 3500?

**A:** Epic 3500 introduces three major capabilities to StellaOps:

1. **Score Proofs**: Cryptographically verifiable attestations proving that vulnerability scores are deterministic and reproducible
2. **Reachability Analysis**: Static analysis determining whether vulnerable code paths are actually reachable from your application
3. **Unknowns Management**: Tracking and triaging components that cannot be fully analyzed

### Q: Do I need all three features?

**A:** The features work independently but provide the most value together:

- **Score Proofs alone**: Useful for compliance and audit trails
- **Reachability alone**: Useful for prioritizing remediation
- **Together**: Full attack surface context with cryptographic proof

### Q: What's the minimum version required?

**A:** Epic 3500 features require:
- StellaOps Scanner v2.5.0+
- StellaOps CLI v2.5.0+
- .NET 10 runtime (for self-hosted deployments)

### Q: Are these features available in air-gapped environments?

**A:** Yes. All Epic 3500 features support air-gap operation:
- Score Proofs can be generated and verified offline
- Reachability analysis requires no network connectivity
- Unknowns data persists locally

See the [Air-Gap Operations Guide](../operations/airgap-operations-runbook.md) for details.

---

## Score Proofs

### Q: What exactly is a Score Proof?

**A:** A Score Proof is a DSSE-signed attestation bundle containing:

```
Score Proof Bundle
├── scan_manifest.json       # Content-addressed inputs (SBOM, feeds)
├── proof.dsse               # DSSE-signed attestation
├── merkle_proof.json        # Merkle tree proof for individual findings
└── replay_instructions.md   # How to reproduce the scan
```

The proof guarantees that given the same inputs, anyone can reproduce the exact same vulnerability scores.

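Conceptually, `merkle_proof.json` lets a verifier recompute the signed root from a single finding plus sibling hashes. A minimal sketch (Python; the proof schema shown is hypothetical, not the actual bundle format):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, siblings: list, root: bytes) -> bool:
    # Walk from the leaf to the root, hashing with each sibling on the
    # recorded side; the proof is valid iff we land on the signed root.
    node = h(leaf)
    for side, sibling in siblings:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Two-leaf tree: root = h(h(a) + h(b)).
a, b = b"finding-a", b"finding-b"
root = h(h(a) + h(b))
assert verify_inclusion(a, [("right", h(b))], root)
assert verify_inclusion(b, [("left", h(a))], root)
```
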
### Q: Why do I need Score Proofs?

**A:** Score Proofs solve several problems:

| Problem | Solution |
|---------|----------|
| "Scanner gave different results yesterday" | Manifest captures exact inputs |
| "How do I prove to auditors this was the score?" | DSSE signatures provide non-repudiation |
| "Can I trust this third-party scan?" | Independent verification possible |
| "Which advisory version was used?" | All feed digests recorded |

### Q: How do I generate a Score Proof?

**A:** Enable proofs during scanning:

```bash
# CLI
stella scan --sbom ./sbom.json --generate-proof --output ./scan-with-proof/

# API
POST /api/v1/scans
{
  "sbomDigest": "sha256:abc...",
  "options": {
    "generateProof": true
  }
}
```

### Q: How do I verify a Score Proof?

**A:** Use the verify command:

```bash
# Verify signature and Merkle root
stella proof verify ./scan-with-proof/proof.dsse

# Full replay verification (re-runs scan)
stella score replay ./scan-with-proof/ --verify
```

### Q: Can I verify proofs without network access?

**A:** Yes. Verification only requires:
- The proof bundle
- A trusted public key
- Optionally, the original inputs (for replay)

See: [Proof Verification Runbook](../operations/proof-verification-runbook.md)

### Q: What signing algorithms are supported?

**A:** Current support includes:

| Algorithm | Status | Use Case |
|-----------|--------|----------|
| ECDSA P-256 | ✅ Supported | Default, widely compatible |
| ECDSA P-384 | ✅ Supported | Higher security |
| RSA-2048 | ✅ Supported | Legacy compatibility |
| Ed25519 | ✅ Supported | Modern, fast |
| PQC (ML-DSA) | 🔜 Roadmap | Post-quantum ready |

### Q: How long are proofs valid?

**A:** Proofs don't expire, but their trust depends on:
- Key validity at signing time
- Certificate chain validity (if using X.509)
- Rekor timestamp (if transparency log enabled)

Best practice: Archive proofs with their verification materials.

### Q: What's the overhead of generating proofs?

**A:** Typical overhead:
- **Time**: +5-15% scan duration
- **Storage**: +10-20KB per scan (proof bundle)
- **CPU**: Minimal (signing is fast)

For detailed benchmarks, see [Performance Workbook](../PERFORMANCE_WORKBOOK.md).

---

## Reachability Analysis

### Q: What is reachability analysis?

**A:** Reachability analysis answers: "Can vulnerable code actually be executed?"

It analyzes your application's call graph to determine if vulnerable functions in dependencies are reachable from your entry points.

### Q: What are the reachability verdicts?

**A:**

| Verdict | Meaning | Action |
|---------|---------|--------|
| `REACHABLE_STATIC` | Vulnerable code is on an executable path | **Prioritize fix** |
| `POSSIBLY_REACHABLE` | May be reachable under certain conditions | **Review** |
| `NOT_REACHABLE` | No path from entry points to vulnerable code | **Lower priority** |
| `UNKNOWN` | Analysis couldn't determine reachability | **Manual review** |

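At its core the static check is a graph search from entry points toward the vulnerable symbol. A minimal sketch (Python; the graph shape is illustrative, and the depth cap mirrors the `--reachability-depth` option mentioned later):

```python
from collections import deque

def is_reachable(call_graph: dict, entry_points: list, target: str, max_depth: int = 10) -> bool:
    # Breadth-first walk over static call edges, bounded by max_depth.
    seen = set(entry_points)
    frontier = deque((ep, 0) for ep in entry_points)
    while frontier:
        fn, depth = frontier.popleft()
        if fn == target:
            return True
        if depth >= max_depth:
            continue
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                frontier.append((callee, depth + 1))
    return False

graph = {"main": ["parse"], "parse": ["zlib.inflate"]}
assert is_reachable(graph, ["main"], "zlib.inflate")
assert not is_reachable(graph, ["main"], "openssl.decrypt")
```
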
### Q: What's the difference from EPSS?

**A:**

| Metric | What It Measures | Data Source |
|--------|------------------|-------------|
| **EPSS** | Probability of exploitation in the wild | Threat intelligence |
| **Reachability** | Whether your code can trigger the vuln | Your application's call graph |

They're complementary: EPSS tells you "how likely is exploitation globally", reachability tells you "can it affect me specifically".

### Q: How do I enable reachability analysis?

**A:**

```bash
# CLI - analyze SBOM with reachability
stella scan --sbom ./sbom.json --reachability

# With call graph input
stella scan --sbom ./sbom.json --call-graph ./callgraph.json

# Generate call graph first
stella scan graph ./src --output ./callgraph.json
```

### Q: What languages are supported for call graph analysis?

**A:**

| Language | Support Level | Notes |
|----------|---------------|-------|
| Java | Full | Requires bytecode |
| JavaScript/TS | Full | Requires source |
| Python | Full | Requires source |
| Go | Full | Requires source or binary |
| C# | Partial | Basic support |
| C/C++ | Limited | Best-effort |

### Q: What if my language isn't supported?

**A:** You can:
1. Provide a custom call graph in standard format
2. Use `UNKNOWN` verdict handling
3. Combine with other prioritization signals (EPSS, VEX)

### Q: How accurate is reachability analysis?

**A:** Accuracy depends on several factors:

| Factor | Impact |
|--------|--------|
| Call graph completeness | Higher completeness = better accuracy |
| Dynamic dispatch | May cause under-reporting (conservative) |
| Reflection/eval | May cause under-reporting |
| Language support | Full support = more accurate |

StellaOps errs on the side of caution: if uncertain, it reports `POSSIBLY_REACHABLE` rather than false negatives.

### Q: What's the performance impact?

**A:** Reachability analysis adds:
- **Call graph generation**: 10-60 seconds depending on codebase size
- **Reachability computation**: 1-10 seconds per scan
- **Memory**: Call graph size varies (typically 10-100MB)

### Q: Can I cache call graphs?

**A:** Yes. If your code hasn't changed, reuse the call graph:

```bash
# Cache the call graph
stella scan graph ./src --output ./callgraph.json

# Reuse in subsequent scans
stella scan --sbom ./sbom.json --call-graph ./callgraph.json
```

---

## Unknowns Queue

### Q: What is the unknowns queue?

**A:** The unknowns queue tracks components that couldn't be fully analyzed:
- Packages without advisory mappings
- Unrecognized file formats
- Resolution failures
- Unsupported ecosystems

### Q: Why should I care about unknowns?

**A:** Unknowns represent blind spots:

```
❌ 5% unknown = 5% of your attack surface is invisible
```

Unmanaged unknowns can hide critical vulnerabilities.

### Q: How do I view unknowns?

**A:**

```bash
# List pending unknowns
stella unknowns list

# Get statistics
stella unknowns stats

# Export for analysis
stella unknowns list --format csv > unknowns.csv
```

### Q: How do I resolve unknowns?

**A:** Common resolution paths:

| Unknown Type | Resolution |
|--------------|------------|
| Internal package | Mark as `internal_package` |
| Missing mapping | Submit CPE/PURL mapping |
| Feed delay | Update feeds, reprocess |
| Unsupported format | Convert to supported format |

```bash
# Resolve as internal package
stella unknowns resolve <id> --resolution internal_package

# After feed update, reprocess
stella feeds update --all
stella unknowns reprocess
```

### Q: What's a good unknowns rate?

**A:** Target metrics:

| Metric | Good | Warning | Critical |
|--------|------|---------|----------|
| Unknown package % | < 5% | 5-10% | > 10% |
| Pending queue depth | < 50 | 50-100 | > 100 |
| Avg resolution time | < 7d | 7-14d | > 14d |

### Q: Can I automate unknown handling?

**A:** Yes, using patterns:

```yaml
# config/unknowns.yaml
internalPatterns:
  - "@mycompany/*"
  - "internal-*"
autoResolution:
  - match: "reason = NO_ADVISORY_MATCH AND ecosystem = internal"
    resolution: internal_package
```

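Conceptually, pattern-based auto-resolution behaves like the sketch below (Python; the matcher actually runs server-side and its exact semantics are not specified here — reason codes and glob matching are assumptions):

```python
from fnmatch import fnmatch

# Hypothetical mirror of the internalPatterns config above.
INTERNAL_PATTERNS = ["@mycompany/*", "internal-*"]

def auto_resolution(package_name: str, reason: str):
    # Resolve only when both conditions hold: no advisory match AND the
    # package name matches a declared internal pattern.
    internal = any(fnmatch(package_name, p) for p in INTERNAL_PATTERNS)
    if reason == "NO_ADVISORY_MATCH" and internal:
        return "internal_package"
    return None  # leave in the pending queue for manual triage

assert auto_resolution("@mycompany/auth", "NO_ADVISORY_MATCH") == "internal_package"
assert auto_resolution("left-pad", "NO_ADVISORY_MATCH") is None
```
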
---

## Integration & Deployment

### Q: How do I integrate with CI/CD?

**A:** Example GitHub Actions workflow:

```yaml
- name: Scan with proofs and reachability
  run: |
    stella scan \
      --sbom ./sbom.json \
      --generate-proof \
      --reachability \
      --output ./results/

- name: Fail on reachable criticals
  run: |
    REACHABLE=$(stella query --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" --count)
    if [ "$REACHABLE" -gt 0 ]; then
      exit 1
    fi
```

### Q: Can I use these features in GitLab CI?

**A:** Yes, same commands work in any CI system. See [CI Integration Guide](../ci/).

### Q: What API endpoints are available?

**A:** Key endpoints:

| Feature | Endpoint | Method |
|---------|----------|--------|
| Generate proof | `/api/v1/scans/{id}/proof` | GET |
| Verify proof | `/api/v1/proofs/verify` | POST |
| Reachability | `/api/v1/scans/{id}/reachability` | GET |
| Unknowns | `/api/v1/unknowns` | GET/POST/DELETE |

Full API reference: [API Documentation](../api/)

### Q: How do I configure retention?

**A:** Configure in `appsettings.json`:

```json
{
  "Retention": {
    "Proofs": {
      "DefaultDays": 365,
      "MaxDays": 1825
    },
    "Reachability": {
      "CacheHours": 24
    },
    "Unknowns": {
      "ArchiveAfterDays": 90
    }
  }
}
```

---

## Performance

### Q: What's the performance impact of enabling all features?

**A:** Typical combined overhead:

| Feature | Time Overhead | Storage Overhead |
|---------|---------------|------------------|
| Base scan | Baseline | Baseline |
| + Proof generation | +5-15% | +20KB |
| + Reachability | +50-200% | +50KB |
| + All features | +60-220% | +70KB |

### Q: How do I optimize for large codebases?

**A:**

1. **Cache call graphs** for incremental scans
2. **Use parallel processing** for multi-repo scans
3. **Limit reachability depth** for very large graphs
4. **Use deterministic mode** only when needed

```bash
# Limit BFS depth for large graphs
stella scan --reachability --reachability-depth 10
```

### Q: Are there size limits?

**A:**

| Resource | Default Limit | Configurable |
|----------|---------------|--------------|
| SBOM size | 50MB | Yes |
| Call graph nodes | 1M | Yes |
| Proof bundle size | 10MB | Yes |
| Unknowns queue | 10K items | Yes |

---

## Troubleshooting

### Q: Score replay gives different results

**A:** Check:
1. Feed versions match (`stella feeds status`)
2. Algorithm version matches
3. No clock skew (UTC timestamps)
4. Same configuration settings

```bash
# Verify manifest
stella proof verify --manifest-only ./proof.dsse
```

### Q: Reachability shows everything as UNKNOWN

**A:** This usually means:
1. Call graph wasn't generated
2. Language not supported
3. Source code not available

```bash
# Check call graph status
stella scan graph status ./callgraph.json
```

### Q: Unknowns queue growing rapidly

**A:** Common causes:
1. **Feed staleness**: Update feeds
2. **Internal packages**: Configure internal patterns
3. **New ecosystems**: Check language support

```bash
# Diagnose
stella unknowns stats --by-reason
```

### Q: Proof verification fails

**A:** Check:
1. Public key matches signing key
2. Certificate not expired
3. Bundle not corrupted

```bash
# Detailed verification
stella proof verify ./proof.dsse --verbose
```

### Q: Where do I get more help?

**A:**
- [Operations Runbooks](../operations/)
- [Troubleshooting Guide](troubleshooting-guide.md)
- [Architecture Documentation](../ARCHITECTURE_OVERVIEW.md)
- Support: support@stellaops.example.com

---

## Quick Reference

### Commands Cheat Sheet

```bash
# Score Proofs
stella scan --generate-proof         # Generate proof
stella proof verify ./proof.dsse     # Verify proof
stella score replay ./bundle/        # Replay scan

# Reachability
stella scan graph ./src              # Generate call graph
stella scan --reachability           # Scan with reachability
stella reachability query --filter   # Query reachability data

# Unknowns
stella unknowns list                 # List unknowns
stella unknowns resolve <id>         # Resolve unknown
stella unknowns stats                # Statistics
```

### Configuration Quick Reference

```json
{
  "ScoreProofs": {
    "Enabled": true,
    "SigningAlgorithm": "ECDSA-P256"
  },
  "Reachability": {
    "Enabled": true,
    "MaxDepth": 50,
    "CacheEnabled": true
  },
  "Unknowns": {
    "AutoResolveInternal": true,
    "InternalPatterns": ["@company/*"]
  }
}
```

---

**Feedback?** Submit issues or suggestions via the project's issue tracker.

279
docs/dev/onboarding/faq/faq.md
Normal file
@@ -0,0 +1,279 @@

# Score Proofs & Reachability FAQ

**Sprint:** SPRINT_3500_0004_0004
**Audience:** All Users

---

## General Questions

### Q: What is the difference between Score Proofs and traditional scanning?

**A:** Traditional scanners produce results that may vary between runs and lack auditability. Score Proofs provide:
- **Reproducibility**: Same inputs always produce same outputs
- **Verifiability**: Cryptographic proof of how scores were computed
- **Traceability**: Complete audit trail from inputs to findings
- **Transparency**: Optional anchoring to public transparency logs

### Q: Do I need Score Proofs for compliance?

**A:** Score Proofs are valuable for:
- **SOC 2**: Evidence of security scanning processes
- **PCI DSS**: Proof of vulnerability assessments
- **HIPAA**: Documentation of security controls
- **ISO 27001**: Audit trail for security activities

### Q: Can I use reachability without Score Proofs?

**A:** Yes, the features are independent. You can:
- Use reachability alone to prioritize vulnerabilities
- Use Score Proofs alone for audit trails
- Use both for maximum value

---

## Score Proofs

### Q: What's included in a proof bundle?

**A:** A proof bundle contains:
```
proof-bundle/
├── manifest.json      # All input digests and configuration
├── attestations/      # DSSE signatures
├── inputs/            # (Optional) Actual input data
└── bundle.sig         # Bundle signature
```

### Q: How long should I retain proof bundles?

**A:** Recommended retention:
- **Active releases**: Forever (or product lifetime)
- **Previous releases**: 3 years minimum
- **Development builds**: 90 days

### Q: Can I verify proofs offline?

**A:** Yes, with preparation:
1. Export the proof bundle with inputs
2. Ensure trust anchors (public keys) are available
3. Use `stella proof verify --offline`

### Q: What happens if advisory data changes?

**A:** The original proof remains valid because it references the advisory data **at the time of scan** (by digest). Replaying with new advisory data will produce a different manifest.

### Q: How do I compare scans over time?

**A:** Use the diff command:
```bash
stella score diff --scan-id $SCAN1 --compare $SCAN2
```

This shows:
- New vulnerabilities
- Resolved vulnerabilities
- Score changes
- Input differences

---

## Reachability

### Q: How accurate is reachability analysis?

**A:** Accuracy depends on call graph quality:
- **Complete call graphs**: 85-95% accuracy
- **Partial call graphs**: 60-80% accuracy
- **No call graph**: No reachability (all UNKNOWN)

### Q: Why is my finding marked UNKNOWN?

**A:** Common causes:
1. No call graph uploaded
2. Call graph doesn't include the affected package
3. Vulnerable function symbol couldn't be resolved

**Solution**: Check call graph coverage:
```bash
stella scan graph summary --scan-id $SCAN_ID
```

### Q: Can reflection-based calls be detected?

**A:** Partially. The system:
- Detects common reflection patterns in supported frameworks
- Marks reflection-based paths as `POSSIBLY_REACHABLE`
- Allows manual hints for custom reflection

### Q: What's the difference between POSSIBLY_REACHABLE and REACHABLE_STATIC?

**A:**
- **POSSIBLY_REACHABLE**: Path exists but involves heuristic edges (reflection, dynamic dispatch). Confidence is lower.
- **REACHABLE_STATIC**: All edges in the path are statically proven. High confidence.

### Q: How do I improve call graph coverage?

**A:**
1. **Enable whole-program analysis** during build
2. **Include all modules** in the build
3. **Add framework hints** for DI/AOP
4. **Upload runtime traces** for dynamic evidence

### Q: Does reachability work for interpreted languages?

**A:** Yes, but with caveats:
- **Python/JS**: Static analysis provides best-effort call graphs
- **Ruby**: Limited support, many edges are heuristic
- **Runtime traces**: Significantly improve accuracy for all interpreted languages

---

## Unknowns

### Q: Should I be worried about unknowns?

**A:** Unknowns represent blind spots. High-priority unknowns (score ≥12) should be investigated. Low-priority unknowns can be tracked but don't require immediate action.

### Q: How do I reduce the number of unknowns?

**A:**
1. **Add mappings**: Contribute CPE mappings to public databases
2. **Use supported packages**: Replace unmappable dependencies
3. **Contact vendors**: Request CVE IDs for security issues
4. **Build internal registry**: Map internal packages to advisories

### Q: What's the difference between suppress and resolve?

**A:**
| Action | Use When | Duration | Audit |
|--------|----------|----------|-------|
| Suppress | Accept risk temporarily | Has expiration | Reviewed periodically |
| Resolve | Issue is addressed | Permanent | Closed with evidence |

### Q: Can unknowns block my pipeline?

**A:** Yes, you can configure policies:
```bash
# Block on critical unknowns
stella unknowns list --min-score 20 --status pending --output-format json | jq 'length'
```

---

## Air-Gap / Offline

### Q: Can I run fully offline?

**A:** Yes, with an offline kit containing:
- Frozen advisory feeds
- Trust anchors (public keys)
- Time anchor (trusted timestamp)
- Configuration files

### Q: How fresh is offline advisory data?

**A:** As fresh as when the offline kit was created. Update kits regularly:
```bash
# On connected system
stella airgap prepare --feeds nvd,ghsa --output offline-kit/
```

### Q: How do I handle transparency logs offline?

**A:** Offline mode uses a local proof ledger instead of Sigstore Rekor. The local ledger provides:
- Chain integrity (hash links)
- Tamper evidence
- Export capability for later anchoring

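The chain-integrity property can be illustrated with a toy hash-linked ledger (Python; the record fields and genesis value are hypothetical, not the actual ledger format):

```python
import hashlib
import json

GENESIS = "0" * 64  # hypothetical genesis value

def append_entry(ledger: list, payload: dict) -> dict:
    # Each entry commits to the previous entry's hash, so editing any
    # earlier record invalidates every hash that follows it.
    prev = ledger[-1]["entry_hash"] if ledger else GENESIS
    body = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    entry = {"prev": prev, "payload": payload, "entry_hash": entry_hash}
    ledger.append(entry)
    return entry

def verify_chain(ledger: list) -> bool:
    prev = GENESIS
    for e in ledger:
        body = json.dumps(e["payload"], sort_keys=True, separators=(",", ":"))
        expected = hashlib.sha256((prev + body).encode()).hexdigest()
        if e["prev"] != prev or e["entry_hash"] != expected:
            return False
        prev = e["entry_hash"]
    return True

ledger = []
append_entry(ledger, {"proof": "sha256:aaa"})
append_entry(ledger, {"proof": "sha256:bbb"})
assert verify_chain(ledger)
ledger[0]["payload"]["proof"] = "sha256:tampered"  # any edit breaks the chain
assert not verify_chain(ledger)
```
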
---

## Performance

### Q: How long does reachability computation take?

**A:** Depends on graph size:
| Graph Size | Typical Duration |
|------------|------------------|
| <10K nodes | <30 seconds |
| 10K-100K nodes | 30s - 3 minutes |
| 100K-1M nodes | 3-15 minutes |
| >1M nodes | 15+ minutes |

### Q: Can I speed up scans?

**A:** Yes:
1. **Enable caching**: `--cache enabled`
2. **Limit depth**: `--max-depth 15`
3. **Partition analysis**: `--partition-by artifact`
4. **Enable parallelism**: `--parallel true`

### Q: How much storage do proofs require?

**A:**
- **Manifest only**: ~50-100 KB per scan
- **With inputs**: 10-500 MB depending on SBOM/call graph size
- **Full bundle**: Add ~50% for signatures and metadata

---

## Integration
|
||||
|
||||
### Q: Which CI/CD systems are supported?
|
||||
|
||||
**A:** Any CI/CD system that can run CLI commands:
|
||||
- GitHub Actions
|
||||
- GitLab CI
|
||||
- Jenkins
|
||||
- Azure Pipelines
|
||||
- CircleCI
|
||||
- Buildkite
|
||||
|
||||
### Q: How do I integrate with SIEM?

**A:** Export findings and unknowns as NDJSON:

```bash
# Findings
stella scan findings --scan-id $SCAN_ID --output-format ndjson > /var/log/stella/findings.ndjson

# Unknowns
stella unknowns export --workspace-id $WS_ID --format ndjson > /var/log/stella/unknowns.ndjson
```
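NDJSON (one JSON object per line) is easy to filter before forwarding. A minimal sketch; the `severity` and `id` field names are assumptions about the export schema, not a documented contract:

```python
import json

def filter_ndjson(text: str, severity: str):
    """Yield records from NDJSON text that match the given severity."""
    for line in text.splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        record = json.loads(line)
        if record.get("severity") == severity:
            yield record

sample = (
    '{"id": "F-1", "severity": "critical"}\n'
    '{"id": "F-2", "severity": "low"}\n'
    '{"id": "F-3", "severity": "critical"}\n'
)
ids = [r["id"] for r in filter_ndjson(sample, "critical")]
assert ids == ["F-1", "F-3"]
```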
### Q: Can I generate SARIF for code scanning?

**A:** Yes:

```bash
stella reachability findings --scan-id $SCAN_ID --output-format sarif > results.sarif
```

### Q: Is there an API for everything?

**A:** Yes, the CLI wraps the REST API. See [API Reference](../api/score-proofs-reachability-api-reference.md) for endpoints.

---
## Troubleshooting Quick Reference

| Issue | Likely Cause | Quick Fix |
|-------|--------------|-----------|
| Replay produces different results | Missing inputs | `stella proof inspect --check-inputs` |
| Too many UNKNOWN reachability findings | Incomplete call graph | `stella scan graph summary` |
| Signature verification fails | Key rotation | `stella trust list` |
| Computation timeout | Large graph | Increase `--timeout` |
| Many unmapped_purl unknowns | Internal packages | Add internal registry mappings |

---
## Related Documentation

- [Score Proofs Concept Guide](./score-proofs-concept-guide.md)
- [Reachability Concept Guide](./reachability-concept-guide.md)
- [Unknowns Management Guide](./unknowns-management-guide.md)
- [Troubleshooting Guide](./troubleshooting-guide.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004
docs/dev/onboarding/troubleshooting-guide.md (new file, 492 lines)
# Score Proofs & Reachability Troubleshooting Guide

**Sprint:** SPRINT_3500_0004_0004
**Audience:** Operations, Support, Security Engineers

---
## Quick Diagnostic Commands

```bash
# Check system health
stella status

# Verify scan completed successfully
stella scan status --scan-id $SCAN_ID

# Check reachability computation status
stella reachability job-status --job-id $JOB_ID

# Verify proof integrity
stella proof verify --scan-id $SCAN_ID --verbose
```

---
## Score Proofs Issues

### 1. Replay Produces Different Results

**Symptoms:**

- `stella score replay` output differs from original
- Verification fails with "hash mismatch"

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Missing inputs | `stella proof inspect --check-inputs` shows gaps | Export with `--include-inputs` |
| Algorithm version mismatch | Check `environment.scannerVersion` in manifest | Use matching scanner version |
| Non-deterministic config | Review `configuration` section | Enable `--deterministic` mode |
| Feed drift | Compare `advisoryFeeds.asOf` timestamps | Use frozen feeds |

**Resolution Steps:**

```bash
# Step 1: Inspect the proof
stella proof inspect --scan-id $SCAN_ID

# Step 2: Check for missing inputs
stella proof inspect --scan-id $SCAN_ID --check-inputs

# Step 3: If inputs are missing, re-export with data
stella proof export --scan-id $SCAN_ID --include-inputs --output proof-full.zip

# Step 4: Retry replay
stella score replay --scan-id $SCAN_ID --bundle proof-full.zip
```
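Conceptually, the `--check-inputs` step recomputes the digest of each input on disk and compares it with the content-addressed reference in the manifest. A simplified sketch; the manifest layout (an `inputs` mapping of file name to SHA-256) is illustrative, not the real schema:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def check_inputs(manifest: dict, files: dict) -> list:
    """Return names of inputs that are missing or whose digest differs."""
    problems = []
    for name, expected in manifest["inputs"].items():
        data = files.get(name)
        if data is None:
            problems.append(f"{name}: missing")
        elif sha256_hex(data) != expected:
            problems.append(f"{name}: digest mismatch")
    return problems

sbom = b'{"components": []}'
manifest = {"inputs": {"sbom.json": sha256_hex(sbom)}}

assert check_inputs(manifest, {"sbom.json": sbom}) == []
assert check_inputs(manifest, {"sbom.json": b"tampered"}) == ["sbom.json: digest mismatch"]
assert check_inputs(manifest, {}) == ["sbom.json: missing"]
```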
---
### 2. Signature Verification Failed

**Symptoms:**

- "Invalid signature" or "Signature verification failed"
- `stella proof verify` returns an error

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Key rotation | Check `stella trust list` for key dates | Import new trust anchor |
| Corrupted bundle | Verify file integrity | Re-download bundle |
| Wrong trust root | Check issuer in attestation | Configure the correct trust root |
| Tampered content | Hash mismatch in bundle | Investigate tampering |

**Resolution Steps:**

```bash
# Step 1: Verbose verification
stella proof verify --scan-id $SCAN_ID --verbose

# Step 2: Check trust anchors
stella trust list

# Step 3: If key rotated, import new anchor
stella trust import --file new-public-key.pem

# Step 4: Retry verification
stella proof verify --scan-id $SCAN_ID
```

---
### 3. Proof Chain Broken

**Symptoms:**

- "Chain integrity violation"
- "prev_hash mismatch"

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Database corruption | Check Postgres logs | Restore from backup |
| Manual modification | Audit access logs | Investigate, restore |
| Storage failure | Check disk health | Repair/restore |

**Resolution Steps:**

```bash
# Step 1: Check chain status
stella proof status --scan-id $SCAN_ID

# Step 2: Find break point
stella proof list --since "30 days" --verify-chain

# Step 3: If database issue
# Check Postgres logs
# Restore from backup if needed
```

---
### 4. Proof Export Fails

**Symptoms:**

- "Failed to export proof bundle"
- Timeout during export

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Large inputs | Check SBOM/graph size | Use `--exclude-inputs` |
| Storage full | Check disk space | Clear space or use different path |
| Network timeout | Check network connectivity | Increase timeout |

**Resolution Steps:**

```bash
# Step 1: Export without inputs (smaller)
stella proof export --scan-id $SCAN_ID --output proof.zip

# Step 2: If it still fails, check disk space
# Windows: Get-Volume | Format-Table
# Linux: df -h

# Step 3: Try alternative location
stella proof export --scan-id $SCAN_ID --output /tmp/proof.zip
```

---
## Reachability Issues

### 1. Too Many UNKNOWN Findings

**Symptoms:**

- Most vulnerabilities show `UNKNOWN` reachability status
- Coverage percentage is low

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| No call graph | `stella scan graph summary` returns empty | Upload call graph |
| Incomplete graph | Low node count | Regenerate with more options |
| Symbol mismatch | Symbols not resolved | Check symbol resolution |

**Resolution Steps:**

```bash
# Step 1: Check if call graph exists
stella scan graph summary --scan-id $SCAN_ID

# Step 2: If missing, generate and upload
# .NET example:
dotnet build --generate-call-graph
stella scan graph upload --scan-id $SCAN_ID --file callgraph.json

# Step 3: Verify entrypoints detected
stella scan graph entrypoints --scan-id $SCAN_ID

# Step 4: Recompute reachability
stella reachability compute --scan-id $SCAN_ID --force
```

---
### 2. False UNREACHABLE Findings

**Symptoms:**

- Known-reachable code marked UNREACHABLE
- Security team reports false negatives

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Missing edges | Graph incomplete | Add missing calls |
| Reflection not detected | Edge type missing | Add reflection hints |
| Entrypoint not detected | Check entrypoints list | Add manual entrypoint |

**Resolution Steps:**

```bash
# Step 1: Explain the specific finding
stella reachability explain --scan-id $SCAN_ID \
  --cve CVE-2024-XXXX \
  --purl "pkg:type/name@version" \
  --verbose

# Step 2: Check if entrypoint is known
stella scan graph entrypoints --scan-id $SCAN_ID | grep -i "suspected-entry"

# Step 3: Add missing entrypoint if needed
stella scan graph upload --scan-id $SCAN_ID \
  --file additional-entrypoints.json \
  --merge

# Step 4: Recompute
stella reachability compute --scan-id $SCAN_ID --force
```

---
### 3. Computation Timeout

**Symptoms:**

- "Computation exceeded timeout"
- Job stuck at a fixed percentage

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Large graph | Check node/edge count | Increase timeout |
| Deep paths | Max depth too high | Reduce max depth |
| Cycles | Graph has loops | Enable cycle detection |

**Resolution Steps:**

```bash
# Step 1: Check graph size
stella scan graph summary --scan-id $SCAN_ID

# Step 2: Increase timeout
stella reachability compute --scan-id $SCAN_ID --timeout 900s

# Step 3: Or reduce depth
stella reachability compute --scan-id $SCAN_ID --max-depth 10

# Step 4: Or partition analysis
stella reachability compute --scan-id $SCAN_ID --partition-by artifact
```

---
### 4. Inconsistent Results Between Runs

**Symptoms:**

- Same scan produces different reachability results
- Status changes between POSSIBLY_REACHABLE and UNKNOWN

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Non-deterministic mode | Check config | Enable deterministic mode |
| Concurrent modifications | Check job logs | Serialize jobs |
| Caching issues | Clear cache | Disable or clear cache |

**Resolution Steps:**

```bash
# Step 1: Enable deterministic mode
stella reachability compute --scan-id $SCAN_ID --deterministic --seed "fixed-seed"

# Step 2: Clear cache if needed
stella cache clear --scope reachability

# Step 3: Re-run computation
stella reachability compute --scan-id $SCAN_ID --force
```

---
## Unknowns Issues

### 1. Unknowns Not Appearing

**Symptoms:**

- Expected unknowns not in registry
- Count seems too low

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Auto-suppress enabled | Check workspace settings | Disable auto-suppress |
| Filter active | Check list filters | Clear filters |
| Different workspace | Verify workspace ID | Use correct workspace |

**Resolution Steps:**

```bash
# Step 1: List without filters
stella unknowns list --workspace-id $WS_ID --status all

# Step 2: Check workspace settings
stella config get unknowns.auto-suppress

# Step 3: Disable auto-suppress if needed
stella config set unknowns.auto-suppress false
```

---
### 2. Resolution Not Persisting

**Symptoms:**

- Resolved unknowns reappear
- Status resets to pending

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Scope too narrow | Check resolution scope | Use broader scope |
| New occurrence | Different scan/artifact | Resolve at workspace level |
| Database issue | Check error logs | Contact support |

**Resolution Steps:**

```bash
# Step 1: Check current scope
stella unknowns show --id $UNKNOWN_ID

# Step 2: Re-resolve with broader scope
stella unknowns resolve --id $UNKNOWN_ID \
  --resolution mapped \
  --scope workspace \
  --comment "Resolving at workspace level"
```

---
### 3. Priority Score Incorrect

**Symptoms:**

- Low priority for critical component
- Scoring doesn't reflect risk

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Missing context | Automatic scoring limited | Manually escalate |
| Outdated metadata | Component info stale | Refresh metadata |

**Resolution Steps:**

```bash
# Step 1: Escalate with correct severity
stella unknowns escalate --id $UNKNOWN_ID \
  --reason "Handles authentication - critical despite low auto-score" \
  --severity critical

# Step 2: Request scoring review
# Add comment explaining the discrepancy
```

---
## Air-Gap / Offline Issues

### 1. Offline Kit Import Fails

**Symptoms:**

- "Invalid offline kit"
- "Trust anchor missing"

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Corrupted transfer | Verify checksums | Re-transfer |
| Missing components | Check kit contents | Re-generate kit |
| Version mismatch | Check scanner version | Use matching versions |

**Resolution Steps:**

```bash
# Step 1: Verify kit integrity
sha256sum offline-kit.tar.gz
# Compare with manifest.sha256

# Step 2: Check kit contents
tar -tzf offline-kit.tar.gz | head -20

# Step 3: If incomplete, regenerate on connected system
stella airgap prepare --feeds nvd,ghsa --output offline-kit/
```

---
### 2. Time Anchor Issues

**Symptoms:**

- "Time anchor expired"
- "Cannot verify timestamp"

**Possible Causes:**

| Cause | Diagnosis | Solution |
|-------|-----------|----------|
| Old kit | Check time anchor date | Refresh kit |
| Clock drift | Check system clock | Sync system time |
| Expired anchor | Anchor has TTL | Generate new anchor |

**Resolution Steps:**

```bash
# Step 1: Check time anchor
cat offline-kit/time-anchor/timestamp.json

# Step 2: If expired, generate new (on connected system)
stella airgap prepare-time-anchor --output offline-kit/time-anchor/

# Step 3: Transfer and use new anchor
```
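A time anchor is only meaningful within its TTL, so checking expiry before relying on it is cheap insurance. A sketch under the assumption that the anchor records an ISO-8601 `issuedAt` and a `ttlHours` field (both names are illustrative, not the real `timestamp.json` schema):

```python
from datetime import datetime, timedelta, timezone

def anchor_expired(anchor: dict, now: datetime) -> bool:
    """True when `now` is past issuedAt + TTL."""
    issued = datetime.fromisoformat(anchor["issuedAt"])
    return now > issued + timedelta(hours=anchor["ttlHours"])

anchor = {"issuedAt": "2025-12-15T00:00:00+00:00", "ttlHours": 72}
assert anchor_expired(anchor, datetime(2025, 12, 20, tzinfo=timezone.utc))
assert not anchor_expired(anchor, datetime(2025, 12, 16, tzinfo=timezone.utc))
```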
---
## Error Code Reference

| Error Code | Category | Meaning | Typical Resolution |
|------------|----------|---------|-------------------|
| E1001 | Proof | Manifest hash mismatch | Re-export with inputs |
| E1002 | Proof | Signature invalid | Check trust anchors |
| E1003 | Proof | Chain broken | Restore from backup |
| E2001 | Reach | No call graph | Upload call graph |
| E2002 | Reach | Computation timeout | Increase timeout |
| E2003 | Reach | Symbol not resolved | Check symbol DB |
| E3001 | Unknown | Resolution conflict | Use broader scope |
| E3002 | Unknown | Invalid category | Check category value |
| E4001 | Airgap | Invalid kit | Re-generate kit |
| E4002 | Airgap | Time anchor expired | Refresh anchor |

---
## Getting Help

### Collecting Diagnostics

```bash
# Generate diagnostic bundle
stella diagnostic collect --output diagnostics.zip

# Include specific scan
stella diagnostic collect --scan-id $SCAN_ID --output diagnostics.zip
```

### Log Locations

| Component | Log Path |
|-----------|----------|
| Scanner | `/var/log/stella/scanner.log` |
| Reachability | `/var/log/stella/reachability.log` |
| Proofs | `/var/log/stella/proofs.log` |
| CLI | `~/.stella/logs/cli.log` |

### Support Channels

- Documentation: `docs/` directory
- Issues: Internal issue tracker
- Emergency: On-call security team

---
## Related Documentation

- [Score Proofs Runbook](../operations/score-proofs-runbook.md)
- [Reachability Runbook](../operations/reachability-runbook.md)
- [Unknowns Queue Runbook](../operations/unknowns-queue-runbook.md)
- [Air-Gap Runbook](../airgap/score-proofs-reachability-airgap-runbook.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004
docs/dev/onboarding/unknowns-management-guide.md (new file, 484 lines)
# Unknowns Management Guide

**Sprint:** SPRINT_3500_0004_0004
**Audience:** Security Engineers, SOC Analysts, DevOps

## Introduction

The **Unknowns Registry** tracks components that cannot be fully analyzed due to missing data, unrecognized formats, or resolution failures. This guide explains how to manage unknowns effectively to maintain scan coverage and reduce blind spots.

---
## What are Unknowns?

During vulnerability scanning, some components cannot be analyzed:

```
Scan Results:
  ✅ 245 components analyzed
  ⚠️ 12 unknowns registered

Unknowns:
  - pkg:npm/internal-lib@1.0.0 → unmapped_purl (no CVE mapping)
  - pkg:pypi/custom-tool@2.3.1 → checksum_miss (not in advisory DB)
  - [native binary] → language_gap (unsupported)
```

**Why Unknowns Matter:**

- They represent **blind spots** in your security posture
- Untracked unknowns accumulate over time
- Some may hide critical vulnerabilities

---
## Categories of Unknowns

| Category | Description | Common Causes |
|----------|-------------|---------------|
| `unmapped_purl` | No CVE/advisory mapping exists | Internal packages, new releases |
| `checksum_miss` | Binary checksum not found | Modified binaries, custom builds |
| `language_gap` | Language not supported | COBOL, Fortran, proprietary |
| `parsing_failure` | Manifest couldn't be parsed | Corrupted files, unusual formats |
| `network_timeout` | Advisory feed unavailable | Network issues, rate limiting |
| `unrecognized_format` | Unknown file format | Custom packaging, obfuscation |

---
## The 2-Factor Ranking System

Unknowns are prioritized using two factors:

### Factor 1: Vulnerability Potential (0-5)

How likely is this component to have vulnerabilities?

| Score | Criteria |
|-------|----------|
| 5 | External dependency with network exposure |
| 4 | External dependency, no network exposure |
| 3 | Internal dependency with external data handling |
| 2 | Internal dependency, limited exposure |
| 1 | Development-only tooling |
| 0 | Static assets, documentation |

### Factor 2: Impact Potential (0-5)

If vulnerable, how severe would exploitation be?

| Score | Criteria |
|-------|----------|
| 5 | Handles authentication, encryption, or PII |
| 4 | Business-critical functionality |
| 3 | User-facing features |
| 2 | Internal tooling |
| 1 | Logging, monitoring |
| 0 | No runtime impact |

### Combined Priority

```
Priority = Vulnerability × Impact
```

| Priority Score | Level | Action |
|----------------|-------|--------|
| 20-25 | Critical | Investigate immediately |
| 12-19 | High | Review within 24 hours |
| 6-11 | Medium | Review within 1 week |
| 1-5 | Low | Track and monitor |
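The two factors combine multiplicatively, so a component that is both likely vulnerable and high-impact dominates the queue. A direct translation of the tables above into code (the function and level names are illustrative):

```python
def priority(vulnerability: int, impact: int) -> tuple:
    """Combine the two 0-5 factors and map the product to a triage level."""
    score = vulnerability * impact
    if score >= 20:
        level = "critical"    # investigate immediately
    elif score >= 12:
        level = "high"        # review within 24 hours
    elif score >= 6:
        level = "medium"      # review within 1 week
    else:
        level = "low"         # track and monitor
    return score, level

assert priority(5, 5) == (25, "critical")  # e.g. external auth library handling PII
assert priority(4, 3) == (12, "high")
assert priority(2, 2) == (4, "low")
```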
---
## Core Workflows

### 1. Daily Triage

Review and categorize new unknowns:

```bash
# Step 1: Get summary
stella unknowns summary --workspace-id $WS_ID

# Step 2: List high-priority pending
stella unknowns list --status pending --min-score 12

# Step 3: Review each
stella unknowns show --id unknown-001

# Step 4: Take action (escalate, resolve, or suppress)
stella unknowns escalate --id unknown-001 --reason "Needs security review"
```
### 2. Escalation

When an unknown requires expert review:

```bash
stella unknowns escalate \
  --id unknown-001 \
  --reason "Custom cryptographic library - needs audit" \
  --assignee security-team \
  --severity high \
  --due-date 2025-12-27
```

**When to escalate:**

- Priority score ≥ 12
- Cryptographic or security-related components
- Components handling sensitive data
- Public-facing dependencies
### 3. Resolution

Mark unknowns as resolved with documentation:

```bash
# Resolved: Mapping added
stella unknowns resolve --id unknown-001 \
  --resolution mapped \
  --comment "Added CPE mapping to internal DB"

# Resolved: Not applicable
stella unknowns resolve --id unknown-002 \
  --resolution not_applicable \
  --comment "Development-only tool, not in production"

# Resolved: Component removed
stella unknowns resolve --id unknown-003 \
  --resolution removed \
  --comment "Replaced with supported alternative"
```
### 4. Suppression

Accept risk for known-safe unknowns:

```bash
stella unknowns suppress \
  --id unknown-004 \
  --reason "Internal UI component, no external exposure" \
  --expires 2026-01-01 \
  --scope workspace \
  --approver security@example.com
```

**Suppression best practices:**

- Always set an expiration date
- Document the risk assessment
- Require an approver for production suppressions
- Review suppressions quarterly
### 5. Bulk Triage

Process multiple unknowns efficiently:

```bash
# Create triage file
cat > triage.json << 'EOF'
{
  "decisions": [
    {
      "id": "unknown-001",
      "action": "resolve",
      "resolution": "mapped",
      "comment": "Added mapping"
    },
    {
      "id": "unknown-002",
      "action": "suppress",
      "reason": "Dev-only tool",
      "expires": "2026-01-01"
    }
  ]
}
EOF

# Preview
stella unknowns bulk-triage --file triage.json --dry-run

# Apply
stella unknowns bulk-triage --file triage.json
```
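Before applying a large batch, it can help to validate the decisions file locally so one malformed entry doesn't abort the run. A pre-flight sketch; the required-field rules below are inferred from the `triage.json` example and are assumptions, not the tool's actual schema:

```python
# Fields each action needs, inferred from the triage.json example (assumption)
REQUIRED = {
    "resolve": {"id", "action", "resolution"},
    "suppress": {"id", "action", "reason", "expires"},
}

def validate(doc: dict) -> list:
    """Return a list of human-readable problems; empty means the file looks OK."""
    errors = []
    for i, decision in enumerate(doc.get("decisions", [])):
        action = decision.get("action")
        if action not in REQUIRED:
            errors.append(f"decision {i}: unknown action {action!r}")
            continue
        missing = REQUIRED[action] - decision.keys()
        if missing:
            errors.append(f"decision {i}: missing {sorted(missing)}")
    return errors

ok = {"decisions": [{"id": "u1", "action": "resolve", "resolution": "mapped"}]}
bad = {"decisions": [{"id": "u2", "action": "suppress", "reason": "dev-only"}]}

assert validate(ok) == []
assert validate(bad) == ["decision 0: missing ['expires']"]
```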
---
## Understanding Unknown Details

### Example Unknown Record

```bash
stella unknowns show --id unknown-001 --verbose
```

**Output:**

```yaml
id: unknown-001
purl: pkg:npm/custom-auth@3.2.1
category: unmapped_purl
status: pending

scoring:
  vulnerabilityPotential: 5   # External auth library
  impactPotential: 5          # Handles authentication
  priorityScore: 25           # CRITICAL

metadata:
  firstSeen: 2025-12-15T08:00:00Z
  lastSeen: 2025-12-20T10:00:00Z
  affectedScans: 15
  affectedImages: 3

context:
  files:
    - node_modules/custom-auth/package.json
    - package-lock.json
  dependencyPath:
    - app → express → custom-auth

analysis:
  reason: "No CVE/advisory mapping exists for this package"
  attempts:
    - source: nvd
      result: no_match
    - source: ghsa
      result: no_match
    - source: osv
      result: no_match
  suggestions:
    - "Search for upstream security advisories"
    - "Contact vendor for security information"
    - "Consider static analysis of source code"
```

---
## Status Lifecycle

```
┌─────────────┐
│   pending   │ ◄──── New unknown detected
└──────┬──────┘
       │
       ├───────────────┬───────────────┐
       │               │               │
       ▼               ▼               ▼
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│  escalated  │ │ suppressed  │ │  resolved   │
└──────┬──────┘ └──────┬──────┘ └─────────────┘
       │               │
       ▼               ▼
┌─────────────┐ ┌─────────────┐
│  resolved   │ │  (expires)  │────▶ pending
└─────────────┘ └─────────────┘
```
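The lifecycle above is a small state machine; encoding the allowed transitions makes invalid status changes easy to reject. A sketch, assuming (as the diagram shows) that an expired suppression returns the unknown to `pending`:

```python
ALLOWED = {
    "pending":    {"escalated", "suppressed", "resolved"},
    "escalated":  {"resolved"},
    "suppressed": {"pending"},   # suppression expired
    "resolved":   set(),         # terminal state
}

def transition(current: str, new: str) -> str:
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

state = "pending"
state = transition(state, "escalated")
state = transition(state, "resolved")
assert state == "resolved"

try:
    transition("resolved", "pending")
except ValueError:
    pass  # resolved is terminal, as expected
else:
    raise AssertionError("expected ValueError")
```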
---
## Integration Patterns

### CI/CD Pipeline

```yaml
# .github/workflows/security.yml
- name: Scan for vulnerabilities
  run: stella scan run --image $IMAGE

- name: Check unknowns threshold
  run: |
    CRITICAL_UNKNOWNS=$(stella unknowns list \
      --scan-id $SCAN_ID \
      --min-score 20 \
      --output-format json | jq 'length')

    if [ "$CRITICAL_UNKNOWNS" -gt 0 ]; then
      echo "::error::$CRITICAL_UNKNOWNS critical unknowns detected"
      exit 1
    fi

- name: Archive unknowns report
  run: |
    stella unknowns export \
      --scan-id $SCAN_ID \
      --output unknowns-${{ github.sha }}.json
```
### Metrics and Monitoring

Track unknowns over time:

```bash
# Export metrics for Prometheus/Grafana
stella unknowns summary --workspace-id $WS_ID --output-format json > /metrics/unknowns.json
```

**Key metrics to track:**

- Total unknowns count (by status)
- Average time to resolution
- Suppression rate
- Unknown-to-finding ratio

### SIEM Integration

Forward unknowns to your SIEM:

```bash
# Export in SIEM-compatible format
stella unknowns export \
  --workspace-id $WS_ID \
  --status pending,escalated \
  --format ndjson \
  --output /var/log/stella/unknowns.ndjson
```

---
## Best Practices

### 1. Don't Ignore Unknowns

Every unknown is a potential blind spot:

```bash
# Weekly review of all unknowns, oldest first
stella unknowns list --status pending --sort firstSeen --order asc
```

### 2. Set SLAs by Priority

| Priority | Target Resolution |
|----------|-------------------|
| Critical (20-25) | 24 hours |
| High (12-19) | 3 days |
| Medium (6-11) | 1 week |
| Low (1-5) | 2 weeks |

### 3. Document Everything

Resolution comments help future triage:

```bash
# Good
stella unknowns resolve --id unknown-001 \
  --resolution mapped \
  --comment "Added CPE cpe:2.3:a:vendor:product:1.0.0. Mapping verified against vendor advisory VA-2025-001."

# Bad
stella unknowns resolve --id unknown-001 \
  --resolution mapped \
  --comment "fixed"
```

### 4. Review Suppressions Regularly

```bash
# List suppressions expiring soon
stella unknowns list \
  --status suppressed \
  --max-age "30 days" \
  --output-format table
```

### 5. Improve Coverage Over Time

Track which categories generate the most unknowns:

```bash
stella unknowns summary --workspace-id $WS_ID

# If unmapped_purl is dominant:
# - Consider adding internal registry mappings
# - Work with vendors on CPE mappings
# - Contribute mappings to public databases
```

---
## Troubleshooting

### "Too many unknowns from one category"

**Cause**: Systemic gap in coverage.

**Solutions by category:**

| Category | Solution |
|----------|----------|
| `unmapped_purl` | Add internal package mappings; request vendor CVE IDs |
| `checksum_miss` | Submit binaries to advisory databases |
| `language_gap` | File feature request; consider SAST tools |
| `parsing_failure` | Report bug; provide sample files |

### "Unknowns keep reappearing after resolution"

**Cause**: Resolution not persisted or scope too narrow.

**Solution**:

```bash
# Check resolution scope
stella unknowns show --id unknown-001

# Expand scope if needed
stella unknowns resolve --id unknown-001 \
  --resolution mapped \
  --scope workspace   # Not just the single scan
```

### "Priority scoring seems wrong"

**Cause**: Automatic scoring may not reflect context.

**Solution**: Manually adjust or provide context:

```bash
# When escalating, provide context
stella unknowns escalate --id unknown-001 \
  --reason "Handles PII despite low automatic score" \
  --severity high
```

---
## FAQ

### Q: Should unknowns block deployments?

**A**: It depends on your risk tolerance:

- **Strict**: Block on any critical unknown
- **Moderate**: Block on critical, warn on high
- **Permissive**: Warn only, track for trending

### Q: How do I reduce unknowns over time?

**A**:

1. Contribute mappings to public databases
2. Maintain an internal mapping database
3. Replace unmappable dependencies with alternatives
4. Work with vendors to publish CPE data

### Q: What's the difference between suppress and resolve?

**A**:

- **Suppress**: Acknowledges risk, sets an expiration, reviewed periodically
- **Resolve**: Permanently closes, requires evidence or justification

### Q: Can I automate triage?

**A**: Yes, use bulk-triage with rules:

```bash
# Auto-resolve known-safe patterns
stella unknowns bulk-triage --file auto-rules.json
```

---
## Related Documentation

- [Unknowns CLI Reference](../cli/unknowns-cli-reference.md)
- [Unknowns API Reference](../api/score-proofs-reachability-api-reference.md)
- [Unknowns Queue Runbook](../operations/unknowns-queue-runbook.md)
- [Score Proofs Concept Guide](./score-proofs-concept-guide.md)

---

**Last Updated**: 2025-12-20
**Version**: 1.0.0
**Sprint**: 3500.0004.0004
docs/dev/onboarding/video-tutorial-scripts.md (new file, 505 lines)
# Video Tutorial Scripts

**Sprint:** SPRINT_3500_0004_0004
**Format:** Tutorial scripts for screen recording

This document contains scripts for video tutorials covering Score Proofs, Reachability Analysis, and Unknowns Management.

---
## Video 1: Introduction to Score Proofs (5 min)

### Script

**[Opening - Logo/Title Card]**

> Welcome to StellaOps. In this tutorial, you'll learn how Score Proofs provide cryptographic guarantees that your vulnerability scores are reproducible and verifiable.

**[Screen: Terminal]**

> Let's start with a typical vulnerability scan. Here I have an SBOM for my application.

```bash
stella scan --sbom ./sbom.json
```

> The scan produces findings, but how do we know these results are accurate? Can we prove they're reproducible?

**[Screen: Terminal - enable proofs]**

> Let's run the same scan with proof generation enabled.

```bash
stella scan --sbom ./sbom.json --generate-proof --output ./scan-results/
```

> Notice the `--generate-proof` flag. StellaOps now creates a cryptographic attestation alongside the findings.

**[Screen: File browser showing proof bundle]**

> Here's what got generated:
> - `manifest.json` contains content-addressed references to every input
> - `proof.dsse` is the signed attestation
> - `findings.json` has the actual vulnerability results

**[Screen: Terminal - verify]**

> Anyone with this bundle can verify the proof:

```bash
stella proof verify ./scan-results/proof.dsse
```

> The signature is valid, and we can see exactly which inputs produced these results.
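As background for viewers: the `.dsse` extension refers to the DSSE (Dead Simple Signing Envelope) format, which wraps a payload, a payload type, and one or more signatures over a "pre-authentication encoding" of both. The sketch below shows the verification shape with an HMAC key for illustration only; StellaOps's actual key types and verification flow are not specified in this script.

```python
import base64
import hashlib
import hmac
import json

def pae(payload_type: str, payload: bytes) -> bytes:
    # DSSE "Pre-Authentication Encoding": the exact bytes that get signed.
    return b"DSSEv1 %d %s %d %s" % (
        len(payload_type), payload_type.encode(), len(payload), payload)

def verify_envelope(envelope: dict, key: bytes) -> bool:
    payload = base64.b64decode(envelope["payload"])
    signed = pae(envelope["payloadType"], payload)
    expected = hmac.new(key, signed, hashlib.sha256).digest()
    # Accept if any attached signature checks out against this key.
    return any(
        hmac.compare_digest(base64.b64decode(sig["sig"]), expected)
        for sig in envelope["signatures"])

# Build and verify a toy envelope.
key = b"demo-key"
body = json.dumps({"finding": "CVE-2024-1234"}).encode()
env = {
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(body).decode(),
    "signatures": [{"keyid": "demo", "sig": base64.b64encode(
        hmac.new(key, pae("application/vnd.in-toto+json", body),
                 hashlib.sha256).digest()).decode()}],
}
print(verify_envelope(env, key))  # True
```

Because the payload type is bound into the signed bytes, a valid signature cannot be replayed over a different kind of document.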
**[Screen: Terminal - replay]**

> We can even replay the scan to prove determinism:

```bash
stella score replay ./scan-results/ --verify
```

> Same inputs produce the exact same outputs. This is powerful for audits and compliance.

**[Closing]**

> That's Score Proofs in action. For more details, check the documentation or watch our deep-dive tutorials.

---
## Video 2: Understanding Reachability Analysis (7 min)

### Script

**[Opening]**

> Vulnerability scanners often overwhelm teams with alerts. Reachability analysis helps you focus on what actually matters.

**[Screen: Slide showing vulnerability count]**

> Imagine your scanner finds 200 vulnerabilities. Studies show 80-90% are unreachable: the vulnerable code path never executes in your application.

**[Screen: Terminal]**

> Let's see reachability in action. First, we generate a call graph:

```bash
stella scan graph ./src --output ./callgraph.json
```

> This analyzes your source code to map function calls.

**[Screen: Visualization of call graph]**

> The call graph shows which functions call which. Starting from your entry points, we can trace all possible execution paths.
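The tracing step above boils down to a graph search. This is not StellaOps's implementation, just a minimal sketch of the idea, assuming the call graph is stored as an adjacency map from caller to callees:

```python
from collections import deque

def reachable(call_graph: dict[str, list[str]], entry_points: list[str]) -> set[str]:
    """Breadth-first walk from the entry points. Any function never
    visited can be marked NOT_REACHABLE."""
    seen = set(entry_points)
    queue = deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

graph = {
    "main": ["Service.handle"],
    "Service.handle": ["vulnerable_parse"],
    "unused_helper": ["vulnerable_parse"],
}
live = reachable(graph, ["main"])
print("vulnerable_parse" in live)  # True: on an executable path
print("unused_helper" in live)     # False: dead code
```

Note that `vulnerable_parse` is reachable even though `unused_helper` also calls it; reachability is a property of paths from entry points, not of individual callers.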
**[Screen: Terminal - scan with reachability]**

> Now let's scan with reachability enabled:

```bash
stella scan --sbom ./sbom.json --call-graph ./callgraph.json --reachability
```

**[Screen: Results showing reachability verdicts]**

> Look at the results:
> - CVE-2024-1234: `REACHABLE_STATIC` - this IS on an executable path
> - CVE-2024-5678: `NOT_REACHABLE` - safely ignore this one
> - CVE-2024-9012: `POSSIBLY_REACHABLE` - needs review

**[Screen: Terminal - filter]**

> We can filter to only actionable items:

```bash
stella query --filter "reachability=REACHABLE_STATIC"
```

> From 200 findings, we're now focused on the 20 that actually matter.

**[Screen: Diagram showing path]**

> For reachable vulnerabilities, we can see the exact path:

```bash
stella reachability explain --cve CVE-2024-1234
```

> `Main.java:15` calls `Service.java:42`, which calls the vulnerable function.

**[Closing]**

> Reachability analysis transforms alert fatigue into focused action. Enable it today.

---
## Video 3: Managing the Unknowns Queue (5 min)

### Script

**[Opening]**

> Sometimes StellaOps encounters packages it can't fully analyze. The Unknowns Queue helps you track and resolve these blind spots.

**[Screen: Terminal - scan with unknowns]**

> When we scan, some packages may be flagged as unknown:

```bash
stella scan --sbom ./sbom.json
# ...
# ⚠️ 12 unknowns detected
```

**[Screen: Terminal - list unknowns]**

> Let's see what's in the queue:

```bash
stella unknowns list
```

> We have:
> - `internal-auth-lib@1.0.0` - no advisory match
> - `custom-logger@2.1.0` - checksum miss
> - A couple more...

**[Screen: Terminal - show details]**

> Let's look at one in detail:

```bash
stella unknowns show UNK-001
```

> This is `internal-auth-lib`: it's our internal package, so of course there's no public advisory!

**[Screen: Terminal - resolve]**

> We can resolve it:

```bash
stella unknowns resolve UNK-001 --resolution internal_package --note "Our internal auth library"
```

> Now it won't keep appearing.

**[Screen: Terminal - patterns]**

> For bulk handling, configure patterns:

```bash
stella config set unknowns.internalPatterns '@mycompany/*,internal-*'
```

> All packages matching these patterns are auto-classified.
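Patterns of this shape are conventional shell-style globs. A sketch of how glob patterns classify package names (illustrative only; the exact matching rules StellaOps applies are not defined in this script):

```python
from fnmatch import fnmatch

def classify(package: str, internal_patterns: list[str]) -> str:
    """Mark a package internal if any glob pattern matches its name."""
    if any(fnmatch(package, pat) for pat in internal_patterns):
        return "internal_package"
    return "unknown"

patterns = ["@mycompany/*", "internal-*"]
print(classify("@mycompany/auth", patterns))    # internal_package
print(classify("internal-auth-lib", patterns))  # internal_package
print(classify("left-pad", patterns))           # unknown
```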
**[Screen: Terminal - stats]**

> Track your queue health:

```bash
stella unknowns stats
```

> Target: less than 5% unknown packages.

**[Closing]**

> Regular unknown management keeps your security coverage complete. Don't let blind spots accumulate.

---
## Video 4: Integrating with CI/CD (8 min)

### Script

**[Opening]**

> Let's add Score Proofs and Reachability to your CI/CD pipeline.

**[Screen: GitHub Actions YAML]**

> Here's a GitHub Actions workflow:

```yaml
name: Security Scan
on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Generate SBOM
        run: syft . -o cyclonedx-json > sbom.json

      - name: Generate Call Graph
        run: stella scan graph ./src --output callgraph.json

      - name: Scan with Proofs and Reachability
        run: |
          stella scan \
            --sbom sbom.json \
            --call-graph callgraph.json \
            --generate-proof \
            --reachability \
            --output ./results/

      - name: Check for Reachable Criticals
        run: |
          COUNT=$(stella query \
            --input ./results/ \
            --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" \
            --count)
          if [ "$COUNT" -gt 0 ]; then
            echo "Found $COUNT reachable critical vulnerabilities"
            exit 1
          fi

      - name: Upload Proof Bundle
        uses: actions/upload-artifact@v4
        with:
          name: security-proof
          path: ./results/
```

**[Screen: Pipeline running]**

> Let's trigger the pipeline... and watch it run.

**[Screen: Pipeline success/failure]**

> The pipeline:
> 1. Generates an SBOM
> 2. Creates a call graph
> 3. Scans with proofs and reachability
> 4. Fails if reachable criticals exist

**[Screen: Artifacts]**

> The proof bundle is saved as an artifact for auditing.

**[Screen: PR comment]**

> You can even post results to PRs:

```yaml
- name: Comment Results
  run: |
    stella report --format markdown > report.md
    gh pr comment --body-file report.md
```

**[Closing]**

> Automated security with proof. Check out our GitLab and Jenkins templates too.
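For presenters asked about GitLab: the same gate translates roughly as follows. This is a hedged sketch, not one of the official templates mentioned in the script; image name and paths are placeholders.

```yaml
security-scan:
  image: ubuntu:24.04
  script:
    - syft . -o cyclonedx-json > sbom.json
    - stella scan graph ./src --output callgraph.json
    - stella scan --sbom sbom.json --call-graph callgraph.json --generate-proof --reachability --output ./results/
    - |
      COUNT=$(stella query --input ./results/ \
        --filter "reachability=REACHABLE_STATIC AND severity=CRITICAL" --count)
      [ "$COUNT" -eq 0 ] || { echo "Found $COUNT reachable criticals"; exit 1; }
  artifacts:
    paths:
      - results/
```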
---
## Video 5: Air-Gap Operations (6 min)

### Script

**[Opening]**

> For classified or restricted environments, StellaOps works fully offline. Let's see how.

**[Screen: Diagram - Air-gapped setup]**

> In an air-gapped environment, there's no internet access. We use offline kits to bring in vulnerability data.

**[Screen: Terminal - connected side]**

> On a connected machine, prepare the offline kit:

```bash
stella offline-kit create \
  --include-feeds nvd,github,osv \
  --include-trust-bundle \
  --output ./offline-kit.tar.gz
```

**[Screen: USB/secure transfer]**

> Transfer this securely to the air-gapped environment.

**[Screen: Terminal - air-gapped side]**

> On the air-gapped machine, import:

```bash
stella offline-kit import ./offline-kit.tar.gz
```

**[Screen: Terminal - verify]**

> Verify the kit integrity:

```bash
stella offline-kit verify
```

> Signatures match: we can trust this data.

**[Screen: Terminal - scan]**

> Now scan as usual:

```bash
stella scan --sbom ./sbom.json --offline
```

> Results use the imported feeds. No network required.

**[Screen: Terminal - generate proof]**

> Proofs work offline too:

```bash
stella scan --sbom ./sbom.json --offline --generate-proof
```

**[Screen: Terminal - verify offline]**

> And verification:

```bash
stella proof verify ./proof.dsse --offline
```

> Everything works without connectivity.

**[Closing]**

> StellaOps is sovereign by design. Your security doesn't depend on cloud availability.

---
## Video 6: Deep Dive - Deterministic Replay (10 min)

### Script

**[Opening]**

> Score replay is a powerful auditing feature. Let's explore how it works under the hood.

**[Screen: Diagram - Replay architecture]**

> When you generate a proof, StellaOps captures:
> - Content-addressed SBOM (sha256 hash)
> - Feed snapshots at exact timestamps
> - Algorithm version
> - Configuration state

**[Screen: JSON manifest]**

> Here's a manifest:

```json
{
  "scanId": "scan-12345",
  "inputs": {
    "sbom": {
      "digest": "sha256:a1b2c3...",
      "format": "cyclonedx-1.6"
    },
    "feeds": {
      "nvd": {
        "digest": "sha256:d4e5f6...",
        "asOf": "2025-01-15T00:00:00Z"
      }
    }
  },
  "environment": {
    "algorithmVersion": "2.5.0"
  }
}
```
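Content addressing just means naming an input by the hash of its bytes. A quick sketch of how a digest like the one in the manifest above is computed; the `sha256:` prefix follows the manifest's convention, the rest is illustrative:

```python
import hashlib
import json

def content_address(data: bytes) -> str:
    """Name a blob by the SHA-256 of its exact bytes."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Serialize deterministically so identical content hashes identically.
sbom_bytes = json.dumps({"components": []}, sort_keys=True).encode()
print(content_address(sbom_bytes))

# Any change to the bytes, however small, yields a different digest,
# so the manifest pins exactly one version of each input.
print(content_address(sbom_bytes + b" ") == content_address(sbom_bytes))  # False
```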
**[Screen: Terminal - replay]**

> To replay, StellaOps:
> 1. Retrieves inputs by their digests
> 2. Restores configuration
> 3. Re-runs the exact computation

```bash
stella score replay ./proof-bundle/ --verbose
```

**[Screen: Output comparison]**

> The output shows:
> - ✅ Finding 1 matches
> - ✅ Finding 2 matches
> - ✅ All scores identical
> - ✅ Merkle root matches

**[Screen: Diagram - Merkle tree]**

> Individual findings can be verified using Merkle proofs, without replaying the entire scan.

```bash
stella proof verify-finding --proof ./proof.dsse --finding-id F-001
```
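The underlying idea: each finding is a leaf in a hash tree, and an inclusion proof supplies only the sibling hashes along the path to the root. A generic sketch follows; StellaOps's exact leaf encoding and hashing scheme are not specified here:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def verify_inclusion(leaf: bytes, proof: list[tuple[str, bytes]], root: bytes) -> bool:
    """Walk from a leaf hash to the root using sibling hashes.
    Each proof step says whether the sibling sits to the left or right."""
    node = h(leaf)
    for side, sibling in proof:
        node = h(sibling + node) if side == "left" else h(node + sibling)
    return node == root

# Toy tree over four findings.
leaves = [b"F-001", b"F-002", b"F-003", b"F-004"]
l0, l1, l2, l3 = (h(x) for x in leaves)
n01, n23 = h(l0 + l1), h(l2 + l3)
root = h(n01 + n23)

# Prove F-001 is in the tree without revealing the other findings' contents.
proof = [("right", l1), ("right", n23)]
print(verify_inclusion(b"F-001", proof, root))  # True
```

The proof size grows with the logarithm of the number of findings, which is why single-finding verification stays cheap even for very large scans.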
**[Screen: Terminal - handling drift]**

> What if feeds have updated?

```bash
stella score replay ./proof-bundle/
# ⚠️ Feed digest mismatch: NVD
# Expected: sha256:d4e5f6...
# Current:  sha256:x7y8z9...
```

> Use feed freezing:

```bash
stella feeds freeze --from-manifest ./manifest.json
stella score replay ./proof-bundle/ --frozen
```

**[Closing]**

> Deterministic replay provides true auditability. Every score can be proven, every time.

---

## Recording Notes

### Equipment
- Screen recording: OBS Studio or Camtasia
- Resolution: 1920x1080
- Terminal font size: 16-18pt
- Use a dark theme for the terminal

### Preparation
- Clean terminal history
- Pre-stage all files
- Have backup recordings of long-running commands
- Test all commands before recording

### Post-Production
- Add chapter markers
- Include closed captions
- Export to MP4 (H.264) for compatibility
- Upload to the internal video platform

### Duration Targets
- Introduction videos: 5-7 minutes
- Deep dives: 8-12 minutes
- Quick tips: 2-3 minutes

---

## Revision History

| Version | Date | Author | Changes |
|---------|------|--------|---------|
| 1.0 | 2025-12-20 | Agent | Initial scripts created |
@@ -1,8 +0,0 @@
id: excitor-my-provider
assembly: StellaOps.Excitor.Connectors.MyProvider.dll
entryPoint: StellaOps.Excitor.Connectors.MyProvider.MyConnectorPlugin
description: |
  Example connector template. Replace metadata before shipping.
tags:
  - excitor
  - template
@@ -1,12 +0,0 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
  </PropertyGroup>
  <ItemGroup>
    <!-- Adjust the relative path when copying this template into a repo -->
    <ProjectReference Include="..\..\..\..\src\StellaOps.Excitor.Connectors.Abstractions\StellaOps.Excitor.Connectors.Abstractions.csproj" />
  </ItemGroup>
</Project>
@@ -1,72 +0,0 @@
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Runtime.CompilerServices;
using Microsoft.Extensions.Logging;
using StellaOps.Excitor.Connectors.Abstractions;
using StellaOps.Excitor.Core;

namespace StellaOps.Excitor.Connectors.MyProvider;

public sealed class MyConnector : VexConnectorBase
{
    private readonly IEnumerable<IVexConnectorOptionsValidator<MyConnectorOptions>> _validators;
    private MyConnectorOptions? _options;

    public MyConnector(VexConnectorDescriptor descriptor, ILogger<MyConnector> logger, TimeProvider timeProvider, IEnumerable<IVexConnectorOptionsValidator<MyConnectorOptions>> validators)
        : base(descriptor, logger, timeProvider)
    {
        _validators = validators;
    }

    public override ValueTask ValidateAsync(VexConnectorSettings settings, CancellationToken cancellationToken)
    {
        _options = VexConnectorOptionsBinder.Bind(
            Descriptor,
            settings,
            validators: _validators);

        LogConnectorEvent(LogLevel.Information, "validate", "MyConnector configuration loaded.",
            new Dictionary<string, object?>
            {
                ["catalogUri"] = _options.CatalogUri,
                ["maxParallelRequests"] = _options.MaxParallelRequests,
            });

        return ValueTask.CompletedTask;
    }

    public override IAsyncEnumerable<VexRawDocument> FetchAsync(VexConnectorContext context, CancellationToken cancellationToken)
    {
        if (_options is null)
        {
            throw new InvalidOperationException("Connector not validated.");
        }

        return FetchInternalAsync(context, cancellationToken);
    }

    private async IAsyncEnumerable<VexRawDocument> FetchInternalAsync(VexConnectorContext context, [EnumeratorCancellation] CancellationToken cancellationToken)
    {
        LogConnectorEvent(LogLevel.Information, "fetch", "Fetching catalog window...");

        // Replace with real HTTP logic.
        await Task.Delay(10, cancellationToken);

        var metadata = BuildMetadata(builder => builder
            .Add("sourceUri", _options!.CatalogUri)
            .Add("window", context.Since?.ToString("O") ?? "full"));

        yield return CreateRawDocument(
            VexDocumentFormat.CsafJson,
            new Uri($"{_options.CatalogUri.TrimEnd('/')}/sample.json"),
            new byte[] { 0x7B, 0x7D }, // "{}" as UTF-8 bytes
            metadata);
    }

    public override ValueTask<VexClaimBatch> NormalizeAsync(VexRawDocument document, CancellationToken cancellationToken)
    {
        var claims = ImmutableArray<VexClaim>.Empty;
        var diagnostics = ImmutableDictionary<string, string>.Empty;
        return ValueTask.FromResult(new VexClaimBatch(document, claims, diagnostics));
    }
}
@@ -1,16 +0,0 @@
using System.ComponentModel.DataAnnotations;

namespace StellaOps.Excitor.Connectors.MyProvider;

public sealed class MyConnectorOptions
{
    [Required]
    [Url]
    public string CatalogUri { get; set; } = default!;

    [Required]
    public string ApiKey { get; set; } = default!;

    [Range(1, 32)]
    public int MaxParallelRequests { get; set; } = 4;
}
@@ -1,15 +0,0 @@
using System.Collections.Generic;
using StellaOps.Excitor.Connectors.Abstractions;

namespace StellaOps.Excitor.Connectors.MyProvider;

public sealed class MyConnectorOptionsValidator : IVexConnectorOptionsValidator<MyConnectorOptions>
{
    public void Validate(VexConnectorDescriptor descriptor, MyConnectorOptions options, IList<string> errors)
    {
        if (!options.CatalogUri.StartsWith("https://", StringComparison.OrdinalIgnoreCase))
        {
            errors.Add("CatalogUri must use HTTPS.");
        }
    }
}
@@ -1,27 +0,0 @@
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging;
using StellaOps.Plugin;
using StellaOps.Excitor.Connectors.Abstractions;
using StellaOps.Excitor.Core;

namespace StellaOps.Excitor.Connectors.MyProvider;

public sealed class MyConnectorPlugin : IConnectorPlugin
{
    private static readonly VexConnectorDescriptor Descriptor = new(
        id: "excitor:my-provider",
        kind: VexProviderKind.Vendor,
        displayName: "My Provider VEX");

    public string Name => Descriptor.DisplayName;

    public bool IsAvailable(IServiceProvider services) => true;

    public IFeedConnector Create(IServiceProvider services)
    {
        var logger = services.GetRequiredService<ILogger<MyConnector>>();
        var timeProvider = services.GetRequiredService<TimeProvider>();
        var validators = services.GetServices<IVexConnectorOptionsValidator<MyConnectorOptions>>();
        return new MyConnector(Descriptor, logger, timeProvider, validators);
    }
}