feat: Implement distro-native version comparison for RPM, Debian, and Alpine packages

- Add RpmVersionComparer for RPM version comparison with epoch, version, and release handling.
- Introduce DebianVersion for parsing Debian EVR (Epoch:Version-Release) strings.
- Create ApkVersion for parsing Alpine APK version strings with suffix support.
- Define IVersionComparator interface for version comparison with proof-line generation.
- Implement VersionComparisonResult struct to encapsulate comparison results and proof lines.
- Add tests for Debian and RPM version comparers to ensure correct functionality and edge case handling.
- Create project files for the version comparison library and its tests.
Author: StellaOps Bot
Date: 2025-12-22 09:49:38 +02:00
Parent: aff0ceb2fe
Commit: df94136727
111 changed files with 30413 additions and 1813 deletions


@@ -0,0 +1,346 @@
# Sprint 2000.0003.0001 · Alpine Connector and APK Version Comparator
## Topic & Scope
- Implement Alpine Linux advisory connector for Concelier.
- Implement APK version comparator following Alpine's versioning semantics.
- Integrate with existing distro connector framework.
- **Working directory:** `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Distro.Alpine/`
## Advisory Reference
- **Source:** `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- **Gap Identified:** Alpine/APK support is explicitly recommended by the advisory but is neither implemented anywhere in the codebase nor scheduled in any sprint.
## Dependencies & Concurrency
- **Upstream**: None (uses existing connector framework)
- **Downstream**: Scanner distro detection, BinaryIndex Alpine corpus (future)
- **Safe to parallelize with**: SPRINT_2000_0003_0002 (Version Tests)
## Documentation Prerequisites
- `docs/modules/concelier/architecture.md`
- `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Distro.Debian/` (reference implementation)
- Alpine Linux secdb format: https://secdb.alpinelinux.org/
---
## Tasks
### T1: Create APK Version Comparator
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: —
**Description**:
Implement Alpine APK version comparison semantics. APK versions follow a simplified EVR model with a `-r<pkgrel>` release suffix.
**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/ApkVersion.cs`
**APK Version Format**:
```
<version>-r<pkgrel>
Examples:
1.2.3-r0
1.2.3_alpha-r1
1.2.3_pre2-r0
```
**APK Version Rules**:
- Underscore suffixes sort: `_alpha` < `_beta` < `_pre` < `_rc` < (none) < `_p` (patch)
- Numeric segments compare numerically
- `-r<N>` is the package release number (like RPM release)
- Letters in version compare lexicographically
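The ordering rules above can be prototyped independently of the C# comparer. The following Python sketch is a simplification under stated assumptions: only the suffixes listed above are handled (apk-tools also knows `_cvs`, `_svn`, `_git`, `_hg`), and mixed letter/number runs fall back to plain string comparison:

```python
import re

# Suffix rank: "no suffix" sits between _rc and _p, as in the rules above.
SUFFIX_ORDER = {"_alpha": -4, "_beta": -3, "_pre": -2, "_rc": -1, "": 0, "_p": 1}

def split_apk(version):
    """Split '<version>[_suffix[N]][-r<pkgrel>]' into its parts."""
    m = re.fullmatch(r"(.*)-r(\d+)", version)
    base, pkgrel = (m.group(1), int(m.group(2))) if m else (version, 0)
    m = re.search(r"(_(?:alpha|beta|pre|rc|p))(\d*)$", base)
    suffix, suffix_num = "", 0
    if m:
        suffix, suffix_num = m.group(1), int(m.group(2) or 0)
        base = base[:m.start()]
    return base, suffix, suffix_num, pkgrel

def _cmp_segments(a, b):
    """Numeric runs compare numerically, letter runs lexically."""
    sa = re.findall(r"\d+|[a-zA-Z]+", a)
    sb = re.findall(r"\d+|[a-zA-Z]+", b)
    for x, y in zip(sa, sb):
        if x == y:
            continue
        if x.isdigit() and y.isdigit():
            return -1 if int(x) < int(y) else 1
        return -1 if x < y else 1  # assumption: mixed runs compare as strings
    return (len(sa) > len(sb)) - (len(sa) < len(sb))

def compare_apk(a, b):
    """Return -1/0/1 for APK version strings a vs b."""
    base_a, suf_a, num_a, rel_a = split_apk(a)
    base_b, suf_b, num_b, rel_b = split_apk(b)
    c = _cmp_segments(base_a, base_b)
    if c:
        return c
    c = SUFFIX_ORDER[suf_a] - SUFFIX_ORDER[suf_b]
    if c:
        return -1 if c < 0 else 1
    if num_a != num_b:
        return -1 if num_a < num_b else 1
    return (rel_a > rel_b) - (rel_a < rel_b)
```

Cases exercised against this sketch double as seed data for the 30+ unit tests required by the acceptance criteria.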
**Implementation**:
```csharp
namespace StellaOps.Concelier.Merge.Comparers;
/// <summary>
/// Compares Alpine APK package versions following apk-tools versioning rules.
/// </summary>
public sealed class ApkVersionComparer : IComparer<ApkVersion>, IComparer<string>
{
public static readonly ApkVersionComparer Instance = new();
public int Compare(ApkVersion? x, ApkVersion? y)
{
if (x is null && y is null) return 0;
if (x is null) return -1;
if (y is null) return 1;
// Compare version part
var versionCmp = CompareVersionString(x.Version, y.Version);
if (versionCmp != 0) return versionCmp;
// Compare pkgrel
return x.PkgRel.CompareTo(y.PkgRel);
}
public int Compare(string? x, string? y)
{
if (!ApkVersion.TryParse(x, out var xVer))
return string.Compare(x, y, StringComparison.Ordinal);
if (!ApkVersion.TryParse(y, out var yVer))
return string.Compare(x, y, StringComparison.Ordinal);
return Compare(xVer, yVer);
}
private static int CompareVersionString(string a, string b)
{
// Implement APK version comparison:
// 1. Split into segments (numeric, alpha, suffix)
// 2. Compare segment by segment
// 3. Rank _alpha, _beta, _pre, _rc, _p suffixes via SuffixOrder
throw new NotImplementedException("Delivered as part of T1.");
}
private static readonly Dictionary<string, int> SuffixOrder = new()
{
["_alpha"] = -4,
["_beta"] = -3,
["_pre"] = -2,
["_rc"] = -1,
[""] = 0,
["_p"] = 1
};
}
public readonly record struct ApkVersion
{
public required string Version { get; init; }
public required int PkgRel { get; init; }
public string? Suffix { get; init; }
public static bool TryParse(string? input, out ApkVersion result)
{
result = default;
if (string.IsNullOrWhiteSpace(input)) return false;
// Parse: <version>-r<pkgrel>
var rIndex = input.LastIndexOf("-r", StringComparison.Ordinal);
if (rIndex < 0)
{
result = new ApkVersion { Version = input, PkgRel = 0 };
return true;
}
var versionPart = input[..rIndex];
var pkgRelPart = input[(rIndex + 2)..];
if (!int.TryParse(pkgRelPart, out var pkgRel))
return false;
result = new ApkVersion { Version = versionPart, PkgRel = pkgRel };
return true;
}
public override string ToString() => $"{Version}-r{PkgRel}";
}
```
**Acceptance Criteria**:
- [ ] APK version parsing implemented
- [ ] Suffix ordering (_alpha < _beta < _pre < _rc < none < _p)
- [ ] PkgRel comparison working
- [ ] Edge cases: versions with letters, multiple underscores
- [ ] Unit tests with 30+ cases
---
### T2: Create Alpine SecDB Parser
**Assignee**: Concelier Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Parse Alpine Linux security database format (JSON).
**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Distro.Alpine/Internal/AlpineSecDbParser.cs`
**SecDB Format** (from https://secdb.alpinelinux.org/):
```json
{
"distroversion": "v3.20",
"reponame": "main",
"urlprefix": "https://secdb.alpinelinux.org/",
"packages": [
{
"pkg": {
"name": "openssl",
"secfixes": {
"3.1.4-r0": ["CVE-2023-5678"],
"3.1.3-r0": ["CVE-2023-1234", "CVE-2023-5555"]
}
}
}
]
}
```
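Extraction from this format is mechanical; the sketch below (field names taken from the sample above, function name hypothetical) yields one row per package/version/CVE combination:

```python
import json

def extract_secfixes(secdb_text):
    """Yield (package, fixed_version, cve) tuples from an Alpine secdb document."""
    doc = json.loads(secdb_text)
    for entry in doc.get("packages", []):
        pkg = entry["pkg"]
        for fixed_version, cves in pkg.get("secfixes", {}).items():
            for cve in cves:
                yield pkg["name"], fixed_version, cve

sample = json.dumps({
    "distroversion": "v3.20",
    "reponame": "main",
    "packages": [{"pkg": {"name": "openssl", "secfixes": {
        "3.1.4-r0": ["CVE-2023-5678"],
        "3.1.3-r0": ["CVE-2023-1234", "CVE-2023-5555"],
    }}}],
})
rows = sorted(extract_secfixes(sample))
```

Each row then maps onto an `AffectedVersionRange` with `RangeKind = "apk"`, preserving the native `-rN` version string.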
**Acceptance Criteria**:
- [ ] Parse secdb JSON format
- [ ] Extract package name, version, CVEs
- [ ] Map to `AffectedVersionRange` with `RangeKind = "apk"`
---
### T3: Implement AlpineConnector
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Implement the full Alpine advisory connector following existing distro connector patterns.
**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Distro.Alpine/AlpineConnector.cs`
**Project Structure**:
```
StellaOps.Concelier.Connector.Distro.Alpine/
├── StellaOps.Concelier.Connector.Distro.Alpine.csproj
├── AlpineConnector.cs
├── Configuration/
│ └── AlpineOptions.cs
├── Internal/
│ ├── AlpineSecDbParser.cs
│ └── AlpineMapper.cs
└── Dto/
└── AlpineSecDbDto.cs
```
**Supported Releases**:
- v3.18, v3.19, v3.20 (latest stable)
- edge (rolling)
**Acceptance Criteria**:
- [ ] Fetch secdb from https://secdb.alpinelinux.org/
- [ ] Parse all branches (main, community)
- [ ] Map to Advisory model with `type: "apk"`
- [ ] Preserve native APK version in ranges
- [ ] Integration tests with real secdb fixtures
---
### T4: Register Alpine Connector in DI
**Assignee**: Concelier Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T3
**Description**:
Register Alpine connector in Concelier WebService and add configuration.
**Implementation Path**: `src/Concelier/StellaOps.Concelier.WebService/Extensions/ConnectorServiceExtensions.cs`
**Configuration** (`etc/concelier.yaml`):
```yaml
concelier:
sources:
- name: alpine
kind: secdb
baseUrl: https://secdb.alpinelinux.org/
signature: { type: none }
enabled: true
releases: [v3.18, v3.19, v3.20]
```
**Acceptance Criteria**:
- [ ] Connector registered via DI
- [ ] Configuration options working
- [ ] Health check includes Alpine source status
---
### T5: Unit and Integration Tests
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1-T4
**Test Matrix**:
| Test Category | Count | Description |
|---------------|-------|-------------|
| APK Version Comparison | 30+ | Suffix ordering, pkgrel, edge cases |
| SecDB Parsing | 10+ | Real fixtures from secdb |
| Connector Integration | 5+ | End-to-end with mock HTTP |
| Golden Files | 3 | Per-release determinism |
**Test Fixtures** (from real Alpine images):
```
alpine:3.18 → apk info -v openssl → 3.1.4-r0
alpine:3.19 → apk info -v curl → 8.5.0-r0
alpine:3.20 → apk info -v zlib → 1.3.1-r0
```
**Acceptance Criteria**:
- [ ] 30+ APK version comparison tests
- [ ] SecDB parsing tests with real fixtures
- [ ] Integration tests pass
- [ ] Golden file regression tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | | Concelier Team | Create APK Version Comparator |
| 2 | T2 | TODO | T1 | Concelier Team | Create Alpine SecDB Parser |
| 3 | T3 | TODO | T1, T2 | Concelier Team | Implement AlpineConnector |
| 4 | T4 | TODO | T3 | Concelier Team | Register Alpine Connector in DI |
| 5 | T5 | TODO | T1-T4 | Concelier Team | Unit and Integration Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis. Alpine/APK identified as critical missing distro support. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| SecDB over OVAL | Decision | Concelier Team | Alpine uses secdb JSON, not OVAL. Simpler to parse. |
| APK suffix ordering | Decision | Concelier Team | Follow apk-tools source for authoritative ordering |
| No GPG verification | Risk | Concelier Team | Alpine secdb is not signed. May add integrity check via HTTPS + known hash. |
---
## Success Criteria
- [ ] All 5 tasks marked DONE
- [ ] APK version comparator production-ready
- [ ] Alpine connector ingesting advisories
- [ ] 30+ version comparison tests passing
- [ ] Integration tests with real secdb
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds with 100% pass rate
---
## References
- Advisory: `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- Alpine SecDB: https://secdb.alpinelinux.org/
- APK version comparison: https://gitlab.alpinelinux.org/alpine/apk-tools
- Existing Debian connector: `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Distro.Debian/`
---
*Document Version: 1.0.0*
*Created: 2025-12-22*


@@ -0,0 +1,357 @@
# Sprint 2000.0003.0002 · Comprehensive Distro Version Comparison Tests
## Topic & Scope
- Expand version comparator test coverage to 50-100 cases per distro.
- Create golden files for regression testing.
- Add real-image cross-check tests using container fixtures.
- **Working directory:** `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/`
## Advisory Reference
- **Source:** `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- **Gap Identified:** Current test coverage is 12 tests total (7 NEVRA, 5 EVR). Advisory recommends 50-100 per distro plus golden files and real-image cross-checks.
## Dependencies & Concurrency
- **Upstream**: None (tests existing code)
- **Downstream**: None
- **Safe to parallelize with**: SPRINT_2000_0003_0001 (Alpine Connector)
## Documentation Prerequisites
- `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/Nevra.cs`
- `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/DebianEvr.cs`
- RPM versioning: https://rpm.org/user_doc/versioning.html
- Debian policy: https://www.debian.org/doc/debian-policy/ch-controlfields.html#version
---
## Tasks
### T1: Expand NEVRA (RPM) Test Corpus
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: —
**Description**:
Create comprehensive test corpus for RPM NEVRA version comparison covering all edge cases.
**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/Comparers/NevraComparerTests.cs`
**Test Categories** (minimum 50 cases):
| Category | Cases | Examples |
|----------|-------|----------|
| Epoch precedence | 10 | `0:9.9-9` < `1:1.0-1`, missing epoch = 0 |
| Numeric version ordering | 10 | `1.2.3` < `1.2.10`, `1.9` < `1.10` |
| Alpha/numeric segments | 10 | `1.0a` < `1.0b`, `1.0` < `1.0a` |
| Tilde pre-releases | 10 | `1.0~rc1` < `1.0~rc2` < `1.0`, `1.0~` < `1.0` |
| Release qualifiers | 10 | `1.0-1.el8` < `1.0-1.el9`, `1.0-1.el8_5` < `1.0-2.el8` |
| Backport patterns | 10 | `1.0-1.el8` vs `1.0-1.el8_5.1` (security backport) |
| Architecture ordering | 5 | `x86_64` vs `aarch64` vs `noarch` |
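To sanity-check these expectations, the core segment walk can be sketched as a condensed Python port of rpm's documented rpmvercmp algorithm (epoch/release splitting and the caret operator are omitted here):

```python
import re

def rpmvercmp(a, b):
    """Segment-wise RPM version comparison: -1, 0, or 1."""
    while a or b:
        # Separators (non-alphanumeric, non-tilde) only delimit segments.
        a = re.sub(r"^[^a-zA-Z0-9~]+", "", a)
        b = re.sub(r"^[^a-zA-Z0-9~]+", "", b)
        # Tilde sorts before everything, including end of string.
        if a.startswith("~") or b.startswith("~"):
            if not a.startswith("~"):
                return 1
            if not b.startswith("~"):
                return -1
            a, b = a[1:], b[1:]
            continue
        if not a or not b:
            break
        sa = re.match(r"\d+|[a-zA-Z]+", a).group()
        sb = re.match(r"\d+|[a-zA-Z]+", b).group()
        if sa[0].isdigit() != sb[0].isdigit():
            return 1 if sa[0].isdigit() else -1  # numeric beats alphabetic
        if sa[0].isdigit():
            if int(sa) != int(sb):  # leading zeros drop out here
                return 1 if int(sa) > int(sb) else -1
        elif sa != sb:
            return 1 if sa > sb else -1
        a, b = a[len(sa):], b[len(sb):]
    # The side with leftover segments is newer.
    return (len(a) > len(b)) - (len(a) < len(b))
```

Every row in the table above should hold under this walk before the corresponding C# cases are written.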
**Test Data Format** (table-driven):
```csharp
public static TheoryData<string, string, int> NevraComparisonCases => new()
{
// Epoch precedence
{ "0:1.0-1.el8", "1:0.1-1.el8", -1 }, // Epoch wins
{ "1.0-1.el8", "0:1.0-1.el8", 0 }, // Missing epoch = 0
{ "2:1.0-1", "1:9.9-9", 1 }, // Higher epoch wins
// Numeric ordering
{ "1.9-1", "1.10-1", -1 }, // 9 < 10
{ "1.02-1", "1.2-1", 0 }, // Leading zeros ignored
// Tilde pre-releases
{ "1.0~rc1-1", "1.0-1", -1 }, // Tilde sorts before release
{ "1.0~alpha-1", "1.0~beta-1", -1 }, // Alpha < beta lexically
{ "1.0~~-1", "1.0~-1", -1 }, // Double tilde < single
// Release qualifiers (RHEL backports)
{ "1.0-1.el8", "1.0-1.el8_5", -1 }, // Base < security update
{ "1.0-1.el8_5", "1.0-1.el8_5.1", -1 }, // Incremental backport
{ "1.0-1.el8", "1.0-1.el9", -1 }, // el8 < el9
// ... 50+ more cases
};
[Theory]
[MemberData(nameof(NevraComparisonCases))]
public void Compare_NevraVersions_ReturnsExpectedOrder(string left, string right, int expected)
{
var result = Math.Sign(NevraComparer.Instance.Compare(left, right));
Assert.Equal(expected, result);
}
```
**Acceptance Criteria**:
- [ ] 50+ test cases for NEVRA comparison
- [ ] All edge cases from advisory covered (epochs, tildes, release qualifiers)
- [ ] Test data documented with comments explaining each case
---
### T2: Expand Debian EVR Test Corpus
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: —
**Description**:
Create comprehensive test corpus for Debian EVR version comparison.
**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/Comparers/DebianEvrComparerTests.cs`
**Test Categories** (minimum 50 cases):
| Category | Cases | Examples |
|----------|-------|----------|
| Epoch precedence | 10 | `1:1.0-1` > `0:9.9-9`, missing epoch = 0 |
| Upstream version | 10 | `1.2.3` < `1.2.10`, letter/number transitions |
| Tilde pre-releases | 10 | `1.0~rc1` < `1.0`, `2.0~beta` < `2.0~rc` |
| Debian revision | 10 | `1.0-1` < `1.0-2`, `1.0-1ubuntu1` patterns |
| Ubuntu specific | 10 | `1.0-1ubuntu0.1` backports, `1.0-1build1` rebuilds |
| Native packages | 5 | No revision (e.g., `1.0` vs `1.0-1`) |
**Ubuntu Backport Patterns**:
```csharp
// Ubuntu security backports follow specific patterns
{ "1.0-1", "1.0-1ubuntu0.1", -1 }, // Security backport
{ "1.0-1ubuntu0.1", "1.0-1ubuntu0.2", -1 }, // Incremental backport
{ "1.0-1ubuntu1", "1.0-1ubuntu2", -1 }, // Ubuntu delta update
{ "1.0-1build1", "1.0-1build2", -1 }, // Rebuild
{ "1.0-1+deb12u1", "1.0-1+deb12u2", -1 }, // Debian stable update
```
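The expectations above can be checked against a condensed Python port of dpkg's comparison algorithm (function names hypothetical): `~` sorts before everything, letters sort before other characters, and a missing epoch or revision defaults to zero.

```python
def _order(c):
    # dpkg character weights: tilde < end-of-part < letters < everything else.
    if c == "~":
        return -1
    return ord(c) if c.isalpha() else ord(c) + 256

def _cmp_part(a, b):
    """Compare one version component (upstream version or Debian revision)."""
    i = j = 0
    while i < len(a) or j < len(b):
        # Non-digit span, character by character.
        while (i < len(a) and not a[i].isdigit()) or (j < len(b) and not b[j].isdigit()):
            ac = _order(a[i]) if i < len(a) and not a[i].isdigit() else 0
            bc = _order(b[j]) if j < len(b) and not b[j].isdigit() else 0
            if ac != bc:
                return -1 if ac < bc else 1
            i += 1
            j += 1
        # Digit span: strip leading zeros, then compare numerically.
        while i < len(a) and a[i] == "0":
            i += 1
        while j < len(b) and b[j] == "0":
            j += 1
        da = db = ""
        while i < len(a) and a[i].isdigit():
            da += a[i]; i += 1
        while j < len(b) and b[j].isdigit():
            db += b[j]; j += 1
        if len(da) != len(db):
            return -1 if len(da) < len(db) else 1
        if da != db:
            return -1 if da < db else 1
    return 0

def compare_debian(a, b):
    """Compare two Debian versions of the form [epoch:]upstream[-revision]."""
    def split(v):
        epoch, _, rest = v.partition(":") if ":" in v else ("0", "", v)
        up, sep, rev = rest.rpartition("-")
        return (int(epoch), up, rev) if sep else (int(epoch), rest, "0")
    ea, ua, ra = split(a)
    eb, ub, rb = split(b)
    if ea != eb:
        return -1 if ea < eb else 1
    return _cmp_part(ua, ub) or _cmp_part(ra, rb)
```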
**Acceptance Criteria**:
- [ ] 50+ test cases for Debian EVR comparison
- [ ] Ubuntu-specific patterns covered
- [ ] Debian stable update patterns (+debNuM)
- [ ] Test data documented with comments
---
### T3: Create Golden Files for Regression Testing
**Assignee**: Concelier Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Create golden files that capture expected comparison results for regression testing.
**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/Fixtures/Golden/`
**Golden File Format** (NDJSON):
```json
{"left":"0:1.0-1.el8","right":"1:0.1-1.el8","expected":-1,"distro":"rpm","note":"epoch precedence"}
{"left":"1.0~rc1-1","right":"1.0-1","expected":-1,"distro":"rpm","note":"tilde pre-release"}
```
**Files**:
```
Fixtures/Golden/
├── rpm_version_comparison.golden.ndjson
├── deb_version_comparison.golden.ndjson
├── apk_version_comparison.golden.ndjson (after SPRINT_2000_0003_0001)
└── README.md (format documentation)
```
**Test Runner**:
```csharp
[Fact]
public async Task Compare_GoldenFile_AllCasesPass()
{
var goldenPath = Path.Combine(AppContext.BaseDirectory,
"Fixtures", "Golden", "rpm_version_comparison.golden.ndjson");
var lines = await File.ReadAllLinesAsync(goldenPath);
var failures = new List<string>();
foreach (var line in lines.Where(l => !string.IsNullOrWhiteSpace(l)))
{
var tc = JsonSerializer.Deserialize<GoldenTestCase>(line)!;
var actual = Math.Sign(NevraComparer.Instance.Compare(tc.Left, tc.Right));
if (actual != tc.Expected)
failures.Add($"FAIL: {tc.Left} vs {tc.Right}: expected {tc.Expected}, got {actual} ({tc.Note})");
}
Assert.Empty(failures);
}
```
**Acceptance Criteria**:
- [ ] Golden files created for RPM, Debian, APK
- [ ] 100+ cases per distro in golden files
- [ ] Golden file test runner implemented
- [ ] README documenting format and how to add cases
---
### T4: Real Image Cross-Check Tests
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Create integration tests that pull real container images, extract package versions, and validate comparisons against known advisory data.
**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Integration.Tests/DistroVersionCrossCheckTests.cs`
**Test Images**:
```csharp
public static TheoryData<string, string[]> TestImages => new()
{
{ "registry.access.redhat.com/ubi9:latest", new[] { "openssl", "curl", "zlib" } },
{ "debian:12-slim", new[] { "openssl", "libcurl4", "zlib1g" } },
{ "ubuntu:22.04", new[] { "openssl", "curl", "zlib1g" } },
{ "alpine:3.20", new[] { "openssl", "curl", "zlib" } },
};
```
**Test Flow**:
1. Pull image using Testcontainers
2. Extract package versions (`rpm -q`, `dpkg-query -W`, `apk info -v`)
3. Look up known CVEs for those packages
4. Verify that version comparison correctly identifies fixed vs. vulnerable
**Implementation**:
```csharp
[Theory]
[MemberData(nameof(TestImages))]
public async Task CrossCheck_RealImage_VersionComparisonCorrect(string image, string[] packages)
{
await using var container = new ContainerBuilder()
.WithImage(image)
.WithCommand("sleep", "infinity")
.Build();
await container.StartAsync();
foreach (var pkg in packages)
{
// Extract installed version
var installedVersion = await ExtractPackageVersionAsync(container, pkg);
// Get known advisory fixed version (from fixtures)
var advisory = GetTestAdvisory(pkg);
if (advisory == null) continue;
// Compare using appropriate comparator
var comparer = GetComparerForImage(image);
var isFixed = comparer.Compare(installedVersion, advisory.FixedVersion) >= 0;
// Verify against expected status
Assert.Equal(advisory.ExpectedFixed, isFixed);
}
}
```
**Test Fixtures** (known CVE data):
```json
{
"package": "openssl",
"cve": "CVE-2023-5678",
"distro": "alpine",
"fixedVersion": "3.1.4-r0",
"vulnerableVersions": ["3.1.3-r0", "3.1.2-r0"]
}
```
**Acceptance Criteria**:
- [ ] Testcontainers integration working
- [ ] 4 distro images tested (UBI9, Debian 12, Ubuntu 22.04, Alpine 3.20)
- [ ] At least 3 packages per image validated
- [ ] CI-friendly (images cached, deterministic)
---
### T5: Document Test Corpus and Contribution Guide
**Assignee**: Concelier Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T4
**Description**:
Document the test corpus structure and how to add new test cases.
**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/README.md`
**Documentation Contents**:
- Test corpus structure
- How to add new version comparison cases
- Golden file format and tooling
- Real image cross-check setup
- Known edge cases and their rationale
**Acceptance Criteria**:
- [ ] README created with complete documentation
- [ ] Examples for adding new test cases
- [ ] CI badge showing test coverage
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Concelier Team | Expand NEVRA (RPM) Test Corpus |
| 2 | T2 | TODO | — | Concelier Team | Expand Debian EVR Test Corpus |
| 3 | T3 | TODO | T1, T2 | Concelier Team | Create Golden Files for Regression Testing |
| 4 | T4 | TODO | T1, T2 | Concelier Team | Real Image Cross-Check Tests |
| 5 | T5 | TODO | T1-T4 | Concelier Team | Document Test Corpus and Contribution Guide |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis. Test coverage identified as insufficient (12 tests vs 300+ recommended). | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Table-driven tests | Decision | Concelier Team | Use xUnit TheoryData for maintainability |
| Golden files in NDJSON | Decision | Concelier Team | Easy to diff, append, and parse |
| Testcontainers for real images | Decision | Concelier Team | CI-friendly, reproducible |
| Image pull latency | Risk | Concelier Team | Cache images in CI; use slim variants |
---
## Success Criteria
- [ ] All 5 tasks marked DONE
- [ ] 50+ NEVRA comparison tests
- [ ] 50+ Debian EVR comparison tests
- [ ] Golden files with 100+ cases per distro
- [ ] Real image cross-check tests passing
- [ ] Documentation complete
- [ ] `dotnet test` succeeds with 100% pass rate
---
## References
- Advisory: `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- RPM versioning: https://rpm.org/user_doc/versioning.html
- Debian policy: https://www.debian.org/doc/debian-policy/ch-controlfields.html#version
- Existing tests: `src/Concelier/__Tests/StellaOps.Concelier.Merge.Tests/`
---
*Document Version: 1.0.0*
*Created: 2025-12-22*


@@ -28,7 +28,7 @@ These features address gaps no competitor has filled per `docs/market/competitiv
## Source Documents
-**Primary Advisory**: `docs/product-advisories/unprocessed/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
+**Primary Advisory**: `docs/product-advisories/archived/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Related Documentation**:
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md` — System topology, trust boundaries


@@ -258,7 +258,7 @@ graph TD
- [Scanner AGENTS Guide](../../src/Scanner/AGENTS_SCORE_PROOFS.md) FOR AGENTS
**Source Advisory**:
-- [16-Dec-2025 - Building a Deeper Moat Beyond Reachability](../product-advisories/unprocessed/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md)
+- [16-Dec-2025 - Building a Deeper Moat Beyond Reachability](../product-advisories/archived/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md)
---


@@ -0,0 +1,293 @@
# SPRINT_3600_0004_0001 - Node.js Babel Integration
**Status:** TODO
**Priority:** P1 - HIGH
**Module:** Scanner
**Working Directory:** `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Node/`
**Estimated Effort:** Medium
**Dependencies:** SPRINT_3600_0003_0001 (Drift Detection Engine) - DONE
---
## Topic & Scope
Implement full @babel/traverse integration for Node.js call graph extraction. The current `NodeCallGraphExtractor` is a skeleton/trace-based implementation. This sprint delivers production-grade AST analysis for JavaScript/TypeScript projects.
---
## Documentation Prerequisites
- `docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md` (archived)
- `docs/modules/scanner/reachability-drift.md`
- `src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node/AGENTS.md`
- `bench/reachability-benchmark/README.md`
---
## Wave Coordination
Single wave with parallel tracks:
- Track A: Babel AST infrastructure
- Track B: Framework-specific entrypoint detection
- Track C: Sink detection patterns
- Track D: Edge extraction and call graph building
---
## Interlocks
- Must produce stable node IDs compatible with existing `CallGraphSnapshot` model
- Must align with `bench/reachability-benchmark/` Node.js test cases
- Must integrate with existing `ICallGraphExtractor` interface
---
## Action Tracker
| Date (UTC) | Action | Owner | Notes |
|---|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent | Initial |
---
## 1. OBJECTIVE
Deliver production-grade Node.js call graph extraction:
1. **Babel AST Parsing** - Full @babel/traverse integration
2. **Framework Entrypoints** - Express, Fastify, Koa, NestJS, Hapi detection
3. **Sink Detection** - JavaScript-specific dangerous APIs
4. **Edge Extraction** - Function calls, method invocations, dynamic imports
---
## 2. TECHNICAL DESIGN
### 2.1 Architecture
```
┌─────────────────────────────────────────────────────────────────┐
│ NodeCallGraphExtractor │
├─────────────────────────────────────────────────────────────────┤
│ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────────┐ │
│ │ BabelParser │ │ AstWalker │ │ CallGraphBuilder │ │
│ │ (external) │ │ (traverse) │ │ (nodes, edges, sinks) │ │
│ └─────────────┘ └─────────────┘ └─────────────────────────┘ │
│ │ │ │ │
│ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ Framework Detectors ││
│ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌─────────┐ ┌───────┐ ││
│ │ │ Express │ │ Fastify │ │ Koa │ │ NestJS │ │ Hapi │ ││
│ │ └─────────┘ └─────────┘ └─────────┘ └─────────┘ └───────┘ ││
│ └─────────────────────────────────────────────────────────────┘│
│ │ │
│ ▼ │
│ ┌─────────────────────────────────────────────────────────────┐│
│ │ Sink Matchers ││
│ │ child_process.exec | fs.writeFile | eval | Function() ││
│ │ http.request | crypto.createCipher | sql.query ││
│ └─────────────────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────────────────┘
```
### 2.2 External Tool Integration
The extractor invokes an external Node.js tool for AST parsing:
```bash
# Tool location: tools/stella-callgraph-node/
npx stella-callgraph-node \
--root /path/to/project \
--output json \
--include-tests false \
--max-depth 100
```
Output format (JSON):
```json
{
"nodes": [
{
"id": "src/controllers/user.js:UserController.getUser",
"symbol": "UserController.getUser",
"file": "src/controllers/user.js",
"line": 42,
"visibility": "public",
"isEntrypoint": true,
"entrypointType": "express_handler",
"isSink": false
}
],
"edges": [
{
"source": "src/controllers/user.js:UserController.getUser",
"target": "src/services/db.js:query",
"kind": "direct",
"callSite": "src/controllers/user.js:45"
}
],
"entrypoints": ["src/controllers/user.js:UserController.getUser"],
"sinks": ["src/services/db.js:query"]
}
```
### 2.3 Framework Entrypoint Detection
| Framework | Detection Pattern | Entrypoint Type |
|-----------|------------------|-----------------|
| Express | `app.get()`, `app.post()`, `router.use()` | `express_handler` |
| Fastify | `fastify.get()`, `fastify.route()` | `fastify_handler` |
| Koa | `router.get()`, middleware functions | `koa_handler` |
| NestJS | `@Get()`, `@Post()`, `@Controller()` | `nestjs_controller` |
| Hapi | `server.route()` | `hapi_handler` |
| Generic | `module.exports`, `export default` | `module_export` |
### 2.4 Sink Detection Patterns
```javascript
// Command Execution
child_process.exec()
child_process.spawn()
child_process.execSync()
require('child_process').exec()
// SQL Injection
connection.query() // without parameterization
knex.raw()
sequelize.query()
// File Operations
fs.writeFile()
fs.writeFileSync()
fs.appendFile()
// Deserialization
JSON.parse() // with untrusted input
eval()
Function()
vm.runInContext()
// SSRF
http.request()
https.request()
axios() // with user-controlled URL
fetch()
// Crypto (weak)
crypto.createCipher() // deprecated
crypto.createDecipher()
```
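One way to realize these patterns is a lookup table from the resolved callee expression to a sink category. The Python sketch below (category names illustrative, not a fixed schema) adds a deliberately conservative fallback that matches on the member name alone, to catch aliased imports like `const cp = require('child_process')`:

```python
# Patterns taken from the list above; one representative per category group.
SINK_PATTERNS = {
    "child_process.exec": "command-execution",
    "child_process.execSync": "command-execution",
    "child_process.spawn": "command-execution",
    "knex.raw": "sql-injection",
    "sequelize.query": "sql-injection",
    "fs.writeFile": "file-write",
    "fs.writeFileSync": "file-write",
    "fs.appendFile": "file-write",
    "eval": "code-execution",
    "Function": "code-execution",
    "vm.runInContext": "code-execution",
    "http.request": "ssrf",
    "https.request": "ssrf",
    "fetch": "ssrf",
    "crypto.createCipher": "weak-crypto",
    "crypto.createDecipher": "weak-crypto",
}

def classify_sink(callee):
    """Return the sink category for a callee expression, or None."""
    if callee in SINK_PATTERNS:
        return SINK_PATTERNS[callee]
    # Fallback: match on the member name alone (e.g. `cp.exec`).
    member = callee.rsplit(".", 1)[-1]
    for pattern, category in SINK_PATTERNS.items():
        if "." in pattern and pattern.rsplit(".", 1)[-1] == member:
            return category
    return None
```

The fallback trades precision for recall; marking such matches with a lower confidence flag would keep them distinguishable downstream.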
### 2.5 Node ID Generation
Stable, deterministic node IDs:
```javascript
// Pattern: {relative_file}:{export_name}.{function_name}
// Examples:
"src/controllers/user.js:UserController.getUser"
"src/services/db.js:module.query"
"src/utils/crypto.js:default.encrypt"
```
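Cross-platform stability is the main risk here: the same repository must yield byte-identical IDs on Windows and Linux runners, or the golden fixtures (NODE-021) will churn. A minimal Python sketch of the normalization (helper name hypothetical):

```python
from pathlib import PureWindowsPath

def node_id(root, file_path, export_name, function_name):
    """Build a stable node ID: {relative_file}:{export_name}.{function_name}.

    PureWindowsPath accepts both separator styles, and as_posix() pins the
    output to forward slashes regardless of the host OS.
    """
    rel = PureWindowsPath(file_path).relative_to(root).as_posix()
    return f"{rel}:{export_name}.{function_name}"
```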
---
## Delivery Tracker
| # | Task ID | Status | Description | Notes |
|---|---------|--------|-------------|-------|
| 1 | NODE-001 | TODO | Create stella-callgraph-node tool scaffold | `tools/stella-callgraph-node/` |
| 2 | NODE-002 | TODO | Implement Babel parser integration | @babel/parser, @babel/traverse |
| 3 | NODE-003 | TODO | Implement AST walker for function declarations | FunctionDeclaration, ArrowFunction |
| 4 | NODE-004 | TODO | Implement call expression extraction | CallExpression, MemberExpression |
| 5 | NODE-005 | TODO | Implement Express entrypoint detection | app.get/post/put/delete patterns |
| 6 | NODE-006 | TODO | Implement Fastify entrypoint detection | fastify.route patterns |
| 7 | NODE-007 | TODO | Implement Koa entrypoint detection | router.get patterns |
| 8 | NODE-008 | TODO | Implement NestJS entrypoint detection | Decorator-based (@Get, @Post) |
| 9 | NODE-009 | TODO | Implement Hapi entrypoint detection | server.route patterns |
| 10 | NODE-010 | TODO | Implement sink detection (child_process) | exec, spawn, execSync |
| 11 | NODE-011 | TODO | Implement sink detection (SQL) | query, raw, knex |
| 12 | NODE-012 | TODO | Implement sink detection (fs) | writeFile, appendFile |
| 13 | NODE-013 | TODO | Implement sink detection (eval/Function) | Dynamic code execution |
| 14 | NODE-014 | TODO | Implement sink detection (http/fetch) | SSRF patterns |
| 15 | NODE-015 | TODO | Update NodeCallGraphExtractor to invoke tool | Process execution + JSON parsing |
| 16 | NODE-016 | TODO | Implement BabelResultParser | JSON to CallGraphSnapshot |
| 17 | NODE-017 | TODO | Unit tests for AST parsing | Various JS patterns |
| 18 | NODE-018 | TODO | Unit tests for entrypoint detection | All frameworks |
| 19 | NODE-019 | TODO | Unit tests for sink detection | All categories |
| 20 | NODE-020 | TODO | Integration tests with benchmark cases | `bench/reachability-benchmark/node/` |
| 21 | NODE-021 | TODO | Golden fixtures for determinism | Stable node IDs, edge ordering |
| 22 | NODE-022 | TODO | TypeScript support | .ts/.tsx file handling |
| 23 | NODE-023 | TODO | ESM/CommonJS module resolution | import/require handling |
| 24 | NODE-024 | TODO | Dynamic import detection | import() expressions |
---
## 3. ACCEPTANCE CRITERIA
### 3.1 AST Parsing
- [ ] Parses JavaScript files (.js, .mjs, .cjs)
- [ ] Parses TypeScript files (.ts, .tsx)
- [ ] Handles ESM imports/exports
- [ ] Handles CommonJS require/module.exports
- [ ] Handles dynamic imports
### 3.2 Entrypoint Detection
- [ ] Detects Express route handlers
- [ ] Detects Fastify route handlers
- [ ] Detects Koa middleware/routes
- [ ] Detects NestJS controllers
- [ ] Detects Hapi routes
- [ ] Classifies entrypoint types correctly
### 3.3 Sink Detection
- [ ] Detects command execution sinks
- [ ] Detects SQL injection sinks
- [ ] Detects file write sinks
- [ ] Detects eval/Function sinks
- [ ] Detects SSRF sinks
- [ ] Classifies sink categories correctly
### 3.4 Call Graph Quality
- [ ] Produces stable, deterministic node IDs
- [ ] Correctly extracts call edges
- [ ] Handles method chaining
- [ ] Handles callback patterns
- [ ] Handles Promise chains
### 3.5 Performance
- [ ] Parses 100K LOC project in < 60s
- [ ] Memory usage < 2GB for large projects
---
## Decisions & Risks
| ID | Decision | Rationale |
|----|----------|-----------|
| NODE-DEC-001 | External Node.js tool | Babel runs in Node.js; separate process avoids .NET interop complexity |
| NODE-DEC-002 | JSON output format | Simple, debuggable, compatible with existing parser infrastructure |
| NODE-DEC-003 | Framework-specific detectors | Different frameworks have different routing patterns |
| ID | Risk | Mitigation |
|----|------|------------|
| NODE-RISK-001 | Dynamic dispatch hard to trace | Conservative analysis; mark as "dynamic" call kind |
| NODE-RISK-002 | Callback hell complexity | Limit depth; focus on direct calls first |
| NODE-RISK-003 | Monorepo/workspace support | Start with single-package; extend later |
---
## Execution Log
| Date (UTC) | Update | Owner |
|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent |
---
## References
- **Master Sprint**: `SPRINT_3600_0001_0001_reachability_drift_master.md`
- **Advisory**: `docs/product-advisories/archived/17-Dec-2025 - Reachability Drift Detection.md`
- **Babel Docs**: https://babeljs.io/docs/babel-traverse
- **Existing Extractor**: `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Node/`

# SPRINT_3600_0005_0001 - Policy CI Gate Integration
**Status:** TODO
**Priority:** P1 - HIGH
**Module:** Policy, Scanner, CLI
**Working Directory:** `src/Policy/StellaOps.Policy.Engine/Gates/`
**Estimated Effort:** Small
**Dependencies:** SPRINT_3600_0003_0001 (Drift Detection Engine) - DONE
---
## Topic & Scope
Integrate reachability drift detection with the Policy module's CI gate system. This enables automated PR/commit blocking based on new reachable paths to vulnerable sinks. Also implements exit code semantics for CLI integration.
---
## Documentation Prerequisites
- `docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md` (§6)
- `docs/modules/policy/architecture.md`
- `src/Policy/AGENTS.md`
- `src/Cli/AGENTS.md`
---
## Wave Coordination
Single wave:
1. Policy gate conditions for drift
2. Exit code implementation in CLI
3. VEX candidate auto-emission on drift
---
## Interlocks
- Must integrate with existing `PolicyGateEvaluator`
- Must integrate with existing `VexCandidateEmitter` in Scanner
- CLI exit codes must align with shell conventions (0=success, non-zero=action needed)
---
## Action Tracker
| Date (UTC) | Action | Owner | Notes |
|---|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent | Initial |
---
## 1. OBJECTIVE
Enable CI/CD pipelines to gate on reachability drift:
1. **Policy Gate Conditions** - Block PRs when new reachable paths to affected sinks detected
2. **Exit Codes** - Semantic exit codes for CLI tooling
3. **VEX Auto-Emission** - Generate VEX candidates when reachability changes
---
## 2. TECHNICAL DESIGN
### 2.1 Policy Gate Conditions
Extend `PolicyGateEvaluator` with drift-aware conditions:
```yaml
# Policy configuration (etc/policy.yaml)
smart_diff:
  gates:
    # Block: New reachable paths to affected sinks
    - id: drift_block_affected
      condition: "delta_reachable > 0 AND vex_status IN ['affected', 'under_investigation']"
      action: block
      message: "New reachable paths to vulnerable sinks detected"
      severity: critical

    # Warn: New paths to any sink (informational)
    - id: drift_warn_new_paths
      condition: "delta_reachable > 0"
      action: warn
      message: "New reachable paths detected - review recommended"
      severity: medium

    # Block: KEV now reachable
    - id: drift_block_kev
      condition: "delta_reachable > 0 AND is_kev = true"
      action: block
      message: "Known Exploited Vulnerability now reachable"
      severity: critical

    # Auto-allow: VEX confirms not_affected
    - id: drift_allow_mitigated
      condition: "vex_status = 'not_affected' AND vex_justification IN ['component_not_present', 'vulnerable_code_not_in_execute_path']"
      action: allow
      auto_mitigate: true
```
### 2.2 Gate Evaluation Context
```csharp
// File: src/Policy/StellaOps.Policy.Engine/Gates/DriftGateContext.cs
namespace StellaOps.Policy.Engine.Gates;

/// <summary>
/// Context for drift-aware gate evaluation.
/// </summary>
public sealed record DriftGateContext
{
    /// <summary>
    /// Number of sinks that became reachable in this scan.
    /// </summary>
    public required int DeltaReachable { get; init; }

    /// <summary>
    /// Number of sinks that became unreachable (mitigated).
    /// </summary>
    public required int DeltaUnreachable { get; init; }

    /// <summary>
    /// Whether any newly reachable sink is linked to a KEV.
    /// </summary>
    public required bool HasKevReachable { get; init; }

    /// <summary>
    /// VEX status of newly reachable sinks.
    /// </summary>
    public required IReadOnlyList<string> NewlyReachableVexStatuses { get; init; }

    /// <summary>
    /// Highest CVSS score among newly reachable sinks.
    /// </summary>
    public double? MaxCvss { get; init; }

    /// <summary>
    /// Highest EPSS score among newly reachable sinks.
    /// </summary>
    public double? MaxEpss { get; init; }
}
```
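To make the mapping between the §2.1 YAML conditions and this context concrete, here is an illustrative evaluation sketch. `GateAction` and `DriftSignals` are hypothetical stand-ins for the real `PolicyGateEvaluator` types, with the context pared down to the three fields these gates read; the production evaluator would parse the conditions from configuration rather than hardcode them.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public enum GateAction { Allow, Warn, Block }

// Pared-down stand-in for DriftGateContext, for illustration only.
public sealed record DriftSignals(
    int DeltaReachable,
    bool HasKevReachable,
    IReadOnlyList<string> NewlyReachableVexStatuses);

public static class DriftGateSketch
{
    public static GateAction Evaluate(DriftSignals s)
    {
        // drift_block_kev: KEV now reachable
        if (s.DeltaReachable > 0 && s.HasKevReachable)
            return GateAction.Block;

        // drift_block_affected: new paths to affected/under_investigation sinks
        if (s.DeltaReachable > 0 &&
            s.NewlyReachableVexStatuses.Any(v => v is "affected" or "under_investigation"))
            return GateAction.Block;

        // drift_warn_new_paths: any new reachable path is at least a warning
        if (s.DeltaReachable > 0)
            return GateAction.Warn;

        return GateAction.Allow;
    }
}
```

Note the ordering: block conditions are checked before the warn condition, so a KEV hit is never downgraded to a warning even though it also satisfies `delta_reachable > 0`.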
### 2.3 Exit Code Semantics
| Code | Meaning | Description |
|------|---------|-------------|
| 0 | Success, no drift | No material reachability changes detected |
| 1 | Success, info drift | New paths detected but not to affected sinks |
| 2 | Hardening regression | Previously mitigated paths now reachable again |
| 3 | KEV reachable | Known Exploited Vulnerability now reachable |
| 10 | Input error | Invalid scan ID, missing parameters |
| 11 | Analysis error | Call graph extraction failed |
| 12 | Storage error | Database/cache unavailable |
| 13 | Policy error | Gate evaluation failed |
```csharp
// File: src/Cli/StellaOps.Cli/Commands/DriftExitCodes.cs
namespace StellaOps.Cli.Commands;

/// <summary>
/// Exit codes for drift analysis commands.
/// </summary>
public static class DriftExitCodes
{
    public const int Success = 0;
    public const int InfoDrift = 1;
    public const int HardeningRegression = 2;
    public const int KevReachable = 3;
    public const int InputError = 10;
    public const int AnalysisError = 11;
    public const int StorageError = 12;
    public const int PolicyError = 13;

    public static int FromDriftResult(ReachabilityDriftResult result, DriftGateContext context)
    {
        if (context.HasKevReachable)
            return KevReachable;

        // An "affected" status among the newly reachable sinks is treated as a
        // hardening regression: a previously mitigated path is reachable again.
        if (context.DeltaReachable > 0 && context.NewlyReachableVexStatuses.Contains("affected"))
            return HardeningRegression;

        if (context.DeltaReachable > 0)
            return InfoDrift;

        return Success;
    }
}
```
### 2.4 VEX Candidate Auto-Emission
When drift detection identifies that a sink became unreachable, automatically emit a VEX candidate:
```csharp
// Integration point in ReachabilityDriftDetector
public async Task<ReachabilityDriftResult> DetectWithVexEmissionAsync(
    CallGraphSnapshot baseGraph,
    CallGraphSnapshot headGraph,
    IReadOnlyList<CodeChangeFact> codeChanges,
    CancellationToken cancellationToken = default)
{
    var result = Detect(baseGraph, headGraph, codeChanges);

    // Emit VEX candidates for newly unreachable sinks
    foreach (var sink in result.NewlyUnreachable)
    {
        await _vexCandidateEmitter.EmitAsync(new VexCandidate
        {
            VulnerabilityId = sink.AssociatedVulns.FirstOrDefault()?.CveId,
            ProductKey = sink.Path.Entrypoint.Package,
            Status = "not_affected",
            Justification = "vulnerable_code_not_in_execute_path",
            Trigger = VexCandidateTrigger.SinkUnreachable,
            Evidence = new VexEvidence
            {
                DriftResultId = result.Id,
                SinkNodeId = sink.SinkNodeId,
                Cause = sink.Cause.Description
            }
        }, cancellationToken);
    }

    return result;
}
```
### 2.5 CLI Integration
```bash
# Drift analysis with gate evaluation
stella scan drift \
--base-scan abc123 \
--head-scan def456 \
--policy etc/policy.yaml \
--output sarif
# Exit code reflects gate decision
echo $? # 0, 1, 2, 3, or 10+
```
---
## Delivery Tracker
| # | Task ID | Status | Description | Notes |
|---|---------|--------|-------------|-------|
| 1 | GATE-001 | TODO | Create DriftGateContext model | Policy module |
| 2 | GATE-002 | TODO | Extend PolicyGateEvaluator with drift conditions | `delta_reachable`, `is_kev` |
| 3 | GATE-003 | TODO | Add drift gate configuration schema | YAML validation |
| 4 | GATE-004 | TODO | Create DriftExitCodes class | CLI module |
| 5 | GATE-005 | TODO | Implement exit code mapping logic | FromDriftResult |
| 6 | GATE-006 | TODO | Wire exit codes to `stella scan drift` command | CLI |
| 7 | GATE-007 | TODO | Integrate VEX candidate emission in drift detector | Scanner |
| 8 | GATE-008 | TODO | Add VexCandidateTrigger.SinkUnreachable | Extend enum |
| 9 | GATE-009 | TODO | Unit tests for drift gate evaluation | All conditions |
| 10 | GATE-010 | TODO | Unit tests for exit code mapping | All scenarios |
| 11 | GATE-011 | TODO | Integration tests for CLI exit codes | End-to-end |
| 12 | GATE-012 | TODO | Integration tests for VEX auto-emission | Drift -> VEX flow |
| 13 | GATE-013 | TODO | Update policy configuration schema | Add smart_diff.gates |
| 14 | GATE-014 | TODO | Document gate configuration options | In operations guide |
---
## 3. ACCEPTANCE CRITERIA
### 3.1 Policy Gates
- [ ] Evaluates `delta_reachable > 0` condition correctly
- [ ] Evaluates `is_kev = true` condition correctly
- [ ] Evaluates combined conditions (AND/OR)
- [ ] Returns correct gate action (block/warn/allow)
- [ ] Supports auto_mitigate flag
### 3.2 Exit Codes
- [ ] Returns 0 for no drift
- [ ] Returns 1 for info-level drift
- [ ] Returns 2 for hardening regression
- [ ] Returns 3 for KEV reachable
- [ ] Returns 10+ for errors
### 3.3 VEX Auto-Emission
- [ ] Emits VEX candidate when sink becomes unreachable
- [ ] Sets correct justification (`vulnerable_code_not_in_execute_path`)
- [ ] Links to drift result as evidence
- [ ] Does not emit for already-unreachable sinks
### 3.4 CLI Integration
- [ ] `stella scan drift` command respects gates
- [ ] Exit code reflects gate decision
- [ ] SARIF output includes gate results
---
## Decisions & Risks
| ID | Decision | Rationale |
|----|----------|-----------|
| GATE-DEC-001 | Exit code 3 for KEV | KEV is highest severity, distinct from hardening regression |
| GATE-DEC-002 | Auto-emit VEX only for unreachable | Reachable sinks need human review |
| GATE-DEC-003 | Policy YAML for gate config | Consistent with existing policy configuration |

| ID | Risk | Mitigation |
|----|------|------------|
| GATE-RISK-001 | False positive blocks | Warn-first approach; require explicit block config |
| GATE-RISK-002 | VEX spam on large diffs | Rate limit emission; batch by CVE |
| GATE-RISK-003 | Exit code conflicts | Document clearly; 10+ reserved for errors |
---
## Execution Log
| Date (UTC) | Update | Owner |
|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent |
---
## References
- **Drift Sprint**: `SPRINT_3600_0003_0001_drift_detection_engine.md`
- **Policy Module**: `src/Policy/StellaOps.Policy.Engine/`
- **CLI Module**: `src/Cli/StellaOps.Cli/`
- **VEX Emitter**: `src/Scanner/__Libraries/StellaOps.Scanner.SmartDiff/Detection/VexCandidateEmitter.cs`

# SPRINT_3600_0006_0001 - Documentation Finalization
**Status:** TODO
**Priority:** P0 - CRITICAL
**Module:** Documentation
**Working Directory:** `docs/`
**Estimated Effort:** Medium
**Dependencies:** SPRINT_3600_0003_0001 (Drift Detection Engine) - DONE
---
## Topic & Scope
Finalize documentation for the Reachability Drift Detection feature set. This sprint creates architecture documentation, API reference, and operations guide.
---
## Documentation Prerequisites
- `docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md` (to be archived)
- `docs/implplan/SPRINT_3600_0002_0001_call_graph_infrastructure.md`
- `docs/implplan/SPRINT_3600_0003_0001_drift_detection_engine.md`
- Source code implementations in `src/Scanner/__Libraries/`
---
## Wave Coordination
Single wave:
1. Architecture documentation
2. API reference
3. Operations guide
4. Advisory archival
---
## Interlocks
- Must align with implemented code
- Must follow existing documentation patterns
- Must be validated against actual API responses
---
## Action Tracker
| Date (UTC) | Action | Owner | Notes |
|---|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent | Initial |
---
## 1. OBJECTIVE
Deliver comprehensive documentation:
1. **Architecture Doc** - Technical design, data flow, component interactions
2. **API Reference** - Endpoint specifications, request/response models
3. **Operations Guide** - Deployment, configuration, monitoring
4. **Advisory Archival** - Move processed advisory to archived folder
---
## 2. DELIVERABLES
### 2.1 Architecture Document
**Location:** `docs/modules/scanner/reachability-drift.md`
**Outline:**
1. Overview & Purpose
2. Key Concepts
- Call Graph
- Reachability Analysis
- Drift Detection
- Cause Attribution
3. Data Flow Diagram
4. Component Architecture
- Call Graph Extractors
- Reachability Analyzer
- Drift Detector
- Path Compressor
- Cause Explainer
5. Language Support Matrix
6. Storage Schema
- PostgreSQL tables
- Valkey caching
7. API Endpoints (summary)
8. Integration Points
- Policy module
- VEX emission
- Attestation
9. Performance Characteristics
10. References
### 2.2 API Reference
**Location:** `docs/api/scanner-drift-api.md`
**Outline:**
1. Overview
2. Authentication & Authorization
3. Endpoints
- `GET /scans/{scanId}/drift`
- `GET /drift/{driftId}/sinks`
- `POST /scans/{scanId}/compute-reachability`
- `GET /scans/{scanId}/reachability/components`
- `GET /scans/{scanId}/reachability/findings`
- `GET /scans/{scanId}/reachability/explain`
4. Request/Response Models
5. Error Codes
6. Rate Limiting
7. Examples (curl, SDK)
### 2.3 Operations Guide
**Location:** `docs/operations/reachability-drift-guide.md`
**Outline:**
1. Prerequisites
2. Configuration
- Scanner service
- Valkey cache
- Policy gates
3. Deployment Modes
- Standalone
- Kubernetes
- Air-gapped
4. Monitoring & Metrics
- Key metrics
- Grafana dashboards
- Alert thresholds
5. Troubleshooting
6. Performance Tuning
7. Backup & Recovery
8. Security Considerations
---
## Delivery Tracker
| # | Task ID | Status | Description | Notes |
|---|---------|--------|-------------|-------|
| 1 | DOC-001 | TODO | Create architecture doc structure | `docs/modules/scanner/reachability-drift.md` |
| 2 | DOC-002 | TODO | Write Overview & Purpose section | Architecture doc |
| 3 | DOC-003 | TODO | Write Key Concepts section | Architecture doc |
| 4 | DOC-004 | TODO | Create data flow diagram (Mermaid) | Architecture doc |
| 5 | DOC-005 | TODO | Write Component Architecture section | Architecture doc |
| 6 | DOC-006 | TODO | Write Language Support Matrix | Architecture doc |
| 7 | DOC-007 | TODO | Write Storage Schema section | Architecture doc |
| 8 | DOC-008 | TODO | Write Integration Points section | Architecture doc |
| 9 | DOC-009 | TODO | Create API reference structure | `docs/api/scanner-drift-api.md` |
| 10 | DOC-010 | TODO | Document GET /scans/{scanId}/drift | API reference |
| 11 | DOC-011 | TODO | Document GET /drift/{driftId}/sinks | API reference |
| 12 | DOC-012 | TODO | Document POST /scans/{scanId}/compute-reachability | API reference |
| 13 | DOC-013 | TODO | Document request/response models | API reference |
| 14 | DOC-014 | TODO | Add curl/SDK examples | API reference |
| 15 | DOC-015 | TODO | Create operations guide structure | `docs/operations/reachability-drift-guide.md` |
| 16 | DOC-016 | TODO | Write Configuration section | Operations guide |
| 17 | DOC-017 | TODO | Write Deployment Modes section | Operations guide |
| 18 | DOC-018 | TODO | Write Monitoring & Metrics section | Operations guide |
| 19 | DOC-019 | TODO | Write Troubleshooting section | Operations guide |
| 20 | DOC-020 | TODO | Update src/Scanner/AGENTS.md | Add final contract refs |
| 21 | DOC-021 | TODO | Archive advisory | Move to `docs/product-advisories/archived/` |
| 22 | DOC-022 | TODO | Update docs/README.md | Add links to new docs |
| 23 | DOC-023 | TODO | Peer review | Technical accuracy check |
---
## 3. ACCEPTANCE CRITERIA
### 3.1 Architecture Doc
- [ ] Covers all implemented components
- [ ] Data flow diagram is accurate
- [ ] Language support matrix is complete
- [ ] Storage schema matches migrations
- [ ] Integration points are documented
### 3.2 API Reference
- [ ] All endpoints documented
- [ ] Request/response models are accurate
- [ ] Error codes are complete
- [ ] Examples are tested and working
### 3.3 Operations Guide
- [ ] Configuration options are complete
- [ ] Deployment modes are documented
- [ ] Metrics are defined
- [ ] Troubleshooting covers common issues
### 3.4 Archival
- [ ] Advisory moved to archived folder
- [ ] Links updated in sprint files
- [ ] No broken references
---
## Decisions & Risks
| ID | Decision | Rationale |
|----|----------|-----------|
| DOC-DEC-001 | Mermaid for diagrams | Renders in GitLab/GitHub, text-based |
| DOC-DEC-002 | Separate ops guide | Different audience than architecture |
| DOC-DEC-003 | Archive after docs complete | Ensure traceability |

| ID | Risk | Mitigation |
|----|------|------------|
| DOC-RISK-001 | Docs become stale | Link to source code; version docs |
| DOC-RISK-002 | Missing edge cases | Review with QA team |
---
## Execution Log
| Date (UTC) | Update | Owner |
|---|---|---|
| 2025-12-22 | Created sprint from gap analysis | Agent |
---
## References
- **Call Graph Sprint**: `SPRINT_3600_0002_0001_call_graph_infrastructure.md`
- **Drift Sprint**: `SPRINT_3600_0003_0001_drift_detection_engine.md`
- **Advisory**: `docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md`

# Sprint 3800.0001.0001 · Binary Call-Edge Enhancement
## Topic & Scope
- Enhance binary call graph extraction with disassembly-based call edge recovery.
- Implement indirect call resolution via PLT/IAT analysis.
- Add dynamic loading detection heuristics for `dlopen`/`LoadLibrary` patterns.
- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Binary/`
## Dependencies & Concurrency
- **Upstream**: None (enhances existing `BinaryCallGraphExtractor`)
- **Downstream**: Sprint 3810 (CVE→Symbol Mapping) benefits from richer call graphs
- **Safe to parallelize with**: Sprint 3830 (VEX Integration), Sprint 3850 (CLI)
## Documentation Prerequisites
- `docs/product-advisories/archived/2025-12-22-binary-reachability/20-Dec-2025 - Layered binary + callstack reachability.md`
- `docs/reachability/binary-reachability-schema.md`
- `src/Scanner/AGENTS.md`
---
## Tasks
### T1: Integrate iced-x86 for x86/x64 Disassembly
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Add iced-x86 NuGet package for disassembling x86/x64 code sections to extract direct call instructions.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Binary/Disassembly/`
**Acceptance Criteria**:
- [ ] Add `iced` NuGet package reference
- [ ] `X86Disassembler` class wrapping iced-x86
- [ ] Extract CALL/JMP instructions from `.text` section
- [ ] Handle both 32-bit and 64-bit code
- [ ] Deterministic output (stable instruction ordering)
---
### T2: Add Capstone Bindings for ARM64/Other Architectures
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Add Capstone disassembler bindings for ARM64 and other non-x86 architectures.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Binary/Disassembly/`
**Acceptance Criteria**:
- [ ] `CapstoneDisassembler` class for ARM64
- [ ] Architecture detection from ELF/Mach-O headers
- [ ] Extract BL/BLR instructions for ARM64
- [ ] Fallback to symbol-only analysis if arch unsupported
---
### T3: Implement Direct Call Edge Extraction from .text
**Assignee**: Scanner Team
**Story Points**: 8
**Status**: TODO
**Description**:
Extract direct call edges by disassembling `.text` section and resolving call targets.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.CallGraph/Extraction/Binary/`
**Acceptance Criteria**:
- [ ] `DirectCallExtractor` class
- [ ] Parse call instruction operands to resolve target addresses
- [ ] Map addresses to symbols from symbol table
- [ ] Handle relative and absolute call addressing
- [ ] Create edges with `CallKind.Direct` and address-based `CallSite`
- [ ] Performance: <5s for typical 10MB binary
**Edge Model**:
```csharp
new CallGraphEdge(
    SourceId: $"native:{binary}/{caller_symbol}",
    TargetId: $"native:{binary}/{callee_symbol}",
    CallKind: CallKind.Direct,
    CallSite: $"0x{instruction_address:X}")
```
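The core of "parse call instruction operands to resolve target addresses" is simple address arithmetic. A minimal sketch for the common x86-64 case (near call, opcode `E8` with a signed 32-bit displacement); the class name is hypothetical, and the resolved address would then be looked up in the symbol table to name the callee:

```csharp
public static class CallTargetMath
{
    // Target of a relative near call = address of the *next* instruction
    // (instruction address + instruction length) plus the signed rel32.
    public static ulong ResolveRelativeCall(ulong instrAddress, int instrLength, int rel32) =>
        (ulong)((long)instrAddress + instrLength + rel32);
}
```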
---
### T4: PLT Stub → GOT Resolution for ELF
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Resolve PLT stubs to their GOT entries to determine actual call targets for ELF binaries.
**Acceptance Criteria**:
- [ ] Parse `.plt` section entries
- [ ] Map PLT stubs to GOT slots
- [ ] Resolve GOT entries to symbol names via `.rela.plt`
- [ ] Create edges with `CallKind.Plt` type
- [ ] Handle lazy binding patterns
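As a conceptual sketch of the stub-to-symbol mapping (the real code parses the ELF `.plt` and `.rela.plt` sections): on x86-64, PLT entry 0 is the resolver stub, and entry i (i ≥ 1) conventionally pairs with the (i−1)-th `.rela.plt` relocation, whose symbol names the imported function. Class and method names here are hypothetical.

```csharp
using System.Collections.Generic;

public static class PltResolutionSketch
{
    public static string? ResolvePltStub(int pltEntryIndex, IReadOnlyList<string> relaPltSymbols) =>
        pltEntryIndex >= 1 && pltEntryIndex <= relaPltSymbols.Count
            ? relaPltSymbols[pltEntryIndex - 1]
            : null; // entry 0 (resolver) or out of range: no import to attribute
}
```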
---
### T5: IAT Thunk Resolution for PE
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Resolve Import Address Table thunks for PE binaries to connect call sites to imported functions.
**Acceptance Criteria**:
- [ ] Parse IAT from PE optional header
- [ ] Map thunk addresses to import names
- [ ] Create edges with `CallKind.Iat` type
- [ ] Handle delay-load imports
---
### T6: Dynamic Loading Detection (dlopen/LoadLibrary)
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Detect calls to dynamic loading functions and infer loaded library candidates.
**Acceptance Criteria**:
- [ ] Detect calls to `dlopen`, `dlsym` (ELF)
- [ ] Detect calls to `LoadLibraryA/W`, `GetProcAddress` (PE)
- [ ] Extract string literal arguments where resolvable
- [ ] Create edges with `CallKind.Dynamic` and lower confidence
- [ ] Mark as `EdgeConfidence.Medium` for heuristic matches
---
### T7: String Literal Analysis for Dynamic Library Candidates
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Analyze string literals near dynamic loading calls to infer library names.
**Acceptance Criteria**:
- [ ] Extract `.rodata`/`.rdata` string references
- [ ] Correlate strings with `dlopen`/`LoadLibrary` call sites
- [ ] Match patterns: `lib*.so*`, `*.dll`
- [ ] Add inferred libs as `unknown` nodes with `is_dynamic=true`
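The `lib*.so*` / `*.dll` pattern match could be sketched as below; the class name and exact regexes are assumptions, not the implemented matcher:

```csharp
using System.Text.RegularExpressions;

// Classify extracted .rodata/.rdata string literals that look like
// dynamic-library names: lib*.so* on ELF, *.dll on PE.
public static class DynamicLibCandidateMatcher
{
    private static readonly Regex ElfPattern =
        new(@"^lib[\w.+-]*\.so(\.\d+)*$", RegexOptions.Compiled);

    private static readonly Regex PePattern =
        new(@"^[\w.+-]+\.dll$", RegexOptions.IgnoreCase | RegexOptions.Compiled);

    public static bool IsLibraryCandidate(string literal) =>
        ElfPattern.IsMatch(literal) || PePattern.IsMatch(literal);
}
```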
---
### T8: Update BinaryCallGraphExtractor Tests
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Add comprehensive tests for new call edge extraction capabilities.
**Implementation Path**: `src/Scanner/__Tests/StellaOps.Scanner.CallGraph.Tests/`
**Acceptance Criteria**:
- [ ] Test fixtures for ELF x86_64, PE x64, Mach-O ARM64
- [ ] Direct call extraction tests
- [ ] PLT/IAT resolution tests
- [ ] Dynamic loading detection tests
- [ ] Determinism tests (same binary → same edges)
- [ ] Golden output comparison
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | | Scanner Team | Integrate iced-x86 for x86/x64 Disassembly |
| 2 | T2 | TODO | | Scanner Team | Add Capstone Bindings for ARM64 |
| 3 | T3 | TODO | T1, T2 | Scanner Team | Direct Call Edge Extraction from .text |
| 4 | T4 | TODO | T3 | Scanner Team | PLT Stub → GOT Resolution for ELF |
| 5 | T5 | TODO | T3 | Scanner Team | IAT Thunk Resolution for PE |
| 6 | T6 | TODO | T3 | Scanner Team | Dynamic Loading Detection |
| 7 | T7 | TODO | T6 | Scanner Team | String Literal Analysis |
| 8 | T8 | TODO | T1-T7 | Scanner Team | Update BinaryCallGraphExtractor Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Disassembler choice | Decision | Scanner Team | iced-x86 for x86/x64 (pure .NET), Capstone for ARM64 |
| Performance budget | Risk | Scanner Team | Disassembly adds latency; cap at 5s for 10MB binary |
| Stripped binary handling | Decision | Scanner Team | Use address-based IDs when symbols unavailable |
| Conservative unknowns | Decision | Scanner Team | Mark unresolved indirect calls as Unknown edges |
---
**Sprint Status**: TODO (0/8 tasks complete)

# Sprint Epic 3800 · Layered Binary + Call-Stack Reachability
## Overview
This epic implements the two-stage reachability map as described in the product advisory "Layered binary + callstack reachability" (20-Dec-2025). It extends Stella Ops' reachability analysis with:
1. **Deeper binary analysis** - Disassembly-based call edge extraction
2. **CVE→Symbol mapping** - Connect vulnerabilities to specific binary functions
3. **Attestable slices** - Minimal proof units for triage decisions
4. **Query & replay APIs** - On-demand reachability queries with verification
5. **VEX automation** - Auto-generate `code_not_reachable` justifications
6. **Runtime traces** - eBPF/ETW-based observed path evidence
7. **OCI storage & CLI** - Artifact management and command-line tools
## Sprint Breakdown
| Sprint | Topic | Tasks | Status |
|--------|-------|-------|--------|
| [3800.0001.0001](SPRINT_3800_0001_0001_binary_call_edge_enhancement.md) | Binary Call-Edge Enhancement | 8 | TODO |
| [3810.0001.0001](SPRINT_3810_0001_0001_cve_symbol_mapping_slice_format.md) | CVE→Symbol Mapping & Slice Format | 7 | TODO |
| [3820.0001.0001](SPRINT_3820_0001_0001_slice_query_replay_apis.md) | Slice Query & Replay APIs | 7 | TODO |
| [3830.0001.0001](SPRINT_3830_0001_0001_vex_integration_policy_binding.md) | VEX Integration & Policy Binding | 6 | TODO |
| [3840.0001.0001](SPRINT_3840_0001_0001_runtime_trace_merge.md) | Runtime Trace Merge | 7 | TODO |
| [3850.0001.0001](SPRINT_3850_0001_0001_oci_storage_cli.md) | OCI Storage & CLI | 8 | TODO |
**Total Tasks**: 43
**Status**: TODO (0/43 complete)
## Recommended Execution Order
```
Sprint 3810 (CVE→Symbol + Slices) ─────────────────┐
├──► Sprint 3820 (Query APIs) ──► Sprint 3830 (VEX)
Sprint 3800 (Binary Enhancement) ──────────────────┘
Sprint 3850 (OCI + CLI) ─────────────────────────────► (parallel with 3830)
Sprint 3840 (Runtime Traces) ────────────────────────► (optional, parallel with 3830-3850)
```
## Key Deliverables
### Schemas & Contracts
| Artifact | Location | Sprint |
|----------|----------|--------|
| Slice predicate schema | `docs/schemas/stellaops-slice.v1.schema.json` | 3810 |
| Slice OCI media type | `application/vnd.stellaops.slice.v1+json` | 3850 |
| Runtime event schema | `docs/schemas/runtime-call-event.schema.json` | 3840 |
### APIs
| Endpoint | Method | Description | Sprint |
|----------|--------|-------------|--------|
| `/api/slices/query` | POST | Query reachability for CVE/symbols | 3820 |
| `/api/slices/{digest}` | GET | Retrieve attested slice | 3820 |
| `/api/slices/replay` | POST | Verify slice reproducibility | 3820 |
### CLI Commands
| Command | Description | Sprint |
|---------|-------------|--------|
| `stella binary submit` | Submit binary graph | 3850 |
| `stella binary info` | Display graph info | 3850 |
| `stella binary symbols` | List symbols | 3850 |
| `stella binary verify` | Verify attestation | 3850 |
### Documentation
| Document | Location | Sprint |
|----------|----------|--------|
| Slice schema specification | `docs/reachability/slice-schema.md` | 3810 |
| CVE→Symbol mapping guide | `docs/reachability/cve-symbol-mapping.md` | 3810 |
| Replay verification guide | `docs/reachability/replay-verification.md` | 3820 |
## Dependencies
### External Libraries
| Library | Purpose | Sprint |
|---------|---------|--------|
| iced-x86 | x86/x64 disassembly | 3800 |
| Capstone | ARM64 disassembly | 3800 |
| libbpf/cilium-ebpf | eBPF collector | 3840 |
### Cross-Module Dependencies
| From | To | Integration Point |
|------|-----|-------------------|
| Scanner | Concelier | Advisory feed for CVE→symbol mapping |
| Scanner | Attestor | DSSE signing for slices |
| Scanner | Excititor | Slice verdict consumption |
| Policy | Scanner | Unknowns budget enforcement |
## Risk Register
| Risk | Impact | Mitigation | Owner |
|------|--------|------------|-------|
| Disassembly performance | High | Cap at 5s per 10MB binary | Scanner Team |
| Missing CVE→symbol mappings | Medium | Fallback to package-level | Scanner Team |
| eBPF kernel compatibility | Medium | Require 5.8+, provide fallback | Platform Team |
| OCI registry compatibility | Low | Test against major registries | Scanner Team |
## Success Metrics
1. **Coverage**: >80% of binary CVEs have symbol-level mapping
2. **Performance**: Slice query <2s for typical graphs
3. **Accuracy**: Replay match rate >99.9%
4. **Adoption**: CLI commands used in >50% of offline deployments
## Related Documentation
- [Product Advisory](../product-advisories/archived/2025-12-22-binary-reachability/20-Dec-2025%20-%20Layered%20binary%20+%20callstack%20reachability.md)
- [Binary Reachability Schema](../reachability/binary-reachability-schema.md)
- [RichGraph Contract](../contracts/richgraph-v1.md)
- [Function-Level Evidence](../reachability/function-level-evidence.md)
---
_Created: 2025-12-22. Owner: Scanner Guild._

# Sprint 3810.0001.0001 · CVE→Symbol Mapping & Slice Format
## Topic & Scope
- Implement CVE to symbol/function mapping service for binary reachability queries.
- Define and implement the `ReachabilitySlice` schema as minimal attestable proof units.
- Create slice extraction logic to generate focused subgraphs for specific CVE queries.
- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
## Dependencies & Concurrency
- **Upstream**: Benefits from Sprint 3800 (richer call edges)
- **Downstream**: Sprint 3820 (Query APIs) consumes slices
- **Safe to parallelize with**: Sprint 3800, Sprint 3830
## Documentation Prerequisites
- `docs/product-advisories/archived/2025-12-22-binary-reachability/20-Dec-2025 - Layered binary + callstack reachability.md`
- `docs/reachability/slice-schema.md` (created this sprint)
- `docs/modules/concelier/architecture.md`
---
## Tasks
### T1: Define ReachabilitySlice Schema (DSSE Predicate)
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Define the DSSE predicate schema for attestable reachability slices.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] `ReachabilitySlice` record with all required fields
- [ ] JSON schema at `docs/schemas/stellaops-slice.v1.schema.json`
- [ ] Predicate type URI: `https://stellaops.dev/predicates/reachability-slice/v1`
- [ ] Fields: inputs, query, subgraph, verdict, manifest
**Schema Spec**:
```csharp
public sealed record ReachabilitySlice
{
    [JsonPropertyName("_type")]
    public string Type { get; init; } = "https://stellaops.dev/predicates/reachability-slice/v1";

    [JsonPropertyName("inputs")]
    public required SliceInputs Inputs { get; init; }

    [JsonPropertyName("query")]
    public required SliceQuery Query { get; init; }

    [JsonPropertyName("subgraph")]
    public required SliceSubgraph Subgraph { get; init; }

    [JsonPropertyName("verdict")]
    public required SliceVerdict Verdict { get; init; }

    [JsonPropertyName("manifest")]
    public required ScanManifest Manifest { get; init; }
}

public sealed record SliceQuery
{
    public string? CveId { get; init; }
    public ImmutableArray<string> TargetSymbols { get; init; }
    public ImmutableArray<string> Entrypoints { get; init; }
    public string? PolicyHash { get; init; }
}

public sealed record SliceVerdict
{
    public required string Status { get; init; } // "reachable" | "unreachable" | "unknown"
    public required double Confidence { get; init; }
    public ImmutableArray<string> Reasons { get; init; }
    public ImmutableArray<string> PathWitnesses { get; init; }
}
```
---
### T2: Concelier → Scanner Advisory Feed Integration
**Assignee**: Scanner Team + Concelier Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create integration layer to consume CVE advisory data from Concelier for symbol mapping.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Advisory/`
**Acceptance Criteria**:
- [ ] `IAdvisoryClient` interface for Concelier queries
- [ ] `AdvisoryClient` HTTP implementation
- [ ] Query by CVE ID → get affected packages, functions, symbols
- [ ] Cache advisory data with TTL (1 hour default)
- [ ] Offline fallback to local advisory bundle
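The "cache advisory data with TTL" criterion could be sketched as below. This is a hypothetical standalone cache for illustration; the production client would more likely wrap `IMemoryCache` with entry expiration.

```csharp
using System;
using System.Collections.Generic;

// Entries older than the TTL are fetched again on the next lookup.
public sealed class TtlCache<TKey, TValue> where TKey : notnull
{
    private readonly Dictionary<TKey, (TValue Value, DateTimeOffset StoredAt)> _entries = new();
    private readonly TimeSpan _ttl;

    public TtlCache(TimeSpan ttl) => _ttl = ttl;

    public TValue GetOrFetch(TKey key, Func<TKey, TValue> fetch, DateTimeOffset now)
    {
        if (_entries.TryGetValue(key, out var entry) && now - entry.StoredAt < _ttl)
            return entry.Value; // still fresh

        var value = fetch(key);
        _entries[key] = (value, now);
        return value;
    }
}
```

Passing `now` explicitly keeps expiry deterministic and testable, which matters for the offline/replay posture of the platform.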
---
### T3: Vulnerability Surface Service for CVE → Symbols
**Assignee**: Scanner Team
**Story Points**: 8
**Status**: TODO
**Description**:
Build service that maps CVE identifiers to affected binary symbols/functions.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.VulnSurfaces/`
**Acceptance Criteria**:
- [ ] `IVulnSurfaceService` interface
- [ ] `VulnSurfaceService` implementation
- [ ] Query: CVE + PURL → list of affected symbols
- [ ] Support for function-level granularity
- [ ] Handle missing mappings gracefully (fall back to all public symbols of the package)
- [ ] Integration with the existing `StellaOps.Scanner.VulnSurfaces` code
**Query Model**:
```csharp
public interface IVulnSurfaceService
{
Task<VulnSurfaceResult> GetAffectedSymbolsAsync(
string cveId,
string purl,
CancellationToken ct = default);
}
public sealed record VulnSurfaceResult
{
public required string CveId { get; init; }
public required string Purl { get; init; }
public required ImmutableArray<AffectedSymbol> Symbols { get; init; }
public required string Source { get; init; } // "patch-diff" | "advisory" | "heuristic"
public required double Confidence { get; init; }
}
```
---
### T4: Slice Extractor (Subgraph from Full Graph)
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement algorithm to extract minimal subgraph containing paths from entrypoints to target symbols.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] `SliceExtractor` class
- [ ] Input: full RichGraph, query (target symbols, entrypoints)
- [ ] Output: minimal subgraph with only relevant nodes/edges
- [ ] BFS/DFS from targets to find all paths to entrypoints
- [ ] Include gate annotations on path edges
- [ ] Deterministic extraction (stable ordering)
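One way to realize the extraction deterministically, sketched with plain adjacency dictionaries standing in for the real RichGraph types (which this sketch does not assume):

```csharp
using System;
using System.Collections.Generic;

public static class SliceExtractorSketch
{
    // A node lies on some entrypoint→target path iff it is forward-reachable
    // from an entrypoint AND backward-reachable from a target.
    public static IReadOnlyCollection<string> ExtractNodes(
        IReadOnlyDictionary<string, string[]> forwardEdges,
        IReadOnlyDictionary<string, string[]> reverseEdges,
        IEnumerable<string> entrypoints,
        IEnumerable<string> targets)
    {
        var fromEntrypoints = Bfs(forwardEdges, entrypoints);
        var toTargets = Bfs(reverseEdges, targets);

        // SortedSet gives the stable ordering the determinism criterion requires.
        var slice = new SortedSet<string>(fromEntrypoints, StringComparer.Ordinal);
        slice.IntersectWith(toTargets);
        return slice;
    }

    private static HashSet<string> Bfs(
        IReadOnlyDictionary<string, string[]> edges, IEnumerable<string> seeds)
    {
        var seen = new HashSet<string>(seeds);
        var queue = new Queue<string>(seen);
        while (queue.TryDequeue(out var node))
            foreach (var next in edges.GetValueOrDefault(node, Array.Empty<string>()))
                if (seen.Add(next))
                    queue.Enqueue(next);
        return seen;
    }
}
```

Edges of the subgraph (and their gate annotations) would then be the original edges whose endpoints both survive the intersection.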
---
### T5: Slice DSSE Signing with Content-Addressed Storage
**Assignee**: Scanner Team + Attestor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Sign extracted slices as DSSE envelopes and store in CAS.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] `SliceDsseSigner` using existing DSSE infrastructure
- [ ] Content-addressed storage: `cas://slices/{blake3-hash}`
- [ ] Slice digest computation (deterministic)
- [ ] Return `slice_digest` for retrieval
---
### T6: Verdict Computation (Reachable/Unreachable/Unknown)
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Compute verdict for slice based on path analysis and unknowns.
**Acceptance Criteria**:
- [ ] `VerdictComputer` class
- [ ] "reachable": at least one path exists with high confidence
- [ ] "unreachable": no paths found and no unknowns on boundaries
- [ ] "unknown": unknowns present on potential paths
- [ ] Confidence score based on edge confidence aggregation
- [ ] Reason codes for verdict explanation
**Verdict Rules**:
```
reachable   := path_exists AND min_path_confidence > 0.7
unreachable := NOT path_exists AND unknown_count == 0
unknown     := (path_exists AND (unknown_count > threshold OR min_path_confidence < 0.5))
               OR (NOT path_exists AND unknown_count > 0)
```
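The rules above transcribe into a small pure function. The ordering here (unknowns evaluated first) follows the conservative "unknowns → unknown verdict" decision recorded below; the fall-through case for mid-range confidence is an assumption:

```csharp
public static class VerdictComputerSketch
{
    public static string Compute(
        bool pathExists, double minPathConfidence, int unknownCount, int unknownThreshold)
    {
        // Unknowns dominate (conservative), so test them first.
        if (pathExists && (unknownCount > unknownThreshold || minPathConfidence < 0.5))
            return "unknown";
        if (!pathExists && unknownCount > 0)
            return "unknown";

        if (pathExists && minPathConfidence > 0.7)
            return "reachable";
        if (!pathExists)                       // here unknownCount == 0
            return "unreachable";

        return "unknown";                      // path exists, confidence in [0.5, 0.7]
    }
}
```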
---
### T7: Slice Schema JSON Validation Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Description**:
Create tests validating slice JSON against schema.
**Implementation Path**: `src/Scanner/__Tests/StellaOps.Scanner.Reachability.Tests/Slices/`
**Acceptance Criteria**:
- [ ] Schema validation tests
- [ ] Round-trip serialization tests
- [ ] Determinism tests (same query → same slice bytes)
- [ ] Golden output comparison
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define ReachabilitySlice Schema |
| 2 | T2 | TODO | — | Scanner + Concelier | Advisory Feed Integration |
| 3 | T3 | TODO | T2 | Scanner Team | Vulnerability Surface Service |
| 4 | T4 | TODO | T1 | Scanner Team | Slice Extractor |
| 5 | T5 | TODO | T1, T4 | Scanner + Attestor | Slice DSSE Signing |
| 6 | T6 | TODO | T4 | Scanner Team | Verdict Computation |
| 7 | T7 | TODO | T1-T6 | Scanner Team | Schema Validation Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Slice granularity | Decision | Scanner Team | One slice per CVE+PURL query |
| Unknown handling | Decision | Scanner Team | Conservative: unknowns → unknown verdict |
| Cache TTL | Decision | Scanner Team | 1 hour for advisory data, configurable |
| Missing CVE→symbol mappings | Risk | Scanner Team | Fallback to package-level (all public symbols) |
---
**Sprint Status**: TODO (0/7 tasks complete)


@@ -0,0 +1,241 @@
# Sprint 3820.0001.0001 · Slice Query & Replay APIs
## Topic & Scope
- Implement query API for on-demand reachability slice generation.
- Implement slice retrieval by digest.
- Implement replay API with byte-for-byte verification.
- **Working directory:** `src/Scanner/StellaOps.Scanner.WebService/Endpoints/` and `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Replay/`
## Dependencies & Concurrency
- **Upstream**: Sprint 3810 (Slice Format) must be complete
- **Downstream**: Sprint 3830 (VEX Integration) consumes slice verdicts
- **Safe to parallelize with**: Sprint 3840 (Runtime Traces)
## Documentation Prerequisites
- `docs/reachability/slice-schema.md`
- `docs/reachability/replay-verification.md` (created this sprint)
- `docs/api/scanner-api.md`
---
## Tasks
### T1: POST /api/slices/query Endpoint
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement query endpoint that generates reachability slices on demand.
**Implementation Path**: `src/Scanner/StellaOps.Scanner.WebService/Endpoints/SliceEndpoints.cs`
**Acceptance Criteria**:
- [ ] `POST /api/slices/query` endpoint
- [ ] Request body: `{ cve, symbols[], entrypoints[], policy?, scanId }`
- [ ] Response: `{ sliceDigest, verdict, confidence, paths[], cacheHit }`
- [ ] Generate slice using `SliceExtractor` from Sprint 3810
- [ ] Sign and store slice in CAS
- [ ] Return 202 Accepted for async generation of large slices
**Request/Response Contracts**:
```csharp
public sealed record SliceQueryRequest
{
public string? CveId { get; init; }
public ImmutableArray<string> Symbols { get; init; }
public ImmutableArray<string> Entrypoints { get; init; }
public string? PolicyHash { get; init; }
public required string ScanId { get; init; }
}
public sealed record SliceQueryResponse
{
public required string SliceDigest { get; init; }
public required string Verdict { get; init; }
public required double Confidence { get; init; }
public ImmutableArray<string> PathWitnesses { get; init; }
public required bool CacheHit { get; init; }
public string? JobId { get; init; } // For async generation
}
```
---
### T2: GET /api/slices/{digest} Endpoint
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement retrieval endpoint for attested slices by digest.
**Implementation Path**: `src/Scanner/StellaOps.Scanner.WebService/Endpoints/SliceEndpoints.cs`
**Acceptance Criteria**:
- [ ] `GET /api/slices/{digest}` endpoint
- [ ] Return DSSE envelope with slice predicate
- [ ] Support `Accept: application/json` for JSON slice
- [ ] Support `Accept: application/dsse+json` for DSSE envelope
- [ ] 404 if slice not found in CAS
---
### T3: Slice Caching Layer with TTL
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement caching for generated slices to avoid redundant computation.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] `ISliceCache` interface
- [ ] In-memory cache with configurable TTL (default 1 hour)
- [ ] Cache key: hash of (scanId, query parameters)
- [ ] Cache eviction on memory pressure
- [ ] Metrics: cache hit/miss rate
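A sketch of the cache-key derivation over the `SliceQueryRequest` contract from T1. SHA-256 over a canonical string is an assumption (the codebase may prefer BLAKE3); sorting the arrays keeps semantically equal queries on one key:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class SliceCacheKeySketch
{
    public static string Compute(string scanId, SliceQueryRequest query)
    {
        // Canonicalize: stable field order, sorted arrays, '\n' separators.
        var canonical = string.Join("\n",
            scanId,
            query.CveId ?? string.Empty,
            string.Join(",", query.Symbols.Sort(StringComparer.Ordinal)),
            string.Join(",", query.Entrypoints.Sort(StringComparer.Ordinal)),
            query.PolicyHash ?? string.Empty);

        var digest = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return Convert.ToHexString(digest).ToLowerInvariant();
    }
}
```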
---
### T4: POST /api/slices/replay Endpoint
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement the replay endpoint that recomputes a slice and verifies a byte-for-byte match against the original.
**Implementation Path**: `src/Scanner/StellaOps.Scanner.WebService/Endpoints/SliceEndpoints.cs`
**Acceptance Criteria**:
- [ ] `POST /api/slices/replay` endpoint
- [ ] Request body: `{ sliceDigest }`
- [ ] Response: `{ match, originalDigest, recomputedDigest, diff? }`
- [ ] Rehydrate inputs from CAS
- [ ] Recompute slice with same parameters
- [ ] Compare byte-for-byte
**Response Contract**:
```csharp
public sealed record ReplayResponse
{
public required bool Match { get; init; }
public required string OriginalDigest { get; init; }
public required string RecomputedDigest { get; init; }
public SliceDiff? Diff { get; init; } // Only if !Match
}
public sealed record SliceDiff
{
public ImmutableArray<string> MissingNodes { get; init; }
public ImmutableArray<string> ExtraNodes { get; init; }
public ImmutableArray<string> MissingEdges { get; init; }
public ImmutableArray<string> ExtraEdges { get; init; }
public string? VerdictDiff { get; init; }
}
```
---
### T5: Replay Verification with Diff Output
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement detailed diff computation when replay doesn't match.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Replay/`
**Acceptance Criteria**:
- [ ] `SliceDiffComputer` class
- [ ] Compare node sets (added/removed)
- [ ] Compare edge sets (added/removed)
- [ ] Compare verdicts
- [ ] Human-readable diff output
- [ ] Deterministic diff ordering
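Given the `SliceDiff` contract from T4, the comparison reduces to ordered set differences. The identifier-set inputs here are an assumption about how nodes and edges are keyed:

```csharp
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

public static class SliceDiffComputerSketch
{
    public static SliceDiff Compute(
        IReadOnlySet<string> originalNodes, IReadOnlySet<string> recomputedNodes,
        IReadOnlySet<string> originalEdges, IReadOnlySet<string> recomputedEdges,
        string originalVerdict, string recomputedVerdict)
    {
        // Ordinal ordering makes the diff output deterministic.
        static ImmutableArray<string> OnlyIn(IReadOnlySet<string> a, IReadOnlySet<string> b)
            => a.Except(b).OrderBy(x => x, StringComparer.Ordinal).ToImmutableArray();

        return new SliceDiff
        {
            MissingNodes = OnlyIn(originalNodes, recomputedNodes),
            ExtraNodes   = OnlyIn(recomputedNodes, originalNodes),
            MissingEdges = OnlyIn(originalEdges, recomputedEdges),
            ExtraEdges   = OnlyIn(recomputedEdges, originalEdges),
            VerdictDiff  = originalVerdict == recomputedVerdict
                ? null
                : $"{originalVerdict} -> {recomputedVerdict}",
        };
    }
}
```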
---
### T6: Integration Tests for Slice Workflow
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
End-to-end tests for slice query and replay workflow.
**Implementation Path**: `src/Scanner/__Tests/StellaOps.Scanner.WebService.Tests/Integration/`
**Acceptance Criteria**:
- [ ] Query → retrieve → verify workflow test
- [ ] Replay match test
- [ ] Replay mismatch test (with tampered inputs)
- [ ] Cache hit test
- [ ] Async generation test for large slices
---
### T7: OpenAPI Spec Updates
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Description**:
Update OpenAPI specification with new slice endpoints.
**Implementation Path**: `docs/api/openapi/scanner.yaml`
**Acceptance Criteria**:
- [ ] Document `POST /api/slices/query`
- [ ] Document `GET /api/slices/{digest}`
- [ ] Document `POST /api/slices/replay`
- [ ] Request/response schemas
- [ ] Error responses
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | Sprint 3810 | Scanner Team | POST /api/slices/query Endpoint |
| 2 | T2 | TODO | T1 | Scanner Team | GET /api/slices/{digest} Endpoint |
| 3 | T3 | TODO | T1 | Scanner Team | Slice Caching Layer |
| 4 | T4 | TODO | T1, T2 | Scanner Team | POST /api/slices/replay Endpoint |
| 5 | T5 | TODO | T4 | Scanner Team | Replay Verification with Diff |
| 6 | T6 | TODO | T1-T5 | Scanner Team | Integration Tests |
| 7 | T7 | TODO | T1-T4 | Scanner Team | OpenAPI Spec Updates |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Async vs sync query | Decision | Scanner Team | Sync for small graphs (<10k nodes), async for larger |
| Cache eviction | Decision | Scanner Team | LRU with 1GB memory cap |
| Replay determinism | Risk | Scanner Team | Must ensure all inputs are CAS-addressed |
| Rate limiting | Decision | Scanner Team | 10 queries/min per tenant default |
---
**Sprint Status**: TODO (0/7 tasks complete)


@@ -0,0 +1,234 @@
# Sprint 3830.0001.0001 · VEX Integration & Policy Binding
## Topic & Scope
- Connect reachability slices to VEX decision automation.
- Implement automatic `code_not_reachable` justification generation.
- Add policy binding to slices with strict/forward/any modes.
- Integrate unknowns budget enforcement into policy evaluation.
- **Working directory:** `src/Excititor/__Libraries/StellaOps.Excititor.Core/` and `src/Policy/__Libraries/`
## Dependencies & Concurrency
- **Upstream**: Sprint 3810 (Slice Format), Sprint 3820 (Query APIs)
- **Downstream**: None (terminal feature sprint)
- **Safe to parallelize with**: Sprint 3840 (Runtime Traces), Sprint 3850 (CLI)
## Documentation Prerequisites
- `docs/reachability/slice-schema.md`
- `docs/modules/excititor/architecture.md`
- `docs/modules/policy/architecture.md`
---
## Tasks
### T1: Excititor ← Slice Verdict Consumption
**Assignee**: Excititor Team + Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Enable Excititor to consume slice verdicts and use them in VEX decisions.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Reachability/`
**Acceptance Criteria**:
- [ ] `ISliceVerdictConsumer` interface
- [ ] `SliceVerdictConsumer` implementation
- [ ] Query Scanner slice API for CVE+PURL combinations
- [ ] Map slice verdicts to VEX status influence
- [ ] Cache verdicts per scan lifecycle
**Integration Flow**:
```
Finding (CVE+PURL)
→ Query slice verdict
→ If unreachable: suggest not_affected
→ If reachable: maintain affected status
→ If unknown: flag for manual triage
```
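The flow above is essentially a status-suggestion mapping; a sketch, with illustrative names (and note that any unreachable → not_affected suggestion still passes through human approval per the decisions table):

```csharp
public static class SliceVerdictMappingSketch
{
    // Returns a suggested VEX status, or null when manual triage is needed.
    public static string? SuggestStatus(string sliceVerdict) => sliceVerdict switch
    {
        "unreachable" => "not_affected",  // paired with code_not_reachable justification
        "reachable"   => "affected",
        _             => null,            // "unknown" → flag for manual triage
    };
}
```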
---
### T2: Auto-Generate code_not_reachable Justification
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Automatically generate VEX justification when slice verdict is "unreachable".
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Justification/`
**Acceptance Criteria**:
- [ ] `ReachabilityJustificationGenerator` class
- [ ] Generate `code_not_reachable` justification with evidence
- [ ] Include slice digest as evidence reference
- [ ] Include path analysis summary in justification text
- [ ] Support OpenVEX, CSAF, CycloneDX justification formats
**Justification Template**:
```json
{
"category": "code_not_reachable",
"details": "Static analysis determined no execution path from application entrypoints to vulnerable function.",
"evidence": {
"slice_digest": "blake3:abc123...",
"slice_uri": "cas://slices/blake3:abc123...",
"analyzer_version": "scanner.native:1.2.0",
"confidence": 0.95
}
}
```
---
### T3: Policy Binding to Slices (strict/forward/any)
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement policy version binding for slices with validation modes.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] `PolicyBinding` record in slice schema
- [ ] `strict`: Slice invalid if policy changes
- [ ] `forward`: Slice valid with newer policy versions
- [ ] `any`: Slice valid with any policy version
- [ ] Policy hash computation from DSL
- [ ] Validation on slice retrieval
**Binding Schema**:
```csharp
public sealed record PolicyBinding
{
public required string PolicyDigest { get; init; }
public required string PolicyVersion { get; init; }
public required DateTimeOffset BoundAt { get; init; }
public required PolicyBindingMode Mode { get; init; }
}
public enum PolicyBindingMode { Strict, Forward, Any }
```
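Validation on retrieval could then look like the following; the ordinal comparison for `forward` mode is an assumption (a real implementation would use the policy engine's own version ordering):

```csharp
public static class PolicyBindingValidatorSketch
{
    public static bool IsValid(
        PolicyBinding binding, string currentPolicyDigest, string currentPolicyVersion)
        => binding.Mode switch
        {
            // strict: the bound policy must be identical to the current one.
            PolicyBindingMode.Strict  => binding.PolicyDigest == currentPolicyDigest,
            // forward: current policy at or after the bound version is acceptable.
            PolicyBindingMode.Forward => string.CompareOrdinal(
                currentPolicyVersion, binding.PolicyVersion) >= 0,
            // any: valid regardless of policy drift.
            PolicyBindingMode.Any     => true,
            _ => false,
        };
}
```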
---
### T4: Unknowns Budget Enforcement in Policy
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Enforce unknowns budget in policy evaluation for slice-based decisions.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy.Engine/`
**Acceptance Criteria**:
- [ ] `UnknownsBudget` policy rule type
- [ ] Configurable threshold per severity/category
- [ ] Block deployment if unknowns exceed budget
- [ ] Report unknowns count in policy evaluation result
- [ ] Support per-environment budgets
**Policy Rule Example**:
```yaml
rules:
- id: unknowns-budget
type: unknowns_budget
config:
max_critical_unknowns: 0
max_high_unknowns: 5
max_medium_unknowns: 20
fail_action: block
```
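Enforcement of the rule config above can be sketched as a pure check; the record shape and the counts-by-severity input are assumptions about how the engine surfaces unknowns:

```csharp
// Mirrors the YAML config fields above.
public sealed record UnknownsBudget(int MaxCritical, int MaxHigh, int MaxMedium);

public static class UnknownsBudgetRuleSketch
{
    // Returns (block?, reason) so the policy evaluation result can report counts.
    public static (bool Block, string? Reason) Evaluate(
        UnknownsBudget budget, int critical, int high, int medium)
    {
        if (critical > budget.MaxCritical)
            return (true, $"critical unknowns {critical} exceed budget {budget.MaxCritical}");
        if (high > budget.MaxHigh)
            return (true, $"high unknowns {high} exceed budget {budget.MaxHigh}");
        if (medium > budget.MaxMedium)
            return (true, $"medium unknowns {medium} exceed budget {budget.MaxMedium}");
        return (false, null);
    }
}
```

Per-environment budgets would be handled by selecting a different `UnknownsBudget` instance per target environment.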
---
### T5: Feature Flag Gate Conditions in Verdicts
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Include feature flag gate information in slice verdicts.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] Detect feature flag gates on paths (from existing `FeatureFlagDetector`)
- [ ] Include gate conditions in verdict reasons
- [ ] Mark as "conditionally reachable" when gated
- [ ] Specify flag name/condition required for reachability
**Verdict Extension**:
```csharp
public sealed record GatedPath
{
public required string PathId { get; init; }
public required string GateType { get; init; } // "feature_flag", "config", "auth"
public required string GateCondition { get; init; } // "FEATURE_X=true"
public required bool GateSatisfied { get; init; }
}
```
---
### T6: VEX Export with Reachability Evidence
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Include reachability evidence in VEX exports.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Formats.*/`
**Acceptance Criteria**:
- [ ] OpenVEX: Include evidence in statement
- [ ] CSAF: Include in remediation section
- [ ] CycloneDX: Include in analysis metadata
- [ ] Link to slice URI for full evidence
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | Sprint 3820 | Excititor + Scanner | Slice Verdict Consumption |
| 2 | T2 | TODO | T1 | Excititor Team | Auto-Generate code_not_reachable |
| 3 | T3 | TODO | Sprint 3810 | Policy Team | Policy Binding to Slices |
| 4 | T4 | TODO | T3 | Policy Team | Unknowns Budget Enforcement |
| 5 | T5 | TODO | Sprint 3810 | Scanner Team | Feature Flag Gate Conditions |
| 6 | T6 | TODO | T1, T2 | Excititor Team | VEX Export with Evidence |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Auto-justification approval | Decision | Excititor Team | Auto-generated justifications require human approval by default |
| Policy binding default | Decision | Policy Team | Default to `strict` for production |
| Unknowns budget defaults | Decision | Policy Team | Critical=0, High=5, Medium=20 |
| Gated path confidence | Decision | Scanner Team | Gated paths get 0.5x confidence multiplier |
---
**Sprint Status**: TODO (0/6 tasks complete)


@@ -0,0 +1,241 @@
# Sprint 3840.0001.0001 · Runtime Trace Merge
## Topic & Scope
- Implement runtime trace capture via eBPF (Linux) and ETW (Windows).
- Create trace ingestion service for merging observed paths with static analysis.
- Generate "observed path" slices with runtime evidence.
- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/` and `src/Zastava/`
## Dependencies & Concurrency
- **Upstream**: Sprint 3810 (Slice Format) for observed-path slices
- **Downstream**: Enhances Sprint 3830 (VEX Integration) with runtime confidence
- **Safe to parallelize with**: Sprint 3850 (CLI)
## Documentation Prerequisites
- `docs/reachability/runtime-facts.md`
- `docs/reachability/runtime-static-union-schema.md`
- `docs/modules/zastava/architecture.md`
---
## Tasks
### T1: eBPF Collector Design (uprobe-based)
**Assignee**: Scanner Team + Platform Team
**Story Points**: 5
**Status**: TODO
**Description**:
Design eBPF-based function tracing collector using uprobes.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/Ebpf/`
**Acceptance Criteria**:
- [ ] Design document for eBPF collector architecture
- [ ] uprobe attachment strategy for target functions
- [ ] Data format for captured events
- [ ] Ringbuffer configuration for event streaming
- [ ] Security model (CAP_BPF, CAP_PERFMON)
- [ ] Container namespace awareness
**Event Schema**:
```csharp
public sealed record RuntimeCallEvent
{
public required ulong Timestamp { get; init; } // nanoseconds since boot
public required uint Pid { get; init; }
public required uint Tid { get; init; }
public required ulong CallerAddress { get; init; }
public required ulong CalleeAddress { get; init; }
public required string CallerSymbol { get; init; }
public required string CalleeSymbol { get; init; }
public required string BinaryPath { get; init; }
}
```
---
### T2: Linux eBPF Collector Implementation
**Assignee**: Platform Team
**Story Points**: 8
**Status**: TODO
**Description**:
Implement eBPF collector for Linux using libbpf or bpf2go.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/Ebpf/`
**Acceptance Criteria**:
- [ ] eBPF program for uprobe tracing (BPF CO-RE)
- [ ] User-space loader and event reader
- [ ] Symbol resolution via the target binary's symbol tables (`.symtab`/`.dynsym`)
- [ ] Ringbuffer-based event streaming
- [ ] Handle ASLR via /proc/pid/maps
- [ ] Graceful degradation without eBPF support
**Technology Choice**:
- Preferred: `bpf2go` (part of the `cilium/ebpf` project) for a Go-based loader
- Alternative: `libbpf-bootstrap` for a C/libbpf-based loader
---
### T3: ETW Collector for Windows
**Assignee**: Platform Team
**Story Points**: 8
**Status**: TODO
**Description**:
Implement ETW-based function tracing for Windows.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/Etw/`
**Acceptance Criteria**:
- [ ] ETW session for CLR and native events
- [ ] Microsoft-Windows-DotNETRuntime provider subscription
- [ ] Stack walking for call chains
- [ ] Symbol resolution via DbgHelp
- [ ] Container-aware (process isolation)
- [ ] Admin privilege handling
---
### T4: Trace Ingestion Service
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create service for ingesting runtime traces and storing in normalized format.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Runtime/Ingestion/`
**Acceptance Criteria**:
- [ ] `ITraceIngestionService` interface
- [ ] `TraceIngestionService` implementation
- [ ] Accept events from eBPF/ETW collectors
- [ ] Normalize to common `RuntimeCallEvent` format
- [ ] Batch writes to storage
- [ ] Deduplication of repeated call patterns
- [ ] CAS storage for trace files
---
### T5: Runtime → Static Graph Merge Algorithm
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement algorithm to merge runtime observations with static call graphs.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Runtime/`
**Acceptance Criteria**:
- [ ] `RuntimeStaticMerger` class
- [ ] Match runtime events to static graph nodes by symbol
- [ ] Add "observed" annotation to edges
- [ ] Add new edges for runtime-only paths (dynamic dispatch)
- [ ] Timestamp metadata for observation recency
- [ ] Confidence boost for observed paths
**Merge Rules**:
```
For each runtime edge (A → B):
If static edge exists:
Mark edge as "observed"
Add observation timestamp
Boost confidence to 1.0
Else:
Add edge with origin="runtime"
Set confidence based on observation count
```
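The merge rules above translate into a keyed upsert over edges. The `MergedEdge` shape and the observation-count confidence formula are illustrative assumptions:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record MergedEdge(
    string From, string To, string Origin, double Confidence,
    bool Observed, DateTimeOffset? ObservedAt);

public static class RuntimeStaticMergerSketch
{
    public static Dictionary<(string From, string To), MergedEdge> Merge(
        IEnumerable<MergedEdge> staticEdges,
        IEnumerable<(string From, string To, DateTimeOffset At, int Count)> runtimeEdges)
    {
        var merged = staticEdges.ToDictionary(e => (e.From, e.To));
        foreach (var rt in runtimeEdges)
        {
            if (merged.TryGetValue((rt.From, rt.To), out var existing))
            {
                // Static edge confirmed at runtime: mark observed, boost confidence.
                merged[(rt.From, rt.To)] = existing with
                    { Observed = true, ObservedAt = rt.At, Confidence = 1.0 };
            }
            else
            {
                // Runtime-only edge (e.g. dynamic dispatch): confidence scales
                // with observation count, capped below 1.0 (assumed formula).
                merged[(rt.From, rt.To)] = new MergedEdge(
                    rt.From, rt.To, Origin: "runtime",
                    Confidence: Math.Min(0.99, 0.5 + 0.1 * rt.Count),
                    Observed: true, ObservedAt: rt.At);
            }
        }
        return merged;
    }
}
```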
---
### T6: "Observed Path" Slice Generation
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Generate slices that include runtime-observed paths as evidence.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Slices/`
**Acceptance Criteria**:
- [ ] Include `observed_at` timestamps in slice edges
- [ ] New verdict: "observed_reachable" (highest confidence)
- [ ] Include observation count and recency
- [ ] Link to trace CAS artifacts
**Observed Edge Extension**:
```csharp
public sealed record ObservedEdgeMetadata
{
public required DateTimeOffset FirstObserved { get; init; }
public required DateTimeOffset LastObserved { get; init; }
public required int ObservationCount { get; init; }
public required string TraceDigest { get; init; }
}
```
---
### T7: Trace Retention and Pruning Policies
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Description**:
Implement retention policies for runtime trace data.
**Acceptance Criteria**:
- [ ] Configurable retention period (default 30 days)
- [ ] Automatic pruning of old traces
- [ ] Keep traces referenced by active slices
- [ ] Aggregation of old traces into summaries
- [ ] Storage quota enforcement
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner + Platform | eBPF Collector Design |
| 2 | T2 | TODO | T1 | Platform Team | Linux eBPF Collector |
| 3 | T3 | TODO | — | Platform Team | ETW Collector for Windows |
| 4 | T4 | TODO | T2, T3 | Scanner Team | Trace Ingestion Service |
| 5 | T5 | TODO | T4, Sprint 3810 | Scanner Team | Runtime → Static Merge |
| 6 | T6 | TODO | T5 | Scanner Team | Observed Path Slices |
| 7 | T7 | TODO | T4 | Scanner Team | Trace Retention Policies |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| eBPF kernel version | Risk | Platform Team | Requires kernel 5.8+ for CO-RE; fallback needed for older |
| Performance overhead | Risk | Platform Team | Target <5% CPU overhead in production |
| Privacy/security | Decision | Platform Team | Traces contain execution paths; follow data retention policies |
| Windows container support | Risk | Platform Team | ETW in containers has limitations |
---
**Sprint Status**: TODO (0/7 tasks complete)


@@ -0,0 +1,308 @@
# Sprint 3850.0001.0001 · OCI Storage & CLI
## Topic & Scope
- Implement OCI artifact storage for reachability slices.
- Create `stella binary` CLI command group for binary reachability operations.
- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Storage.Oci/` and `src/Cli/StellaOps.Cli/Commands/Binary/`
## Dependencies & Concurrency
- **Upstream**: Sprint 3810 (Slice Format), Sprint 3820 (Query APIs)
- **Downstream**: None (terminal feature sprint)
- **Safe to parallelize with**: Sprint 3830, Sprint 3840
## Documentation Prerequisites
- `docs/reachability/binary-reachability-schema.md` (BR9 section)
- `docs/24_OFFLINE_KIT.md`
- `src/Cli/StellaOps.Cli/AGENTS.md`
---
## Tasks
### T1: OCI Manifest Builder for Slices
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Description**:
Build OCI manifest structures for storing slices as OCI artifacts.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Storage.Oci/`
**Acceptance Criteria**:
- [ ] `SliceOciManifestBuilder` class
- [ ] Media type: `application/vnd.stellaops.slice.v1+json`
- [ ] Include slice JSON as blob
- [ ] Include DSSE envelope as separate blob
- [ ] Annotations for query metadata
**Manifest Structure**:
```json
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"artifactType": "application/vnd.stellaops.slice.v1+json",
"config": {
"mediaType": "application/vnd.stellaops.slice.config.v1+json",
"digest": "sha256:...",
"size": 123
},
"layers": [
{
"mediaType": "application/vnd.stellaops.slice.v1+json",
"digest": "sha256:...",
"size": 45678,
"annotations": {
"org.stellaops.slice.cve": "CVE-2024-1234",
"org.stellaops.slice.verdict": "unreachable"
}
},
{
"mediaType": "application/vnd.dsse+json",
"digest": "sha256:...",
"size": 2345
}
],
"annotations": {
"org.stellaops.slice.query.cve": "CVE-2024-1234",
"org.stellaops.slice.query.purl": "pkg:npm/lodash@4.17.21",
"org.stellaops.slice.created": "2025-12-22T10:00:00Z"
}
}
```
---
### T2: Registry Push Service (Harbor/Zot)
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement service to push slice artifacts to OCI registries.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Storage.Oci/`
**Acceptance Criteria**:
- [ ] `IOciPushService` interface
- [ ] `OciPushService` implementation
- [ ] Support basic auth and token auth
- [ ] Support Harbor, Zot, GHCR
- [ ] Referrer API support (OCI 1.1)
- [ ] Retry with exponential backoff
- [ ] Offline mode: save to local OCI layout
**Push Flow**:
```
1. Build manifest
2. Push blob: slice.json
3. Push blob: slice.dsse
4. Push config
5. Push manifest
6. (Optional) Create referrer to image
```
---
### T3: stella binary submit Command
**Assignee**: CLI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement CLI command to submit binary for reachability analysis.
**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Binary/`
**Acceptance Criteria**:
- [ ] `stella binary submit --graph <path> --binary <path>`
- [ ] Upload graph to Scanner API
- [ ] Upload binary for analysis (optional)
- [ ] Display submission status
- [ ] Return graph digest
**Usage**:
```bash
# Submit pre-generated graph
stella binary submit --graph ./richgraph.json
# Submit binary for analysis
stella binary submit --binary ./myapp --analyze
# Submit with attestation
stella binary submit --graph ./richgraph.json --sign
```
---
### T4: stella binary info Command
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Description**:
Implement CLI command to display binary graph information.
**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Binary/`
**Acceptance Criteria**:
- [ ] `stella binary info --hash <digest>`
- [ ] Display node/edge counts
- [ ] Display entrypoints
- [ ] Display build-ID and format
- [ ] Display attestation status
- [ ] JSON output option
**Output Format**:
```
Binary Graph: blake3:abc123...
Format: ELF x86_64
Build-ID: gnu-build-id:5f0c7c3c...
Nodes: 1247
Edges: 3891
Entrypoints: 5
Attestation: Signed (Rekor #12345678)
```
---
### T5: stella binary symbols Command
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Description**:
Implement CLI command to list symbols from binary graph.
**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Binary/`
**Acceptance Criteria**:
- [ ] `stella binary symbols --hash <digest>`
- [ ] Filter: `--stripped-only`, `--exported-only`, `--entrypoints-only`
- [ ] Search: `--search <pattern>`
- [ ] Pagination support
- [ ] JSON output option
**Usage**:
```bash
# List all symbols
stella binary symbols --hash blake3:abc123...
# List only stripped (heuristic) symbols
stella binary symbols --hash blake3:abc123... --stripped-only
# Search for specific function
stella binary symbols --hash blake3:abc123... --search "ssl_*"
```
---
### T6: stella binary verify Command
**Assignee**: CLI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement CLI command to verify binary graph attestation.
**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Binary/`
**Acceptance Criteria**:
- [ ] `stella binary verify --graph <path> --dsse <path>`
- [ ] Verify DSSE signature
- [ ] Verify Rekor inclusion (if logged)
- [ ] Verify graph digest matches
- [ ] Display verification result
- [ ] Exit code: 0=valid, 1=invalid
**Verification Flow**:
```
1. Parse DSSE envelope
2. Verify signature against configured keys
3. Extract predicate, verify graph hash
4. (Optional) Verify Rekor inclusion proof
5. Report result
```
---
### T7: CLI Integration Tests
**Assignee**: CLI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Integration tests for binary CLI commands.
**Implementation Path**: `src/Cli/StellaOps.Cli.Tests/`
**Acceptance Criteria**:
- [ ] Submit command test with mock API
- [ ] Info command test
- [ ] Symbols command test with filters
- [ ] Verify command test (valid and invalid cases)
- [ ] Offline mode tests
---
### T8: Documentation Updates
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Description**:
Update CLI documentation with binary commands.
**Implementation Path**: `docs/09_API_CLI_REFERENCE.md`
**Acceptance Criteria**:
- [ ] Document all `stella binary` subcommands
- [ ] Usage examples
- [ ] Error codes and troubleshooting
- [ ] Link to binary reachability schema docs
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | Sprint 3810 | Scanner Team | OCI Manifest Builder |
| 2 | T2 | TODO | T1 | Scanner Team | Registry Push Service |
| 3 | T3 | TODO | T2 | CLI Team | stella binary submit |
| 4 | T4 | TODO | — | CLI Team | stella binary info |
| 5 | T5 | TODO | — | CLI Team | stella binary symbols |
| 6 | T6 | TODO | — | CLI Team | stella binary verify |
| 7 | T7 | TODO | T3-T6 | CLI Team | CLI Integration Tests |
| 8 | T8 | TODO | T3-T6 | CLI Team | Documentation Updates |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| OCI media types | Decision | Scanner Team | Use stellaops vendor prefix |
| Registry compatibility | Risk | Scanner Team | Test against Harbor, Zot, GHCR, ACR |
| Offline bundle format | Decision | CLI Team | Use OCI image layout for offline |
| Authentication | Decision | CLI Team | Support docker config.json and explicit creds |
---
**Sprint Status**: TODO (0/8 tasks complete)


@@ -12,7 +12,7 @@
- **Safe to parallelize with**: Unrelated epics
## Documentation Prerequisites
- `docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md`
- `docs/product-advisories/archived/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md`
- `docs/modules/policy/architecture.md`
- `docs/db/SPECIFICATION.md`

File diff suppressed because it is too large Load Diff

File diff suppressed because it is too large Load Diff


@@ -0,0 +1,412 @@
# Sprint 4000.0002.0001 · Backport Explainability UX
## Topic & Scope
- Add "Compared with" indicator to vulnerability findings showing which comparator was used.
- Implement "Why Fixed" popover showing version comparison steps.
- Display evidence trail for backport determinations.
- **Working directory:** `src/Web/StellaOps.Web/` (Angular UI)
## Advisory Reference
- **Source:** `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- **Gap Identified:** Advisory recommends UX showing "Compared with: RPM EVR / dpkg rules" and "why fixed" popover. No UI work was scheduled.
## Dependencies & Concurrency
- **Upstream**: SPRINT_2000_0003_0001 (Alpine comparator), existing version comparators
- **Downstream**: None
- **Safe to parallelize with**: Backend sprints
## Documentation Prerequisites
- `docs/modules/ui/architecture.md`
- `docs/modules/scanner/architecture.md` (findings model)
- `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
---
## Tasks
### T1: Extend Findings API Response
**Assignee**: Backend Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: —
**Description**:
Extend the vulnerability findings API to include version comparison metadata.
**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Models/Findings/VersionComparisonEvidence.cs`
**New Fields**:
```csharp
public sealed record VersionComparisonEvidence
{
/// <summary>
/// Comparator algorithm used (rpm-evr, dpkg, apk, semver).
/// </summary>
public required string Comparator { get; init; }
/// <summary>
/// Installed version in native format.
/// </summary>
public required string InstalledVersion { get; init; }
/// <summary>
/// Fixed version threshold from advisory.
/// </summary>
public required string FixedVersion { get; init; }
/// <summary>
/// Whether the installed version is >= fixed.
/// </summary>
public required bool IsFixed { get; init; }
/// <summary>
/// Human-readable proof lines showing comparison steps.
/// </summary>
public ImmutableArray<string> ProofLines { get; init; } = [];
/// <summary>
/// Advisory source (DSA-1234, RHSA-2025:1234, USN-1234-1).
/// </summary>
public string? AdvisorySource { get; init; }
}
```
**API Response** (`GET /api/v1/scans/{id}/findings/{findingId}`):
```json
{
"findingId": "...",
"cveId": "CVE-2025-12345",
"package": "openssl",
"installedVersion": "1:1.1.1k-1+deb11u1",
"severity": "HIGH",
"status": "fixed",
"versionComparison": {
"comparator": "dpkg",
"installedVersion": "1:1.1.1k-1+deb11u1",
"fixedVersion": "1:1.1.1k-1+deb11u2",
"isFixed": false,
"proofLines": [
"Epoch: 1 == 1 (equal)",
"Upstream: 1.1.1k == 1.1.1k (equal)",
"Revision: 1+deb11u1 < 1+deb11u2 (VULNERABLE)"
],
"advisorySource": "DSA-5678-1"
}
}
```
**Acceptance Criteria**:
- [ ] VersionComparisonEvidence model created
- [ ] API response includes comparison metadata
- [ ] ProofLines generated by comparators
---
### T2: Update Version Comparators to Emit Proof Lines
**Assignee**: Concelier Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1
**Description**:
Extend version comparators to optionally emit human-readable proof lines.
**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/`
**Interface Extension**:
```csharp
public interface IVersionComparator
{
int Compare(string? left, string? right);
/// <summary>
/// Compare with proof generation for explainability.
/// </summary>
VersionComparisonResult CompareWithProof(string? left, string? right);
}
public sealed record VersionComparisonResult(
int Comparison,
ImmutableArray<string> ProofLines);
```
**Example Proof Lines (RPM)**:
```
Epoch: 0 < 1 (left is older)
```
```
Epoch: 1 == 1 (equal)
Version segment 1: 1 == 1 (equal)
Version segment 2: 2 < 3 (left is older)
Result: VULNERABLE (installed < fixed)
```
**Example Proof Lines (Debian)**:
```
Epoch: 1 == 1 (equal)
Upstream version: 1.1.1k == 1.1.1k (equal)
Debian revision: 1+deb11u1 < 1+deb11u2 (left is older)
Result: VULNERABLE (installed < fixed)
```
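The Debian proof lines above could be produced by a comparator along these lines — a simplified TypeScript sketch, not the Concelier `DebianVersion` implementation. It mirrors dpkg's character ordering (`~` sorts before everything, letters before other symbols) but accumulates digit runs as JS numbers, which is adequate for illustration though not arbitrary-precision like real dpkg:

```typescript
interface VersionComparisonResult {
  comparison: number; // <0 left older, 0 equal, >0 left newer
  proofLines: string[];
}

// dpkg-style character weight: '~' before everything, digits/end-of-string
// as 0, letters by code point, other symbols after letters.
function order(c: string): number {
  if (c === '') return 0;
  if (c === '~') return -1;
  if (/[0-9]/.test(c)) return 0;
  if (/[A-Za-z]/.test(c)) return c.charCodeAt(0);
  return c.charCodeAt(0) + 256;
}

// Compare one fragment (upstream version or revision) by alternating
// non-digit and digit runs, as dpkg does.
function compareFragment(a: string, b: string): number {
  let i = 0, j = 0;
  while (i < a.length || j < b.length) {
    while ((i < a.length && !/[0-9]/.test(a[i])) || (j < b.length && !/[0-9]/.test(b[j]))) {
      const diff = order(i < a.length ? a[i] : '') - order(j < b.length ? b[j] : '');
      if (diff !== 0) return diff;
      i++; j++;
    }
    let x = 0, y = 0;
    while (i < a.length && /[0-9]/.test(a[i])) x = x * 10 + Number(a[i++]);
    while (j < b.length && /[0-9]/.test(b[j])) y = y * 10 + Number(b[j++]);
    if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

// Split "epoch:upstream-revision"; epoch defaults to 0, the revision is
// everything after the LAST hyphen (upstream may itself contain hyphens).
function parseEvr(v: string): { epoch: number; upstream: string; revision: string } {
  const colon = v.indexOf(':');
  const epoch = colon >= 0 ? Number(v.slice(0, colon)) : 0;
  const rest = colon >= 0 ? v.slice(colon + 1) : v;
  const dash = rest.lastIndexOf('-');
  return {
    epoch,
    upstream: dash >= 0 ? rest.slice(0, dash) : rest,
    revision: dash >= 0 ? rest.slice(dash + 1) : '',
  };
}

function compareWithProof(left: string, right: string): VersionComparisonResult {
  const l = parseEvr(left), r = parseEvr(right);
  const proof: string[] = [];
  const sym = (cmp: number) => (cmp < 0 ? '<' : cmp > 0 ? '>' : '==');
  const describe = (cmp: number) =>
    cmp < 0 ? '(left is older)' : cmp > 0 ? '(left is newer)' : '(equal)';

  const steps: Array<[string, string | number, string | number, number]> = [
    ['Epoch', l.epoch, r.epoch, l.epoch === r.epoch ? 0 : l.epoch < r.epoch ? -1 : 1],
    ['Upstream version', l.upstream, r.upstream, compareFragment(l.upstream, r.upstream)],
    ['Debian revision', l.revision, r.revision, compareFragment(l.revision, r.revision)],
  ];
  for (const [label, a, b, cmp] of steps) {
    proof.push(`${label}: ${a} ${sym(cmp)} ${b} ${describe(cmp)}`);
    if (cmp !== 0) return { comparison: cmp, proofLines: proof };
  }
  return { comparison: 0, proofLines: proof };
}
```

The comparator short-circuits at the first differing component, so the proof trail ends exactly where the decision was made — matching the examples above.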
**Acceptance Criteria**:
- [ ] NEVRA comparator emits proof lines
- [ ] DebianEvr comparator emits proof lines
- [ ] APK comparator emits proof lines (after SPRINT_2000_0003_0001)
- [ ] Unit tests verify proof line content
---
### T3: Create "Compared With" Badge Component
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Create Angular component showing which comparator was used.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/shared/components/comparator-badge/`
**Component**:
```typescript
// comparator-badge.component.ts
import { Component, Input } from '@angular/core';
@Component({
selector: 'app-comparator-badge',
template: `
<span class="comparator-badge" [class]="comparatorClass">
<mat-icon>compare_arrows</mat-icon>
<span>{{ comparatorLabel }}</span>
</span>
`,
styles: [`
.comparator-badge {
display: inline-flex;
align-items: center;
gap: 4px;
padding: 2px 8px;
border-radius: 4px;
font-size: 12px;
font-weight: 500;
}
.comparator-rpm { background: #fee2e2; color: #991b1b; }
.comparator-dpkg { background: #fef3c7; color: #92400e; }
.comparator-apk { background: #d1fae5; color: #065f46; }
.comparator-semver { background: #e0e7ff; color: #3730a3; }
`]
})
export class ComparatorBadgeComponent {
@Input() comparator!: string;
get comparatorLabel(): string {
switch (this.comparator) {
case 'rpm-evr': return 'RPM EVR';
case 'dpkg': return 'dpkg';
case 'apk': return 'APK';
case 'semver': return 'SemVer';
default: return this.comparator;
}
}
  get comparatorClass(): string {
    // Map explicitly: a naive replace('-', '') would yield 'comparator-rpmevr',
    // which matches none of the CSS classes above.
    switch (this.comparator) {
      case 'rpm-evr': return 'comparator-rpm';
      case 'dpkg': return 'comparator-dpkg';
      case 'apk': return 'comparator-apk';
      case 'semver': return 'comparator-semver';
      default: return 'comparator-default';
    }
  }
}
```
**Usage in Findings Table**:
```html
<td>
{{ finding.installedVersion }}
<app-comparator-badge [comparator]="finding.versionComparison?.comparator">
</app-comparator-badge>
</td>
```
**Acceptance Criteria**:
- [ ] Component created with distro-specific styling
- [ ] Badge shows comparator type (RPM EVR, dpkg, APK, SemVer)
- [ ] Accessible (ARIA labels)
---
### T4: Create "Why Fixed/Vulnerable" Popover
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2, T3
**Description**:
Create popover showing version comparison steps for explainability.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/shared/components/version-proof-popover/`
**Component**:
```typescript
// version-proof-popover.component.ts
import { Component, Input } from '@angular/core';
@Component({
selector: 'app-version-proof-popover',
template: `
<button mat-icon-button
[matMenuTriggerFor]="proofMenu"
matTooltip="Show comparison details"
aria-label="Show version comparison details">
<mat-icon>help_outline</mat-icon>
</button>
<mat-menu #proofMenu="matMenu" class="version-proof-menu">
<div class="proof-header">
<mat-icon [color]="isFixed ? 'primary' : 'warn'">
{{ isFixed ? 'check_circle' : 'error' }}
</mat-icon>
<span>{{ isFixed ? 'Fixed' : 'Vulnerable' }}</span>
</div>
<div class="proof-comparison">
<div class="version-row">
<span class="label">Installed:</span>
<code>{{ installedVersion }}</code>
</div>
<div class="version-row">
<span class="label">Fixed in:</span>
<code>{{ fixedVersion }}</code>
</div>
</div>
<mat-divider></mat-divider>
<div class="proof-lines">
<div class="proof-title">Comparison steps:</div>
<ol>
<li *ngFor="let line of proofLines">{{ line }}</li>
</ol>
</div>
<div class="proof-source" *ngIf="advisorySource">
<mat-icon>source</mat-icon>
<span>Source: {{ advisorySource }}</span>
</div>
</mat-menu>
`
})
export class VersionProofPopoverComponent {
@Input() comparison!: VersionComparisonEvidence;
get isFixed(): boolean { return this.comparison.isFixed; }
get installedVersion(): string { return this.comparison.installedVersion; }
get fixedVersion(): string { return this.comparison.fixedVersion; }
get proofLines(): string[] { return this.comparison.proofLines; }
get advisorySource(): string | undefined { return this.comparison.advisorySource; }
}
```
**Popover Content Example**:
```
┌─────────────────────────────────────┐
│ ⚠ Vulnerable │
├─────────────────────────────────────┤
│ Installed: 1:1.1.1k-1+deb11u1 │
│ Fixed in: 1:1.1.1k-1+deb11u2 │
├─────────────────────────────────────┤
│ Comparison steps: │
│ 1. Epoch: 1 == 1 (equal) │
│ 2. Upstream: 1.1.1k == 1.1.1k │
│ 3. Revision: 1+deb11u1 < 1+deb11u2 │
│ (VULNERABLE) │
├─────────────────────────────────────┤
│ 📄 Source: DSA-5678-1 │
└─────────────────────────────────────┘
```
**Acceptance Criteria**:
- [ ] Popover shows installed vs fixed versions
- [ ] Step-by-step comparison proof displayed
- [ ] Advisory source linked
- [ ] Accessible keyboard navigation
---
### T5: Integration and E2E Tests
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1-T4
**Description**:
Add integration tests for the new UI components.
**Test Cases**:
- [ ] ComparatorBadge renders correctly for all comparator types
- [ ] VersionProofPopover opens and displays proof lines
- [ ] Findings table shows comparison metadata
- [ ] E2E test: click proof popover, verify content
**Acceptance Criteria**:
- [ ] Unit tests for components
- [ ] E2E test with Playwright/Cypress
- [ ] Accessibility audit passes
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Backend Team | Extend Findings API Response |
| 2 | T2 | TODO | T1 | Concelier Team | Update Version Comparators to Emit Proof Lines |
| 3 | T3 | TODO | T1 | UI Team | Create "Compared With" Badge Component |
| 4 | T4 | TODO | T1, T2, T3 | UI Team | Create "Why Fixed/Vulnerable" Popover |
| 5 | T5 | TODO | T1-T4 | UI Team | Integration and E2E Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis. UX explainability identified as missing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Proof lines in API response | Decision | Backend Team | Include in standard findings response, not separate endpoint |
| Comparator badge styling | Decision | UI Team | Distro-specific colors for quick visual identification |
| Popover vs modal | Decision | UI Team | Popover for quick glance; modal would interrupt workflow |
---
## Success Criteria
- [ ] All 5 tasks marked DONE
- [ ] Comparator badge visible on findings
- [ ] Why Fixed popover shows proof steps
- [ ] E2E tests passing
- [ ] Accessibility audit passes
- [ ] `ng build` succeeds
- [ ] `ng test` succeeds
---
## References
- Advisory: `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- Angular Material: https://material.angular.io/
- Findings API: `docs/api/scanner-findings.yaml`
- UI Architecture: `docs/modules/ui/architecture.md`
---
*Document Version: 1.0.0*
*Created: 2025-12-22*


@@ -753,6 +753,585 @@ export class CompareExportService {
---
### T9: Baseline Rationale Display
**Assignee**: UI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Show why the baseline was selected (auditor-friendly explanation).
**Implementation Path**: Add to baseline selector component
```typescript
// baseline-rationale.component.ts
@Component({
selector: 'stella-baseline-rationale',
standalone: true,
imports: [CommonModule, MatIconModule, MatTooltipModule],
template: `
<div class="baseline-rationale" *ngIf="rationale()">
<mat-icon>info</mat-icon>
<span class="rationale-text">{{ rationale() }}</span>
<button mat-icon-button (click)="showDetails()" matTooltip="View selection details">
<mat-icon>open_in_new</mat-icon>
</button>
</div>
`
})
export class BaselineRationaleComponent {
  rationale = input<string>();
  showDetails(): void {
    // Opens the detailed selection log; wired to a dialog service in the full implementation.
  }
  // Example rationales:
  // "Selected last prod release with Allowed verdict under policy P-2024-001."
  // "Auto-selected: most recent green build on main branch (2h ago)."
  // "User override: manually selected v1.4.2 as comparison baseline."
}
```
**Acceptance Criteria**:
- [ ] Shows rationale text below baseline selector
- [ ] Explains why baseline was auto-selected
- [ ] Shows different message for manual override
- [ ] Click opens detailed selection log
---
### T10: Actionables Section ("What to do next")
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1, Backend API (Sprint 4200.0002.0006)
**Description**:
Show structured recommendations for addressing delta findings.
**Implementation Path**: `actionables-panel.component.ts` (new file)
```typescript
export interface Actionable {
id: string;
type: 'upgrade' | 'patch' | 'vex' | 'config' | 'investigate';
priority: 'critical' | 'high' | 'medium' | 'low';
title: string;
description: string;
component?: string;
targetVersion?: string;
cveIds?: string[];
estimatedEffort?: string;
}
@Component({
selector: 'stella-actionables-panel',
standalone: true,
imports: [CommonModule, MatListModule, MatChipsModule, MatIconModule, MatButtonModule],
template: `
<div class="actionables-panel">
<h4>
<mat-icon>task_alt</mat-icon>
What to do next
</h4>
<mat-list>
<mat-list-item *ngFor="let action of actionables()">
<mat-icon matListItemIcon [class]="'action-' + action.type">
{{ getActionIcon(action.type) }}
</mat-icon>
<div matListItemTitle>
{{ action.title }}
<mat-chip [class]="'priority-' + action.priority">
{{ action.priority }}
</mat-chip>
</div>
<div matListItemLine>{{ action.description }}</div>
<button mat-stroked-button matListItemMeta (click)="applyAction(action)">
Apply
</button>
</mat-list-item>
</mat-list>
<div class="empty-state" *ngIf="actionables().length === 0">
<mat-icon>check_circle</mat-icon>
<p>No immediate actions required</p>
</div>
</div>
`
})
export class ActionablesPanelComponent {
  actionables = input<Actionable[]>([]);
  applyAction(action: Actionable): void {
    // Triggers the action workflow (upgrade PR, VEX draft, etc.); referenced
    // by the Apply button in the template above.
  }
  getActionIcon(type: string): string {
    const icons: Record<string, string> = {
      upgrade: 'upgrade',
      patch: 'build',
      vex: 'description',
      config: 'settings',
      investigate: 'search'
    };
    return icons[type] || 'task';
  }
}
```
**Acceptance Criteria**:
- [ ] Shows prioritized list of actionables
- [ ] Supports upgrade, patch, VEX, config, investigate types
- [ ] Priority chips with color coding
- [ ] Apply button triggers action workflow
- [ ] Empty state when no actions needed
---
### T11: Determinism Trust Indicators
**Assignee**: UI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1
**Description**:
Display determinism hash, policy version, feed snapshot, signature status.
**Implementation Path**: `trust-indicators.component.ts` (new file)
```typescript
export interface TrustIndicators {
determinismHash: string;
policyVersion: string;
policyHash: string;
feedSnapshotTimestamp: Date;
feedSnapshotHash: string;
signatureStatus: 'valid' | 'invalid' | 'missing' | 'pending';
signerIdentity?: string;
}
@Component({
selector: 'stella-trust-indicators',
standalone: true,
imports: [CommonModule, MatChipsModule, MatIconModule, MatTooltipModule],
template: `
<div class="trust-indicators" [class.degraded]="indicators()?.signatureStatus !== 'valid'">
<!-- Signature Status Banner (if degraded) -->
<div class="degraded-banner" *ngIf="indicators()?.signatureStatus !== 'valid'">
<mat-icon>warning</mat-icon>
<span>Verification {{ indicators()?.signatureStatus }}: Some actions may be restricted</span>
</div>
<div class="indicators-row">
<div class="indicator" matTooltip="Determinism Hash - Verify reproducibility">
<mat-icon>fingerprint</mat-icon>
<span class="label">Det. Hash:</span>
<code>{{ indicators()?.determinismHash | slice:0:12 }}...</code>
<button mat-icon-button (click)="copyHash('determinism')">
<mat-icon>content_copy</mat-icon>
</button>
</div>
<div class="indicator" matTooltip="Policy Version">
<mat-icon>policy</mat-icon>
<span class="label">Policy:</span>
<code>{{ indicators()?.policyVersion }}</code>
</div>
<div class="indicator" [class.stale]="isFeedStale()"
matTooltip="Feed Snapshot Age">
<mat-icon>{{ isFeedStale() ? 'warning' : 'cloud_done' }}</mat-icon>
<span class="label">Feed:</span>
<span>{{ indicators()?.feedSnapshotTimestamp | date:'short' }}</span>
<span class="age" *ngIf="feedAge() as age">({{ age }})</span>
</div>
<div class="indicator" [class]="'sig-' + indicators()?.signatureStatus">
<mat-icon>{{ getSignatureIcon() }}</mat-icon>
<span class="label">Signature:</span>
<span>{{ indicators()?.signatureStatus }}</span>
</div>
</div>
</div>
`
})
export class TrustIndicatorsComponent {
  indicators = input<TrustIndicators>();
  feedStaleThresholdHours = 24;
  isFeedStale(): boolean {
    const ts = this.indicators()?.feedSnapshotTimestamp;
    if (!ts) return true;
    const age = Date.now() - new Date(ts).getTime();
    return age > this.feedStaleThresholdHours * 60 * 60 * 1000;
  }
  // The template above also calls feedAge(), getSignatureIcon(), and copyHash():
  feedAge(): string | null {
    const ts = this.indicators()?.feedSnapshotTimestamp;
    if (!ts) return null;
    const hours = Math.floor((Date.now() - new Date(ts).getTime()) / 3_600_000);
    return hours < 1 ? '<1h' : `${hours}h`;
  }
  getSignatureIcon(): string {
    switch (this.indicators()?.signatureStatus) {
      case 'valid': return 'verified';
      case 'invalid': return 'gpp_bad';
      case 'pending': return 'hourglass_empty';
      default: return 'gpp_maybe';
    }
  }
  copyHash(kind: 'determinism' | 'policy' | 'feed'): void {
    const ind = this.indicators();
    if (!ind) return;
    const value = kind === 'determinism' ? ind.determinismHash
      : kind === 'policy' ? ind.policyHash
      : ind.feedSnapshotHash;
    navigator.clipboard.writeText(value);
  }
}
```
**Acceptance Criteria**:
- [ ] Shows determinism hash with copy button
- [ ] Shows policy version
- [ ] Shows feed snapshot timestamp with age
- [ ] Shows signature verification status
- [ ] Degraded banner when signature invalid/missing
- [ ] Stale feed warning when > 24h old
---
### T12: Witness Path Visualization
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T6
**Description**:
Show minimal call path from entrypoint to vulnerable sink.
**Implementation Path**: Add to proof pane
```typescript
export interface WitnessPath {
id: string;
entrypoint: string;
sink: string;
nodes: WitnessNode[];
confidence: 'confirmed' | 'likely' | 'present';
gates: string[];
}
export interface WitnessNode {
method: string;
file?: string;
line?: number;
isEntrypoint?: boolean;
isSink?: boolean;
}
@Component({
selector: 'stella-witness-path',
standalone: true,
  imports: [CommonModule, MatChipsModule, MatIconModule, MatButtonModule],
template: `
<div class="witness-path">
<div class="path-header">
<mat-chip [class]="'confidence-' + path()?.confidence">
{{ path()?.confidence }}
</mat-chip>
<button mat-icon-button (click)="expanded.set(!expanded())"
              *ngIf="(path()?.nodes?.length ?? 0) > 5">
<mat-icon>{{ expanded() ? 'unfold_less' : 'unfold_more' }}</mat-icon>
</button>
</div>
<div class="path-visualization">
<ng-container *ngFor="let node of visibleNodes(); let i = index; let last = last">
<div class="path-node" [class.entrypoint]="node.isEntrypoint"
[class.sink]="node.isSink">
<div class="node-icon">
<mat-icon *ngIf="node.isEntrypoint">login</mat-icon>
<mat-icon *ngIf="node.isSink">dangerous</mat-icon>
<mat-icon *ngIf="!node.isEntrypoint && !node.isSink">arrow_downward</mat-icon>
</div>
<div class="node-content">
<code class="method">{{ node.method }}</code>
<span class="location" *ngIf="node.file">
{{ node.file }}:{{ node.line }}
</span>
</div>
</div>
<div class="path-connector" *ngIf="!last"></div>
</ng-container>
<div class="collapsed-indicator" *ngIf="!expanded() && hiddenCount() > 0">
<span>... {{ hiddenCount() }} more nodes ...</span>
</div>
</div>
<div class="path-gates" *ngIf="path()?.gates?.length">
<span class="gates-label">Gates:</span>
<mat-chip *ngFor="let gate of path()?.gates">{{ gate }}</mat-chip>
</div>
</div>
`
})
export class WitnessPathComponent {
path = input<WitnessPath>();
expanded = signal(false);
visibleNodes = computed(() => {
const nodes = this.path()?.nodes || [];
if (this.expanded() || nodes.length <= 5) return nodes;
// Show first 2 and last 2
return [...nodes.slice(0, 2), ...nodes.slice(-2)];
});
hiddenCount = computed(() => {
const total = this.path()?.nodes?.length || 0;
return this.expanded() ? 0 : Math.max(0, total - 4);
});
}
```
**Acceptance Criteria**:
- [ ] Shows entrypoint → sink path
- [ ] Collapsible for long paths (> 5 nodes)
- [ ] Shows confidence tier
- [ ] Shows gates (security controls)
- [ ] Expand-on-demand for full path
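The collapse rule above (first 2 + last 2 whenever more than 5 nodes) can be factored into a pure helper that mirrors `visibleNodes()`/`hiddenCount()` and is trivial to unit-test — a sketch with illustrative names:

```typescript
// Paths of 5 or fewer nodes render in full; longer paths show the
// first two and last two nodes until expanded on demand.
function collapsePath<T>(nodes: T[], expanded: boolean): { visible: T[]; hidden: number } {
  if (expanded || nodes.length <= 5) return { visible: nodes, hidden: 0 };
  return {
    visible: [...nodes.slice(0, 2), ...nodes.slice(-2)],
    hidden: nodes.length - 4,
  };
}
```

Keeping this logic out of the component also lets the E2E tests in T5-style suites assert the exact hidden count without rendering.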
---
### T13: VEX Claim Merge Explanation
**Assignee**: UI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T6
**Description**:
Show VEX claim sources and how they were merged.
**Implementation Path**: `vex-merge-explanation.component.ts` (new file)
```typescript
export interface VexClaimSource {
source: 'vendor' | 'distro' | 'internal' | 'community';
document: string;
status: string;
justification?: string;
timestamp: Date;
priority: number;
}
export interface VexMergeResult {
finalStatus: string;
sources: VexClaimSource[];
mergeStrategy: 'priority' | 'latest' | 'conservative';
conflictResolution?: string;
}
@Component({
selector: 'stella-vex-merge-explanation',
standalone: true,
imports: [CommonModule, MatIconModule, MatExpansionModule],
template: `
<mat-expansion-panel>
<mat-expansion-panel-header>
<mat-panel-title>
<mat-icon>merge</mat-icon>
VEX Status: {{ result()?.finalStatus }}
</mat-panel-title>
<mat-panel-description>
{{ result()?.sources?.length }} sources merged
</mat-panel-description>
</mat-expansion-panel-header>
<div class="merge-explanation">
<div class="merge-strategy">
<strong>Strategy:</strong> {{ result()?.mergeStrategy }}
<span *ngIf="result()?.conflictResolution" class="conflict">
({{ result()?.conflictResolution }})
</span>
</div>
<div class="sources-list">
<div class="source" *ngFor="let src of result()?.sources"
[class.winner]="src.status === result()?.finalStatus">
<div class="source-header">
<mat-icon>{{ getSourceIcon(src.source) }}</mat-icon>
<span class="source-type">{{ src.source }}</span>
<span class="source-status">{{ src.status }}</span>
<span class="source-priority">P{{ src.priority }}</span>
</div>
<div class="source-details">
<code>{{ src.document }}</code>
<span class="timestamp">{{ src.timestamp | date:'short' }}</span>
</div>
<div class="justification" *ngIf="src.justification">
{{ src.justification }}
</div>
</div>
</div>
</div>
</mat-expansion-panel>
`
})
export class VexMergeExplanationComponent {
result = input<VexMergeResult>();
getSourceIcon(source: string): string {
const icons: Record<string, string> = {
vendor: 'business',
distro: 'dns',
internal: 'home',
community: 'groups'
};
return icons[source] || 'source';
}
}
```
**Acceptance Criteria**:
- [ ] Shows final merged VEX status
- [ ] Lists all source documents
- [ ] Shows merge strategy used
- [ ] Highlights winning source
- [ ] Shows conflict resolution if any
---
### T14: Role-Based Default Views
**Assignee**: UI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1, Authority integration
**Description**:
Show different default tabs/content based on user persona.
**Implementation Path**: Add role detection to compare-view.component.ts
```typescript
type UserRole = 'developer' | 'security' | 'audit';
// Role-based defaults
const ROLE_DEFAULTS: Record<UserRole, {defaultTab: string; showFeatures: string[]}> = {
developer: {
defaultTab: 'actionables',
showFeatures: ['actionables', 'witness-paths', 'upgrade-suggestions']
},
security: {
defaultTab: 'claims',
showFeatures: ['vex-merge', 'policy-reasoning', 'claim-sources', 'actionables']
},
audit: {
defaultTab: 'attestations',
showFeatures: ['signatures', 'replay', 'evidence-pack', 'envelope-hashes']
}
};
// In component:
userRole = signal<UserRole>('developer');
roleDefaults = computed(() => ROLE_DEFAULTS[this.userRole()]);
ngOnInit() {
this.authService.getCurrentUserRoles().subscribe(roles => {
if (roles.includes('auditor')) this.userRole.set('audit');
else if (roles.includes('security')) this.userRole.set('security');
else this.userRole.set('developer');
});
}
```
**Acceptance Criteria**:
- [ ] Detects user role from Authority
- [ ] Sets default tab based on role
- [ ] Shows/hides features based on role
- [ ] Developer: actionables first
- [ ] Security: claims/merge first
- [ ] Audit: signatures/replay first
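The precedence implied above (auditor wins over security, which wins over the developer default) can be isolated as a pure function — a sketch; the role strings mirror the `ngOnInit` snippet and are assumptions about Authority's actual scope names:

```typescript
type UserRole = 'developer' | 'security' | 'audit';

// Highest-privilege persona wins: auditor > security > developer default.
function resolveUserRole(authorityRoles: string[]): UserRole {
  if (authorityRoles.includes('auditor')) return 'audit';
  if (authorityRoles.includes('security')) return 'security';
  return 'developer';
}
```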
---
### T15: Feed Staleness Warning
**Assignee**: UI Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T11
**Description**:
Alert banner when vulnerability feed snapshot is stale.
**Implementation**: Included in T11 TrustIndicatorsComponent with `isFeedStale()` check.
**Acceptance Criteria**:
- [ ] Warning icon when feed > 24h old
- [ ] Shows feed age in human-readable format
- [ ] Tooltip explains staleness implications
- [ ] Configurable threshold
---
### T16: Policy Drift Indicator
**Assignee**: UI Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T11
**Description**:
Show if policy changed between base and head scans.
**Implementation Path**: Add to trust-indicators.component.ts
```typescript
export interface PolicyDrift {
basePolicy: { version: string; hash: string };
headPolicy: { version: string; hash: string };
hasDrift: boolean;
driftSummary?: string;
}
// Add to template:
<div class="policy-drift-warning" *ngIf="policyDrift()?.hasDrift">
<mat-icon>warning</mat-icon>
<span>Policy changed between scans</span>
<button mat-button (click)="showPolicyDiff()">View Changes</button>
</div>
```
**Acceptance Criteria**:
- [ ] Detects policy version/hash mismatch
- [ ] Shows warning banner
- [ ] Links to policy diff view
- [ ] Explains impact on comparison
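Drift detection itself reduces to comparing the two `PolicyRef` pairs — a sketch, assuming the `PolicyDrift` shape above; the hash comparison catches same-version content edits that a version check alone would miss:

```typescript
interface PolicyRef { version: string; hash: string; }

// Drift exists when either the policy version or its content hash differs
// between the base and head scans.
function detectPolicyDrift(base: PolicyRef, head: PolicyRef): { hasDrift: boolean; driftSummary?: string } {
  if (base.version === head.version && base.hash === head.hash) return { hasDrift: false };
  return {
    hasDrift: true,
    driftSummary: base.version === head.version
      ? `Policy ${base.version} content changed (hash mismatch)`
      : `Policy changed from ${base.version} to ${head.version}`,
  };
}
```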
---
### T17: Replay Command Display
**Assignee**: UI Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T11
**Description**:
Copy-to-clipboard button for replay command to verify determinism.
**Implementation Path**: Add to trust-indicators.component.ts
```typescript
// Add to template:
<div class="replay-command">
<button mat-stroked-button (click)="copyReplayCommand()">
<mat-icon>terminal</mat-icon>
Copy Replay Command
</button>
</div>
// In component:
copyReplayCommand(): void {
const cmd = `stellaops smart-diff replay \\
--base ${this.baseDigest()} \\
--target ${this.headDigest()} \\
--feed-snapshot ${this.indicators()?.feedSnapshotHash} \\
--policy ${this.indicators()?.policyHash}`;
navigator.clipboard.writeText(cmd);
this.snackBar.open('Replay command copied', 'OK', { duration: 2000 });
}
```
**Acceptance Criteria**:
- [ ] Button copies CLI command
- [ ] Command includes all determinism inputs
- [ ] Snackbar confirms copy
- [ ] Works across browsers
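Building the command can be separated from the clipboard call so the "includes all determinism inputs" criterion is directly testable — a sketch; the `stellaops smart-diff replay` flags follow the snippet above and are assumptions about the eventual CLI surface:

```typescript
interface ReplayInputs {
  baseDigest: string;
  headDigest: string;
  feedSnapshotHash: string;
  policyHash: string;
}

// All four determinism inputs must appear in the command for the replay
// to reproduce the original comparison.
function buildReplayCommand(inputs: ReplayInputs): string {
  return [
    'stellaops smart-diff replay',
    `--base ${inputs.baseDigest}`,
    `--target ${inputs.headDigest}`,
    `--feed-snapshot ${inputs.feedSnapshotHash}`,
    `--policy ${inputs.policyHash}`,
  ].join(' \\\n  ');
}
```

`copyReplayCommand()` then becomes `navigator.clipboard.writeText(buildReplayCommand(...))` plus the snackbar.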
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
@@ -765,6 +1344,15 @@ export class CompareExportService {
| 6 | T6 | TODO | T1, T5 | UI Team | Proof pane |
| 7 | T7 | TODO | T6 | UI Team | Before/After toggle |
| 8 | T8 | TODO | T1 | UI Team | Export delta report |
| 9 | T9 | TODO | T2 | UI Team | Baseline rationale display |
| 10 | T10 | TODO | T1, Backend | UI Team | Actionables section ("What to do next") |
| 11 | T11 | TODO | T1 | UI Team | Determinism trust indicators |
| 12 | T12 | TODO | T6 | UI Team | Witness path visualization |
| 13 | T13 | TODO | T6 | UI Team | VEX claim merge explanation |
| 14 | T14 | TODO | T1, Authority | UI Team | Role-based default views |
| 15 | T15 | TODO | T11 | UI Team | Feed staleness warning |
| 16 | T16 | TODO | T11 | UI Team | Policy drift indicator |
| 17 | T17 | TODO | T11 | UI Team | Replay command display |
---
@@ -773,6 +1361,7 @@ export class CompareExportService {
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-21 | Sprint created from UX Gap Analysis. Smart-Diff UI identified as key comparison feature. | Claude |
| 2025-12-22 | Sprint amended with 9 new tasks (T9-T17) from advisory "21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md". Added baseline rationale, actionables, trust indicators, witness paths, VEX merge explanation, role-based views, feed staleness, policy drift, replay command. | Claude |
---
@@ -784,16 +1373,40 @@ export class CompareExportService {
| Baseline presets | Decision | UI Team | Last green, previous release, main, custom |
| View modes | Decision | UI Team | Side-by-side and unified diff |
| Categories | Decision | UI Team | SBOM, Reachability, VEX, Policy, Findings, Unknowns |
| Baseline rationale | Decision | UI Team | Show auditor-friendly explanation of baseline selection |
| Trust indicators | Decision | UI Team | Show determinism hash, policy version, feed snapshot, signature |
| Role-based defaults | Decision | UI Team | Dev→actionables, Security→claims, Audit→signatures |
| Feed staleness threshold | Decision | UI Team | 24h default, configurable |
| Witness path collapse | Decision | UI Team | Collapse paths > 5 nodes, show first 2 + last 2 |
---
## Dependencies
| Dependency | Sprint | Status | Notes |
|------------|--------|--------|-------|
| Baseline Selection API | 4200.0002.0006 | TODO | Backend API for recommended baselines with rationale |
| Actionables Engine API | 4200.0002.0006 | TODO | Backend API for generating remediation recommendations |
| Authority Role API | Authority | EXISTS | User role detection for role-based views |
| Smart-Diff Backend | 3500 | DONE | Core smart-diff computation |
---
## Success Criteria
- [ ] All 8 tasks marked DONE
- [ ] Baseline can be selected
- [ ] All 17 tasks marked DONE
- [ ] Baseline can be selected with rationale displayed
- [ ] Delta summary shows counts
- [ ] Three-pane layout works
- [ ] Evidence accessible for each change
- [ ] Export works (JSON/PDF)
- [ ] Actionables section shows recommendations
- [ ] Trust indicators visible (hash, policy, feed, signature)
- [ ] Witness paths render with collapse/expand
- [ ] VEX merge explanation shows sources
- [ ] Role-based default views work
- [ ] Feed staleness warning appears when > 24h
- [ ] Policy drift indicator shows when policy changed
- [ ] Replay command copyable
- [ ] `ng build` succeeds
- [ ] `ng test` succeeds


@@ -0,0 +1,884 @@
# Sprint 4200.0002.0006 · Delta Compare Backend API
## Topic & Scope
Backend API endpoints to support the Delta/Compare View UI (Sprint 4200.0002.0003). Provides baseline selection with rationale, actionables generation, and trust indicator data.
**Working directory:** `src/Scanner/StellaOps.Scanner.WebService/`
**Source Advisory**: `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md`
## Dependencies & Concurrency
- **Upstream**: Sprint 3500 (Smart-Diff core implementation) - DONE
- **Downstream**: Sprint 4200.0002.0003 (Delta Compare View UI)
- **Safe to parallelize with**: Sprint 4200.0002.0004 (CLI Compare)
## Documentation Prerequisites
- `src/Scanner/AGENTS.md`
- `docs/modules/scanner/architecture.md`
- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md`
---
## Tasks
### T1: Baseline Selection API
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: —
**Description**:
API endpoint to get recommended baselines with rationale for a given artifact.
**Implementation Path**: `Endpoints/BaselineEndpoints.cs` (new file)
```csharp
// BaselineEndpoints.cs
using Microsoft.AspNetCore.Http.HttpResults;
using StellaOps.Scanner.Core.Models;
namespace StellaOps.Scanner.WebService.Endpoints;
public static class BaselineEndpoints
{
public static void MapBaselineEndpoints(this IEndpointRouteBuilder routes)
{
var group = routes.MapGroup("/api/v1/baselines")
.WithTags("Baselines");
group.MapGet("/recommendations/{artifactDigest}", GetRecommendedBaselines)
.WithName("GetRecommendedBaselines")
.WithSummary("Get recommended baselines for an artifact")
.Produces<BaselineRecommendationsResponse>(StatusCodes.Status200OK);
group.MapGet("/rationale/{baseDigest}/{headDigest}", GetBaselineRationale)
.WithName("GetBaselineRationale")
.WithSummary("Get rationale for a specific baseline selection")
.Produces<BaselineRationaleResponse>(StatusCodes.Status200OK);
}
private static async Task<Ok<BaselineRecommendationsResponse>> GetRecommendedBaselines(
string artifactDigest,
[AsParameters] BaselineQuery query,
IBaselineService baselineService,
CancellationToken ct)
{
var recommendations = await baselineService.GetRecommendationsAsync(
artifactDigest,
query.Environment,
query.PolicyId,
ct);
return TypedResults.Ok(new BaselineRecommendationsResponse
{
ArtifactDigest = artifactDigest,
Recommendations = recommendations,
GeneratedAt = DateTime.UtcNow
});
}
private static async Task<Ok<BaselineRationaleResponse>> GetBaselineRationale(
string baseDigest,
string headDigest,
IBaselineService baselineService,
CancellationToken ct)
{
var rationale = await baselineService.GetRationaleAsync(baseDigest, headDigest, ct);
return TypedResults.Ok(rationale);
}
}
public record BaselineQuery
{
public string? Environment { get; init; }
public string? PolicyId { get; init; }
}
public record BaselineRecommendation
{
public required string Id { get; init; }
public required string Type { get; init; } // "last-green", "previous-release", "main-branch", "custom"
public required string Label { get; init; }
public required string Digest { get; init; }
public required DateTime Timestamp { get; init; }
public required string Rationale { get; init; }
public string? VerdictStatus { get; init; } // "allowed", "blocked", "warn"
public string? PolicyVersion { get; init; }
public bool IsDefault { get; init; }
}
public record BaselineRecommendationsResponse
{
public required string ArtifactDigest { get; init; }
public required IReadOnlyList<BaselineRecommendation> Recommendations { get; init; }
public required DateTime GeneratedAt { get; init; }
}
public record BaselineRationaleResponse
{
public required string BaseDigest { get; init; }
public required string HeadDigest { get; init; }
public required string SelectionType { get; init; }
public required string Rationale { get; init; }
public required string DetailedExplanation { get; init; }
public IReadOnlyList<string>? SelectionCriteria { get; init; }
public DateTime? BaseTimestamp { get; init; }
public DateTime? HeadTimestamp { get; init; }
}
```
**Service Implementation**: `Services/BaselineService.cs` (new file)
```csharp
// BaselineService.cs
namespace StellaOps.Scanner.WebService.Services;
public interface IBaselineService
{
Task<IReadOnlyList<BaselineRecommendation>> GetRecommendationsAsync(
string artifactDigest,
string? environment,
string? policyId,
CancellationToken ct);
Task<BaselineRationaleResponse> GetRationaleAsync(
string baseDigest,
string headDigest,
CancellationToken ct);
}
public class BaselineService : IBaselineService
{
private readonly IScanRepository _scanRepo;
private readonly IPolicyGateService _policyService;
public BaselineService(IScanRepository scanRepo, IPolicyGateService policyService)
{
_scanRepo = scanRepo;
_policyService = policyService;
}
public async Task<IReadOnlyList<BaselineRecommendation>> GetRecommendationsAsync(
string artifactDigest,
string? environment,
string? policyId,
CancellationToken ct)
{
var recommendations = new List<BaselineRecommendation>();
// 1. Last green verdict in same environment
var lastGreen = await _scanRepo.GetLastGreenVerdictAsync(
artifactDigest, environment, policyId, ct);
if (lastGreen != null)
{
recommendations.Add(new BaselineRecommendation
{
Id = "last-green",
Type = "last-green",
Label = "Last Green Build",
Digest = lastGreen.Digest,
Timestamp = lastGreen.CompletedAt,
Rationale = $"Selected last prod release with Allowed verdict under policy {lastGreen.PolicyVersion}.",
VerdictStatus = "allowed",
PolicyVersion = lastGreen.PolicyVersion,
IsDefault = true
});
}
// 2. Previous release tag
var previousRelease = await _scanRepo.GetPreviousReleaseAsync(artifactDigest, ct);
if (previousRelease != null)
{
recommendations.Add(new BaselineRecommendation
{
Id = "previous-release",
Type = "previous-release",
Label = $"Previous Release ({previousRelease.Tag})",
Digest = previousRelease.Digest,
Timestamp = previousRelease.ReleasedAt,
Rationale = $"Previous release tag: {previousRelease.Tag}",
VerdictStatus = previousRelease.VerdictStatus,
IsDefault = lastGreen == null
});
}
// 3. Parent commit / merge-base
var parentCommit = await _scanRepo.GetParentCommitScanAsync(artifactDigest, ct);
if (parentCommit != null)
{
recommendations.Add(new BaselineRecommendation
{
Id = "parent-commit",
Type = "main-branch",
Label = "Parent Commit",
Digest = parentCommit.Digest,
Timestamp = parentCommit.CompletedAt,
Rationale = $"Parent commit on main branch: {parentCommit.CommitSha[..8]}",
VerdictStatus = parentCommit.VerdictStatus,
IsDefault = false
});
}
return recommendations;
}
public async Task<BaselineRationaleResponse> GetRationaleAsync(
string baseDigest,
string headDigest,
CancellationToken ct)
{
var baseScan = await _scanRepo.GetByDigestAsync(baseDigest, ct);
var headScan = await _scanRepo.GetByDigestAsync(headDigest, ct);
var selectionType = DetermineSelectionType(baseScan, headScan);
var rationale = GenerateRationale(selectionType, baseScan, headScan);
var explanation = GenerateDetailedExplanation(selectionType, baseScan, headScan);
return new BaselineRationaleResponse
{
BaseDigest = baseDigest,
HeadDigest = headDigest,
SelectionType = selectionType,
Rationale = rationale,
DetailedExplanation = explanation,
SelectionCriteria = GetSelectionCriteria(selectionType),
BaseTimestamp = baseScan?.CompletedAt,
HeadTimestamp = headScan?.CompletedAt
};
}
private static string DetermineSelectionType(Scan? baseScan, Scan? headScan)
{
// Logic to determine how baseline was selected
if (baseScan?.VerdictStatus == "allowed") return "last-green";
if (baseScan?.ReleaseTag != null) return "previous-release";
return "manual";
}
private static string GenerateRationale(string type, Scan? baseScan, Scan? headScan)
{
return type switch
{
"last-green" => $"Selected last prod release with Allowed verdict under policy {baseScan?.PolicyVersion}.",
"previous-release" => $"Selected previous release: {baseScan?.ReleaseTag}",
"manual" => "User manually selected this baseline for comparison.",
_ => "Baseline selected for comparison."
};
}
private static string GenerateDetailedExplanation(string type, Scan? baseScan, Scan? headScan)
{
return type switch
{
"last-green" => $"This baseline was automatically selected because it represents the most recent scan " +
$"that received an 'Allowed' verdict under the current policy. This ensures you're " +
$"comparing against a known-good state that passed all security gates.",
"previous-release" => $"This baseline corresponds to the previous release tag in your version history. " +
$"Comparing against the previous release helps identify what changed between versions.",
_ => "This baseline was manually selected for comparison."
};
}
private static IReadOnlyList<string> GetSelectionCriteria(string type)
{
return type switch
{
"last-green" => new[] { "Verdict = Allowed", "Same environment", "Most recent" },
"previous-release" => new[] { "Has release tag", "Previous in version order" },
_ => Array.Empty<string>()
};
}
}
```
**Acceptance Criteria**:
- [ ] GET /api/v1/baselines/recommendations/{artifactDigest} returns baseline options
- [ ] GET /api/v1/baselines/rationale/{baseDigest}/{headDigest} returns selection rationale
- [ ] Recommendations sorted by relevance
- [ ] Rationale includes auditor-friendly explanation
- [ ] Deterministic output (same inputs → same recommendations)
---
### T2: Delta Computation API
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
API endpoint to compute delta verdict between two scans.
**Implementation Path**: `Endpoints/DeltaEndpoints.cs` (new file)
```csharp
// DeltaEndpoints.cs
namespace StellaOps.Scanner.WebService.Endpoints;
public static class DeltaEndpoints
{
public static void MapDeltaEndpoints(this IEndpointRouteBuilder routes)
{
var group = routes.MapGroup("/api/v1/delta")
.WithTags("Delta");
group.MapPost("/compute", ComputeDelta)
.WithName("ComputeDelta")
.WithSummary("Compute delta verdict between two artifacts")
.Produces<DeltaVerdictResponse>(StatusCodes.Status200OK)
.Produces<DeltaVerdictResponse>(StatusCodes.Status202Accepted);
group.MapGet("/{deltaId}", GetDelta)
.WithName("GetDelta")
.WithSummary("Get computed delta by ID")
.Produces<DeltaVerdictResponse>(StatusCodes.Status200OK);
group.MapGet("/{deltaId}/trust-indicators", GetTrustIndicators)
.WithName("GetDeltaTrustIndicators")
.WithSummary("Get trust indicators for a delta")
.Produces<TrustIndicatorsResponse>(StatusCodes.Status200OK);
}
private static async Task<Results<Ok<DeltaVerdictResponse>, Accepted<DeltaVerdictResponse>>> ComputeDelta(
DeltaComputeRequest request,
IDeltaService deltaService,
CancellationToken ct)
{
// Check if already computed
var existing = await deltaService.GetExistingDeltaAsync(
request.BaseVerdictHash,
request.HeadVerdictHash,
request.PolicyHash,
ct);
if (existing != null)
{
return TypedResults.Ok(existing);
}
// Start computation
var pending = await deltaService.StartComputationAsync(request, ct);
return TypedResults.Accepted($"/api/v1/delta/{pending.DeltaId}", pending);
}
private static async Task<Ok<DeltaVerdictResponse>> GetDelta(
string deltaId,
IDeltaService deltaService,
CancellationToken ct)
{
var delta = await deltaService.GetByIdAsync(deltaId, ct);
return TypedResults.Ok(delta);
}
private static async Task<Ok<TrustIndicatorsResponse>> GetTrustIndicators(
string deltaId,
IDeltaService deltaService,
CancellationToken ct)
{
var indicators = await deltaService.GetTrustIndicatorsAsync(deltaId, ct);
return TypedResults.Ok(indicators);
}
}
public record DeltaComputeRequest
{
public required string BaseVerdictHash { get; init; }
public required string HeadVerdictHash { get; init; }
public required string PolicyHash { get; init; }
}
public record DeltaVerdictResponse
{
public required string DeltaId { get; init; }
public required string Status { get; init; } // "pending", "computing", "complete", "failed"
public required string BaseDigest { get; init; }
public required string HeadDigest { get; init; }
public DeltaSummary? Summary { get; init; }
public IReadOnlyList<DeltaCategory>? Categories { get; init; }
public IReadOnlyList<DeltaItem>? Items { get; init; }
public TrustIndicatorsResponse? TrustIndicators { get; init; }
public DateTime ComputedAt { get; init; }
}
public record DeltaSummary
{
public int TotalAdded { get; init; }
public int TotalRemoved { get; init; }
public int TotalChanged { get; init; }
public int NewExploitableVulns { get; init; }
public int ReachabilityFlips { get; init; }
public int VexClaimFlips { get; init; }
public int ComponentChanges { get; init; }
}
public record DeltaCategory
{
public required string Id { get; init; }
public required string Name { get; init; }
public required string Icon { get; init; }
public int Added { get; init; }
public int Removed { get; init; }
public int Changed { get; init; }
}
public record DeltaItem
{
public required string Id { get; init; }
public required string Category { get; init; }
public required string ChangeType { get; init; } // "added", "removed", "changed"
public required string Title { get; init; }
public string? Severity { get; init; }
public string? BeforeValue { get; init; }
public string? AfterValue { get; init; }
public double Priority { get; init; }
}
public record TrustIndicatorsResponse
{
public required string DeterminismHash { get; init; }
public required string PolicyVersion { get; init; }
public required string PolicyHash { get; init; }
public required DateTime FeedSnapshotTimestamp { get; init; }
public required string FeedSnapshotHash { get; init; }
public required string SignatureStatus { get; init; } // "valid", "invalid", "missing", "pending"
public string? SignerIdentity { get; init; }
public PolicyDrift? PolicyDrift { get; init; }
}
public record PolicyDrift
{
public required string BasePolicyVersion { get; init; }
public required string BasePolicyHash { get; init; }
public required string HeadPolicyVersion { get; init; }
public required string HeadPolicyHash { get; init; }
public bool HasDrift { get; init; }
public string? DriftSummary { get; init; }
}
```
**Acceptance Criteria**:
- [ ] POST /api/v1/delta/compute initiates or returns cached delta
- [ ] GET /api/v1/delta/{deltaId} returns delta results
- [ ] GET /api/v1/delta/{deltaId}/trust-indicators returns trust data
- [ ] Idempotent computation (same inputs → same deltaId)
- [ ] 202 Accepted for pending computations
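
The idempotency requirement above (same inputs → same `deltaId`) can be met by deriving the id directly from the cache key recorded in Decisions & Risks: `(base_hash, head_hash, policy_hash)`. A minimal sketch — the `delta-` prefix and 32-hex-char truncation are illustrative choices, not an agreed wire format:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class DeltaId
{
    public static string Compute(string baseVerdictHash, string headVerdictHash, string policyHash)
    {
        // Canonical ordering (base, head, policy) guarantees the same inputs
        // always hash to the same id, making POST /api/v1/delta/compute idempotent.
        var canonical = $"{baseVerdictHash}\n{headVerdictHash}\n{policyHash}";
        var digest = SHA256.HashData(Encoding.UTF8.GetBytes(canonical));
        return "delta-" + Convert.ToHexString(digest)[..32].ToLowerInvariant();
    }
}
```

Storing the computed delta under this id also lets `GetExistingDeltaAsync` become a plain key lookup.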
---
### T3: Actionables Engine API
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T2
**Description**:
API endpoint to generate structured remediation recommendations.
**Implementation Path**: `Endpoints/ActionablesEndpoints.cs` (new file)
```csharp
// ActionablesEndpoints.cs
namespace StellaOps.Scanner.WebService.Endpoints;
public static class ActionablesEndpoints
{
public static void MapActionablesEndpoints(this IEndpointRouteBuilder routes)
{
var group = routes.MapGroup("/api/v1/actionables")
.WithTags("Actionables");
group.MapGet("/delta/{deltaId}", GetDeltaActionables)
.WithName("GetDeltaActionables")
.WithSummary("Get actionable recommendations for a delta")
.Produces<ActionablesResponse>(StatusCodes.Status200OK);
}
private static async Task<Ok<ActionablesResponse>> GetDeltaActionables(
string deltaId,
IActionablesService actionablesService,
CancellationToken ct)
{
var actionables = await actionablesService.GenerateForDeltaAsync(deltaId, ct);
return TypedResults.Ok(actionables);
}
}
public record ActionablesResponse
{
public required string DeltaId { get; init; }
public required IReadOnlyList<Actionable> Actionables { get; init; }
public required DateTime GeneratedAt { get; init; }
}
public record Actionable
{
public required string Id { get; init; }
public required string Type { get; init; } // "upgrade", "patch", "vex", "config", "investigate"
public required string Priority { get; init; } // "critical", "high", "medium", "low"
public required string Title { get; init; }
public required string Description { get; init; }
public string? Component { get; init; }
public string? CurrentVersion { get; init; }
public string? TargetVersion { get; init; }
public IReadOnlyList<string>? CveIds { get; init; }
public string? EstimatedEffort { get; init; }
public ActionableEvidence? Evidence { get; init; }
}
public record ActionableEvidence
{
public string? WitnessId { get; init; }
public string? VexDocumentId { get; init; }
public string? PolicyRuleId { get; init; }
}
```
**Service Implementation**: `Services/ActionablesService.cs` (new file)
```csharp
// ActionablesService.cs
namespace StellaOps.Scanner.WebService.Services;
public interface IActionablesService
{
Task<ActionablesResponse> GenerateForDeltaAsync(string deltaId, CancellationToken ct);
}
public class ActionablesService : IActionablesService
{
private readonly IDeltaService _deltaService;
private readonly IPackageAdvisoryService _advisoryService;
private readonly IVexService _vexService;
public ActionablesService(
IDeltaService deltaService,
IPackageAdvisoryService advisoryService,
IVexService vexService)
{
_deltaService = deltaService;
_advisoryService = advisoryService;
_vexService = vexService;
}
public async Task<ActionablesResponse> GenerateForDeltaAsync(string deltaId, CancellationToken ct)
{
var delta = await _deltaService.GetByIdAsync(deltaId, ct);
var actionables = new List<Actionable>();
foreach (var item in delta.Items ?? Array.Empty<DeltaItem>())
{
var action = await GenerateActionableForItem(item, ct);
if (action != null)
{
actionables.Add(action);
}
}
// Sort by priority
actionables = actionables
.OrderBy(a => GetPriorityOrder(a.Priority))
.ThenBy(a => a.Title)
.ToList();
return new ActionablesResponse
{
DeltaId = deltaId,
Actionables = actionables,
GeneratedAt = DateTime.UtcNow
};
}
private async Task<Actionable?> GenerateActionableForItem(DeltaItem item, CancellationToken ct)
{
return item.Category switch
{
"vulnerabilities" when item.ChangeType == "added" =>
await GenerateVulnActionable(item, ct),
"reachability" when item.ChangeType == "changed" =>
await GenerateReachabilityActionable(item, ct),
"components" when item.ChangeType == "added" =>
await GenerateComponentActionable(item, ct),
"unknowns" =>
GenerateUnknownsActionable(item),
_ => null
};
}
private async Task<Actionable> GenerateVulnActionable(DeltaItem item, CancellationToken ct)
{
// Look up fix version
var fixVersion = await _advisoryService.GetFixVersionAsync(item.Id, ct);
return new Actionable
{
Id = $"action-{item.Id}",
Type = fixVersion != null ? "upgrade" : "investigate",
Priority = item.Severity ?? "medium",
Title = fixVersion != null
? $"Upgrade to fix {item.Title}"
: $"Investigate {item.Title}",
Description = fixVersion != null
? $"Upgrade component to version {fixVersion} to remediate this vulnerability."
: $"New vulnerability detected. Investigate impact and consider VEX statement if not affected.",
TargetVersion = fixVersion,
CveIds = new[] { item.Id }
};
}
private Task<Actionable> GenerateReachabilityActionable(DeltaItem item, CancellationToken ct)
{
    // No asynchronous work needed yet; return a completed task so the
    // awaited call site compiles without an async-without-await warning.
    return Task.FromResult(new Actionable
    {
        Id = $"action-{item.Id}",
        Type = "investigate",
        Priority = "high",
        Title = $"Review reachability change: {item.Title}",
        Description = "Code path reachability changed. Review if vulnerable function is now reachable from entrypoint.",
        Evidence = new ActionableEvidence { WitnessId = item.Id }
    });
}
private Task<Actionable> GenerateComponentActionable(DeltaItem item, CancellationToken ct)
{
    return Task.FromResult(new Actionable
    {
        Id = $"action-{item.Id}",
        Type = "investigate",
        Priority = "low",
        Title = $"New component: {item.Title}",
        Description = "New dependency added. Verify it meets security requirements."
    });
}
private Actionable GenerateUnknownsActionable(DeltaItem item)
{
return new Actionable
{
Id = $"action-{item.Id}",
Type = "investigate",
Priority = "medium",
Title = $"Resolve unknown: {item.Title}",
Description = "Missing information detected. Provide SBOM or VEX data to resolve."
};
}
private static int GetPriorityOrder(string priority) => priority switch
{
"critical" => 0,
"high" => 1,
"medium" => 2,
"low" => 3,
_ => 4
};
}
```
**Acceptance Criteria**:
- [ ] GET /api/v1/actionables/delta/{deltaId} returns recommendations
- [ ] Actionables sorted by priority
- [ ] Upgrade recommendations include target version
- [ ] Investigate recommendations include evidence links
- [ ] VEX recommendations for not-affected cases
---
### T4: Evidence/Proof API Extensions
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Extend existing evidence API to support delta-specific evidence.
**Implementation Path**: Extend `Endpoints/EvidenceEndpoints.cs`
```csharp
// Add to existing EvidenceEndpoints.cs
group.MapGet("/delta/{deltaId}/items/{itemId}", GetDeltaItemEvidence)
.WithName("GetDeltaItemEvidence")
.WithSummary("Get evidence for a specific delta item")
.Produces<DeltaItemEvidenceResponse>(StatusCodes.Status200OK);
group.MapGet("/delta/{deltaId}/witness-paths", GetDeltaWitnessPaths)
.WithName("GetDeltaWitnessPaths")
.WithSummary("Get witness paths for reachability changes in delta")
.Produces<WitnessPathsResponse>(StatusCodes.Status200OK);
group.MapGet("/delta/{deltaId}/vex-merge/{vulnId}", GetVexMergeExplanation)
.WithName("GetVexMergeExplanation")
.WithSummary("Get VEX merge explanation for a vulnerability")
.Produces<VexMergeExplanationResponse>(StatusCodes.Status200OK);
```
**Response Models**:
```csharp
public record DeltaItemEvidenceResponse
{
public required string ItemId { get; init; }
public required string DeltaId { get; init; }
public object? BeforeEvidence { get; init; }
public object? AfterEvidence { get; init; }
public IReadOnlyList<WitnessPath>? WitnessPaths { get; init; }
public VexMergeExplanationResponse? VexMerge { get; init; }
}
public record WitnessPathsResponse
{
public required string DeltaId { get; init; }
public required IReadOnlyList<WitnessPath> Paths { get; init; }
}
public record WitnessPath
{
public required string Id { get; init; }
public required string Entrypoint { get; init; }
public required string Sink { get; init; }
public required IReadOnlyList<WitnessNode> Nodes { get; init; }
public required string Confidence { get; init; } // "confirmed", "likely", "present"
public IReadOnlyList<string>? Gates { get; init; }
}
public record WitnessNode
{
public required string Method { get; init; }
public string? File { get; init; }
public int? Line { get; init; }
public bool IsEntrypoint { get; init; }
public bool IsSink { get; init; }
}
public record VexMergeExplanationResponse
{
public required string VulnId { get; init; }
public required string FinalStatus { get; init; }
public required IReadOnlyList<VexClaimSource> Sources { get; init; }
public required string MergeStrategy { get; init; } // "priority", "latest", "conservative"
public string? ConflictResolution { get; init; }
}
public record VexClaimSource
{
public required string Source { get; init; } // "vendor", "distro", "internal", "community"
public required string Document { get; init; }
public required string Status { get; init; }
public string? Justification { get; init; }
public required DateTime Timestamp { get; init; }
public int Priority { get; init; }
}
```
**Acceptance Criteria**:
- [ ] GET /api/v1/evidence/delta/{deltaId}/items/{itemId} returns before/after evidence
- [ ] GET /api/v1/evidence/delta/{deltaId}/witness-paths returns call paths
- [ ] GET /api/v1/evidence/delta/{deltaId}/vex-merge/{vulnId} returns merge explanation
- [ ] Witness paths include confidence and gates
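
The priority-based `MergeStrategy` can be sketched as selecting the highest-priority claim, breaking ties by recency. The ordering (vendor > distro > internal > community) follows the Decisions & Risks table; the helper name and the numeric priority mapping are assumptions:

```csharp
using System.Collections.Generic;
using System.Linq;

public static class VexMergeSketch
{
    // Lower Priority value wins (e.g. vendor=0 < distro=1 < internal=2 < community=3);
    // among equal priorities, the most recent claim prevails. Operates on the
    // VexClaimSource records defined above.
    public static VexClaimSource SelectWinning(IReadOnlyList<VexClaimSource> sources) =>
        sources
            .OrderBy(s => s.Priority)
            .ThenByDescending(s => s.Timestamp)
            .First();
}
```

Whatever claim wins, all inputs should still be echoed back in `Sources` so the UI can explain the merge.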
---
### T5: OpenAPI Specification Update
**Assignee**: Scanner Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T1, T2, T3, T4
**Description**:
Update OpenAPI spec with new delta comparison endpoints.
**Implementation Path**: `openapi/scanner-api.yaml`
**Acceptance Criteria**:
- [ ] All new endpoints documented in OpenAPI
- [ ] Request/response schemas defined
- [ ] Examples provided for each endpoint
- [ ] `npm run api:lint` passes
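
A hedged sketch of how the baseline recommendations endpoint might look in `scanner-api.yaml`; the schema reference mirrors the C# records above but remains an assumption until T1 lands:

```yaml
paths:
  /api/v1/baselines/recommendations/{artifactDigest}:
    get:
      operationId: getRecommendedBaselines
      summary: Get recommended baselines for an artifact
      parameters:
        - name: artifactDigest
          in: path
          required: true
          schema: { type: string }
        - name: environment
          in: query
          schema: { type: string }
        - name: policyId
          in: query
          schema: { type: string }
      responses:
        "200":
          description: Baseline recommendations with rationale
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/BaselineRecommendationsResponse"
```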
---
### T6: Integration Tests
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1, T2, T3, T4
**Description**:
Integration tests for delta comparison API.
**Implementation Path**: `__Tests/StellaOps.Scanner.WebService.Tests/DeltaApiTests.cs`
**Acceptance Criteria**:
- [ ] Tests for baseline recommendations API
- [ ] Tests for delta computation API
- [ ] Tests for actionables generation
- [ ] Tests for evidence retrieval
- [ ] Tests for idempotent behavior
- [ ] All tests pass with `dotnet test`
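
The idempotency test can be sketched against the in-memory host; the fixture type `ScannerWebApplicationFactory` and the route payloads are assumptions drawn from the endpoint definitions above:

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class DeltaApiTests : IClassFixture<ScannerWebApplicationFactory>
{
    private readonly HttpClient _client;

    public DeltaApiTests(ScannerWebApplicationFactory factory)
        => _client = factory.CreateClient();

    [Fact]
    public async Task ComputeDelta_SameInputs_ReturnsSameDeltaId()
    {
        var request = new DeltaComputeRequest
        {
            BaseVerdictHash = "sha256:aaa",
            HeadVerdictHash = "sha256:bbb",
            PolicyHash = "sha256:ccc"
        };

        // Two identical compute requests must converge on the same delta.
        var first = await PostComputeAsync(request);
        var second = await PostComputeAsync(request);

        Assert.Equal(first.DeltaId, second.DeltaId);
    }

    private async Task<DeltaVerdictResponse> PostComputeAsync(DeltaComputeRequest request)
    {
        var response = await _client.PostAsJsonAsync("/api/v1/delta/compute", request);
        return (await response.Content.ReadFromJsonAsync<DeltaVerdictResponse>())!;
    }
}
```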
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Baseline Selection API |
| 2 | T2 | TODO | T1 | Scanner Team | Delta Computation API |
| 3 | T3 | TODO | T2 | Scanner Team | Actionables Engine API |
| 4 | T4 | TODO | T2 | Scanner Team | Evidence/Proof API Extensions |
| 5 | T5 | TODO | T1-T4 | Scanner Team | OpenAPI Specification Update |
| 6 | T6 | TODO | T1-T4 | Scanner Team | Integration Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created to support Delta Compare View UI (Sprint 4200.0002.0003). Derived from advisory "21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md". | Claude |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Idempotent delta computation | Decision | Scanner Team | Cache by (base_hash, head_hash, policy_hash) |
| Baseline selection algorithm | Decision | Scanner Team | Prefer last green, then previous release, then parent commit |
| Actionables priority order | Decision | Scanner Team | critical > high > medium > low |
| VEX merge strategy | Decision | Scanner Team | Priority-based by default (vendor > distro > internal > community) |
---
## Dependencies
| Dependency | Sprint | Status | Notes |
|------------|--------|--------|-------|
| Smart-Diff Core | 3500 | DONE | Core delta computation engine |
| Delta Compare View UI | 4200.0002.0003 | TODO | Consumer of these APIs |
| VEX Service | Excititor | EXISTS | VEX merge logic |
| Package Advisory Service | Concelier | EXISTS | Fix version lookup |
---
## Success Criteria
- [ ] All 6 tasks marked DONE
- [ ] All endpoints return expected responses
- [ ] Baseline selection includes rationale
- [ ] Delta computation is idempotent
- [ ] Actionables are sorted by priority
- [ ] Evidence includes witness paths and VEX merge
- [ ] OpenAPI spec valid
- [ ] Integration tests pass
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds

# Sprint 4300.0001.0001 · CLI Attestation Chain Verify Command
## Topic & Scope
- Implement `stella verify image <digest> --require sbom,vex,decision` command
- Discover attestations via OCI referrers API
- Verify DSSE signatures and chain integrity
- Return signed summary; non-zero exit for CI/CD gates
- Support offline verification mode
**Working directory:** `src/Cli/StellaOps.Cli/Commands/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- SPRINT_4100_0003_0002: OCI Referrer Discovery (OciReferrerDiscovery, RvaOciPublisher)
- SPRINT_4100_0003_0001: Risk Verdict Attestation (RvaVerifier)
- **Downstream:** CI/CD integration documentation
- **Safe to parallelize with:** SPRINT_4300_0001_0002, SPRINT_4300_0002_*
## Documentation Prerequisites
- `docs/modules/cli/architecture.md`
- `src/Cli/StellaOps.Cli/AGENTS.md`
- SPRINT_4100_0003_0002 (OCI referrer patterns)
---
## Tasks
### T1: Define VerifyImageCommand
**Assignee**: CLI Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: —
**Description**:
Add `stella verify image` subcommand with attestation requirements.
**Implementation Path**: `Commands/VerifyCommandGroup.cs` (extend)
**Command Signature**:
```
stella verify image <reference>
--require <types> # sbom,vex,decision (comma-separated)
--trust-policy <file> # Trust policy YAML (signers, issuers)
--output <format> # table, json, sarif
--strict # Fail on any missing attestation
--verbose # Show verification details
```
**Implementation**:
```csharp
private static Command BuildVerifyImageCommand(
IServiceProvider services,
Option<bool> verboseOption,
CancellationToken cancellationToken)
{
var referenceArg = new Argument<string>("reference")
{
Description = "Image reference (registry/repo@sha256:digest or registry/repo:tag)"
};
var requireOption = new Option<string[]>("--require", "-r")
{
    Description = "Required attestation types: sbom, vex, decision, approval",
    AllowMultipleArgumentsPerToken = true,
    // System.CommandLine 2.x sets defaults via DefaultValueFactory, not SetDefaultValue.
    DefaultValueFactory = _ => new[] { "sbom", "vex", "decision" }
};
var trustPolicyOption = new Option<string?>("--trust-policy")
{
Description = "Path to trust policy file (YAML)"
};
var outputOption = new Option<string>("--output", "-o")
{
    Description = "Output format: table, json, sarif",
    DefaultValueFactory = _ => "table"
};
outputOption.AcceptOnlyFromAmong("table", "json", "sarif");
var strictOption = new Option<bool>("--strict")
{
Description = "Fail if any required attestation is missing"
};
var command = new Command("image", "Verify attestation chain for a container image")
{
referenceArg,
requireOption,
trustPolicyOption,
outputOption,
strictOption,
verboseOption
};
command.SetAction(parseResult =>
{
var reference = parseResult.GetValue(referenceArg) ?? string.Empty;
var require = parseResult.GetValue(requireOption) ?? Array.Empty<string>();
var trustPolicy = parseResult.GetValue(trustPolicyOption);
var output = parseResult.GetValue(outputOption) ?? "table";
var strict = parseResult.GetValue(strictOption);
var verbose = parseResult.GetValue(verboseOption);
return CommandHandlers.HandleVerifyImageAsync(
services, reference, require, trustPolicy, output, strict, verbose, cancellationToken);
});
return command;
}
```
**Acceptance Criteria**:
- [ ] `stella verify image` command registered
- [ ] `--require` accepts comma-separated attestation types
- [ ] `--trust-policy` loads trust configuration
- [ ] `--output` supports table, json, sarif formats
- [ ] `--strict` mode fails on missing attestations
- [ ] Help text documents all options
---
### T2: Implement ImageAttestationVerifier Service
**Assignee**: CLI Team
**Story Points**: 4
**Status**: TODO
**Dependencies**: T1
**Description**:
Create service that discovers and verifies attestations for an image.
**Implementation Path**: `Services/ImageAttestationVerifier.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Cli.Services;
public sealed class ImageAttestationVerifier : IImageAttestationVerifier
{
private readonly IOciReferrerDiscovery _referrerDiscovery;
private readonly IRvaVerifier _rvaVerifier;
private readonly IDsseVerifier _dsseVerifier;
private readonly ITrustPolicyLoader _trustPolicyLoader;
private readonly ILogger<ImageAttestationVerifier> _logger;
public ImageAttestationVerifier(
    IOciReferrerDiscovery referrerDiscovery,
    IRvaVerifier rvaVerifier,
    IDsseVerifier dsseVerifier,
    ITrustPolicyLoader trustPolicyLoader,
    ILogger<ImageAttestationVerifier> logger)
{
    _referrerDiscovery = referrerDiscovery;
    _rvaVerifier = rvaVerifier;
    _dsseVerifier = dsseVerifier;
    _trustPolicyLoader = trustPolicyLoader;
    _logger = logger;
}
public async Task<ImageVerificationResult> VerifyAsync(
ImageVerificationRequest request,
CancellationToken ct = default)
{
var result = new ImageVerificationResult
{
ImageReference = request.Reference,
ImageDigest = await ResolveDigestAsync(request.Reference, ct),
VerifiedAt = DateTimeOffset.UtcNow
};
// Load trust policy
var trustPolicy = request.TrustPolicyPath is not null
? await _trustPolicyLoader.LoadAsync(request.TrustPolicyPath, ct)
: TrustPolicy.Default;
// Discover attestations via OCI referrers
var referrers = await _referrerDiscovery.ListReferrersAsync(
request.Registry, request.Repository, result.ImageDigest, ct);
if (!referrers.IsSuccess)
{
result.Errors.Add($"Failed to discover referrers: {referrers.Error}");
return result;
}
// Group by attestation type
var attestationsByType = referrers.Referrers
.GroupBy(r => MapArtifactTypeToAttestationType(r.ArtifactType))
.ToDictionary(g => g.Key, g => g.ToList());
// Verify each required attestation type
foreach (var requiredType in request.RequiredTypes)
{
var verification = await VerifyAttestationTypeAsync(
requiredType, attestationsByType, trustPolicy, ct);
result.Attestations.Add(verification);
}
// Compute overall result: invalid signatures always fail; missing
// attestations only fail when --strict is set (per the flag's contract).
result.IsValid = result.Attestations.All(a =>
    a.IsValid || (!request.Strict && a.Status == AttestationStatus.Missing));
result.MissingTypes = request.RequiredTypes
    .Except(result.Attestations.Where(a => a.IsValid).Select(a => a.Type))
    .ToList();
return result;
}
private async Task<AttestationVerification> VerifyAttestationTypeAsync(
string type,
Dictionary<string, List<ReferrerInfo>> attestationsByType,
TrustPolicy trustPolicy,
CancellationToken ct)
{
if (!attestationsByType.TryGetValue(type, out var referrers) || referrers.Count == 0)
{
return new AttestationVerification
{
Type = type,
IsValid = false,
Status = AttestationStatus.Missing,
Message = $"No {type} attestation found"
};
}
// Verify the most recent attestation; RFC 3339 timestamps in the
// "created" annotation sort correctly as plain strings.
var latest = referrers
    .OrderByDescending(r => r.Annotations.GetValueOrDefault("created"))
    .First();
// Fetch and verify DSSE envelope
var envelope = await FetchEnvelopeAsync(latest.Digest, ct);
var verifyResult = await _dsseVerifier.VerifyAsync(envelope, trustPolicy, ct);
return new AttestationVerification
{
Type = type,
IsValid = verifyResult.IsValid,
Status = verifyResult.IsValid ? AttestationStatus.Verified : AttestationStatus.Invalid,
Digest = latest.Digest,
SignerIdentity = verifyResult.SignerIdentity,
Message = verifyResult.IsValid ? "Signature valid" : verifyResult.Error,
VerifiedAt = DateTimeOffset.UtcNow
};
}
}
public sealed record ImageVerificationRequest
{
public required string Reference { get; init; }
public required string Registry { get; init; }
public required string Repository { get; init; }
public required IReadOnlyList<string> RequiredTypes { get; init; }
public string? TrustPolicyPath { get; init; }
public bool Strict { get; init; }
}
public sealed record ImageVerificationResult
{
public required string ImageReference { get; init; }
public required string ImageDigest { get; init; }
public required DateTimeOffset VerifiedAt { get; init; }
public bool IsValid { get; set; }
public List<AttestationVerification> Attestations { get; } = [];
public List<string> MissingTypes { get; set; } = [];
public List<string> Errors { get; } = [];
}
public sealed record AttestationVerification
{
public required string Type { get; init; }
public required bool IsValid { get; init; }
public required AttestationStatus Status { get; init; }
public string? Digest { get; init; }
public string? SignerIdentity { get; init; }
public string? Message { get; init; }
public DateTimeOffset? VerifiedAt { get; init; }
}
public enum AttestationStatus
{
Verified,
Invalid,
Missing,
Expired,
UntrustedSigner
}
```
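The pass/fail aggregation at the end of the verifier above can be restated language-agnostically. This illustrative Python sketch (field names are invented, not the actual C# types) mainly makes the non-strict behavior explicit: when `strict` is false the overall result passes regardless of failures, though missing types are still reported.

```python
def aggregate(results, required, strict):
    """Mirror of the verifier's aggregation: in strict mode every
    attestation must verify; otherwise the overall result passes,
    while missing required types are still surfaced for visibility."""
    valid_types = {r["type"] for r in results if r["valid"]}
    is_valid = all(r["valid"] or not strict for r in results)
    missing = [t for t in required if t not in valid_types]
    return is_valid, missing
```

For example, with `sbom` valid and `vex` invalid under `strict=True`, the result is a failure with `vex` and `decision` listed as missing.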
**Acceptance Criteria**:
- [ ] `ImageAttestationVerifier.cs` created
- [ ] Discovers attestations via OCI referrers
- [ ] Verifies DSSE signatures
- [ ] Validates against trust policy
- [ ] Returns comprehensive verification result
- [ ] Handles missing attestations gracefully
---
### T3: Implement Trust Policy Loader
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Description**:
Load and parse trust policy configuration.
**Implementation Path**: `Services/TrustPolicyLoader.cs` (new)
**Trust Policy Schema**:
```yaml
# trust-policy.yaml
version: "1"
attestations:
sbom:
required: true
signers:
- identity: "builder@stellaops.example.com"
issuer: "https://accounts.google.com"
vex:
required: true
signers:
- identity: "security@stellaops.example.com"
decision:
required: true
signers:
- identity: "policy-engine@stellaops.example.com"
approval:
required: false
signers:
- identity: "*@stellaops.example.com"
defaults:
requireRekor: true
maxAge: "168h" # 7 days
```
**Acceptance Criteria**:
- [ ] `TrustPolicyLoader.cs` created
- [ ] Parses YAML trust policy
- [ ] Validates policy structure
- [ ] Default policy when none specified
- [ ] Signer identity matching (exact, wildcard)
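The exact/wildcard identity matching called for above can be sketched with glob-style patterns. The pattern shape (`*@stellaops.example.com`) comes from the policy example; the function name is hypothetical, not an existing API.

```python
from fnmatch import fnmatchcase

def signer_matches(identity: str, allowed_patterns: list[str]) -> bool:
    """Return True if the signer identity matches any allowed pattern.
    Exact strings match themselves; '*' acts as a glob wildcard."""
    return any(fnmatchcase(identity, p) for p in allowed_patterns)
```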
---
### T4: Implement Command Handler
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1, T2, T3
**Description**:
Implement the command handler that orchestrates verification.
**Implementation Path**: `Commands/CommandHandlers.VerifyImage.cs` (new)
**Implementation**:
```csharp
public static async Task<int> HandleVerifyImageAsync(
IServiceProvider services,
string reference,
string[] require,
string? trustPolicy,
string output,
bool strict,
bool verbose,
CancellationToken ct)
{
var verifier = services.GetRequiredService<IImageAttestationVerifier>();
var console = services.GetRequiredService<IConsoleOutput>();
// Parse reference
var (registry, repository, digest) = ParseImageReference(reference);
var request = new ImageVerificationRequest
{
Reference = reference,
Registry = registry,
Repository = repository,
RequiredTypes = require.ToList(),
TrustPolicyPath = trustPolicy,
Strict = strict
};
var result = await verifier.VerifyAsync(request, ct);
// Output results
switch (output)
{
case "json":
console.WriteJson(result);
break;
case "sarif":
console.WriteSarif(ConvertToSarif(result));
break;
default:
WriteTableOutput(console, result, verbose);
break;
}
// Return exit code
return result.IsValid ? 0 : 1;
}
private static void WriteTableOutput(IConsoleOutput console, ImageVerificationResult result, bool verbose)
{
console.WriteLine($"Image: {result.ImageReference}");
console.WriteLine($"Digest: {result.ImageDigest}");
console.WriteLine();
var table = new ConsoleTable("Type", "Status", "Signer", "Message");
foreach (var att in result.Attestations)
{
var status = att.IsValid ? "[green]PASS[/]" : "[red]FAIL[/]";
table.AddRow(att.Type, status, att.SignerIdentity ?? "-", att.Message ?? "-");
}
console.WriteTable(table);
console.WriteLine();
console.WriteLine(result.IsValid
? "[green]Verification PASSED[/]"
: "[red]Verification FAILED[/]");
if (result.MissingTypes.Count > 0)
{
console.WriteLine($"[yellow]Missing: {string.Join(", ", result.MissingTypes)}[/]");
}
}
```
**Acceptance Criteria**:
- [ ] Command handler implemented
- [ ] Parses image reference (registry/repo@digest or :tag)
- [ ] Table output with colorized status
- [ ] JSON output for automation
- [ ] SARIF output for security tools
- [ ] Exit code 0 on pass, 1 on fail
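The reference-parsing criterion above (registry/repo@digest or :tag) can be sketched as follows; this is a minimal illustration, not the actual `ParseImageReference`, and the `latest` default for an untagged reference is an assumption.

```python
def parse_image_reference(reference: str):
    """Split 'registry/repo@sha256:...' or 'registry/repo:tag' into parts.
    The digest form takes precedence; a ':' before the first '/' would be
    a registry port, so the registry is split off first."""
    registry, _, remainder = reference.partition("/")
    if "@" in remainder:
        repo, _, digest = remainder.partition("@")
        return registry, repo, digest
    # Tag form: split on the last ':' now that the registry is removed.
    repo, _, tag = remainder.rpartition(":")
    if not repo:  # no tag given, e.g. "registry/repo" (assumed default)
        return registry, remainder, "latest"
    return registry, repo, tag
```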
---
### T5: Add Unit Tests
**Assignee**: CLI Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T4
**Description**:
Comprehensive tests for image verification.
**Implementation Path**: `src/Cli/__Tests/StellaOps.Cli.Tests/Commands/VerifyImageTests.cs`
**Test Cases**:
```csharp
public class VerifyImageTests
{
[Fact]
public async Task Verify_AllAttestationsPresent_ReturnsPass()
{
// Arrange
var verifier = CreateVerifierWithMocks(
sbom: CreateValidAttestation(),
vex: CreateValidAttestation(),
decision: CreateValidAttestation());
var request = CreateRequest(require: new[] { "sbom", "vex", "decision" });
// Act
var result = await verifier.VerifyAsync(request);
// Assert
result.IsValid.Should().BeTrue();
result.Attestations.Should().HaveCount(3);
result.Attestations.Should().OnlyContain(a => a.IsValid);
}
[Fact]
public async Task Verify_MissingAttestation_Strict_ReturnsFail()
{
var verifier = CreateVerifierWithMocks(
sbom: CreateValidAttestation(),
vex: null, // Missing
decision: CreateValidAttestation());
var request = CreateRequest(require: new[] { "sbom", "vex", "decision" }, strict: true);
var result = await verifier.VerifyAsync(request);
result.IsValid.Should().BeFalse();
result.MissingTypes.Should().Contain("vex");
}
[Fact]
public async Task Verify_InvalidSignature_ReturnsFail()
{
var verifier = CreateVerifierWithMocks(
sbom: CreateInvalidAttestation("Bad signature"));
var request = CreateRequest(require: new[] { "sbom" });
var result = await verifier.VerifyAsync(request);
result.IsValid.Should().BeFalse();
result.Attestations[0].Status.Should().Be(AttestationStatus.Invalid);
}
[Fact]
public async Task Verify_UntrustedSigner_ReturnsFail()
{
var verifier = CreateVerifierWithMocks(
sbom: CreateAttestationWithSigner("untrusted@evil.com"));
var request = CreateRequest(
require: new[] { "sbom" },
trustPolicy: CreatePolicyAllowing("trusted@example.com"));
var result = await verifier.VerifyAsync(request);
result.IsValid.Should().BeFalse();
result.Attestations[0].Status.Should().Be(AttestationStatus.UntrustedSigner);
}
[Fact]
public void ParseImageReference_WithDigest_Parses()
{
var (registry, repo, digest) = CommandHandlers.ParseImageReference(
"gcr.io/myproject/myapp@sha256:abc123");
registry.Should().Be("gcr.io");
repo.Should().Be("myproject/myapp");
digest.Should().Be("sha256:abc123");
}
[Fact]
public async Task Handler_ValidResult_ReturnsExitCode0()
{
var services = CreateServicesWithValidVerifier();
var exitCode = await CommandHandlers.HandleVerifyImageAsync(
services, "registry/app@sha256:abc",
new[] { "sbom" }, null, "table", false, false, CancellationToken.None);
exitCode.Should().Be(0);
}
[Fact]
public async Task Handler_InvalidResult_ReturnsExitCode1()
{
var services = CreateServicesWithFailingVerifier();
var exitCode = await CommandHandlers.HandleVerifyImageAsync(
services, "registry/app@sha256:abc",
new[] { "sbom" }, null, "table", true, false, CancellationToken.None);
exitCode.Should().Be(1);
}
}
```
**Acceptance Criteria**:
- [ ] All attestations present test
- [ ] Missing attestation (strict) test
- [ ] Invalid signature test
- [ ] Untrusted signer test
- [ ] Reference parsing tests
- [ ] Exit code tests
- [ ] All 7+ tests pass
---
### T6: Add DI Registration and Integration
**Assignee**: CLI Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T2, T3
**Description**:
Register services and integrate command.
**Acceptance Criteria**:
- [ ] `IImageAttestationVerifier` registered in DI
- [ ] `ITrustPolicyLoader` registered in DI
- [ ] Command added to verify group
- [ ] Integration test with mock registry
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | CLI Team | Define VerifyImageCommand |
| 2 | T2 | TODO | T1 | CLI Team | Implement ImageAttestationVerifier |
| 3 | T3 | TODO | — | CLI Team | Implement Trust Policy Loader |
| 4 | T4 | TODO | T1, T2, T3 | CLI Team | Implement Command Handler |
| 5 | T5 | TODO | T4 | CLI Team | Add unit tests |
| 6 | T6 | TODO | T2, T3 | CLI Team | Add DI registration |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G1). | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Default required types | Decision | CLI Team | sbom,vex,decision as defaults |
| SARIF output | Decision | CLI Team | Enables integration with security scanners |
| Trust policy format | Decision | CLI Team | YAML for human readability |
| Exit codes | Decision | CLI Team | 0=pass, 1=fail, 2=error |

| Risk | Mitigation |
|------|------------|
| Registry auth complexity | Reuse existing OCI auth providers |
| Large referrer lists | Pagination and filtering by type |
| Offline mode | Fallback to local evidence directory |
---
## Success Criteria
- [ ] All 6 tasks marked DONE
- [ ] `stella verify image` command works end-to-end
- [ ] Exit code 1 when attestations missing/invalid
- [ ] Trust policy filtering works
- [ ] 7+ tests passing
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds

---
# SPRINT_4300_0001_0001: OCI Verdict Attestation Referrer Push
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4300_0001_0001 |
| **Title** | OCI Verdict Attestation Referrer Push |
| **Priority** | P0 (Critical) |
| **Moat Strength** | 5 (Structural moat) |
| **Working Directory** | `src/Attestor/`, `src/Scanner/`, `src/Zastava/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | VerdictReceiptStatement (exists), ProofSpine (exists) |
---
## Objective
Implement the capability to push signed risk verdicts as OCI referrer artifacts, creating a portable "ship token" that can be attached to container images and verified independently by registries, admission controllers, and audit systems.
This is the **moat anchor** feature: "We don't output findings; we output an attestable decision that can be replayed."
---
## Background
The advisory identifies "Signed, replayable risk verdicts" as a **Moat 5** feature. While `VerdictReceiptStatement` and `ProofSpine` infrastructure exist, the verdict is not yet:
1. Pushed as an OCI artifact referrer (per OCI 1.1 spec)
2. Discoverable via `referrers` API
3. Verifiable standalone without StellaOps backend
Competitors (Syft + Sigstore, cosign) sign SBOMs as attestations, but not **risk decisions end-to-end**.
---
## Deliverables
### D1: OCI Verdict Artifact Schema
- Define `application/vnd.stellaops.verdict.v1+json` media type
- Create OCI manifest structure for verdict bundle
- Include: verdict statement, proof bundle digest, policy snapshot reference
### D2: Verdict Pusher Service
- Implement `IVerdictPusher` interface in `StellaOps.Attestor.OCI`
- Support OCI Distribution 1.1 referrers API
- Handle authentication (bearer token, basic auth)
- Retry logic with backoff
### D3: Scanner Integration
- Hook verdict push into scan completion flow
- Add `--push-verdict` flag to CLI
- Emit telemetry on push success/failure
### D4: Registry Webhook Observer
- Extend Zastava to observe verdict referrers
- Validate verdict signature on webhook
- Store verdict metadata in findings ledger
### D5: Verification CLI
- `stella verdict verify <image-ref>` command
- Fetch verdict via referrers API
- Validate signature and replay inputs
---
## Tasks
### Phase 1: Schema & Models
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| VERDICT-001 | Define OCI verdict media type and manifest schema | TODO | |
| VERDICT-002 | Create `VerdictOciManifest` record in `StellaOps.Attestor.OCI` | TODO | |
| VERDICT-003 | Add verdict artifact type constants | TODO | |
| VERDICT-004 | Write schema validation tests | TODO | |
### Phase 2: Push Infrastructure
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| VERDICT-005 | Implement `IVerdictPusher` interface | TODO | |
| VERDICT-006 | Create `OciVerdictPusher` with referrers API support | TODO | |
| VERDICT-007 | Add registry authentication handling | TODO | |
| VERDICT-008 | Implement retry with exponential backoff | TODO | |
| VERDICT-009 | Add push telemetry (OTEL spans, metrics) | TODO | |
| VERDICT-010 | Integration tests with local registry (testcontainers) | TODO | |
### Phase 3: Scanner Integration
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| VERDICT-011 | Add `VerdictPushOptions` to scan configuration | TODO | |
| VERDICT-012 | Hook pusher into `ScanJobProcessor` completion | TODO | |
| VERDICT-013 | Add `--push-verdict` CLI flag | TODO | |
| VERDICT-014 | Update scan status response with verdict digest | TODO | |
| VERDICT-015 | E2E test: scan -> verdict push -> verify | TODO | |
### Phase 4: Zastava Observer
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| VERDICT-016 | Extend webhook handler for verdict artifacts | TODO | |
| VERDICT-017 | Implement verdict signature validation | TODO | |
| VERDICT-018 | Store verdict metadata in findings ledger | TODO | |
| VERDICT-019 | Add verdict discovery endpoint | TODO | |
### Phase 5: Verification CLI
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| VERDICT-020 | Implement `stella verdict verify` command | TODO | |
| VERDICT-021 | Fetch verdict via referrers API | TODO | |
| VERDICT-022 | Validate DSSE envelope signature | TODO | |
| VERDICT-023 | Verify input digests against manifest | TODO | |
| VERDICT-024 | Output verification report (JSON/human) | TODO | |
---
## Acceptance Criteria
1. **AC1**: Verdict can be pushed to any OCI 1.1 compliant registry
2. **AC2**: Verdict is discoverable via `GET /v2/<name>/referrers/<digest>`
3. **AC3**: `stella verdict verify` succeeds with valid signature
4. **AC4**: Verdict includes sbomDigest, feedsDigest, policyDigest for replay
5. **AC5**: Zastava can observe and validate verdict push events
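For AC2, discovery uses the OCI Distribution 1.1 referrers endpoint with an `artifactType` filter; the repository name and digest below are placeholders.

```
GET /v2/myproject/myapp/referrers/sha256:<image-digest>?artifactType=application/vnd.stellaops.verdict.v1+json

200 OK
Content-Type: application/vnd.oci.image.index.v1+json
```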
---
## Technical Notes
### OCI Manifest Structure
```json
{
"schemaVersion": 2,
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"artifactType": "application/vnd.stellaops.verdict.v1+json",
"config": {
"mediaType": "application/vnd.stellaops.verdict.config.v1+json",
"digest": "sha256:...",
"size": 0
},
"layers": [
{
"mediaType": "application/vnd.stellaops.verdict.v1+json",
"digest": "sha256:...",
"size": 1234
}
],
"subject": {
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"digest": "sha256:<image-digest>",
"size": 5678
},
"annotations": {
"org.stellaops.verdict.decision": "pass",
"org.stellaops.verdict.timestamp": "2025-12-22T00:00:00Z"
}
}
```
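A minimal structural check of a verdict manifest like the one above can be sketched as follows; the field names (`artifactType`, `subject`) follow the OCI image spec, while the verdict media type is the one proposed in D1 and the function is illustrative.

```python
import json

VERDICT_TYPE = "application/vnd.stellaops.verdict.v1+json"

def is_verdict_referrer(manifest_json: str) -> bool:
    """True if the manifest declares the verdict artifact type and
    carries a subject pointing back at the attested image."""
    m = json.loads(manifest_json)
    return (
        m.get("artifactType") == VERDICT_TYPE
        and "digest" in m.get("subject", {})
    )
```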
### Signing
- Use existing `IProofChainSigner` for DSSE envelope
- Support Sigstore (keyless) and local key signing
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Registry doesn't support referrers API | Cannot push | Fallback to tag-based approach |
| Large verdict bundles | Slow push | Compress, reference external proofs |
| Key management complexity | Security | Document key rotation procedures |
---
## Documentation Updates
- [ ] Update `docs/modules/attestor/architecture.md`
- [ ] Add `docs/operations/verdict-attestation-guide.md`
- [ ] Update CLI reference with `verdict` commands

---
# Sprint 4300.0001.0002 - Findings Evidence API Endpoint
## Topic & Scope
- Add `GET /api/v1/findings/{findingId}/evidence` endpoint
- Returns consolidated evidence contract matching advisory spec
- Uses existing `EvidenceCompositionService` internally
- Add OpenAPI schema documentation
**Working directory:** `src/Scanner/StellaOps.Scanner.WebService/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- EvidenceCompositionService (SPRINT_3800_0003_0001)
- TriageDbContext entities
- **Downstream:** UI evidence drawer integration
- **Safe to parallelize with:** SPRINT_4300_0001_0001, SPRINT_4300_0002_*
## Documentation Prerequisites
- `docs/modules/scanner/architecture.md`
- `src/Scanner/StellaOps.Scanner.WebService/AGENTS.md`
- SPRINT_3800_0003_0001 (Evidence API models)
---
## Tasks
### T1: Define FindingEvidenceResponse Contract
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Description**:
Define the response contract matching the advisory specification.
**Implementation Path**: `Contracts/FindingEvidenceContracts.cs` (new or extend)
**Contract**:
```csharp
namespace StellaOps.Scanner.WebService.Contracts;
/// <summary>
/// Consolidated evidence response for a finding.
/// Matches the advisory contract for explainable triage UX.
/// </summary>
public sealed record FindingEvidenceResponse
{
/// <summary>
/// Unique finding identifier.
/// </summary>
[JsonPropertyName("finding_id")]
public required string FindingId { get; init; }
/// <summary>
/// CVE or vulnerability identifier.
/// </summary>
[JsonPropertyName("cve")]
public required string Cve { get; init; }
/// <summary>
/// Affected component details.
/// </summary>
[JsonPropertyName("component")]
public required ComponentInfo Component { get; init; }
/// <summary>
/// Reachable path from entrypoint to vulnerable code.
/// </summary>
[JsonPropertyName("reachable_path")]
public IReadOnlyList<string> ReachablePath { get; init; } = [];
/// <summary>
/// Entrypoint details (HTTP route, CLI command, etc.).
/// </summary>
[JsonPropertyName("entrypoint")]
public EntrypointInfo? Entrypoint { get; init; }
/// <summary>
/// VEX exploitability status.
/// </summary>
[JsonPropertyName("vex")]
public VexStatusInfo? Vex { get; init; }
/// <summary>
/// When this evidence was last observed/generated.
/// </summary>
[JsonPropertyName("last_seen")]
public required DateTimeOffset LastSeen { get; init; }
/// <summary>
/// Content-addressed references to attestations.
/// </summary>
[JsonPropertyName("attestation_refs")]
public IReadOnlyList<string> AttestationRefs { get; init; } = [];
/// <summary>
/// Risk score with explanation.
/// </summary>
[JsonPropertyName("score")]
public ScoreInfo? Score { get; init; }
/// <summary>
/// Boundary exposure information.
/// </summary>
[JsonPropertyName("boundary")]
public BoundaryInfo? Boundary { get; init; }
/// <summary>
/// Evidence freshness and TTL.
/// </summary>
[JsonPropertyName("freshness")]
public FreshnessInfo Freshness { get; init; } = new();
}
public sealed record ComponentInfo
{
[JsonPropertyName("name")]
public required string Name { get; init; }
[JsonPropertyName("version")]
public required string Version { get; init; }
[JsonPropertyName("purl")]
public string? Purl { get; init; }
[JsonPropertyName("ecosystem")]
public string? Ecosystem { get; init; }
}
public sealed record EntrypointInfo
{
[JsonPropertyName("type")]
public required string Type { get; init; } // http, grpc, cli, cron, queue
[JsonPropertyName("route")]
public string? Route { get; init; }
[JsonPropertyName("method")]
public string? Method { get; init; }
[JsonPropertyName("auth")]
public string? Auth { get; init; } // jwt:scope, mtls, apikey, none
}
public sealed record VexStatusInfo
{
[JsonPropertyName("status")]
public required string Status { get; init; } // affected, not_affected, under_investigation, fixed
[JsonPropertyName("justification")]
public string? Justification { get; init; }
[JsonPropertyName("timestamp")]
public DateTimeOffset? Timestamp { get; init; }
[JsonPropertyName("issuer")]
public string? Issuer { get; init; }
}
public sealed record ScoreInfo
{
[JsonPropertyName("risk_score")]
public required int RiskScore { get; init; }
[JsonPropertyName("contributions")]
public IReadOnlyList<ScoreContribution> Contributions { get; init; } = [];
}
public sealed record ScoreContribution
{
[JsonPropertyName("factor")]
public required string Factor { get; init; }
[JsonPropertyName("value")]
public required int Value { get; init; }
[JsonPropertyName("reason")]
public string? Reason { get; init; }
}
public sealed record BoundaryInfo
{
[JsonPropertyName("surface")]
public required string Surface { get; init; }
[JsonPropertyName("exposure")]
public required string Exposure { get; init; } // internet, internal, none
[JsonPropertyName("auth")]
public AuthInfo? Auth { get; init; }
[JsonPropertyName("controls")]
public IReadOnlyList<string> Controls { get; init; } = [];
}
public sealed record AuthInfo
{
[JsonPropertyName("mechanism")]
public required string Mechanism { get; init; }
[JsonPropertyName("required_scopes")]
public IReadOnlyList<string> RequiredScopes { get; init; } = [];
}
public sealed record FreshnessInfo
{
[JsonPropertyName("is_stale")]
public bool IsStale { get; init; }
[JsonPropertyName("expires_at")]
public DateTimeOffset? ExpiresAt { get; init; }
[JsonPropertyName("ttl_remaining_hours")]
public int? TtlRemainingHours { get; init; }
}
```
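A hypothetical payload illustrating the snake_case contract above (all values are invented for illustration):

```json
{
  "finding_id": "f-123",
  "cve": "CVE-2025-0001",
  "component": { "name": "openssl", "version": "3.0.1", "purl": "pkg:apk/alpine/openssl@3.0.1", "ecosystem": "apk" },
  "reachable_path": ["main", "handle_request", "ssl_read"],
  "entrypoint": { "type": "http", "route": "/api/upload", "method": "POST", "auth": "jwt:upload" },
  "vex": { "status": "affected", "timestamp": "2025-12-22T00:00:00Z" },
  "last_seen": "2025-12-22T00:00:00Z",
  "attestation_refs": ["sha256:abc123"],
  "score": { "risk_score": 72, "contributions": [{ "factor": "reachability", "value": 40, "reason": "entrypoint reachable" }] },
  "boundary": { "surface": "http", "exposure": "internet", "controls": ["waf"] },
  "freshness": { "is_stale": false, "expires_at": "2025-12-29T00:00:00Z", "ttl_remaining_hours": 168 }
}
```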
**Acceptance Criteria**:
- [ ] `FindingEvidenceContracts.cs` created
- [ ] All fields from advisory included
- [ ] JSON property names use snake_case
- [ ] XML documentation on all properties
- [ ] Nullable fields where appropriate
---
### T2: Implement FindingsEvidenceController
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Create the REST endpoint for evidence retrieval.
**Implementation Path**: `Controllers/FindingsEvidenceController.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Scanner.WebService.Controllers;
[ApiController]
[Route("api/v1/findings")]
[Produces("application/json")]
public sealed class FindingsEvidenceController : ControllerBase
{
private readonly IEvidenceCompositionService _evidenceService;
private readonly ITriageQueryService _triageService;
private readonly ILogger<FindingsEvidenceController> _logger;
public FindingsEvidenceController(
IEvidenceCompositionService evidenceService,
ITriageQueryService triageService,
ILogger<FindingsEvidenceController> logger)
{
_evidenceService = evidenceService;
_triageService = triageService;
_logger = logger;
}
/// <summary>
/// Get consolidated evidence for a finding.
/// </summary>
/// <param name="findingId">The finding identifier.</param>
/// <param name="includeRaw">Include raw source locations (requires elevated permissions).</param>
/// <response code="200">Evidence retrieved successfully.</response>
/// <response code="404">Finding not found.</response>
/// <response code="403">Insufficient permissions for raw source.</response>
[HttpGet("{findingId}/evidence")]
[ProducesResponseType(typeof(FindingEvidenceResponse), StatusCodes.Status200OK)]
[ProducesResponseType(StatusCodes.Status404NotFound)]
[ProducesResponseType(StatusCodes.Status403Forbidden)]
public async Task<IActionResult> GetEvidenceAsync(
[FromRoute] string findingId,
[FromQuery] bool includeRaw = false,
CancellationToken ct = default)
{
_logger.LogDebug("Getting evidence for finding {FindingId}", findingId);
// Check permissions for raw source
if (includeRaw && !User.HasClaim("scope", "evidence:raw"))
{
// ControllerBase.Forbid(string) treats its argument as an authentication
// scheme name, so return an explicit 403 body instead.
return StatusCode(StatusCodes.Status403Forbidden,
new { error = "Requires evidence:raw scope for raw source access" });
}
// Get finding
var finding = await _triageService.GetFindingAsync(findingId, ct);
if (finding is null)
{
return NotFound(new { error = "Finding not found", findingId });
}
// Compose evidence
var evidence = await _evidenceService.ComposeAsync(finding, includeRaw, ct);
// Map to response
var response = MapToResponse(finding, evidence);
return Ok(response);
}
/// <summary>
/// Get evidence for multiple findings (batch).
/// </summary>
[HttpPost("evidence/batch")]
[ProducesResponseType(typeof(BatchEvidenceResponse), StatusCodes.Status200OK)]
public async Task<IActionResult> GetBatchEvidenceAsync(
[FromBody] BatchEvidenceRequest request,
CancellationToken ct = default)
{
if (request.FindingIds.Count > 100)
{
return BadRequest(new { error = "Maximum 100 findings per batch" });
}
var results = new List<FindingEvidenceResponse>();
foreach (var findingId in request.FindingIds)
{
var finding = await _triageService.GetFindingAsync(findingId, ct);
if (finding is null) continue;
var evidence = await _evidenceService.ComposeAsync(finding, false, ct);
results.Add(MapToResponse(finding, evidence));
}
return Ok(new BatchEvidenceResponse { Findings = results });
}
private static FindingEvidenceResponse MapToResponse(
TriageFinding finding,
ComposedEvidence evidence)
{
return new FindingEvidenceResponse
{
FindingId = finding.Id.ToString(),
Cve = finding.Cve ?? finding.RuleId ?? "unknown",
Component = new ComponentInfo
{
Name = evidence.ComponentName ?? "unknown",
Version = evidence.ComponentVersion ?? "unknown",
Purl = finding.ComponentPurl,
Ecosystem = evidence.Ecosystem
},
ReachablePath = evidence.ReachablePath ?? [],
Entrypoint = evidence.Entrypoint is not null
? new EntrypointInfo
{
Type = evidence.Entrypoint.Type,
Route = evidence.Entrypoint.Route,
Method = evidence.Entrypoint.Method,
Auth = evidence.Entrypoint.Auth
}
: null,
Vex = evidence.VexStatus is not null
? new VexStatusInfo
{
Status = evidence.VexStatus.Status,
Justification = evidence.VexStatus.Justification,
Timestamp = evidence.VexStatus.Timestamp,
Issuer = evidence.VexStatus.Issuer
}
: null,
LastSeen = evidence.LastSeen,
AttestationRefs = evidence.AttestationDigests ?? [],
Score = evidence.Score is not null
? new ScoreInfo
{
RiskScore = evidence.Score.RiskScore,
Contributions = evidence.Score.Contributions
.Select(c => new ScoreContribution
{
Factor = c.Factor,
Value = c.Value,
Reason = c.Reason
}).ToList()
}
: null,
Boundary = evidence.Boundary is not null
? new BoundaryInfo
{
Surface = evidence.Boundary.Surface,
Exposure = evidence.Boundary.Exposure,
Auth = evidence.Boundary.Auth is not null
? new AuthInfo
{
Mechanism = evidence.Boundary.Auth.Mechanism,
RequiredScopes = evidence.Boundary.Auth.Scopes ?? []
}
: null,
Controls = evidence.Boundary.Controls ?? []
}
: null,
Freshness = new FreshnessInfo
{
IsStale = evidence.IsStale,
ExpiresAt = evidence.ExpiresAt,
TtlRemainingHours = evidence.TtlRemainingHours
}
};
}
}
public sealed record BatchEvidenceRequest
{
[JsonPropertyName("finding_ids")]
public required IReadOnlyList<string> FindingIds { get; init; }
}
public sealed record BatchEvidenceResponse
{
[JsonPropertyName("findings")]
public required IReadOnlyList<FindingEvidenceResponse> Findings { get; init; }
}
```
**Acceptance Criteria**:
- [ ] GET `/api/v1/findings/{findingId}/evidence` works
- [ ] POST `/api/v1/findings/evidence/batch` for batch retrieval
- [ ] `includeRaw` parameter with permission check
- [ ] 404 when finding not found
- [ ] 403 when raw access denied
- [ ] Proper error responses
---
### T3: Add OpenAPI Documentation
**Assignee**: Scanner Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Add OpenAPI schema documentation for the endpoint.
**Implementation Path**: `docs/schemas/findings-evidence-api.openapi.yaml`
**Acceptance Criteria**:
- [ ] OpenAPI spec added
- [ ] All request/response schemas documented
- [ ] Examples included
- [ ] Error responses documented
---
### T4: Add Unit Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Unit tests for the evidence endpoint.
**Test Cases**:
- [ ] Valid finding returns evidence
- [ ] Unknown finding returns 404
- [ ] Raw access without permission returns 403
- [ ] Batch request with mixed results
- [ ] Mapping preserves all fields
**Acceptance Criteria**:
- [ ] 5+ unit tests passing
- [ ] Controller tested with mocks
- [ ] Response mapping tested
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define response contract |
| 2 | T2 | TODO | T1 | Scanner Team | Implement controller |
| 3 | T3 | TODO | T1, T2 | Scanner Team | Add OpenAPI docs |
| 4 | T4 | TODO | T2 | Scanner Team | Add unit tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G6). | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Snake_case JSON | Decision | Scanner Team | Matches advisory contract |
| Raw access permission | Decision | Scanner Team | evidence:raw scope required |
| Batch limit | Decision | Scanner Team | 100 findings max per request |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Endpoint returns evidence matching advisory contract
- [ ] Performance < 300ms per finding
- [ ] 5+ tests passing
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds

---
# SPRINT_4300_0001_0002: One-Command Audit Replay CLI
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4300_0001_0002 |
| **Title** | One-Command Audit Replay CLI |
| **Priority** | P0 (Critical) |
| **Moat Strength** | 5 (Structural moat) |
| **Working Directory** | `src/Cli/`, `src/__Libraries/StellaOps.Replay.Core/`, `src/AirGap/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | ReplayManifest (exists), ReplayVerifier (exists), SPRINT_4300_0001_0001 |
---
## Objective
Implement a single CLI command that enables auditors to replay and verify risk verdicts from a self-contained bundle, without network connectivity or access to the StellaOps backend.
**Moat thesis**: "We don't output findings; we output an attestable decision that can be replayed."
---
## Background
The advisory requires "air-gapped reproducibility" where audits are a "one-command replay." Current implementation has:
- `ReplayManifest` with input hashes
- `ReplayVerifier` with depth levels (HashOnly, FullRecompute, PolicyFreeze)
- `ReplayBundleWriter` for bundle creation
**Gap**: No unified CLI command; manual steps required.
---
## Deliverables
### D1: Audit Bundle Format
- Define `audit-bundle.tar.gz` structure
- Include: manifest, SBOM snapshot, feed snapshot, policy snapshot, verdict
- Add merkle root for integrity
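One way to compute the merkle root mentioned above; the pairing rule for an odd number of nodes (duplicate the last) is an assumption for illustration, not a settled bundle-format decision.

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> str:
    """SHA-256 merkle root over bundle entries. An odd node is paired
    with itself; a single leaf hashes directly to the root."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]
    if not level:
        return hashlib.sha256(b"").hexdigest()
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```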
### D2: Bundle Export Command
- `stella audit export --scan-id=<id> --output=./audit.tar.gz`
- Package all inputs and verdict into portable bundle
- Sign bundle manifest
### D3: Bundle Replay Command
- `stella audit replay --bundle=./audit.tar.gz`
- Extract and validate bundle
- Re-execute policy evaluation
- Compare verdict hashes
### D4: Verification Report
- JSON and human-readable output
- Show: input match, verdict match, drift detection
- Exit code: 0=match, 1=drift, 2=error
### D5: Air-Gap Integration
- Integrate with `AirGap.Importer` for offline execution
- Support `--offline` mode (no network checks)
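The exit-code contract from D4 reduces to a pure comparison step, sketched below; the names are illustrative and not part of the actual replay implementation.

```python
EXIT_MATCH, EXIT_DRIFT, EXIT_ERROR = 0, 1, 2

def replay_exit_code(stored_verdict_hash: str,
                     recomputed_hash: str,
                     inputs_valid: bool) -> int:
    """Map replay outcomes onto the D4 contract:
    0 = verdict reproduced, 1 = drift detected, 2 = bundle unusable."""
    if not inputs_valid:
        return EXIT_ERROR
    return EXIT_MATCH if stored_verdict_hash == recomputed_hash else EXIT_DRIFT
```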
---
## Tasks
### Phase 1: Bundle Format
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| REPLAY-001 | Define audit bundle manifest schema (`audit-manifest.json`) | TODO | |
| REPLAY-002 | Create `AuditBundleWriter` in `StellaOps.Replay.Core` | TODO | |
| REPLAY-003 | Implement merkle root calculation for bundle contents | TODO | |
| REPLAY-004 | Add bundle signature (DSSE envelope) | TODO | |
| REPLAY-005 | Write bundle format specification doc | TODO | |
### Phase 2: Export Command
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| REPLAY-006 | Add `stella audit export` command structure | TODO | |
| REPLAY-007 | Implement scan snapshot fetcher | TODO | |
| REPLAY-008 | Implement feed snapshot exporter (point-in-time) | TODO | |
| REPLAY-009 | Implement policy snapshot exporter | TODO | |
| REPLAY-010 | Package into tar.gz with manifest | TODO | |
| REPLAY-011 | Sign manifest and add to bundle | TODO | |
| REPLAY-012 | Add progress output for large bundles | TODO | |
### Phase 3: Replay Command
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| REPLAY-013 | Add `stella audit replay` command structure | TODO | |
| REPLAY-014 | Implement bundle extractor with validation | TODO | |
| REPLAY-015 | Create isolated replay context (no external calls) | TODO | |
| REPLAY-016 | Load SBOM, feeds, policy from bundle | TODO | |
| REPLAY-017 | Re-execute `TrustLatticeEngine.Evaluate()` | TODO | |
| REPLAY-018 | Compare computed verdict hash with stored | TODO | |
| REPLAY-019 | Detect and report input drift | TODO | |
### Phase 4: Verification Report
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| REPLAY-020 | Define `AuditReplayReport` model | TODO | |
| REPLAY-021 | Implement JSON report formatter | TODO | |
| REPLAY-022 | Implement human-readable report formatter | TODO | |
| REPLAY-023 | Add `--format=json|text` flag | TODO | |
| REPLAY-024 | Set exit codes based on verdict match | TODO | |
### Phase 5: Air-Gap Integration
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| REPLAY-025 | Add `--offline` flag to replay command | TODO | |
| REPLAY-026 | Integrate with `AirGap.Importer` trust store | TODO | |
| REPLAY-027 | Validate time anchor from bundle | TODO | |
| REPLAY-028 | E2E test: export -> transfer -> replay offline | TODO | |
---
## Acceptance Criteria
1. **AC1**: `stella audit export` produces a self-contained bundle
2. **AC2**: `stella audit replay` succeeds with matching verdict on same inputs
3. **AC3**: Replay fails deterministically if any input is modified
4. **AC4**: Works fully offline with `--offline` flag
5. **AC5**: Bundle is verifiable months after creation
---
## Technical Notes
### Bundle Structure
```
audit-bundle.tar.gz
├── audit-manifest.json # Bundle metadata + merkle root
├── audit-manifest.sig # DSSE signature of manifest
├── sbom/
│ └── sbom.spdx.json # SBOM snapshot
├── feeds/
│ ├── advisories.ndjson # Advisory snapshot
│ └── feeds-digest.sha256 # Feed content hash
├── policy/
│ ├── policy-bundle.tar # OPA bundle
│ └── policy-digest.sha256 # Policy hash
├── vex/
│ └── vex-statements.json # VEX claims at time of scan
└── verdict/
├── verdict.json # VerdictReceiptStatement
└── proof-bundle.json # Full proof chain
```
### Replay Semantics
```
same_inputs = (
sha256(sbom) == manifest.sbomDigest &&
sha256(feeds) == manifest.feedsDigest &&
sha256(policy) == manifest.policyDigest
)
same_verdict = sha256(computed_verdict) == manifest.verdictDigest
replay_passed = same_inputs && same_verdict
```
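The same semantics rendered as C# (a sketch; the `ReplayManifest` digest property names are assumptions mirroring the pseudocode):

```csharp
public static bool ReplayPassed(
    ReplayManifest manifest,
    byte[] sbom, byte[] feeds, byte[] policy, byte[] computedVerdict)
{
    static string Sha256Hex(byte[] data) =>
        Convert.ToHexString(System.Security.Cryptography.SHA256.HashData(data))
            .ToLowerInvariant();

    var sameInputs =
        Sha256Hex(sbom) == manifest.SbomDigest &&
        Sha256Hex(feeds) == manifest.FeedsDigest &&
        Sha256Hex(policy) == manifest.PolicyDigest;

    var sameVerdict = Sha256Hex(computedVerdict) == manifest.VerdictDigest;
    return sameInputs && sameVerdict;
}
```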
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Bundle size too large | Storage/transfer issues | Support streaming, external references |
| Feed snapshot incomplete | Replay fails | Validate feed coverage before export |
| Clock drift affects time-based rules | Inconsistent replay | Use bundle timestamp as evaluation time |
---
## Documentation Updates
- [ ] Add `docs/operations/audit-replay-guide.md`
- [ ] Update CLI reference with `audit` commands
- [ ] Add air-gap operation runbook
# Sprint 4300.0002.0001 - Evidence Privacy Controls
## Topic & Scope
- Add `EvidenceRedactionService` for privacy-aware proof views
- Store file hashes, symbol names, line ranges (no raw source by default)
- Gate raw source access behind elevated permissions (Authority scope check)
- Default to redacted proofs
**Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Evidence/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- Evidence Bundle models
- Authority scope system
- **Downstream:** Evidence API, UI evidence drawer
- **Safe to parallelize with:** SPRINT_4300_0001_*, SPRINT_4300_0002_0002
## Documentation Prerequisites
- `docs/modules/scanner/architecture.md`
- `docs/modules/authority/architecture.md`
---
## Tasks
### T1: Define Redaction Levels
**Assignee**: Scanner Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: —
**Description**:
Define the redaction levels for evidence.
**Implementation Path**: `Privacy/EvidenceRedactionLevel.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Scanner.Evidence.Privacy;
/// <summary>
/// Redaction levels for evidence data.
/// </summary>
public enum EvidenceRedactionLevel
{
/// <summary>
/// Full evidence including raw source code.
/// Requires elevated permissions.
/// </summary>
Full = 0,
/// <summary>
/// Standard redaction: file hashes, symbol names, line ranges.
/// No raw source code.
/// </summary>
Standard = 1,
/// <summary>
/// Minimal: only digests and counts.
/// For external sharing.
/// </summary>
Minimal = 2
}
/// <summary>
/// Fields that can be redacted.
/// </summary>
[Flags]
public enum RedactableFields
{
None = 0,
SourceCode = 1 << 0,
FilePaths = 1 << 1,
LineNumbers = 1 << 2,
SymbolNames = 1 << 3,
CallArguments = 1 << 4,
EnvironmentVars = 1 << 5,
InternalUrls = 1 << 6,
All = SourceCode | FilePaths | LineNumbers | SymbolNames | CallArguments | EnvironmentVars | InternalUrls
}
```
**Acceptance Criteria**:
- [ ] Three redaction levels defined
- [ ] RedactableFields flags enum
- [ ] Documentation on each level
---
### T2: Implement EvidenceRedactionService
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Service to apply redaction rules to evidence.
**Implementation Path**: `Privacy/EvidenceRedactionService.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Scanner.Evidence.Privacy;
public interface IEvidenceRedactionService
{
/// <summary>
/// Redacts evidence based on the specified level.
/// </summary>
EvidenceBundle Redact(EvidenceBundle bundle, EvidenceRedactionLevel level);
/// <summary>
/// Redacts specific fields from evidence.
/// </summary>
EvidenceBundle RedactFields(EvidenceBundle bundle, RedactableFields fields);
/// <summary>
/// Determines the appropriate redaction level for a user.
/// </summary>
EvidenceRedactionLevel DetermineLevel(ClaimsPrincipal user);
}
public sealed class EvidenceRedactionService : IEvidenceRedactionService
{
    private readonly ILogger<EvidenceRedactionService> _logger;

    public EvidenceRedactionService(ILogger<EvidenceRedactionService> logger)
    {
        _logger = logger;
    }
public EvidenceBundle Redact(EvidenceBundle bundle, EvidenceRedactionLevel level)
{
return level switch
{
EvidenceRedactionLevel.Full => bundle,
EvidenceRedactionLevel.Standard => RedactStandard(bundle),
EvidenceRedactionLevel.Minimal => RedactMinimal(bundle),
_ => RedactStandard(bundle)
};
}
private EvidenceBundle RedactStandard(EvidenceBundle bundle)
{
return bundle with
{
Reachability = bundle.Reachability is not null
? RedactReachability(bundle.Reachability)
: null,
CallStack = bundle.CallStack is not null
? RedactCallStack(bundle.CallStack)
: null,
Provenance = bundle.Provenance // Keep as-is (already redacted)
};
}
private ReachabilityEvidence RedactReachability(ReachabilityEvidence evidence)
{
return evidence with
{
Paths = evidence.Paths.Select(p => new ReachabilityPath
{
PathId = p.PathId,
Steps = p.Steps.Select(s => new ReachabilityStep
{
Node = RedactSymbol(s.Node),
FileHash = s.FileHash, // Keep hash
Lines = s.Lines, // Keep line range
SourceCode = null // Redact source
}).ToList()
}).ToList(),
GraphDigest = evidence.GraphDigest
};
}
private CallStackEvidence RedactCallStack(CallStackEvidence evidence)
{
return evidence with
{
Frames = evidence.Frames.Select(f => new CallFrame
{
Function = RedactSymbol(f.Function),
FileHash = f.FileHash,
Line = f.Line,
Arguments = null, // Redact arguments
Locals = null // Redact locals
}).ToList()
};
}
private string RedactSymbol(string symbol)
{
// Keep class and method names, redact arguments
// "MyClass.MyMethod(string arg1, int arg2)" -> "MyClass.MyMethod(...)"
var parenIndex = symbol.IndexOf('(');
if (parenIndex > 0)
{
return symbol[..parenIndex] + "(...)";
}
return symbol;
}
private EvidenceBundle RedactMinimal(EvidenceBundle bundle)
{
return bundle with
{
Reachability = bundle.Reachability is not null
? new ReachabilityEvidence
{
Result = bundle.Reachability.Result,
Confidence = bundle.Reachability.Confidence,
PathCount = bundle.Reachability.Paths.Count,
Paths = [], // No paths
GraphDigest = bundle.Reachability.GraphDigest
}
: null,
CallStack = null,
Provenance = bundle.Provenance is not null
? new ProvenanceEvidence
{
BuildId = bundle.Provenance.BuildId,
BuildDigest = bundle.Provenance.BuildDigest,
Verified = bundle.Provenance.Verified
}
: null
};
    }

    public EvidenceBundle RedactFields(EvidenceBundle bundle, RedactableFields fields)
    {
        // Minimal sketch of field-level redaction: the SourceCode flag reuses the
        // standard pass; other flags are applied individually as rules are implemented.
        var result = bundle;
        if (fields.HasFlag(RedactableFields.SourceCode))
        {
            result = RedactStandard(result);
        }
        if (fields.HasFlag(RedactableFields.CallArguments) && result.CallStack is not null)
        {
            result = result with { CallStack = RedactCallStack(result.CallStack) };
        }
        return result;
    }
public EvidenceRedactionLevel DetermineLevel(ClaimsPrincipal user)
{
if (user.HasClaim("scope", "evidence:full") ||
user.HasClaim("role", "security_admin"))
{
return EvidenceRedactionLevel.Full;
}
if (user.HasClaim("scope", "evidence:standard") ||
user.HasClaim("role", "security_analyst"))
{
return EvidenceRedactionLevel.Standard;
}
return EvidenceRedactionLevel.Minimal;
}
}
```
**Acceptance Criteria**:
- [ ] `EvidenceRedactionService.cs` created
- [ ] Standard redaction removes source code
- [ ] Minimal redaction removes paths and details
- [ ] User-based level determination
- [ ] Symbol redaction preserves method names
---
### T3: Integrate with Evidence Composition
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Integrate redaction into evidence composition flow.
**Implementation Path**: Modify `EvidenceCompositionService.cs`
**Acceptance Criteria**:
- [ ] Redaction applied before response
- [ ] User context passed through
- [ ] Logging for access levels
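
One shape T3 could take inside the composition flow (a sketch; the `EvidenceCompositionService` member names here are illustrative, not the actual API):

```csharp
public async Task<EvidenceBundle> ComposeForUserAsync(
    string findingId, ClaimsPrincipal user, CancellationToken ct)
{
    var bundle = await ComposeAsync(findingId, ct);   // existing composition path
    var level = _redaction.DetermineLevel(user);      // Authority scope/role check
    _logger.LogInformation(
        "Evidence for {FindingId} served at redaction level {Level}", findingId, level);
    return _redaction.Redact(bundle, level);          // redact before the response leaves
}
```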
---
### T4: Add Unit Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Tests for redaction logic.
**Test Cases**:
```csharp
public class EvidenceRedactionServiceTests
{
    private readonly IEvidenceRedactionService _service = CreateService(); // helper (not shown) builds the service under test
[Fact]
public void Redact_Standard_RemovesSourceCode()
{
var bundle = CreateBundleWithSource();
var result = _service.Redact(bundle, EvidenceRedactionLevel.Standard);
result.Reachability!.Paths
.SelectMany(p => p.Steps)
.Should().OnlyContain(s => s.SourceCode is null);
}
[Fact]
public void Redact_Standard_KeepsFileHashes()
{
var bundle = CreateBundleWithSource();
var result = _service.Redact(bundle, EvidenceRedactionLevel.Standard);
result.Reachability!.Paths
.SelectMany(p => p.Steps)
.Should().OnlyContain(s => s.FileHash is not null);
}
[Fact]
public void Redact_Minimal_RemovesPaths()
{
var bundle = CreateBundleWithPaths(5);
var result = _service.Redact(bundle, EvidenceRedactionLevel.Minimal);
result.Reachability!.Paths.Should().BeEmpty();
result.Reachability.PathCount.Should().Be(5);
}
[Fact]
public void DetermineLevel_SecurityAdmin_ReturnsFull()
{
var user = CreateUserWithRole("security_admin");
var level = _service.DetermineLevel(user);
level.Should().Be(EvidenceRedactionLevel.Full);
}
[Fact]
public void DetermineLevel_NoScopes_ReturnsMinimal()
{
var user = CreateUserWithNoScopes();
var level = _service.DetermineLevel(user);
level.Should().Be(EvidenceRedactionLevel.Minimal);
}
}
```
**Acceptance Criteria**:
- [ ] Source code removal tested
- [ ] File hash preservation tested
- [ ] Minimal redaction tested
- [ ] User level determination tested
- [ ] 5+ tests passing
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define redaction levels |
| 2 | T2 | TODO | T1 | Scanner Team | Implement redaction service |
| 3 | T3 | TODO | T2 | Scanner Team | Integrate with composition |
| 4 | T4 | TODO | T2 | Scanner Team | Add unit tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G2). | Agent |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Source code never exposed without permission
- [ ] File hashes and line ranges preserved
- [ ] 5+ tests passing
- [ ] `dotnet build` succeeds
# SPRINT_4300_0002_0001: Unknowns Budget Policy Integration
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4300_0002_0001 |
| **Title** | Unknowns Budget Policy Integration |
| **Priority** | P1 (High) |
| **Moat Strength** | 4 (Strong moat) |
| **Working Directory** | `src/Policy/`, `src/Signals/`, `src/Scanner/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | UncertaintyTier (exists), UnknownStateLedger (exists) |
---
## Objective
Implement policy-level enforcement of unknown budgets, enabling rules like "fail if unknowns > N in production" or "warn if uncertainty tier is T1 for critical components."
**Moat thesis**: "We quantify uncertainty and gate on it."
---
## Background
The advisory identifies "Unknowns as first-class state" as a **Moat 4** feature. Current implementation has:
- `UncertaintyTier` (T1-T4) with entropy classification
- `UnknownStateLedger` tracking marker kinds
- Risk modifiers from uncertainty
**Gap**: No policy integration to enforce unknown budgets.
---
## Deliverables
### D1: Unknown Budget Rule DSL
- Define policy rules for unknown thresholds
- Support tier-based, count-based, and entropy-based rules
- Environment scoping (dev/staging/prod)
### D2: Policy Engine Integration
- Extend `PolicyGateEvaluator` with unknown budget gates
- Add unknown state to evaluation context
- Emit violation on budget exceeded
### D3: Unknown Budget Configuration
- Admin UI for setting budgets per environment
- API endpoints for budget CRUD
- Default budgets per tier
### D4: Reporting & Alerts
- Include unknown budget status in scan reports
- Notify on budget threshold crossings
- Dashboard widget for unknown trends
---
## Tasks
### Phase 1: Policy Rule DSL
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| BUDGET-001 | Define `UnknownBudgetRule` schema | TODO | |
| BUDGET-002 | Add budget rules to policy bundle format | TODO | |
| BUDGET-003 | Create `UnknownBudgetRuleParser` | TODO | |
| BUDGET-004 | Support expressions: `unknowns.count > 10`, `unknowns.tier == T1` | TODO | |
| BUDGET-005 | Add environment scope filter | TODO | |
### Phase 2: Policy Engine Integration
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| BUDGET-006 | Extend `PolicyEvaluationContext` with unknown state | TODO | |
| BUDGET-007 | Add `UnknownBudgetGate` to `PolicyGateEvaluator` | TODO | |
| BUDGET-008 | Implement tier-based gate: block on T1, warn on T2 | TODO | |
| BUDGET-009 | Implement count-based gate: fail if count > threshold | TODO | |
| BUDGET-010 | Implement entropy-based gate: fail if mean entropy > threshold | TODO | |
| BUDGET-011 | Emit `BudgetExceededViolation` with details | TODO | |
| BUDGET-012 | Unit tests for all gate types | TODO | |
### Phase 3: Configuration
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| BUDGET-013 | Add `UnknownBudgetOptions` configuration | TODO | |
| BUDGET-014 | Create budget management API endpoints | TODO | |
| BUDGET-015 | Implement default budgets (prod: T2 max, staging: T1 warn) | TODO | |
| BUDGET-016 | Add budget configuration to policy YAML | TODO | |
### Phase 4: Reporting
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| BUDGET-017 | Add unknown budget section to scan report | TODO | |
| BUDGET-018 | Create `UnknownBudgetExceeded` notification event | TODO | |
| BUDGET-019 | Integrate with Notify module for alerts | TODO | |
| BUDGET-020 | Add budget status to policy evaluation response | TODO | |
---
## Acceptance Criteria
1. **AC1**: Policy can define `unknowns.count <= 5` threshold
2. **AC2**: Policy can define `unknowns.tier != T1` requirement
3. **AC3**: Budget violations appear in scan results
4. **AC4**: Notifications fire on budget exceeded
5. **AC5**: Environment-specific budgets work correctly
---
## Technical Notes
### Policy Rule Examples
```yaml
unknown_budgets:
- name: "production-strict"
environment: "production"
rules:
- tier_max: T2 # Block if any T1 unknowns
- count_max: 5 # Block if > 5 unknowns total
- entropy_max: 0.4 # Block if mean entropy > 0.4
action: block
- name: "staging-warn"
environment: "staging"
rules:
- tier_max: T1 # Warn on T1, allow T2-T4
- count_max: 20
action: warn
```
### Gate Evaluation
```csharp
public sealed class UnknownBudgetGate : IPolicyGate
{
    public GateResult Evaluate(UnknownBudgetRule rule, UnknownState state)
    {
        // Lower tier number = higher uncertainty, so a tier below the budget violates it.
        if (rule.TierMax.HasValue && state.MaxTier < rule.TierMax.Value)
            return GateResult.Fail($"Tier {state.MaxTier} is more uncertain than budget {rule.TierMax}");
        if (rule.CountMax.HasValue && state.Count > rule.CountMax.Value)
            return GateResult.Fail($"Count {state.Count} exceeds budget {rule.CountMax}");
        // EntropyMax mirrors entropy_max in the YAML above (BUDGET-010).
        if (rule.EntropyMax.HasValue && state.MeanEntropy > rule.EntropyMax.Value)
            return GateResult.Fail($"Mean entropy {state.MeanEntropy:F2} exceeds budget {rule.EntropyMax:F2}");
        return GateResult.Pass();
    }
}
```
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Too strict budgets block all deployments | Adoption friction | Provide sensible defaults, gradual rollout |
| Unknown counting varies by scan | Inconsistent gates | Normalize counting methodology |
---
## Documentation Updates
- [ ] Update `docs/modules/policy/architecture.md`
- [ ] Add `docs/operations/unknown-budgets-guide.md`
- [ ] Update policy DSL reference
# Sprint 4300.0002.0002 - Evidence TTL Strategy Enforcement
## Topic & Scope
- Implement `EvidenceTtlEnforcer` service
- Define TTL policy per evidence type
- Add staleness checking to policy gate evaluation
- Emit `stale_evidence` warning/block based on configuration
**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- Evidence Bundle models
- Policy Engine gates
- **Downstream:** Policy decisions, UI staleness warnings
- **Safe to parallelize with:** SPRINT_4300_0001_*, SPRINT_4300_0002_0001
## Documentation Prerequisites
- `docs/modules/policy/architecture.md`
- Advisory staleness invariant specification
---
## Tasks
### T1: Define TTL Configuration
**Assignee**: Policy Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: —
**Description**:
Define configurable TTL per evidence type.
**Implementation Path**: `Freshness/EvidenceTtlOptions.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Policy.Freshness;
/// <summary>
/// TTL configuration per evidence type.
/// </summary>
public sealed class EvidenceTtlOptions
{
/// <summary>
/// SBOM evidence TTL. Long because digest is immutable.
/// Default: 30 days.
/// </summary>
public TimeSpan SbomTtl { get; set; } = TimeSpan.FromDays(30);
/// <summary>
/// Boundary evidence TTL. Short because environment changes.
/// Default: 72 hours.
/// </summary>
public TimeSpan BoundaryTtl { get; set; } = TimeSpan.FromHours(72);
/// <summary>
/// Reachability evidence TTL. Medium based on code churn.
/// Default: 7 days.
/// </summary>
public TimeSpan ReachabilityTtl { get; set; } = TimeSpan.FromDays(7);
/// <summary>
/// VEX evidence TTL. Renew on boundary/reachability change.
/// Default: 14 days.
/// </summary>
public TimeSpan VexTtl { get; set; } = TimeSpan.FromDays(14);
/// <summary>
/// Policy decision TTL.
/// Default: 24 hours.
/// </summary>
public TimeSpan PolicyDecisionTtl { get; set; } = TimeSpan.FromHours(24);
/// <summary>
/// Human approval TTL.
/// Default: 30 days.
/// </summary>
public TimeSpan HumanApprovalTtl { get; set; } = TimeSpan.FromDays(30);
/// <summary>
/// Warning threshold as percentage of TTL remaining.
/// Default: 20% (warn when 80% of TTL elapsed).
/// </summary>
public double WarningThresholdPercent { get; set; } = 0.20;
/// <summary>
/// Action when evidence is stale.
/// </summary>
public StaleEvidenceAction StaleAction { get; set; } = StaleEvidenceAction.Warn;
}
/// <summary>
/// Action to take when evidence is stale.
/// </summary>
public enum StaleEvidenceAction
{
/// <summary>
/// Allow but log warning.
/// </summary>
Warn,
/// <summary>
/// Block the decision.
/// </summary>
Block,
/// <summary>
/// Degrade confidence score.
/// </summary>
DegradeConfidence
}
```
**Acceptance Criteria**:
- [ ] TTL options for each evidence type
- [ ] Warning threshold configurable
- [ ] Stale action configurable
- [ ] Sensible defaults
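
These options would typically be bound from configuration at startup. A minimal sketch (the `Policy:EvidenceTtl` section name is an assumption):

```csharp
// Program.cs / service registration
builder.Services.Configure<EvidenceTtlOptions>(
    builder.Configuration.GetSection("Policy:EvidenceTtl"));
builder.Services.AddSingleton<IEvidenceTtlEnforcer, EvidenceTtlEnforcer>();
```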
---
### T2: Implement EvidenceTtlEnforcer
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Service to check and enforce TTL policies.
**Implementation Path**: `Freshness/EvidenceTtlEnforcer.cs` (new)
**Implementation**:
```csharp
namespace StellaOps.Policy.Freshness;
public interface IEvidenceTtlEnforcer
{
/// <summary>
/// Checks freshness of all evidence in a bundle.
/// </summary>
EvidenceFreshnessResult CheckFreshness(EvidenceBundle bundle, DateTimeOffset asOf);
/// <summary>
/// Gets TTL for a specific evidence type.
/// </summary>
TimeSpan GetTtl(EvidenceType type);
/// <summary>
/// Computes expiration time for evidence.
/// </summary>
DateTimeOffset ComputeExpiration(EvidenceType type, DateTimeOffset createdAt);
}
public sealed class EvidenceTtlEnforcer : IEvidenceTtlEnforcer
{
private readonly EvidenceTtlOptions _options;
private readonly ILogger<EvidenceTtlEnforcer> _logger;
public EvidenceTtlEnforcer(
IOptions<EvidenceTtlOptions> options,
ILogger<EvidenceTtlEnforcer> logger)
{
_options = options.Value;
_logger = logger;
}
public EvidenceFreshnessResult CheckFreshness(EvidenceBundle bundle, DateTimeOffset asOf)
{
var checks = new List<EvidenceFreshnessCheck>();
// Check each evidence type
if (bundle.Reachability is not null)
{
checks.Add(CheckType(EvidenceType.Reachability, bundle.Reachability.ComputedAt, asOf));
}
if (bundle.CallStack is not null)
{
checks.Add(CheckType(EvidenceType.CallStack, bundle.CallStack.CapturedAt, asOf));
}
if (bundle.VexStatus is not null)
{
checks.Add(CheckType(EvidenceType.Vex, bundle.VexStatus.Timestamp, asOf));
}
        if (bundle.Provenance is not null)
        {
            // Provenance build time stands in for SBOM creation time.
            checks.Add(CheckType(EvidenceType.Sbom, bundle.Provenance.BuildTime, asOf));
        }
// Determine overall status
var anyStale = checks.Any(c => c.Status == FreshnessStatus.Stale);
var anyWarning = checks.Any(c => c.Status == FreshnessStatus.Warning);
return new EvidenceFreshnessResult
{
OverallStatus = anyStale ? FreshnessStatus.Stale
: anyWarning ? FreshnessStatus.Warning
: FreshnessStatus.Fresh,
Checks = checks,
RecommendedAction = anyStale ? _options.StaleAction : StaleEvidenceAction.Warn,
CheckedAt = asOf
};
}
private EvidenceFreshnessCheck CheckType(
EvidenceType type,
DateTimeOffset createdAt,
DateTimeOffset asOf)
{
var ttl = GetTtl(type);
var expiresAt = createdAt + ttl;
var remaining = expiresAt - asOf;
var warningThreshold = ttl * _options.WarningThresholdPercent;
FreshnessStatus status;
if (remaining <= TimeSpan.Zero)
{
status = FreshnessStatus.Stale;
}
else if (remaining <= warningThreshold)
{
status = FreshnessStatus.Warning;
}
else
{
status = FreshnessStatus.Fresh;
}
return new EvidenceFreshnessCheck
{
Type = type,
CreatedAt = createdAt,
ExpiresAt = expiresAt,
Ttl = ttl,
Remaining = remaining > TimeSpan.Zero ? remaining : TimeSpan.Zero,
Status = status,
Message = status switch
{
FreshnessStatus.Stale => $"{type} evidence expired {-remaining.TotalHours:F0}h ago",
FreshnessStatus.Warning => $"{type} evidence expires in {remaining.TotalHours:F0}h",
_ => $"{type} evidence fresh ({remaining.TotalDays:F0}d remaining)"
}
};
}
public TimeSpan GetTtl(EvidenceType type)
{
return type switch
{
EvidenceType.Sbom => _options.SbomTtl,
EvidenceType.Reachability => _options.ReachabilityTtl,
EvidenceType.Boundary => _options.BoundaryTtl,
EvidenceType.Vex => _options.VexTtl,
EvidenceType.PolicyDecision => _options.PolicyDecisionTtl,
EvidenceType.HumanApproval => _options.HumanApprovalTtl,
EvidenceType.CallStack => _options.ReachabilityTtl,
_ => TimeSpan.FromDays(7)
};
}
public DateTimeOffset ComputeExpiration(EvidenceType type, DateTimeOffset createdAt)
{
return createdAt + GetTtl(type);
}
}
public sealed record EvidenceFreshnessResult
{
public required FreshnessStatus OverallStatus { get; init; }
public required IReadOnlyList<EvidenceFreshnessCheck> Checks { get; init; }
public required StaleEvidenceAction RecommendedAction { get; init; }
public required DateTimeOffset CheckedAt { get; init; }
public bool IsAcceptable => OverallStatus != FreshnessStatus.Stale;
public bool HasWarnings => OverallStatus == FreshnessStatus.Warning;
}
public sealed record EvidenceFreshnessCheck
{
public required EvidenceType Type { get; init; }
public required DateTimeOffset CreatedAt { get; init; }
public required DateTimeOffset ExpiresAt { get; init; }
public required TimeSpan Ttl { get; init; }
public required TimeSpan Remaining { get; init; }
public required FreshnessStatus Status { get; init; }
public required string Message { get; init; }
}
public enum FreshnessStatus
{
Fresh,
Warning,
Stale
}
public enum EvidenceType
{
Sbom,
Reachability,
Boundary,
Vex,
PolicyDecision,
HumanApproval,
CallStack
}
```
**Acceptance Criteria**:
- [ ] `EvidenceTtlEnforcer.cs` created
- [ ] Checks all evidence types
- [ ] Warning when approaching expiration
- [ ] Stale detection when expired
- [ ] Configurable via options
---
### T3: Integrate with Policy Gate
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Add freshness check to policy gate evaluation.
**Implementation Path**: Modify `PolicyGateEvaluator.cs`
**Integration**:
```csharp
// In PolicyGateEvaluator.EvaluateAsync()
var freshnessResult = _ttlEnforcer.CheckFreshness(evidenceBundle, DateTimeOffset.UtcNow);
if (freshnessResult.OverallStatus == FreshnessStatus.Stale)
{
switch (freshnessResult.RecommendedAction)
{
case StaleEvidenceAction.Block:
return PolicyGateDecision.Blocked("Evidence is stale", freshnessResult.Checks);
case StaleEvidenceAction.DegradeConfidence:
confidence *= 0.5; // Halve confidence for stale evidence
break;
case StaleEvidenceAction.Warn:
default:
warnings.Add("Evidence is stale - consider refreshing");
break;
}
}
```
**Acceptance Criteria**:
- [ ] Freshness checked during gate evaluation
- [ ] Block action prevents approval
- [ ] Degrade action reduces confidence
- [ ] Warn action adds warning message
---
### T4: Add Unit Tests
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Tests for TTL enforcement.
**Test Cases**:
```csharp
public class EvidenceTtlEnforcerTests
{
    private readonly IEvidenceTtlEnforcer _enforcer = CreateEnforcer(); // helper (not shown) builds the enforcer with default options
[Fact]
public void CheckFreshness_AllFresh_ReturnsFresh()
{
var bundle = CreateBundle(createdAt: DateTimeOffset.UtcNow.AddHours(-1));
var result = _enforcer.CheckFreshness(bundle, DateTimeOffset.UtcNow);
result.OverallStatus.Should().Be(FreshnessStatus.Fresh);
result.IsAcceptable.Should().BeTrue();
}
[Fact]
public void CheckFreshness_ReachabilityNearExpiry_ReturnsWarning()
{
var bundle = CreateBundle(
reachabilityCreatedAt: DateTimeOffset.UtcNow.AddDays(-6)); // 7 day TTL
var result = _enforcer.CheckFreshness(bundle, DateTimeOffset.UtcNow);
result.OverallStatus.Should().Be(FreshnessStatus.Warning);
result.Checks.First(c => c.Type == EvidenceType.Reachability)
.Status.Should().Be(FreshnessStatus.Warning);
}
[Fact]
public void CheckFreshness_BoundaryExpired_ReturnsStale()
{
var bundle = CreateBundle(
boundaryCreatedAt: DateTimeOffset.UtcNow.AddDays(-5)); // 72h TTL
var result = _enforcer.CheckFreshness(bundle, DateTimeOffset.UtcNow);
result.OverallStatus.Should().Be(FreshnessStatus.Stale);
result.IsAcceptable.Should().BeFalse();
}
[Theory]
[InlineData(EvidenceType.Sbom, 30)]
[InlineData(EvidenceType.Boundary, 3)]
[InlineData(EvidenceType.Reachability, 7)]
[InlineData(EvidenceType.Vex, 14)]
public void GetTtl_ReturnsConfiguredValue(EvidenceType type, int expectedDays)
{
var ttl = _enforcer.GetTtl(type);
ttl.TotalDays.Should().BeApproximately(expectedDays, 0.1);
}
}
```
**Acceptance Criteria**:
- [ ] Fresh evidence test
- [ ] Warning threshold test
- [ ] Stale evidence test
- [ ] TTL values test
- [ ] 5+ tests passing
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Policy Team | Define TTL configuration |
| 2 | T2 | TODO | T1 | Policy Team | Implement enforcer service |
| 3 | T3 | TODO | T2 | Policy Team | Integrate with policy gate |
| 4 | T4 | TODO | T2 | Policy Team | Add unit tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G3). | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Default TTLs | Decision | Policy Team | Based on advisory recommendations |
| Warning at 20% | Decision | Policy Team | Gives ~1 day warning for boundary |
| Default action Warn | Decision | Policy Team | Non-breaking, can escalate to Block |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Stale evidence detected correctly
- [ ] Policy gate honors TTL settings
- [ ] 5+ tests passing
- [ ] `dotnet build` succeeds
# SPRINT_4300_0002_0002: Unknowns Attestation Predicates
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4300_0002_0002 |
| **Title** | Unknowns Attestation Predicates |
| **Priority** | P1 (High) |
| **Moat Strength** | 4 (Strong moat) |
| **Working Directory** | `src/Attestor/`, `src/Signals/`, `src/Unknowns/` |
| **Estimated Effort** | 1 week |
| **Dependencies** | SPRINT_4300_0002_0001, UncertaintyTier (exists) |
---
## Objective
Create in-toto attestation predicates for unknown states, making uncertainty auditable, portable, and verifiable as part of the proof chain.
**Moat thesis**: "We quantify uncertainty and gate on it." — Extended to: uncertainty is attestable.
---
## Background
Unknowns need to be:
1. Recorded in attestations for audit trails
2. Portable with verdicts for external verification
3. Queryable by admission controllers
---
## Deliverables
### D1: Unknown State Attestation Predicate
- Define `uncertainty.stella/v1` predicate type
- Include: tier, entropy, marker kinds, evidence
### D2: Unknown Budget Attestation Predicate
- Define `uncertainty-budget.stella/v1` predicate type
- Include: budget definition, evaluation result, violations
### D3: Integration with Proof Chain
- Emit unknown attestations as part of `ProofSpineAssembler`
- Link to verdict attestation
### D4: Verification Support
- Extend `stella verdict verify` to check unknown attestations
---
## Tasks
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| UATT-001 | Define `UncertaintyStatement` in-toto predicate | TODO | |
| UATT-002 | Define `UncertaintyBudgetStatement` predicate | TODO | |
| UATT-003 | Create statement builders in `StellaOps.Attestor.ProofChain` | TODO | |
| UATT-004 | Integrate into `ProofSpineAssembler` | TODO | |
| UATT-005 | Add unknown attestation to verdict bundle | TODO | |
| UATT-006 | Extend verification CLI for unknown predicates | TODO | |
| UATT-007 | Add JSON schema for predicates | TODO | |
| UATT-008 | Write attestation round-trip tests | TODO | |
---
## Acceptance Criteria
1. **AC1**: Unknown state is captured in attestation
2. **AC2**: Budget evaluation result is attestable
3. **AC3**: Attestations are signed and verifiable
4. **AC4**: Proof chain links unknown to verdict
---
## Technical Notes
### Uncertainty Statement
```json
{
"_type": "https://in-toto.io/Statement/v1",
"subject": [{"digest": {"sha256": "<sbom-digest>"}}],
"predicateType": "uncertainty.stella/v1",
"predicate": {
"graphRevisionId": "...",
"aggregateTier": "T2",
"meanEntropy": 0.35,
"unknownCount": 7,
"markers": [
{"kind": "U1", "count": 3, "entropy": 0.45},
{"kind": "U2", "count": 4, "entropy": 0.28}
],
"evaluatedAt": "2025-12-22T00:00:00Z"
}
}
```
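A builder for this statement (UATT-003) might look like the following sketch; the `InTotoStatement` and `UncertaintyPredicate` types are assumptions mirroring the JSON above:

```csharp
public static InTotoStatement BuildUncertaintyStatement(
    string sbomSha256, UncertaintyPredicate predicate)
{
    return new InTotoStatement
    {
        Type = "https://in-toto.io/Statement/v1",
        Subject = [new Subject { Digest = new() { ["sha256"] = sbomSha256 } }],
        PredicateType = "uncertainty.stella/v1",
        Predicate = predicate // aggregateTier, meanEntropy, unknownCount, markers, evaluatedAt
    };
}
```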
---
## Documentation Updates
- [ ] Update attestation type catalog
- [ ] Add uncertainty predicate specification
# Sprint 4300.0003.0001 - Predicate Type JSON Schemas
## Topic & Scope
- Create JSON Schema definitions for all stella.ops predicate types
- Add schema validation to attestation creation
- Publish schemas to `docs/schemas/predicates/`
**Working directory:** `docs/schemas/predicates/`, `src/Attestor/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- Existing predicate models in code
- **Downstream:** Schema validation, external tooling
- **Safe to parallelize with:** All SPRINT_4300_*
## Documentation Prerequisites
- Existing predicate implementations
- in-toto specification
---
## Tasks
### T1: Create stella.ops/sbom@v1 Schema
**Assignee**: Attestor Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `docs/schemas/predicates/sbom.v1.schema.json`
**Schema**:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://stella.ops/predicates/sbom@v1",
  "title": "StellaOps SBOM Attestation Predicate",
  "description": "Predicate for SBOM attestations linking software bill of materials to artifacts.",
  "type": "object",
  "required": ["format", "digest", "componentCount"],
  "properties": {
    "format": {
      "type": "string",
      "enum": ["cyclonedx-1.6", "spdx-3.0.1", "spdx-2.3"],
      "description": "SBOM format specification."
    },
    "digest": {
      "type": "string",
      "pattern": "^sha256:[a-f0-9]{64}$",
      "description": "Content-addressed digest of the SBOM document."
    },
    "componentCount": {
      "type": "integer",
      "minimum": 0,
      "description": "Number of components in the SBOM."
    },
    "uri": {
      "type": "string",
      "format": "uri",
      "description": "URI where the full SBOM can be retrieved."
    },
    "tooling": {
      "type": "string",
      "description": "Tool used to generate the SBOM."
    },
    "createdAt": {
      "type": "string",
      "format": "date-time",
      "description": "When the SBOM was generated."
    }
  },
  "additionalProperties": false
}
```
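As a sanity check for the "validates against sample data" criterion, the schema's core constraints can be exercised without a full JSON Schema engine. This Python sketch (hypothetical helper names) hand-checks the required fields, the digest pattern, and the format enum:

```python
import re

DIGEST_RE = re.compile(r"^sha256:[a-f0-9]{64}$")
SBOM_FORMATS = {"cyclonedx-1.6", "spdx-3.0.1", "spdx-2.3"}

def validate_sbom_predicate(pred):
    """Hand-check the core sbom@v1 constraints; returns a list of error strings."""
    errors = []
    for field in ("format", "digest", "componentCount"):
        if field not in pred:
            errors.append(f"missing required field: {field}")
    if "format" in pred and pred["format"] not in SBOM_FORMATS:
        errors.append(f"unknown format: {pred['format']}")
    if "digest" in pred and not DIGEST_RE.match(pred["digest"]):
        errors.append("digest does not match the sha256 pattern")
    count = pred.get("componentCount")
    if count is not None and (not isinstance(count, int) or count < 0):
        errors.append("componentCount must be a non-negative integer")
    return errors

sample = {
    "format": "cyclonedx-1.6",
    "digest": "sha256:" + "a" * 64,
    "componentCount": 12,
}
```

In the real implementation this is replaced by compiled-schema evaluation (T5); the sketch only illustrates the failure modes the tests in T6 should cover.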
**Acceptance Criteria**:
- [ ] Schema file created
- [ ] Validates against sample data
- [ ] Documents all fields
---
### T2: Create stella.ops/vex@v1 Schema
**Assignee**: Attestor Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `docs/schemas/predicates/vex.v1.schema.json`
**Schema**:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://stella.ops/predicates/vex@v1",
  "title": "StellaOps VEX Attestation Predicate",
  "description": "Predicate for VEX statements embedded in attestations.",
  "type": "object",
  "required": ["format", "statements"],
  "properties": {
    "format": {
      "type": "string",
      "enum": ["openvex", "csaf-vex", "cyclonedx-vex"],
      "description": "VEX format specification."
    },
    "statements": {
      "type": "array",
      "items": {
        "$ref": "#/$defs/vexStatement"
      },
      "minItems": 1,
      "description": "VEX statements in this attestation."
    },
    "digest": {
      "type": "string",
      "pattern": "^sha256:[a-f0-9]{64}$",
      "description": "Content-addressed digest of the VEX document."
    },
    "author": {
      "type": "string",
      "description": "Author of the VEX statements."
    },
    "timestamp": {
      "type": "string",
      "format": "date-time",
      "description": "When the VEX was issued."
    }
  },
  "$defs": {
    "vexStatement": {
      "type": "object",
      "required": ["vulnerability", "status"],
      "properties": {
        "vulnerability": {
          "type": "string",
          "description": "CVE or vulnerability identifier."
        },
        "status": {
          "type": "string",
          "enum": ["affected", "not_affected", "under_investigation", "fixed"],
          "description": "VEX status."
        },
        "justification": {
          "type": "string",
          "description": "Justification for not_affected status."
        },
        "products": {
          "type": "array",
          "items": { "type": "string" },
          "description": "Affected products (PURLs)."
        }
      }
    }
  },
  "additionalProperties": false
}
```
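One cross-field constraint the schema cannot express on its own: OpenVEX convention expects a `justification` whenever `status` is `not_affected`. That convention is an assumption here (the schema above does not enforce it), but a minimal consistency check for sample data could look like:

```python
VEX_STATUSES = {"affected", "not_affected", "under_investigation", "fixed"}

def check_vex_statement(stmt):
    """Minimal consistency check for a vexStatement object (hypothetical helper)."""
    if "vulnerability" not in stmt or stmt.get("status") not in VEX_STATUSES:
        return False
    # OpenVEX convention (not enforced by the schema above):
    # a not_affected status should carry a justification.
    if stmt["status"] == "not_affected" and not stmt.get("justification"):
        return False
    return True
```

If this rule is wanted in the schema itself, JSON Schema `if`/`then` conditionals can encode it; otherwise it belongs in the T5 validation layer.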
**Acceptance Criteria**:
- [ ] Schema file created
- [ ] VEX statement definition included
- [ ] Validates against sample data
---
### T3: Create stella.ops/reachability@v1 Schema
**Assignee**: Attestor Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `docs/schemas/predicates/reachability.v1.schema.json`
**Schema**:
```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://stella.ops/predicates/reachability@v1",
  "title": "StellaOps Reachability Attestation Predicate",
  "description": "Predicate for reachability analysis results.",
  "type": "object",
  "required": ["result", "confidence", "graphDigest"],
  "properties": {
    "result": {
      "type": "string",
      "enum": ["reachable", "unreachable", "unknown"],
      "description": "Reachability analysis result."
    },
    "confidence": {
      "type": "number",
      "minimum": 0,
      "maximum": 1,
      "description": "Confidence score (0-1)."
    },
    "graphDigest": {
      "type": "string",
      "pattern": "^sha256:[a-f0-9]{64}$",
      "description": "Digest of the call graph used."
    },
    "paths": {
      "type": "array",
      "items": {
        "$ref": "#/$defs/reachabilityPath"
      },
      "description": "Paths from entrypoints to vulnerable code."
    },
    "entrypoints": {
      "type": "array",
      "items": { "$ref": "#/$defs/entrypoint" },
      "description": "Entrypoints considered."
    },
    "computedAt": {
      "type": "string",
      "format": "date-time"
    },
    "expiresAt": {
      "type": "string",
      "format": "date-time"
    }
  },
  "$defs": {
    "reachabilityPath": {
      "type": "object",
      "required": ["pathId", "steps"],
      "properties": {
        "pathId": { "type": "string" },
        "steps": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": {
              "node": { "type": "string" },
              "fileHash": { "type": "string" },
              "lines": {
                "type": "array",
                "items": { "type": "integer" },
                "minItems": 2,
                "maxItems": 2
              }
            }
          }
        }
      }
    },
    "entrypoint": {
      "type": "object",
      "required": ["type"],
      "properties": {
        "type": { "type": "string" },
        "route": { "type": "string" },
        "auth": { "type": "string" }
      }
    }
  },
  "additionalProperties": false
}
```
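The `lines` field in a path step is a two-element array, which reads naturally as a `[start, end]` source range. The schema only requires two integers; the ordering constraint below is an extra assumption worth documenting if tooling depends on it:

```python
def check_line_range(lines):
    """Validate a path step's 'lines' field as [start, end] with start <= end.

    The ordering constraint is an assumption; the schema itself only
    requires exactly two integers.
    """
    return (
        isinstance(lines, list)
        and len(lines) == 2
        and all(isinstance(v, int) and not isinstance(v, bool) for v in lines)
        and lines[0] <= lines[1]
    )
```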
**Acceptance Criteria**:
- [ ] Schema file created
- [ ] Path and entrypoint definitions
- [ ] Validates against sample data
---
### T4: Create Remaining Predicate Schemas
**Assignee**: Attestor Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Files**:
- `docs/schemas/predicates/boundary.v1.schema.json`
- `docs/schemas/predicates/policy-decision.v1.schema.json`
- `docs/schemas/predicates/human-approval.v1.schema.json`
**Acceptance Criteria**:
- [ ] All 3 schemas created
- [ ] Match existing model definitions
- [ ] Validate against samples
---
### T5: Add Schema Validation to Attestation Service
**Assignee**: Attestor Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T4
**Description**:
Add schema validation when creating attestations.
**Implementation Path**: `src/Attestor/__Libraries/StellaOps.Attestor.Core/Validation/`
**Implementation**:
```csharp
public interface IPredicateSchemaValidator
{
    ValidationResult Validate(string predicateType, JsonElement predicate);
}

public sealed class PredicateSchemaValidator : IPredicateSchemaValidator
{
    private readonly IReadOnlyDictionary<string, JsonSchema> _schemas;

    public PredicateSchemaValidator()
    {
        // LoadSchemas() (elided) compiles the schemas shipped under docs/schemas/predicates/.
        _schemas = LoadSchemas();
    }

    public ValidationResult Validate(string predicateType, JsonElement predicate)
    {
        // Unknown predicate types are skipped rather than failed (graceful handling).
        if (!_schemas.TryGetValue(predicateType, out var schema))
        {
            return ValidationResult.Skip($"No schema for {predicateType}");
        }

        var results = schema.Validate(predicate);
        return results.IsValid
            ? ValidationResult.Valid()
            : ValidationResult.Invalid(results.Errors);
    }
}
```
**Acceptance Criteria**:
- [ ] Schema loader implemented
- [ ] Validation during attestation creation
- [ ] Graceful handling of unknown predicates
- [ ] Error messages include path
---
### T6: Add Unit Tests
**Assignee**: Attestor Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T5
**Test Cases**:
- [ ] Valid SBOM predicate passes
- [ ] Invalid VEX status fails
- [ ] Missing required field fails
- [ ] Unknown predicate type skips
**Acceptance Criteria**:
- [ ] 4+ tests passing
- [ ] Coverage for each schema
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Attestor Team | SBOM schema |
| 2 | T2 | TODO | — | Attestor Team | VEX schema |
| 3 | T3 | TODO | — | Attestor Team | Reachability schema |
| 4 | T4 | TODO | — | Attestor Team | Remaining schemas |
| 5 | T5 | TODO | T1-T4 | Attestor Team | Schema validation |
| 6 | T6 | TODO | T5 | Attestor Team | Unit tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G4). | Agent |
---
## Success Criteria
- [ ] All 6 tasks marked DONE
- [ ] 6 predicate schemas created
- [ ] Validation integrated
- [ ] 4+ tests passing
- [ ] `dotnet build` succeeds

# SPRINT_4300_0003_0001: Sealed Knowledge Snapshot Export/Import
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4300_0003_0001 |
| **Title** | Sealed Knowledge Snapshot Export/Import |
| **Priority** | P1 (High) |
| **Moat Strength** | 4 (Strong moat) |
| **Working Directory** | `src/AirGap/`, `src/Concelier/`, `src/Excititor/`, `src/Cli/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | AirGap.Importer (exists), ReplayManifest (exists) |
---
## Objective
Implement a "sealed knowledge snapshot" workflow for air-gapped environments, packaging all advisory feeds, VEX statements, and policies into a cryptographically verifiable bundle that can be transferred offline and validated on import.
**Moat thesis**: Air-gapped "runtime" is common; air-gapped **reproducibility** is not.
---
## Background
The advisory identifies air-gapped epistemic mode as **Moat 4**. Current implementation has:
- `AirGap.Controller` with state management
- `ReplayVerifier` with depth levels
- `TrustStore` for offline validation
**Gap**: No unified export/import workflow for knowledge snapshots.
---
## Deliverables
### D1: Knowledge Snapshot Format
- Define snapshot bundle structure
- Include: advisories, VEX, policies, time anchor, trust roots
- Merkle tree for content integrity
### D2: Snapshot Export CLI
- `stella airgap export --output=./knowledge-2025-12-22.tar.gz`
- Point-in-time feed extraction
- Sign snapshot with designated key
### D3: Snapshot Import CLI
- `stella airgap import --bundle=./knowledge-2025-12-22.tar.gz`
- Verify signature and Merkle root
- Validate time anchor freshness
- Apply to local database
### D4: Snapshot Diff
- Compare two snapshots
- Report: new advisories, updated VEX, policy changes
### D5: Staleness Policy
- Configurable max age for snapshots
- Warn/block on stale knowledge
---
## Tasks
### Phase 1: Snapshot Format
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| SEAL-001 | Define `KnowledgeSnapshotManifest` schema | TODO | |
| SEAL-002 | Implement Merkle tree builder for bundle contents | TODO | |
| SEAL-003 | Create `SnapshotBundleWriter` | TODO | |
| SEAL-004 | Add DSSE signing for manifest | TODO | |
### Phase 2: Export
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| SEAL-005 | Add `stella airgap export` command | TODO | |
| SEAL-006 | Implement advisory snapshot extractor | TODO | |
| SEAL-007 | Implement VEX snapshot extractor | TODO | |
| SEAL-008 | Implement policy bundle extractor | TODO | |
| SEAL-009 | Add time anchor token generation | TODO | |
| SEAL-010 | Package into signed bundle | TODO | |
### Phase 3: Import
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| SEAL-011 | Add `stella airgap import` command | TODO | |
| SEAL-012 | Implement signature verification | TODO | |
| SEAL-013 | Implement Merkle root validation | TODO | |
| SEAL-014 | Validate time anchor against staleness policy | TODO | |
| SEAL-015 | Apply advisories to Concelier database | TODO | |
| SEAL-016 | Apply VEX to Excititor database | TODO | |
| SEAL-017 | Apply policies to Policy registry | TODO | |
### Phase 4: Diff & Staleness
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| SEAL-018 | Implement `stella airgap diff` command | TODO | |
| SEAL-019 | Add staleness policy configuration | TODO | |
| SEAL-020 | Emit warnings on stale imports | TODO | |
---
## Acceptance Criteria
1. **AC1**: Export produces self-contained knowledge bundle
2. **AC2**: Import validates signature and Merkle root
3. **AC3**: Stale snapshots are rejected (configurable age)
4. **AC4**: Diff shows changes between snapshots
5. **AC5**: Imported knowledge enables offline scans
---
## Technical Notes
### Bundle Structure
```
knowledge-2025-12-22.tar.gz
├── manifest.json # Snapshot metadata + Merkle root
├── manifest.sig # DSSE signature
├── time-anchor.json # RFC 3161 or Roughtime token
├── advisories/
│ ├── nvd/ # NVD advisories
│ ├── ghsa/ # GitHub advisories
│ └── ... # Other feeds
├── vex/
│ ├── cisco/
│ ├── redhat/
│ └── ...
├── policies/
│ └── policy-bundle.tar # OPA bundle
└── trust/
└── trust-roots.pem # Signing key roots
```
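SEAL-002's Merkle tree over bundle contents can be sketched as hashing (path, content-digest) leaf pairs and folding pairwise up to a root. The concrete choices below (leaves sorted by path for determinism, odd node promoted unchanged) are assumptions, not the normative format:

```python
import hashlib

def _sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(entries):
    """Merkle root over (path, content-digest-hex) pairs.

    Leaves are sorted by path for determinism; an odd node is promoted to
    the next level unchanged. Both choices are assumptions for SEAL-002.
    """
    level = [_sha256(f"{path}:{digest}".encode()) for path, digest in sorted(entries)]
    if not level:
        return _sha256(b"").hex()
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            if i + 1 < len(level):
                nxt.append(_sha256(level[i] + level[i + 1]))
            else:
                nxt.append(level[i])  # promote the odd node
        level = nxt
    return level[0].hex()

files = [
    ("advisories/nvd/feed.json", "ab" * 32),
    ("vex/redhat/vex.json", "cd" * 32),
]
```

Whatever scheme SEAL-002 lands on, the manifest must record it (leaf encoding, sort order, odd-node rule) so SEAL-013 can recompute the root independently on import.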
### Staleness Budget
```yaml
airgap:
  staleness:
    max_age_hours: 168        # 7 days default
    warn_age_hours: 72        # Warn after 3 days
    require_time_anchor: true
```
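SEAL-019/SEAL-020 reduce to mapping a snapshot's age against the two thresholds. A minimal decision sketch (boundary semantics are an assumption; exactly-at-threshold counts as the milder outcome here, and SEAL-019 should pin this down):

```python
def evaluate_staleness(age_hours, max_age_hours=168, warn_age_hours=72):
    """Map snapshot age to an import decision: 'ok', 'warn', or 'block'.

    Thresholds are exclusive in this sketch: a snapshot exactly at
    warn_age_hours is still 'ok'.
    """
    if age_hours > max_age_hours:
        return "block"
    if age_hours > warn_age_hours:
        return "warn"
    return "ok"
```

Age should be computed from the bundle's time anchor, not the local clock, so that a skewed air-gapped host cannot silently accept stale knowledge.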
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Large bundle size | Transfer challenges | Incremental updates, compression |
| Key compromise | Trust broken | Support key rotation, revocation lists |
| Time anchor unavailable | Cannot validate freshness | Fallback to operator attestation |
---
## Documentation Updates
- [ ] Add `docs/operations/airgap-knowledge-sync.md`
- [ ] Update air-gap architecture documentation
- [ ] Add staleness policy guide

# Sprint 4300.0003.0002 - Attestation Completeness Metrics
## Topic & Scope
- Add metrics for attestation completeness and timeliness
- Expose via OpenTelemetry/Prometheus
- Add Grafana dashboard template
**Working directory:** `src/Telemetry/StellaOps.Telemetry.Core/`
## Dependencies & Concurrency
- **Upstream (DONE):**
- TTFS Telemetry (TtfsIngestionService)
- OpenTelemetry integration
- **Downstream:** Grafana dashboards, SLO tracking
- **Safe to parallelize with:** All SPRINT_4300_*
## Documentation Prerequisites
- `docs/modules/telemetry/architecture.md`
- Advisory metrics requirements
---
## Tasks
### T1: Define Attestation Metrics
**Assignee**: Telemetry Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Description**:
Define the metrics from the advisory.
**Implementation Path**: `Metrics/AttestationMetrics.cs` (new)
**Implementation**:
```csharp
using System.Diagnostics.Metrics;

namespace StellaOps.Telemetry.Core.Metrics;

/// <summary>
/// Metrics for attestation completeness and quality.
/// </summary>
public sealed class AttestationMetrics
{
    private readonly Meter _meter;

    // Counters
    private readonly Counter<long> _attestationsCreated;
    private readonly Counter<long> _attestationsVerified;
    private readonly Counter<long> _attestationsFailed;

    // Histograms
    private readonly Histogram<double> _ttfeSeconds;
    private readonly Histogram<double> _verificationDuration;

    // Observable gauges (completeness ratio, average TTFE) are registered via
    // _meter.CreateObservableGauge with callbacks; their wiring is elided in this sketch.

    public AttestationMetrics(IMeterFactory meterFactory)
    {
        _meter = meterFactory.Create("StellaOps.Attestations");

        _attestationsCreated = _meter.CreateCounter<long>(
            "stella_attestations_created_total",
            unit: "{attestation}",
            description: "Total attestations created");

        _attestationsVerified = _meter.CreateCounter<long>(
            "stella_attestations_verified_total",
            unit: "{attestation}",
            description: "Total attestations verified successfully");

        _attestationsFailed = _meter.CreateCounter<long>(
            "stella_attestations_failed_total",
            unit: "{attestation}",
            description: "Total attestation verifications that failed");

        _ttfeSeconds = _meter.CreateHistogram<double>(
            "stella_ttfe_seconds",
            unit: "s",
            description: "Time to first evidence (alert → evidence panel open)");

        _verificationDuration = _meter.CreateHistogram<double>(
            "stella_attestation_verification_duration_seconds",
            unit: "s",
            description: "Time to verify an attestation");
    }

    /// <summary>Record an attestation being created.</summary>
    public void RecordCreated(string predicateType, string signer)
    {
        _attestationsCreated.Add(1,
            new KeyValuePair<string, object?>("predicate_type", predicateType),
            new KeyValuePair<string, object?>("signer", signer));
    }

    /// <summary>Record an attestation verification attempt.</summary>
    public void RecordVerified(string predicateType, bool success, TimeSpan duration)
    {
        if (success)
        {
            _attestationsVerified.Add(1,
                new KeyValuePair<string, object?>("predicate_type", predicateType));
        }
        else
        {
            _attestationsFailed.Add(1,
                new KeyValuePair<string, object?>("predicate_type", predicateType));
        }

        _verificationDuration.Record(duration.TotalSeconds,
            new KeyValuePair<string, object?>("predicate_type", predicateType),
            new KeyValuePair<string, object?>("success", success));
    }

    /// <summary>Record time to first evidence.</summary>
    public void RecordTtfe(TimeSpan duration, string evidenceType)
    {
        _ttfeSeconds.Record(duration.TotalSeconds,
            new KeyValuePair<string, object?>("evidence_type", evidenceType));
    }
}
```
**Acceptance Criteria**:
- [ ] Counter: `stella_attestations_created_total`
- [ ] Counter: `stella_attestations_verified_total`
- [ ] Counter: `stella_attestations_failed_total`
- [ ] Histogram: `stella_ttfe_seconds`
- [ ] Histogram: `stella_attestation_verification_duration_seconds`
- [ ] Labels for predicate_type, signer, evidence_type
---
### T2: Add Completeness Ratio Calculator
**Assignee**: Telemetry Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1
**Description**:
Calculate attestation completeness ratio per artifact.
**Implementation**:
```csharp
public interface IAttestationCompletenessCalculator
{
    /// <summary>
    /// Calculate the completeness ratio for an artifact.
    /// Complete = has all required attestation types.
    /// </summary>
    Task<CompletenessResult> CalculateAsync(
        string artifactDigest,
        IReadOnlyList<string> requiredTypes,
        CancellationToken ct = default);
}

public sealed class AttestationCompletenessCalculator : IAttestationCompletenessCalculator
{
    private readonly IOciReferrerDiscovery _discovery;
    private readonly AttestationMetrics _metrics;

    public AttestationCompletenessCalculator(
        IOciReferrerDiscovery discovery,
        AttestationMetrics metrics)
    {
        _discovery = discovery;
        _metrics = metrics;
    }

    public async Task<CompletenessResult> CalculateAsync(
        string artifactDigest,
        IReadOnlyList<string> requiredTypes,
        CancellationToken ct = default)
    {
        var referrers = await _discovery.ListReferrersAsync(
            /* registry, repo, digest */, ct);

        // MapArtifactType (elided) maps OCI artifact media types to predicate type names.
        var foundTypes = referrers.Referrers
            .Select(r => MapArtifactType(r.ArtifactType))
            .Distinct()
            .ToHashSet();

        var missingTypes = requiredTypes.Except(foundTypes).ToList();

        // Guard the empty requirement set: vacuously complete.
        var ratio = requiredTypes.Count == 0
            ? 1.0
            : (double)(requiredTypes.Count - missingTypes.Count) / requiredTypes.Count;

        return new CompletenessResult
        {
            ArtifactDigest = artifactDigest,
            CompletenessRatio = ratio,
            FoundTypes = foundTypes.ToList(),
            MissingTypes = missingTypes,
            IsComplete = missingTypes.Count == 0
        };
    }
}

public sealed record CompletenessResult
{
    public required string ArtifactDigest { get; init; }
    public required double CompletenessRatio { get; init; }
    public required IReadOnlyList<string> FoundTypes { get; init; }
    public required IReadOnlyList<string> MissingTypes { get; init; }
    public required bool IsComplete { get; init; }
}
```
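The ratio logic is small enough to pin down as a language-agnostic sketch, which is also what the "ratio calculation correct" test case should assert (the empty-requirements-is-complete rule is an assumption worth confirming):

```python
def completeness(required, found):
    """Ratio of required attestation types present on an artifact.

    An empty requirement set counts as vacuously complete (assumption).
    """
    found_set = set(found)
    missing = [t for t in required if t not in found_set]
    ratio = 1.0 if not required else (len(required) - len(missing)) / len(required)
    return {"ratio": ratio, "missing": missing, "complete": not missing}
```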
**Acceptance Criteria**:
- [ ] Ratio calculation correct
- [ ] Missing types identified
- [ ] Handles partial attestation sets
---
### T3: Add Post-Deploy Reversion Tracking
**Assignee**: Telemetry Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1
**Description**:
Track reversions due to missing proof.
**Implementation**:
```csharp
using System.Diagnostics.Metrics;

public sealed class DeploymentMetrics
{
    private readonly Counter<long> _deploymentsTotal;
    private readonly Counter<long> _reversionsTotal;

    public DeploymentMetrics(IMeterFactory meterFactory)
    {
        var meter = meterFactory.Create("StellaOps.Deployments");

        _deploymentsTotal = meter.CreateCounter<long>(
            "stella_deployments_total",
            unit: "{deployment}",
            description: "Total deployments attempted");

        _reversionsTotal = meter.CreateCounter<long>(
            "stella_post_deploy_reversions_total",
            unit: "{reversion}",
            description: "Reversions due to missing or invalid proof");
    }

    public void RecordDeployment(string environment, bool hadCompleteProof)
    {
        _deploymentsTotal.Add(1,
            new KeyValuePair<string, object?>("environment", environment),
            new KeyValuePair<string, object?>("complete_proof", hadCompleteProof));
    }

    public void RecordReversion(string environment, string reason)
    {
        _reversionsTotal.Add(1,
            new KeyValuePair<string, object?>("environment", environment),
            new KeyValuePair<string, object?>("reason", reason));
    }
}
```
**Acceptance Criteria**:
- [ ] Deployment counter with proof status
- [ ] Reversion counter with reason
- [ ] Environment label
---
### T4: Create Grafana Dashboard Template
**Assignee**: Telemetry Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1, T2, T3
**Description**:
Create Grafana dashboard for attestation metrics.
**Implementation Path**: `deploy/grafana/dashboards/attestation-metrics.json`
**Dashboard Panels**:
1. **Attestation Completeness Gauge** (target: >=95%)
2. **TTFE Distribution** (target: <=30s)
3. **Verification Success Rate**
4. **Post-Deploy Reversions** (trend to zero)
5. **Attestations by Type** (pie chart)
6. **Stale Evidence Alerts** (time series)
**Acceptance Criteria**:
- [ ] Dashboard JSON created
- [ ] All 4 advisory metrics visualized
- [ ] SLO thresholds marked
- [ ] Time range selectors
---
### T5: Add DI Registration
**Assignee**: Telemetry Team
**Story Points**: 1
**Status**: TODO
**Dependencies**: T1, T2, T3
**Acceptance Criteria**:
- [ ] `AttestationMetrics` registered
- [ ] `DeploymentMetrics` registered
- [ ] `IAttestationCompletenessCalculator` registered
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Telemetry Team | Define metrics |
| 2 | T2 | TODO | T1 | Telemetry Team | Completeness calculator |
| 3 | T3 | TODO | T1 | Telemetry Team | Reversion tracking |
| 4 | T4 | TODO | T1-T3 | Telemetry Team | Grafana dashboard |
| 5 | T5 | TODO | T1-T3 | Telemetry Team | DI registration |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage advisory gap analysis (G5). | Agent |
---
## Success Criteria
- [ ] All 5 tasks marked DONE
- [ ] Metrics exposed via OpenTelemetry
- [ ] Grafana dashboard functional
- [ ] `dotnet build` succeeds

View File

@@ -0,0 +1,126 @@
# SPRINT_4300 MOAT HARDENING: Verdict Attestation & Epistemic Mode
## Program Overview
| Field | Value |
|-------|-------|
| **Program ID** | 4300 (Moat Series) |
| **Theme** | Moat Hardening: Signed Verdicts & Epistemic Operations |
| **Priority** | P0-P1 (Critical to High) |
| **Total Effort** | ~9 weeks |
| **Advisory Source** | 19-Dec-2025 - Stella Ops candidate features mapped to moat strength |
---
## Strategic Context
This sprint program addresses the highest-moat features identified in the competitive analysis advisory. The goal is to harden StellaOps' structural advantages in:
1. **Signed, replayable risk verdicts (Moat 5)** — The anchor differentiator
2. **Unknowns as first-class state (Moat 4)** — Governance primitive
3. **Air-gapped epistemic mode (Moat 4)** — Reproducibility moat
---
## Sprint Breakdown
### P0 Sprints (Critical)
| Sprint ID | Title | Effort | Moat |
|-----------|-------|--------|------|
| 4300_0001_0001 | OCI Verdict Attestation Referrer Push | 2 weeks | 5 |
| 4300_0001_0002 | One-Command Audit Replay CLI | 2 weeks | 5 |
**Outcome**: Verdicts become portable "ship tokens" that can be pushed to registries and replayed offline.
### P1 Sprints (High)
| Sprint ID | Title | Effort | Moat |
|-----------|-------|--------|------|
| 4300_0002_0001 | Unknowns Budget Policy Integration | 2 weeks | 4 |
| 4300_0002_0002 | Unknowns Attestation Predicates | 1 week | 4 |
| 4300_0003_0001 | Sealed Knowledge Snapshot Export/Import | 2 weeks | 4 |
**Outcome**: Uncertainty becomes actionable through policy gates and attestable for audits. Air-gap customers get sealed knowledge bundles.
---
## Related Sprint Programs
| Program | Theme | Moat Focus |
|---------|-------|------------|
| **4400** | Delta Verdicts & Reachability Attestations | Smart-Diff, Reachability |
| **4500** | VEX Hub & Trust Scoring | VEX Distribution Network |
| **4600** | SBOM Lineage & BYOS | SBOM Ledger |
---
## Dependency Graph
```
SPRINT_4300_0001_0001 (OCI Verdict Push)
├──► SPRINT_4300_0001_0002 (Audit Replay CLI)
└──► SPRINT_4400_0001_0001 (Signed Delta Verdict)
SPRINT_4300_0002_0001 (Unknowns Budget)
└──► SPRINT_4300_0002_0002 (Unknowns Attestation)
SPRINT_4300_0003_0001 (Sealed Snapshot)
└──► [Standalone, enables air-gap scenarios]
```
---
## Success Metrics
| Metric | Target | Measurement |
|--------|--------|-------------|
| Verdict push success rate | >99% | OTEL metrics |
| Audit replay pass rate | 100% on same inputs | CI tests |
| Unknown budget violations detected | >0 in test suite | Integration tests |
| Air-gap import success rate | >99% | Manual testing |
---
## Risks & Dependencies
| Risk | Impact | Mitigation |
|------|--------|------------|
| OCI registry incompatibility | Cannot push verdicts | Fallback to tag-based |
| Bundle size too large | Transfer issues | Streaming, compression |
| Key management complexity | Security | Document rotation procedures |
---
## Timeline Recommendation
**Phase 1 (Weeks 1-4)**: P0 Sprints
- OCI Verdict Push + Audit Replay
**Phase 2 (Weeks 5-7)**: P1 Sprints
- Unknowns Budget + Attestations
**Phase 3 (Weeks 8-9)**: P1 Sprints
- Sealed Knowledge Snapshots
---
## Documentation Deliverables
- [ ] `docs/operations/verdict-attestation-guide.md`
- [ ] `docs/operations/audit-replay-guide.md`
- [ ] `docs/operations/unknown-budgets-guide.md`
- [ ] `docs/operations/airgap-knowledge-sync.md`
- [ ] Update attestation type catalog
- [ ] Update CLI reference
---
**Sprint Series Status:** TODO
**Created:** 2025-12-22
**Origin:** Gap analysis of 19-Dec-2025 moat strength advisory

# SPRINT_4300 Summary - Explainable Triage Gaps
## Overview
This sprint series closes the remaining gaps between the "Designing Explainable Triage and Proof-Linked Evidence" advisory (18-Dec-2025) and the current implementation.
**Origin Advisory:** `docs/product-advisories/18-Dec-2025 - Designing Explainable Triage and ProofLinked Evidence.md`
**Gap Analysis:** `docs/implplan/analysis/4300_explainable_triage_gap_analysis.md`
## Executive Summary
The advisory defined a comprehensive vision for explainable, evidence-linked triage. **~85% was already implemented** through prior sprints (3800, 3801, 4100, 4200 series). This series addresses the remaining **6 gaps**:
| Gap | Description | Sprint | Priority | Effort |
|-----|-------------|--------|----------|--------|
| G1 | CLI attestation chain verify command | 4300.0001.0001 | HIGH | M |
| G6 | Findings evidence API endpoint | 4300.0001.0002 | MEDIUM | S |
| G2 | Evidence privacy controls | 4300.0002.0001 | MEDIUM | M |
| G3 | Evidence TTL enforcement | 4300.0002.0002 | MEDIUM | S |
| G4 | Predicate JSON schemas | 4300.0003.0001 | LOW | S |
| G5 | Attestation completeness metrics | 4300.0003.0002 | LOW | M |
**Total Effort:** ~10-14 days across teams
## Sprint Structure
```
SPRINT_4300 (Explainable Triage Gaps)
├── 0001 (CLI & API)
│ ├── 0001 CLI Attestation Verify Command [HIGH]
│ └── 0002 Findings Evidence API [MEDIUM]
├── 0002 (Evidence Management)
│ ├── 0001 Evidence Privacy Controls [MEDIUM]
│ └── 0002 Evidence TTL Enforcement [MEDIUM]
└── 0003 (Quality & Observability)
├── 0001 Predicate JSON Schemas [LOW]
└── 0002 Attestation Metrics [LOW]
```
## Dependencies
### External Dependencies (Already DONE)
| Dependency | Sprint | Status |
|------------|--------|--------|
| OCI Referrer Discovery | 4100.0003.0002 | DONE |
| Risk Verdict Attestation | 4100.0003.0001 | DONE |
| Human Approval Attestation | 3801.0001.0004 | DONE |
| Approve Button UI | 4100.0005.0001 | DONE |
| Evidence Composition Service | 3800.0003.0001 | DONE |
| Boundary Extractors | 3800.0002.* | DONE |
| Trust Lattice Engine | (core) | DONE |
### Internal Dependencies
```
4300.0001.0001 ─┬─> (none, can start immediately)
4300.0001.0002 ─┤
4300.0002.0001 ─┤
4300.0002.0002 ─┤
4300.0003.0001 ─┤
4300.0003.0002 ─┘
```
All sprints can run in parallel.
## Recommended Execution Order
**Wave 1 (Week 1):** HIGH priority + foundations
- 4300.0001.0001 - CLI Attestation Verify (CLI Team)
- 4300.0001.0002 - Findings Evidence API (Scanner Team)
- 4300.0002.0002 - Evidence TTL Enforcement (Policy Team)
**Wave 2 (Week 2):** MEDIUM + LOW priority
- 4300.0002.0001 - Evidence Privacy Controls (Scanner Team)
- 4300.0003.0001 - Predicate Schemas (Attestor Team)
- 4300.0003.0002 - Attestation Metrics (Telemetry Team)
## Success Criteria (from Advisory)
| # | Criterion | Coverage |
|---|-----------|----------|
| 1 | Every risk row expands to path, boundary, VEX, last-seen in <300ms | 4200.0001.0001 (planned) + 4300.0001.0002 |
| 2 | "Approve" button disabled until SBOM+VEX+Decision attestations validate | 4100.0005.0001 (DONE) |
| 3 | One-click "Show DSSE chain" renders envelopes with digests and signers | 4200.0001.0001 (planned) |
| 4 | Audit log captures who approved, which digests, evidence hashes | 3801.0001.0004 (DONE) |
| 5 | CLI can verify attestation chain before deploy | **4300.0001.0001** |
| 6 | % attestation completeness >= 95% | **4300.0003.0002** |
| 7 | TTFE (time-to-first-evidence) <= 30s | **4300.0003.0002** |
| 8 | Post-deploy reversions trend to zero | **4300.0003.0002** |
## Team Assignments
| Team | Sprints | Total Effort |
|------|---------|--------------|
| CLI Team | 4300.0001.0001 | M (2-3d) |
| Scanner Team | 4300.0001.0002, 4300.0002.0001 | S+M (3-5d) |
| Policy Team | 4300.0002.0002 | S (1-2d) |
| Attestor Team | 4300.0003.0001 | S (1-2d) |
| Telemetry Team | 4300.0003.0002 | M (2-3d) |
## Deliverables
### New CLI Commands
- `stella verify image <reference> --require sbom,vex,decision`
### New API Endpoints
- `GET /api/v1/findings/{findingId}/evidence`
- `POST /api/v1/findings/evidence/batch`
### New Services
- `ImageAttestationVerifier`
- `TrustPolicyLoader`
- `EvidenceRedactionService`
- `EvidenceTtlEnforcer`
- `AttestationCompletenessCalculator`
- `PredicateSchemaValidator`
### New Metrics
- `stella_attestations_created_total`
- `stella_attestations_verified_total`
- `stella_attestations_failed_total`
- `stella_ttfe_seconds`
- `stella_post_deploy_reversions_total`
### New Schemas
- `docs/schemas/predicates/sbom.v1.schema.json`
- `docs/schemas/predicates/vex.v1.schema.json`
- `docs/schemas/predicates/reachability.v1.schema.json`
- `docs/schemas/predicates/boundary.v1.schema.json`
- `docs/schemas/predicates/policy-decision.v1.schema.json`
- `docs/schemas/predicates/human-approval.v1.schema.json`
### New Dashboard
- `deploy/grafana/dashboards/attestation-metrics.json`
## Risk Register
| Risk | Impact | Mitigation |
|------|--------|------------|
| OCI referrers API not supported by all registries | Discovery fails without a fallback | Fallback tag discovery (already implemented in 4100.0003.0002) |
| Schema validation performance | Latency on attestation creation | Cache compiled schemas |
| Metric cardinality explosion | Prometheus storage | Limit label values |
## Completion Checklist
- [ ] All 6 sprints marked DONE
- [ ] CLI verify command works end-to-end
- [ ] Evidence API returns advisory-compliant contract
- [ ] Privacy redaction enforced by default
- [ ] TTL staleness affects policy decisions
- [ ] All predicate schemas validate correctly
- [ ] Grafana dashboard shows all metrics
- [ ] Integration tests pass
- [ ] Documentation updated
## Post-Completion
After all sprints complete:
1. Update `docs/09_API_CLI_REFERENCE.md` with new CLI command
2. Update `docs/modules/scanner/architecture.md` with evidence API
3. Archive this summary to `docs/implplan/archived/`
4. Close advisory tracking issue
---
**Sprint Series Status:** TODO (0/6 sprints complete)
**Created:** 2025-12-22
**Origin:** Gap analysis of 18-Dec-2025 advisory

# SPRINT_4400_0001_0001: Signed Delta Verdict Attestation
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4400_0001_0001 |
| **Title** | Signed Delta Verdict Attestation |
| **Priority** | P2 (Medium) |
| **Moat Strength** | 4 (Strong moat) |
| **Working Directory** | `src/Scanner/`, `src/Attestor/`, `src/Cli/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | MaterialRiskChangeDetector (exists), SPRINT_4300_0001_0001 |
---
## Objective
Create a signed attestation format for Smart-Diff results, making semantic risk deltas portable, auditable, and verifiable as part of the change control process.
**Moat thesis**: "We explain what changed in exploitable surface area, not what changed in CVE count."
---
## Background
Smart-Diff (`MaterialRiskChangeDetector`) exists with R1-R4 rules and priority scoring. **Gap**: Results are not attestable.
---
## Deliverables
### D1: Delta Verdict Attestation Predicate
- Define `delta-verdict.stella/v1` predicate type
- Include: changes detected, priority score, evidence references
### D2: Delta Verdict Builder
- Build delta attestation from `MaterialRiskChangeResult`
- Link to before/after proof spines
- Include graph revision IDs
### D3: OCI Delta Push
- Push delta verdict as OCI referrer
- Support linking to two image manifests (before/after)
### D4: CLI Integration
- `stella diff --sign --push` flow
- `stella diff verify` command
---
## Tasks
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| DELTA-001 | Define `DeltaVerdictStatement` predicate | TODO | |
| DELTA-002 | Create `DeltaVerdictBuilder` | TODO | |
| DELTA-003 | Implement before/after proof spine linking | TODO | |
| DELTA-004 | Add delta verdict to OCI pusher | TODO | |
| DELTA-005 | Implement `stella diff --sign` | TODO | |
| DELTA-006 | Implement `stella diff verify` | TODO | |
| DELTA-007 | Add SARIF output with attestation reference | TODO | |
| DELTA-008 | Integration tests | TODO | |
---
## Acceptance Criteria
1. **AC1**: Delta verdict is a signed in-toto statement
2. **AC2**: Delta can be pushed as OCI referrer
3. **AC3**: `stella diff verify` validates signature and content
4. **AC4**: Attestation links to both scan verdicts
---
## Technical Notes
### Delta Verdict Statement
```json
{
"_type": "https://in-toto.io/Statement/v1",
"subject": [
{"digest": {"sha256": "<image-before>"}},
{"digest": {"sha256": "<image-after>"}}
],
"predicateType": "delta-verdict.stella/v1",
"predicate": {
"beforeRevisionId": "...",
"afterRevisionId": "...",
"hasMaterialChange": true,
"priorityScore": 1750,
"changes": [
{
"rule": "R1_ReachabilityFlip",
"findingKey": {"vulnId": "CVE-2024-1234", "purl": "..."},
"direction": "increased",
"reason": "Reachability changed from false to true"
}
],
"beforeVerdictDigest": "sha256:...",
"afterVerdictDigest": "sha256:...",
"comparedAt": "2025-12-22T00:00:00Z"
}
}
```
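The `beforeVerdictDigest`/`afterVerdictDigest` fields are what let `stella diff verify` (AC3/AC4) bind the delta statement to the two scan verdicts. A minimal Python sketch of that linkage check, illustrative only — the production implementation would be C# in the Attestor, and `canonical_digest`/`links_verdicts` are hypothetical names assuming verdicts are canonicalized as sorted-key JSON:

```python
import hashlib
import json

def canonical_digest(doc: dict) -> str:
    # Deterministic serialization: sorted keys, no whitespace, then sha256.
    payload = json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()
    return "sha256:" + hashlib.sha256(payload).hexdigest()

def links_verdicts(statement: dict, before_verdict: dict, after_verdict: dict) -> bool:
    # The delta statement is only valid if its embedded digests match the
    # canonical digests of the two verdict documents it claims to compare.
    pred = statement["predicate"]
    return (pred["beforeVerdictDigest"] == canonical_digest(before_verdict)
            and pred["afterVerdictDigest"] == canonical_digest(after_verdict))
```

Signature verification over the enclosing in-toto envelope happens separately; this check only covers content linkage.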
---
## Documentation Updates
- [ ] Add delta verdict to attestation catalog
- [ ] Update Smart-Diff documentation

# SPRINT_4400_0001_0002: Reachability Subgraph Attestation
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4400_0001_0002 |
| **Title** | Reachability Subgraph Attestation |
| **Priority** | P2 (Medium) |
| **Moat Strength** | 4 (Strong moat) |
| **Working Directory** | `src/Signals/`, `src/Attestor/`, `src/Scanner/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | ReachabilityWitnessStatement (exists), CallPath (exists) |
---
## Objective
Package reachability analysis results as a standalone, attestable subgraph artifact that can be stored, transferred, and verified independently of the full scan context.
**Moat thesis**: "We provide proof of exploitability in *this* artifact, not just a badge."
---
## Background
Current implementation has:
- `ReachabilityWitnessStatement` for single path witness
- `PathWitnessBuilder` for call path construction
- `CallPath` models
**Gap**: No standalone reachability subgraph as portable artifact.
---
## Deliverables
### D1: Reachability Subgraph Format
- Define graph serialization format (nodes, edges, metadata)
- Include: entrypoints, symbols, call edges, gates
- Support partial graphs (per-finding)
### D2: Subgraph Attestation Predicate
- Define `reachability-subgraph.stella/v1` predicate
- Include: graph digest, finding keys covered, analysis metadata
### D3: Subgraph Builder
- Extract relevant subgraph from full call graph
- Prune to reachable paths only
- Include boundary detection results
### D4: OCI Subgraph Push
- Push subgraph as OCI artifact
- Link to SBOM and verdict
### D5: Subgraph Viewer
- CLI command to inspect subgraph
- Visualize call paths to vulnerable symbols
---
## Tasks
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| SUBG-001 | Define `ReachabilitySubgraph` serialization format | TODO | |
| SUBG-002 | Create `ReachabilitySubgraphStatement` predicate | TODO | |
| SUBG-003 | Implement `SubgraphExtractor` from call graph | TODO | |
| SUBG-004 | Add subgraph to attestation pipeline | TODO | |
| SUBG-005 | Implement OCI subgraph push | TODO | |
| SUBG-006 | Create `stella reachability show` command | TODO | |
| SUBG-007 | Add DOT/Mermaid export for visualization | TODO | |
| SUBG-008 | Integration tests with real call graphs | TODO | |
---
## Acceptance Criteria
1. **AC1**: Subgraph captures all paths to vulnerable symbols
2. **AC2**: Subgraph is a signed attestation
3. **AC3**: Subgraph can be pushed as OCI artifact
4. **AC4**: CLI can visualize subgraph
---
## Technical Notes
### Subgraph Format
```json
{
"version": "1.0",
"findingKeys": ["CVE-2024-1234@pkg:npm/lodash@4.17.20"],
"nodes": [
{"id": "n1", "type": "entrypoint", "symbol": "main.handler"},
{"id": "n2", "type": "call", "symbol": "lodash.merge"},
{"id": "n3", "type": "vulnerable", "symbol": "lodash._baseAssign"}
],
"edges": [
{"from": "n1", "to": "n2", "type": "call"},
{"from": "n2", "to": "n3", "type": "call"}
],
"gates": [
{"nodeId": "n1", "gateType": "http", "boundary": "public"}
],
"analysisMetadata": {
"analyzer": "node-callgraph-v2",
"confidence": 0.95,
"completeness": "partial"
}
}
```
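The DOT/Mermaid export in SUBG-007 is a direct projection of this format. A minimal Python sketch of the DOT rendering, assuming the subgraph JSON above — illustrative only, as the real exporter would be C# in `src/Signals/`:

```python
def subgraph_to_dot(subgraph: dict) -> str:
    # Render nodes and call edges in Graphviz DOT syntax; vulnerable
    # symbols get a distinct shape so paths to them stand out.
    lines = ["digraph reachability {"]
    for node in subgraph["nodes"]:
        shape = "doubleoctagon" if node["type"] == "vulnerable" else "box"
        lines.append(f'  {node["id"]} [label="{node["symbol"]}", shape={shape}];')
    for edge in subgraph["edges"]:
        lines.append(f'  {edge["from"]} -> {edge["to"]};')
    lines.append("}")
    return "\n".join(lines)
```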
---
## Documentation Updates
- [ ] Add reachability subgraph specification
- [ ] Update attestation type catalog
- [ ] Create reachability proof guide

# SPRINT_4400 SUMMARY: Delta Verdicts & Reachability Attestations
## Program Overview
| Field | Value |
|-------|-------|
| **Program ID** | 4400 |
| **Theme** | Attestable Change Control: Delta Verdicts & Reachability Proofs |
| **Priority** | P2 (Medium) |
| **Total Effort** | ~4 weeks |
| **Advisory Source** | 19-Dec-2025 - Stella Ops candidate features mapped to moat strength |
---
## Strategic Context
This program extends the attestation infrastructure to cover:
1. **Smart-Diff semantic delta** — Changes in exploitable surface as signed artifacts
2. **Reachability proofs** — Call-path subgraphs as portable evidence
---
## Sprint Breakdown
| Sprint ID | Title | Effort | Moat |
|-----------|-------|--------|------|
| 4400_0001_0001 | Signed Delta Verdict Attestation | 2 weeks | 4 |
| 4400_0001_0002 | Reachability Subgraph Attestation | 2 weeks | 4 |
---
## Dependencies
- **Requires**: SPRINT_4300_0001_0001 (OCI Verdict Push)
- **Requires**: MaterialRiskChangeDetector (exists)
- **Requires**: PathWitnessBuilder (exists)
---
## Outcomes
1. Delta verdicts become attestable change-control artifacts
2. Reachability analysis produces portable proof subgraphs
3. Both can be pushed to OCI registries as referrers
---
**Sprint Series Status:** TODO
**Created:** 2025-12-22

# SPRINT_4500_0001_0001: VEX Hub Aggregation Service
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4500_0001_0001 |
| **Title** | VEX Hub Aggregation Service |
| **Priority** | P1 (High) |
| **Moat Strength** | 3-4 (Moderate-Strong moat) |
| **Working Directory** | `src/Excititor/`, `src/VexLens/`, new `src/VexHub/` |
| **Estimated Effort** | 4 weeks |
| **Dependencies** | VexLens (exists), Excititor connectors (exist) |
---
## Objective
Build a VEX Hub aggregation layer that collects, validates, normalizes, and serves VEX statements at scale, positioning StellaOps as a trusted source for VEX distribution.
**Competitive context**: Aqua's VEX Hub is explicitly called out in the advisory. Differentiation requires verification + trust scoring + tight coupling to deterministic decisioning.
---
## Background
The advisory notes VEX distribution network as **Moat 3-4**. Current implementation:
- Excititor ingests from 7+ VEX sources
- VexLens provides consensus engine
- VexConsensusEngine supports multiple modes
**Gap**: No aggregation layer, no distribution API, no ecosystem play.
---
## Deliverables
### D1: VexHub Module
- New `src/VexHub/` module
- Aggregation scheduler
- Storage layer for normalized VEX
### D2: VEX Ingestion Pipeline
- Scheduled polling of upstream sources
- Normalization to canonical VEX format
- Deduplication and conflict detection
### D3: VEX Validation Pipeline
- Signature verification for signed VEX
- Schema validation
- Provenance tracking
### D4: Distribution API
- REST API for VEX discovery
- Query by: CVE, package (PURL), source
- Pagination and filtering
- Subscription/webhook for updates
### D5: Trivy/Grype Compatibility
- Export in OpenVEX format
- Compatible with Trivy `--vex-url` flag
- Index manifest for tool consumption
---
## Tasks
### Phase 1: Module Setup
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| HUB-001 | Create `StellaOps.VexHub` module structure | TODO | |
| HUB-002 | Define VexHub domain models | TODO | |
| HUB-003 | Create PostgreSQL schema for VEX aggregation | TODO | |
| HUB-004 | Set up web service skeleton | TODO | |
### Phase 2: Ingestion Pipeline
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| HUB-005 | Create `VexIngestionScheduler` | TODO | |
| HUB-006 | Implement source polling orchestration | TODO | |
| HUB-007 | Create `VexNormalizationPipeline` | TODO | |
| HUB-008 | Implement deduplication logic | TODO | |
| HUB-009 | Detect and flag conflicting statements | TODO | |
| HUB-010 | Store normalized VEX with provenance | TODO | |
### Phase 3: Validation Pipeline
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| HUB-011 | Implement signature verification for signed VEX | TODO | |
| HUB-012 | Add schema validation (OpenVEX, CycloneDX, CSAF) | TODO | |
| HUB-013 | Track and store provenance metadata | TODO | |
| HUB-014 | Flag unverified/untrusted statements | TODO | |
### Phase 4: Distribution API
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| HUB-015 | Implement `GET /api/v1/vex/cve/{cve-id}` | TODO | |
| HUB-016 | Implement `GET /api/v1/vex/package/{purl}` | TODO | |
| HUB-017 | Implement `GET /api/v1/vex/source/{source-id}` | TODO | |
| HUB-018 | Add pagination and filtering | TODO | |
| HUB-019 | Implement subscription/webhook for updates | TODO | |
| HUB-020 | Add rate limiting and authentication | TODO | |
### Phase 5: Tool Compatibility
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| HUB-021 | Implement OpenVEX bulk export | TODO | |
| HUB-022 | Create index manifest (vex-index.json) | TODO | |
| HUB-023 | Test with Trivy `--vex-url` | TODO | |
| HUB-024 | Test with Grype VEX support | TODO | |
| HUB-025 | Document integration instructions | TODO | |
---
## Acceptance Criteria
1. **AC1**: VEX Hub ingests from all configured sources on schedule
2. **AC2**: API returns VEX statements by CVE and PURL
3. **AC3**: Signed VEX statements are verified and flagged
4. **AC4**: Trivy can consume VEX from hub URL
5. **AC5**: Conflicts are detected and surfaced
---
## Technical Notes
### API Examples
```http
GET /api/v1/vex/cve/CVE-2024-1234
Accept: application/vnd.openvex+json
Response:
{
"@context": "https://openvex.dev/ns",
"statements": [
{
"vulnerability": "CVE-2024-1234",
"products": ["pkg:npm/express@4.17.1"],
"status": "not_affected",
"justification": "vulnerable_code_not_present",
"source": {"id": "redhat-csaf", "trustScore": 0.95}
}
]
}
```
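Conflict detection (HUB-009, AC5) over normalized statements like the one above reduces to grouping by (vulnerability, product) and flagging disagreeing statuses. A minimal Python sketch under that assumption — `find_conflicts` is a hypothetical name, and the production logic would be C# in `src/VexHub/`:

```python
from collections import defaultdict

def find_conflicts(statements: list[dict]) -> list[tuple[str, str]]:
    # Group normalized statements by (CVE, product PURL); a conflict is
    # two or more sources asserting different statuses for the same pair.
    by_key: dict[tuple[str, str], set[str]] = defaultdict(set)
    for stmt in statements:
        for product in stmt["products"]:
            by_key[(stmt["vulnerability"], product)].add(stmt["status"])
    return [key for key, statuses in by_key.items() if len(statuses) > 1]
```

Surfaced conflicts would then feed trust scoring rather than being silently resolved.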
### Index Manifest
```json
{
"version": "1.0",
"lastUpdated": "2025-12-22T00:00:00Z",
"sources": ["redhat-csaf", "cisco-csaf", "ubuntu-csaf"],
"totalStatements": 45678,
"endpoints": {
"byCve": "/api/v1/vex/cve/{cve}",
"byPackage": "/api/v1/vex/package/{purl}",
"bulk": "/api/v1/vex/export"
}
}
```
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Upstream source instability | Missing VEX | Multiple sources, caching |
| Conflicting VEX from sources | Confusion | Surface conflicts, trust scoring |
| Scale challenges | Performance | Caching, CDN, pagination |
---
## Documentation Updates
- [ ] Create `docs/modules/vexhub/architecture.md`
- [ ] Add VexHub API reference
- [ ] Create integration guide for Trivy/Grype

# SPRINT_4500_0001_0002: VEX Trust Scoring Framework
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4500_0001_0002 |
| **Title** | VEX Trust Scoring Framework |
| **Priority** | P1 (High) |
| **Moat Strength** | 3-4 (Moderate-Strong moat) |
| **Working Directory** | `src/VexLens/`, `src/VexHub/`, `src/Policy/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | SPRINT_4500_0001_0001, TrustWeightEngine (exists) |
---
## Objective
Develop a comprehensive trust scoring framework for VEX sources that goes beyond simple weighting, incorporating verification status, historical accuracy, and timeliness.
**Differentiation**: Competitors treat VEX as suppression. StellaOps treats VEX as a logical claim system with trust semantics.
---
## Background
Current `TrustWeightEngine` provides basic issuer weighting. The advisory calls for:
- "Verification + trust scoring of VEX sources"
- "Trust frameworks" for network effects
---
## Deliverables
### D1: Trust Scoring Model
- Multi-dimensional trust score: authority, accuracy, timeliness, coverage
- Composite score calculation
- Historical accuracy tracking
### D2: Source Verification
- Signature verification status
- Provenance chain validation
- Issuer identity verification
### D3: Trust Decay
- Time-based trust decay for stale statements
- Recency bonus for fresh assessments
- Revocation/update handling
### D4: Trust Policy Integration
- Policy rules based on trust scores
- Minimum trust thresholds
- Source allowlists/blocklists
### D5: Trust Dashboard
- Source trust scorecards
- Historical accuracy metrics
- Conflict resolution audit
---
## Tasks
### Phase 1: Trust Model
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| TRUST-001 | Define `VexSourceTrustScore` model | TODO | |
| TRUST-002 | Implement authority score (issuer reputation) | TODO | |
| TRUST-003 | Implement accuracy score (historical correctness) | TODO | |
| TRUST-004 | Implement timeliness score (response speed) | TODO | |
| TRUST-005 | Implement coverage score (completeness) | TODO | |
| TRUST-006 | Create composite score calculator | TODO | |
### Phase 2: Verification
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| TRUST-007 | Add signature verification to trust pipeline | TODO | |
| TRUST-008 | Implement provenance chain validator | TODO | |
| TRUST-009 | Create issuer identity registry | TODO | |
| TRUST-010 | Score boost for verified statements | TODO | |
### Phase 3: Decay & Freshness
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| TRUST-011 | Implement time-based trust decay | TODO | |
| TRUST-012 | Add recency bonus calculation | TODO | |
| TRUST-013 | Handle statement revocation | TODO | |
| TRUST-014 | Track statement update history | TODO | |
### Phase 4: Policy Integration
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| TRUST-015 | Add trust threshold to policy rules | TODO | |
| TRUST-016 | Implement source allowlist/blocklist | TODO | |
| TRUST-017 | Create `TrustInsufficientViolation` | TODO | |
| TRUST-018 | Add trust context to consensus engine | TODO | |
### Phase 5: Dashboard & Reporting
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| TRUST-019 | Create source trust scorecard API | TODO | |
| TRUST-020 | Add historical accuracy metrics | TODO | |
| TRUST-021 | Implement conflict resolution audit log | TODO | |
| TRUST-022 | Add trust trends visualization data | TODO | |
---
## Acceptance Criteria
1. **AC1**: Each VEX source has a computed trust score
2. **AC2**: Verified statements receive score boost
3. **AC3**: Stale statements decay appropriately
4. **AC4**: Policy can enforce minimum trust thresholds
5. **AC5**: Trust scorecard available via API
---
## Technical Notes
### Trust Score Model
```csharp
public sealed record VexSourceTrustScore
{
public required string SourceId { get; init; }
// Component scores (0.0 - 1.0)
public required double AuthorityScore { get; init; } // Issuer reputation
public required double AccuracyScore { get; init; } // Historical correctness
public required double TimelinessScore { get; init; } // Response speed
public required double CoverageScore { get; init; } // Completeness
public required double VerificationScore { get; init; } // Signature/provenance
// Composite score with weights
public double CompositeScore =>
AuthorityScore * 0.25 +
AccuracyScore * 0.30 +
TimelinessScore * 0.15 +
CoverageScore * 0.10 +
VerificationScore * 0.20;
public required DateTimeOffset ComputedAt { get; init; }
}
```
### Decay Formula
```
effective_score = base_score * decay_factor
decay_factor = max(0.5, 1.0 - (age_days / max_age_days) * 0.5)
```
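The decay formula above translates directly to code. A minimal Python sketch (the production code would be C# alongside `VexSourceTrustScore`; the 365-day `max_age_days` default is an assumption, not a decided value):

```python
def effective_score(base_score: float, age_days: float,
                    max_age_days: float = 365.0) -> float:
    # Linear decay toward a floor: a statement never loses more than
    # half its base score to staleness alone.
    decay_factor = max(0.5, 1.0 - (age_days / max_age_days) * 0.5)
    return base_score * decay_factor
```

A fresh statement keeps its full score; one at or beyond `max_age_days` is clamped to half.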
### Policy Rule Example
```yaml
vex_trust_rules:
- name: "require-high-trust"
minimum_composite_score: 0.7
require_verification: true
action: block_if_below
```
---
## Risks & Mitigations
| Risk | Impact | Mitigation |
|------|--------|------------|
| Inaccurate accuracy scores | Gaming, distrust | Manual calibration, transparency |
| New sources have no history | Cold start problem | Default scores, grace period |
---
## Documentation Updates
- [ ] Add `docs/modules/vexlens/trust-scoring.md`
- [ ] Update policy DSL for trust rules
- [ ] Create trust tuning guide

# SPRINT_4500 SUMMARY: VEX Hub & Trust Scoring
## Program Overview
| Field | Value |
|-------|-------|
| **Program ID** | 4500 |
| **Theme** | VEX Distribution Network: Aggregation, Trust, and Ecosystem |
| **Priority** | P1 (High) |
| **Total Effort** | ~6 weeks |
| **Advisory Source** | 19-Dec-2025 - Stella Ops candidate features mapped to moat strength |
---
## Strategic Context
The advisory explicitly calls out Aqua's VEX Hub as competitive. This program establishes StellaOps as a trusted VEX distribution layer with:
1. **VEX Hub** — Aggregation, validation, and serving at scale
2. **Trust Scoring** — Multi-dimensional trust assessment of VEX sources
---
## Sprint Breakdown
| Sprint ID | Title | Effort | Moat |
|-----------|-------|--------|------|
| 4500_0001_0001 | VEX Hub Aggregation Service | 4 weeks | 3-4 |
| 4500_0001_0002 | VEX Trust Scoring Framework | 2 weeks | 3-4 |
---
## New Module
This program introduces a new module: `src/VexHub/`
---
## Dependencies
- **Requires**: VexLens (exists)
- **Requires**: Excititor connectors (exist)
- **Requires**: TrustWeightEngine (exists)
---
## Outcomes
1. VEX Hub aggregates statements from all configured sources
2. API enables query by CVE, PURL, source
3. Trivy/Grype can consume VEX from hub URL
4. Trust scores inform consensus decisions
---
## Competitive Positioning
| Competitor | VEX Capability | StellaOps Differentiation |
|------------|----------------|---------------------------|
| Aqua VEX Hub | Centralized repository | +Trust scoring, +Verification, +Decisioning coupling |
| Trivy | VEX consumption | +Aggregation source, +Consensus engine |
| Anchore | VEX annotation | +Multi-source, +Lattice logic |
---
**Sprint Series Status:** TODO
**Created:** 2025-12-22

# SPRINT_4600_0001_0001: SBOM Lineage Ledger
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4600_0001_0001 |
| **Title** | SBOM Lineage Ledger |
| **Priority** | P2 (Medium) |
| **Moat Strength** | 3 (Moderate moat) |
| **Working Directory** | `src/SbomService/`, `src/Graph/` |
| **Estimated Effort** | 3 weeks |
| **Dependencies** | SbomService (exists), Graph module (exists) |
---
## Objective
Build a versioned SBOM ledger that tracks historical changes, enables diff queries, and maintains lineage relationships between SBOM versions for the same artifact.
**Moat strategy**: Make the ledger valuable via **semantic diff, evidence joins, and provenance** rather than just storage.
---
## Background
Current `SbomService` has:
- Basic version events (registered, updated)
- CatalogRecord storage
- Graph indexing
**Gap**: No historical tracking, no lineage semantics, no temporal queries.
---
## Deliverables
### D1: SBOM Version Chain
- Link SBOM versions by artifact identity
- Track version sequence with timestamps
- Support branching (multiple sources for same artifact)
### D2: Historical Query API
- Query SBOM at point-in-time
- Get version history for artifact
- Diff between two versions
### D3: Lineage Graph
- Build/source relationship tracking
- Parent/child SBOM relationships
- Aggregation relationships
### D4: Change Detection
- Detect component additions/removals
- Detect version changes
- Detect license changes
### D5: Retention Policy
- Configurable retention periods
- Archive/prune old versions
- Audit log preservation
---
## Tasks
### Phase 1: Version Chain
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| LEDGER-001 | Design version chain schema | TODO | |
| LEDGER-002 | Implement `SbomVersionChain` entity | TODO | |
| LEDGER-003 | Create version sequencing logic | TODO | |
| LEDGER-004 | Handle branching from multiple sources | TODO | |
| LEDGER-005 | Add version chain queries | TODO | |
### Phase 2: Historical Queries
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| LEDGER-006 | Implement point-in-time SBOM retrieval | TODO | |
| LEDGER-007 | Create version history endpoint | TODO | |
| LEDGER-008 | Implement SBOM diff API | TODO | |
| LEDGER-009 | Add temporal range queries | TODO | |
### Phase 3: Lineage Graph
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| LEDGER-010 | Define lineage relationship types | TODO | |
| LEDGER-011 | Implement parent/child tracking | TODO | |
| LEDGER-012 | Add build relationship links | TODO | |
| LEDGER-013 | Create lineage query API | TODO | |
### Phase 4: Change Detection
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| LEDGER-014 | Implement component diff algorithm | TODO | |
| LEDGER-015 | Detect version changes | TODO | |
| LEDGER-016 | Detect license changes | TODO | |
| LEDGER-017 | Generate change summary | TODO | |
### Phase 5: Retention
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| LEDGER-018 | Add retention policy configuration | TODO | |
| LEDGER-019 | Implement archive job | TODO | |
| LEDGER-020 | Preserve audit log entries | TODO | |
---
## Acceptance Criteria
1. **AC1**: SBOM versions are chained by artifact
2. **AC2**: Can query SBOM at any historical point
3. **AC3**: Diff shows component changes between versions
4. **AC4**: Lineage relationships are queryable
5. **AC5**: Retention policy enforced
---
## Technical Notes
### Version Chain Model
```csharp
public sealed record SbomVersionChain
{
public required Guid ChainId { get; init; }
public required string ArtifactIdentity { get; init; } // PURL or image ref
public required IReadOnlyList<SbomVersionEntry> Versions { get; init; }
}
public sealed record SbomVersionEntry
{
public required Guid VersionId { get; init; }
public required int SequenceNumber { get; init; }
public required string ContentDigest { get; init; }
public required DateTimeOffset CreatedAt { get; init; }
public required string Source { get; init; } // scanner, import, etc.
public Guid? ParentVersionId { get; init; } // For lineage
}
```
### Diff Response
```json
{
"beforeVersion": "v1.2.3",
"afterVersion": "v1.2.4",
"changes": {
"added": [{"purl": "pkg:npm/new-dep@1.0.0", "license": "MIT"}],
"removed": [{"purl": "pkg:npm/old-dep@0.9.0"}],
"upgraded": [{"purl": "pkg:npm/lodash", "from": "4.17.20", "to": "4.17.21"}],
"licenseChanged": []
},
"summary": {
"addedCount": 1,
"removedCount": 1,
"upgradedCount": 1
}
}
```
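The component diff algorithm (LEDGER-014) behind this response can be sketched as a three-way set comparison. A minimal Python sketch, assuming components are keyed by versionless PURL mapped to version — illustrative only; the production code would be C# in `src/SbomService/`:

```python
def diff_components(before: dict[str, str], after: dict[str, str]) -> dict:
    # Keys are versionless PURLs, values are versions. Additions and
    # removals come from key set differences; upgrades from value changes.
    added = sorted(p for p in after if p not in before)
    removed = sorted(p for p in before if p not in after)
    upgraded = [
        {"purl": p, "from": before[p], "to": after[p]}
        for p in sorted(before)
        if p in after and before[p] != after[p]
    ]
    return {"added": added, "removed": removed, "upgraded": upgraded}
```

License-change detection (LEDGER-016) would extend the value type from a bare version to a (version, license) record.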
---
## Documentation Updates
- [ ] Update `docs/modules/sbomservice/architecture.md`
- [ ] Add SBOM lineage guide
- [ ] Document retention policies

# SPRINT_4600_0001_0002: BYOS Ingestion Workflow
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 4600_0001_0002 |
| **Title** | BYOS (Bring Your Own SBOM) Ingestion Workflow |
| **Priority** | P2 (Medium) |
| **Moat Strength** | 3 (Moderate moat) |
| **Working Directory** | `src/SbomService/`, `src/Scanner/`, `src/Cli/` |
| **Estimated Effort** | 2 weeks |
| **Dependencies** | SPRINT_4600_0001_0001, SbomService (exists) |
---
## Objective
Enable customers to bring their own SBOMs (from Syft, SPDX tools, CycloneDX generators, etc.) and have them processed through StellaOps vulnerability correlation, VEX decisioning, and policy evaluation.
**Strategy**: SBOM generation is table stakes. Value comes from what you do with SBOMs.
---
## Background
Competitors like Anchore explicitly position "Bring Your Own SBOM" as a feature. StellaOps should:
1. Accept external SBOMs
2. Validate and normalize them
3. Run full analysis pipeline
4. Produce verdicts
---
## Deliverables
### D1: SBOM Upload API
- REST endpoint for SBOM submission
- Support: SPDX 2.3, SPDX 3.0, CycloneDX 1.4-1.6
- Validation and normalization
### D2: SBOM Validation Pipeline
- Schema validation
- Completeness checks
- Quality scoring
### D3: CLI Upload Command
- `stella sbom upload --file=sbom.json --artifact=<ref>`
- Progress and validation feedback
### D4: Analysis Triggering
- Trigger vulnerability correlation on upload
- Trigger VEX application
- Trigger policy evaluation
### D5: Provenance Tracking
- Record SBOM source (tool, version)
- Track upload metadata
- Link to external CI/CD context
---
## Tasks
| ID | Task | Status | Assignee |
|----|------|--------|----------|
| BYOS-001 | Create SBOM upload API endpoint | TODO | |
| BYOS-002 | Implement format detection (SPDX/CycloneDX) | TODO | |
| BYOS-003 | Add schema validation per format | TODO | |
| BYOS-004 | Implement normalization to internal model | TODO | |
| BYOS-005 | Create quality scoring algorithm | TODO | |
| BYOS-006 | Trigger analysis pipeline on upload | TODO | |
| BYOS-007 | Add `stella sbom upload` CLI | TODO | |
| BYOS-008 | Track SBOM provenance metadata | TODO | |
| BYOS-009 | Link to artifact identity | TODO | |
| BYOS-010 | Integration tests with Syft/CycloneDX outputs | TODO | |
---
## Acceptance Criteria
1. **AC1**: Can upload SPDX 2.3 and 3.0 SBOMs
2. **AC2**: Can upload CycloneDX 1.4-1.6 SBOMs
3. **AC3**: Invalid SBOMs are rejected with clear errors
4. **AC4**: Uploaded SBOM triggers full analysis
5. **AC5**: Provenance is tracked and queryable
---
## Technical Notes
### Upload API
```http
POST /api/v1/sbom/upload
Content-Type: application/json
{
"artifactRef": "my-app:v1.2.3",
"sbom": { ... }, // Or base64 encoded
"format": "cyclonedx", // Auto-detected if omitted
"source": {
"tool": "syft",
"version": "1.0.0",
"ciContext": {
"buildId": "123",
"repository": "github.com/org/repo"
}
}
}
Response:
{
"sbomId": "uuid",
"validationResult": {
"valid": true,
"qualityScore": 0.85,
"warnings": ["Missing supplier information for 3 components"]
},
"analysisJobId": "uuid"
}
```
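Format auto-detection (BYOS-002, used when `format` is omitted) can key off required top-level fields of each specification. A minimal Python sketch — illustrative only, covering just the common JSON discriminators (`bomFormat` for CycloneDX, `spdxVersion` for SPDX 2.x, an `@context` pointing at spdx.org for SPDX 3.0):

```python
def detect_sbom_format(doc: dict) -> str:
    # Discriminate on required top-level keys of each specification.
    if doc.get("bomFormat") == "CycloneDX":
        return "cyclonedx"
    if "spdxVersion" in doc:
        return "spdx"
    ctx = doc.get("@context")
    if isinstance(ctx, str) and "spdx.org" in ctx:
        return "spdx"
    raise ValueError("unrecognized SBOM format")
```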
### Quality Score Factors
- Component completeness (PURL, version, license)
- Relationship coverage
- Hash/checksum presence
- Supplier information
- External reference quality
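One simple way to combine these factors into the `qualityScore` shown in the upload response: for each signal, take the fraction of components carrying it, then average. A minimal Python sketch under that assumption — the actual scoring algorithm (BYOS-005) and its weights are undecided:

```python
def quality_score(components: list[dict]) -> float:
    # Fraction of components carrying each completeness signal, averaged
    # with equal weights across signals.
    if not components:
        return 0.0
    signals = ("purl", "version", "license", "hash", "supplier")
    per_signal = [
        sum(1 for c in components if c.get(field)) / len(components)
        for field in signals
    ]
    return round(sum(per_signal) / len(signals), 2)
```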
---
## Documentation Updates
- [ ] Add BYOS integration guide
- [ ] Document supported formats
- [ ] Create troubleshooting guide for validation errors

# SPRINT_4600 SUMMARY: SBOM Lineage & BYOS Ingestion
## Program Overview
| Field | Value |
|-------|-------|
| **Program ID** | 4600 |
| **Theme** | SBOM Operations: Historical Tracking, Lineage, and Ingestion |
| **Priority** | P2 (Medium) |
| **Total Effort** | ~5 weeks |
| **Advisory Source** | 19-Dec-2025 - Stella Ops candidate features mapped to moat strength |
---
## Strategic Context
SBOM storage is becoming table stakes. Differentiation comes from:
1. **Lineage ledger** — Historical tracking with semantic diff
2. **BYOS ingestion** — Accept external SBOMs into the analysis pipeline
---
## Sprint Breakdown
| Sprint ID | Title | Effort | Moat |
|-----------|-------|--------|------|
| 4600_0001_0001 | SBOM Lineage Ledger | 3 weeks | 3 |
| 4600_0001_0002 | BYOS Ingestion Workflow | 2 weeks | 3 |
---
## Dependencies
- **Requires**: SbomService (exists)
- **Requires**: Graph module (exists)
- **Requires**: SPRINT_4600_0001_0001 for BYOS
---
## Outcomes
1. SBOM versions are chained by artifact identity
2. Historical queries and diffs are available
3. External SBOMs can be uploaded and analyzed
4. Lineage relationships are queryable
---
## Moat Strategy
> "Make the ledger valuable via **semantic diff, evidence joins, and provenance** rather than storage."
---
**Sprint Series Status:** TODO
**Created:** 2025-12-22

# Sprint 6000.0002.0003 · Version Comparator Integration
## Topic & Scope
- Extract existing version comparators from Concelier to shared library.
- Add proof-line generation for UX explainability.
- Reference shared library from BinaryIndex.FixIndex.
- **Working directory:** `src/__Libraries/StellaOps.VersionComparison/`
## Advisory Reference
- **Source:** `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- **Related Sprints:**
- SPRINT_2000_0003_0001 (Alpine connector adds `ApkVersionComparer`)
- SPRINT_4000_0002_0001 (UI consumes proof lines)
## Dependencies & Concurrency
- **Upstream**: None (refactoring existing code)
- **Downstream**: SPRINT_6000.0002.0002 (Fix Index Builder), SPRINT_4000_0002_0001 (Backport UX)
- **Safe to parallelize with**: SPRINT_2000_0003_0001
## Documentation Prerequisites
- `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/Nevra.cs`
- `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/DebianEvr.cs`
- `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
---
## Tasks
### T1: Create StellaOps.VersionComparison Project
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Description**:
Create the shared library project for version comparison.
**Implementation Path**: `src/__Libraries/StellaOps.VersionComparison/`
**Project Structure**:
```
StellaOps.VersionComparison/
├── StellaOps.VersionComparison.csproj
├── IVersionComparator.cs
├── VersionComparisonResult.cs
├── Comparers/
│ ├── RpmVersionComparer.cs
│ ├── DebianVersionComparer.cs
│ └── ApkVersionComparer.cs
├── Models/
│ ├── RpmVersion.cs
│ ├── DebianVersion.cs
│ └── ApkVersion.cs
└── Extensions/
└── ServiceCollectionExtensions.cs
```
**Acceptance Criteria**:
- [ ] Project created with .NET 10 target
- [ ] No external dependencies except System.Collections.Immutable
- [ ] XML documentation enabled
---
### T2: Create IVersionComparator Interface with Proof Support
**Assignee**: Platform Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Define the interface for version comparison with proof-line generation.
**Implementation Path**: `src/__Libraries/StellaOps.VersionComparison/IVersionComparator.cs`
**Acceptance Criteria**:
- [ ] Interface supports both simple Compare and CompareWithProof
- [ ] VersionComparisonResult includes proof lines
- [ ] ComparatorType enum for identification
---
### T3: Extract and Enhance RpmVersionComparer
**Assignee**: Platform Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Extract RPM version comparison logic from Concelier and add proof-line generation.
**Implementation Path**: `src/__Libraries/StellaOps.VersionComparison/Comparers/RpmVersionComparer.cs`
**Acceptance Criteria**:
- [ ] Full rpmvercmp semantics preserved
- [ ] Proof lines generated for each comparison step
- [ ] RpmVersion model for parsed versions
- [ ] Epoch, version, release handled correctly
- [ ] Tilde pre-release handling with proofs
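To give a flavor of "proof lines per comparison step", here is a hedged sketch of one rpmvercmp rule only: numeric segments compare by value, leading zeros are ignored, and a longer remaining digit run is larger. The real extraction must preserve full rpmvercmp semantics; this covers a single decision step.

```csharp
using System;

// Sketch of one rpmvercmp decision step with a proof line attached.
// Numeric segments: strip leading zeros, a longer digit run is larger,
// otherwise ordinal comparison decides.
(int Order, string Proof) CompareNumericSegment(string a, string b)
{
    string ta = a.TrimStart('0'), tb = b.TrimStart('0');
    int order = ta.Length != tb.Length
        ? ta.Length.CompareTo(tb.Length)
        : string.CompareOrdinal(ta, tb);
    order = Math.Sign(order);
    string proof = order == 0
        ? $"numeric segment '{a}' == '{b}' (leading zeros ignored)"
        : $"numeric segment '{a}' {(order < 0 ? "<" : ">")} '{b}'";
    return (order, proof);
}

Console.WriteLine(CompareNumericSegment("10", "9").Proof); // numeric segment '10' > '9'
```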
---
### T4: Extract and Enhance DebianVersionComparer
**Assignee**: Platform Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Extract Debian version comparison logic from Concelier and add proof-line generation.
**Implementation Path**: `src/__Libraries/StellaOps.VersionComparison/Comparers/DebianVersionComparer.cs`
**Acceptance Criteria**:
- [ ] Full dpkg semantics preserved
- [ ] Proof lines generated for each comparison step
- [ ] DebianVersion model for parsed versions
- [ ] Epoch, upstream, revision handled correctly
- [ ] Tilde pre-release handling with proofs
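For orientation, a hedged sketch of the dpkg EVR split this task must preserve: the epoch is everything before the first `:`, the revision everything after the last `-`, and a missing revision is compared as "0" (Debian Policy 5.6.12). Parsing only; the comparison rules themselves live in the extracted comparer.

```csharp
using System;

// Sketch of Debian EVR splitting per dpkg rules. A native package
// version without a '-' gets revision "0" for comparison purposes.
(int Epoch, string Upstream, string Revision) SplitEvr(string evr)
{
    int epoch = 0;
    int colon = evr.IndexOf(':');
    if (colon >= 0)
    {
        epoch = int.Parse(evr[..colon]);
        evr = evr[(colon + 1)..];
    }
    int dash = evr.LastIndexOf('-');
    return dash >= 0
        ? (epoch, evr[..dash], evr[(dash + 1)..])
        : (epoch, evr, "0");
}

Console.WriteLine(SplitEvr("1:2.3.4-5ubuntu1")); // (1, 2.3.4, 5ubuntu1)
```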
---
### T5: Update Concelier to Reference Shared Library
**Assignee**: Concelier Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T3, T4
**Description**:
Update Concelier.Merge to reference the shared library and deprecate local comparers.
**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Merge/`
**Changes**:
1. Add project reference to StellaOps.VersionComparison
2. Mark existing comparers as obsolete with pointer to shared library
3. Create thin wrappers for backward compatibility
4. Update tests to use shared library
**Acceptance Criteria**:
- [ ] Project reference added
- [ ] Existing code paths still work (backward compatible)
- [ ] Obsolete attributes on old comparers
- [ ] All tests pass
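The "thin wrapper" idea in change 3 could look like the sketch below. The namespaces and the placeholder comparison are illustrative assumptions, not the actual Concelier or shared-library code:

```csharp
using System;

namespace StellaOps.VersionComparison.Comparers
{
    // Stand-in for the shared-library comparer; real logic lands in T4.
    public sealed class DebianVersionComparer
    {
        public int Compare(string left, string right)
            => string.CompareOrdinal(left, right); // placeholder, not dpkg semantics
    }
}

namespace StellaOps.Concelier.Merge.Comparers
{
    // The old entry point survives for backward compatibility, but warns
    // callers and forwards every call to the shared library.
    [Obsolete("Use StellaOps.VersionComparison.Comparers.DebianVersionComparer instead.")]
    public static class DebianEvr
    {
        public static int Compare(string left, string right)
            => new StellaOps.VersionComparison.Comparers.DebianVersionComparer()
                   .Compare(left, right);
    }
}
```

Existing call sites compile unchanged but surface a CS0618 warning nudging them toward the shared library.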
---
### T6: Add Reference from BinaryIndex.FixIndex
**Assignee**: BinaryIndex Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T3, T4
**Description**:
Reference the shared version comparison library from BinaryIndex.FixIndex.
**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/`
**Acceptance Criteria**:
- [ ] Project reference added
- [ ] FixIndex uses shared comparers
- [ ] Proof lines available for evidence recording
---
### T7: Unit Tests for Proof-Line Generation
**Assignee**: Platform Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T3, T4
**Description**:
Create comprehensive tests for proof-line generation.
**Implementation Path**: `src/__Libraries/__Tests/StellaOps.VersionComparison.Tests/`
**Test Cases**:
- [ ] RPM epoch comparison proofs
- [ ] RPM tilde pre-release proofs
- [ ] RPM release qualifier proofs
- [ ] Debian epoch comparison proofs
- [ ] Debian revision comparison proofs
- [ ] Debian tilde pre-release proofs
**Acceptance Criteria**:
- [ ] All proof-line formats validated
- [ ] Human-readable output verified
- [ ] Edge cases covered
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Platform Team | Create StellaOps.VersionComparison Project |
| 2 | T2 | TODO | T1 | Platform Team | Create IVersionComparator Interface |
| 3 | T3 | TODO | T1, T2 | Platform Team | Extract and Enhance RpmVersionComparer |
| 4 | T4 | TODO | T1, T2 | Platform Team | Extract and Enhance DebianVersionComparer |
| 5 | T5 | TODO | T3, T4 | Concelier Team | Update Concelier to Reference Shared Library |
| 6 | T6 | TODO | T3, T4 | BinaryIndex Team | Add Reference from BinaryIndex.FixIndex |
| 7 | T7 | TODO | T3, T4 | Platform Team | Unit Tests for Proof-Line Generation |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created. Scope changed from "implement comparators" to "extract existing + add proof generation" based on advisory gap analysis. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Extract vs wrap | Decision | Platform Team | Extract to shared lib, mark old as obsolete, thin wrappers for compat |
| Proof line format | Decision | Platform Team | Human-readable English, suitable for UI display |
| Backward compatibility | Decision | Platform Team | Concelier existing code paths must continue working |
---
## Success Criteria
- [ ] All 7 tasks marked DONE
- [ ] Shared library created and referenced
- [ ] Proof-line generation working for RPM and Debian
- [ ] Concelier backward compatible
- [ ] BinaryIndex.FixIndex using shared library
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds with 100% pass rate
---
## References
- Advisory: `docs/product-advisories/archived/22-Dec-2025 - Getting Distro Backport Logic Right.md`
- Existing comparers: `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/`
- SPRINT_6000_SUMMARY.md (notes on this sprint)
---
*Document Version: 1.0.0*
*Created: 2025-12-22*

---
## SPRINT_6000_SUMMARY.md (excerpt)
| Sprint | Topic | Description |
|--------|-------|-------------|
| 6000.0002.0001 | Fix Evidence Parser | Changelog and patch header parsing |
| 6000.0002.0002 | Fix Index Builder | Merge evidence into fix index |
| 6000.0002.0003 | Version Comparator Integration | **Reference existing Concelier comparators** (see note below) |
| 6000.0002.0004 | RPM Corpus Connector | RHEL/Fedora package ingestion |
**Acceptance:** For a CVE that upstream marks vulnerable, correctly identify distro backport as fixed.
> **Note (2025-12-22):** Sprint 6000.0002.0003 originally planned to implement distro-specific version comparators. However, production-ready comparators already exist in Concelier:
> - `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/Nevra.cs` (RPM)
> - `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/DebianEvr.cs` (Debian/Ubuntu)
> - `src/Concelier/__Libraries/StellaOps.Concelier.Merge/Comparers/ApkVersion.cs` (Alpine, via SPRINT_2000_0003_0001)
>
> This sprint should instead:
> 1. Create a shared `StellaOps.VersionComparison` library extracting existing comparators
> 2. Reference this library from BinaryIndex.FixIndex
> 3. Add proof-line generation per SPRINT_4000_0002_0001
>
> See also:
> - SPRINT_2000_0003_0001 (Alpine connector/comparator)
> - SPRINT_2000_0003_0002 (Comprehensive version tests)
> - SPRINT_4000_0002_0001 (Backport UX explainability)
---
### MVP 3: Binary Fingerprint Factory (Sprint 6000.0003)

---
# SPRINT_7000_0001_0001 - Competitive Benchmarking Infrastructure
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 7000.0001.0001 |
| **Topic** | Competitive Benchmarking Infrastructure |
| **Duration** | 2 weeks |
| **Priority** | HIGH |
| **Status** | TODO |
| **Owner** | QA + Scanner Team |
| **Working Directory** | `src/Scanner/__Libraries/StellaOps.Scanner.Benchmark/` |
---
## Objective
Establish infrastructure to validate and demonstrate Stella Ops' competitive advantages against Trivy, Grype, Syft, and other container scanners through verifiable benchmarks with a ground-truth corpus.
---
## Prerequisites
- [ ] Scanner module functional with SBOM generation
- [ ] Access to competitor CLI tools (Trivy, Grype, Syft)
- [ ] Docker environment for corpus image builds
---
## Delivery Tracker
| ID | Task | Status | Assignee | Notes |
|----|------|--------|----------|-------|
| 7000.0001.01 | Create reference corpus with ground-truth annotations (50+ images) | TODO | | |
| 7000.0001.02 | Build comparison harness: Trivy, Grype, Syft SBOM ingestion | TODO | | |
| 7000.0001.03 | Implement precision/recall/F1 metric calculator | TODO | | |
| 7000.0001.04 | Add findings diff analyzer (TP/FP/TN/FN classification) | TODO | | |
| 7000.0001.05 | Create claims index with evidence links | TODO | | |
| 7000.0001.06 | CI workflow: `benchmark-vs-competitors.yml` | TODO | | |
| 7000.0001.07 | Marketing battlecard generator from benchmark results | TODO | | |
---
## Task Details
### 7000.0001.01: Reference Corpus with Ground-Truth
**Description**: Create a curated corpus of container images with manually verified vulnerability ground truth.
**Deliverables**:
- `bench/competitors/corpus/` directory structure
- 50+ images covering:
- Alpine, Debian, Ubuntu, RHEL base images
- Node.js, Python, Java, .NET application images
- Known CVE scenarios with verified exploitability
- False positive scenarios (backported fixes, unreachable code)
- Ground-truth manifest: `corpus-manifest.json`
```json
{
"images": [
{
"digest": "sha256:...",
"truePositives": ["CVE-2024-1234", "CVE-2024-5678"],
"falsePositives": ["CVE-2024-9999"],
"notes": "CVE-2024-9999 is backported in debian:bookworm"
}
]
}
```
**Acceptance Criteria**:
- [ ] 50+ images with ground-truth annotations
- [ ] Mix of base OS and application images
- [ ] Known FP scenarios documented
- [ ] Corpus reproducible from manifest
---
### 7000.0001.02: Comparison Harness
**Description**: Build harness to run competitor tools and normalize their output for comparison.
**Deliverables**:
- `StellaOps.Scanner.Benchmark.Harness` namespace
- Adapters for:
- Trivy JSON output
- Grype JSON output
- Syft SBOM (CycloneDX/SPDX)
- Normalized finding model: `NormalizedFinding`
- Docker-based runner for competitor tools
**Key Types**:
```csharp
public interface ICompetitorAdapter
{
string ToolName { get; }
Task<ImmutableArray<NormalizedFinding>> ScanAsync(string imageRef, CancellationToken ct);
}
public record NormalizedFinding(
string CveId,
string PackageName,
string PackageVersion,
string Severity,
string Source
);
```
**Acceptance Criteria**:
- [ ] Trivy adapter parses JSON output
- [ ] Grype adapter parses JSON output
- [ ] Syft SBOM ingestion works
- [ ] Normalized output is deterministic
---
### 7000.0001.03: Precision/Recall/F1 Calculator
**Description**: Implement metrics calculator comparing tool output against ground truth.
**Deliverables**:
- `StellaOps.Scanner.Benchmark.Metrics` namespace
- `BenchmarkMetrics` record:
```csharp
public record BenchmarkMetrics(
int TruePositives,
int FalsePositives,
int TrueNegatives,
int FalseNegatives,
double Precision,
double Recall,
double F1Score
);
```
- Per-tool and aggregate metrics
- Breakdown by severity, ecosystem, CVE age
**Acceptance Criteria**:
- [ ] Metrics match manual verification
- [ ] Deterministic output
- [ ] CSV/JSON export
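The metric math itself is small; a sketch with explicit zero guards (rounding and export formats are left to the task):

```csharp
using System;

// Precision / recall / F1 from confusion counts, guarding the empty cases
// so a tool that reports nothing scores 0 rather than dividing by zero.
(double Precision, double Recall, double F1) Compute(int tp, int fp, int fn)
{
    double precision = tp + fp == 0 ? 0 : (double)tp / (tp + fp);
    double recall    = tp + fn == 0 ? 0 : (double)tp / (tp + fn);
    double f1 = precision + recall == 0
        ? 0
        : 2 * precision * recall / (precision + recall);
    return (precision, recall, f1);
}

Console.WriteLine(Compute(8, 2, 2)); // precision 0.8, recall 0.8, F1 0.8
```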
---
### 7000.0001.04: Findings Diff Analyzer
**Description**: Classify findings as TP/FP/TN/FN with detailed reasoning.
**Deliverables**:
- `FindingClassification` enum: `TruePositive`, `FalsePositive`, `TrueNegative`, `FalseNegative`
- Classification report with reasoning
- Drill-down by:
- Package ecosystem
- CVE severity
- Tool
- Reason (backport, version mismatch, unreachable)
**Acceptance Criteria**:
- [ ] Classification logic documented
- [ ] Edge cases handled (version ranges, backports)
- [ ] Report includes reasoning
---
### 7000.0001.05: Claims Index
**Description**: Create verifiable claims index linking marketing claims to benchmark evidence.
**Deliverables**:
- `docs/claims-index.md` with structure:
```markdown
| Claim ID | Claim | Evidence | Verification |
|----------|-------|----------|--------------|
| REACH-001 | "Stella Ops detects 15% more reachable vulns than Trivy" | bench/results/2024-12-22.json | `stella bench verify REACH-001` |
```
- `ClaimsIndex` model in code
- Automated claim verification
**Acceptance Criteria**:
- [ ] 10+ initial claims documented
- [ ] Each claim links to evidence
- [ ] Verification command works
---
### 7000.0001.06: CI Workflow
**Description**: GitHub Actions workflow for automated competitor benchmarking.
**Deliverables**:
- `.gitea/workflows/benchmark-vs-competitors.yml`
- Triggers: weekly, manual, on benchmark code changes
- Outputs:
- Metrics JSON artifact
- Markdown summary
- Claims index update
**Acceptance Criteria**:
- [ ] Workflow runs successfully
- [ ] Artifacts published
- [ ] No secrets exposed
---
### 7000.0001.07: Marketing Battlecard Generator
**Description**: Generate marketing-ready battlecard from benchmark results.
**Deliverables**:
- Markdown battlecard template
- Auto-populated metrics
- Comparison tables
- Key differentiators section
**Acceptance Criteria**:
- [ ] Battlecard generated from latest results
- [ ] Suitable for sales/marketing use
- [ ] Claims linked to evidence
---
## Testing Requirements
| Test Type | Location | Coverage |
|-----------|----------|----------|
| Unit tests | `StellaOps.Scanner.Benchmark.Tests` | Adapters, metrics calculator |
| Integration tests | `StellaOps.Scanner.Benchmark.Integration.Tests` | Full benchmark run |
| Golden fixtures | `bench/competitors/fixtures/` | Deterministic output verification |
---
## Documentation Updates
| Document | Update Required |
|----------|-----------------|
| `docs/claims-index.md` | CREATE - Claims with evidence links |
| `docs/modules/benchmark/architecture.md` | CREATE - Module dossier |
| `docs/testing/benchmark-guide.md` | CREATE - How to run benchmarks |
---
## Decisions & Risks
| ID | Decision/Risk | Status | Resolution |
|----|---------------|--------|------------|
| D1 | Which competitor tool versions to pin? | OPEN | |
| D2 | Corpus storage: Git LFS vs external? | OPEN | |
| R1 | Competitor tool output format changes | OPEN | Version pinning + adapter versioning |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis | Agent |
---
## Required Reading
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/scanner/architecture.md`
- `docs/product-advisories/archived/*/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md`

---
# SPRINT_7000_0001_0002 - SBOM Lineage & Repository Semantics
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 7000.0001.0002 |
| **Topic** | SBOM Lineage & Repository Semantics |
| **Duration** | 2 weeks |
| **Priority** | HIGH |
| **Status** | TODO |
| **Owner** | Scanner Team |
| **Working Directory** | `src/Scanner/__Libraries/StellaOps.Scanner.Emit/` |
---
## Objective
Transform the SBOM from a static document artifact into a stateful ledger with lineage tracking, versioning, semantic diffing, and rebuild reproducibility proofs. This addresses the advisory gap: "SBOM must become a stateful ledger, not a document."
---
## Prerequisites
- [ ] Sprint 7000.0001.0001 (Benchmarking) complete or in progress
- [ ] `StellaOps.Scanner.Emit` CycloneDX/SPDX generation functional
- [ ] Database schema for scanner module accessible
---
## Delivery Tracker
| ID | Task | Status | Assignee | Notes |
|----|------|--------|----------|-------|
| 7000.0002.01 | Design SBOM lineage model (parent refs, diff pointers) | TODO | | |
| 7000.0002.02 | Add `sbom_lineage` table to scanner schema | TODO | | |
| 7000.0002.03 | Implement SBOM versioning with content-addressable storage | TODO | | |
| 7000.0002.04 | Build SBOM semantic diff engine (component-level deltas) | TODO | | |
| 7000.0002.05 | Add rebuild reproducibility proof manifest | TODO | | |
| 7000.0002.06 | API: `GET /sboms/{id}/lineage`, `GET /sboms/diff` | TODO | | |
| 7000.0002.07 | Tests: lineage traversal, diff determinism | TODO | | |
---
## Task Details
### 7000.0002.01: SBOM Lineage Model Design
**Description**: Design the data model for tracking SBOM evolution across image versions.
**Deliverables**:
- `SbomLineage` domain model:
```csharp
public record SbomLineage(
SbomId Id,
SbomId? ParentId,
string ImageDigest,
string ContentHash, // SHA-256 of canonical SBOM
DateTimeOffset CreatedAt,
ImmutableArray<SbomId> Ancestors,
SbomDiffPointer? DiffFromParent
);
public record SbomDiffPointer(
int ComponentsAdded,
int ComponentsRemoved,
int ComponentsModified,
string DiffHash // Hash of diff document
);
```
- Lineage DAG specification
- Content-addressable ID scheme
**Acceptance Criteria**:
- [ ] Model supports DAG (merge scenarios)
- [ ] Content hash is deterministic
- [ ] Diff pointer enables lazy loading
---
### 7000.0002.02: Database Schema
**Description**: Add PostgreSQL schema for SBOM lineage tracking.
**Deliverables**:
- Migration: `scanner.sbom_lineage` table
```sql
CREATE TABLE scanner.sbom_lineage (
id UUID PRIMARY KEY,
parent_id UUID REFERENCES scanner.sbom_lineage(id),
image_digest TEXT NOT NULL,
content_hash TEXT NOT NULL UNIQUE,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
diff_components_added INT,
diff_components_removed INT,
diff_components_modified INT,
diff_hash TEXT
);
CREATE INDEX idx_sbom_lineage_image ON scanner.sbom_lineage(image_digest);
CREATE INDEX idx_sbom_lineage_parent ON scanner.sbom_lineage(parent_id);
```
- Index for lineage traversal
- Constraints for referential integrity
**Acceptance Criteria**:
- [ ] Migration applies cleanly
- [ ] Indexes support efficient traversal
- [ ] FK constraints enforced
---
### 7000.0002.03: Content-Addressable Storage
**Description**: Implement content-addressable storage for SBOMs with deduplication.
**Deliverables**:
- `ISbomStore` interface:
```csharp
public interface ISbomStore
{
Task<SbomId> StoreAsync(Sbom sbom, SbomId? parentId, CancellationToken ct);
Task<Sbom?> GetByHashAsync(string contentHash, CancellationToken ct);
Task<Sbom?> GetByIdAsync(SbomId id, CancellationToken ct);
Task<ImmutableArray<SbomLineage>> GetLineageAsync(SbomId id, CancellationToken ct);
}
```
- Canonical serialization for consistent hashing
- Deduplication on content hash
**Acceptance Criteria**:
- [ ] Identical SBOMs produce identical hashes
- [ ] Deduplication works
- [ ] Lineage query efficient (< 100ms for 100 ancestors)
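Under the assumption that the canonical serialization already exists (sorted keys, stable whitespace), the content hash itself is straightforward:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Content-addressable ID: SHA-256 over the canonical SBOM bytes.
// Determinism depends entirely on the canonical serialization upstream.
string ContentHash(string canonicalSbomJson)
{
    byte[] digest = SHA256.HashData(Encoding.UTF8.GetBytes(canonicalSbomJson));
    return "sha256:" + Convert.ToHexString(digest).ToLowerInvariant();
}

Console.WriteLine(ContentHash("{}"));
```

Deduplication then reduces to a unique index on this hash, which the `content_hash TEXT NOT NULL UNIQUE` column in T2 already provides.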
---
### 7000.0002.04: Semantic Diff Engine
**Description**: Build component-level diff engine that understands SBOM semantics.
**Deliverables**:
- `SbomDiff` model:
```csharp
public record SbomDiff(
SbomId FromId,
SbomId ToId,
ImmutableArray<ComponentDelta> Deltas,
DiffSummary Summary
);
public record ComponentDelta(
ComponentDeltaType Type, // Added, Removed, VersionChanged, LicenseChanged
ComponentRef? Before,
ComponentRef? After,
ImmutableArray<string> ChangedFields
);
public enum ComponentDeltaType { Added, Removed, VersionChanged, LicenseChanged, DependencyChanged }
```
- Diff algorithm preserving component identity across versions
- Deterministic diff output (sorted, stable)
**Acceptance Criteria**:
- [ ] Detects version upgrades/downgrades
- [ ] Detects dependency changes
- [ ] Output is deterministic
- [ ] Handles component renames (via PURL matching)
---
### 7000.0002.05: Rebuild Reproducibility Proof
**Description**: Generate proof manifest that enables reproducible SBOM generation.
**Deliverables**:
- `RebuildProof` model:
```csharp
public record RebuildProof(
SbomId SbomId,
string ImageDigest,
string StellaOpsVersion,
ImmutableArray<FeedSnapshot> FeedSnapshots,
ImmutableArray<AnalyzerVersion> AnalyzerVersions,
string PolicyHash,
DateTimeOffset GeneratedAt
);
public record FeedSnapshot(
string FeedId,
string SnapshotHash,
DateTimeOffset AsOf
);
```
- Proof attestation (DSSE-signed)
- Replay verification command
**Acceptance Criteria**:
- [ ] Proof captures all inputs
- [ ] DSSE-signed
- [ ] Replay produces identical SBOM
---
### 7000.0002.06: Lineage API
**Description**: HTTP API for querying SBOM lineage and diffs.
**Deliverables**:
- `GET /api/v1/sboms/{id}/lineage` - Returns lineage DAG
- `GET /api/v1/sboms/diff?from={id}&to={id}` - Returns semantic diff
- `POST /api/v1/sboms/{id}/verify-rebuild` - Verifies rebuild reproducibility
- OpenAPI spec updates
**Acceptance Criteria**:
- [ ] Lineage returns full ancestor chain
- [ ] Diff is deterministic
- [ ] Verify-rebuild confirms reproducibility
---
### 7000.0002.07: Tests
**Description**: Comprehensive tests for lineage and diff functionality.
**Deliverables**:
- Unit tests: `SbomLineageTests`, `SbomDiffEngineTests`
- Integration tests: `SbomLineageApiTests`
- Golden fixtures: deterministic diff output
- Property-based tests: diff(A, B) + diff(B, C) = diff(A, C)
**Acceptance Criteria**:
- [ ] 85%+ code coverage
- [ ] Golden fixtures pass
- [ ] Property tests pass
---
## Testing Requirements
| Test Type | Location | Coverage |
|-----------|----------|----------|
| Unit tests | `StellaOps.Scanner.Emit.Tests/Lineage/` | Models, diff engine |
| Integration tests | `StellaOps.Scanner.WebService.Tests/Lineage/` | API endpoints |
| Golden fixtures | `src/Scanner/__Tests/Fixtures/Lineage/` | Deterministic output |
---
## Documentation Updates
| Document | Update Required |
|----------|-----------------|
| `docs/api/sbom-lineage-api.md` | CREATE - Lineage API reference |
| `docs/db/schemas/scanner_schema_specification.md` | UPDATE - Add sbom_lineage table |
| `docs/modules/scanner/architecture.md` | UPDATE - Lineage section |
---
## Decisions & Risks
| ID | Decision/Risk | Status | Resolution |
|----|---------------|--------|------------|
| D1 | How to handle SBOM format changes across versions? | OPEN | |
| D2 | Max lineage depth to store? | OPEN | Propose: 1000 |
| R1 | Storage growth with lineage tracking | OPEN | Content deduplication mitigates |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis | Agent |
---
## Required Reading
- `docs/modules/scanner/architecture.md`
- `src/Scanner/__Libraries/StellaOps.Scanner.Emit/AGENTS.md`
- CycloneDX specification (lineage support)

---
# SPRINT_7000_0001_0003 - Explainability with Assumptions & Falsifiability
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 7000.0001.0003 |
| **Topic** | Explainability with Assumptions & Falsifiability |
| **Duration** | 2 weeks |
| **Priority** | HIGH |
| **Status** | TODO |
| **Owner** | Scanner Team + Policy Team |
| **Working Directory** | `src/Scanner/__Libraries/StellaOps.Scanner.Explainability/`, `src/Policy/__Libraries/StellaOps.Policy.Explainability/` |
---
## Objective
Implement auditor-grade explainability that answers four non-negotiable questions for every finding:
1. What exact evidence triggered this finding?
2. What code or binary path makes it reachable?
3. What assumptions are being made?
4. **What would falsify this conclusion?**
This addresses the advisory gap: "No existing scanner answers #4."
---
## Prerequisites
- [ ] Sprint 3500 (Score Proofs) complete
- [ ] `StellaOps.Scanner.EntryTrace.Risk` module available
- [ ] DSSE predicate schemas accessible
---
## Delivery Tracker
| ID | Task | Status | Assignee | Notes |
|----|------|--------|----------|-------|
| 7000.0003.01 | Design assumption-set model (compiler flags, runtime config, feature gates) | TODO | | |
| 7000.0003.02 | Implement `AssumptionSet` record in findings | TODO | | |
| 7000.0003.03 | Design falsifiability criteria model | TODO | | |
| 7000.0003.04 | Add "what would disprove this?" to `RiskExplainer` output | TODO | | |
| 7000.0003.05 | Implement evidence-density confidence scorer | TODO | | |
| 7000.0003.06 | Add assumption-set to DSSE predicate schema | TODO | | |
| 7000.0003.07 | UI: Explainability widget with assumption drill-down | TODO | | |
---
## Task Details
### 7000.0003.01: Assumption-Set Model Design
**Description**: Design the data model for tracking assumptions made during analysis.
**Deliverables**:
- `Assumption` domain model:
```csharp
public record Assumption(
AssumptionCategory Category,
string Key,
string AssumedValue,
string? ObservedValue,
AssumptionSource Source,
ConfidenceLevel Confidence
);
public enum AssumptionCategory
{
CompilerFlag, // -fstack-protector, -D_FORTIFY_SOURCE
RuntimeConfig, // Environment variables, config files
FeatureGate, // Feature flags, build variants
LoaderBehavior, // LD_PRELOAD, RPATH, symbol versioning
NetworkExposure, // Port bindings, firewall rules
ProcessPrivilege // Capabilities, seccomp, AppArmor
}
public enum AssumptionSource { Static, Dynamic, Inferred, Default }
```
- `AssumptionSet` aggregate:
```csharp
public record AssumptionSet(
ImmutableArray<Assumption> Assumptions,
int TotalCount,
int VerifiedCount,
int InferredCount,
double AssumptionRisk // Higher = more unverified assumptions
);
```
**Acceptance Criteria**:
- [ ] All assumption categories covered
- [ ] Confidence levels defined
- [ ] Risk score derivable from assumptions
---
### 7000.0003.02: AssumptionSet in Findings
**Description**: Integrate assumption tracking into finding records.
**Deliverables**:
- Update `VulnerabilityFinding` to include `AssumptionSet`
- Assumption collector during scan:
```csharp
public interface IAssumptionCollector
{
void RecordAssumption(Assumption assumption);
AssumptionSet Build();
}
```
- Wire into Scanner Worker pipeline
**Acceptance Criteria**:
- [ ] Every finding has AssumptionSet
- [ ] Assumptions collected during analysis
- [ ] Deterministic ordering
---
### 7000.0003.03: Falsifiability Criteria Model
**Description**: Design model for expressing what would disprove a finding.
**Deliverables**:
- `FalsifiabilityCriteria` model:
```csharp
public record FalsifiabilityCriteria(
ImmutableArray<FalsificationCondition> Conditions,
string HumanReadable
);
public record FalsificationCondition(
FalsificationCategory Category,
string Description,
string? VerificationCommand, // CLI command to verify
string? VerificationQuery // API query to verify
);
public enum FalsificationCategory
{
CodeRemoved, // "Vulnerable function call removed"
PackageUpgraded, // "Package upgraded past fix version"
ConfigDisabled, // "Vulnerable feature disabled via config"
PathUnreachable, // "Call path no longer reachable from entrypoint"
RuntimeGuarded, // "Runtime check prevents exploitation"
SymbolUnresolved // "Vulnerable symbol not linked"
}
```
- Falsifiability generator per finding type
**Acceptance Criteria**:
- [ ] Every finding has falsifiability criteria
- [ ] Human-readable description
- [ ] Verification command where applicable
---
### 7000.0003.04: RiskExplainer Enhancement
**Description**: Extend `RiskExplainer` to output falsifiability and assumptions.
**Deliverables**:
- Update `RiskReport` to include:
```csharp
public record RiskReport(
RiskAssessment Assessment,
string Explanation,
ImmutableArray<string> Recommendations,
AssumptionSet Assumptions, // NEW
FalsifiabilityCriteria Falsifiability // NEW
);
```
- Natural language generation for:
- "This finding assumes..."
- "To disprove this finding, verify that..."
**Acceptance Criteria**:
- [ ] Explanation includes assumptions
- [ ] Explanation includes falsifiability
- [ ] Language is auditor-appropriate
---
### 7000.0003.05: Evidence-Density Confidence Scorer
**Description**: Implement confidence scoring based on evidence density, not CVSS.
**Deliverables**:
- `EvidenceDensityScorer`:
```csharp
public interface IEvidenceDensityScorer
{
ConfidenceScore Score(EvidenceBundle evidence, AssumptionSet assumptions);
}
public record ConfidenceScore(
double Value, // 0.0 - 1.0
ConfidenceTier Tier, // Confirmed, High, Medium, Low, Speculative
ImmutableArray<string> Factors // What contributed to score
);
public enum ConfidenceTier { Confirmed, High, Medium, Low, Speculative }
```
- Scoring factors:
- Evidence count
- Evidence diversity (static + dynamic + runtime)
- Assumption penalty (more unverified = lower confidence)
- Corroboration bonus (multiple sources agree)
**Acceptance Criteria**:
- [ ] Confidence derived from evidence, not CVSS
- [ ] Deterministic scoring
- [ ] Factors explainable
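A hedged sketch of the factor model: evidence count and diversity add confidence, unverified assumptions subtract, and the result stays in [0, 1]. All weights and caps below are placeholder assumptions, not tuned policy.

```csharp
using System;
using System.Collections.Generic;

// Evidence-density confidence: additive factors, clamped to [0, 1].
// Returns the factor strings alongside the score so the result is explainable.
(double Value, List<string> Factors) Score(int evidenceCount, int distinctSources, int unverifiedAssumptions)
{
    var factors = new List<string>();
    double score = 0.15 * Math.Min(evidenceCount, 4);     // evidence count, capped
    factors.Add($"evidence count {evidenceCount} -> +{0.15 * Math.Min(evidenceCount, 4):0.00}");
    score += 0.10 * Math.Min(distinctSources, 3);         // diversity bonus
    factors.Add($"source diversity {distinctSources} -> +{0.10 * Math.Min(distinctSources, 3):0.00}");
    score -= 0.05 * unverifiedAssumptions;                // assumption penalty
    factors.Add($"unverified assumptions {unverifiedAssumptions} -> -{0.05 * unverifiedAssumptions:0.00}");
    return (Math.Clamp(score, 0.0, 1.0), factors);
}

Console.WriteLine(Score(4, 3, 2).Value); // 0.6 + 0.3 - 0.1
```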
---
### 7000.0003.06: DSSE Predicate Schema Update
**Description**: Add assumption-set and falsifiability to DSSE predicate.
**Deliverables**:
- Schema: `stellaops.dev/predicates/finding@v2`
```json
{
"$schema": "...",
"type": "object",
"properties": {
"finding": { "$ref": "#/definitions/Finding" },
"assumptions": {
"type": "array",
"items": { "$ref": "#/definitions/Assumption" }
},
"falsifiability": {
"type": "object",
"properties": {
"conditions": { "type": "array" },
"humanReadable": { "type": "string" }
}
},
"evidenceConfidence": {
"type": "object",
"properties": {
"value": { "type": "number" },
"tier": { "type": "string" },
"factors": { "type": "array" }
}
}
}
}
```
- Migration path from v1 predicates
**Acceptance Criteria**:
- [ ] Schema validates
- [ ] Backward compatible
- [ ] Registered in predicate registry
---
### 7000.0003.07: UI Explainability Widget
**Description**: Angular component for assumption and falsifiability drill-down.
**Deliverables**:
- `<stellaops-finding-explainer>` component
- Tabs: Evidence | Assumptions | "How to Disprove"
- Assumption table with confidence indicators
- Falsifiability checklist with verification commands
- Copy-to-clipboard for verification commands
**Acceptance Criteria**:
- [ ] Renders for all finding types
- [ ] Assumptions sortable/filterable
- [ ] Verification commands copyable
- [ ] Accessible (WCAG 2.1 AA)
---
## Testing Requirements
| Test Type | Location | Coverage |
|-----------|----------|----------|
| Unit tests | `StellaOps.Scanner.Explainability.Tests/` | Models, scorers |
| Integration tests | `StellaOps.Scanner.WebService.Tests/Explainability/` | API endpoints |
| UI tests | `src/Web/StellaOps.Web/tests/explainability/` | Component tests |
| Golden fixtures | `src/Scanner/__Tests/Fixtures/Explainability/` | Deterministic output |
---
## Documentation Updates
| Document | Update Required |
|----------|-----------------|
| `docs/explainability/assumption-model.md` | CREATE - Assumption-set design |
| `docs/explainability/falsifiability.md` | CREATE - Falsifiability guide |
| `docs/schemas/finding-predicate-v2.md` | CREATE - Schema documentation |
| `docs/api/scanner-findings-api.md` | UPDATE - Explainability fields |
---
## Decisions & Risks
| ID | Decision/Risk | Status | Resolution |
|----|---------------|--------|------------|
| D1 | How to handle assumptions for legacy findings? | OPEN | Propose: empty set with "legacy" flag |
| D2 | Falsifiability verification commands: shell or API? | OPEN | Propose: both where applicable |
| R1 | Performance impact of assumption collection | OPEN | Profile and optimize |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis | Agent |
---
## Required Reading
- `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Risk/AGENTS.md`
- `docs/modules/scanner/architecture.md`
- `docs/product-advisories/archived/*/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md` (Section 4: Explainability)

---
# SPRINT_7000_0001_0004 - Three-Layer Reachability Integration
## Sprint Metadata
| Field | Value |
|-------|-------|
| **Sprint ID** | 7000.0001.0004 |
| **Topic** | Three-Layer Reachability Integration |
| **Duration** | 2 weeks |
| **Priority** | MEDIUM |
| **Status** | TODO |
| **Owner** | Scanner Team |
| **Working Directory** | `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/` |
---
## Objective
Integrate reachability analysis into a formal three-layer model where exploitability is proven only when ALL THREE layers align:
1. **Layer 1: Static Call Graph** - Vulnerable function reachable from entrypoint
2. **Layer 2: Binary Resolution** - Dynamic loader actually links the symbol
3. **Layer 3: Runtime Gating** - No feature flag/config/environment blocks execution
This makes false positives "structurally impossible, not heuristically reduced."
---
## Prerequisites
- [ ] Sprint 7000.0001.0002 (SBOM Lineage) complete or in progress
- [ ] Sprint 7000.0001.0003 (Explainability) complete or in progress
- [ ] `StellaOps.Scanner.EntryTrace` functional (semantic, binary, speculative)
- [ ] `StellaOps.Scanner.CallGraph` extractors functional
---
## Delivery Tracker
| ID | Task | Status | Assignee | Notes |
|----|------|--------|----------|-------|
| 7000.0004.01 | Formalize 3-layer model: `ReachabilityStack` | TODO | | |
| 7000.0004.02 | Layer 1: Wire existing static call-graph extractors | TODO | | |
| 7000.0004.03 | Layer 2: ELF/PE loader rule resolution | TODO | | |
| 7000.0004.04 | Layer 3: Feature flag / config gating detection | TODO | | |
| 7000.0004.05 | Composite evaluator: all-three-align = exploitable | TODO | | |
| 7000.0004.06 | Tests: 3-layer corpus with known reachability | TODO | | |
| 7000.0004.07 | API: `GET /reachability/{id}/stack` with layer breakdown | TODO | | |
---
## Task Details
### 7000.0004.01: Formalize ReachabilityStack Model
**Description**: Design the composite model representing three-layer reachability.
**Deliverables**:
- `ReachabilityStack` model:
```csharp
public record ReachabilityStack(
ReachabilityLayer1 StaticCallGraph,
ReachabilityLayer2 BinaryResolution,
ReachabilityLayer3 RuntimeGating,
ReachabilityVerdict Verdict
);
public record ReachabilityLayer1(
bool IsReachable,
ImmutableArray<CallPath> Paths,
ImmutableArray<string> Entrypoints,
ConfidenceLevel Confidence
);
public record ReachabilityLayer2(
bool IsResolved,
SymbolResolution? Resolution,
LoaderRule? AppliedRule,
ConfidenceLevel Confidence
);
public record ReachabilityLayer3(
bool IsGated,
ImmutableArray<GatingCondition> Conditions,
GatingOutcome Outcome,
ConfidenceLevel Confidence
);
public enum ReachabilityVerdict
{
Exploitable, // All 3 layers confirm
LikelyExploitable, // L1+L2 confirm, L3 unknown
PossiblyExploitable, // L1 confirms, L2+L3 unknown
Unreachable, // Any layer definitively blocks
Unknown // Insufficient data
}
```
**Acceptance Criteria**:
- [ ] All three layers represented
- [ ] Verdict derivation logic defined
- [ ] Confidence propagation documented
---
### 7000.0004.02: Layer 1 - Static Call Graph Integration
**Description**: Wire existing call-graph extractors into Layer 1.
**Deliverables**:
- `ILayer1Analyzer` interface:
```csharp
public interface ILayer1Analyzer
{
Task<ReachabilityLayer1> AnalyzeAsync(
VulnerableSymbol symbol,
CallGraph graph,
ImmutableArray<Entrypoint> entrypoints,
CancellationToken ct
);
}
```
- Integration with:
- `DotNetCallGraphExtractor`
- `NodeCallGraphExtractor`
- `JavaCallGraphExtractor`
- Path witness generation
**Acceptance Criteria**:
- [ ] All existing extractors integrated
- [ ] Paths include method signatures
- [ ] Entrypoints correctly identified
---
### 7000.0004.03: Layer 2 - Binary Loader Resolution
**Description**: Implement dynamic loader rule resolution for ELF and PE binaries.
**Deliverables**:
- `ILayer2Analyzer` interface:
```csharp
public interface ILayer2Analyzer
{
Task<ReachabilityLayer2> AnalyzeAsync(
VulnerableSymbol symbol,
BinaryArtifact binary,
LoaderContext context,
CancellationToken ct
);
}
public record LoaderContext(
ImmutableArray<string> LdLibraryPath,
ImmutableArray<string> Rpath,
ImmutableArray<string> RunPath,
bool HasLdPreload,
SymbolVersioning? Versioning
);
```
- ELF resolution:
- NEEDED entries
- RPATH/RUNPATH handling
- Symbol versioning (GLIBC_2.17, etc.)
- LD_PRELOAD detection
- PE resolution:
- Import table parsing
- Delay-load DLLs
- SxS manifests
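
The ELF ordering rules above can be sketched as a search-order enumerator. This is a minimal, hypothetical sketch, not the final analyzer API: the parameter names mirror `LoaderContext` fields, the class name is illustrative, and real resolution must also consult the ld.so cache and symbol versioning.

```csharp
// Hypothetical sketch of glibc-style candidate directory ordering for
// resolving a NEEDED entry. Not the final Layer 2 API.
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;

public static class ElfSearchOrder
{
    public static IEnumerable<string> CandidateDirectories(
        ImmutableArray<string> rpath,
        ImmutableArray<string> runPath,
        ImmutableArray<string> ldLibraryPath)
    {
        // DT_RPATH is consulted only when DT_RUNPATH is absent (glibc rule).
        if (runPath.IsDefaultOrEmpty)
        {
            foreach (var dir in rpath) yield return dir;
        }

        // Environment override, then DT_RUNPATH.
        foreach (var dir in ldLibraryPath) yield return dir;
        foreach (var dir in runPath) yield return dir;

        // Finally the default directories (the real loader also checks
        // the ld.so cache before these).
        yield return "/lib";
        yield return "/usr/lib";
    }
}
```

LD_PRELOAD is deliberately omitted here: it affects symbol interposition rather than the directory search order, which is why `LoaderContext` tracks it as a separate flag.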
**Acceptance Criteria**:
- [ ] ELF loader rules implemented
- [ ] PE loader rules implemented
- [ ] Symbol versioning handled
- [ ] LD_PRELOAD/DLL injection detected
---
### 7000.0004.04: Layer 3 - Runtime Gating Detection
**Description**: Detect feature flags, configuration, and environment conditions that gate execution.
**Deliverables**:
- `ILayer3Analyzer` interface:
```csharp
public interface ILayer3Analyzer
{
Task<ReachabilityLayer3> AnalyzeAsync(
CallPath path,
RuntimeContext context,
CancellationToken ct
);
}
public record GatingCondition(
GatingType Type,
string Description,
string? ConfigKey,
string? EnvVar,
bool IsBlocking
);
public enum GatingType
{
FeatureFlag, // if (FeatureFlags.UseNewAuth) ...
EnvironmentVariable, // if (Environment.GetEnvironmentVariable("X") != null) ...
ConfigurationValue, // if (config["feature:enabled"] == "true") ...
CompileTimeConditional, // #if DEBUG
PlatformCheck, // if (RuntimeInformation.IsOSPlatform(...))
CapabilityCheck // if (hasCapability(CAP_NET_ADMIN)) ...
}
```
- Integration with:
- `ShellSymbolicExecutor` (speculative execution)
- Static analysis for feature flag patterns
- Config file parsing
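
For the static-analysis side, a minimal sketch of pattern-based gating detection. The regexes, class name, and tuple shape are illustrative only; the real analyzer would map hits to `GatingCondition` records and rely on `ShellSymbolicExecutor` and data-flow analysis to decide whether a gate is blocking.

```csharp
// Hypothetical sketch: regex spotting of common gating patterns in source
// text. Names and patterns are illustrative, not a finalized detector.
using System.Collections.Generic;
using System.Text.RegularExpressions;

public static class GatingPatternScanner
{
    private static readonly (string Kind, Regex Pattern)[] Patterns =
    {
        ("EnvironmentVariable",
         new Regex(@"Environment\.GetEnvironmentVariable\(""(?<key>[^""]+)""\)")),
        ("ConfigurationValue",
         new Regex(@"config\[""(?<key>[^""]+)""\]")),
        ("PlatformCheck",
         new Regex(@"RuntimeInformation\.IsOSPlatform")),
    };

    public static IEnumerable<(string Kind, string? Key)> Scan(string source)
    {
        foreach (var (kind, pattern) in Patterns)
        {
            foreach (Match match in pattern.Matches(source))
            {
                var key = match.Groups["key"].Success ? match.Groups["key"].Value : null;
                // The analyzer would wrap each hit in a GatingCondition;
                // IsBlocking cannot be decided by pattern matching alone.
                yield return (kind, key);
            }
        }
    }
}
```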
**Acceptance Criteria**:
- [ ] Common feature flag patterns detected
- [ ] Environment variable checks detected
- [ ] Platform checks detected
- [ ] Gating blocks marked as blocking/non-blocking
---
### 7000.0004.05: Composite Evaluator
**Description**: Combine all three layers into final verdict.
**Deliverables**:
- `ReachabilityStackEvaluator`:
```csharp
public class ReachabilityStackEvaluator
{
public ReachabilityStack Evaluate(
ReachabilityLayer1 layer1,
ReachabilityLayer2 layer2,
ReachabilityLayer3 layer3
)
{
var verdict = DeriveVerdict(layer1, layer2, layer3);
return new ReachabilityStack(layer1, layer2, layer3, verdict);
}
private ReachabilityVerdict DeriveVerdict(
ReachabilityLayer1 layer1,
ReachabilityLayer2 layer2,
ReachabilityLayer3 layer3)
{
// Any layer that definitively blocks = Unreachable.
if (!layer1.IsReachable ||
!layer2.IsResolved ||
layer3.Conditions.Any(c => c.IsBlocking))
{
return ReachabilityVerdict.Unreachable;
}
// All three confirm reachable = Exploitable.
if (!layer3.IsGated)
{
return ReachabilityVerdict.Exploitable;
}
// Partial confirmation; the truth-table deliverable refines
// Likely/Possibly/Unknown using each layer's ConfidenceLevel.
return ReachabilityVerdict.LikelyExploitable;
}
}
```
- Verdict derivation truth table
- Confidence aggregation
**Acceptance Criteria**:
- [ ] Verdict logic documented as truth table
- [ ] Confidence properly aggregated
- [ ] Edge cases handled (unknown layers)
---
### 7000.0004.06: 3-Layer Test Corpus
**Description**: Create test corpus with known reachability across all three layers.
**Deliverables**:
- `bench/reachability-3layer/` corpus:
- `exploitable/` - All 3 layers confirm
- `unreachable-l1/` - Static graph blocks
- `unreachable-l2/` - Loader blocks (symbol not linked)
- `unreachable-l3/` - Feature flag blocks
- `partial/` - Mixed confidence
- Ground-truth manifest
- Determinism verification
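
One possible shape for the ground-truth manifest, with field names that are illustrative rather than a finalized schema (the symbol name reuses the example from the API response below):

```yaml
# bench/reachability-3layer/ground-truth.yaml (hypothetical schema)
- id: exploitable/libcrypto-decrypt-001
  expectedVerdict: Exploitable
  layer1: { isReachable: true }
  layer2: { isResolved: true, symbol: EVP_DecryptUpdate }
  layer3: { isGated: false }
- id: unreachable-l3/feature-flag-001
  expectedVerdict: Unreachable
  layer3:
    isGated: true
    conditions:
      - { type: FeatureFlag, isBlocking: true }
```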
**Acceptance Criteria**:
- [ ] 20+ test cases per category
- [ ] Ground truth verified manually
- [ ] Deterministic analysis results
---
### 7000.0004.07: Reachability Stack API
**Description**: HTTP API for querying three-layer reachability.
**Deliverables**:
- `GET /api/v1/reachability/{findingId}/stack` - Full 3-layer breakdown
- `GET /api/v1/reachability/{findingId}/stack/layer/{1|2|3}` - Single layer detail
- Response includes:
```json
{
"verdict": "Exploitable",
"layer1": {
"isReachable": true,
"paths": [...],
"confidence": "High"
},
"layer2": {
"isResolved": true,
"resolution": { "symbol": "EVP_DecryptUpdate", "library": "libcrypto.so.1.1" },
"confidence": "Confirmed"
},
"layer3": {
"isGated": false,
"conditions": [],
"confidence": "Medium"
}
}
```
**Acceptance Criteria**:
- [ ] API returns all three layers
- [ ] Drill-down available
- [ ] OpenAPI spec updated
---
## Testing Requirements
| Test Type | Location | Coverage |
|-----------|----------|----------|
| Unit tests | `StellaOps.Scanner.Reachability.Tests/Stack/` | Models, evaluator |
| Integration tests | `StellaOps.Scanner.WebService.Tests/Reachability/` | API endpoints |
| Corpus tests | `StellaOps.Scanner.Reachability.CorpusTests/` | 3-layer corpus |
| Golden fixtures | `src/Scanner/__Tests/Fixtures/Reachability3Layer/` | Deterministic output |
---
## Documentation Updates
| Document | Update Required |
|----------|-----------------|
| `docs/reachability/three-layer-model.md` | CREATE - 3-layer architecture |
| `docs/reachability/verdict-truth-table.md` | CREATE - Verdict derivation |
| `docs/api/reachability-stack-api.md` | CREATE - API reference |
| `docs/modules/scanner/architecture.md` | UPDATE - Reachability section |
---
## Decisions & Risks
| ID | Decision/Risk | Status | Resolution |
|----|---------------|--------|------------|
| D1 | How to handle missing Layer 2/3 data? | OPEN | Propose: degrade to "Possibly" verdict |
| D2 | Layer 3 analysis scope (all configs or allowlist)? | OPEN | Propose: common patterns first |
| R1 | Performance impact of full 3-layer analysis | OPEN | Profile, cache layer results |
| R2 | False negatives from incomplete L3 detection | OPEN | Document known limitations |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from advisory gap analysis | Agent |
---
## Required Reading
- `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/AGENTS.md`
- `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Binary/`
- `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Speculative/`
- `docs/reachability/function-level-evidence.md`
- `docs/product-advisories/archived/*/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md` (Section 6: Call-Stack Reachability)

# Sprint 7000.0001.0001 · Unified Confidence Score Model
## Topic & Scope
- Define unified confidence score aggregating all evidence types
- Implement explainable confidence breakdown per input factor
- Establish bounded computation rules with documentation
**Working directory:** `src/Policy/__Libraries/StellaOps.Policy.Confidence/`
## Dependencies & Concurrency
- **Upstream**: SPRINT_4100_0003_0001 (Risk Verdict Attestation), SPRINT_4100_0002_0001 (Knowledge Snapshot)
- **Downstream**: SPRINT_7000_0001_0002 (Vulnerability-First UX API)
- **Safe to parallelize with**: SPRINT_7000_0003_0001 (Progressive Fidelity)
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `src/Policy/__Libraries/StellaOps.Policy/Scoring/ScoreExplanation.cs`
- `src/Policy/StellaOps.Policy.Engine/Vex/VexDecisionModels.cs`
---
## Problem Statement
The advisory requires: "Confidence score (bounded; explainable inputs)" for each verdict. Currently, confidence exists in VEX (0.0-1.0) but is not unified across all evidence types (reachability, runtime, provenance, policy). Users cannot understand why a verdict has a particular confidence level.
---
## Tasks
### T1: Define ConfidenceScore Model
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: —
**Description**:
Create a unified confidence score model that aggregates multiple input factors.
**Implementation Path**: `Models/ConfidenceScore.cs` (new file)
**Model Definition**:
```csharp
namespace StellaOps.Policy.Confidence.Models;
/// <summary>
/// Unified confidence score aggregating all evidence types.
/// Bounded between 0.0 (no confidence) and 1.0 (full confidence).
/// </summary>
public sealed record ConfidenceScore
{
/// <summary>
/// Final aggregated confidence (0.0 - 1.0).
/// </summary>
public required decimal Value { get; init; }
/// <summary>
/// Confidence tier for quick categorization.
/// </summary>
public ConfidenceTier Tier => Value switch
{
>= 0.9m => ConfidenceTier.VeryHigh,
>= 0.7m => ConfidenceTier.High,
>= 0.5m => ConfidenceTier.Medium,
>= 0.3m => ConfidenceTier.Low,
_ => ConfidenceTier.VeryLow
};
/// <summary>
/// Breakdown of contributing factors.
/// </summary>
public required IReadOnlyList<ConfidenceFactor> Factors { get; init; }
/// <summary>
/// Human-readable explanation of the score.
/// </summary>
public required string Explanation { get; init; }
/// <summary>
/// What would improve this confidence score.
/// </summary>
public IReadOnlyList<ConfidenceImprovement> Improvements { get; init; } = [];
}
/// <summary>
/// A single factor contributing to confidence.
/// </summary>
public sealed record ConfidenceFactor
{
/// <summary>
/// Factor type (reachability, runtime, vex, provenance, policy).
/// </summary>
public required ConfidenceFactorType Type { get; init; }
/// <summary>
/// Weight of this factor in aggregation (0.0 - 1.0).
/// </summary>
public required decimal Weight { get; init; }
/// <summary>
/// Raw value before weighting (0.0 - 1.0).
/// </summary>
public required decimal RawValue { get; init; }
/// <summary>
/// Weighted contribution to final score.
/// </summary>
public decimal Contribution => Weight * RawValue;
/// <summary>
/// Human-readable reason for this value.
/// </summary>
public required string Reason { get; init; }
/// <summary>
/// Evidence digests supporting this factor.
/// </summary>
public IReadOnlyList<string> EvidenceDigests { get; init; } = [];
}
public enum ConfidenceFactorType
{
/// <summary>Call graph reachability analysis.</summary>
Reachability,
/// <summary>Runtime corroboration (eBPF, dyld, ETW).</summary>
Runtime,
/// <summary>VEX statement from vendor/distro.</summary>
Vex,
/// <summary>Build provenance and SBOM quality.</summary>
Provenance,
/// <summary>Policy rule match strength.</summary>
Policy,
/// <summary>Advisory freshness and source quality.</summary>
Advisory
}
public enum ConfidenceTier
{
VeryLow,
Low,
Medium,
High,
VeryHigh
}
/// <summary>
/// Actionable improvement to increase confidence.
/// </summary>
public sealed record ConfidenceImprovement(
ConfidenceFactorType Factor,
string Action,
decimal PotentialGain);
```
**Acceptance Criteria**:
- [ ] `ConfidenceScore.cs` created with all models
- [ ] Bounded 0.0-1.0 with tier categorization
- [ ] Factor breakdown with weights and raw values
- [ ] Improvement suggestions included
- [ ] XML documentation complete
---
### T2: Define Weight Configuration
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1
**Description**:
Create configurable weight schema for confidence factors.
**Implementation Path**: `Configuration/ConfidenceWeightOptions.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Policy.Confidence.Configuration;
/// <summary>
/// Configuration for confidence factor weights.
/// </summary>
public sealed class ConfidenceWeightOptions
{
public const string SectionName = "ConfidenceWeights";
/// <summary>
/// Weight for reachability factor (default: 0.30).
/// </summary>
public decimal Reachability { get; set; } = 0.30m;
/// <summary>
/// Weight for runtime corroboration (default: 0.20).
/// </summary>
public decimal Runtime { get; set; } = 0.20m;
/// <summary>
/// Weight for VEX statements (default: 0.25).
/// </summary>
public decimal Vex { get; set; } = 0.25m;
/// <summary>
/// Weight for provenance quality (default: 0.15).
/// </summary>
public decimal Provenance { get; set; } = 0.15m;
/// <summary>
/// Weight for policy match (default: 0.10).
/// </summary>
public decimal Policy { get; set; } = 0.10m;
/// <summary>
/// Minimum confidence for not_affected verdict.
/// </summary>
public decimal MinimumForNotAffected { get; set; } = 0.70m;
/// <summary>
/// Validates weights sum to 1.0.
/// </summary>
public bool Validate()
{
var sum = Reachability + Runtime + Vex + Provenance + Policy;
return Math.Abs(sum - 1.0m) < 0.001m;
}
}
```
**Sample YAML**:
```yaml
# etc/policy.confidence.yaml
confidenceWeights:
reachability: 0.30
runtime: 0.20
vex: 0.25
provenance: 0.15
policy: 0.10
minimumForNotAffected: 0.70
```
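
To make the aggregation concrete, a worked example using the default weights above; the raw factor values are assumed for illustration (strong evidence across the board):

```csharp
// Assumed raw values: ConfirmedUnreachable reachability at full analysis
// confidence (1.0), supporting runtime (0.9), vendor not_affected VEX at
// trust 0.95, SLSA L3 provenance (1.0), strong policy match (0.9).
decimal value = 0.30m * 1.00m   // reachability
              + 0.20m * 0.90m   // runtime
              + 0.25m * 0.95m   // vex
              + 0.15m * 1.00m   // provenance
              + 0.10m * 0.90m;  // policy
// value == 0.9475m, which maps to ConfidenceTier.VeryHigh (>= 0.9)
```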
**Acceptance Criteria**:
- [ ] `ConfidenceWeightOptions.cs` created
- [ ] Weights sum validation
- [ ] Sample YAML configuration
- [ ] Minimum threshold for not_affected
---
### T3: Create ConfidenceCalculator Service
**Assignee**: Policy Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1, T2
**Description**:
Implement service that calculates unified confidence from all evidence sources.
**Implementation Path**: `Services/ConfidenceCalculator.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Policy.Confidence.Services;
public interface IConfidenceCalculator
{
ConfidenceScore Calculate(ConfidenceInput input);
}
public sealed class ConfidenceCalculator : IConfidenceCalculator
{
private readonly IOptionsMonitor<ConfidenceWeightOptions> _options;
public ConfidenceCalculator(IOptionsMonitor<ConfidenceWeightOptions> options)
{
_options = options;
}
public ConfidenceScore Calculate(ConfidenceInput input)
{
var weights = _options.CurrentValue;
var factors = new List<ConfidenceFactor>();
// Calculate reachability factor
var reachabilityFactor = CalculateReachabilityFactor(input.Reachability, weights.Reachability);
factors.Add(reachabilityFactor);
// Calculate runtime factor
var runtimeFactor = CalculateRuntimeFactor(input.Runtime, weights.Runtime);
factors.Add(runtimeFactor);
// Calculate VEX factor
var vexFactor = CalculateVexFactor(input.Vex, weights.Vex);
factors.Add(vexFactor);
// Calculate provenance factor
var provenanceFactor = CalculateProvenanceFactor(input.Provenance, weights.Provenance);
factors.Add(provenanceFactor);
// Calculate policy factor
var policyFactor = CalculatePolicyFactor(input.Policy, weights.Policy);
factors.Add(policyFactor);
// Aggregate
var totalValue = factors.Sum(f => f.Contribution);
var clampedValue = Math.Clamp(totalValue, 0m, 1m);
// Generate explanation
var explanation = GenerateExplanation(factors, clampedValue);
// Generate improvements
var improvements = GenerateImprovements(factors, weights);
return new ConfidenceScore
{
Value = clampedValue,
Factors = factors,
Explanation = explanation,
Improvements = improvements
};
}
private ConfidenceFactor CalculateReachabilityFactor(
ReachabilityEvidence? evidence, decimal weight)
{
if (evidence is null)
{
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Reachability,
Weight = weight,
RawValue = 0.5m, // Unknown = 50%
Reason = "No reachability analysis performed",
EvidenceDigests = []
};
}
var rawValue = evidence.State switch
{
ReachabilityState.ConfirmedUnreachable => 1.0m,
ReachabilityState.StaticUnreachable => 0.85m,
ReachabilityState.Unknown => 0.5m,
ReachabilityState.StaticReachable => 0.3m,
ReachabilityState.ConfirmedReachable => 0.1m,
_ => 0.5m
};
// Adjust by confidence of the analysis itself
rawValue *= evidence.AnalysisConfidence;
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Reachability,
Weight = weight,
RawValue = rawValue,
Reason = $"Reachability: {evidence.State} (analysis confidence: {evidence.AnalysisConfidence:P0})",
EvidenceDigests = evidence.GraphDigests.ToList()
};
}
private ConfidenceFactor CalculateRuntimeFactor(
RuntimeEvidence? evidence, decimal weight)
{
if (evidence is null || !evidence.HasObservations)
{
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Runtime,
Weight = weight,
RawValue = 0.5m,
Reason = "No runtime observations available",
EvidenceDigests = []
};
}
var rawValue = evidence.Posture switch
{
RuntimePosture.Supports => 0.9m,
RuntimePosture.Contradicts => 0.2m,
RuntimePosture.Unknown => 0.5m,
_ => 0.5m
};
// Bonus for observations within the last 24 hours (the count surfaces in Reason).
var recencyBonus = evidence.ObservedWithinHours(24) ? 0.1m : 0m;
rawValue = Math.Min(1.0m, rawValue + recencyBonus);
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Runtime,
Weight = weight,
RawValue = rawValue,
Reason = $"Runtime {evidence.Posture.ToString().ToLowerInvariant()}: {evidence.ObservationCount} observations",
EvidenceDigests = evidence.SessionDigests.ToList()
};
}
private ConfidenceFactor CalculateVexFactor(
VexEvidence? evidence, decimal weight)
{
if (evidence is null || evidence.Statements.Count == 0)
{
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Vex,
Weight = weight,
RawValue = 0.5m,
Reason = "No VEX statements available",
EvidenceDigests = []
};
}
// Use the best VEX statement (by trust and recency)
var best = evidence.Statements
.OrderByDescending(s => s.TrustScore)
.ThenByDescending(s => s.Timestamp)
.First();
var rawValue = best.Status switch
{
VexStatus.NotAffected => best.TrustScore,
VexStatus.Fixed => best.TrustScore * 0.9m,
VexStatus.UnderInvestigation => 0.4m,
VexStatus.Affected => 0.1m,
_ => 0.5m
};
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Vex,
Weight = weight,
RawValue = rawValue,
Reason = $"VEX {best.Status} from {best.Issuer} (trust: {best.TrustScore:P0})",
EvidenceDigests = [best.StatementDigest]
};
}
private ConfidenceFactor CalculateProvenanceFactor(
ProvenanceEvidence? evidence, decimal weight)
{
if (evidence is null)
{
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Provenance,
Weight = weight,
RawValue = 0.3m,
Reason = "No provenance information",
EvidenceDigests = []
};
}
var rawValue = evidence.Level switch
{
ProvenanceLevel.SlsaLevel3 => 1.0m,
ProvenanceLevel.SlsaLevel2 => 0.85m,
ProvenanceLevel.SlsaLevel1 => 0.7m,
ProvenanceLevel.Signed => 0.6m,
ProvenanceLevel.Unsigned => 0.3m,
_ => 0.3m
};
// SBOM completeness bonus
if (evidence.SbomCompleteness >= 0.9m)
rawValue = Math.Min(1.0m, rawValue + 0.1m);
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Provenance,
Weight = weight,
RawValue = rawValue,
Reason = $"Provenance: {evidence.Level}, SBOM completeness: {evidence.SbomCompleteness:P0}",
EvidenceDigests = evidence.AttestationDigests.ToList()
};
}
private ConfidenceFactor CalculatePolicyFactor(
PolicyEvidence? evidence, decimal weight)
{
if (evidence is null)
{
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Policy,
Weight = weight,
RawValue = 0.5m,
Reason = "No policy evaluation",
EvidenceDigests = []
};
}
// Policy confidence based on rule match quality
var rawValue = evidence.MatchStrength;
return new ConfidenceFactor
{
Type = ConfidenceFactorType.Policy,
Weight = weight,
RawValue = rawValue,
Reason = $"Policy rule '{evidence.RuleName}' matched (strength: {evidence.MatchStrength:P0})",
EvidenceDigests = [evidence.EvaluationDigest]
};
}
private static string GenerateExplanation(
IReadOnlyList<ConfidenceFactor> factors, decimal totalValue)
{
var tier = totalValue switch
{
>= 0.9m => "very high",
>= 0.7m => "high",
>= 0.5m => "medium",
>= 0.3m => "low",
_ => "very low"
};
var topFactors = factors
.OrderByDescending(f => f.Contribution)
.Take(2)
.Select(f => f.Type.ToString().ToLowerInvariant());
return $"Confidence is {tier} ({totalValue:P0}), primarily driven by {string.Join(" and ", topFactors)}.";
}
private static IReadOnlyList<ConfidenceImprovement> GenerateImprovements(
IReadOnlyList<ConfidenceFactor> factors,
ConfidenceWeightOptions weights)
{
var improvements = new List<ConfidenceImprovement>();
foreach (var factor in factors.Where(f => f.RawValue < 0.7m))
{
var (action, potentialGain) = factor.Type switch
{
ConfidenceFactorType.Reachability =>
("Run deeper reachability analysis", factor.Weight * 0.3m),
ConfidenceFactorType.Runtime =>
("Deploy runtime sensor and collect observations", factor.Weight * 0.4m),
ConfidenceFactorType.Vex =>
("Obtain VEX statement from vendor", factor.Weight * 0.4m),
ConfidenceFactorType.Provenance =>
("Add SLSA provenance attestation", factor.Weight * 0.3m),
ConfidenceFactorType.Policy =>
("Review and refine policy rules", factor.Weight * 0.2m),
_ => ("Gather additional evidence", 0.1m)
};
improvements.Add(new ConfidenceImprovement(factor.Type, action, potentialGain));
}
return improvements.OrderByDescending(i => i.PotentialGain).Take(3).ToList();
}
}
/// <summary>
/// Input container for confidence calculation.
/// </summary>
public sealed record ConfidenceInput
{
public ReachabilityEvidence? Reachability { get; init; }
public RuntimeEvidence? Runtime { get; init; }
public VexEvidence? Vex { get; init; }
public ProvenanceEvidence? Provenance { get; init; }
public PolicyEvidence? Policy { get; init; }
}
```
**Acceptance Criteria**:
- [ ] `ConfidenceCalculator.cs` created
- [ ] Calculates all 5 factor types
- [ ] Weights applied correctly
- [ ] Explanation generated automatically
- [ ] Improvements suggested based on low factors
---
### T4: Create Evidence Input Models
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1
**Description**:
Create input models for each evidence type used in confidence calculation.
**Implementation Path**: `Models/ConfidenceEvidence.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Policy.Confidence.Models;
public sealed record ReachabilityEvidence
{
public required ReachabilityState State { get; init; }
public required decimal AnalysisConfidence { get; init; }
public IReadOnlyList<string> GraphDigests { get; init; } = [];
}
public enum ReachabilityState
{
Unknown,
StaticReachable,
StaticUnreachable,
ConfirmedReachable,
ConfirmedUnreachable
}
public sealed record RuntimeEvidence
{
public required RuntimePosture Posture { get; init; }
public required int ObservationCount { get; init; }
public required DateTimeOffset LastObserved { get; init; }
public IReadOnlyList<string> SessionDigests { get; init; } = [];
public bool HasObservations => ObservationCount > 0;
public bool ObservedWithinHours(int hours) =>
LastObserved > DateTimeOffset.UtcNow.AddHours(-hours);
}
public enum RuntimePosture
{
Unknown,
Supports,
Contradicts
}
public sealed record VexEvidence
{
public required IReadOnlyList<VexStatement> Statements { get; init; }
}
public sealed record VexStatement
{
public required VexStatus Status { get; init; }
public required string Issuer { get; init; }
public required decimal TrustScore { get; init; }
public required DateTimeOffset Timestamp { get; init; }
public required string StatementDigest { get; init; }
}
public enum VexStatus
{
Affected,
NotAffected,
Fixed,
UnderInvestigation
}
public sealed record ProvenanceEvidence
{
public required ProvenanceLevel Level { get; init; }
public required decimal SbomCompleteness { get; init; }
public IReadOnlyList<string> AttestationDigests { get; init; } = [];
}
public enum ProvenanceLevel
{
Unsigned,
Signed,
SlsaLevel1,
SlsaLevel2,
SlsaLevel3
}
public sealed record PolicyEvidence
{
public required string RuleName { get; init; }
public required decimal MatchStrength { get; init; }
public required string EvaluationDigest { get; init; }
}
```
**Acceptance Criteria**:
- [ ] All evidence input models defined
- [ ] Enums for state/status values
- [ ] Helper methods (ObservedWithinHours, HasObservations)
- [ ] Digest tracking for audit
---
### T5: Integrate with PolicyEvaluator
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T3
**Description**:
Integrate confidence calculation into policy evaluation pipeline.
**Implementation Path**: Modify `src/Policy/StellaOps.Policy.Engine/Services/PolicyEvaluator.cs`
**Integration**:
```csharp
// Add to PolicyEvaluationResult
public sealed record PolicyEvaluationResult
{
// ... existing fields ...
/// <summary>
/// Unified confidence score for this verdict.
/// </summary>
public ConfidenceScore? Confidence { get; init; }
}
// In PolicyEvaluator.EvaluateAsync
var confidenceInput = BuildConfidenceInput(context, result);
var confidence = _confidenceCalculator.Calculate(confidenceInput);
return result with { Confidence = confidence };
```
**Acceptance Criteria**:
- [ ] `PolicyEvaluationResult` includes `Confidence`
- [ ] Confidence calculated during evaluation
- [ ] All evidence sources mapped to input
---
### T6: Add Tests
**Assignee**: Policy Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T5
**Description**:
Comprehensive tests for confidence calculation.
**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Confidence.Tests/`
**Test Cases**:
```csharp
public class ConfidenceCalculatorTests
{
[Fact]
public void Calculate_AllHighFactors_ReturnsVeryHighConfidence()
{
var input = CreateInput(
reachability: ReachabilityState.ConfirmedUnreachable,
runtime: RuntimePosture.Supports,
vex: VexStatus.NotAffected,
provenance: ProvenanceLevel.SlsaLevel3);
var result = _calculator.Calculate(input);
result.Tier.Should().Be(ConfidenceTier.VeryHigh);
result.Value.Should().BeGreaterOrEqualTo(0.9m);
}
[Fact]
public void Calculate_AllLowFactors_ReturnsLowConfidence()
{
var input = CreateInput(
reachability: ReachabilityState.ConfirmedReachable,
runtime: RuntimePosture.Contradicts,
vex: VexStatus.Affected);
var result = _calculator.Calculate(input);
result.Tier.Should().Be(ConfidenceTier.Low);
}
[Fact]
public void Calculate_MissingEvidence_UsesFallbackValues()
{
var input = new ConfidenceInput(); // All null
var result = _calculator.Calculate(input);
result.Value.Should().BeApproximately(0.5m, 0.05m);
result.Factors.Should().AllSatisfy(f => f.Reason.Should().Contain("No"));
}
[Fact]
public void Calculate_GeneratesImprovements_ForLowFactors()
{
var input = CreateInput(reachability: ReachabilityState.Unknown);
var result = _calculator.Calculate(input);
result.Improvements.Should().Contain(i =>
i.Factor == ConfidenceFactorType.Reachability);
}
[Fact]
public void Calculate_WeightsSumToOne()
{
var options = new ConfidenceWeightOptions();
options.Validate().Should().BeTrue();
}
[Fact]
public void Calculate_FactorContributions_SumToValue()
{
var input = CreateFullInput();
var result = _calculator.Calculate(input);
var sumOfContributions = result.Factors.Sum(f => f.Contribution);
result.Value.Should().BeApproximately(sumOfContributions, 0.001m);
}
}
```
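
The 0.05 tolerance in `Calculate_MissingEvidence_UsesFallbackValues` is not arbitrary: four fallback raw values are 0.5, but the provenance fallback in T3 is 0.3, so the aggregate lands slightly below 0.5. Worked out with the default weights:

```csharp
decimal expected = 0.30m * 0.5m   // reachability fallback
                 + 0.20m * 0.5m   // runtime fallback
                 + 0.25m * 0.5m   // vex fallback
                 + 0.15m * 0.3m   // provenance fallback is 0.3, not 0.5
                 + 0.10m * 0.5m;  // policy fallback
// expected == 0.47m, within the 0.05 tolerance of 0.5
```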
**Acceptance Criteria**:
- [ ] Test for high confidence scenario
- [ ] Test for low confidence scenario
- [ ] Test for missing evidence fallback
- [ ] Test for improvement generation
- [ ] Test for weight validation
- [ ] All tests pass
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Policy Team | Define ConfidenceScore model |
| 2 | T2 | TODO | T1 | Policy Team | Define weight configuration |
| 3 | T3 | TODO | T1, T2 | Policy Team | Create ConfidenceCalculator service |
| 4 | T4 | TODO | T1 | Policy Team | Create evidence input models |
| 5 | T5 | TODO | T3 | Policy Team | Integrate with PolicyEvaluator |
| 6 | T6 | TODO | T1-T5 | Policy Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Five factor types | Decision | Policy Team | Reachability, Runtime, VEX, Provenance, Policy |
| Default weights | Decision | Policy Team | 0.30/0.20/0.25/0.15/0.10 = 1.0 |
| Missing evidence = 0.5 | Decision | Policy Team | Unknown treated as medium confidence |
| Tier thresholds | Decision | Policy Team | VeryHigh ≥0.9, High ≥0.7, Medium ≥0.5, Low ≥0.3 |
---
## Success Criteria
- [ ] All 6 tasks marked DONE
- [ ] Confidence score bounded 0.0-1.0
- [ ] Factor breakdown available for each score
- [ ] Improvements generated for low factors
- [ ] Integration with PolicyEvaluator complete
- [ ] 6+ tests passing
- [ ] `dotnet build` succeeds
- [ ] `dotnet test` succeeds

# Sprint 7000.0001.0002 · Vulnerability-First UX API Contracts
## Topic & Scope
- Define API contracts for vulnerability-first finding views
- Implement verdict chip, confidence, and one-liner summary
- Create proof badge computation logic
- Enable click-through to detailed evidence
**Working directory:** `src/Findings/StellaOps.Findings.WebService/`
## Dependencies & Concurrency
- **Upstream**: SPRINT_7000_0001_0001 (Unified Confidence Model)
- **Downstream**: SPRINT_7000_0002_0001 (Evidence Graph), SPRINT_7000_0002_0002 (Reachability Map), SPRINT_7000_0002_0003 (Runtime Timeline)
- **Safe to parallelize with**: None (depends on confidence model)
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- SPRINT_7000_0001_0001 completion
- `src/Findings/StellaOps.Findings.Ledger/Domain/DecisionModels.cs`
---
## Problem Statement
The advisory requires: "Finding row shows: Verdict chip + confidence + 'why' one-liner + proof badges (Reachability / Runtime / Policy / Provenance)."
Currently, the backend has all necessary data but no unified API contracts for vulnerability-first presentation. Users must aggregate data from multiple endpoints.
---
## Tasks
### T1: Define FindingSummary Contract
**Assignee**: Findings Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: —
**Description**:
Create the unified finding summary response contract.
**Implementation Path**: `Contracts/FindingSummaryContracts.cs` (new file)
**Contract Definition**:
```csharp
namespace StellaOps.Findings.WebService.Contracts;
/// <summary>
/// Compact finding summary for list views.
/// </summary>
public sealed record FindingSummaryResponse
{
/// <summary>
/// Unique finding identifier.
/// </summary>
public required Guid FindingId { get; init; }
/// <summary>
/// Vulnerability ID (CVE-XXXX-XXXXX).
/// </summary>
public required string VulnerabilityId { get; init; }
/// <summary>
/// Affected component PURL.
/// </summary>
public required string ComponentPurl { get; init; }
/// <summary>
/// Affected component version.
/// </summary>
public required string Version { get; init; }
/// <summary>
/// Verdict chip for display.
/// </summary>
public required VerdictChip Verdict { get; init; }
/// <summary>
/// Unified confidence score.
/// </summary>
public required ConfidenceChip Confidence { get; init; }
/// <summary>
/// One-liner explanation of the verdict.
/// </summary>
public required string WhyOneLiner { get; init; }
/// <summary>
/// Proof badges showing evidence status.
/// </summary>
public required ProofBadges Badges { get; init; }
/// <summary>
/// CVSS score if available.
/// </summary>
public decimal? CvssScore { get; init; }
/// <summary>
/// Severity label (Critical, High, Medium, Low).
/// </summary>
public string? Severity { get; init; }
/// <summary>
/// Whether this finding is in CISA KEV.
/// </summary>
public bool IsKev { get; init; }
/// <summary>
/// EPSS score if available.
/// </summary>
public decimal? EpssScore { get; init; }
/// <summary>
/// Last updated timestamp.
/// </summary>
public DateTimeOffset UpdatedAt { get; init; }
}
/// <summary>
/// Verdict chip for UI display.
/// </summary>
public sealed record VerdictChip
{
/// <summary>
/// Verdict status: affected, not_affected, mitigated, needs_review.
/// </summary>
public required string Status { get; init; }
/// <summary>
/// Display label for the chip.
/// </summary>
public required string Label { get; init; }
/// <summary>
/// Color indicator: red, green, yellow, gray.
/// </summary>
public required string Color { get; init; }
/// <summary>
/// Icon name for the chip.
/// </summary>
public required string Icon { get; init; }
}
/// <summary>
/// Confidence chip for UI display.
/// </summary>
public sealed record ConfidenceChip
{
/// <summary>
/// Numeric value (0-100 for percentage display).
/// </summary>
public required int Percentage { get; init; }
/// <summary>
/// Tier label: VeryHigh, High, Medium, Low, VeryLow (matches ConfidenceTier enum names).
/// </summary>
public required string Tier { get; init; }
/// <summary>
/// Color indicator based on tier.
/// </summary>
public required string Color { get; init; }
/// <summary>
/// Tooltip with factor breakdown.
/// </summary>
public required string Tooltip { get; init; }
}
/// <summary>
/// Proof badges showing evidence availability and status.
/// </summary>
public sealed record ProofBadges
{
/// <summary>
/// Reachability proof badge.
/// </summary>
public required ProofBadge Reachability { get; init; }
/// <summary>
/// Runtime corroboration badge.
/// </summary>
public required ProofBadge Runtime { get; init; }
/// <summary>
/// Policy evaluation badge.
/// </summary>
public required ProofBadge Policy { get; init; }
/// <summary>
/// Provenance/SBOM badge.
/// </summary>
public required ProofBadge Provenance { get; init; }
}
/// <summary>
/// Individual proof badge.
/// </summary>
public sealed record ProofBadge
{
/// <summary>
/// Badge status: available, missing, partial, error.
/// </summary>
public required string Status { get; init; }
/// <summary>
/// Whether this proof is available.
/// </summary>
public bool IsAvailable => Status == "available";
/// <summary>
/// Short label for the badge.
/// </summary>
public required string Label { get; init; }
/// <summary>
/// Tooltip with details.
/// </summary>
public required string Tooltip { get; init; }
/// <summary>
/// Link to detailed view (if available).
/// </summary>
public string? DetailUrl { get; init; }
/// <summary>
/// Evidence digest (if available).
/// </summary>
public string? EvidenceDigest { get; init; }
}
/// <summary>
/// Paginated list of finding summaries.
/// </summary>
public sealed record FindingSummaryListResponse
{
public required IReadOnlyList<FindingSummaryResponse> Items { get; init; }
public required int TotalCount { get; init; }
public string? NextCursor { get; init; }
}
```
**Acceptance Criteria**:
- [ ] `FindingSummaryResponse` with all fields
- [ ] `VerdictChip` with status, label, color, icon
- [ ] `ConfidenceChip` with percentage, tier, color
- [ ] `ProofBadges` with four badge types
- [ ] `ProofBadge` with status and detail URL
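
For orientation, a serialized `FindingSummaryResponse` could look like the sketch below. Values are illustrative, and the camelCase property names assume the service's default `System.Text.Json` web options:

```json
{
  "findingId": "0b9c2a4e-5f31-4c8d-9a77-2f6d1e8b3c45",
  "vulnerabilityId": "CVE-2025-12345",
  "componentPurl": "pkg:apk/alpine/openssl@3.1.4-r0",
  "version": "3.1.4-r0",
  "verdict": { "status": "affected", "label": "Affected", "color": "red", "icon": "alert-circle" },
  "confidence": { "percentage": 82, "tier": "High", "color": "blue", "tooltip": "Driven by Reachability: 90%, Runtime: 75%" },
  "whyOneLiner": "Affected: CVE-2025-12345 is actively exploited (KEV).",
  "badges": {
    "reachability": { "status": "available", "label": "Reach", "tooltip": "Reachability: Reachable", "detailUrl": "/api/v1/findings/0b9c2a4e-5f31-4c8d-9a77-2f6d1e8b3c45/reachability-map" },
    "runtime": { "status": "missing", "label": "Runtime", "tooltip": "No runtime observations" },
    "policy": { "status": "available", "label": "Policy", "tooltip": "Policy rule: default-prod" },
    "provenance": { "status": "missing", "label": "Prov", "tooltip": "No provenance information" }
  },
  "cvssScore": 9.8,
  "severity": "Critical",
  "isKev": true,
  "epssScore": 0.93,
  "updatedAt": "2025-12-22T09:49:38+00:00"
}
```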
---
### T2: Create FindingSummaryBuilder
**Assignee**: Findings Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Implement service to build finding summaries from domain models.
**Implementation Path**: `Services/FindingSummaryBuilder.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Findings.WebService.Services;
public interface IFindingSummaryBuilder
{
FindingSummaryResponse Build(Finding finding, EvidenceContext evidence);
}
public sealed class FindingSummaryBuilder : IFindingSummaryBuilder
{
public FindingSummaryResponse Build(Finding finding, EvidenceContext evidence)
{
var verdict = BuildVerdictChip(finding);
var confidence = BuildConfidenceChip(evidence.Confidence);
var badges = BuildProofBadges(finding, evidence);
var oneLiner = GenerateOneLiner(finding, verdict, evidence);
return new FindingSummaryResponse
{
FindingId = finding.Id,
VulnerabilityId = finding.VulnerabilityId,
ComponentPurl = finding.Purl,
Version = finding.Version,
Verdict = verdict,
Confidence = confidence,
WhyOneLiner = oneLiner,
Badges = badges,
CvssScore = finding.CvssScore,
Severity = finding.Severity,
IsKev = finding.IsKev,
EpssScore = finding.EpssScore,
UpdatedAt = finding.UpdatedAt
};
}
private static VerdictChip BuildVerdictChip(Finding finding)
{
return finding.Status switch
{
FindingStatus.Affected => new VerdictChip
{
Status = "affected",
Label = "Affected",
Color = "red",
Icon = "alert-circle"
},
FindingStatus.NotAffected => new VerdictChip
{
Status = "not_affected",
Label = "Not Affected",
Color = "green",
Icon = "check-circle"
},
FindingStatus.Mitigated => new VerdictChip
{
Status = "mitigated",
Label = "Mitigated",
Color = "blue",
Icon = "shield-check"
},
FindingStatus.NeedsReview => new VerdictChip
{
Status = "needs_review",
Label = "Needs Review",
Color = "yellow",
Icon = "help-circle"
},
_ => new VerdictChip
{
Status = "unknown",
Label = "Unknown",
Color = "gray",
Icon = "question-circle"
}
};
}
private static ConfidenceChip BuildConfidenceChip(ConfidenceScore? confidence)
{
if (confidence is null)
{
return new ConfidenceChip
{
Percentage = 50,
Tier = "Unknown",
Color = "gray",
Tooltip = "Confidence not calculated"
};
}
var percentage = (int)(confidence.Value * 100);
var color = confidence.Tier switch
{
ConfidenceTier.VeryHigh => "green",
ConfidenceTier.High => "blue",
ConfidenceTier.Medium => "yellow",
ConfidenceTier.Low => "orange",
ConfidenceTier.VeryLow => "red",
_ => "gray"
};
var topFactors = confidence.Factors
.OrderByDescending(f => f.Contribution)
.Take(2)
.Select(f => $"{f.Type}: {f.RawValue:P0}");
return new ConfidenceChip
{
Percentage = percentage,
Tier = confidence.Tier.ToString(),
Color = color,
Tooltip = $"Driven by {string.Join(", ", topFactors)}"
};
}
private static ProofBadges BuildProofBadges(Finding finding, EvidenceContext evidence)
{
return new ProofBadges
{
Reachability = BuildReachabilityBadge(evidence),
Runtime = BuildRuntimeBadge(evidence),
Policy = BuildPolicyBadge(evidence),
Provenance = BuildProvenanceBadge(evidence)
};
}
private static ProofBadge BuildReachabilityBadge(EvidenceContext evidence)
{
if (evidence.Reachability is null)
{
return new ProofBadge
{
Status = "missing",
Label = "Reach",
Tooltip = "No reachability analysis"
};
}
return new ProofBadge
{
Status = "available",
Label = "Reach",
Tooltip = $"Reachability: {evidence.Reachability.State}",
DetailUrl = $"/api/v1/findings/{evidence.FindingId}/reachability-map",
EvidenceDigest = evidence.Reachability.GraphDigests.FirstOrDefault()
};
}
private static ProofBadge BuildRuntimeBadge(EvidenceContext evidence)
{
if (evidence.Runtime is null || !evidence.Runtime.HasObservations)
{
return new ProofBadge
{
Status = "missing",
Label = "Runtime",
Tooltip = "No runtime observations"
};
}
return new ProofBadge
{
Status = "available",
Label = "Runtime",
Tooltip = $"Runtime: {evidence.Runtime.ObservationCount} observations",
DetailUrl = $"/api/v1/findings/{evidence.FindingId}/runtime-timeline",
EvidenceDigest = evidence.Runtime.SessionDigests.FirstOrDefault()
};
}
private static ProofBadge BuildPolicyBadge(EvidenceContext evidence)
{
if (evidence.Policy is null)
{
return new ProofBadge
{
Status = "missing",
Label = "Policy",
Tooltip = "No policy evaluation"
};
}
return new ProofBadge
{
Status = "available",
Label = "Policy",
Tooltip = $"Policy rule: {evidence.Policy.RuleName}",
DetailUrl = $"/api/v1/findings/{evidence.FindingId}/policy-trace",
EvidenceDigest = evidence.Policy.EvaluationDigest
};
}
private static ProofBadge BuildProvenanceBadge(EvidenceContext evidence)
{
if (evidence.Provenance is null)
{
return new ProofBadge
{
Status = "missing",
Label = "Prov",
Tooltip = "No provenance information"
};
}
return new ProofBadge
{
Status = "available",
Label = "Prov",
Tooltip = $"Provenance: {evidence.Provenance.Level}",
DetailUrl = $"/api/v1/findings/{evidence.FindingId}/provenance",
EvidenceDigest = evidence.Provenance.AttestationDigests.FirstOrDefault()
};
}
private static string GenerateOneLiner(
Finding finding,
VerdictChip verdict,
EvidenceContext evidence)
{
if (verdict.Status == "not_affected" && evidence.Reachability is not null)
{
return $"Not affected: code path to {finding.VulnerabilityId} is not reachable.";
}
if (verdict.Status == "affected" && finding.IsKev)
{
return $"Affected: {finding.VulnerabilityId} is actively exploited (KEV).";
}
if (verdict.Status == "affected")
{
return $"Affected: {finding.VulnerabilityId} impacts {finding.Purl}.";
}
if (verdict.Status == "mitigated")
{
return $"Mitigated: compensating controls address {finding.VulnerabilityId}.";
}
return $"Review required: {finding.VulnerabilityId} needs assessment.";
}
}
```
**Acceptance Criteria**:
- [ ] `FindingSummaryBuilder` implements `IFindingSummaryBuilder`
- [ ] Verdict chip mapping complete
- [ ] Confidence chip with color and tooltip
- [ ] All four proof badges built
- [ ] One-liner generation with context
---
### T3: Create API Endpoints
**Assignee**: Findings Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T2
**Description**:
Create REST API endpoints for finding summaries.
**Implementation Path**: `Endpoints/FindingSummaryEndpoints.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Findings.WebService.Endpoints;
public static class FindingSummaryEndpoints
{
public static void MapFindingSummaryEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/findings")
.WithTags("Finding Summaries")
.RequireAuthorization();
// GET /api/v1/findings?artifact={digest}&limit={n}&cursor={c}
group.MapGet("/", async (
// service and token are bound first so the optional query parameters can carry defaults
IFindingSummaryService service,
CancellationToken ct,
[FromQuery] string? artifact,
[FromQuery] string? vulnerability,
[FromQuery] string? status,
[FromQuery] string? severity,
[FromQuery] int limit = 50,
[FromQuery] string? cursor = null) =>
{
var query = new FindingSummaryQuery
{
ArtifactDigest = artifact,
VulnerabilityId = vulnerability,
Status = status,
Severity = severity,
Limit = Math.Clamp(limit, 1, 100),
Cursor = cursor
};
var result = await service.QueryAsync(query, ct);
return Results.Ok(result);
})
.WithName("ListFindingSummaries")
.WithDescription("List finding summaries with verdict chips and proof badges");
// GET /api/v1/findings/{findingId}/summary
group.MapGet("/{findingId:guid}/summary", async (
Guid findingId,
IFindingSummaryService service,
CancellationToken ct) =>
{
var result = await service.GetSummaryAsync(findingId, ct);
return result is not null
? Results.Ok(result)
: Results.NotFound();
})
.WithName("GetFindingSummary")
.WithDescription("Get detailed finding summary with all badges and evidence links");
// GET /api/v1/findings/{findingId}/evidence-graph
group.MapGet("/{findingId:guid}/evidence-graph", async (
Guid findingId,
IEvidenceGraphService service,
CancellationToken ct) =>
{
var result = await service.GetGraphAsync(findingId, ct);
return result is not null
? Results.Ok(result)
: Results.NotFound();
})
.WithName("GetFindingEvidenceGraph")
.WithDescription("Get evidence graph for click-through visualization");
}
}
```
**Acceptance Criteria**:
- [ ] List endpoint with filtering
- [ ] Single summary endpoint
- [ ] Evidence graph endpoint stub
- [ ] Pagination support
- [ ] OpenAPI documentation
---
### T4: Implement FindingSummaryService
**Assignee**: Findings Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T2, T3
**Description**:
Implement service that aggregates data for finding summaries.
**Implementation Path**: `Services/FindingSummaryService.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Findings.WebService.Services;
public interface IFindingSummaryService
{
Task<FindingSummaryListResponse> QueryAsync(FindingSummaryQuery query, CancellationToken ct);
Task<FindingSummaryResponse?> GetSummaryAsync(Guid findingId, CancellationToken ct);
}
public sealed class FindingSummaryService : IFindingSummaryService
{
private readonly IFindingRepository _findingRepo;
private readonly IEvidenceRepository _evidenceRepo;
private readonly IConfidenceCalculator _confidenceCalculator;
private readonly IFindingSummaryBuilder _builder;
public FindingSummaryService(
IFindingRepository findingRepo,
IEvidenceRepository evidenceRepo,
IConfidenceCalculator confidenceCalculator,
IFindingSummaryBuilder builder)
{
_findingRepo = findingRepo;
_evidenceRepo = evidenceRepo;
_confidenceCalculator = confidenceCalculator;
_builder = builder;
}
public async Task<FindingSummaryListResponse> QueryAsync(
FindingSummaryQuery query,
CancellationToken ct)
{
var findings = await _findingRepo.QueryAsync(query, ct);
var findingIds = findings.Select(f => f.Id).ToList();
// Batch load evidence
var evidenceMap = await _evidenceRepo.GetBatchAsync(findingIds, ct);
var summaries = new List<FindingSummaryResponse>();
foreach (var finding in findings)
{
var evidence = evidenceMap.GetValueOrDefault(finding.Id)
?? new EvidenceContext { FindingId = finding.Id };
// Calculate confidence
var confidenceInput = MapToConfidenceInput(evidence);
evidence.Confidence = _confidenceCalculator.Calculate(confidenceInput);
summaries.Add(_builder.Build(finding, evidence));
}
return new FindingSummaryListResponse
{
Items = summaries,
TotalCount = findings.TotalCount,
NextCursor = findings.NextCursor
};
}
public async Task<FindingSummaryResponse?> GetSummaryAsync(
Guid findingId,
CancellationToken ct)
{
var finding = await _findingRepo.GetByIdAsync(findingId, ct);
if (finding is null) return null;
var evidence = await _evidenceRepo.GetAsync(findingId, ct)
?? new EvidenceContext { FindingId = findingId };
var confidenceInput = MapToConfidenceInput(evidence);
evidence.Confidence = _confidenceCalculator.Calculate(confidenceInput);
return _builder.Build(finding, evidence);
}
private static ConfidenceInput MapToConfidenceInput(EvidenceContext evidence)
{
return new ConfidenceInput
{
Reachability = evidence.Reachability,
Runtime = evidence.Runtime,
Vex = evidence.Vex,
Provenance = evidence.Provenance,
Policy = evidence.Policy
};
}
}
public sealed record FindingSummaryQuery
{
public string? ArtifactDigest { get; init; }
public string? VulnerabilityId { get; init; }
public string? Status { get; init; }
public string? Severity { get; init; }
public int Limit { get; init; } = 50;
public string? Cursor { get; init; }
}
```
**Acceptance Criteria**:
- [ ] Query with filtering and pagination
- [ ] Batch evidence loading for performance
- [ ] Confidence calculation integrated
- [ ] Single finding lookup with full context
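
A page returned by `QueryAsync` would serialize along these lines (item fields elided for brevity; the cursor is an opaque token minted by `IFindingRepository`, and its base64 encoding here is purely illustrative):

```json
{
  "items": [
    {
      "findingId": "0b9c2a4e-5f31-4c8d-9a77-2f6d1e8b3c45",
      "vulnerabilityId": "CVE-2025-12345",
      "verdict": { "status": "affected", "label": "Affected", "color": "red", "icon": "alert-circle" }
    }
  ],
  "totalCount": 137,
  "nextCursor": "eyJsYXN0SWQiOiIwYjljMmE0ZSJ9"
}
```

A client requests the next page by echoing `nextCursor` back as the `cursor` query parameter; a null `nextCursor` signals the final page.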
---
### T5: Add Tests
**Assignee**: Findings Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T4
**Description**:
Tests for finding summary functionality.
**Test Cases**:
```csharp
public class FindingSummaryBuilderTests
{
private readonly FindingSummaryBuilder _builder = new();
[Fact]
public void Build_AffectedFinding_ReturnsRedVerdictChip()
{
var finding = CreateFinding(FindingStatus.Affected);
var evidence = CreateEvidence();
var result = _builder.Build(finding, evidence);
result.Verdict.Status.Should().Be("affected");
result.Verdict.Color.Should().Be("red");
}
[Fact]
public void Build_WithReachabilityEvidence_ReturnsAvailableBadge()
{
var finding = CreateFinding();
var evidence = CreateEvidence(hasReachability: true);
var result = _builder.Build(finding, evidence);
result.Badges.Reachability.Status.Should().Be("available");
result.Badges.Reachability.DetailUrl.Should().NotBeNullOrEmpty();
}
[Fact]
public void Build_WithHighConfidence_ReturnsGreenConfidenceChip()
{
var finding = CreateFinding();
var evidence = CreateEvidence(confidenceValue: 0.9m);
var result = _builder.Build(finding, evidence);
result.Confidence.Tier.Should().Be("VeryHigh");
result.Confidence.Color.Should().Be("green");
}
[Fact]
public void Build_KevFinding_GeneratesKevOneLiner()
{
var finding = CreateFinding(isKev: true);
var evidence = CreateEvidence();
var result = _builder.Build(finding, evidence);
result.WhyOneLiner.Should().Contain("actively exploited");
}
}
```
**Acceptance Criteria**:
- [ ] Verdict chip tests
- [ ] Confidence chip tests
- [ ] Proof badge tests
- [ ] One-liner generation tests
- [ ] All tests pass
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Findings Team | Define FindingSummary contract |
| 2 | T2 | TODO | T1 | Findings Team | Create FindingSummaryBuilder |
| 3 | T3 | TODO | T2 | Findings Team | Create API endpoints |
| 4 | T4 | TODO | T2, T3 | Findings Team | Implement FindingSummaryService |
| 5 | T5 | TODO | T1-T4 | Findings Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Four proof badges | Decision | Findings Team | Reachability, Runtime, Policy, Provenance |
| Color scheme | Decision | Findings Team | Red=affected, Green=not_affected, Yellow=review, Blue=mitigated |
| One-liner logic | Decision | Findings Team | Context-aware based on status and evidence |
---
## Success Criteria
- [ ] All 5 tasks marked DONE
- [ ] API returns complete finding summaries
- [ ] Verdict chips with correct colors
- [ ] Proof badges with detail URLs
- [ ] Confidence integrated
- [ ] Pagination working
- [ ] All tests pass


@@ -0,0 +1,550 @@
# Sprint 7000.0002.0001 · Evidence Graph Visualization API
## Topic & Scope
- Create API for evidence graph visualization
- Model evidence nodes, edges, and derivation relationships
- Include signature status per evidence node
- Enable audit-ready evidence exploration
**Working directory:** `src/Findings/StellaOps.Findings.WebService/`
## Dependencies & Concurrency
- **Upstream**: SPRINT_7000_0001_0002 (Vulnerability-First UX API)
- **Downstream**: None
- **Safe to parallelize with**: SPRINT_7000_0002_0002, SPRINT_7000_0002_0003
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/`
---
## Tasks
### T1: Define EvidenceGraph Model
**Assignee**: Findings Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Description**:
Create the evidence graph response model.
**Implementation Path**: `Contracts/EvidenceGraphContracts.cs` (new file)
**Contract Definition**:
```csharp
namespace StellaOps.Findings.WebService.Contracts;
/// <summary>
/// Evidence graph for a finding showing all contributing evidence.
/// </summary>
public sealed record EvidenceGraphResponse
{
/// <summary>
/// Finding this graph is for.
/// </summary>
public required Guid FindingId { get; init; }
/// <summary>
/// Vulnerability ID.
/// </summary>
public required string VulnerabilityId { get; init; }
/// <summary>
/// All evidence nodes.
/// </summary>
public required IReadOnlyList<EvidenceNode> Nodes { get; init; }
/// <summary>
/// Edges representing derivation relationships.
/// </summary>
public required IReadOnlyList<EvidenceEdge> Edges { get; init; }
/// <summary>
/// Root node (verdict).
/// </summary>
public required string RootNodeId { get; init; }
/// <summary>
/// Graph generation timestamp.
/// </summary>
public required DateTimeOffset GeneratedAt { get; init; }
}
/// <summary>
/// A node in the evidence graph.
/// </summary>
public sealed record EvidenceNode
{
/// <summary>
/// Node identifier (content-addressed).
/// </summary>
public required string Id { get; init; }
/// <summary>
/// Node type.
/// </summary>
public required EvidenceNodeType Type { get; init; }
/// <summary>
/// Human-readable label.
/// </summary>
public required string Label { get; init; }
/// <summary>
/// Content digest (sha256:...).
/// </summary>
public required string Digest { get; init; }
/// <summary>
/// Issuer of this evidence.
/// </summary>
public string? Issuer { get; init; }
/// <summary>
/// Timestamp when created.
/// </summary>
public required DateTimeOffset Timestamp { get; init; }
/// <summary>
/// Signature status.
/// </summary>
public required SignatureStatus Signature { get; init; }
/// <summary>
/// Additional metadata.
/// </summary>
public IReadOnlyDictionary<string, string> Metadata { get; init; }
= new Dictionary<string, string>();
/// <summary>
/// URL to fetch raw content.
/// </summary>
public string? ContentUrl { get; init; }
}
public enum EvidenceNodeType
{
/// <summary>Final verdict.</summary>
Verdict,
/// <summary>Policy evaluation trace.</summary>
PolicyTrace,
/// <summary>VEX statement.</summary>
VexStatement,
/// <summary>Reachability analysis.</summary>
Reachability,
/// <summary>Runtime observation.</summary>
RuntimeObservation,
/// <summary>SBOM component.</summary>
SbomComponent,
/// <summary>Advisory source.</summary>
Advisory,
/// <summary>Build provenance.</summary>
Provenance,
/// <summary>Attestation envelope.</summary>
Attestation
}
/// <summary>
/// Signature verification status.
/// </summary>
public sealed record SignatureStatus
{
/// <summary>
/// Whether signed.
/// </summary>
public required bool IsSigned { get; init; }
/// <summary>
/// Whether signature is valid.
/// </summary>
public bool? IsValid { get; init; }
/// <summary>
/// Signer identity (if known).
/// </summary>
public string? SignerIdentity { get; init; }
/// <summary>
/// Signing timestamp.
/// </summary>
public DateTimeOffset? SignedAt { get; init; }
/// <summary>
/// Key ID used for signing.
/// </summary>
public string? KeyId { get; init; }
/// <summary>
/// Rekor log index (if published).
/// </summary>
public long? RekorLogIndex { get; init; }
}
/// <summary>
/// Edge representing derivation relationship.
/// </summary>
public sealed record EvidenceEdge
{
/// <summary>
/// Source node ID.
/// </summary>
public required string From { get; init; }
/// <summary>
/// Target node ID.
/// </summary>
public required string To { get; init; }
/// <summary>
/// Relationship type.
/// </summary>
public required EvidenceRelation Relation { get; init; }
/// <summary>
/// Human-readable label.
/// </summary>
public string? Label { get; init; }
}
public enum EvidenceRelation
{
/// <summary>Derived from (input to output).</summary>
DerivedFrom,
/// <summary>Verified by (attestation verifies content).</summary>
VerifiedBy,
/// <summary>Supersedes (newer replaces older).</summary>
Supersedes,
/// <summary>References (general reference).</summary>
References,
/// <summary>Corroborates (supports claim).</summary>
Corroborates
}
```
**Acceptance Criteria**:
- [ ] EvidenceGraphResponse with nodes and edges
- [ ] EvidenceNode with type, digest, signature
- [ ] SignatureStatus with Rekor integration
- [ ] EvidenceEdge with relation type
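
As an illustration of the shape, a minimal two-node graph might serialize as follows (digests truncated for readability; string enum values assume a `JsonStringEnumConverter` is registered):

```json
{
  "findingId": "0b9c2a4e-5f31-4c8d-9a77-2f6d1e8b3c45",
  "vulnerabilityId": "CVE-2025-12345",
  "nodes": [
    {
      "id": "sha256:9d2f…",
      "type": "Verdict",
      "label": "not_affected",
      "digest": "sha256:9d2f…",
      "timestamp": "2025-12-22T09:49:38+00:00",
      "signature": { "isSigned": true, "isValid": true, "signerIdentity": "stella-ops-bot", "rekorLogIndex": 123456 }
    },
    {
      "id": "sha256:c41a…",
      "type": "VexStatement",
      "label": "not_affected (vendor VEX)",
      "digest": "sha256:c41a…",
      "issuer": "vendor.example",
      "timestamp": "2025-12-20T10:00:00+00:00",
      "signature": { "isSigned": false }
    }
  ],
  "edges": [
    { "from": "sha256:c41a…", "to": "sha256:9d2f…", "relation": "DerivedFrom", "label": "not_affected" }
  ],
  "rootNodeId": "sha256:9d2f…",
  "generatedAt": "2025-12-22T09:50:00+00:00"
}
```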
---
### T2: Create EvidenceGraphBuilder
**Assignee**: Findings Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Description**:
Build evidence graphs from finding evidence.
**Implementation Path**: `Services/EvidenceGraphBuilder.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Findings.WebService.Services;
public interface IEvidenceGraphBuilder
{
Task<EvidenceGraphResponse> BuildAsync(Guid findingId, CancellationToken ct);
}
public sealed class EvidenceGraphBuilder : IEvidenceGraphBuilder
{
private readonly IEvidenceRepository _evidenceRepo;
private readonly IAttestationVerifier _attestationVerifier;
public EvidenceGraphBuilder(
IEvidenceRepository evidenceRepo,
IAttestationVerifier attestationVerifier)
{
_evidenceRepo = evidenceRepo;
_attestationVerifier = attestationVerifier;
}
public async Task<EvidenceGraphResponse> BuildAsync(
Guid findingId,
CancellationToken ct)
{
var evidence = await _evidenceRepo.GetFullEvidenceAsync(findingId, ct);
var nodes = new List<EvidenceNode>();
var edges = new List<EvidenceEdge>();
// Build verdict node (root)
var verdictNode = BuildVerdictNode(evidence.Verdict);
nodes.Add(verdictNode);
// Build policy trace node
if (evidence.PolicyTrace is not null)
{
var policyNode = await BuildPolicyNodeAsync(evidence.PolicyTrace, ct);
nodes.Add(policyNode);
edges.Add(new EvidenceEdge
{
From = policyNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.DerivedFrom,
Label = "policy evaluation"
});
}
// Build VEX nodes
foreach (var vex in evidence.VexStatements)
{
var vexNode = await BuildVexNodeAsync(vex, ct);
nodes.Add(vexNode);
edges.Add(new EvidenceEdge
{
From = vexNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.DerivedFrom,
Label = vex.Status.ToString().ToLowerInvariant()
});
}
// Build reachability node
if (evidence.Reachability is not null)
{
var reachNode = await BuildReachabilityNodeAsync(evidence.Reachability, ct);
nodes.Add(reachNode);
edges.Add(new EvidenceEdge
{
From = reachNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.Corroborates,
Label = "reachability analysis"
});
}
// Build runtime nodes
foreach (var runtime in evidence.RuntimeObservations)
{
var runtimeNode = await BuildRuntimeNodeAsync(runtime, ct);
nodes.Add(runtimeNode);
edges.Add(new EvidenceEdge
{
From = runtimeNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.Corroborates,
Label = "runtime observation"
});
}
// Build SBOM node
if (evidence.SbomComponent is not null)
{
var sbomNode = BuildSbomNode(evidence.SbomComponent);
nodes.Add(sbomNode);
edges.Add(new EvidenceEdge
{
From = sbomNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.References,
Label = "component"
});
}
// Build provenance node
if (evidence.Provenance is not null)
{
var provNode = await BuildProvenanceNodeAsync(evidence.Provenance, ct);
nodes.Add(provNode);
edges.Add(new EvidenceEdge
{
From = provNode.Id,
To = verdictNode.Id,
Relation = EvidenceRelation.VerifiedBy,
Label = "provenance"
});
}
return new EvidenceGraphResponse
{
FindingId = findingId,
VulnerabilityId = evidence.VulnerabilityId,
Nodes = nodes,
Edges = edges,
RootNodeId = verdictNode.Id,
GeneratedAt = DateTimeOffset.UtcNow
};
}
private async Task<SignatureStatus> VerifySignatureAsync(
string? attestationDigest,
CancellationToken ct)
{
if (attestationDigest is null)
{
return new SignatureStatus { IsSigned = false };
}
var result = await _attestationVerifier.VerifyAsync(attestationDigest, ct);
return new SignatureStatus
{
IsSigned = true,
IsValid = result.IsValid,
SignerIdentity = result.SignerIdentity,
SignedAt = result.SignedAt,
KeyId = result.KeyId,
RekorLogIndex = result.RekorLogIndex
};
}
}
```
**Acceptance Criteria**:
- [ ] Builds complete evidence graph
- [ ] Includes all evidence types
- [ ] Signature verification for each node
- [ ] Proper edge relationships
---
### T3: Create API Endpoint
**Assignee**: Findings Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Description**:
Create the evidence graph API endpoint.
**Implementation Path**: `Endpoints/EvidenceGraphEndpoints.cs` (new file)
```csharp
namespace StellaOps.Findings.WebService.Endpoints;
public static class EvidenceGraphEndpoints
{
public static void MapEvidenceGraphEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/findings")
.WithTags("Evidence Graph")
.RequireAuthorization();
// GET /api/v1/findings/{findingId}/evidence-graph
group.MapGet("/{findingId:guid}/evidence-graph", async (
Guid findingId,
IEvidenceGraphBuilder builder,
CancellationToken ct,
[FromQuery] bool includeContent = false) =>
{
var graph = await builder.BuildAsync(findingId, ct);
return graph is not null
? Results.Ok(graph)
: Results.NotFound();
})
.WithName("GetEvidenceGraph")
.WithDescription("Get evidence graph for finding visualization")
.Produces<EvidenceGraphResponse>(200)
.Produces(404);
// GET /api/v1/findings/{findingId}/evidence/{nodeId}
group.MapGet("/{findingId:guid}/evidence/{nodeId}", async (
Guid findingId,
string nodeId,
IEvidenceContentService contentService,
CancellationToken ct) =>
{
var content = await contentService.GetContentAsync(findingId, nodeId, ct);
return content is not null
? Results.Ok(content)
: Results.NotFound();
})
.WithName("GetEvidenceNodeContent")
.WithDescription("Get raw content for an evidence node");
}
}
```
**Acceptance Criteria**:
- [ ] GET /evidence-graph endpoint
- [ ] GET /evidence/{nodeId} for content
- [ ] OpenAPI documentation
- [ ] 404 handling
---
### T4: Add Tests
**Assignee**: Findings Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class EvidenceGraphBuilderTests
{
[Fact]
public async Task BuildAsync_WithAllEvidence_ReturnsCompleteGraph()
{
var evidence = CreateFullEvidence();
_evidenceRepo.Setup(r => r.GetFullEvidenceAsync(It.IsAny<Guid>(), It.IsAny<CancellationToken>()))
.ReturnsAsync(evidence);
var result = await _builder.BuildAsync(Guid.NewGuid(), CancellationToken.None);
result.Nodes.Should().HaveCountGreaterThan(1);
result.Edges.Should().NotBeEmpty();
result.RootNodeId.Should().NotBeNullOrEmpty();
}
[Fact]
public async Task BuildAsync_SignedAttestation_IncludesSignatureStatus()
{
var evidence = CreateEvidenceWithSignedAttestation();
var result = await _builder.BuildAsync(Guid.NewGuid(), CancellationToken.None);
var signedNode = result.Nodes.First(n => n.Signature.IsSigned);
signedNode.Signature.IsValid.Should().BeTrue();
}
}
```
**Acceptance Criteria**:
- [ ] Graph building tests
- [ ] Signature verification tests
- [ ] Edge relationship tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Findings Team | Define EvidenceGraph model |
| 2 | T2 | TODO | T1 | Findings Team | Create EvidenceGraphBuilder |
| 3 | T3 | TODO | T2 | Findings Team | Create API endpoint |
| 4 | T4 | TODO | T1-T3 | Findings Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Evidence graph includes all node types
- [ ] Signature status verified and displayed
- [ ] API returns valid graph structure
- [ ] All tests pass


@@ -0,0 +1,602 @@
# Sprint 7000.0002.0002 · Reachability Mini-Map API
## Topic & Scope
- Create API for condensed reachability subgraph visualization
- Extract entrypoints → affected component → sinks paths
- Provide visual-friendly serialization for UI rendering
**Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/`
## Dependencies & Concurrency
- **Upstream**: SPRINT_7000_0001_0002 (Vulnerability-First UX API)
- **Downstream**: None
- **Safe to parallelize with**: SPRINT_7000_0002_0001, SPRINT_7000_0002_0003
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/RichGraph.cs`
- `src/Scanner/__Libraries/StellaOps.Scanner.Reachability/Witnesses/PathWitness.cs`
---
## Tasks
### T1: Define ReachabilityMiniMap Model
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `MiniMap/ReachabilityMiniMap.cs` (new file)
**Contract Definition**:
```csharp
namespace StellaOps.Scanner.Reachability.MiniMap;
/// <summary>
/// Condensed reachability visualization for a finding.
/// Shows paths from entrypoints to vulnerable component to sinks.
/// </summary>
public sealed record ReachabilityMiniMap
{
/// <summary>
/// Finding this map is for.
/// </summary>
public required Guid FindingId { get; init; }
/// <summary>
/// Vulnerability ID.
/// </summary>
public required string VulnerabilityId { get; init; }
/// <summary>
/// The vulnerable component.
/// </summary>
public required MiniMapNode VulnerableComponent { get; init; }
/// <summary>
/// Entry points that reach the vulnerable component.
/// </summary>
public required IReadOnlyList<MiniMapEntrypoint> Entrypoints { get; init; }
/// <summary>
/// Paths from entrypoints to vulnerable component.
/// </summary>
public required IReadOnlyList<MiniMapPath> Paths { get; init; }
/// <summary>
/// Overall reachability state.
/// </summary>
public required ReachabilityState State { get; init; }
/// <summary>
/// Confidence of the analysis.
/// </summary>
public required decimal Confidence { get; init; }
/// <summary>
/// Full graph digest for verification.
/// </summary>
public required string GraphDigest { get; init; }
/// <summary>
/// When analysis was performed.
/// </summary>
public required DateTimeOffset AnalyzedAt { get; init; }
}
/// <summary>
/// A node in the mini-map.
/// </summary>
public sealed record MiniMapNode
{
/// <summary>
/// Node identifier.
/// </summary>
public required string Id { get; init; }
/// <summary>
/// Display label.
/// </summary>
public required string Label { get; init; }
/// <summary>
/// Node type.
/// </summary>
public required MiniMapNodeType Type { get; init; }
/// <summary>
/// Package URL (if applicable).
/// </summary>
public string? Purl { get; init; }
/// <summary>
/// Source file location.
/// </summary>
public string? SourceFile { get; init; }
/// <summary>
/// Line number in source.
/// </summary>
public int? LineNumber { get; init; }
}
public enum MiniMapNodeType
{
Entrypoint,
Function,
Class,
Module,
VulnerableComponent,
Sink
}
/// <summary>
/// An entry point in the mini-map.
/// </summary>
public sealed record MiniMapEntrypoint
{
/// <summary>
/// Entry point node.
/// </summary>
public required MiniMapNode Node { get; init; }
/// <summary>
/// Entry point kind.
/// </summary>
public required EntrypointKind Kind { get; init; }
/// <summary>
/// Number of paths from this entrypoint.
/// </summary>
public required int PathCount { get; init; }
/// <summary>
/// Shortest path length to vulnerable component.
/// </summary>
public required int ShortestPathLength { get; init; }
}
public enum EntrypointKind
{
HttpEndpoint,
GrpcMethod,
MessageHandler,
CliCommand,
MainFunction,
PublicApi,
EventHandler,
Other
}
/// <summary>
/// A path from entrypoint to vulnerable component.
/// </summary>
public sealed record MiniMapPath
{
/// <summary>
/// Path identifier.
/// </summary>
public required string PathId { get; init; }
/// <summary>
/// Starting entrypoint ID.
/// </summary>
public required string EntrypointId { get; init; }
/// <summary>
/// Ordered steps in the path.
/// </summary>
public required IReadOnlyList<MiniMapPathStep> Steps { get; init; }
/// <summary>
/// Path length.
/// </summary>
public int Length => Steps.Count;
/// <summary>
/// Whether path has runtime corroboration.
/// </summary>
public bool HasRuntimeEvidence { get; init; }
/// <summary>
/// Confidence for this specific path.
/// </summary>
public decimal PathConfidence { get; init; }
}
/// <summary>
/// A step in a path.
/// </summary>
public sealed record MiniMapPathStep
{
/// <summary>
/// Step index (0-based).
/// </summary>
public required int Index { get; init; }
/// <summary>
/// Node at this step.
/// </summary>
public required MiniMapNode Node { get; init; }
/// <summary>
/// Call type to next step.
/// </summary>
public string? CallType { get; init; }
}
public enum ReachabilityState
{
Unknown,
StaticReachable,
StaticUnreachable,
ConfirmedReachable,
ConfirmedUnreachable
}
```
**Acceptance Criteria**:
- [ ] ReachabilityMiniMap model complete
- [ ] MiniMapNode with type and location
- [ ] MiniMapEntrypoint with kind
- [ ] MiniMapPath with steps
- [ ] XML documentation
---
### T2: Create MiniMapExtractor
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `MiniMap/MiniMapExtractor.cs` (new file)
**Implementation**:
```csharp
using System.Security.Cryptography;
using System.Text;

namespace StellaOps.Scanner.Reachability.MiniMap;
public interface IMiniMapExtractor
{
ReachabilityMiniMap Extract(RichGraph graph, string vulnerableComponent, int maxPaths = 10);
}
public sealed class MiniMapExtractor : IMiniMapExtractor
{
public ReachabilityMiniMap Extract(
RichGraph graph,
string vulnerableComponent,
int maxPaths = 10)
{
// Find vulnerable component node
var vulnNode = graph.Nodes.FirstOrDefault(n =>
n.Purl == vulnerableComponent ||
n.SymbolId?.Contains(vulnerableComponent) == true);
if (vulnNode is null)
{
return CreateNotFoundMap(vulnerableComponent);
}
// Find all entrypoints
var entrypoints = graph.Nodes
.Where(n => IsEntrypoint(n))
.ToList();
// BFS from each entrypoint to vulnerable component
var paths = new List<MiniMapPath>();
var entrypointInfos = new List<MiniMapEntrypoint>();
foreach (var ep in entrypoints)
{
var epPaths = FindPaths(graph, ep, vulnNode, maxDepth: 20);
if (epPaths.Count > 0)
{
entrypointInfos.Add(new MiniMapEntrypoint
{
Node = ToMiniMapNode(ep),
Kind = ClassifyEntrypoint(ep),
PathCount = epPaths.Count,
ShortestPathLength = epPaths.Min(p => p.Length)
});
paths.AddRange(epPaths.Take(maxPaths / entrypoints.Count + 1));
}
}
// Determine state
var state = paths.Count > 0
? (paths.Any(p => p.HasRuntimeEvidence)
? ReachabilityState.ConfirmedReachable
: ReachabilityState.StaticReachable)
: ReachabilityState.StaticUnreachable;
// Calculate confidence
var confidence = CalculateConfidence(paths, entrypointInfos, graph);
return new ReachabilityMiniMap
{
FindingId = Guid.Empty, // Set by caller
VulnerabilityId = string.Empty, // Set by caller
VulnerableComponent = ToMiniMapNode(vulnNode),
Entrypoints = entrypointInfos.OrderBy(e => e.ShortestPathLength).ToList(),
Paths = paths.OrderBy(p => p.Length).Take(maxPaths).ToList(),
State = state,
Confidence = confidence,
GraphDigest = graph.Digest,
AnalyzedAt = DateTimeOffset.UtcNow
};
}
private static bool IsEntrypoint(RichGraphNode node)
{
return node.Kind is "entrypoint" or "export" or "main" or "handler";
}
private static EntrypointKind ClassifyEntrypoint(RichGraphNode node)
{
if (node.Attributes.TryGetValue("http_method", out _))
return EntrypointKind.HttpEndpoint;
if (node.Attributes.TryGetValue("grpc_service", out _))
return EntrypointKind.GrpcMethod;
if (node.Kind == "main")
return EntrypointKind.MainFunction;
if (node.Kind == "handler")
return EntrypointKind.EventHandler;
if (node.Attributes.TryGetValue("cli_command", out _))
return EntrypointKind.CliCommand;
return EntrypointKind.PublicApi;
}
private List<MiniMapPath> FindPaths(
RichGraph graph,
RichGraphNode start,
RichGraphNode end,
int maxDepth)
{
var paths = new List<MiniMapPath>();
var queue = new Queue<(RichGraphNode node, List<RichGraphNode> path)>();
queue.Enqueue((start, [start]));
while (queue.Count > 0 && paths.Count < 100)
{
var (current, path) = queue.Dequeue();
if (path.Count > maxDepth) continue;
if (current.Id == end.Id)
{
paths.Add(BuildPath(path, graph));
continue;
}
var edges = graph.Edges.Where(e => e.From == current.Id);
foreach (var edge in edges)
{
var nextNode = graph.Nodes.FirstOrDefault(n => n.Id == edge.To);
if (nextNode is not null && !path.Any(n => n.Id == nextNode.Id))
{
queue.Enqueue((nextNode, [.. path, nextNode]));
}
}
}
return paths;
}
private static MiniMapPath BuildPath(List<RichGraphNode> nodes, RichGraph graph)
{
var steps = nodes.Select((n, i) =>
{
var edge = i < nodes.Count - 1
? graph.Edges.FirstOrDefault(e => e.From == n.Id && e.To == nodes[i + 1].Id)
: null;
return new MiniMapPathStep
{
Index = i,
Node = ToMiniMapNode(n),
CallType = edge?.Kind
};
}).ToList();
var hasRuntime = graph.Edges
.Where(e => nodes.Any(n => n.Id == e.From))
.Any(e => e.Evidence?.Contains("runtime") == true);
return new MiniMapPath
{
PathId = $"path:{ComputePathHash(nodes)}",
EntrypointId = nodes.First().Id,
Steps = steps,
HasRuntimeEvidence = hasRuntime,
PathConfidence = hasRuntime ? 0.95m : 0.75m
};
}
private static MiniMapNode ToMiniMapNode(RichGraphNode node)
{
return new MiniMapNode
{
Id = node.Id,
Label = node.Display ?? node.SymbolId ?? node.Id,
Type = node.Kind switch
{
"entrypoint" or "export" or "main" => MiniMapNodeType.Entrypoint,
"function" or "method" => MiniMapNodeType.Function,
"class" => MiniMapNodeType.Class,
"module" or "package" => MiniMapNodeType.Module,
"sink" => MiniMapNodeType.Sink,
_ => MiniMapNodeType.Function
},
Purl = node.Purl,
SourceFile = node.Attributes.GetValueOrDefault("source_file"),
LineNumber = node.Attributes.TryGetValue("line", out var line) && int.TryParse(line, out var lineNo) ? lineNo : (int?)null
};
}
private static decimal CalculateConfidence(
List<MiniMapPath> paths,
List<MiniMapEntrypoint> entrypoints,
RichGraph graph)
{
if (paths.Count == 0) return 0.9m; // High confidence in unreachability
var runtimePaths = paths.Count(p => p.HasRuntimeEvidence);
var runtimeRatio = (decimal)runtimePaths / paths.Count;
return 0.6m + (0.3m * runtimeRatio);
}
private static string ComputePathHash(List<RichGraphNode> nodes)
{
var ids = string.Join("|", nodes.Select(n => n.Id));
return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(ids)))[..16].ToLowerInvariant();
}
}
```
**Acceptance Criteria**:
- [ ] Extracts paths from RichGraph
- [ ] Classifies entrypoints correctly
- [ ] BFS path finding with depth limit
- [ ] Confidence calculation
- [ ] Runtime evidence detection
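As a sanity check on the scoring in `CalculateConfidence`, here is a minimal standalone sketch; the helper name `MiniMapConfidence` is illustrative and not part of the codebase:

```csharp
using System;

// Mirrors CalculateConfidence: unreachable findings get a flat 0.9 (high
// confidence in unreachability); reachable findings start at 0.6 and gain
// up to 0.3 as the share of runtime-corroborated paths grows.
static decimal MiniMapConfidence(int totalPaths, int runtimePaths)
{
    if (totalPaths == 0) return 0.9m;
    var runtimeRatio = (decimal)runtimePaths / totalPaths;
    return 0.6m + (0.3m * runtimeRatio);
}

Console.WriteLine(MiniMapConfidence(0, 0)); // no paths: 0.9
Console.WriteLine(MiniMapConfidence(4, 0)); // static only: 0.6
Console.WriteLine(MiniMapConfidence(4, 2)); // half corroborated: 0.75
Console.WriteLine(MiniMapConfidence(4, 4)); // fully corroborated: 0.9
```

Note that a fully runtime-corroborated reachable finding and a statically unreachable one land on the same 0.9 ceiling, which keeps confidence bounded below certainty for either verdict.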
---
### T3: Create API Endpoint
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Implementation Path**: `src/Findings/StellaOps.Findings.WebService/Endpoints/ReachabilityMapEndpoints.cs`
```csharp
using Microsoft.AspNetCore.Mvc;
using StellaOps.Scanner.Reachability.MiniMap;

namespace StellaOps.Findings.WebService.Endpoints;
public static class ReachabilityMapEndpoints
{
public static void MapReachabilityMapEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/findings")
.WithTags("Reachability")
.RequireAuthorization();
// GET /api/v1/findings/{findingId}/reachability-map
group.MapGet("/{findingId:guid}/reachability-map", async (
Guid findingId,
IReachabilityMapService service,
CancellationToken ct,
[FromQuery] int maxPaths = 10) =>
{
var map = await service.GetMiniMapAsync(findingId, maxPaths, ct);
return map is not null
? Results.Ok(map)
: Results.NotFound();
})
.WithName("GetReachabilityMiniMap")
.WithDescription("Get condensed reachability visualization")
.Produces<ReachabilityMiniMap>(200)
.Produces(404);
}
}
```
**Acceptance Criteria**:
- [ ] GET endpoint implemented
- [ ] maxPaths query parameter
- [ ] OpenAPI documentation
---
### T4: Add Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class MiniMapExtractorTests
{
[Fact]
public void Extract_ReachableComponent_ReturnsPaths()
{
var graph = CreateGraphWithPaths();
var result = _extractor.Extract(graph, "pkg:npm/vulnerable@1.0.0");
result.State.Should().Be(ReachabilityState.StaticReachable);
result.Paths.Should().NotBeEmpty();
result.Entrypoints.Should().NotBeEmpty();
}
[Fact]
public void Extract_UnreachableComponent_ReturnsEmptyPaths()
{
var graph = CreateGraphWithoutPaths();
var result = _extractor.Extract(graph, "pkg:npm/isolated@1.0.0");
result.State.Should().Be(ReachabilityState.StaticUnreachable);
result.Paths.Should().BeEmpty();
}
[Fact]
public void Extract_WithRuntimeEvidence_ReturnsConfirmedReachable()
{
var graph = CreateGraphWithRuntimeEvidence();
var result = _extractor.Extract(graph, "pkg:npm/vulnerable@1.0.0");
result.State.Should().Be(ReachabilityState.ConfirmedReachable);
result.Paths.Should().Contain(p => p.HasRuntimeEvidence);
}
}
```
**Acceptance Criteria**:
- [ ] Reachable component tests
- [ ] Unreachable component tests
- [ ] Runtime evidence tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define ReachabilityMiniMap model |
| 2 | T2 | TODO | T1 | Scanner Team | Create MiniMapExtractor |
| 3 | T3 | TODO | T2 | Scanner Team | Create API endpoint |
| 4 | T4 | TODO | T1-T3 | Scanner Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Mini-map shows entrypoints to vulnerable component
- [ ] Paths with runtime evidence highlighted
- [ ] Confidence reflects analysis quality
- [ ] All tests pass

# Sprint 7000.0002.0003 · Runtime Timeline API
## Topic & Scope
- Create API for runtime corroboration timeline visualization
- Show time-windowed load events, syscalls, network exposure
- Map observations to supports/contradicts/unknown posture
**Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Native/`
## Dependencies & Concurrency
- **Upstream**: SPRINT_7000_0001_0002 (Vulnerability-First UX API)
- **Downstream**: None
- **Safe to parallelize with**: SPRINT_7000_0002_0001, SPRINT_7000_0002_0002
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Native/RuntimeCapture/`
---
## Tasks
### T1: Define RuntimeTimeline Model
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `RuntimeCapture/Timeline/RuntimeTimeline.cs` (new file)
**Contract Definition**:
```csharp
namespace StellaOps.Scanner.Analyzers.Native.RuntimeCapture.Timeline;
/// <summary>
/// Runtime observation timeline for a finding.
/// </summary>
public sealed record RuntimeTimeline
{
/// <summary>
/// Finding this timeline is for.
/// </summary>
public required Guid FindingId { get; init; }
/// <summary>
/// Vulnerable component being tracked.
/// </summary>
public required string ComponentPurl { get; init; }
/// <summary>
/// Time window start.
/// </summary>
public required DateTimeOffset WindowStart { get; init; }
/// <summary>
/// Time window end.
/// </summary>
public required DateTimeOffset WindowEnd { get; init; }
/// <summary>
/// Overall posture based on observations.
/// </summary>
public required RuntimePosture Posture { get; init; }
/// <summary>
/// Posture explanation.
/// </summary>
public required string PostureExplanation { get; init; }
/// <summary>
/// Time buckets with observation summaries.
/// </summary>
public required IReadOnlyList<TimelineBucket> Buckets { get; init; }
/// <summary>
/// Significant events in the timeline.
/// </summary>
public required IReadOnlyList<TimelineEvent> Events { get; init; }
/// <summary>
/// Total observation count.
/// </summary>
public int TotalObservations => Buckets.Sum(b => b.ObservationCount);
/// <summary>
/// Capture session digests.
/// </summary>
public required IReadOnlyList<string> SessionDigests { get; init; }
}
public enum RuntimePosture
{
/// <summary>No runtime data available.</summary>
Unknown,
/// <summary>Runtime evidence supports the verdict.</summary>
Supports,
/// <summary>Runtime evidence contradicts the verdict.</summary>
Contradicts,
/// <summary>Runtime evidence is inconclusive.</summary>
Inconclusive
}
/// <summary>
/// A time bucket in the timeline.
/// </summary>
public sealed record TimelineBucket
{
/// <summary>
/// Bucket start time.
/// </summary>
public required DateTimeOffset Start { get; init; }
/// <summary>
/// Bucket end time.
/// </summary>
public required DateTimeOffset End { get; init; }
/// <summary>
/// Number of observations in this bucket.
/// </summary>
public required int ObservationCount { get; init; }
/// <summary>
/// Observation types in this bucket.
/// </summary>
public required IReadOnlyList<ObservationTypeSummary> ByType { get; init; }
/// <summary>
/// Whether component was loaded in this bucket.
/// </summary>
public required bool ComponentLoaded { get; init; }
/// <summary>
/// Whether vulnerable code was executed.
/// </summary>
public bool? VulnerableCodeExecuted { get; init; }
}
/// <summary>
/// Summary of observations by type.
/// </summary>
public sealed record ObservationTypeSummary
{
public required ObservationType Type { get; init; }
public required int Count { get; init; }
}
public enum ObservationType
{
LibraryLoad,
Syscall,
NetworkConnection,
FileAccess,
ProcessSpawn,
SymbolResolution
}
/// <summary>
/// A significant event in the timeline.
/// </summary>
public sealed record TimelineEvent
{
/// <summary>
/// Event timestamp.
/// </summary>
public required DateTimeOffset Timestamp { get; init; }
/// <summary>
/// Event type.
/// </summary>
public required TimelineEventType Type { get; init; }
/// <summary>
/// Event description.
/// </summary>
public required string Description { get; init; }
/// <summary>
/// Significance level.
/// </summary>
public required EventSignificance Significance { get; init; }
/// <summary>
/// Related evidence digest.
/// </summary>
public string? EvidenceDigest { get; init; }
/// <summary>
/// Additional details.
/// </summary>
public IReadOnlyDictionary<string, string> Details { get; init; }
= new Dictionary<string, string>();
}
public enum TimelineEventType
{
ComponentLoaded,
ComponentUnloaded,
VulnerableFunctionCalled,
NetworkExposure,
SyscallBlocked,
ProcessForked,
CaptureStarted,
CaptureStopped
}
public enum EventSignificance
{
Low,
Medium,
High,
Critical
}
```
**Acceptance Criteria**:
- [ ] RuntimeTimeline with window and posture
- [ ] TimelineBucket with observation summary
- [ ] TimelineEvent for significant events
- [ ] Posture enum with explanations
---
### T2: Create TimelineBuilder
**Assignee**: Scanner Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `RuntimeCapture/Timeline/TimelineBuilder.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Scanner.Analyzers.Native.RuntimeCapture.Timeline;
public interface ITimelineBuilder
{
RuntimeTimeline Build(
RuntimeEvidence evidence,
string componentPurl,
TimelineOptions options);
}
public sealed class TimelineBuilder : ITimelineBuilder
{
public RuntimeTimeline Build(
RuntimeEvidence evidence,
string componentPurl,
TimelineOptions options)
{
var windowStart = options.WindowStart ?? evidence.FirstObservation;
var windowEnd = options.WindowEnd ?? evidence.LastObservation;
// Build time buckets
var buckets = BuildBuckets(evidence, componentPurl, windowStart, windowEnd, options.BucketSize);
// Extract significant events
var events = ExtractEvents(evidence, componentPurl);
// Determine posture
var (posture, explanation) = DeterminePosture(buckets, events, componentPurl);
return new RuntimeTimeline
{
FindingId = Guid.Empty, // Set by caller
ComponentPurl = componentPurl,
WindowStart = windowStart,
WindowEnd = windowEnd,
Posture = posture,
PostureExplanation = explanation,
Buckets = buckets,
Events = events.OrderBy(e => e.Timestamp).ToList(),
SessionDigests = evidence.SessionDigests.ToList()
};
}
private List<TimelineBucket> BuildBuckets(
RuntimeEvidence evidence,
string componentPurl,
DateTimeOffset start,
DateTimeOffset end,
TimeSpan bucketSize)
{
var buckets = new List<TimelineBucket>();
var current = start;
while (current < end)
{
var bucketEnd = current + bucketSize;
if (bucketEnd > end) bucketEnd = end;
var observations = evidence.Observations
.Where(o => o.Timestamp >= current && o.Timestamp < bucketEnd)
.ToList();
var byType = observations
.GroupBy(o => ClassifyObservation(o))
.Select(g => new ObservationTypeSummary
{
Type = g.Key,
Count = g.Count()
})
.ToList();
var componentLoaded = observations.Any(o =>
o.Type == "library_load" &&
o.Path?.Contains(ExtractComponentName(componentPurl)) == true);
buckets.Add(new TimelineBucket
{
Start = current,
End = bucketEnd,
ObservationCount = observations.Count,
ByType = byType,
ComponentLoaded = componentLoaded,
VulnerableCodeExecuted = componentLoaded ? DetectVulnerableExecution(observations) : null
});
current = bucketEnd;
}
return buckets;
}
private List<TimelineEvent> ExtractEvents(RuntimeEvidence evidence, string componentPurl)
{
var events = new List<TimelineEvent>();
var componentName = ExtractComponentName(componentPurl);
foreach (var obs in evidence.Observations)
{
if (obs.Type == "library_load" && obs.Path?.Contains(componentName) == true)
{
events.Add(new TimelineEvent
{
Timestamp = obs.Timestamp,
Type = TimelineEventType.ComponentLoaded,
Description = $"Component {componentName} loaded",
Significance = EventSignificance.High,
EvidenceDigest = obs.Digest,
Details = new Dictionary<string, string>
{
["path"] = obs.Path ?? "",
["process_id"] = obs.ProcessId.ToString()
}
});
}
if (obs.Type == "network" && obs.Port is > 0 and < 1024)
{
events.Add(new TimelineEvent
{
Timestamp = obs.Timestamp,
Type = TimelineEventType.NetworkExposure,
Description = $"Network exposure on port {obs.Port}",
Significance = EventSignificance.Critical,
EvidenceDigest = obs.Digest
});
}
}
// Add capture session events
foreach (var session in evidence.Sessions)
{
events.Add(new TimelineEvent
{
Timestamp = session.StartTime,
Type = TimelineEventType.CaptureStarted,
Description = $"Capture session started ({session.Platform})",
Significance = EventSignificance.Low
});
if (session.EndTime.HasValue)
{
events.Add(new TimelineEvent
{
Timestamp = session.EndTime.Value,
Type = TimelineEventType.CaptureStopped,
Description = "Capture session stopped",
Significance = EventSignificance.Low
});
}
}
return events;
}
private static (RuntimePosture posture, string explanation) DeterminePosture(
List<TimelineBucket> buckets,
List<TimelineEvent> events,
string componentPurl)
{
if (buckets.Count == 0 || buckets.All(b => b.ObservationCount == 0))
{
return (RuntimePosture.Unknown, "No runtime observations collected");
}
var componentLoadedCount = buckets.Count(b => b.ComponentLoaded);
var totalBuckets = buckets.Count;
if (componentLoadedCount == 0)
{
return (RuntimePosture.Supports,
$"Component {ExtractComponentName(componentPurl)} was not loaded during the observation window");
}
var hasNetworkExposure = events.Any(e => e.Type == TimelineEventType.NetworkExposure);
var hasVulnerableExecution = buckets.Any(b => b.VulnerableCodeExecuted == true);
if (hasVulnerableExecution || hasNetworkExposure)
{
return (RuntimePosture.Contradicts,
"Runtime evidence shows component is actively used and exposed");
}
if (componentLoadedCount < totalBuckets / 2)
{
return (RuntimePosture.Inconclusive,
$"Component loaded in {componentLoadedCount}/{totalBuckets} time periods");
}
return (RuntimePosture.Supports,
"Component loaded but no evidence of vulnerable code execution");
}
private static ObservationType ClassifyObservation(RuntimeObservation obs)
{
return obs.Type switch
{
"library_load" or "dlopen" => ObservationType.LibraryLoad,
"syscall" => ObservationType.Syscall,
"network" or "connect" => ObservationType.NetworkConnection,
"file" or "open" => ObservationType.FileAccess,
"fork" or "exec" => ObservationType.ProcessSpawn,
"symbol" => ObservationType.SymbolResolution,
_ => ObservationType.LibraryLoad
};
}
private static string ExtractComponentName(string purl)
{
// Extract name from PURL like pkg:npm/lodash@4.17.21
var parts = purl.Split('/');
var namePart = parts.LastOrDefault() ?? purl;
return namePart.Split('@').FirstOrDefault() ?? namePart;
}
private static bool? DetectVulnerableExecution(List<RuntimeObservation> observations)
{
// Check if any observation indicates vulnerable code path execution
return observations.Any(o =>
o.Type == "symbol" ||
o.Attributes?.ContainsKey("vulnerable_function") == true);
}
}
public sealed record TimelineOptions
{
public DateTimeOffset? WindowStart { get; init; }
public DateTimeOffset? WindowEnd { get; init; }
public TimeSpan BucketSize { get; init; } = TimeSpan.FromHours(1);
}
```
**Acceptance Criteria**:
- [ ] Builds timeline from runtime evidence
- [ ] Groups into time buckets
- [ ] Extracts significant events
- [ ] Determines posture with explanation
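To make the bucketing behavior concrete, here is a standalone sketch of the boundary loop from `BuildBuckets`, with the observation lookup elided; the helper name `BucketBounds` is illustrative:

```csharp
using System;
using System.Collections.Generic;

// Same boundary logic as BuildBuckets: walk the window in fixed-size steps,
// clamping the final bucket to the window end. A 24h window with 6h buckets
// yields 4 buckets; a 25h window yields 4 full buckets plus a 1h remainder.
static List<(DateTimeOffset Start, DateTimeOffset End)> BucketBounds(
    DateTimeOffset start, DateTimeOffset end, TimeSpan bucketSize)
{
    var buckets = new List<(DateTimeOffset Start, DateTimeOffset End)>();
    var current = start;
    while (current < end)
    {
        var bucketEnd = current + bucketSize;
        if (bucketEnd > end) bucketEnd = end;
        buckets.Add((current, bucketEnd));
        current = bucketEnd;
    }
    return buckets;
}

var t0 = new DateTimeOffset(2025, 12, 22, 0, 0, 0, TimeSpan.Zero);
Console.WriteLine(BucketBounds(t0, t0.AddHours(24), TimeSpan.FromHours(6)).Count); // 4
Console.WriteLine(BucketBounds(t0, t0.AddHours(25), TimeSpan.FromHours(6)).Count); // 5
```

The clamp on the final bucket is what the T4 test `Build_CreatesCorrectBuckets` implicitly relies on when it expects exactly four buckets for a 24-hour window.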
---
### T3: Create API Endpoint
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Implementation Path**: `src/Findings/StellaOps.Findings.WebService/Endpoints/RuntimeTimelineEndpoints.cs`
```csharp
using Microsoft.AspNetCore.Mvc;
using StellaOps.Scanner.Analyzers.Native.RuntimeCapture.Timeline;

namespace StellaOps.Findings.WebService.Endpoints;
public static class RuntimeTimelineEndpoints
{
public static void MapRuntimeTimelineEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/findings")
.WithTags("Runtime")
.RequireAuthorization();
// GET /api/v1/findings/{findingId}/runtime-timeline
group.MapGet("/{findingId:guid}/runtime-timeline", async (
Guid findingId,
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
IRuntimeTimelineService service,
CancellationToken ct,
[FromQuery] int bucketHours = 1) =>
{
var options = new TimelineOptions
{
WindowStart = from,
WindowEnd = to,
BucketSize = TimeSpan.FromHours(Math.Clamp(bucketHours, 1, 24))
};
var timeline = await service.GetTimelineAsync(findingId, options, ct);
return timeline is not null
? Results.Ok(timeline)
: Results.NotFound();
})
.WithName("GetRuntimeTimeline")
.WithDescription("Get runtime corroboration timeline")
.Produces<RuntimeTimeline>(200)
.Produces(404);
}
}
```
**Acceptance Criteria**:
- [ ] GET endpoint with time window params
- [ ] Bucket size configuration
- [ ] OpenAPI documentation
---
### T4: Add Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class TimelineBuilderTests
{
[Fact]
public void Build_WithNoObservations_ReturnsUnknownPosture()
{
var evidence = CreateEmptyEvidence();
var result = _builder.Build(evidence, "pkg:npm/test@1.0.0", new TimelineOptions());
result.Posture.Should().Be(RuntimePosture.Unknown);
}
[Fact]
public void Build_ComponentNotLoaded_ReturnsSupportsPosture()
{
var evidence = CreateEvidenceWithoutComponent();
var result = _builder.Build(evidence, "pkg:npm/vulnerable@1.0.0", new TimelineOptions());
result.Posture.Should().Be(RuntimePosture.Supports);
result.PostureExplanation.Should().Contain("not loaded");
}
[Fact]
public void Build_WithNetworkExposure_ReturnsContradictsPosture()
{
var evidence = CreateEvidenceWithNetworkExposure();
var result = _builder.Build(evidence, "pkg:npm/vulnerable@1.0.0", new TimelineOptions());
result.Posture.Should().Be(RuntimePosture.Contradicts);
}
[Fact]
public void Build_CreatesCorrectBuckets()
{
var evidence = CreateEvidenceOver24Hours();
var options = new TimelineOptions { BucketSize = TimeSpan.FromHours(6) };
var result = _builder.Build(evidence, "pkg:npm/test@1.0.0", options);
result.Buckets.Should().HaveCount(4);
}
}
```
**Acceptance Criteria**:
- [ ] Posture determination tests
- [ ] Bucket building tests
- [ ] Event extraction tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define RuntimeTimeline model |
| 2 | T2 | TODO | T1 | Scanner Team | Create TimelineBuilder |
| 3 | T3 | TODO | T2 | Scanner Team | Create API endpoint |
| 4 | T4 | TODO | T1-T3 | Scanner Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Timeline shows time-windowed observations
- [ ] Posture correctly determined
- [ ] Events extracted with significance
- [ ] All tests pass

# Sprint 7000.0003.0001 · Progressive Fidelity Mode
## Topic & Scope
- Implement tiered analysis fidelity (Quick, Standard, Deep)
- Enable fast heuristic triage with option for deeper proof
- Reflect fidelity level in verdict confidence
- Support "request deeper analysis" workflow
**Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Orchestration/`
## Dependencies & Concurrency
- **Upstream**: None (independent)
- **Downstream**: SPRINT_7000_0001_0001 (Confidence reflects fidelity)
- **Safe to parallelize with**: SPRINT_7000_0003_0002
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `src/Scanner/StellaOps.Scanner.WebService/`
---
## Problem Statement
The advisory requires: "Progressive fidelity: fast heuristic → deeper proof when requested; verdict must reflect confidence accordingly."
Currently, reachability analysis is all-or-nothing. Users cannot quickly triage thousands of findings and then selectively request deeper analysis for high-priority items.
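The intended workflow can be sketched in miniature: score everything with a cheap pass first, then re-analyze only the high-priority survivors at deep fidelity. All names, severities, and confidence values below are illustrative placeholders, not the real analyzer contract:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Progressive-fidelity triage in miniature: a quick heuristic pass over all
// findings, then a selective deep pass over severe ones only. The severity
// threshold and confidence values are made up for illustration.
var findings = Enumerable.Range(1, 1000)
    .Select(i => (Id: i, Severity: i % 10))
    .ToList();

// Quick pass: package-level matching only, low base confidence.
var quick = findings.Select(f => (f.Id, f.Severity, Confidence: 0.5m)).ToList();

// Selective upgrade: deep analysis (call graph, runtime, binary mapping)
// for severe findings only, raising their confidence.
var upgraded = quick
    .Where(f => f.Severity >= 8)
    .Select(f => (f.Id, f.Severity, Confidence: 0.9m))
    .ToList();

Console.WriteLine($"quick-triaged: {quick.Count}, deep-analyzed: {upgraded.Count}");
```

The point of the design is that the expensive pass runs on a small, user-selected subset rather than the whole finding set.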
---
## Tasks
### T1: Define FidelityLevel Enum and Configuration
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `Fidelity/FidelityLevel.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Scanner.Orchestration.Fidelity;
/// <summary>
/// Analysis fidelity level controlling depth vs speed tradeoff.
/// </summary>
public enum FidelityLevel
{
/// <summary>
/// Fast heuristic analysis. Uses package-level matching only.
/// ~10x faster than Standard. Lower confidence.
/// </summary>
Quick,
/// <summary>
/// Standard analysis. Includes call graph for top languages.
/// Balanced speed and accuracy.
/// </summary>
Standard,
/// <summary>
/// Deep analysis. Full call graph, runtime correlation, binary mapping.
/// Highest confidence but slowest.
/// </summary>
Deep
}
/// <summary>
/// Configuration for each fidelity level.
/// </summary>
public sealed record FidelityConfiguration
{
public required FidelityLevel Level { get; init; }
/// <summary>
/// Whether to perform call graph extraction.
/// </summary>
public bool EnableCallGraph { get; init; }
/// <summary>
/// Whether to correlate with runtime evidence.
/// </summary>
public bool EnableRuntimeCorrelation { get; init; }
/// <summary>
/// Whether to perform binary mapping.
/// </summary>
public bool EnableBinaryMapping { get; init; }
/// <summary>
/// Maximum call graph depth.
/// </summary>
public int MaxCallGraphDepth { get; init; }
/// <summary>
/// Timeout for analysis.
/// </summary>
public TimeSpan Timeout { get; init; }
/// <summary>
/// Base confidence for this fidelity level.
/// </summary>
public decimal BaseConfidence { get; init; }
/// <summary>
/// Languages to analyze (null = all).
/// </summary>
public IReadOnlyList<string>? TargetLanguages { get; init; }
public static FidelityConfiguration Quick => new()
{
Level = FidelityLevel.Quick,
EnableCallGraph = false,
EnableRuntimeCorrelation = false,
EnableBinaryMapping = false,
MaxCallGraphDepth = 0,
Timeout = TimeSpan.FromSeconds(30),
BaseConfidence = 0.5m,
TargetLanguages = null
};
public static FidelityConfiguration Standard => new()
{
Level = FidelityLevel.Standard,
EnableCallGraph = true,
EnableRuntimeCorrelation = false,
EnableBinaryMapping = false,
MaxCallGraphDepth = 10,
Timeout = TimeSpan.FromMinutes(5),
BaseConfidence = 0.75m,
TargetLanguages = ["java", "dotnet", "python", "go", "node"]
};
public static FidelityConfiguration Deep => new()
{
Level = FidelityLevel.Deep,
EnableCallGraph = true,
EnableRuntimeCorrelation = true,
EnableBinaryMapping = true,
MaxCallGraphDepth = 50,
Timeout = TimeSpan.FromMinutes(30),
BaseConfidence = 0.9m,
TargetLanguages = null
};
public static FidelityConfiguration FromLevel(FidelityLevel level) => level switch
{
FidelityLevel.Quick => Quick,
FidelityLevel.Standard => Standard,
FidelityLevel.Deep => Deep,
_ => Standard
};
}
```
**Acceptance Criteria**:
- [ ] FidelityLevel enum defined
- [ ] FidelityConfiguration for each level
- [ ] Configurable timeouts and depths
- [ ] Base confidence per level
---
### T2: Create FidelityAwareAnalyzer
**Assignee**: Scanner Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `Fidelity/FidelityAwareAnalyzer.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Scanner.Orchestration.Fidelity;
public interface IFidelityAwareAnalyzer
{
Task<FidelityAnalysisResult> AnalyzeAsync(
AnalysisRequest request,
FidelityLevel level,
CancellationToken ct);
Task<FidelityUpgradeResult> UpgradeFidelityAsync(
Guid findingId,
FidelityLevel targetLevel,
CancellationToken ct);
}
public sealed class FidelityAwareAnalyzer : IFidelityAwareAnalyzer
{
private readonly ICallGraphExtractor _callGraphExtractor;
private readonly IRuntimeCorrelator _runtimeCorrelator;
private readonly IBinaryMapper _binaryMapper;
private readonly IPackageMatcher _packageMatcher;
private readonly ILogger<FidelityAwareAnalyzer> _logger;
public async Task<FidelityAnalysisResult> AnalyzeAsync(
AnalysisRequest request,
FidelityLevel level,
CancellationToken ct)
{
var config = FidelityConfiguration.FromLevel(level);
var stopwatch = Stopwatch.StartNew();
using var cts = CancellationTokenSource.CreateLinkedTokenSource(ct);
cts.CancelAfter(config.Timeout);
try
{
// Level 1: Package matching (always done)
var packageResult = await _packageMatcher.MatchAsync(request, cts.Token);
if (level == FidelityLevel.Quick)
{
return BuildResult(packageResult, config, stopwatch.Elapsed);
}
// Level 2: Call graph analysis (Standard and Deep)
CallGraphResult? callGraphResult = null;
if (config.EnableCallGraph)
{
var languages = config.TargetLanguages ?? request.DetectedLanguages;
callGraphResult = await _callGraphExtractor.ExtractAsync(
request,
languages,
config.MaxCallGraphDepth,
cts.Token);
}
if (level == FidelityLevel.Standard)
{
return BuildResult(packageResult, callGraphResult, config, stopwatch.Elapsed);
}
// Level 3: Binary mapping and runtime (Deep only)
BinaryMappingResult? binaryResult = null;
RuntimeCorrelationResult? runtimeResult = null;
if (config.EnableBinaryMapping)
{
binaryResult = await _binaryMapper.MapAsync(request, cts.Token);
}
if (config.EnableRuntimeCorrelation)
{
runtimeResult = await _runtimeCorrelator.CorrelateAsync(request, cts.Token);
}
return BuildResult(
packageResult,
callGraphResult,
binaryResult,
runtimeResult,
config,
stopwatch.Elapsed);
}
catch (OperationCanceledException) when (cts.IsCancellationRequested && !ct.IsCancellationRequested)
{
_logger.LogWarning(
"Analysis timeout at fidelity {Level} after {Elapsed}",
level, stopwatch.Elapsed);
return BuildTimeoutResult(level, config, stopwatch.Elapsed);
}
}
public async Task<FidelityUpgradeResult> UpgradeFidelityAsync(
Guid findingId,
FidelityLevel targetLevel,
CancellationToken ct)
{
// Load existing analysis
var existing = await LoadExistingAnalysisAsync(findingId, ct);
if (existing is null)
{
return FidelityUpgradeResult.NotFound(findingId);
}
if (existing.FidelityLevel >= targetLevel)
{
return FidelityUpgradeResult.AlreadyAtLevel(existing);
}
// Perform incremental upgrade
var request = existing.ToAnalysisRequest();
var result = await AnalyzeAsync(request, targetLevel, ct);
// Merge with existing
var merged = MergeResults(existing, result);
// Persist upgraded result
await PersistResultAsync(merged, ct);
return new FidelityUpgradeResult
{
Success = true,
FindingId = findingId,
PreviousLevel = existing.FidelityLevel,
NewLevel = targetLevel,
ConfidenceImprovement = merged.Confidence - existing.Confidence,
NewResult = merged
};
}
private FidelityAnalysisResult BuildResult(
PackageMatchResult packageResult,
FidelityConfiguration config,
TimeSpan elapsed)
{
var confidence = config.BaseConfidence;
// Adjust confidence based on match quality
if (packageResult.HasExactMatch)
confidence += 0.1m;
return new FidelityAnalysisResult
{
FidelityLevel = config.Level,
Confidence = Math.Min(confidence, 1.0m),
IsReachable = null, // Unknown at Quick level
PackageMatches = packageResult.Matches,
CallGraph = null,
BinaryMapping = null,
RuntimeCorrelation = null,
AnalysisTime = elapsed,
CanUpgrade = true,
UpgradeRecommendation = "Upgrade to Standard for call graph analysis"
};
}
private FidelityAnalysisResult BuildResult(
PackageMatchResult packageResult,
CallGraphResult? callGraphResult,
FidelityConfiguration config,
TimeSpan elapsed)
{
var confidence = config.BaseConfidence;
// Adjust based on call graph completeness
if (callGraphResult?.IsComplete == true)
confidence += 0.15m;
var isReachable = callGraphResult?.HasPathToVulnerable;
return new FidelityAnalysisResult
{
FidelityLevel = config.Level,
Confidence = Math.Min(confidence, 1.0m),
IsReachable = isReachable,
PackageMatches = packageResult.Matches,
CallGraph = callGraphResult,
BinaryMapping = null,
RuntimeCorrelation = null,
AnalysisTime = elapsed,
CanUpgrade = true,
UpgradeRecommendation = isReachable == true
? "Upgrade to Deep for runtime verification"
: "Upgrade to Deep for binary mapping confirmation"
};
}
private FidelityAnalysisResult BuildResult(
PackageMatchResult packageResult,
CallGraphResult? callGraphResult,
BinaryMappingResult? binaryResult,
RuntimeCorrelationResult? runtimeResult,
FidelityConfiguration config,
TimeSpan elapsed)
{
var confidence = config.BaseConfidence;
// Adjust based on runtime corroboration
if (runtimeResult?.HasCorroboration == true)
confidence = 0.95m;
else if (binaryResult?.HasMapping == true)
confidence += 0.05m;
var isReachable = DetermineReachability(
callGraphResult,
binaryResult,
runtimeResult);
return new FidelityAnalysisResult
{
FidelityLevel = config.Level,
Confidence = Math.Min(confidence, 1.0m),
IsReachable = isReachable,
PackageMatches = packageResult.Matches,
CallGraph = callGraphResult,
BinaryMapping = binaryResult,
RuntimeCorrelation = runtimeResult,
AnalysisTime = elapsed,
CanUpgrade = false,
UpgradeRecommendation = null
};
}
private static bool? DetermineReachability(
CallGraphResult? callGraph,
BinaryMappingResult? binary,
RuntimeCorrelationResult? runtime)
{
// Runtime is authoritative
if (runtime?.WasExecuted == true)
return true;
if (runtime?.WasExecuted == false && runtime.ObservationCount > 100)
return false;
// Fall back to call graph
if (callGraph?.HasPathToVulnerable == true)
return true;
if (callGraph?.HasPathToVulnerable == false && callGraph.IsComplete)
return false;
return null; // Unknown
}
private FidelityAnalysisResult BuildTimeoutResult(
FidelityLevel attemptedLevel,
FidelityConfiguration config,
TimeSpan elapsed)
{
return new FidelityAnalysisResult
{
FidelityLevel = attemptedLevel,
Confidence = 0.3m,
IsReachable = null,
PackageMatches = [],
AnalysisTime = elapsed,
TimedOut = true,
CanUpgrade = false,
UpgradeRecommendation = "Analysis timed out. Try with smaller scope."
};
}
}
public sealed record FidelityAnalysisResult
{
public required FidelityLevel FidelityLevel { get; init; }
public required decimal Confidence { get; init; }
public bool? IsReachable { get; init; }
public required IReadOnlyList<PackageMatch> PackageMatches { get; init; }
public CallGraphResult? CallGraph { get; init; }
public BinaryMappingResult? BinaryMapping { get; init; }
public RuntimeCorrelationResult? RuntimeCorrelation { get; init; }
public required TimeSpan AnalysisTime { get; init; }
public bool TimedOut { get; init; }
public required bool CanUpgrade { get; init; }
public string? UpgradeRecommendation { get; init; }
}
public sealed record FidelityUpgradeResult
{
public required bool Success { get; init; }
public Guid FindingId { get; init; }
public FidelityLevel? PreviousLevel { get; init; }
public FidelityLevel? NewLevel { get; init; }
public decimal ConfidenceImprovement { get; init; }
public FidelityAnalysisResult? NewResult { get; init; }
public string? Error { get; init; }
public static FidelityUpgradeResult NotFound(Guid id) => new()
{
Success = false,
FindingId = id,
Error = "Finding not found"
};
public static FidelityUpgradeResult AlreadyAtLevel(FidelityAnalysisResult existing) => new()
{
Success = true,
PreviousLevel = existing.FidelityLevel,
NewLevel = existing.FidelityLevel,
ConfidenceImprovement = 0,
NewResult = existing
};
}
```
**Acceptance Criteria**:
- [ ] Implements Quick/Standard/Deep analysis
- [ ] Respects timeouts per level
- [ ] Supports fidelity upgrade
- [ ] Confidence reflects fidelity level
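The intended caller flow (cheap first pass, then a targeted upgrade for unresolved findings) can be sketched against the interface above. This is a sketch only; `analyzer`, `request`, `findingId`, and `ct` are assumed to be in scope:
```csharp
// Sketch only: assumes an IFidelityAwareAnalyzer, an AnalysisRequest,
// a finding id, and a CancellationToken already in scope.
var quick = await analyzer.AnalyzeAsync(request, FidelityLevel.Quick, ct);

if (quick.CanUpgrade && quick.IsReachable is null)
{
    // Escalate only findings the quick pass could not resolve.
    var upgraded = await analyzer.UpgradeFidelityAsync(findingId, FidelityLevel.Standard, ct);
    if (upgraded.Success)
    {
        // ConfidenceImprovement is the delta over the Quick-level confidence.
    }
}
```
The point of the sketch is that Deep analysis is opt-in per finding: the bulk of findings settle at Quick or Standard cost.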
---
### T3: Create API Endpoints
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Implementation Path**: `src/Scanner/StellaOps.Scanner.WebService/Endpoints/FidelityEndpoints.cs`
```csharp
namespace StellaOps.Scanner.WebService.Endpoints;
public static class FidelityEndpoints
{
public static void MapFidelityEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/scan")
.WithTags("Fidelity")
.RequireAuthorization();
// POST /api/v1/scan/analyze?fidelity={level}
// Note: optional parameters must come last in the lambda, so injected services
// and the CancellationToken precede the defaulted query parameter.
group.MapPost("/analyze", async (
[FromBody] AnalysisRequest request,
IFidelityAwareAnalyzer analyzer,
CancellationToken ct,
[FromQuery] FidelityLevel fidelity = FidelityLevel.Standard) =>
{
var result = await analyzer.AnalyzeAsync(request, fidelity, ct);
return Results.Ok(result);
})
.WithName("AnalyzeWithFidelity")
.WithDescription("Analyze with specified fidelity level");
// POST /api/v1/scan/findings/{findingId}/upgrade
group.MapPost("/findings/{findingId:guid}/upgrade", async (
Guid findingId,
IFidelityAwareAnalyzer analyzer,
CancellationToken ct,
[FromQuery] FidelityLevel target = FidelityLevel.Deep) =>
{
var result = await analyzer.UpgradeFidelityAsync(findingId, target, ct);
return result.Success
? Results.Ok(result)
: Results.BadRequest(result);
})
.WithName("UpgradeFidelity")
.WithDescription("Upgrade analysis fidelity for a finding");
}
}
```
**Acceptance Criteria**:
- [ ] Analyze endpoint with fidelity param
- [ ] Upgrade endpoint for findings
- [ ] OpenAPI documentation
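Assuming the endpoints above are hosted as written, a client call might look like the following. The base address, bearer token, and use of `System.Net.Http.Json` are assumptions, not part of this spec:
```csharp
using System.Net.Http.Json;

// Sketch only: base address and token are placeholders.
using var http = new HttpClient { BaseAddress = new Uri("https://scanner.example/") };
http.DefaultRequestHeaders.Authorization =
    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);

// POST /api/v1/scan/analyze?fidelity=Deep
var analyzeResponse = await http.PostAsJsonAsync(
    "api/v1/scan/analyze?fidelity=Deep", request, ct);

// POST /api/v1/scan/findings/{findingId}/upgrade?target=Deep
var upgradeResponse = await http.PostAsync(
    $"api/v1/scan/findings/{findingId}/upgrade?target=Deep", content: null, ct);
```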
---
### T4: Add Tests
**Assignee**: Scanner Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class FidelityAwareAnalyzerTests
{
[Fact]
public async Task AnalyzeAsync_QuickLevel_SkipsCallGraph()
{
var request = CreateAnalysisRequest();
var result = await _analyzer.AnalyzeAsync(request, FidelityLevel.Quick, CancellationToken.None);
result.FidelityLevel.Should().Be(FidelityLevel.Quick);
result.CallGraph.Should().BeNull();
result.Confidence.Should().BeLessThan(0.7m);
}
[Fact]
public async Task AnalyzeAsync_StandardLevel_IncludesCallGraph()
{
var request = CreateAnalysisRequest();
var result = await _analyzer.AnalyzeAsync(request, FidelityLevel.Standard, CancellationToken.None);
result.FidelityLevel.Should().Be(FidelityLevel.Standard);
result.CallGraph.Should().NotBeNull();
}
[Fact]
public async Task AnalyzeAsync_DeepLevel_IncludesRuntime()
{
var request = CreateAnalysisRequest();
var result = await _analyzer.AnalyzeAsync(request, FidelityLevel.Deep, CancellationToken.None);
result.FidelityLevel.Should().Be(FidelityLevel.Deep);
result.RuntimeCorrelation.Should().NotBeNull();
result.CanUpgrade.Should().BeFalse();
}
[Fact]
public async Task UpgradeFidelityAsync_FromQuickToStandard_ImprovesConfidence()
{
var findingId = await CreateFindingAtQuickLevel();
var result = await _analyzer.UpgradeFidelityAsync(findingId, FidelityLevel.Standard, CancellationToken.None);
result.Success.Should().BeTrue();
result.ConfidenceImprovement.Should().BePositive();
}
}
```
**Acceptance Criteria**:
- [ ] Level-specific tests
- [ ] Upgrade tests
- [ ] Timeout tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Scanner Team | Define FidelityLevel and configuration |
| 2 | T2 | TODO | T1 | Scanner Team | Create FidelityAwareAnalyzer |
| 3 | T3 | TODO | T2 | Scanner Team | Create API endpoints |
| 4 | T4 | TODO | T1-T3 | Scanner Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Three fidelity levels | Decision | Scanner Team | Quick, Standard, Deep |
| Quick timeout | Decision | Scanner Team | 30 seconds |
| Standard languages | Decision | Scanner Team | Java, .NET, Python, Go, Node |
| Deep includes runtime | Decision | Scanner Team | Only Deep level correlates runtime |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Quick analysis completes in <30s
- [ ] Standard analysis includes call graph
- [ ] Deep analysis includes runtime
- [ ] Upgrade path works correctly
- [ ] All tests pass
# Sprint 7000.0003.0002 · Evidence Size Budgets
## Topic & Scope
- Implement per-scan evidence size caps
- Define retention tier policies (hot/warm/cold/archive)
- Enforce budgets during evidence generation
- Ensure audit pack completeness with tier-aware pruning
**Working directory:** `src/__Libraries/StellaOps.Evidence/`
## Dependencies & Concurrency
- **Upstream**: None (independent)
- **Downstream**: SPRINT_5100_0006_0001 (Audit Pack Export)
- **Safe to parallelize with**: SPRINT_7000_0003_0001
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
- `docs/24_OFFLINE_KIT.md`
---
## Tasks
### T1: Define EvidenceBudget Model
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `Budgets/EvidenceBudget.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Evidence.Budgets;
/// <summary>
/// Budget configuration for evidence storage.
/// </summary>
public sealed record EvidenceBudget
{
/// <summary>
/// Maximum total evidence size per scan (bytes).
/// </summary>
public required long MaxScanSizeBytes { get; init; }
/// <summary>
/// Maximum size per evidence type (bytes).
/// </summary>
public IReadOnlyDictionary<EvidenceType, long> MaxPerType { get; init; }
= new Dictionary<EvidenceType, long>();
/// <summary>
/// Retention policy by tier.
/// </summary>
public required IReadOnlyDictionary<RetentionTier, RetentionPolicy> RetentionPolicies { get; init; }
/// <summary>
/// Action when budget is exceeded.
/// </summary>
public BudgetExceededAction ExceededAction { get; init; } = BudgetExceededAction.Warn;
/// <summary>
/// Evidence types to always preserve (never prune).
/// </summary>
public IReadOnlySet<EvidenceType> AlwaysPreserve { get; init; }
= new HashSet<EvidenceType> { EvidenceType.Verdict, EvidenceType.Attestation };
public static EvidenceBudget Default => new()
{
MaxScanSizeBytes = 100 * 1024 * 1024, // 100 MB
MaxPerType = new Dictionary<EvidenceType, long>
{
[EvidenceType.CallGraph] = 50 * 1024 * 1024,
[EvidenceType.RuntimeCapture] = 20 * 1024 * 1024,
[EvidenceType.Sbom] = 10 * 1024 * 1024,
[EvidenceType.PolicyTrace] = 5 * 1024 * 1024
},
RetentionPolicies = new Dictionary<RetentionTier, RetentionPolicy>
{
[RetentionTier.Hot] = new RetentionPolicy { Duration = TimeSpan.FromDays(7) },
[RetentionTier.Warm] = new RetentionPolicy { Duration = TimeSpan.FromDays(30) },
[RetentionTier.Cold] = new RetentionPolicy { Duration = TimeSpan.FromDays(90) },
[RetentionTier.Archive] = new RetentionPolicy { Duration = TimeSpan.FromDays(365) }
}
};
}
public enum EvidenceType
{
Verdict,
PolicyTrace,
CallGraph,
RuntimeCapture,
Sbom,
Vex,
Attestation,
PathWitness,
Advisory
}
public enum RetentionTier
{
/// <summary>Immediately accessible, highest cost.</summary>
Hot,
/// <summary>Quick retrieval, moderate cost.</summary>
Warm,
/// <summary>Delayed retrieval, lower cost.</summary>
Cold,
/// <summary>Long-term storage, lowest cost.</summary>
Archive
}
public sealed record RetentionPolicy
{
/// <summary>
/// How long evidence stays in this tier.
/// </summary>
public required TimeSpan Duration { get; init; }
/// <summary>
/// Compression algorithm for this tier.
/// </summary>
public CompressionLevel Compression { get; init; } = CompressionLevel.None;
/// <summary>
/// Whether to deduplicate within this tier.
/// </summary>
public bool Deduplicate { get; init; } = true;
}
public enum CompressionLevel
{
None,
Fast,
Optimal,
Maximum
}
public enum BudgetExceededAction
{
/// <summary>Log warning but continue.</summary>
Warn,
/// <summary>Block the operation.</summary>
Block,
/// <summary>Automatically prune lowest priority evidence.</summary>
AutoPrune
}
```
**Acceptance Criteria**:
- [ ] EvidenceBudget with size limits
- [ ] RetentionTier enum with policies
- [ ] Default budget configuration
- [ ] AlwaysPreserve set for critical evidence
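Because the services below consume `IOptionsMonitor<EvidenceBudget>`, hosts need to register a budget at startup. A hedged wiring sketch follows; the `"Evidence:Budget"` section path is an assumption, and binding a record with `required` members may need a custom setup delegate instead:
```csharp
// Sketch only: section path is an assumption; hosts may instead start from
// EvidenceBudget.Default and override selectively via a ConfigureOptions delegate.
services.Configure<EvidenceBudget>(configuration.GetSection("Evidence:Budget"));
```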
---
### T2: Create EvidenceBudgetService
**Assignee**: Platform Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `Budgets/EvidenceBudgetService.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Evidence.Budgets;
public interface IEvidenceBudgetService
{
BudgetCheckResult CheckBudget(Guid scanId, EvidenceItem item);
BudgetStatus GetBudgetStatus(Guid scanId);
Task<PruneResult> PruneToFitAsync(Guid scanId, long targetBytes, CancellationToken ct);
}
public sealed class EvidenceBudgetService : IEvidenceBudgetService
{
private readonly IEvidenceRepository _repository;
private readonly IOptionsMonitor<EvidenceBudget> _options;
private readonly ILogger<EvidenceBudgetService> _logger;
public BudgetCheckResult CheckBudget(Guid scanId, EvidenceItem item)
{
var budget = _options.CurrentValue;
var currentUsage = GetCurrentUsage(scanId);
var issues = new List<string>();
// Check total budget
var projectedTotal = currentUsage.TotalBytes + item.SizeBytes;
if (projectedTotal > budget.MaxScanSizeBytes)
{
issues.Add($"Would exceed total budget: {projectedTotal:N0} > {budget.MaxScanSizeBytes:N0} bytes");
}
// Check per-type budget
if (budget.MaxPerType.TryGetValue(item.Type, out var typeLimit))
{
var typeUsage = currentUsage.ByType.GetValueOrDefault(item.Type, 0);
var projectedType = typeUsage + item.SizeBytes;
if (projectedType > typeLimit)
{
issues.Add($"Would exceed {item.Type} budget: {projectedType:N0} > {typeLimit:N0} bytes");
}
}
if (issues.Count == 0)
{
return BudgetCheckResult.WithinBudget();
}
return new BudgetCheckResult
{
IsWithinBudget = false,
Issues = issues,
RecommendedAction = budget.ExceededAction,
CanAutoPrune = budget.ExceededAction == BudgetExceededAction.AutoPrune,
BytesToFree = Math.Max(0, projectedTotal - budget.MaxScanSizeBytes) // zero when only a per-type limit is exceeded
};
}
public BudgetStatus GetBudgetStatus(Guid scanId)
{
var budget = _options.CurrentValue;
var usage = GetCurrentUsage(scanId);
return new BudgetStatus
{
ScanId = scanId,
TotalBudgetBytes = budget.MaxScanSizeBytes,
UsedBytes = usage.TotalBytes,
RemainingBytes = Math.Max(0, budget.MaxScanSizeBytes - usage.TotalBytes),
UtilizationPercent = (decimal)usage.TotalBytes / budget.MaxScanSizeBytes * 100,
ByType = usage.ByType.ToDictionary(
kvp => kvp.Key,
kvp => new TypeBudgetStatus
{
Type = kvp.Key,
UsedBytes = kvp.Value,
LimitBytes = budget.MaxPerType.GetValueOrDefault(kvp.Key),
UtilizationPercent = budget.MaxPerType.TryGetValue(kvp.Key, out var limit)
? (decimal)kvp.Value / limit * 100
: 0
})
};
}
public async Task<PruneResult> PruneToFitAsync(
Guid scanId,
long targetBytes,
CancellationToken ct)
{
var budget = _options.CurrentValue;
var usage = GetCurrentUsage(scanId);
if (usage.TotalBytes <= targetBytes)
{
return PruneResult.NoPruningNeeded();
}
var bytesToPrune = usage.TotalBytes - targetBytes;
var pruned = new List<PrunedItem>();
// Get all evidence items, sorted by pruning priority
var items = await _repository.GetByScanIdAsync(scanId, ct);
var candidates = items
.Where(i => !budget.AlwaysPreserve.Contains(i.Type))
.OrderBy(i => GetPrunePriority(i))
.ToList();
long prunedBytes = 0;
foreach (var item in candidates)
{
if (prunedBytes >= bytesToPrune)
break;
// Move to archive tier or delete
await _repository.MoveToTierAsync(item.Id, RetentionTier.Archive, ct);
pruned.Add(new PrunedItem(item.Id, item.Type, item.SizeBytes));
prunedBytes += item.SizeBytes;
}
_logger.LogInformation(
"Pruned {Count} items ({Bytes:N0} bytes) for scan {ScanId}",
pruned.Count, prunedBytes, scanId);
return new PruneResult
{
Success = prunedBytes >= bytesToPrune,
BytesPruned = prunedBytes,
ItemsPruned = pruned,
BytesRemaining = usage.TotalBytes - prunedBytes
};
}
private static int GetPrunePriority(EvidenceItem item)
{
// Lower = prune first
return item.Type switch
{
EvidenceType.RuntimeCapture => 1,
EvidenceType.CallGraph => 2,
EvidenceType.Advisory => 3,
EvidenceType.PathWitness => 4,
EvidenceType.PolicyTrace => 5,
EvidenceType.Sbom => 6,
EvidenceType.Vex => 7,
EvidenceType.Attestation => 8,
EvidenceType.Verdict => 9, // Never prune
_ => 5
};
}
private UsageStats GetCurrentUsage(Guid scanId)
{
// TODO: aggregate stored evidence sizes for the scan (total and per type).
return new UsageStats();
}
}
public sealed record BudgetCheckResult
{
public required bool IsWithinBudget { get; init; }
public IReadOnlyList<string> Issues { get; init; } = [];
public BudgetExceededAction RecommendedAction { get; init; }
public bool CanAutoPrune { get; init; }
public long BytesToFree { get; init; }
public static BudgetCheckResult WithinBudget() => new() { IsWithinBudget = true };
}
public sealed record BudgetStatus
{
public required Guid ScanId { get; init; }
public required long TotalBudgetBytes { get; init; }
public required long UsedBytes { get; init; }
public required long RemainingBytes { get; init; }
public required decimal UtilizationPercent { get; init; }
public required IReadOnlyDictionary<EvidenceType, TypeBudgetStatus> ByType { get; init; }
}
public sealed record TypeBudgetStatus
{
public required EvidenceType Type { get; init; }
public required long UsedBytes { get; init; }
public long? LimitBytes { get; init; }
public decimal UtilizationPercent { get; init; }
}
public sealed record PruneResult
{
public required bool Success { get; init; }
public long BytesPruned { get; init; }
public IReadOnlyList<PrunedItem> ItemsPruned { get; init; } = [];
public long BytesRemaining { get; init; }
public static PruneResult NoPruningNeeded() => new() { Success = true };
}
public sealed record PrunedItem(Guid ItemId, EvidenceType Type, long SizeBytes);
public sealed record UsageStats
{
public long TotalBytes { get; init; }
public IReadOnlyDictionary<EvidenceType, long> ByType { get; init; } = new Dictionary<EvidenceType, long>();
}
```
**Acceptance Criteria**:
- [ ] Budget checking before storage
- [ ] Budget status reporting
- [ ] Auto-pruning with priority
- [ ] AlwaysPreserve respected
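A minimal write-path sketch tying the pieces together. This is a sketch only: `budgetService`, `budget`, `evidenceStore`, `scanId`, and `item` are assumed to be in scope, and the target-size derivation is simplified:
```csharp
// Sketch only: gate evidence writes behind the budget check.
var check = budgetService.CheckBudget(scanId, item);
if (!check.IsWithinBudget)
{
    if (check.CanAutoPrune)
    {
        var prune = await budgetService.PruneToFitAsync(
            scanId, budget.MaxScanSizeBytes - item.SizeBytes, ct);
        if (!prune.Success)
            throw new InvalidOperationException("Budget exceeded; pruning insufficient.");
    }
    else if (check.RecommendedAction == BudgetExceededAction.Block)
    {
        throw new InvalidOperationException(string.Join("; ", check.Issues));
    }
    // BudgetExceededAction.Warn: issues are logged and the write proceeds.
}
await evidenceStore.StoreAsync(scanId, item, ct);
```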
---
### T3: Create RetentionTierManager
**Assignee**: Platform Team
**Story Points**: 3
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `Retention/RetentionTierManager.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Evidence.Retention;
public interface IRetentionTierManager
{
Task<TierMigrationResult> RunMigrationAsync(CancellationToken ct);
RetentionTier GetCurrentTier(EvidenceItem item);
Task EnsureAuditCompleteAsync(Guid scanId, CancellationToken ct);
}
public sealed class RetentionTierManager : IRetentionTierManager
{
private readonly IEvidenceRepository _repository;
private readonly IArchiveStorage _archiveStorage;
private readonly IOptionsMonitor<EvidenceBudget> _options;
public async Task<TierMigrationResult> RunMigrationAsync(CancellationToken ct)
{
var budget = _options.CurrentValue;
var now = DateTimeOffset.UtcNow;
var migrated = new List<MigratedItem>();
// Hot → Warm
var hotExpiry = now - budget.RetentionPolicies[RetentionTier.Hot].Duration;
var toWarm = await _repository.GetOlderThanAsync(RetentionTier.Hot, hotExpiry, ct);
foreach (var item in toWarm)
{
await MigrateAsync(item, RetentionTier.Warm, ct);
migrated.Add(new MigratedItem(item.Id, RetentionTier.Hot, RetentionTier.Warm));
}
// Warm → Cold
var warmExpiry = now - budget.RetentionPolicies[RetentionTier.Warm].Duration;
var toCold = await _repository.GetOlderThanAsync(RetentionTier.Warm, warmExpiry, ct);
foreach (var item in toCold)
{
await MigrateAsync(item, RetentionTier.Cold, ct);
migrated.Add(new MigratedItem(item.Id, RetentionTier.Warm, RetentionTier.Cold));
}
// Cold → Archive
var coldExpiry = now - budget.RetentionPolicies[RetentionTier.Cold].Duration;
var toArchive = await _repository.GetOlderThanAsync(RetentionTier.Cold, coldExpiry, ct);
foreach (var item in toArchive)
{
await MigrateAsync(item, RetentionTier.Archive, ct);
migrated.Add(new MigratedItem(item.Id, RetentionTier.Cold, RetentionTier.Archive));
}
return new TierMigrationResult
{
MigratedCount = migrated.Count,
Items = migrated
};
}
public RetentionTier GetCurrentTier(EvidenceItem item)
{
var budget = _options.CurrentValue;
var age = DateTimeOffset.UtcNow - item.CreatedAt;
// Policy durations are absolute age thresholds, so they must increase Hot < Warm < Cold.
if (age < budget.RetentionPolicies[RetentionTier.Hot].Duration)
return RetentionTier.Hot;
if (age < budget.RetentionPolicies[RetentionTier.Warm].Duration)
return RetentionTier.Warm;
if (age < budget.RetentionPolicies[RetentionTier.Cold].Duration)
return RetentionTier.Cold;
return RetentionTier.Archive;
}
public async Task EnsureAuditCompleteAsync(Guid scanId, CancellationToken ct)
{
var budget = _options.CurrentValue;
// Ensure all AlwaysPreserve types are in Hot tier for audit export
foreach (var type in budget.AlwaysPreserve)
{
var items = await _repository.GetByScanIdAndTypeAsync(scanId, type, ct);
foreach (var item in items.Where(i => i.Tier != RetentionTier.Hot))
{
await RestoreToHotAsync(item, ct);
}
}
}
private async Task MigrateAsync(EvidenceItem item, RetentionTier targetTier, CancellationToken ct)
{
var policy = _options.CurrentValue.RetentionPolicies[targetTier];
if (policy.Compression != CompressionLevel.None)
{
// Compress before migration
var compressed = await CompressAsync(item, policy.Compression, ct);
await _repository.UpdateContentAsync(item.Id, compressed, ct);
}
await _repository.MoveToTierAsync(item.Id, targetTier, ct);
}
private async Task RestoreToHotAsync(EvidenceItem item, CancellationToken ct)
{
if (item.Tier == RetentionTier.Archive)
{
// Retrieve from archive storage
var content = await _archiveStorage.RetrieveAsync(item.ArchiveKey!, ct);
await _repository.UpdateContentAsync(item.Id, content, ct);
}
await _repository.MoveToTierAsync(item.Id, RetentionTier.Hot, ct);
}
}
public sealed record TierMigrationResult
{
public required int MigratedCount { get; init; }
public IReadOnlyList<MigratedItem> Items { get; init; } = [];
}
public sealed record MigratedItem(Guid ItemId, RetentionTier FromTier, RetentionTier ToTier);
```
**Acceptance Criteria**:
- [ ] Tier migration based on age
- [ ] Compression on tier change
- [ ] Audit completeness restoration
- [ ] Archive storage integration
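Tier migration is age-driven, so hosts would invoke it on a schedule. A hedged sketch using a hosted service follows; the daily cadence and the worker name are assumptions, not part of this spec:
```csharp
// Sketch only: periodic driver for RunMigrationAsync (cadence is an assumption).
public sealed class RetentionMigrationWorker(
    IRetentionTierManager tiers,
    ILogger<RetentionMigrationWorker> log) : BackgroundService
{
    protected override async Task ExecuteAsync(CancellationToken ct)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromDays(1));
        while (await timer.WaitForNextTickAsync(ct))
        {
            var result = await tiers.RunMigrationAsync(ct);
            log.LogInformation("Retention migration moved {Count} items", result.MigratedCount);
        }
    }
}
```
`EnsureAuditCompleteAsync` stays out of the periodic loop on purpose: it is meant to be called on demand by the audit pack export path before bundling.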
---
### T4: Add Tests
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class EvidenceBudgetServiceTests
{
[Fact]
public void CheckBudget_WithinLimit_ReturnsSuccess()
{
var item = CreateItem(sizeBytes: 1024);
var result = _service.CheckBudget(Guid.NewGuid(), item);
result.IsWithinBudget.Should().BeTrue();
}
[Fact]
public void CheckBudget_ExceedsTotal_ReturnsViolation()
{
var scanId = SetupScanAtBudgetLimit();
var item = CreateItem(sizeBytes: 1024 * 1024);
var result = _service.CheckBudget(scanId, item);
result.IsWithinBudget.Should().BeFalse();
result.Issues.Should().Contain(i => i.Contains("total budget"));
}
[Fact]
public async Task PruneToFitAsync_PreservesAlwaysPreserveTypes()
{
var scanId = SetupScanOverBudget();
var result = await _service.PruneToFitAsync(scanId, 50 * 1024 * 1024, CancellationToken.None);
result.ItemsPruned.Should().NotContain(i => i.Type == EvidenceType.Verdict);
result.ItemsPruned.Should().NotContain(i => i.Type == EvidenceType.Attestation);
}
}
```
**Acceptance Criteria**:
- [ ] Budget check tests
- [ ] Pruning priority tests
- [ ] AlwaysPreserve tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Platform Team | Define EvidenceBudget model |
| 2 | T2 | TODO | T1 | Platform Team | Create EvidenceBudgetService |
| 3 | T3 | TODO | T1 | Platform Team | Create RetentionTierManager |
| 4 | T4 | TODO | T1-T3 | Platform Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] Budget enforcement prevents oversized scans
- [ ] Retention tiers migrate automatically
- [ ] Audit packs remain complete
- [ ] All tests pass
# Sprint 7000.0004.0001 · Quality KPIs Tracking
## Topic & Scope
- Implement KPI tracking infrastructure for explainable triage
- Track: % non-UNKNOWN reachability, runtime corroboration, explainability completeness, replay success
- Create dashboard API endpoints
- Enable weekly KPI reporting
**Working directory:** `src/__Libraries/StellaOps.Metrics/`
## Dependencies & Concurrency
- **Upstream**: All SPRINT_7000 sprints (uses their outputs)
- **Downstream**: None
- **Safe to parallelize with**: None (depends on other features)
## Documentation Prerequisites
- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
---
## Problem Statement
The advisory defines quality KPIs:
- % findings with non-UNKNOWN reachability
- % findings with runtime corroboration available
- False-positive reduction vs baseline
- "Explainability completeness": % verdicts with reason steps + at least one proof pointer
- Replay success rate: % attestations replaying deterministically
Currently, no infrastructure exists to track these metrics.
---
## Tasks
### T1: Define KPI Models
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: —
**Implementation Path**: `Kpi/KpiModels.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Metrics.Kpi;
/// <summary>
/// Quality KPIs for explainable triage.
/// </summary>
public sealed record TriageQualityKpis
{
/// <summary>
/// Reporting period start.
/// </summary>
public required DateTimeOffset PeriodStart { get; init; }
/// <summary>
/// Reporting period end.
/// </summary>
public required DateTimeOffset PeriodEnd { get; init; }
/// <summary>
/// Tenant ID (null for global).
/// </summary>
public string? TenantId { get; init; }
/// <summary>
/// Reachability KPIs.
/// </summary>
public required ReachabilityKpis Reachability { get; init; }
/// <summary>
/// Runtime KPIs.
/// </summary>
public required RuntimeKpis Runtime { get; init; }
/// <summary>
/// Explainability KPIs.
/// </summary>
public required ExplainabilityKpis Explainability { get; init; }
/// <summary>
/// Replay/Determinism KPIs.
/// </summary>
public required ReplayKpis Replay { get; init; }
/// <summary>
/// Unknown budget KPIs.
/// </summary>
public required UnknownBudgetKpis Unknowns { get; init; }
/// <summary>
/// Operational KPIs.
/// </summary>
public required OperationalKpis Operational { get; init; }
}
public sealed record ReachabilityKpis
{
/// <summary>
/// Total findings analyzed.
/// </summary>
public required int TotalFindings { get; init; }
/// <summary>
/// Findings with non-UNKNOWN reachability.
/// </summary>
public required int WithKnownReachability { get; init; }
/// <summary>
/// Percentage with known reachability.
/// </summary>
public decimal PercentKnown => TotalFindings > 0
? (decimal)WithKnownReachability / TotalFindings * 100
: 0;
/// <summary>
/// Breakdown by reachability state.
/// </summary>
public required IReadOnlyDictionary<string, int> ByState { get; init; }
/// <summary>
/// Findings confirmed unreachable.
/// </summary>
public int ConfirmedUnreachable =>
ByState.GetValueOrDefault("ConfirmedUnreachable", 0);
/// <summary>
/// Noise reduction (unreachable / total).
/// </summary>
public decimal NoiseReductionPercent => TotalFindings > 0
? (decimal)ConfirmedUnreachable / TotalFindings * 100
: 0;
}
public sealed record RuntimeKpis
{
/// <summary>
/// Total findings in environments with sensors.
/// </summary>
public required int TotalWithSensorDeployed { get; init; }
/// <summary>
/// Findings with runtime observations.
/// </summary>
public required int WithRuntimeCorroboration { get; init; }
/// <summary>
/// Coverage percentage.
/// </summary>
public decimal CoveragePercent => TotalWithSensorDeployed > 0
? (decimal)WithRuntimeCorroboration / TotalWithSensorDeployed * 100
: 0;
/// <summary>
/// Breakdown by posture.
/// </summary>
public required IReadOnlyDictionary<string, int> ByPosture { get; init; }
}
public sealed record ExplainabilityKpis
{
/// <summary>
/// Total verdicts generated.
/// </summary>
public required int TotalVerdicts { get; init; }
/// <summary>
/// Verdicts with reason steps.
/// </summary>
public required int WithReasonSteps { get; init; }
/// <summary>
/// Verdicts with at least one proof pointer.
/// </summary>
public required int WithProofPointer { get; init; }
/// <summary>
/// Verdicts that are "complete" (both reason steps AND proof pointer).
/// </summary>
public required int FullyExplainable { get; init; }
/// <summary>
/// Explainability completeness percentage.
/// </summary>
public decimal CompletenessPercent => TotalVerdicts > 0
? (decimal)FullyExplainable / TotalVerdicts * 100
: 0;
}
public sealed record ReplayKpis
{
/// <summary>
/// Total replay attempts.
/// </summary>
public required int TotalAttempts { get; init; }
/// <summary>
/// Successful replays (identical verdict).
/// </summary>
public required int Successful { get; init; }
/// <summary>
/// Replay success rate.
/// </summary>
public decimal SuccessRate => TotalAttempts > 0
? (decimal)Successful / TotalAttempts * 100
: 0;
/// <summary>
/// Common failure reasons.
/// </summary>
public required IReadOnlyDictionary<string, int> FailureReasons { get; init; }
}
public sealed record UnknownBudgetKpis
{
/// <summary>
/// Total environments tracked.
/// </summary>
public required int TotalEnvironments { get; init; }
/// <summary>
/// Budget breaches by environment.
/// </summary>
public required IReadOnlyDictionary<string, int> BreachesByEnvironment { get; init; }
/// <summary>
/// Total overrides/exceptions granted.
/// </summary>
public required int OverridesGranted { get; init; }
/// <summary>
/// Average override age (days).
/// </summary>
public decimal AvgOverrideAgeDays { get; init; }
}
public sealed record OperationalKpis
{
/// <summary>
/// Median time to first verdict (seconds).
/// </summary>
public required double MedianTimeToVerdictSeconds { get; init; }
/// <summary>
/// Cache hit rate for graphs/proofs.
/// </summary>
public required decimal CacheHitRate { get; init; }
/// <summary>
/// Average evidence size per scan (bytes).
/// </summary>
public required long AvgEvidenceSizeBytes { get; init; }
/// <summary>
/// 95th percentile verdict time (seconds).
/// </summary>
public required double P95VerdictTimeSeconds { get; init; }
}
```
**Acceptance Criteria**:
- [ ] All KPI categories defined
- [ ] Percentage calculations
- [ ] Breakdown dictionaries
- [ ] Period tracking
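As a sanity check on the computed properties defined above, a hypothetical instantiation (all values invented for illustration) behaves as follows:

```csharp
// Hypothetical values; illustrates only the derived percentage properties.
var kpis = new ReachabilityKpis
{
    TotalFindings = 100,
    WithKnownReachability = 75,
    ByState = new Dictionary<string, int>
    {
        ["Reachable"] = 40,
        ["ConfirmedUnreachable"] = 35,
        ["Unknown"] = 25
    }
};

// PercentKnown          => 75.0  (75 / 100 * 100)
// ConfirmedUnreachable  => 35    (read from ByState)
// NoiseReductionPercent => 35.0  (35 / 100 * 100)
```

Note that the zero-denominator guards in the records mean an empty period yields 0 rather than a divide-by-zero.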
---
### T2: Create KpiCollector Service
**Assignee**: Platform Team
**Story Points**: 5
**Status**: TODO
**Dependencies**: T1
**Implementation Path**: `Kpi/KpiCollector.cs` (new file)
**Implementation**:
```csharp
namespace StellaOps.Metrics.Kpi;
public interface IKpiCollector
{
Task<TriageQualityKpis> CollectAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId = null,
CancellationToken ct = default);
Task RecordReachabilityResultAsync(Guid findingId, string state, CancellationToken ct);
Task RecordRuntimeObservationAsync(Guid findingId, string posture, CancellationToken ct);
Task RecordVerdictAsync(Guid verdictId, bool hasReasonSteps, bool hasProofPointer, CancellationToken ct);
Task RecordReplayAttemptAsync(Guid attestationId, bool success, string? failureReason, CancellationToken ct);
}
public sealed class KpiCollector : IKpiCollector
{
    private readonly IKpiRepository _repository;
    private readonly IFindingRepository _findingRepo;
    private readonly IVerdictRepository _verdictRepo;
    private readonly IReplayRepository _replayRepo;
    private readonly ILogger<KpiCollector> _logger;
    public KpiCollector(
        IKpiRepository repository,
        IFindingRepository findingRepo,
        IVerdictRepository verdictRepo,
        IReplayRepository replayRepo,
        ILogger<KpiCollector> logger)
    {
        _repository = repository;
        _findingRepo = findingRepo;
        _verdictRepo = verdictRepo;
        _replayRepo = replayRepo;
        _logger = logger;
    }
public async Task<TriageQualityKpis> CollectAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId = null,
CancellationToken ct = default)
{
var reachability = await CollectReachabilityKpisAsync(start, end, tenantId, ct);
var runtime = await CollectRuntimeKpisAsync(start, end, tenantId, ct);
var explainability = await CollectExplainabilityKpisAsync(start, end, tenantId, ct);
var replay = await CollectReplayKpisAsync(start, end, tenantId, ct);
var unknowns = await CollectUnknownBudgetKpisAsync(start, end, tenantId, ct);
var operational = await CollectOperationalKpisAsync(start, end, tenantId, ct);
return new TriageQualityKpis
{
PeriodStart = start,
PeriodEnd = end,
TenantId = tenantId,
Reachability = reachability,
Runtime = runtime,
Explainability = explainability,
Replay = replay,
Unknowns = unknowns,
Operational = operational
};
}
private async Task<ReachabilityKpis> CollectReachabilityKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var findings = await _findingRepo.GetInPeriodAsync(start, end, tenantId, ct);
var byState = findings
.GroupBy(f => f.ReachabilityState ?? "Unknown")
.ToDictionary(g => g.Key, g => g.Count());
var withKnown = findings.Count(f =>
f.ReachabilityState is not null and not "Unknown");
return new ReachabilityKpis
{
TotalFindings = findings.Count,
WithKnownReachability = withKnown,
ByState = byState
};
}
private async Task<RuntimeKpis> CollectRuntimeKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var findings = await _findingRepo.GetWithSensorDeployedAsync(start, end, tenantId, ct);
var withRuntime = findings.Count(f => f.HasRuntimeEvidence);
var byPosture = findings
.Where(f => f.RuntimePosture is not null)
.GroupBy(f => f.RuntimePosture!)
.ToDictionary(g => g.Key, g => g.Count());
return new RuntimeKpis
{
TotalWithSensorDeployed = findings.Count,
WithRuntimeCorroboration = withRuntime,
ByPosture = byPosture
};
}
private async Task<ExplainabilityKpis> CollectExplainabilityKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var verdicts = await _verdictRepo.GetInPeriodAsync(start, end, tenantId, ct);
var withReasonSteps = verdicts.Count(v => v.ReasonSteps?.Count > 0);
var withProofPointer = verdicts.Count(v => v.ProofPointers?.Count > 0);
var fullyExplainable = verdicts.Count(v =>
v.ReasonSteps?.Count > 0 && v.ProofPointers?.Count > 0);
return new ExplainabilityKpis
{
TotalVerdicts = verdicts.Count,
WithReasonSteps = withReasonSteps,
WithProofPointer = withProofPointer,
FullyExplainable = fullyExplainable
};
}
private async Task<ReplayKpis> CollectReplayKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var replays = await _replayRepo.GetInPeriodAsync(start, end, tenantId, ct);
var successful = replays.Count(r => r.Success);
var failureReasons = replays
.Where(r => !r.Success && r.FailureReason is not null)
.GroupBy(r => r.FailureReason!)
.ToDictionary(g => g.Key, g => g.Count());
return new ReplayKpis
{
TotalAttempts = replays.Count,
Successful = successful,
FailureReasons = failureReasons
};
}
private async Task<UnknownBudgetKpis> CollectUnknownBudgetKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var breaches = await _repository.GetBudgetBreachesAsync(start, end, tenantId, ct);
var overrides = await _repository.GetOverridesAsync(start, end, tenantId, ct);
return new UnknownBudgetKpis
{
TotalEnvironments = breaches.Keys.Count,
BreachesByEnvironment = breaches,
OverridesGranted = overrides.Count,
AvgOverrideAgeDays = overrides.Any()
? (decimal)overrides.Average(o => (DateTimeOffset.UtcNow - o.GrantedAt).TotalDays)
: 0
};
}
private async Task<OperationalKpis> CollectOperationalKpisAsync(
DateTimeOffset start,
DateTimeOffset end,
string? tenantId,
CancellationToken ct)
{
var metrics = await _repository.GetOperationalMetricsAsync(start, end, tenantId, ct);
return new OperationalKpis
{
MedianTimeToVerdictSeconds = metrics.MedianVerdictTime.TotalSeconds,
CacheHitRate = metrics.CacheHitRate,
AvgEvidenceSizeBytes = metrics.AvgEvidenceSize,
P95VerdictTimeSeconds = metrics.P95VerdictTime.TotalSeconds
};
}
// Recording methods for real-time tracking
public Task RecordReachabilityResultAsync(Guid findingId, string state, CancellationToken ct) =>
_repository.IncrementCounterAsync("reachability", state, ct);
public Task RecordRuntimeObservationAsync(Guid findingId, string posture, CancellationToken ct) =>
_repository.IncrementCounterAsync("runtime", posture, ct);
public Task RecordVerdictAsync(Guid verdictId, bool hasReasonSteps, bool hasProofPointer, CancellationToken ct)
{
var label = (hasReasonSteps, hasProofPointer) switch
{
(true, true) => "fully_explainable",
(true, false) => "reasons_only",
(false, true) => "proofs_only",
(false, false) => "unexplained"
};
return _repository.IncrementCounterAsync("explainability", label, ct);
}
public Task RecordReplayAttemptAsync(Guid attestationId, bool success, string? failureReason, CancellationToken ct)
{
var label = success ? "success" : (failureReason ?? "unknown_failure");
return _repository.IncrementCounterAsync("replay", label, ct);
}
}
```
**Acceptance Criteria**:
- [ ] Collects all KPI categories
- [ ] Supports period and tenant filtering
- [ ] Real-time recording methods
- [ ] Handles missing data gracefully
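The sprint does not specify composition-root wiring; a plausible sketch (lifetimes and the repository implementation names are assumptions) would be:

```csharp
// Hypothetical DI registration; actual lifetimes and repository
// implementations depend on the Platform hosting model.
builder.Services.AddScoped<IKpiCollector, KpiCollector>();
builder.Services.AddScoped<IKpiRepository, KpiRepository>();       // assumed implementation
builder.Services.AddScoped<IFindingRepository, FindingRepository>(); // assumed implementation
```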
---
### T3: Create API Endpoints
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T2
**Implementation Path**: `src/Platform/StellaOps.Platform.WebService/Endpoints/KpiEndpoints.cs`
```csharp
namespace StellaOps.Platform.WebService.Endpoints;
public static class KpiEndpoints
{
public static void MapKpiEndpoints(this WebApplication app)
{
var group = app.MapGroup("/api/v1/metrics/kpis")
.WithTags("Quality KPIs")
.RequireAuthorization("metrics:read");
// GET /api/v1/metrics/kpis
group.MapGet("/", async (
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
IKpiCollector collector,
CancellationToken ct) =>
{
var start = from ?? DateTimeOffset.UtcNow.AddDays(-7);
var end = to ?? DateTimeOffset.UtcNow;
var kpis = await collector.CollectAsync(start, end, tenant, ct);
return Results.Ok(kpis);
})
.WithName("GetQualityKpis")
.WithDescription("Get quality KPIs for explainable triage");
// GET /api/v1/metrics/kpis/reachability
group.MapGet("/reachability", async (
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
IKpiCollector collector,
CancellationToken ct) =>
{
var kpis = await collector.CollectAsync(
from ?? DateTimeOffset.UtcNow.AddDays(-7),
to ?? DateTimeOffset.UtcNow,
tenant,
ct);
return Results.Ok(kpis.Reachability);
})
.WithName("GetReachabilityKpis");
// GET /api/v1/metrics/kpis/explainability
group.MapGet("/explainability", async (
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
IKpiCollector collector,
CancellationToken ct) =>
{
var kpis = await collector.CollectAsync(
from ?? DateTimeOffset.UtcNow.AddDays(-7),
to ?? DateTimeOffset.UtcNow,
tenant,
ct);
return Results.Ok(kpis.Explainability);
})
.WithName("GetExplainabilityKpis");
// GET /api/v1/metrics/kpis/trend
group.MapGet("/trend", async (
[FromQuery] string? tenant,
IKpiTrendService trendService,
CancellationToken ct,
[FromQuery] int days = 30) =>
{
var trend = await trendService.GetTrendAsync(days, tenant, ct);
return Results.Ok(trend);
})
.WithName("GetKpiTrend")
.WithDescription("Get KPI trend over time");
}
}
```
**Acceptance Criteria**:
- [ ] Main KPI endpoint
- [ ] Category-specific endpoints
- [ ] Trend endpoint
- [ ] Period filtering
---
### T4: Add Tests
**Assignee**: Platform Team
**Story Points**: 2
**Status**: TODO
**Dependencies**: T1-T3
**Test Cases**:
```csharp
public class KpiCollectorTests
{
[Fact]
public async Task CollectAsync_ReturnsAllCategories()
{
var result = await _collector.CollectAsync(
DateTimeOffset.UtcNow.AddDays(-7),
DateTimeOffset.UtcNow,
ct: CancellationToken.None);
result.Reachability.Should().NotBeNull();
result.Runtime.Should().NotBeNull();
result.Explainability.Should().NotBeNull();
result.Replay.Should().NotBeNull();
}
[Fact]
public async Task CollectAsync_CalculatesPercentagesCorrectly()
{
SetupTestData(totalFindings: 100, withKnownReachability: 75);
var result = await _collector.CollectAsync(
DateTimeOffset.UtcNow.AddDays(-7),
DateTimeOffset.UtcNow,
ct: CancellationToken.None);
result.Reachability.PercentKnown.Should().Be(75m);
}
[Fact]
public async Task RecordVerdictAsync_IncrementsCorrectCounter()
{
await _collector.RecordVerdictAsync(
Guid.NewGuid(),
hasReasonSteps: true,
hasProofPointer: true,
CancellationToken.None);
_repository.Verify(r => r.IncrementCounterAsync(
"explainability", "fully_explainable", It.IsAny<CancellationToken>()));
}
}
```
**Acceptance Criteria**:
- [ ] Collection tests
- [ ] Calculation tests
- [ ] Recording tests
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Platform Team | Define KPI models |
| 2 | T2 | TODO | T1 | Platform Team | Create KpiCollector service |
| 3 | T3 | TODO | T2 | Platform Team | Create API endpoints |
| 4 | T4 | TODO | T1-T3 | Platform Team | Add tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint created from Explainable Triage Workflows advisory gap analysis. | Claude |
---
## Success Criteria
- [ ] All 4 tasks marked DONE
- [ ] All KPI categories tracked
- [ ] Dashboard API functional
- [ ] Historical trend available
- [ ] All tests pass
# Sprint Epic 7000 - Competitive Moat & Explainable Triage
## Overview
Epic 7000 encompasses two major capability sets:
1. **Competitive Benchmarking** (batch 0001): Verifiable competitive differentiation through benchmarking infrastructure, SBOM lineage semantics, auditor-grade explainability, and integrated three-layer reachability analysis. *Source: 19-Dec-2025 advisory*
2. **Explainable Triage Workflows** (batches 0002-0005): Policy-backed, reachability-informed, runtime-corroborated verdicts with full explainability and auditability. *Source: 21-Dec-2025 advisory*
**IMPLID**: 7000 (Competitive Moat & Explainable Triage)
**Total Sprints**: 12
**Total Tasks**: 68
**Source Advisories**:
- `docs/product-advisories/archived/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md`
- `docs/product-advisories/archived/21-Dec-2025 - Designing Explainable Triage Workflows.md`
---
## Gap Analysis Summary
| Gap | Severity | Sprint | Status |
|-----|----------|--------|--------|
| No competitive benchmarking infrastructure | HIGH | 7000.0001.0001 | TODO |
| SBOM as static document, no lineage/versioning | HIGH | 7000.0001.0002 | TODO |
| No assumption-set or falsifiability tracking | HIGH | 7000.0001.0003 | TODO |
| 3-layer reachability not integrated | MEDIUM | 7000.0001.0004 | TODO |
---
## Epic Structure
### Phase 1: Benchmarking Foundation
| Sprint | Name | Tasks | Priority | Duration |
|--------|------|-------|----------|----------|
| 7000.0001.0001 | [Competitive Benchmarking Infrastructure](SPRINT_7000_0001_0001_competitive_benchmarking.md) | 7 | HIGH | 2 weeks |
**Key Deliverables**:
- Reference corpus with ground-truth annotations
- Comparison harness for Trivy, Grype, Syft
- Precision/recall/F1 metrics
- Claims index with verifiable evidence
- Marketing battlecard generator
---
### Phase 2: SBOM Evolution
| Sprint | Name | Tasks | Priority | Duration |
|--------|------|-------|----------|----------|
| 7000.0001.0002 | [SBOM Lineage & Repository Semantics](SPRINT_7000_0001_0002_sbom_lineage.md) | 7 | HIGH | 2 weeks |
**Key Deliverables**:
- SBOM lineage DAG with content-addressable storage
- Semantic diff engine (component-level deltas)
- Rebuild reproducibility proof manifest
- Lineage traversal API
---
### Phase 3: Explainability Enhancement
| Sprint | Name | Tasks | Priority | Duration |
|--------|------|-------|----------|----------|
| 7000.0001.0003 | [Explainability with Assumptions & Falsifiability](SPRINT_7000_0001_0003_explainability.md) | 7 | HIGH | 2 weeks |
**Key Deliverables**:
- Assumption-set model (compiler flags, runtime config, feature gates)
- Falsifiability criteria ("what would disprove this?")
- Evidence-density confidence scorer
- Updated DSSE predicate schema
---
### Phase 4: Reachability Integration
| Sprint | Name | Tasks | Priority | Duration |
|--------|------|-------|----------|----------|
| 7000.0001.0004 | [Three-Layer Reachability Integration](SPRINT_7000_0001_0004_three_layer_reachability.md) | 7 | MEDIUM | 2 weeks |
**Key Deliverables**:
- `ReachabilityStack` composite model
- Layer 2: Binary loader resolution (ELF/PE)
- Layer 3: Feature flag / config gating
- "All-three-align" exploitability proof
---
## Batch 2: Explainable Triage Foundation
### Phase 5: Confidence & UX
| Sprint | Name | Tasks | Priority |
|--------|------|-------|----------|
| 7000.0002.0001 | [Unified Confidence Model](SPRINT_7000_0002_0001_unified_confidence_model.md) | 5 | HIGH |
| 7000.0002.0002 | [Vulnerability-First UX API](SPRINT_7000_0002_0002_vulnerability_first_ux_api.md) | 5 | HIGH |
**Key Deliverables**:
- `ConfidenceScore` with 5-factor breakdown (Reachability, Runtime, VEX, Provenance, Policy)
- `FindingSummaryResponse` with verdict chip, confidence chip, one-liner
- `ProofBadges` for visual evidence indicators
- Findings list and detail API endpoints
---
### Phase 6: Visualization APIs
| Sprint | Name | Tasks | Priority |
|--------|------|-------|----------|
| 7000.0003.0001 | [Evidence Graph API](SPRINT_7000_0003_0001_evidence_graph_api.md) | 4 | MEDIUM |
| 7000.0003.0002 | [Reachability Mini-Map API](SPRINT_7000_0003_0002_reachability_minimap_api.md) | 4 | MEDIUM |
| 7000.0003.0003 | [Runtime Timeline API](SPRINT_7000_0003_0003_runtime_timeline_api.md) | 4 | MEDIUM |
**Key Deliverables**:
- Evidence graph with nodes, edges, signature status
- Reachability mini-map with condensed call paths
- Runtime timeline with time-windowed observations and posture
---
### Phase 7: Fidelity & Budgets
| Sprint | Name | Tasks | Priority |
|--------|------|-------|----------|
| 7000.0004.0001 | [Progressive Fidelity Mode](SPRINT_7000_0004_0001_progressive_fidelity.md) | 5 | HIGH |
| 7000.0004.0002 | [Evidence Size Budgets](SPRINT_7000_0004_0002_evidence_size_budgets.md) | 4 | MEDIUM |
**Key Deliverables**:
- `FidelityLevel` enum with Quick/Standard/Deep modes
- Fidelity-aware analyzer orchestration with timeouts
- `EvidenceBudget` with per-scan caps
- Retention tier management (Hot/Warm/Cold/Archive)
---
### Phase 8: Metrics & Observability
| Sprint | Name | Tasks | Priority |
|--------|------|-------|----------|
| 7000.0005.0001 | [Quality KPIs Tracking](SPRINT_7000_0005_0001_quality_kpis_tracking.md) | 5 | MEDIUM |
**Key Deliverables**:
- `TriageQualityKpis` model
- KPI collection and snapshotting
- Dashboard API endpoint
---
## Dependency Graph
```mermaid
graph TD
subgraph Batch1["Batch 1: Competitive Moat"]
S7001[7000.0001.0001<br/>Benchmarking]
S7002[7000.0001.0002<br/>SBOM Lineage]
S7003[7000.0001.0003<br/>Explainability]
S7004[7000.0001.0004<br/>3-Layer Reach]
S7001 --> S7002
S7002 --> S7004
S7003 --> S7004
end
subgraph Batch2["Batch 2: Explainable Triage"]
S7021[7000.0002.0001<br/>Confidence Model]
S7022[7000.0002.0002<br/>UX API]
S7031[7000.0003.0001<br/>Evidence Graph]
S7032[7000.0003.0002<br/>Mini-Map]
S7033[7000.0003.0003<br/>Timeline]
S7041[7000.0004.0001<br/>Fidelity]
S7042[7000.0004.0002<br/>Budgets]
S7051[7000.0005.0001<br/>KPIs]
S7021 --> S7022
S7022 --> S7031
S7022 --> S7032
S7022 --> S7033
S7021 --> S7051
end
subgraph External["Related Sprints"]
S4200[4200.0001.0002<br/>VEX Lattice]
S4500[4500.0002.0001<br/>VEX Conflict Studio]
S3500[3500 Series<br/>Score Proofs - DONE]
S4100[4100.0003.0001<br/>Risk Verdict]
end
S7001 --> S4500
S3500 --> S7003
S7021 --> S4100
```
---
## Integration Points
### Scanner Module
- `StellaOps.Scanner.Benchmark` - New library for competitor comparison
- `StellaOps.Scanner.Emit` - Enhanced with lineage tracking
- `StellaOps.Scanner.Reachability` - 3-layer stack integration
### Policy Module
- `StellaOps.Policy.Explainability` - Assumption-set and falsifiability models
### Attestor Module
- Updated predicate schemas for explainability fields
---
## Success Criteria
### Batch 1: Competitive Moat
#### Sprint 7000.0001.0001 (Benchmarking)
- [ ] 50+ image corpus with ground-truth annotations
- [ ] Automated comparison against Trivy, Grype, Syft
- [ ] Precision/recall metrics published
- [ ] Claims index with evidence links
#### Sprint 7000.0001.0002 (SBOM Lineage)
- [ ] SBOM versioning with content-addressable storage
- [ ] Semantic diff between SBOM versions
- [ ] Lineage API operational
- [ ] Deterministic diff output
#### Sprint 7000.0001.0003 (Explainability)
- [ ] Assumption-set tracked for all findings
- [ ] Falsifiability criteria in explainer output
- [ ] Evidence-density confidence scores
- [ ] UI widget for assumption drill-down
#### Sprint 7000.0001.0004 (3-Layer Reachability)
- [ ] All 3 layers integrated in reachability analysis
- [ ] Binary loader resolution for ELF/PE
- [ ] Feature flag gating detection
- [ ] "Structurally proven" exploitability tier
### Batch 2: Explainable Triage
#### Sprint 7000.0002.0001 (Unified Confidence Model)
- [ ] ConfidenceScore model with 5-factor breakdown
- [ ] ConfidenceCalculator service
- [ ] Factor explanations with evidence links
- [ ] Bounded 0.0-1.0 scores
#### Sprint 7000.0002.0002 (Vulnerability-First UX API)
- [ ] FindingSummaryResponse with verdict/confidence chips
- [ ] ProofBadges for visual indicators
- [ ] Findings list and detail endpoints
- [ ] Drill-down into evidence graph
#### Sprint 7000.0003.0001 (Evidence Graph API)
- [ ] EvidenceGraphResponse with nodes and edges
- [ ] Signature status per evidence node
- [ ] Click-through to raw evidence
- [ ] OpenAPI documentation
#### Sprint 7000.0003.0002 (Reachability Mini-Map API)
- [ ] Condensed call paths
- [ ] Entrypoint to vulnerable component visualization
- [ ] Depth-limited graph extraction
- [ ] Path highlighting
#### Sprint 7000.0003.0003 (Runtime Timeline API)
- [ ] Time-windowed observation buckets
- [ ] Posture determination (Supports/Contradicts/Unknown)
- [ ] Significant event extraction
- [ ] Session correlation
#### Sprint 7000.0004.0001 (Progressive Fidelity)
- [ ] FidelityLevel enum (Quick/Standard/Deep)
- [ ] Fidelity-aware analyzer orchestration
- [ ] Configurable timeouts per level
- [ ] Fidelity upgrade endpoint
#### Sprint 7000.0004.0002 (Evidence Size Budgets)
- [ ] Per-scan evidence caps
- [ ] Retention tier management
- [ ] Size tracking and pruning
- [ ] Budget configuration API
#### Sprint 7000.0005.0001 (Quality KPIs)
- [ ] % non-UNKNOWN reachability >80%
- [ ] % runtime corroboration >50%
- [ ] Explainability completeness >95%
- [ ] Dashboard endpoint operational
---
## Module Structure
### Batch 1: Competitive Moat
```
src/Scanner/
├── __Libraries/
│ ├── StellaOps.Scanner.Benchmark/ # NEW: Competitor comparison
│ │ ├── Corpus/ # Ground-truth corpus
│ │ ├── Harness/ # Comparison harness
│ │ ├── Metrics/ # Precision/recall
│ │ └── Claims/ # Claims index
│ ├── StellaOps.Scanner.Emit/ # ENHANCED
│ │ └── Lineage/ # SBOM lineage tracking
│ ├── StellaOps.Scanner.Explainability/ # NEW: Assumption/falsifiability
│ └── StellaOps.Scanner.Reachability/ # ENHANCED
│ └── Stack/ # 3-layer integration
src/Policy/
├── __Libraries/
│ └── StellaOps.Policy.Explainability/ # NEW: Assumption models
```
### Batch 2: Explainable Triage
```
src/
├── Policy/
│ └── __Libraries/
│ └── StellaOps.Policy.Confidence/ # NEW: Confidence model
│ ├── Models/
│ │ ├── ConfidenceScore.cs
│ │ └── ConfidenceFactor.cs
│ └── Services/
│ └── ConfidenceCalculator.cs
├── Scanner/
│ └── __Libraries/
│ └── StellaOps.Scanner.Orchestration/ # NEW: Fidelity orchestration
│ └── Fidelity/
│ ├── FidelityLevel.cs
│ └── FidelityAwareAnalyzer.cs
├── Findings/
│ └── StellaOps.Findings.WebService/ # EXTEND: UX APIs
│ ├── Contracts/
│ │ ├── FindingSummaryResponse.cs
│ │ ├── EvidenceGraphResponse.cs
│ │ ├── ReachabilityMiniMap.cs
│ │ └── RuntimeTimeline.cs
│ └── Endpoints/
│ ├── FindingsEndpoints.cs
│ ├── EvidenceGraphEndpoints.cs
│ ├── ReachabilityMapEndpoints.cs
│ └── RuntimeTimelineEndpoints.cs
├── Evidence/ # NEW: Evidence management
│ └── StellaOps.Evidence/
│ ├── Budgets/
│ └── Retention/
└── Metrics/ # NEW: KPI tracking
└── StellaOps.Metrics/
└── Kpi/
├── TriageQualityKpis.cs
└── KpiCollector.cs
```
---
## Documentation Created
### Batch 1: Competitive Moat
| Document | Location | Purpose |
|----------|----------|---------|
| Sprint Summary | `docs/implplan/SPRINT_7000_SUMMARY.md` | This file |
| Benchmarking Sprint | `docs/implplan/SPRINT_7000_0001_0001_competitive_benchmarking.md` | Sprint details |
| SBOM Lineage Sprint | `docs/implplan/SPRINT_7000_0001_0002_sbom_lineage.md` | Sprint details |
| Explainability Sprint | `docs/implplan/SPRINT_7000_0001_0003_explainability.md` | Sprint details |
| 3-Layer Reachability Sprint | `docs/implplan/SPRINT_7000_0001_0004_three_layer_reachability.md` | Sprint details |
| Claims Index | `docs/claims-index.md` | Verifiable competitive claims |
| Benchmark Architecture | `docs/modules/benchmark/architecture.md` | Module dossier |
### Batch 2: Explainable Triage
| Document | Location | Purpose |
|----------|----------|---------|
| Implementation Plan | `docs/modules/platform/explainable-triage-implementation-plan.md` | High-level plan |
| Unified Confidence Model | `docs/implplan/SPRINT_7000_0002_0001_unified_confidence_model.md` | Sprint details |
| Vulnerability-First UX API | `docs/implplan/SPRINT_7000_0002_0002_vulnerability_first_ux_api.md` | Sprint details |
| Evidence Graph API | `docs/implplan/SPRINT_7000_0003_0001_evidence_graph_api.md` | Sprint details |
| Reachability Mini-Map API | `docs/implplan/SPRINT_7000_0003_0002_reachability_minimap_api.md` | Sprint details |
| Runtime Timeline API | `docs/implplan/SPRINT_7000_0003_0003_runtime_timeline_api.md` | Sprint details |
| Progressive Fidelity Mode | `docs/implplan/SPRINT_7000_0004_0001_progressive_fidelity.md` | Sprint details |
| Evidence Size Budgets | `docs/implplan/SPRINT_7000_0004_0002_evidence_size_budgets.md` | Sprint details |
| Quality KPIs Tracking | `docs/implplan/SPRINT_7000_0005_0001_quality_kpis_tracking.md` | Sprint details |
---
## Related Work
### Completed (Leverage)
- **Sprint 3500**: Score Proofs, Unknowns Registry, Reachability foundations
- **Sprint 3600**: CycloneDX 1.7, SPDX 3.0.1 generation
- **EntryTrace**: Semantic, temporal, mesh, binary intelligence
### In Progress (Coordinate)
- **Sprint 4100**: Unknowns decay, knowledge snapshots
- **Sprint 4200**: Triage API, policy lattice
- **Sprint 5100**: Comprehensive testing strategy
- **Sprint 6000**: BinaryIndex module
### Planned (Accelerate)
- **Sprint 4500.0002.0001**: VEX Conflict Studio
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Batch 1 (Competitive Moat) created from 19-Dec-2025 advisory. 4 sprints defined. | Agent |
| 2025-12-22 | Batch 2 (Explainable Triage) added from 21-Dec-2025 advisory. 8 sprints defined (73 story points). | Claude |
---
**Epic Status**: PLANNING (0/12 sprints complete)
# Sprint 7100.0001.0001 — Trust Vector Foundation
## Topic & Scope
- Implement the foundational 3-component trust vector model (Provenance, Coverage, Replayability) for VEX sources.
- Create claim scoring with strength multipliers and freshness decay.
- Extend VexProvider to support trust vector configuration.
- **Working directory:** `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/`
## Dependencies & Concurrency
- **Upstream**: None (foundational sprint)
- **Downstream**: Sprint 7100.0001.0002 (Verdict Manifest) depends on this
- **Safe to parallelize with**: Unrelated epics
## Documentation Prerequisites
- `docs/product-advisories/archived/22-Dec-2025 - Building a Trust Lattice for VEX Sources.md`
- `docs/modules/excititor/architecture.md`
- `docs/modules/excititor/scoring.md`
---
## Tasks
### T1: TrustVector Record
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create the core TrustVector record with P/C/R components and configurable weights.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/TrustVector.cs`
**Acceptance Criteria**:
- [ ] `TrustVector` record with Provenance, Coverage, Replayability scores
- [ ] `TrustWeights` record with wP, wC, wR (defaults: 0.45, 0.35, 0.20)
- [ ] `BaseTrust` computed property: `wP*P + wC*C + wR*R`
- [ ] Validation: all scores in [0..1] range
- [ ] Immutable, deterministic equality
**Domain Model Spec**:
```csharp
/// <summary>
/// 3-component trust vector for VEX sources.
/// </summary>
public sealed record TrustVector
{
/// <summary>Provenance score: cryptographic & process integrity [0..1].</summary>
public required double Provenance { get; init; }
/// <summary>Coverage score: how well the statement scope maps to the asset [0..1].</summary>
public required double Coverage { get; init; }
/// <summary>Replayability score: can we deterministically re-derive the claim? [0..1].</summary>
public required double Replayability { get; init; }
/// <summary>Compute base trust using provided weights.</summary>
public double ComputeBaseTrust(TrustWeights weights)
=> weights.WP * Provenance + weights.WC * Coverage + weights.WR * Replayability;
}
/// <summary>
/// Configurable weights for trust vector components.
/// </summary>
public sealed record TrustWeights
{
public double WP { get; init; } = 0.45;
public double WC { get; init; } = 0.35;
public double WR { get; init; } = 0.20;
public static TrustWeights Default => new();
}
```
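To make the weighting concrete, a worked example using the default weights (the component scores are hypothetical):

```csharp
var vector = new TrustVector
{
    Provenance = 1.00,    // DSSE + Rekor + allow-listed key
    Coverage = 0.75,      // exact package + version range
    Replayability = 0.60  // inputs mostly pinned
};

// 0.45 * 1.00 + 0.35 * 0.75 + 0.20 * 0.60 = 0.8325
var baseTrust = vector.ComputeBaseTrust(TrustWeights.Default);
```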
---
### T2: Provenance Scoring Rules
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement provenance score calculation based on cryptographic and process integrity.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/ProvenanceScorer.cs`
**Acceptance Criteria**:
- [ ] Score 1.00: DSSE-signed, timestamped, Rekor/Git anchored, key in allow-list
- [ ] Score 0.75: DSSE-signed + public key known, no transparency log
- [ ] Score 0.40: Unsigned but authenticated, immutable artifact repo
- [ ] Score 0.10: Opaque/CSV/email/manual import
- [ ] `IProvenanceScorer` interface for extensibility
- [ ] Unit tests for each scoring tier
**Scoring Table**:
```csharp
public static class ProvenanceScores
{
public const double FullyAttested = 1.00; // DSSE + Rekor + key allow-list
public const double SignedNoLog = 0.75; // DSSE + known key, no log
public const double AuthenticatedUnsigned = 0.40; // Immutable repo, no sig
public const double ManualImport = 0.10; // Opaque/CSV/email
}
```
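One possible shape for the `IProvenanceScorer` named in the acceptance criteria. The evidence-flag inputs are assumptions for illustration, not a settled contract:

```csharp
public interface IProvenanceScorer
{
    double Score(ProvenanceEvidence evidence);
}

/// <summary>Hypothetical evidence flags; the real inputs may differ.</summary>
public sealed record ProvenanceEvidence(
    bool DsseSigned,
    bool TransparencyLogAnchored,
    bool KeyInAllowList,
    bool ImmutableRepo);

public sealed class DefaultProvenanceScorer : IProvenanceScorer
{
    // Maps evidence to the tiers in the scoring table above,
    // from strongest attestation down to manual import.
    public double Score(ProvenanceEvidence e) => e switch
    {
        { DsseSigned: true, TransparencyLogAnchored: true, KeyInAllowList: true }
            => ProvenanceScores.FullyAttested,
        { DsseSigned: true } => ProvenanceScores.SignedNoLog,
        { ImmutableRepo: true } => ProvenanceScores.AuthenticatedUnsigned,
        _ => ProvenanceScores.ManualImport
    };
}
```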
---
### T3: Coverage Scoring Rules
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement coverage score calculation based on scope matching precision.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/CoverageScorer.cs`
**Acceptance Criteria**:
- [ ] Score 1.00: Exact package + version/build digest + feature/flag context matched
- [ ] Score 0.75: Exact pkg + version range matched; partial feature context
- [ ] Score 0.50: Product-level only; maps via CPE/PURL family
- [ ] Score 0.25: Family-level heuristics; no version proof
- [ ] `ICoverageScorer` interface for extensibility
- [ ] Unit tests for each scoring tier
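Mirroring the provenance scoring table, the tiers above could be captured as constants (a sketch; the constant names are assumptions):

```csharp
public static class CoverageScores
{
    public const double ExactWithContext  = 1.00; // pkg + version/build digest + feature context
    public const double ExactVersionRange = 0.75; // exact pkg + version range, partial context
    public const double ProductLevel      = 0.50; // CPE/PURL family mapping only
    public const double FamilyHeuristic   = 0.25; // family-level heuristics, no version proof
}
```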
---
### T4: Replayability Scoring Rules
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement replayability score calculation based on input pinning.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/ReplayabilityScorer.cs`
**Acceptance Criteria**:
- [ ] Score 1.00: All inputs pinned (feeds, SBOM hash, ruleset hash, lattice version); replays byte-identical
- [ ] Score 0.60: Inputs mostly pinned; non-deterministic ordering tolerated but stable outcome
- [ ] Score 0.20: Ephemeral APIs; no snapshot
- [ ] `IReplayabilityScorer` interface for extensibility
- [ ] Unit tests for each scoring tier
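As with the other scorers, the three replayability tiers could be captured as constants; a sketch (names hypothetical):

```csharp
public static class ReplayabilityScores
{
    public const double FullyPinned  = 1.00; // all inputs pinned; replays byte-identical
    public const double MostlyPinned = 0.60; // stable outcome despite non-deterministic ordering
    public const double Ephemeral    = 0.20; // ephemeral APIs, no snapshot
}
```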
---
### T5: ClaimStrength Enum
**Assignee**: Excititor Team
**Story Points**: 2
**Status**: TODO
**Description**:
Create claim strength enum with evidence-based multipliers.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/ClaimStrength.cs`
**Acceptance Criteria**:
- [ ] `ClaimStrength` enum with numeric multiplier values
- [ ] `ExploitabilityWithReachability` = 1.00 (analysis + reachability proof subgraph)
- [ ] `ConfigWithEvidence` = 0.80 (config/feature-flag with evidence)
- [ ] `VendorBlanket` = 0.60 (vendor blanket statement)
- [ ] `UnderInvestigation` = 0.40 (investigation in progress)
- [ ] Extension method `ToMultiplier()` for calculations
**Domain Model Spec**:
```csharp
public enum ClaimStrength
{
/// <summary>Exploitability analysis with reachability proof subgraph.</summary>
ExploitabilityWithReachability = 100,
/// <summary>Config/feature-flag reason with evidence.</summary>
ConfigWithEvidence = 80,
/// <summary>Vendor blanket statement.</summary>
VendorBlanket = 60,
/// <summary>Under investigation.</summary>
UnderInvestigation = 40
}
public static class ClaimStrengthExtensions
{
public static double ToMultiplier(this ClaimStrength strength)
=> (int)strength / 100.0;
}
```
---
### T6: FreshnessCalculator
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement freshness decay calculation with configurable half-life.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/FreshnessCalculator.cs`
**Acceptance Criteria**:
- [ ] Exponential decay formula: `F = exp(-ln(2) * age_days / half_life)`
- [ ] Configurable half-life (default 90 days)
- [ ] Configurable floor (default 0.35, minimum freshness)
- [ ] `Compute(DateTimeOffset issuedAt, DateTimeOffset cutoff)` method
- [ ] Pure function, deterministic output
- [ ] Unit tests for decay curve, boundary conditions
**Implementation Spec**:
```csharp
public sealed class FreshnessCalculator
{
public double HalfLifeDays { get; init; } = 90.0;
public double Floor { get; init; } = 0.35;
public double Compute(DateTimeOffset issuedAt, DateTimeOffset cutoff)
{
var ageDays = (cutoff - issuedAt).TotalDays;
if (ageDays < 0) return 1.0; // Future date, full freshness
var decay = Math.Exp(-Math.Log(2) * ageDays / HalfLifeDays);
return Math.Max(decay, Floor);
}
}
```
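With the defaults above (90-day half-life, 0.35 floor), the decay curve behaves as follows; an illustrative usage:

```csharp
var calc = new FreshnessCalculator();
var issued = new DateTimeOffset(2025, 1, 1, 0, 0, 0, TimeSpan.Zero);

calc.Compute(issued, issued);               // 1.00 — just issued
calc.Compute(issued, issued.AddDays(90));   // 0.50 — exactly one half-life
calc.Compute(issued, issued.AddDays(180));  // 0.35 — raw decay 0.25, clamped to floor
```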
---
### T7: ClaimScoreCalculator
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement the complete claim score calculation: `ClaimScore = BaseTrust(S) * M * F`.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/ClaimScoreCalculator.cs`
**Acceptance Criteria**:
- [ ] `IClaimScoreCalculator` interface
- [ ] `ClaimScoreCalculator` implementation
- [ ] `Compute(TrustVector, TrustWeights, ClaimStrength, DateTimeOffset issuedAt, DateTimeOffset cutoff)` method
- [ ] Returns `ClaimScoreResult` with score + breakdown (baseTrust, strength, freshness)
- [ ] Pure function, deterministic output
- [ ] Unit tests for various input combinations
**Domain Model Spec**:
```csharp
public sealed record ClaimScoreResult
{
public required double Score { get; init; }
public required double BaseTrust { get; init; }
public required double StrengthMultiplier { get; init; }
public required double FreshnessMultiplier { get; init; }
public required TrustVector Vector { get; init; }
public required TrustWeights Weights { get; init; }
}
public interface IClaimScoreCalculator
{
ClaimScoreResult Compute(
TrustVector vector,
TrustWeights weights,
ClaimStrength strength,
DateTimeOffset issuedAt,
DateTimeOffset cutoff);
}
```
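The formula `ClaimScore = BaseTrust(S) * M * F` expands to a weighted dot product over the trust vector. A minimal sketch of the calculator, assuming the T1 `TrustVector`/`TrustWeights` records and the T6 `FreshnessCalculator` above:

```csharp
public sealed class ClaimScoreCalculator : IClaimScoreCalculator
{
    private readonly FreshnessCalculator _freshness = new();

    public ClaimScoreResult Compute(
        TrustVector vector,
        TrustWeights weights,
        ClaimStrength strength,
        DateTimeOffset issuedAt,
        DateTimeOffset cutoff)
    {
        // BaseTrust(S) = wP*P + wC*C + wR*R (weights default to 0.45/0.35/0.20)
        var baseTrust = weights.WP * vector.Provenance
                      + weights.WC * vector.Coverage
                      + weights.WR * vector.Replayability;
        var m = strength.ToMultiplier();
        var f = _freshness.Compute(issuedAt, cutoff);

        return new ClaimScoreResult
        {
            Score = baseTrust * m * f,
            BaseTrust = baseTrust,
            StrengthMultiplier = m,
            FreshnessMultiplier = f,
            Vector = vector,
            Weights = weights
        };
    }
}
```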
---
### T8: Extend VexProvider with TrustVector
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Extend the existing VexProvider model to support TrustVector configuration.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/VexProvider.cs`
**Acceptance Criteria**:
- [ ] Add `TrustVector? Vector` property to `VexProviderTrust`
- [ ] Backward compatibility: if Vector is null, fall back to legacy Weight
- [ ] Add `TrustWeights? Weights` property for per-provider weight overrides
- [ ] Migration path from legacy Weight to TrustVector documented
- [ ] Unit tests for backward compatibility
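One possible shape for the legacy fallback; a sketch only — the existing `VexProviderTrust` record's fields may differ, and `EffectiveBaseTrust` is a hypothetical helper illustrating the rule "null Vector means legacy Weight":

```csharp
public sealed record VexProviderTrust
{
    public double Weight { get; init; } = 1.0;   // legacy single weight
    public TrustVector? Vector { get; init; }    // null => legacy mode
    public TrustWeights? Weights { get; init; }  // per-provider override

    /// <summary>Effective base trust: vector-based when configured, else legacy weight.</summary>
    public double EffectiveBaseTrust(TrustWeights defaults)
    {
        if (Vector is null) return Weight;
        var w = Weights ?? defaults;
        return w.WP * Vector.Provenance + w.WC * Vector.Coverage + w.WR * Vector.Replayability;
    }
}
```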
---
### T9: Unit Tests — Determinism Validation
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Comprehensive unit tests ensuring deterministic scoring across all components.
**Implementation Path**: `src/Excititor/__Tests/StellaOps.Excititor.Core.Tests/TrustVector/`
**Acceptance Criteria**:
- [ ] TrustVector construction and validation tests
- [ ] ProvenanceScorer tests for all tiers
- [ ] CoverageScorer tests for all tiers
- [ ] ReplayabilityScorer tests for all tiers
- [ ] FreshnessCalculator decay curve tests
- [ ] ClaimScoreCalculator integration tests
- [ ] Determinism tests: same inputs → identical outputs (1000 iterations)
- [ ] Boundary condition tests (edge values, nulls, extremes)
- [ ] Test coverage ≥90%
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Excititor Team | TrustVector Record |
| 2 | T2 | TODO | T1 | Excititor Team | Provenance Scoring Rules |
| 3 | T3 | TODO | T1 | Excititor Team | Coverage Scoring Rules |
| 4 | T4 | TODO | T1 | Excititor Team | Replayability Scoring Rules |
| 5 | T5 | TODO | — | Excititor Team | ClaimStrength Enum |
| 6 | T6 | TODO | — | Excititor Team | FreshnessCalculator |
| 7 | T7 | TODO | T1-T6 | Excititor Team | ClaimScoreCalculator |
| 8 | T8 | TODO | T1 | Excititor Team | Extend VexProvider |
| 9 | T9 | TODO | T1-T8 | Excititor Team | Unit Tests — Determinism |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Weight defaults | Decision | Excititor Team | Using wP=0.45, wC=0.35, wR=0.20 per advisory |
| Freshness floor | Decision | Excititor Team | 0.35 floor prevents complete decay to zero |
| Backward compatibility | Risk | Excititor Team | Legacy single-weight mode must work during transition |
| Scorer extensibility | Decision | Excititor Team | Interface-based design allows custom scoring rules |
---
**Sprint Status**: TODO (0/9 tasks complete)

# Sprint 7100.0001.0002 — Verdict Manifest & Deterministic Replay
## Topic & Scope
- Implement DSSE-signed verdict manifests for replayable VEX decisions.
- Create PostgreSQL storage and indexing for verdict manifests.
- Build replay verification endpoint.
- **Working directory:** `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/`
## Dependencies & Concurrency
- **Upstream**: Sprint 7100.0001.0001 (Trust Vector Foundation)
- **Downstream**: Sprint 7100.0002.0002 (Calibration), Sprint 7100.0003.0001 (UI)
- **Safe to parallelize with**: Sprint 7100.0002.0001 (Policy Gates)
## Documentation Prerequisites
- `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
- `docs/modules/authority/architecture.md`
- `docs/modules/excititor/architecture.md`
- `src/Attestor/__Libraries/StellaOps.Attestor.Dsse/` (DSSE implementation)
---
## Tasks
### T1: VerdictManifest Domain Model
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create the VerdictManifest model that captures all inputs and outputs for deterministic replay.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/VerdictManifest.cs`
**Acceptance Criteria**:
- [ ] `VerdictManifest` record with all required fields
- [ ] Input pinning: SBOM digests, vuln feed snapshot IDs, VEX document digests
- [ ] Output fields: status, confidence, explanation, evidence refs
- [ ] Policy context: policy_hash, lattice_version
- [ ] Immutable, deterministic serialization
- [ ] JSON Schema validation
**Domain Model Spec**:
```csharp
public sealed record VerdictManifest
{
// Identity
public required string ManifestId { get; init; }
public required string Tenant { get; init; }
// Scope
public required string AssetDigest { get; init; }
public required string VulnerabilityId { get; init; }
// Inputs (pinned for replay)
public required VerdictInputs Inputs { get; init; }
// Verdict
public required VerdictResult Result { get; init; }
// Policy context
public required string PolicyHash { get; init; }
public required string LatticeVersion { get; init; }
// Metadata
public required DateTimeOffset EvaluatedAt { get; init; }
public required string ManifestDigest { get; init; }
}
public sealed record VerdictInputs
{
public required ImmutableArray<string> SbomDigests { get; init; }
public required ImmutableArray<string> VulnFeedSnapshotIds { get; init; }
public required ImmutableArray<string> VexDocumentDigests { get; init; }
public required ImmutableArray<string> ReachabilityGraphIds { get; init; }
public required DateTimeOffset ClockCutoff { get; init; }
}
public sealed record VerdictResult
{
public required VexStatus Status { get; init; }
public required double Confidence { get; init; }
public required ImmutableArray<VerdictExplanation> Explanations { get; init; }
public required ImmutableArray<string> EvidenceRefs { get; init; }
}
public sealed record VerdictExplanation
{
public required string SourceId { get; init; }
public required string Reason { get; init; }
public required double ProvenanceScore { get; init; }
public required double CoverageScore { get; init; }
public required double ReplayabilityScore { get; init; }
public required double StrengthMultiplier { get; init; }
public required double FreshnessMultiplier { get; init; }
public required double ClaimScore { get; init; }
}
```
---
### T2: VerdictManifestBuilder
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create builder for deterministic assembly of verdict manifests with stable ordering.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/VerdictManifestBuilder.cs`
**Acceptance Criteria**:
- [ ] Fluent builder API for manifest construction
- [ ] Stable sorting of all collections (by issuer_did, statement_digest)
- [ ] Canonical JSON serialization (sorted keys, UTC timestamps)
- [ ] Automatic manifest digest computation (sha256)
- [ ] Validation before build (required fields, valid ranges)
- [ ] Pure function, deterministic output
**Implementation Spec**:
```csharp
public sealed class VerdictManifestBuilder
{
public VerdictManifestBuilder WithTenant(string tenant);
public VerdictManifestBuilder WithAsset(string assetDigest, string vulnId);
public VerdictManifestBuilder WithInputs(VerdictInputs inputs);
public VerdictManifestBuilder WithResult(VerdictResult result);
public VerdictManifestBuilder WithPolicy(string policyHash, string latticeVersion);
public VerdictManifestBuilder WithClock(DateTimeOffset evaluatedAt);
public VerdictManifest Build();
}
```
---
### T3: DSSE Signing for Verdict Manifests
**Assignee**: Authority Team + Signer Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement DSSE envelope signing for verdict manifests using existing Signer infrastructure.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/VerdictManifestSigner.cs`
**Acceptance Criteria**:
- [ ] `IVerdictManifestSigner` interface
- [ ] Integration with `StellaOps.Signer` module
- [ ] Predicate type: `https://stella-ops.org/attestations/vex-verdict/1`
- [ ] Support for multiple signature schemes (DSSE, Sigstore)
- [ ] Optional Rekor transparency logging
- [ ] Signature verification method
**Predicate Schema**:
```json
{
"$schema": "https://json-schema.org/draft/2020-12/schema",
"type": "object",
"required": ["manifestId", "assetDigest", "vulnerabilityId", "status", "confidence", "policyHash", "latticeVersion"],
"properties": {
"manifestId": { "type": "string" },
"assetDigest": { "type": "string" },
"vulnerabilityId": { "type": "string" },
"status": { "type": "string", "enum": ["affected", "not_affected", "fixed", "under_investigation"] },
"confidence": { "type": "number", "minimum": 0, "maximum": 1 },
"policyHash": { "type": "string" },
"latticeVersion": { "type": "string" },
"evaluatedAt": { "type": "string", "format": "date-time" }
}
}
```
---
### T4: PostgreSQL Schema for Verdict Manifests
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create database migration for verdict manifest storage.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Storage.Postgres/Migrations/`
**Acceptance Criteria**:
- [ ] `authority.verdict_manifests` table
- [ ] Indexes on: (asset_digest, vulnerability_id), (policy_hash, lattice_version), (evaluated_at)
- [ ] Compound index for replay queries
- [ ] BRIN index on evaluated_at for time-based queries
- [ ] Signature storage in JSONB column
**Schema Spec**:
```sql
CREATE TABLE authority.verdict_manifests (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
manifest_id TEXT NOT NULL UNIQUE,
tenant TEXT NOT NULL,
-- Scope
asset_digest TEXT NOT NULL,
vulnerability_id TEXT NOT NULL,
-- Inputs (JSONB for flexibility)
inputs_json JSONB NOT NULL,
-- Result
status TEXT NOT NULL CHECK (status IN ('affected', 'not_affected', 'fixed', 'under_investigation')),
confidence DOUBLE PRECISION NOT NULL CHECK (confidence >= 0 AND confidence <= 1),
result_json JSONB NOT NULL,
-- Policy context
policy_hash TEXT NOT NULL,
lattice_version TEXT NOT NULL,
-- Metadata
evaluated_at TIMESTAMPTZ NOT NULL,
manifest_digest TEXT NOT NULL,
-- Signature
signature_json JSONB,
rekor_log_id TEXT,
-- Timestamps
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
-- Primary lookup: asset + CVE
CREATE INDEX idx_verdict_asset_vuln ON authority.verdict_manifests(tenant, asset_digest, vulnerability_id);
-- Replay queries: same policy + lattice
CREATE INDEX idx_verdict_policy ON authority.verdict_manifests(tenant, policy_hash, lattice_version);
-- Time-based queries
CREATE INDEX idx_verdict_time ON authority.verdict_manifests USING BRIN (evaluated_at);
-- Composite for deterministic replay
CREATE UNIQUE INDEX idx_verdict_replay ON authority.verdict_manifests(
tenant, asset_digest, vulnerability_id, policy_hash, lattice_version
);
```
---
### T5: IVerdictManifestStore Interface
**Assignee**: Authority Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create repository interface for verdict manifest persistence.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/IVerdictManifestStore.cs`
**Acceptance Criteria**:
- [ ] `IVerdictManifestStore` interface
- [ ] Methods: Store, GetById, GetByScope, GetByPolicy, GetLatest
- [ ] Support for signed manifest retrieval
- [ ] Pagination for list queries
- [ ] Tenant isolation
**Interface Spec**:
```csharp
public interface IVerdictManifestStore
{
Task<VerdictManifest> StoreAsync(
VerdictManifest manifest,
byte[]? signature = null,
string? rekorLogId = null,
CancellationToken ct = default);
Task<VerdictManifest?> GetByIdAsync(
string tenant,
string manifestId,
CancellationToken ct = default);
Task<VerdictManifest?> GetByScopeAsync(
string tenant,
string assetDigest,
string vulnerabilityId,
string? policyHash = null,
string? latticeVersion = null,
CancellationToken ct = default);
Task<IReadOnlyList<VerdictManifest>> ListByPolicyAsync(
string tenant,
string policyHash,
string latticeVersion,
int limit = 100,
string? pageToken = null,
CancellationToken ct = default);
}
```
---
### T6: PostgreSQL Store Implementation
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement PostgreSQL repository for verdict manifests.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Storage.Postgres/VerdictManifestStore.cs`
**Acceptance Criteria**:
- [ ] `PostgresVerdictManifestStore` implementation
- [ ] Uses Npgsql with Dapper
- [ ] Canonical JSON serialization for JSONB columns
- [ ] Efficient scope queries
- [ ] Deterministic ordering for pagination
---
### T7: Replay Verification Service
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create service that verifies verdict manifests can be replayed to produce identical results.
**Implementation Path**: `src/Authority/__Libraries/StellaOps.Authority.Core/Verdicts/VerdictReplayVerifier.cs`
**Acceptance Criteria**:
- [ ] `IVerdictReplayVerifier` interface
- [ ] Retrieves pinned inputs from manifest
- [ ] Re-executes trust lattice evaluation
- [ ] Compares result with stored verdict
- [ ] Returns detailed diff on mismatch
- [ ] Verifies signature if present
**Interface Spec**:
```csharp
public sealed record ReplayVerificationResult
{
public required bool Success { get; init; }
public required VerdictManifest OriginalManifest { get; init; }
public VerdictManifest? ReplayedManifest { get; init; }
public ImmutableArray<string>? Differences { get; init; }
public bool SignatureValid { get; init; }
public string? Error { get; init; }
}
public interface IVerdictReplayVerifier
{
Task<ReplayVerificationResult> VerifyAsync(
string manifestId,
CancellationToken ct = default);
}
```
---
### T8: Replay Verification API Endpoint
**Assignee**: Authority Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create API endpoint for replay verification.
**Implementation Path**: `src/Authority/StellaOps.Authority.WebService/Controllers/VerdictController.cs`
**Acceptance Criteria**:
- [ ] `POST /api/v1/authority/verdicts/{manifestId}/replay` endpoint
- [ ] Scope: `verdict.read`
- [ ] Returns `ReplayVerificationResult`
- [ ] Rate limiting: 10 req/min per tenant
- [ ] OpenAPI documentation
**API Spec**:
```yaml
/api/v1/authority/verdicts/{manifestId}/replay:
post:
operationId: replayVerdict
summary: Verify verdict can be replayed
parameters:
- name: manifestId
in: path
required: true
schema:
type: string
responses:
200:
description: Replay verification result
content:
application/json:
schema:
$ref: '#/components/schemas/ReplayVerificationResult'
404:
description: Manifest not found
```
---
### T9: Integration Tests
**Assignee**: Authority Team
**Story Points**: 5
**Status**: TODO
**Description**:
Integration tests for verdict manifest pipeline.
**Implementation Path**: `src/Authority/__Tests/StellaOps.Authority.Core.Tests/Verdicts/`
**Acceptance Criteria**:
- [ ] Manifest construction tests
- [ ] DSSE signing and verification tests
- [ ] PostgreSQL store CRUD tests
- [ ] Replay verification tests (success and failure cases)
- [ ] Determinism tests: same inputs → identical manifests (1000 iterations)
- [ ] Concurrent access tests
- [ ] Test coverage ≥85%
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Authority Team | VerdictManifest Domain Model |
| 2 | T2 | TODO | T1 | Authority Team | VerdictManifestBuilder |
| 3 | T3 | TODO | T1 | Authority + Signer | DSSE Signing |
| 4 | T4 | TODO | T1 | Authority Team | PostgreSQL Schema |
| 5 | T5 | TODO | T1 | Authority Team | Store Interface |
| 6 | T6 | TODO | T4, T5 | Authority Team | PostgreSQL Implementation |
| 7 | T7 | TODO | T1, T6 | Authority Team | Replay Verification Service |
| 8 | T8 | TODO | T7 | Authority Team | Replay API Endpoint |
| 9 | T9 | TODO | T1-T8 | Authority Team | Integration Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Predicate type | Decision | Authority Team | Using `https://stella-ops.org/attestations/vex-verdict/1` |
| Composite unique index | Decision | Authority Team | Ensures one verdict per (asset, CVE, policy, lattice) |
| Rekor optional | Decision | Authority Team | Transparency logging is opt-in per policy |
| Replay performance | Risk | Authority Team | Full replay may be expensive; consider caching |
---
**Sprint Status**: TODO (0/9 tasks complete)

# Sprint 7100.0002.0001 — Policy Gates & Lattice Merge
## Topic & Scope
- Extend TrustLatticeEngine with ClaimScore-based merge algorithm.
- Implement policy gates for explainable decision control.
- Add conflict penalty mechanism for contradictory claims.
- **Working directory:** `src/Policy/__Libraries/StellaOps.Policy/TrustLattice/` and `src/Policy/__Libraries/StellaOps.Policy/Gates/`
## Dependencies & Concurrency
- **Upstream**: Sprint 7100.0001.0001 (Trust Vector Foundation)
- **Downstream**: Sprint 7100.0002.0002 (Calibration), Sprint 7100.0003.0001 (UI)
- **Safe to parallelize with**: Sprint 7100.0001.0002 (Verdict Manifest)
## Documentation Prerequisites
- `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
- `docs/modules/policy/architecture.md`
- `src/Policy/__Libraries/StellaOps.Policy/TrustLattice/TrustLatticeEngine.cs`
---
## Tasks
### T1: ClaimScoreMerger
**Assignee**: Policy Team
**Story Points**: 8
**Status**: TODO
**Description**:
Implement the core merge algorithm that selects verdicts based on ClaimScore with conflict handling.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/TrustLattice/ClaimScoreMerger.cs`
**Acceptance Criteria**:
- [ ] `IClaimScoreMerger` interface
- [ ] Partial order on claims by (scope specificity, ClaimScore)
- [ ] Conflict detection: contradictory statuses trigger conflict mode
- [ ] Conflict penalty: 0.25 down-weight on older/weaker claims
- [ ] Winner selection: `argmax(ClaimScore)` after adjustments
- [ ] Audit trail generation with all considered claims
- [ ] Deterministic: stable ordering for tie-breaking
**Algorithm Spec**:
```csharp
public sealed record MergeResult
{
public required VexStatus Status { get; init; }
public required double Confidence { get; init; }
public required bool HasConflicts { get; init; }
public required ImmutableArray<ScoredClaim> AllClaims { get; init; }
public required ScoredClaim WinningClaim { get; init; }
public required ImmutableArray<ConflictRecord> Conflicts { get; init; }
}
public sealed record ScoredClaim
{
public required string SourceId { get; init; }
public required VexStatus Status { get; init; }
public required double OriginalScore { get; init; }
public required double AdjustedScore { get; init; }
public required int ScopeSpecificity { get; init; }
public required bool Accepted { get; init; }
public required string Reason { get; init; }
}
public interface IClaimScoreMerger
{
MergeResult Merge(
IEnumerable<(VexClaim Claim, ClaimScoreResult Score)> scoredClaims,
MergePolicy policy,
CancellationToken ct = default);
}
```
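The merge body itself could take this shape, assuming the `ConflictPenalizer` from T2; `ToScoredClaims` and `DetectConflicts` are hypothetical helpers, and the ordinal source-id tie-break is what makes the result deterministic:

```csharp
public sealed class ClaimScoreMerger : IClaimScoreMerger
{
    private readonly ConflictPenalizer _penalizer = new();

    public MergeResult Merge(
        IEnumerable<(VexClaim Claim, ClaimScoreResult Score)> scoredClaims,
        MergePolicy policy,
        CancellationToken ct = default)
    {
        // ToScoredClaims: hypothetical projection from (claim, score) pairs.
        var claims = _penalizer.ApplyPenalties(ToScoredClaims(scoredClaims));

        // Partial order: most specific scope wins; within a scope tier the highest
        // adjusted score wins; ordinal source-id ordering breaks ties stably.
        var ranked = claims
            .OrderByDescending(c => c.ScopeSpecificity)
            .ThenByDescending(c => c.AdjustedScore)
            .ThenBy(c => c.SourceId, StringComparer.Ordinal)
            .ToImmutableArray();
        var winner = ranked[0];

        var hasConflicts = claims.Select(c => c.Status).Distinct().Count() > 1;
        return new MergeResult
        {
            Status = winner.Status,
            Confidence = winner.AdjustedScore,
            HasConflicts = hasConflicts,
            AllClaims = ranked,
            WinningClaim = winner,
            Conflicts = hasConflicts ? DetectConflicts(claims)
                                     : ImmutableArray<ConflictRecord>.Empty
        };
    }
}
```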
---
### T2: Conflict Penalty Implementation
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement conflict penalty mechanism for contradictory VEX claims.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/TrustLattice/ConflictPenalizer.cs`
**Acceptance Criteria**:
- [ ] Detect contradictory claims (different statuses for same CVE+asset)
- [ ] Apply configurable penalty (default delta=0.25)
- [ ] Penalty applied to older/weaker claims, not the strongest
- [ ] Preserve original scores for audit trail
- [ ] Trigger replay proof requirement when conflicts exist
**Implementation Spec**:
```csharp
public sealed class ConflictPenalizer
{
public double ConflictPenalty { get; init; } = 0.25;
public IReadOnlyList<ScoredClaim> ApplyPenalties(
IReadOnlyList<ScoredClaim> claims)
{
var statuses = claims.Select(c => c.Status).Distinct().ToList();
if (statuses.Count <= 1)
return claims; // No conflict
// Find strongest claim
var strongest = claims.OrderByDescending(c => c.OriginalScore).First();
// Penalize all claims that disagree with strongest
return claims.Select(c =>
{
if (c.Status == strongest.Status)
return c;
return c with
{
AdjustedScore = c.OriginalScore * (1 - ConflictPenalty),
Reason = $"Conflict penalty applied (disagrees with {strongest.SourceId})"
};
}).ToList();
}
}
```
---
### T3: MinimumConfidenceGate
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Implement policy gate that requires minimum confidence by environment.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/Gates/MinimumConfidenceGate.cs`
**Acceptance Criteria**:
- [ ] `IPolicyGate` interface implementation
- [ ] Configurable minimum confidence per environment (e.g., prod ≥ 0.75)
- [ ] Fail verdict if confidence below threshold for "not_affected"
- [ ] Allow "affected" status regardless of confidence (conservative)
- [ ] Return clear gate failure reason
**Configuration Spec**:
```yaml
gates:
minimumConfidence:
enabled: true
thresholds:
production: 0.75
staging: 0.60
development: 0.40
applyToStatuses:
- not_affected
- fixed
```
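A sketch of the gate logic against the T7 `GateResult` record (the `Evaluate` signature is an assumption pending the `IPolicyGate` interface). The conservative rule is that `affected` always passes — gating a positive finding on confidence would fail open:

```csharp
public sealed class MinimumConfidenceGate
{
    public IReadOnlyDictionary<string, double> Thresholds { get; init; } =
        new Dictionary<string, double>
        {
            ["production"] = 0.75, ["staging"] = 0.60, ["development"] = 0.40
        };

    public GateResult Evaluate(MergeResult merge, string environment)
    {
        // "affected" is never blocked by low confidence (conservative default).
        if (merge.Status == VexStatus.Affected)
            return new GateResult
            {
                GateName = "minimumConfidence", Passed = true, Reason = null,
                Details = ImmutableDictionary<string, object>.Empty
            };

        var threshold = Thresholds.GetValueOrDefault(environment, 0.75);
        var passed = merge.Confidence >= threshold;
        return new GateResult
        {
            GateName = "minimumConfidence",
            Passed = passed,
            Reason = passed ? null
                : $"confidence {merge.Confidence:F2} below {threshold:F2} for '{environment}'",
            Details = ImmutableDictionary<string, object>.Empty.Add("threshold", threshold)
        };
    }
}
```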
---
### T4: UnknownsBudgetGate
**Assignee**: Policy Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement policy gate that fails if unknowns exceed budget.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/Gates/UnknownsBudgetGate.cs`
**Acceptance Criteria**:
- [ ] Configurable max unknown count (e.g., N=5)
- [ ] Configurable cumulative uncertainty threshold (e.g., T=2.0)
- [ ] Fail if #unknown deps > N
- [ ] Fail if Σ(1-ClaimScore) over unknowns > T
- [ ] Integration with Unknowns Registry from Sprint 3500
**Configuration Spec**:
```yaml
gates:
unknownsBudget:
enabled: true
maxUnknownCount: 5
maxCumulativeUncertainty: 2.0
escalateOnFail: true
```
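The two budget checks translate directly; a sketch (how unknown claims are sourced from the Sprint 3500 registry is an assumption, so the input here is just their scores):

```csharp
public sealed class UnknownsBudgetGate
{
    public int MaxUnknownCount { get; init; } = 5;
    public double MaxCumulativeUncertainty { get; init; } = 2.0;

    public bool Passes(IReadOnlyList<double> unknownClaimScores, out string? reason)
    {
        if (unknownClaimScores.Count > MaxUnknownCount)
        {
            reason = $"{unknownClaimScores.Count} unknown deps exceeds budget of {MaxUnknownCount}";
            return false;
        }
        // Cumulative uncertainty: Σ(1 - ClaimScore) over unknowns.
        var uncertainty = unknownClaimScores.Sum(s => 1.0 - s);
        if (uncertainty > MaxCumulativeUncertainty)
        {
            reason = $"cumulative uncertainty {uncertainty:F2} exceeds {MaxCumulativeUncertainty:F2}";
            return false;
        }
        reason = null;
        return true;
    }
}
```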
---
### T5: SourceQuotaGate
**Assignee**: Policy Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement policy gate that caps influence from any single vendor.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/Gates/SourceQuotaGate.cs`
**Acceptance Criteria**:
- [ ] Configurable max influence per source (default 60%)
- [ ] Fail if single source dominates verdict without corroboration
- [ ] Corroboration threshold: second source within delta=0.1
- [ ] Apply to verdicts where source influence exceeds quota
- [ ] Return details of which sources exceeded quota
**Configuration Spec**:
```yaml
gates:
sourceQuota:
enabled: true
maxInfluencePercent: 60
corroborationDelta: 0.10
requireCorroborationFor:
- not_affected
- fixed
```
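One way to read "influence" is a single source's share of the total adjusted ClaimScore mass; a sketch under that assumption, using the `ScoredClaim` record from T1:

```csharp
public sealed class SourceQuotaGate
{
    public double MaxInfluencePercent { get; init; } = 60;
    public double CorroborationDelta { get; init; } = 0.10;

    public bool Passes(IReadOnlyList<ScoredClaim> claims, out string? reason)
    {
        var total = claims.Sum(c => c.AdjustedScore);
        var top = claims.OrderByDescending(c => c.AdjustedScore).First();
        var influence = total > 0 ? 100.0 * top.AdjustedScore / total : 0.0;

        if (influence <= MaxInfluencePercent) { reason = null; return true; }

        // A dominant source is acceptable if a second source agrees on status
        // and scores within the corroboration delta of the leader.
        var corroborated = claims.Any(c =>
            c.SourceId != top.SourceId &&
            c.Status == top.Status &&
            top.AdjustedScore - c.AdjustedScore <= CorroborationDelta);

        reason = corroborated ? null
            : $"{top.SourceId} holds {influence:F0}% influence without corroboration";
        return corroborated;
    }
}
```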
---
### T6: ReachabilityRequirementGate
**Assignee**: Policy Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement policy gate that requires reachability proof for critical vulnerabilities.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/Gates/ReachabilityRequirementGate.cs`
**Acceptance Criteria**:
- [ ] Require reachability proof for "not_affected" on critical CVEs
- [ ] Integration with reachability graph from Scanner module
- [ ] Configurable severity threshold (default: CRITICAL)
- [ ] Configurable bypass for specific reason codes
- [ ] Fail with clear reason if reachability proof missing
**Configuration Spec**:
```yaml
gates:
reachabilityRequirement:
enabled: true
severityThreshold: CRITICAL
requiredForStatuses:
- not_affected
bypassReasons:
- component_not_present
- vulnerable_configuration_unused
```
---
### T7: Policy Gate Registry
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create registry for managing and executing policy gates.
**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy/Gates/PolicyGateRegistry.cs`
**Acceptance Criteria**:
- [ ] `IPolicyGateRegistry` interface
- [ ] Register gates by name
- [ ] Execute gates in configured order
- [ ] Short-circuit on first failure (configurable)
- [ ] Aggregate all gate results for audit
- [ ] DI integration for gate injection
**Interface Spec**:
```csharp
public sealed record GateResult
{
public required string GateName { get; init; }
public required bool Passed { get; init; }
public required string? Reason { get; init; }
public required ImmutableDictionary<string, object> Details { get; init; }
}
public sealed record GateEvaluationResult
{
public required bool AllPassed { get; init; }
public required ImmutableArray<GateResult> Results { get; init; }
public GateResult? FirstFailure => Results.FirstOrDefault(r => !r.Passed);
}
public interface IPolicyGateRegistry
{
void Register<TGate>(string name) where TGate : IPolicyGate;
Task<GateEvaluationResult> EvaluateAsync(
MergeResult mergeResult,
PolicyGateContext context,
CancellationToken ct = default);
}
```
---
### T8: Policy Configuration Schema
**Assignee**: Policy Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create configuration schema for policy gates and merge settings.
**Implementation Path**: `etc/policy-gates.yaml.sample`
**Acceptance Criteria**:
- [ ] YAML schema for all gates
- [ ] JSON Schema validation
- [ ] Sample configuration file
- [ ] Documentation in `docs/modules/policy/`
- [ ] Environment variable overrides
**Sample Configuration**:
```yaml
# etc/policy-gates.yaml.sample
version: "1.0"
trustLattice:
weights:
provenance: 0.45
coverage: 0.35
replayability: 0.20
freshness:
halfLifeDays: 90
floor: 0.35
conflictPenalty: 0.25
gates:
minimumConfidence:
enabled: true
thresholds:
production: 0.75
staging: 0.60
development: 0.40
unknownsBudget:
enabled: true
maxUnknownCount: 5
maxCumulativeUncertainty: 2.0
sourceQuota:
enabled: true
maxInfluencePercent: 60
corroborationDelta: 0.10
reachabilityRequirement:
enabled: true
severityThreshold: CRITICAL
```
---
### T9: Unit Tests
**Assignee**: Policy Team
**Story Points**: 5
**Status**: TODO
**Description**:
Comprehensive unit tests for merge algorithm and all gates.
**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Tests/TrustLattice/`
**Acceptance Criteria**:
- [ ] ClaimScoreMerger tests for all scenarios
- [ ] Conflict penalty tests
- [ ] MinimumConfidenceGate edge cases
- [ ] UnknownsBudgetGate threshold tests
- [ ] SourceQuotaGate corroboration tests
- [ ] ReachabilityRequirementGate integration tests
- [ ] Gate registry ordering tests
- [ ] Determinism tests (1000 iterations)
- [ ] Test coverage ≥90%
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Policy Team | ClaimScoreMerger |
| 2 | T2 | TODO | T1 | Policy Team | Conflict Penalty |
| 3 | T3 | TODO | T1 | Policy Team | MinimumConfidenceGate |
| 4 | T4 | TODO | T1 | Policy Team | UnknownsBudgetGate |
| 5 | T5 | TODO | T1 | Policy Team | SourceQuotaGate |
| 6 | T6 | TODO | T1 | Policy Team | ReachabilityRequirementGate |
| 7 | T7 | TODO | T3-T6 | Policy Team | Gate Registry |
| 8 | T8 | TODO | T3-T6 | Policy Team | Configuration Schema |
| 9 | T9 | TODO | T1-T8 | Policy Team | Unit Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Gate execution order | Decision | Policy Team | MinConfidence → Unknowns → SourceQuota → Reachability |
| Short-circuit behavior | Decision | Policy Team | First failure stops evaluation by default |
| Conflict penalty value | Decision | Policy Team | Using 0.25 (25%) per advisory |
| Reachability integration | Risk | Policy Team | Depends on Sprint 3500 reachability graphs |
---
**Sprint Status**: TODO (0/9 tasks complete)

# Sprint 7100.0002.0002 — Source Defaults & Calibration
## Topic & Scope
- Define default trust vectors for Vendor/Distro/Internal source classes.
- Implement calibration system for rolling trust weight adjustment.
- Create CalibrationManifest for auditable tuning history.
- **Working directory:** `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/` and `src/Excititor/__Libraries/StellaOps.Excititor.Core/Calibration/`
## Dependencies & Concurrency
- **Upstream**: Sprint 7100.0001.0001 (Trust Vector), Sprint 7100.0002.0001 (Policy Gates)
- **Downstream**: Sprint 7100.0003.0002 (Integration)
- **Safe to parallelize with**: Sprint 7100.0003.0001 (UI)
## Documentation Prerequisites
- `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
- `docs/modules/excititor/architecture.md`
- `docs/modules/excititor/scoring.md`
---
## Tasks
### T1: Default Trust Vectors
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Define default trust vectors for the three major source classes.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/DefaultTrustVectors.cs`
**Acceptance Criteria**:
- [ ] `DefaultTrustVectors` static class with pre-defined vectors
- [ ] Vendor defaults: P=0.90, C=0.70, R=0.60
- [ ] Distro defaults: P=0.80, C=0.85, R=0.60
- [ ] Internal defaults: P=0.85, C=0.95, R=0.90
- [ ] Hub defaults: P=0.60, C=0.50, R=0.40
- [ ] Attestation defaults: P=0.95, C=0.80, R=0.70
**Implementation Spec**:
```csharp
public static class DefaultTrustVectors
{
/// <summary>Software vendor (e.g., Microsoft, Red Hat as vendor).</summary>
public static TrustVector Vendor => new()
{
Provenance = 0.90,
Coverage = 0.70, // Often coarse-grained
Replayability = 0.60
};
/// <summary>Distribution security team (e.g., Debian, Ubuntu, RHEL as distro).</summary>
public static TrustVector Distro => new()
{
Provenance = 0.80,
Coverage = 0.85, // Build-aware
Replayability = 0.60
};
/// <summary>Internal organization source (org-signed, exact SBOM+reach).</summary>
public static TrustVector Internal => new()
{
Provenance = 0.85,
Coverage = 0.95, // Exact SBOM match
Replayability = 0.90
};
/// <summary>Aggregator hubs (e.g., OSV, GitHub Advisory).</summary>
public static TrustVector Hub => new()
{
Provenance = 0.60,
Coverage = 0.50,
Replayability = 0.40
};
/// <summary>OCI attestations.</summary>
public static TrustVector Attestation => new()
{
Provenance = 0.95,
Coverage = 0.80,
Replayability = 0.70
};
public static TrustVector GetDefault(VexProviderKind kind) => kind switch
{
VexProviderKind.Vendor => Vendor,
VexProviderKind.Distro => Distro,
VexProviderKind.Platform => Internal,
VexProviderKind.Hub => Hub,
VexProviderKind.Attestation => Attestation,
_ => Hub // Conservative default
};
}
```
---
### T2: Source Classification Service
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create service for auto-classifying VEX sources into source classes.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/TrustVector/SourceClassificationService.cs`
**Acceptance Criteria**:
- [ ] `ISourceClassificationService` interface
- [ ] Classify based on issuer domain, signature type, content format
- [ ] Known vendor/distro registry lookup
- [ ] Heuristic classification for unknown sources
- [ ] Override capability via configuration
- [ ] Audit trail of classification decisions
**Interface Spec**:
```csharp
public sealed record SourceClassification
{
public required VexProviderKind Kind { get; init; }
public required TrustVector DefaultVector { get; init; }
public required double Confidence { get; init; }
public required string Reason { get; init; }
public required bool IsOverride { get; init; }
}
public interface ISourceClassificationService
{
SourceClassification Classify(
string issuerId,
string? issuerDomain,
string? signatureType,
string contentFormat);
void RegisterOverride(string issuerPattern, VexProviderKind kind);
}
```
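One way the classifier could satisfy the "registry lookup, then heuristics" criteria is a tiered check: exact registry match first, then weaker signals such as signature type, then a conservative fallback. A hedged TypeScript sketch — the domain lists, confidence values, and reason strings here are illustrative, not the canonical registry:

```typescript
type Kind = "Vendor" | "Distro" | "Platform" | "Hub" | "Attestation";

// Illustrative registry entries; the real service would load these from
// configuration and support overrides.
const KNOWN_DISTROS = ["debian.org", "ubuntu.com", "alpinelinux.org"];
const KNOWN_HUBS = ["osv.dev", "github.com"];

function classify(
  issuerDomain: string | null,
  signatureType: string | null,
): { kind: Kind; confidence: number; reason: string } {
  // Tier 1: known-registry lookup (highest confidence).
  if (issuerDomain && KNOWN_DISTROS.some((d) => issuerDomain.endsWith(d)))
    return { kind: "Distro", confidence: 0.95, reason: `registry:${issuerDomain}` };
  if (issuerDomain && KNOWN_HUBS.some((d) => issuerDomain.endsWith(d)))
    return { kind: "Hub", confidence: 0.95, reason: `registry:${issuerDomain}` };
  // Tier 2: heuristic on signature type.
  if (signatureType === "dsse")
    return { kind: "Attestation", confidence: 0.70, reason: "heuristic:dsse-signature" };
  // Tier 3: conservative fallback, mirroring GetDefault's `_ => Hub`.
  return { kind: "Hub", confidence: 0.30, reason: "fallback:unknown-source" };
}
```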
---
### T3: Calibration Manifest Model
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create CalibrationManifest model for auditable trust weight tuning history.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Calibration/CalibrationManifest.cs`
**Acceptance Criteria**:
- [ ] `CalibrationManifest` record with epoch, adjustments, signatures
- [ ] `CalibrationEpoch` with timestamp, baseline, and adjusted vectors
- [ ] `CalibrationAdjustment` with source, old/new values, reason
- [ ] Signed manifest for audit compliance
- [ ] Deterministic serialization
**Domain Model Spec**:
```csharp
public sealed record CalibrationManifest
{
public required string ManifestId { get; init; }
public required string Tenant { get; init; }
public required int EpochNumber { get; init; }
public required DateTimeOffset EpochStart { get; init; }
public required DateTimeOffset EpochEnd { get; init; }
public required ImmutableArray<CalibrationAdjustment> Adjustments { get; init; }
public required CalibrationMetrics Metrics { get; init; }
public required string ManifestDigest { get; init; }
public string? Signature { get; init; }
}
public sealed record CalibrationAdjustment
{
public required string SourceId { get; init; }
public required TrustVector OldVector { get; init; }
public required TrustVector NewVector { get; init; }
public required double Delta { get; init; }
public required string Reason { get; init; }
public required int SampleCount { get; init; }
public required double AccuracyBefore { get; init; }
public required double AccuracyAfter { get; init; }
}
public sealed record CalibrationMetrics
{
public required int TotalVerdicts { get; init; }
public required int CorrectVerdicts { get; init; }
public required int PostMortemReversals { get; init; }
public required double OverallAccuracy { get; init; }
}
```
---
### T4: Calibration Comparison Engine
**Assignee**: Excititor Team
**Story Points**: 8
**Status**: TODO
**Description**:
Implement calibration comparison between VEX claims and post-mortem truth.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Calibration/CalibrationComparisonEngine.cs`
**Acceptance Criteria**:
- [ ] Compare historical verdicts against post-mortem truth data
- [ ] Post-mortem sources: KEV confirmations, exploit publications, vendor patches
- [ ] Track prediction accuracy per source
- [ ] Identify sources with systematic bias
- [ ] Generate comparison report with confidence intervals
**Interface Spec**:
```csharp
public sealed record ComparisonResult
{
public required string SourceId { get; init; }
public required int TotalPredictions { get; init; }
public required int CorrectPredictions { get; init; }
public required int FalseNegatives { get; init; } // Said not_affected, was exploited
public required int FalsePositives { get; init; } // Said affected, never exploited
public required double Accuracy { get; init; }
public required double ConfidenceInterval { get; init; }
public required CalibrationBias? DetectedBias { get; init; }
}
public enum CalibrationBias
{
None,
OptimisticBias, // Tends to say not_affected when actually affected
PessimisticBias, // Tends to say affected when actually not_affected
ScopeBias // Coverage claims don't match actual scope
}
public interface ICalibrationComparisonEngine
{
Task<IReadOnlyList<ComparisonResult>> CompareAsync(
string tenant,
DateTimeOffset epochStart,
DateTimeOffset epochEnd,
CancellationToken ct = default);
}
```
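The `Accuracy` and `ConfidenceInterval` fields above are straightforward to compute from the prediction counts. A minimal sketch using a normal-approximation (Wald) 95% half-width — the real engine may prefer a different interval (e.g., Wilson) for small samples:

```typescript
// Accuracy = correct/total; halfWidth is the 95% Wald interval half-width,
// 1.96 * sqrt(p(1-p)/n). Assumes predictions are independent trials.
function accuracyWithInterval(
  correct: number,
  total: number,
): { accuracy: number; halfWidth: number } {
  if (total === 0) return { accuracy: 0, halfWidth: 0 };
  const p = correct / total;
  const halfWidth = 1.96 * Math.sqrt((p * (1 - p)) / total);
  return { accuracy: p, halfWidth };
}
```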
---
### T5: Learning Rate Adjustment
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Implement learning rate adjustment for trust vector calibration.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Calibration/TrustVectorCalibrator.cs`
**Acceptance Criteria**:
- [ ] Configurable learning rate (default ±0.02/epoch)
- [ ] Bounded adjustments to prevent oscillation
- [ ] Separate learning rates for P/C/R components
- [ ] Momentum factor for stable convergence
- [ ] Rollback capability on accuracy regression

**Implementation Spec**:
```csharp
public sealed class TrustVectorCalibrator
{
    public double LearningRate { get; init; } = 0.02;
    public double MaxAdjustmentPerEpoch { get; init; } = 0.05;
    public double MinValue { get; init; } = 0.10;
    public double MaxValue { get; init; } = 1.00;
    public double MomentumFactor { get; init; } = 0.9;
    public TrustVector Calibrate(
        TrustVector current,
        ComparisonResult comparison,
        CalibrationBias? detectedBias)
    {
        if (comparison.Accuracy >= 0.95)
            return current; // No adjustment needed
        var adjustment = CalculateAdjustment(comparison, detectedBias);
        return ApplyAdjustment(current, adjustment);
    }
    private CalibrationDelta CalculateAdjustment(
        ComparisonResult comparison,
        CalibrationBias? bias)
    {
        // Adjust based on bias type and accuracy
        var delta = (1.0 - comparison.Accuracy) * LearningRate;
        delta = Math.Min(delta, MaxAdjustmentPerEpoch);
        return bias switch
        {
            CalibrationBias.OptimisticBias => new(-delta, 0, 0), // Reduce P
            CalibrationBias.PessimisticBias => new(+delta, 0, 0), // Increase P
            CalibrationBias.ScopeBias => new(0, -delta, 0), // Reduce C
            _ => new(-delta / 3, -delta / 3, -delta / 3) // Uniform
        };
    }
    private TrustVector ApplyAdjustment(TrustVector current, CalibrationDelta d)
    {
        // Sketch: add each delta, then clamp into [MinValue, MaxValue] so a
        // bounded adjustment can never push a component out of range.
        double Clamp(double v) => Math.Clamp(v, MinValue, MaxValue);
        return current with
        {
            Provenance = Clamp(current.Provenance + d.DeltaP),
            Coverage = Clamp(current.Coverage + d.DeltaC),
            Replayability = Clamp(current.Replayability + d.DeltaR)
        };
    }
}
public sealed record CalibrationDelta(double DeltaP, double DeltaC, double DeltaR);
```
---
### T6: Calibration Service
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create orchestration service for running calibration epochs.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/Calibration/TrustCalibrationService.cs`
**Acceptance Criteria**:
- [ ] `ITrustCalibrationService` interface
- [ ] Run calibration epoch on schedule or demand
- [ ] Generate and sign CalibrationManifest
- [ ] Store calibration history
- [ ] Apply calibrated vectors to provider registry
- [ ] Rollback on accuracy regression
**Interface Spec**:
```csharp
public interface ITrustCalibrationService
{
Task<CalibrationManifest> RunEpochAsync(
string tenant,
DateTimeOffset? epochEnd = null,
CancellationToken ct = default);
Task<CalibrationManifest?> GetLatestAsync(
string tenant,
CancellationToken ct = default);
Task ApplyCalibrationAsync(
string tenant,
string manifestId,
CancellationToken ct = default);
Task RollbackAsync(
string tenant,
string manifestId,
CancellationToken ct = default);
}
```
---
### T7: PostgreSQL Schema for Calibration
**Assignee**: Excititor Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create database migration for calibration storage.
**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Storage.Postgres/Migrations/`
**Acceptance Criteria**:
- [ ] `excititor.calibration_manifests` table
- [ ] `excititor.calibration_adjustments` table
- [ ] `excititor.source_trust_vectors` table (current active vectors)
- [ ] Indexes for tenant + epoch queries
- [ ] Foreign key to source registry
**Schema Spec**:
```sql
CREATE TABLE excititor.calibration_manifests (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
manifest_id TEXT NOT NULL UNIQUE,
tenant TEXT NOT NULL,
epoch_number INTEGER NOT NULL,
epoch_start TIMESTAMPTZ NOT NULL,
epoch_end TIMESTAMPTZ NOT NULL,
metrics_json JSONB NOT NULL,
manifest_digest TEXT NOT NULL,
signature TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
applied_at TIMESTAMPTZ,
UNIQUE (tenant, epoch_number)
);
CREATE TABLE excititor.calibration_adjustments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
manifest_id TEXT NOT NULL REFERENCES excititor.calibration_manifests(manifest_id),
source_id TEXT NOT NULL,
old_provenance DOUBLE PRECISION NOT NULL,
old_coverage DOUBLE PRECISION NOT NULL,
old_replayability DOUBLE PRECISION NOT NULL,
new_provenance DOUBLE PRECISION NOT NULL,
new_coverage DOUBLE PRECISION NOT NULL,
new_replayability DOUBLE PRECISION NOT NULL,
delta DOUBLE PRECISION NOT NULL,
reason TEXT NOT NULL,
sample_count INTEGER NOT NULL,
accuracy_before DOUBLE PRECISION NOT NULL,
accuracy_after DOUBLE PRECISION NOT NULL
);
CREATE TABLE excititor.source_trust_vectors (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
tenant TEXT NOT NULL,
source_id TEXT NOT NULL,
provenance DOUBLE PRECISION NOT NULL,
coverage DOUBLE PRECISION NOT NULL,
replayability DOUBLE PRECISION NOT NULL,
calibration_manifest_id TEXT REFERENCES excititor.calibration_manifests(manifest_id),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
UNIQUE (tenant, source_id)
);
CREATE INDEX idx_calibration_tenant_epoch ON excititor.calibration_manifests(tenant, epoch_number DESC);
CREATE INDEX idx_calibration_adjustments_manifest ON excititor.calibration_adjustments(manifest_id);
CREATE INDEX idx_source_vectors_tenant ON excititor.source_trust_vectors(tenant);
```
---
### T8: Configuration for Calibration
**Assignee**: Excititor Team
**Story Points**: 2
**Status**: TODO
**Description**:
Create configuration schema for calibration settings.
**Implementation Path**: `etc/excititor-calibration.yaml.sample`
**Acceptance Criteria**:
- [ ] YAML configuration for calibration policy
- [ ] Epoch duration settings
- [ ] Learning rate configuration
- [ ] Rollback thresholds
- [ ] Post-mortem source configuration
**Sample Configuration**:
```yaml
# etc/excititor-calibration.yaml.sample
calibration:
enabled: true
schedule:
epochDuration: "30d" # 30-day calibration epochs
runAt: "02:00" # Run at 2 AM UTC
learning:
rate: 0.02
maxAdjustmentPerEpoch: 0.05
momentumFactor: 0.9
rollback:
accuracyRegressionThreshold: 0.05
autoRollbackEnabled: true
postMortem:
sources:
- type: kev
weight: 1.0
- type: exploit-db
weight: 0.8
- type: vendor-patch
weight: 0.9
```
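The `rollback` block above implies a simple decision rule: roll back automatically when the accuracy measured after applying a calibration drops by more than `accuracyRegressionThreshold`. A one-function sketch of that rule:

```typescript
// Auto-rollback decision implied by the sample config: a regression larger
// than the threshold restores the previous calibration manifest.
function shouldRollback(
  accuracyBefore: number,
  accuracyAfter: number,
  threshold: number,
): boolean {
  return accuracyBefore - accuracyAfter > threshold;
}
```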
---
### T9: Unit Tests
**Assignee**: Excititor Team
**Story Points**: 5
**Status**: TODO
**Description**:
Comprehensive unit tests for calibration system.
**Implementation Path**: `src/Excititor/__Tests/StellaOps.Excititor.Core.Tests/Calibration/`
**Acceptance Criteria**:
- [ ] Default trust vector tests
- [ ] Source classification tests
- [ ] Calibration comparison tests
- [ ] Learning rate adjustment tests (convergence, bounds)
- [ ] Rollback tests
- [ ] Determinism tests (1000 iterations)
- [ ] Integration tests with PostgreSQL
- [ ] Test coverage ≥85%
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Excititor Team | Default Trust Vectors |
| 2 | T2 | TODO | T1 | Excititor Team | Source Classification Service |
| 3 | T3 | TODO | — | Excititor Team | Calibration Manifest Model |
| 4 | T4 | TODO | T3 | Excititor Team | Calibration Comparison Engine |
| 5 | T5 | TODO | T4 | Excititor Team | Learning Rate Adjustment |
| 6 | T6 | TODO | T4, T5 | Excititor Team | Calibration Service |
| 7 | T7 | TODO | T3 | Excititor Team | PostgreSQL Schema |
| 8 | T8 | TODO | T6 | Excititor Team | Configuration |
| 9 | T9 | TODO | T1-T8 | Excititor Team | Unit Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Default vector values | Decision | Excititor Team | Using advisory values: Vendor(0.9,0.7,0.6), Distro(0.8,0.85,0.6), Internal(0.85,0.95,0.9) |
| Learning rate | Decision | Excititor Team | Using ±0.02/epoch per advisory |
| Post-mortem sources | Risk | Excititor Team | Need reliable ground truth data for calibration |
| Calibration frequency | Decision | Excititor Team | 30-day epochs by default |
---
**Sprint Status**: TODO (0/9 tasks complete)

# Sprint 7100.0003.0001 — UI Trust Algebra Panel
## Topic & Scope
- Implement the "Trust Algebra" visualization panel for explaining VEX verdicts.
- Create confidence meter, P/C/R stacked bars, and claim comparison table.
- Add replay button for verdict reproduction.
- **Working directory:** `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/`
## Dependencies & Concurrency
- **Upstream**: Sprint 7100.0001.0002 (Verdict Manifest), Sprint 7100.0002.0001 (Policy Gates)
- **Downstream**: Sprint 7100.0003.0002 (Integration)
- **Safe to parallelize with**: Sprint 7100.0002.0002 (Calibration)
## Documentation Prerequisites
- `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
- Angular v17 best practices
- Existing vulnerability detail views in `src/Web/StellaOps.Web/`
---
## Tasks
### T1: TrustAlgebraComponent
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create the main Trust Algebra Angular component for verdict explanation.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/trust-algebra.component.ts`
**Acceptance Criteria**:
- [ ] Angular standalone component
- [ ] Input: VerdictManifest from API
- [ ] Header: CVE × Asset digest → final status + confidence meter
- [ ] Expandable/collapsible sections for detailed breakdown
- [ ] Integration with existing vulnerability detail view
- [ ] Responsive design for different screen sizes
**Component Structure**:
```typescript
@Component({
selector: 'app-trust-algebra',
standalone: true,
imports: [
CommonModule,
ConfidenceMeterComponent,
TrustVectorBarsComponent,
ClaimTableComponent,
PolicyChipsComponent,
ReplayButtonComponent
],
templateUrl: './trust-algebra.component.html',
styleUrls: ['./trust-algebra.component.scss']
})
export class TrustAlgebraComponent {
@Input() verdictManifest!: VerdictManifest;
@Input() isReplayMode = false;
showConflicts = false;
expandedSections: Set<string> = new Set(['summary']);
toggleSection(section: string): void;
toggleConflicts(): void;
}
```
---
### T2: Confidence Meter Visualization
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create confidence meter visualization showing 0-1 scale with color coding.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/confidence-meter.component.ts`
**Acceptance Criteria**:
- [ ] Circular or linear meter showing confidence 0-1
- [ ] Color gradient: red (0-0.4) → yellow (0.4-0.7) → green (0.7-1.0)
- [ ] Numeric display with 2 decimal precision
- [ ] Threshold markers for policy gates (e.g., prod minimum at 0.75)
- [ ] Animation on value change
- [ ] Accessible: ARIA labels, keyboard navigation
**Visual Spec**:
```
┌─────────────────────────────────────┐
│ ┌───────────────────────────┐ │
│ │ ◐ 0.82 │ │
│ │ CONFIDENCE │ │
│ └───────────────────────────┘ │
│ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓░░░░ │0.75 │
│ ↑ min-prod │
└─────────────────────────────────────┘
```
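The color bands from the acceptance criteria map directly to a small helper. A minimal sketch using hard bands — the real meter may interpolate a continuous gradient instead:

```typescript
// Band thresholds from the acceptance criteria:
// red [0, 0.4), yellow [0.4, 0.7), green [0.7, 1.0].
function confidenceColor(value: number): "red" | "yellow" | "green" {
  if (value < 0.4) return "red";
  if (value < 0.7) return "yellow";
  return "green";
}
```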
---
### T3: P/C/R Stacked Bar Chart
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create stacked bar visualization for trust vector components.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/trust-vector-bars.component.ts`
**Acceptance Criteria**:
- [ ] Horizontal stacked bar showing P/C/R contributions
- [ ] Color-coded segments: P=blue, C=green, R=purple
- [ ] Hover/click for detailed breakdown
- [ ] Show weighted vs. raw values
- [ ] Legend with component labels
- [ ] Responsive sizing
**Visual Spec**:
```
┌─────────────────────────────────────┐
│ Trust Vector Breakdown │
│ │
│ ████████████▓▓▓▓▓▓▓▓░░░░░░ = 0.78 │
│ └──P:0.41──┘└─C:0.26─┘└R:0.11┘ │
│ │
│ ○ Provenance (wP=0.45) 0.90 │
│ ○ Coverage (wC=0.35) 0.75 │
│ ○ Replayability (wR=0.20) 0.55 │
└─────────────────────────────────────┘
```
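The mock-up's arithmetic is worth making explicit: each bar segment is weight × component, and the composite is their sum (0.45·0.90 + 0.35·0.75 + 0.20·0.55 ≈ 0.78). A sketch reproducing it — the wP/wC/wR values are the illustrative ones from the mock-up, not fixed defaults:

```typescript
interface TrustVector { provenance: number; coverage: number; replayability: number; }
interface Weights { wP: number; wC: number; wR: number; }

// Returns the three weighted segments shown in the bar plus their sum.
function weightedSegments(v: TrustVector, w: Weights) {
  const p = w.wP * v.provenance;
  const c = w.wC * v.coverage;
  const r = w.wR * v.replayability;
  return { p, c, r, composite: p + c + r };
}
```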
---
### T4: Claim Comparison Table
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create sortable table showing all claims with scores and conflict highlighting.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/claim-table.component.ts`
**Acceptance Criteria**:
- [ ] Table columns: Source, Status, Reason, P/C/R, Strength, Freshness, ClaimScore
- [ ] Sortable by any column
- [ ] Winning claim highlighted
- [ ] Conflict toggle: show/hide conflicting claims
- [ ] Row expansion for full claim details
- [ ] Export to CSV/JSON
**Visual Spec**:
```
┌──────────────────────────────────────────────────────────────────────┐
│ VEX Claims (3) [Toggle Conflicts ☐] │
├──────────┬─────────────┬──────────────┬─────┬─────┬─────┬───────────┤
│ Source │ Status │ Reason │ P │ C │ R │ ClaimScore│
├──────────┼─────────────┼──────────────┼─────┼─────┼─────┼───────────┤
│ ★redhat │ not_affected│ config_off │ 0.90│ 0.85│ 0.60│ 0.82 ▲ │
│ ubuntu │ not_affected│ not_present │ 0.80│ 0.75│ 0.50│ 0.71 │
│ ⚠internal│ affected │ under_invest │ 0.85│ 0.95│ 0.90│ 0.58* │
└──────────┴─────────────┴──────────────┴─────┴─────┴─────┴───────────┘
★ = Winner ⚠ = Conflict * = Penalty Applied
```
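One possible reading of the `* = Penalty Applied` marker, consistent with this sprint's decision to use a 0.25 conflict penalty: a claim that conflicts with the winning status has its score scaled down before display. This is a hypothetical illustration of that interpretation, not the confirmed merge formula:

```typescript
// Hypothetical conflict penalty: multiply the raw ClaimScore by
// (1 - penalty), with 0.25 as the advisory's default penalty.
function applyConflictPenalty(score: number, penalty = 0.25): number {
  return score * (1 - penalty);
}
```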
---
### T5: Policy Chips Display
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create chip/tag display showing which policy gates were applied.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/policy-chips.component.ts`
**Acceptance Criteria**:
- [ ] Chips for each applied gate (MinConfidence, SourceQuota, etc.)
- [ ] Color: green=passed, red=failed, gray=not applicable
- [ ] Click to open policy YAML/JSON viewer (read-only in replay mode)
- [ ] Tooltip with gate configuration
- [ ] Show policy_hash and lattice_version
**Visual Spec**:
```
┌─────────────────────────────────────────────────────────────┐
│ Policy Gates │
│ │
│ [✓ MinConfidence] [✓ SourceQuota] [— Reachability] [✓ PASS]│
│ │
│ Policy: sha256:abc123... Lattice: v1.2.0 │
│ [View Policy YAML] │
└─────────────────────────────────────────────────────────────┘
```
---
### T6: Replay Button Component
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Description**:
Create "Reproduce Verdict" button that triggers replay verification.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/components/trust-algebra/replay-button.component.ts`
**Acceptance Criteria**:
- [ ] Button triggers replay verification API call
- [ ] Loading state during verification
- [ ] Success: show checkmark + "Verdict Reproduced"
- [ ] Failure: show diff panel with discrepancies
- [ ] Download signed VerdictManifest option
- [ ] Copy manifest ID to clipboard
**Visual Spec**:
```
┌─────────────────────────────────────┐
│ [🔄 Reproduce Verdict] [📋 Copy ID]│
│ │
│ After click (success): │
│ [✓ Verdict Reproduced] [⬇ Download]│
│ │
│ After click (failure): │
│ [✗ Mismatch Detected] │
│ ┌─────────────────────────────────┐ │
│ │ Differences: │ │
│ │ - confidence: 0.82 → 0.81 │ │
│ │ - freshness: 0.95 → 0.94 │ │
│ └─────────────────────────────────┘ │
└─────────────────────────────────────┘
```
---
### T7: Trust Algebra API Service
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Create Angular service for Trust Algebra API calls.
**Implementation Path**: `src/Web/StellaOps.Web/src/app/features/vulnerabilities/services/trust-algebra.service.ts`
**Acceptance Criteria**:
- [ ] `TrustAlgebraService` for API integration
- [ ] Get verdict manifest by ID
- [ ] Get verdict by (asset, CVE)
- [ ] Trigger replay verification
- [ ] Download signed manifest
- [ ] Error handling with user-friendly messages
**Service Spec**:
```typescript
@Injectable({ providedIn: 'root' })
export class TrustAlgebraService {
constructor(private http: HttpClient) {}
getVerdictManifest(manifestId: string): Observable<VerdictManifest>;
getVerdictByScope(
assetDigest: string,
vulnerabilityId: string
): Observable<VerdictManifest | null>;
replayVerdict(manifestId: string): Observable<ReplayVerificationResult>;
downloadManifest(manifestId: string): Observable<Blob>;
}
```
---
### T8: Accessibility & Keyboard Navigation
**Assignee**: UI Team
**Story Points**: 3
**Status**: TODO
**Description**:
Ensure Trust Algebra panel meets accessibility standards.
**Implementation Path**: All components in `trust-algebra/`
**Acceptance Criteria**:
- [ ] WCAG 2.1 AA compliance
- [ ] Keyboard navigation for all interactive elements
- [ ] Screen reader support with ARIA labels
- [ ] High contrast mode support
- [ ] Focus indicators
- [ ] Color-blind friendly palette options
---
### T9: E2E Tests
**Assignee**: UI Team
**Story Points**: 5
**Status**: TODO
**Description**:
End-to-end tests for Trust Algebra panel.
**Implementation Path**: `src/Web/StellaOps.Web/e2e/trust-algebra/`
**Acceptance Criteria**:
- [ ] Component rendering tests
- [ ] Confidence meter accuracy tests
- [ ] Claim table sorting/filtering tests
- [ ] Replay button flow tests
- [ ] Policy chips interaction tests
- [ ] Accessibility tests (axe-core)
- [ ] Responsive design tests
- [ ] Cross-browser tests (Chrome, Firefox, Safari)
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | UI Team | TrustAlgebraComponent |
| 2 | T2 | TODO | T1 | UI Team | Confidence Meter |
| 3 | T3 | TODO | T1 | UI Team | P/C/R Stacked Bars |
| 4 | T4 | TODO | T1 | UI Team | Claim Comparison Table |
| 5 | T5 | TODO | T1 | UI Team | Policy Chips Display |
| 6 | T6 | TODO | T1, T7 | UI Team | Replay Button |
| 7 | T7 | TODO | — | UI Team | API Service |
| 8 | T8 | TODO | T1-T6 | UI Team | Accessibility |
| 9 | T9 | TODO | T1-T8 | UI Team | E2E Tests |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Angular standalone | Decision | UI Team | Using Angular 17 standalone components |
| Chart library | Decision | UI Team | Consider ngx-charts or custom SVG for visualizations |
| Real-time updates | Risk | UI Team | May need WebSocket for live verdict updates |
| UX wireframes | Dependency | Product | Wireframes needed before implementation |
---
**Sprint Status**: TODO (0/9 tasks complete)

# Sprint 7100.0003.0002 — Integration & Documentation
## Topic & Scope
- End-to-end integration of all Trust Lattice components.
- Create comprehensive documentation and specifications.
- Update sample configuration files.
- **Working directory:** `docs/` and cross-module integration
## Dependencies & Concurrency
- **Upstream**: All prior sprints (7100.0001.0001 through 7100.0003.0001)
- **Downstream**: None (final sprint)
- **Safe to parallelize with**: None (integration sprint)
## Documentation Prerequisites
- All prior sprint deliverables completed
- `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
---
## Tasks
### T1: Update Excititor Architecture Documentation
**Assignee**: Docs Guild
**Story Points**: 3
**Status**: TODO
**Description**:
Update Excititor architecture documentation to include trust lattice.
**Implementation Path**: `docs/modules/excititor/architecture.md`
**Acceptance Criteria**:
- [ ] Add Trust Lattice section to architecture overview
- [ ] Document TrustVector model and scoring
- [ ] Document ClaimScore calculation pipeline
- [ ] Update data flow diagrams
- [ ] Cross-reference to trust-lattice.md specification
---
### T2: Create Trust Lattice Specification
**Assignee**: Docs Guild
**Story Points**: 8
**Status**: TODO
**Description**:
Create comprehensive trust lattice specification document.
**Implementation Path**: `docs/modules/excititor/trust-lattice.md`
**Acceptance Criteria**:
- [ ] Trust vector model (P/C/R components)
- [ ] Scoring formulas with examples
- [ ] Claim strength and freshness calculations
- [ ] Merge algorithm specification
- [ ] Conflict handling rules
- [ ] Policy gates reference
- [ ] Configuration reference
- [ ] API endpoint reference
**Document Outline**:
```markdown
# VEX Trust Lattice Specification
## 1. Overview
## 2. Trust Vector Model
2.1 Provenance (P)
2.2 Coverage (C)
2.3 Replayability (R)
2.4 Weight Configuration
## 3. Claim Scoring
3.1 Base Trust Calculation
3.2 Claim Strength Multipliers
3.3 Freshness Decay
3.4 ClaimScore Formula
## 4. Lattice Merge Algorithm
4.1 Partial Ordering
4.2 Conflict Detection
4.3 Winner Selection
4.4 Audit Trail Generation
## 5. Policy Gates
5.1 MinimumConfidenceGate
5.2 UnknownsBudgetGate
5.3 SourceQuotaGate
5.4 ReachabilityRequirementGate
## 6. Deterministic Replay
6.1 Input Pinning
6.2 Verdict Manifest
6.3 Replay Verification
## 7. Configuration Reference
## 8. API Reference
## 9. Examples
```
---
### T3: Update Policy Architecture Documentation
**Assignee**: Docs Guild
**Story Points**: 3
**Status**: TODO
**Description**:
Update Policy module documentation with gate specifications.
**Implementation Path**: `docs/modules/policy/architecture.md`
**Acceptance Criteria**:
- [ ] Add Policy Gates section
- [ ] Document gate interface and registry
- [ ] Document gate configuration schema
- [ ] Include decision flow diagrams
- [ ] Cross-reference to trust-lattice.md
---
### T4: Create Verdict Manifest Specification
**Assignee**: Docs Guild
**Story Points**: 5
**Status**: TODO
**Description**:
Create specification for verdict manifest format and signing.
**Implementation Path**: `docs/modules/authority/verdict-manifest.md`
**Acceptance Criteria**:
- [ ] Verdict manifest schema
- [ ] Input pinning requirements
- [ ] DSSE signing process
- [ ] Storage and indexing
- [ ] Replay verification protocol
- [ ] JSON Schema definition
**Document Outline**:
```markdown
# Verdict Manifest Specification
## 1. Overview
## 2. Manifest Schema
2.1 Identity Fields
2.2 Input Pinning
2.3 Verdict Result
2.4 Policy Context
## 3. Deterministic Serialization
3.1 Canonical JSON
3.2 Digest Computation
## 4. Signing
4.1 DSSE Envelope
4.2 Predicate Type
4.3 Rekor Integration
## 5. Storage
5.1 PostgreSQL Schema
5.2 Indexing Strategy
## 6. Replay Verification
6.1 Verification Protocol
6.2 Failure Handling
## 7. API Reference
## 8. JSON Schema
```
---
### T5: Create JSON Schemas
**Assignee**: Docs Guild
**Story Points**: 3
**Status**: TODO
**Description**:
Create JSON Schemas for trust lattice data structures.
**Implementation Path**: `docs/attestor/schemas/`
**Acceptance Criteria**:
- [ ] `verdict-manifest.schema.json`
- [ ] `calibration-manifest.schema.json`
- [ ] `trust-vector.schema.json`
- [ ] Schema validation tests
- [ ] Integration with OpenAPI specs
**Schema Files**:
```
docs/attestor/schemas/
├── verdict-manifest.schema.json
├── calibration-manifest.schema.json
├── trust-vector.schema.json
└── claim-score.schema.json
```
---
### T6: Update API Reference
**Assignee**: Docs Guild
**Story Points**: 3
**Status**: TODO
**Description**:
Update API reference documentation with new endpoints.
**Implementation Path**: `docs/09_API_CLI_REFERENCE.md` and OpenAPI specs
**Acceptance Criteria**:
- [ ] Document verdict manifest endpoints
- [ ] Document replay verification endpoint
- [ ] Document calibration endpoints
- [ ] Update OpenAPI specifications
- [ ] Add example requests/responses
---
### T7: Create Sample Configuration Files
**Assignee**: Docs Guild
**Story Points**: 2
**Status**: TODO
**Description**:
Create sample configuration files for trust lattice.
**Implementation Path**: `etc/`
**Acceptance Criteria**:
- [ ] `etc/trust-lattice.yaml.sample` - Trust vector defaults and weights
- [ ] `etc/policy-gates.yaml.sample` - Gate configuration
- [ ] `etc/excititor-calibration.yaml.sample` - Calibration settings
- [ ] Comments explaining each setting
- [ ] Environment variable overrides documented
---
### T8: End-to-End Integration Tests
**Assignee**: QA Team
**Story Points**: 8
**Status**: TODO
**Description**:
Create comprehensive E2E tests for trust lattice flow.
**Implementation Path**: `src/Scanner/__Tests/StellaOps.Scanner.Integration.Tests/TrustLattice/`
**Acceptance Criteria**:
- [ ] Full flow: VEX ingest → score → merge → verdict → sign → replay
- [ ] Multi-source conflict scenarios
- [ ] Policy gate triggering scenarios
- [ ] Calibration epoch simulation
- [ ] UI integration verification
- [ ] Air-gap bundle verification
- [ ] Performance benchmarks
**Test Scenarios**:
```
1. Single source, high confidence → PASS
2. Multiple agreeing sources → PASS with corroboration boost
3. Conflicting sources → Conflict penalty applied
4. Below minimum confidence → FAIL gate
5. Source quota exceeded → FAIL gate (no corroboration)
6. Critical CVE without reachability → FAIL gate
7. Replay verification → Success (identical)
8. Replay with changed inputs → Failure (diff reported)
9. Calibration epoch → Adjustments applied correctly
```
---
### T9: Training and Handoff Documentation
**Assignee**: Docs Guild
**Story Points**: 3
**Status**: TODO
**Description**:
Create training materials for support and operations teams.
**Implementation Path**: `docs/operations/` and `docs/training/`
**Acceptance Criteria**:
- [ ] Operations runbook: `docs/operations/trust-lattice-runbook.md`
- [ ] Troubleshooting guide: `docs/operations/trust-lattice-troubleshooting.md`
- [ ] Support FAQ
- [ ] Architecture overview for new team members
- [ ] Claims index update: TRUST-001, VERDICT-001, CALIBRATION-001
---
## Delivery Tracker
| # | Task ID | Status | Dependency | Owners | Task Definition |
|---|---------|--------|------------|--------|-----------------|
| 1 | T1 | TODO | — | Docs Guild | Excititor Architecture Update |
| 2 | T2 | TODO | T1 | Docs Guild | Trust Lattice Specification |
| 3 | T3 | TODO | — | Docs Guild | Policy Architecture Update |
| 4 | T4 | TODO | — | Docs Guild | Verdict Manifest Specification |
| 5 | T5 | TODO | T2, T4 | Docs Guild | JSON Schemas |
| 6 | T6 | TODO | T2, T4 | Docs Guild | API Reference Update |
| 7 | T7 | TODO | T2 | Docs Guild | Sample Configuration Files |
| 8 | T8 | TODO | All prior | QA Team | E2E Integration Tests |
| 9 | T9 | TODO | T1-T7 | Docs Guild | Training & Handoff |
---
## Execution Log
| Date (UTC) | Update | Owner |
|------------|--------|-------|
| 2025-12-22 | Sprint file created from advisory processing. | Agent |
---
## Decisions & Risks
| Item | Type | Owner | Notes |
|------|------|-------|-------|
| Documentation format | Decision | Docs Guild | Using existing markdown format |
| Schema validation | Decision | Docs Guild | Using JSON Schema draft 2020-12 |
| Training timing | Risk | Docs Guild | Training should happen before GA release |
| E2E test infrastructure | Dependency | QA Team | Requires all modules deployed together |
---
## Definition of Done
Before marking this sprint complete:
- [ ] All documentation reviewed by 2+ stakeholders
- [ ] All JSON schemas validate against sample data
- [ ] E2E tests pass in CI pipeline
- [ ] Sample configs tested in development environment
- [ ] Training materials reviewed by support team
- [ ] Advisory archived to `docs/product-advisories/archived/`
---
**Sprint Status**: TODO (0/9 tasks complete)


@@ -0,0 +1,268 @@
# SPRINT_7100 Summary — VEX Trust Lattice
**Epic**: VEX Trust Lattice for Explainable, Replayable Decisioning
**Total Duration**: 12 weeks (6 sprints)
**Status**: TODO
**Source Advisory**: `docs/product-advisories/archived/22-Dec-2026 - Building a Trust Lattice for VEX Sources.md`
---
## Executive Summary
Implement a sophisticated 3-component trust vector model (Provenance, Coverage, Replayability) for VEX sources, enabling explainable and deterministically replayable vulnerability decisioning. This replaces the current single-weight trust model with a mathematically rigorous lattice-based approach that produces signed, auditable verdict manifests.
### Key Features
1. **Trust Vector (P/C/R)**: 3-component scoring per VEX source
2. **Claim Scoring**: `ClaimScore = BaseTrust(S) * M * F` with strength and freshness multipliers
3. **Policy Gates**: Minimum confidence, unknowns budget, source quotas, reachability requirements
4. **Verdict Manifest**: DSSE-signed, indexed, replayable verdicts
5. **Trust Algebra UI**: Visual explanation panel for trust decisions
6. **Calibration**: Rolling trust weight adjustment based on post-mortem truth
---
## Sprint Overview
| Sprint ID | Topic | Duration | Status | Key Deliverables |
|-----------|-------|----------|--------|------------------|
| **7100.0001.0001** | Trust Vector Foundation | 2 weeks | TODO | TrustVector, ClaimStrength, FreshnessCalculator, ClaimScoreCalculator |
| **7100.0001.0002** | Verdict Manifest & Replay | 2 weeks | TODO | VerdictManifest, DSSE signing, PostgreSQL store, replay verification |
| **7100.0002.0001** | Policy Gates & Lattice Merge | 2 weeks | TODO | ClaimScoreMerger, MinimumConfidenceGate, SourceQuotaGate, UnknownsBudgetGate |
| **7100.0002.0002** | Source Defaults & Calibration | 2 weeks | TODO | DefaultTrustVectors, CalibrationManifest, TrustCalibrationService |
| **7100.0003.0001** | UI Trust Algebra Panel | 2 weeks | TODO | TrustAlgebraComponent, confidence meter, P/C/R bars, claim table |
| **7100.0003.0002** | Integration & Documentation | 2 weeks | TODO | Architecture docs, trust-lattice.md, verdict-manifest.md, API reference |
---
## Gap Analysis (Advisory vs. Current Implementation)
| Advisory Feature | Current State | Gap Severity | Sprint |
|-----------------|---------------|--------------|--------|
| 3-Component Trust Vector (P/C/R) | Single weight per provider | MAJOR | 7100.0001.0001 |
| Claim Strength Multiplier (M) | Status-based adjustments only | MEDIUM | 7100.0001.0001 |
| Freshness Decay (F) | Fixed staleness penalties (-5%/-10%) | MEDIUM | 7100.0001.0001 |
| ClaimScore = BaseTrust*M*F | Not implemented | MAJOR | 7100.0001.0001 |
| Conflict Mode + Replay Proof | K4 conflict detection, no down-weight | MINOR | 7100.0002.0001 |
| Verdict Manifest (DSSE-signed) | Not implemented | MAJOR | 7100.0001.0002 |
| Policy Gates (min confidence, quotas) | Partial (jurisdiction rules) | MEDIUM | 7100.0002.0001 |
| Deterministic Replay Pinning | Determinism prioritized, no manifest | MEDIUM | 7100.0001.0002 |
| UI Trust Algebra Panel | Not implemented | MEDIUM | 7100.0003.0001 |
| Calibration Manifest | Not implemented | MINOR | 7100.0002.0002 |
---
## Batch A: Core Models (Sprints 7100.0001.0001–0002)
### Sprint 7100.0001.0001: Trust Vector Foundation
**Owner**: Excititor Team + Policy Team
**Working Directory**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/`
**Deliverables**:
- [ ] `TrustVector` record with P/C/R components and configurable weights
- [ ] `ClaimStrength` enum with evidence-based multipliers (0.40–1.00)
- [ ] `FreshnessCalculator` with configurable half-life decay (default 90 days)
- [ ] `ClaimScoreCalculator` implementing `BaseTrust(S) * M * F`
- [ ] Extended `VexProvider` with TrustVector configuration
- [ ] Unit tests for scoring calculations (determinism validation)
**Tests**: ≥90% coverage, determinism assertions
---
### Sprint 7100.0001.0002: Verdict Manifest & Replay
**Owner**: Authority Team + Excititor Team
**Working Directory**: `src/Authority/__Libraries/StellaOps.Authority.Core/`
**Deliverables**:
- [ ] `VerdictManifest` model with inputs pinning
- [ ] `VerdictManifestBuilder` for deterministic assembly
- [ ] DSSE signing for verdict manifests via Signer module
- [ ] `IVerdictManifestStore` interface and PostgreSQL implementation
- [ ] Indexing by (asset_digest, CVE, policy_hash, lattice_version)
- [ ] Replay verification endpoint
- [ ] Integration tests with determinism assertions
**Tests**: DSSE signing tests, replay verification tests
---
## Batch B: Policy Integration (Sprints 7100.0002.0001–0002)
### Sprint 7100.0002.0001: Policy Gates & Lattice Merge
**Owner**: Policy Team
**Working Directory**: `src/Policy/__Libraries/StellaOps.Policy/`
**Deliverables**:
- [ ] Extend `TrustLatticeEngine` with ClaimScore-based merge
- [ ] Implement conflict penalty (delta=0.25) on contradictory claims
- [ ] `MinimumConfidenceGate` policy hook (prod requires ≥0.75)
- [ ] `UnknownsBudgetGate` policy hook (fail if unknowns > N)
- [ ] `SourceQuotaGate` (cap influence at 60% unless corroborated)
- [ ] `ReachabilityRequirementGate` for criticals
- [ ] Policy configuration schema (YAML/JSON)
- [ ] Unit tests for all gates with edge cases
**Tests**: Gate edge cases, conflict scenarios
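As a hedged sketch of how these gates and the conflict penalty might compose (thresholds mirror the deliverables above, but the function names and signatures are illustrative, not the actual `StellaOps.Policy` API):

```python
def minimum_confidence_gate(confidence: float, threshold: float = 0.75) -> bool:
    """Prod gate: merged confidence must meet or exceed the threshold."""
    return confidence >= threshold

def unknowns_budget_gate(unknown_count: int, budget: int = 5) -> bool:
    """Fail when unresolved unknowns exceed the configured budget N."""
    return unknown_count <= budget

def source_quota_gate(contributions: dict, cap: float = 0.60, corroborated: bool = False) -> bool:
    """Cap any single source's share of total influence at 60% unless corroborated."""
    total = sum(contributions.values())
    if corroborated or total == 0:
        return True
    return max(contributions.values()) / total <= cap

def apply_conflict_penalty(score: float, delta: float = 0.25) -> float:
    """Down-weight a merged score when contradictory claims are present."""
    return max(score - delta, 0.0)

def evaluate_gates(confidence, unknowns, contributions, corroborated=False):
    checks = {
        "min_confidence": minimum_confidence_gate(confidence),
        "unknowns_budget": unknowns_budget_gate(unknowns),
        "source_quota": source_quota_gate(contributions, corroborated=corroborated),
    }
    return all(checks.values()), checks

ok, detail = evaluate_gates(0.82, 2, {"vendor": 0.5, "distro": 0.4})
print(ok)  # True
```

Each gate fails closed and reports its individual result, which supports the per-gate edge-case tests called for above.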
---
### Sprint 7100.0002.0002: Source Defaults & Calibration
**Owner**: Excititor Team
**Working Directory**: `src/Excititor/__Libraries/StellaOps.Excititor.Core/`
**Deliverables**:
- [ ] Default trust vectors for Vendor/Distro/Internal source classes
- [ ] `SourceClassification` service for auto-classification
- [ ] `CalibrationManifest` model for tuning history
- [ ] Calibration comparison (claim vs. post-mortem truth)
- [ ] Learning rate adjustment (±0.02/epoch)
- [ ] Configuration for calibration policy
**Tests**: Default vector tests, calibration accuracy tests
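The ±0.02/epoch learning-rate adjustment can be sketched as a clamped per-epoch nudge toward post-mortem truth (a minimal illustration, not the `TrustCalibrationService` API):

```python
def calibrate(weight: float, claim_correct: bool,
              learning_rate: float = 0.02, lo: float = 0.0, hi: float = 1.0) -> float:
    """Nudge a trust component up or down by the learning rate, clamped to [lo, hi]."""
    delta = learning_rate if claim_correct else -learning_rate
    return min(hi, max(lo, weight + delta))

# A source whose claim matched post-mortem truth drifts upward by 0.02:
print(round(calibrate(0.80, claim_correct=True), 2))  # 0.82
```

The clamp keeps calibration from pushing any component outside the [0..1] range the trust vector assumes, which also bounds drift per epoch for the accuracy tests.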
---
## Batch C: UI & Integration (Sprints 7100.0003.0001–0002)
### Sprint 7100.0003.0001: UI Trust Algebra Panel
**Owner**: UI Team
**Working Directory**: `src/Web/StellaOps.Web/`
**Deliverables**:
- [ ] `TrustAlgebraComponent` Angular component
- [ ] Confidence meter visualization (0–1 scale)
- [ ] P/C/R stacked bar chart for winning claim
- [ ] Claim comparison table with conflict toggle
- [ ] Policy chips display with YAML viewer (read-only in replay)
- [ ] "Reproduce verdict" replay button
- [ ] E2E tests for trust algebra panel
**Tests**: Component tests, accessibility tests
---
### Sprint 7100.0003.0002: Integration & Documentation
**Owner**: Docs Guild + All Teams
**Working Directory**: `docs/` and cross-module integration
**Deliverables**:
- [ ] Update `docs/modules/excititor/architecture.md` with trust lattice
- [ ] Create `docs/modules/excititor/trust-lattice.md` specification
- [ ] Update `docs/modules/policy/architecture.md` with gates
- [ ] Create `docs/modules/authority/verdict-manifest.md` specification
- [ ] Add JSON Schema for VerdictManifest to `docs/attestor/`
- [ ] Update API reference with verdict manifest endpoints
- [ ] Integration tests: end-to-end trust lattice flow
- [ ] Update `etc/*.yaml.sample` configuration files
**Tests**: Documentation review, E2E integration tests
---
## Dependencies
```mermaid
graph TD
A[7100.0001.0001 Trust Vector] --> B[7100.0001.0002 Verdict Manifest]
A --> C[7100.0002.0001 Policy Gates]
B --> D[7100.0002.0002 Calibration]
C --> D
B --> E[7100.0003.0001 UI Panel]
C --> E
D --> F[7100.0003.0002 Integration]
E --> F
```
---
## Technical Design
### Trust Vector Formula
```
BaseTrust(S) = wP*P + wC*C + wR*R
Where:
- P = Provenance score [0..1]
- C = Coverage score [0..1]
- R = Replayability score [0..1]
- wP = 0.45 (default)
- wC = 0.35 (default)
- wR = 0.20 (default)
```
### Claim Score Formula
```
ClaimScore = BaseTrust(S) * M * F
Where:
- M = Claim strength multiplier [0.40..1.00]
- F = Freshness decay = max(exp(-ln(2) * age_days / half_life), floor)
- half_life = 90 days (default)
- floor = 0.35 (minimum freshness)
```
### Default Trust Vectors by Source Class
| Source Class | P | C | R |
|-------------|---|---|---|
| Vendor | 0.90 | 0.70 | 0.60 |
| Distro | 0.80 | 0.85 | 0.60 |
| Internal | 0.85 | 0.95 | 0.90 |
### Claim Strength Values
| Evidence Type | Strength (M) |
|--------------|--------------|
| Exploitability analysis + reachability proof | 1.00 |
| Config/feature-flag reason with evidence | 0.80 |
| Vendor blanket statement | 0.60 |
| Under investigation | 0.40 |
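Taken together, the formulas and defaults above can be sketched in a few lines (a hedged illustration only; the production logic belongs to `ClaimScoreCalculator`, with the weight, strength, and decay values shown being the documented defaults):

```python
import math

WEIGHTS = {"P": 0.45, "C": 0.35, "R": 0.20}  # default wP / wC / wR

def base_trust(p: float, c: float, r: float) -> float:
    """BaseTrust(S) = wP*P + wC*C + wR*R."""
    return WEIGHTS["P"] * p + WEIGHTS["C"] * c + WEIGHTS["R"] * r

def freshness(age_days: float, half_life: float = 90.0, floor: float = 0.35) -> float:
    """F = max(exp(-ln(2) * age_days / half_life), floor)."""
    return max(math.exp(-math.log(2) * age_days / half_life), floor)

def claim_score(p, c, r, strength, age_days) -> float:
    """ClaimScore = BaseTrust(S) * M * F."""
    return base_trust(p, c, r) * strength * freshness(age_days)

# Vendor defaults (P=0.90, C=0.70, R=0.60), blanket statement (M=0.60), 90-day-old claim:
print(round(claim_score(0.90, 0.70, 0.60, 0.60, 90), 4))  # 0.231
```

At exactly one half-life the freshness multiplier halves the score; claims older than roughly 136 days bottom out at the 0.35 floor.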
---
## Success Metrics
### Technical Metrics
- **Determinism**: 100% bit-identical verdict manifests for same inputs
- **Performance**: Verdict computation <100ms for 1k claims (p95)
- **Accuracy**: Calibration drift <5% per epoch
- **Scalability**: 100k verdicts/day without degradation
### Business Metrics
- **Explainability**: 100% of verdicts include full audit trail
- **Compliance**: DSSE-signed verdicts meet audit requirements
- **Customer adoption**: 30% of customers enable the trust algebra UI within 12 months
---
## Architectural Decisions
| Decision | Rationale |
|----------|-----------|
| Extend, don't replace | Build trust vectors alongside existing append-only linksets |
| Backward compatibility | Existing `VexProvider.Trust.Weight` maps to legacy mode |
| Scoring at evaluation time | No ingestion-time decisioning per AOC-19 |
| Air-gap support | Trust vectors work offline with local signature verification |
| Calibration as separate manifest | Allows auditable tuning history |
---
## Quick Links
**Sprint Files**:
- [SPRINT_7100_0001_0001 - Trust Vector Foundation](SPRINT_7100_0001_0001_trust_vector_foundation.md)
- [SPRINT_7100_0001_0002 - Verdict Manifest & Replay](SPRINT_7100_0001_0002_verdict_manifest_replay.md)
- [SPRINT_7100_0002_0001 - Policy Gates & Merge](SPRINT_7100_0002_0001_policy_gates_merge.md)
- [SPRINT_7100_0002_0002 - Source Defaults & Calibration](SPRINT_7100_0002_0002_source_defaults_calibration.md)
- [SPRINT_7100_0003_0001 - UI Trust Algebra Panel](SPRINT_7100_0003_0001_ui_trust_algebra.md)
- [SPRINT_7100_0003_0002 - Integration & Documentation](SPRINT_7100_0003_0002_integration_documentation.md)
**Documentation**:
- [Trust Lattice Specification](../modules/excititor/trust-lattice.md)
- [Verdict Manifest Specification](../modules/authority/verdict-manifest.md)
- [Excititor Architecture](../modules/excititor/architecture.md)
**Source Advisory**:
- [22-Dec-2026 - Building a Trust Lattice for VEX Sources](../product-advisories/archived/22-Dec-2026%20-%20Building%20a%20Trust%20Lattice%20for%20VEX%20Sources.md)
---
**Last Updated**: 2025-12-22
**Next Review**: Weekly during sprint execution


@@ -0,0 +1,305 @@
# Gap Analysis: Explainable Triage and Proof-Linked Evidence
**Date:** 2025-12-22
**Advisory:** 18-Dec-2025 - Designing Explainable Triage and Proof-Linked Evidence
**Analyst:** Agent
---
## 1. Executive Summary
The advisory "Designing Explainable Triage and Proof-Linked Evidence" defines a comprehensive vision for making security triage **explainable** and approvals **provably evidence-linked**. This gap analysis compares the advisory requirements against the current StellaOps implementation.
**Key Finding:** ~85% of the advisory is already implemented through prior sprint work (3800, 3801, 4100, 4200 series). Six specific gaps remain, addressed by the SPRINT_4300 series.
---
## 2. Advisory Requirements Summary
### 2.1 Explainable Triage UX
- Every risk row shows: Score, CVE, service, package
- Expand panel shows: Path, Boundary, VEX, Last-seen, Actions
- Data contract for evidence retrieval
### 2.2 Evidence-Linked Approvals
- Chain: SBOM → VEX → Policy Decision
- in-toto/DSSE attestations with signatures
- Gate merges/deploys on chain validation
### 2.3 Backend Requirements
- `/findings/:id/evidence` endpoint
- `/approvals/:artifact/attestations` endpoint
- Proof bundles as content-addressed blobs
- DSSE envelopes for signatures
### 2.4 CLI/API
- `stella verify image:<digest> --require sbom,vex,decision`
- Signed summary return
- Non-zero exit for CI/CD gates
### 2.5 Invariants
- Artifact anchoring (no "latest tag" approvals)
- Evidence closure (decision refs exact evidence)
- Signature chain (DSSE, signed, verifiable)
- Staleness (last_seen, expires_at, TTL)
### 2.6 Metrics
- % attestation completeness (target ≥95%)
- TTFE (time-to-first-evidence, target ≤30s)
- Post-deploy reversions (target: zero)
---
## 3. Implementation Status
### 3.1 Fully Implemented (No Action Needed)
| Requirement | Implementation | Evidence |
|-------------|----------------|----------|
| **Triage DB Schema** | TriageDbContext with 8 entities | `src/Scanner/__Libraries/StellaOps.Scanner.Triage/` |
| **Evidence Bundle** | EvidenceBundle with 6 evidence types | `src/__Libraries/StellaOps.Evidence.Bundle/` |
| **VEX Decision Models** | OpenVEX output with x-stellaops-evidence | `src/Policy/StellaOps.Policy.Engine/Vex/` |
| **Score Explanation** | ScoreExplanationService, additive model | `src/Signals/StellaOps.Signals/Services/` |
| **Trust Lattice Engine** | K4 evaluation, claim aggregation | `src/Policy/__Libraries/StellaOps.Policy/TrustLattice/` |
| **Boundary Extractors** | K8s, Gateway, IaC extractors | SPRINT_3800_0002_* (archived, DONE) |
| **Human Approval Attestation** | stella.ops/human-approval@v1 | SPRINT_3801_0001_0004 (DONE) |
| **Risk Verdict Attestation** | RiskVerdictAttestation, RvaBuilder | SPRINT_4100_0003_0001 (DONE) |
| **OCI Referrer Push** | OciPushClient, RvaOciPublisher | SPRINT_4100_0003_0002 (DONE) |
| **Approve Button UI** | ApprovalButtonComponent (624 lines) | SPRINT_4100_0005_0001 (DONE) |
| **Decision Recording** | DecisionService, replay tokens | `src/Findings/StellaOps.Findings.Ledger/` |
| **Policy Gates** | PolicyGateEvaluator, Pass/Block/Warn | `src/Policy/StellaOps.Policy.Engine/Gates/` |
| **Exception Evaluation** | ExceptionEvaluator, compensating controls | SPRINT_3900 series (DONE) |
| **TTFS Telemetry** | TtfsIngestionService | `src/Telemetry/StellaOps.Telemetry.Core/Triage/` |
### 3.2 Planned (In Progress)
| Requirement | Sprint | Status |
|-------------|--------|--------|
| Proof Chain Verification UI | SPRINT_4200_0001_0001 | TODO |
### 3.3 Gaps Identified
| ID | Gap | Advisory Section | Priority |
|----|-----|------------------|----------|
| G1 | CLI Attestation Chain Verify | CLI/API, Pipeline gate | HIGH |
| G2 | Evidence Privacy Controls | Evidence privacy | MEDIUM |
| G3 | Evidence TTL Strategy API | Staleness invariant | MEDIUM |
| G4 | Predicate Type JSON Schemas | Predicate types | LOW |
| G5 | Metrics Dashboard | Metrics | LOW |
| G6 | Findings Evidence API | Backend, Data contract | MEDIUM |
---
## 4. Gap Details
### G1: CLI Attestation Chain Verify Command
**Advisory Requirement:**
```
stella verify image:<digest> --require sbom,vex,decision
```
Returns signed summary; pipelines fail on non-zero.
**Current State:**
- `stella verify offline` exists for offline verification
- No image-based attestation chain verification
- No `--require` attestation type filtering
**Gap:** Need online image verification with attestation requirements.
**Resolution:** SPRINT_4300_0001_0001
---
### G2: Evidence Privacy Controls
**Advisory Requirement:**
> Store file hashes, symbol names, and line ranges (no raw source required). Gate raw source behind elevated permissions.
**Current State:**
- Evidence contains full details
- No redaction service
- No permission-based access control
**Gap:** Need redaction levels and permission checks.
**Resolution:** SPRINT_4300_0002_0001
---
### G3: Evidence TTL Strategy Enforcement
**Advisory Requirement:**
> SBOM: long TTL (weeks/months). Boundary: short TTL (hours/days). Reachability: medium TTL. Staleness behavior in policy.
**Current State:**
- TTL fields exist on evidence entities
- No enforcement in policy gate
- No staleness warnings
**Gap:** Need TTL enforcer service integrated with policy.
**Resolution:** SPRINT_4300_0002_0002
---
### G4: Predicate Type JSON Schemas
**Advisory Requirement:**
> Predicate types: stella/sbom@v1, stella/vex@v1, stella/reachability@v1, stella/boundary@v1, stella/policy-decision@v1, stella/human-approval@v1
**Current State:**
- C# models exist for all predicate types
- No formal JSON Schema definitions
- No schema validation on attestation creation
**Gap:** Need JSON schemas and validation.
**Resolution:** SPRINT_4300_0003_0001
---
### G5: Attestation Completeness Metrics
**Advisory Requirement:**
> Metrics: % changes with complete attestations (target ≥95%), TTFE (target ≤30s), Post-deploy reversions (trend to zero)
**Current State:**
- TTFS telemetry exists (time-to-first-skeleton)
- No attestation completeness ratio
- No reversion tracking
- No Grafana dashboard
**Gap:** Need full metrics suite and dashboard.
**Resolution:** SPRINT_4300_0003_0002
---
### G6: Findings Evidence API Endpoint
**Advisory Requirement:**
> Backend: add `/findings/:id/evidence` (returns the contract).
Contract:
```json
{
"finding_id": "f-7b3c",
"cve": "CVE-2024-12345",
"component": {...},
"reachable_path": [...],
"entrypoint": {...},
"vex": {...},
"last_seen": "...",
"attestation_refs": [...]
}
```
**Current State:**
- EvidenceCompositionService exists internally
- No REST endpoint exposing advisory contract
- Different internal response format
**Gap:** Need REST endpoint with advisory-compliant contract.
**Resolution:** SPRINT_4300_0001_0002
---
## 5. Coverage Matrix
| Advisory Section | Subsection | Implemented | Gap Sprint |
|------------------|------------|-------------|------------|
| Explainable Triage UX | Row (collapsed) | ✅ | — |
| | Expand panel | ✅ | — |
| | Data contract | ⚠️ | 4300.0001.0002 |
| Evidence-Linked Approvals | Chain exists | ✅ | — |
| | in-toto/DSSE | ✅ | — |
| | Gate merges | ✅ | — |
| Backend | /findings/:id/evidence | ❌ | 4300.0001.0002 |
| | /approvals/:artifact/attestations | ✅ | — |
| | Proof bundles | ✅ | — |
| CLI/API | stella verify image | ❌ | 4300.0001.0001 |
| Invariants | Artifact anchoring | ✅ | — |
| | Evidence closure | ✅ | — |
| | Signature chain | ✅ | — |
| | Staleness | ⚠️ | 4300.0002.0002 |
| Data Model | artifacts table | ✅ | — |
| | findings table | ✅ | — |
| | evidence table | ✅ | — |
| | attestations table | ✅ | — |
| | approvals table | ✅ | — |
| Evidence Types | Reachable path proof | ✅ | — |
| | Boundary proof | ✅ | — |
| | VEX status | ✅ | — |
| | Score explanation | ✅ | — |
| Predicate Types | stella/sbom@v1 | ⚠️ | 4300.0003.0001 |
| | stella/vex@v1 | ⚠️ | 4300.0003.0001 |
| | stella/reachability@v1 | ⚠️ | 4300.0003.0001 |
| | stella/boundary@v1 | ⚠️ | 4300.0003.0001 |
| | stella/policy-decision@v1 | ⚠️ | 4300.0003.0001 |
| | stella/human-approval@v1 | ⚠️ | 4300.0003.0001 |
| Policy Gate | OPA/Rego | ✅ | — |
| | Signed decision | ✅ | — |
| Approve Button | Disabled until valid | ✅ | — |
| | Creates approval attestation | ✅ | — |
| Verification | Shared verifier library | ✅ | — |
| Privacy | Redacted proofs | ❌ | 4300.0002.0001 |
| | Elevated permissions | ❌ | 4300.0002.0001 |
| TTL Strategy | Per-type TTLs | ⚠️ | 4300.0002.0002 |
| Metrics | % completeness | ❌ | 4300.0003.0002 |
| | TTFE | ⚠️ | 4300.0003.0002 |
| | Reversions | ❌ | 4300.0003.0002 |
| UI Components | Findings list | ✅ | — |
| | Evidence drawer | ⏳ | 4200.0001.0001 |
| | Proof bundle viewer | ⏳ | 4200.0001.0001 |
**Legend:** ✅ Implemented | ⚠️ Partial | ❌ Missing | ⏳ Planned
---
## 6. Effort Estimation
| Sprint | Effort | Team | Parallelizable |
|--------|--------|------|----------------|
| 4300.0001.0001 | M (2-3d) | CLI | Yes |
| 4300.0001.0002 | S (1-2d) | Scanner | Yes |
| 4300.0002.0001 | M (2-3d) | Scanner | Yes |
| 4300.0002.0002 | S (1-2d) | Policy | Yes |
| 4300.0003.0001 | S (1-2d) | Attestor | Yes |
| 4300.0003.0002 | M (2-3d) | Telemetry | Yes |
**Total:** 10-14 days (can complete in 1-2 weeks with parallel execution)
---
## 7. Recommendations
1. **Prioritize G1 (CLI Verify)** - This is the only HIGH priority gap and enables CI/CD integration.
2. **Bundle G2+G3** - Evidence privacy and TTL can share context in Scanner/Policy teams.
3. **Defer G4+G5** - Predicate schemas and metrics are LOW priority; can follow after core functionality.
4. **Leverage 4200.0001.0001** - Proof Chain UI sprint is already planned; ensure it consumes new evidence API.
---
## 8. Appendix: Prior Sprint References
| Sprint | Topic | Status |
|--------|-------|--------|
| 3800.0000.0000 | Explainable Triage Master | DONE |
| 3800.0002.0001 | RichGraph Boundary Extractor | DONE |
| 3800.0002.0002 | K8s Boundary Extractor | DONE |
| 3800.0003.0001 | Evidence API Endpoint | DONE |
| 3801.0001.0001 | Policy Decision Attestation | DONE |
| 3801.0001.0004 | Human Approval Attestation | DONE |
| 4100.0003.0001 | Risk Verdict Attestation | DONE |
| 4100.0003.0002 | OCI Referrer Push | DONE |
| 4100.0005.0001 | Approve Button UI | DONE |
| 4200.0001.0001 | Proof Chain Verification UI | TODO |
---
**Analysis Complete:** 2025-12-22


@@ -23,7 +23,7 @@
## Documentation Prerequisites
- `docs/product-advisories/unprocessed/19-Dec-2025 - Trust Algebra and Lattice Engine Specification.md`
- `docs/product-advisories/archived/19-Dec-2025 - Trust Algebra and Lattice Engine Specification.md`
- `docs/modules/policy/architecture.md`
- `docs/reachability/lattice.md`


@@ -2,7 +2,7 @@
**IMPLID:** 1200 (Router infrastructure)
**Feature:** Centralized rate limiting for Stella Router as standalone product
**Advisory Source:** `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
**Advisory Source:** `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
**Owner:** Router Team
**Status:** DONE (Sprints 1–6 closed; Sprint 4 closed N/A)
**Priority:** HIGH - Core feature for Router product
@@ -210,7 +210,7 @@ Each target can have multiple rules (AND logic):
## Related Documentation
- **Advisory:** `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Advisory:** `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Implementation:** `src/__Libraries/StellaOps.Router.Gateway/RateLimit/`
- **Tests:** `tests/StellaOps.Router.Gateway.Tests/`
- **Implementation Guides:** `docs/implplan/SPRINT_1200_001_00X_*.md` (see below)


@@ -701,7 +701,7 @@ rate_limiting:
## References
- **Advisory:** `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Advisory:** `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Master Sprint Tracker:** `docs/implplan/SPRINT_1200_001_000_router_rate_limiting_master.md`
- **Sprint Files:** `docs/implplan/SPRINT_1200_001_00X_*.md`
- **HTTP 429 Semantics:** RFC 6585


@@ -3,7 +3,7 @@
**Package Created:** 2025-12-17
**For:** Implementation agents / reviewers
**Status:** DONE (Sprints 1–6 closed; Sprint 4 closed N/A)
**Advisory Source:** `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
**Advisory Source:** `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
---
@@ -114,7 +114,7 @@ Week 4+: Service Migration
1. `SPRINT_1200_001_000_router_rate_limiting_master.md` - Overview
2. `SPRINT_1200_001_IMPLEMENTATION_GUIDE.md` - Technical details
3. Original advisory: `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
3. Original advisory: `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
4. Analysis plan: `C:\Users\VladimirMoushkov\.claude\plans\vectorized-kindling-rocket.md`
### 2. Environment Setup
@@ -471,7 +471,7 @@ rate_limiting:
## Related Documentation
### Source Documents
- **Advisory:** `docs/product-advisories/unprocessed/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Advisory:** `docs/product-advisories/archived/15-Dec-2025 - Designing 202 + RetryAfter Backpressure Control.md`
- **Analysis Plan:** `C:\Users\VladimirMoushkov\.claude\plans\vectorized-kindling-rocket.md`
- **Architecture:** `docs/modules/platform/architecture-overview.md`


@@ -9,7 +9,7 @@ Implement the score replay capability and proof bundle writer from the "Building
3. **Score Replay Endpoint** - `POST /score/replay` to recompute scores without rescanning
4. **Scan Manifest** - DSSE-signed manifest capturing all inputs affecting results
**Source Advisory**: `docs/product-advisories/unprocessed/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Source Advisory**: `docs/product-advisories/archived/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Related Docs**: `docs/product-advisories/14-Dec-2025 - Determinism and Reproducibility Technical Reference.md` §11.2, §12
**Working Directory**: `src/Scanner/StellaOps.Scanner.WebService`, `src/Policy/__Libraries/StellaOps.Policy/`


@@ -9,7 +9,7 @@ Establish the ground-truth corpus for binary-only reachability benchmarking and
3. **CI Regression Gates** - Fail build on precision/recall/determinism regressions
4. **Baseline Management** - Tooling to update baselines when improvements land
**Source Advisory**: `docs/product-advisories/unprocessed/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Source Advisory**: `docs/product-advisories/archived/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Related Docs**: `docs/benchmarks/ground-truth-corpus.md` (new)
**Working Directory**: `bench/reachability-benchmark/`, `datasets/reachability/`, `src/Scanner/`


@@ -9,7 +9,7 @@ Enhance the Unknowns ranking model with blast radius and runtime containment sig
3. **Unknown Proof Trail** - Emit proof nodes explaining rank factors
4. **API: `/unknowns/list?sort=score`** - Expose ranked unknowns
**Source Advisory**: `docs/product-advisories/unprocessed/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Source Advisory**: `docs/product-advisories/archived/16-Dec-2025 - Building a Deeper Moat Beyond Reachability.md`
**Related Docs**: `docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md` §17.5
**Working Directory**: `src/Scanner/__Libraries/StellaOps.Scanner.Unknowns/`, `src/Scanner/StellaOps.Scanner.WebService/`


@@ -240,4 +240,4 @@ public class TriageSchemaTests : IAsyncLifetime
- Schema definition: `docs/db/triage_schema.sql`
- UX Guide: `docs/ux/TRIAGE_UX_GUIDE.md`
- API Contract: `docs/api/triage.contract.v1.md`
- Advisory: `docs/product-advisories/unprocessed/16-Dec-2025 - Reimagining Proof-Linked UX in Security Workflows.md`
- Advisory: `docs/product-advisories/archived/16-Dec-2025 - Reimagining Proof-Linked UX in Security Workflows.md`