diff --git a/docs/19_TEST_SUITE_OVERVIEW.md b/docs/19_TEST_SUITE_OVERVIEW.md
index edd03616e..24874c605 100755
--- a/docs/19_TEST_SUITE_OVERVIEW.md
+++ b/docs/19_TEST_SUITE_OVERVIEW.md
@@ -1,7 +1,7 @@
-# Automated Test‑Suite Overview
+# Automated Test-Suite Overview
 
-This document enumerates **every automated check** executed by the Stella Ops
-CI pipeline, from unit level to chaos experiments.  It is intended for
+This document enumerates **every automated check** executed by the Stella Ops
+CI pipeline, from unit level to chaos experiments. It is intended for
 contributors who need to extend coverage or diagnose failures.
 
 > **Build parameters** – values such as `{{ dotnet }}` (runtime) and
@@ -9,40 +9,81 @@ contributors who need to extend coverage or diagnose failures.
 
 ---
 
-## Layer map
+## Test Philosophy
 
-| Layer | Tooling | Entry‑point | Frequency |
-|-------|---------|-------------|-----------|
-| **1. Unit** | `xUnit` (dotnet test) | `*.Tests.csproj` | per PR / push |
-| **2. Property‑based** | `FsCheck` | `SbomPropertyTests` | per PR |
-| **3. Integration (API)** | `Testcontainers` suite | `test/Api.Integration` | per PR + nightly |
-| **4. Integration (DB-merge)** | Testcontainers PostgreSQL + Redis | `Concelier.Integration` (vulnerability ingest/merge/export service) | per PR |
-| **5. Contract (gRPC)** | `Buf breaking` | `buf.yaml` files | per PR |
-| **6. Front‑end unit** | `Jest` | `ui/src/**/*.spec.ts` | per PR |
-| **7. Front‑end E2E** | `Playwright` | `ui/e2e/**` | nightly |
-| **8. Lighthouse perf / a11y** | `lighthouse-ci` (Chrome headless) | `ui/dist/index.html` | nightly |
-| **9. Load** | `k6` scripted scenarios | `k6/*.js` | nightly |
-| **10. Chaos CPU / OOM** | `pumba` | Docker Compose overlay | weekly |
-| **11. Dependency scanning** | `Trivy fs` + `dotnet list package --vuln` | root | per PR |
-| **12. License compliance** | `LicenceFinder` | root | per PR |
-| **13. SBOM reproducibility** | `in‑toto attestation` diff | GitLab job | release tags |
+### Core Principles
+
+1. **Determinism as Contract**: Scan verdicts must be reproducible. Same inputs → byte-identical outputs.
+2. **Offline by Default**: Every test (except explicitly tagged "online") runs without network access.
+3. **Evidence-First Validation**: Assertions verify the complete evidence chain, not just pass/fail.
+4. **Interop is Required**: Compatibility with ecosystem tools (Syft, Grype, Trivy, cosign) blocks releases.
+5. **Coverage by Risk**: Prioritize testing high-risk paths over line coverage metrics.
+
+### Test Boundaries
+
+- **Lattice/policy merge** algorithms run in `scanner.webservice`
+- **Concelier/Excititor** preserve per-source records (no conflict resolution)
+- Tests enforce these boundaries explicitly
 
 ---
 
-## Quality gates
+## Layer Map
+
+| Layer | Tooling | Entry-point | Frequency |
+|-------|---------|-------------|-----------|
+| **1. Unit** | `xUnit` (dotnet test) | `*.Tests.csproj` | per PR / push |
+| **2. Property-based** | `FsCheck` | `SbomPropertyTests`, `Canonicalization` | per PR |
+| **3. Integration (API)** | `Testcontainers` suite | `test/Api.Integration` | per PR + nightly |
+| **4. Integration (DB-merge)** | Testcontainers PostgreSQL + Valkey | `Concelier.Integration` | per PR |
+| **5. Contract (OpenAPI)** | Schema validation | `docs/api/*.yaml` | per PR |
+| **6. Front-end unit** | `Jest` | `ui/src/**/*.spec.ts` | per PR |
+| **7. Front-end E2E** | `Playwright` | `ui/e2e/**` | nightly |
+| **8. 
Lighthouse perf / a11y** | `lighthouse-ci` (Chrome headless) | `ui/dist/index.html` | nightly | +| **9. Load** | `k6` scripted scenarios | `tests/load/*.js` | nightly | +| **10. Chaos** | `pumba`, custom harness | `tests/chaos/` | weekly | +| **11. Interop** | Syft/Grype/cosign | `tests/interop/` | nightly | +| **12. Offline E2E** | Network-isolated containers | `tests/offline/` | nightly | +| **13. Replay Verification** | Golden corpus replay | `bench/golden-corpus/` | per PR | +| **14. Dependency scanning** | `Trivy fs` + `dotnet list package --vuln` | root | per PR | +| **15. License compliance** | `LicenceFinder` | root | per PR | +| **16. SBOM reproducibility** | `in-toto attestation` diff | GitLab job | release tags | + +--- + +## Test Categories (xUnit Traits) + +```csharp +[Trait("Category", "Unit")] // Fast, isolated unit tests +[Trait("Category", "Integration")] // Tests requiring infrastructure +[Trait("Category", "E2E")] // Full end-to-end workflows +[Trait("Category", "AirGap")] // Must work without network +[Trait("Category", "Interop")] // Third-party tool compatibility +[Trait("Category", "Performance")] // Performance benchmarks +[Trait("Category", "Chaos")] // Failure injection tests +[Trait("Category", "Security")] // Security-focused tests +``` + +--- + +## Quality Gates | Metric | Budget | Gate | |--------|--------|------| -| API unit coverage | ≥ 85 % lines | PR merge | -| API response P95 | ≤ 120 ms | nightly alert | -| Δ‑SBOM warm scan P95 (4 vCPU) | ≤ 5 s | nightly alert | -| Lighthouse performance score | ≥ 90 | nightly alert | -| Lighthouse accessibility score | ≥ 95 | nightly alert | -| k6 sustained RPS drop | < 5 % vs baseline | nightly alert | +| API unit coverage | ≥ 85% lines | PR merge | +| API response P95 | ≤ 120 ms | nightly alert | +| Δ-SBOM warm scan P95 (4 vCPU) | ≤ 5 s | nightly alert | +| Lighthouse performance score | ≥ 90 | nightly alert | +| Lighthouse accessibility score | ≥ 95 | nightly alert | +| k6 sustained RPS drop | < 5% vs baseline | nightly alert | +| **Replay determinism** | 0 byte diff | **Release** | +| **Interop findings parity** | ≥ 95% | **Release** | +| **Offline E2E** | All pass with no network | **Release** | +| **Unknowns budget (prod)** | ≤ configured limit | **Release** | +| **Router Retry-After compliance** | 100% | Nightly | --- -## Local runner +## Local Runner ```bash # minimal run: unit + property + frontend tests @@ -50,21 +91,26 @@ contributors who need to extend coverage or diagnose failures. # full stack incl. Playwright and lighthouse ./scripts/dev-test.sh --full -```` -The script spins up PostgreSQL/Redis via Testcontainers and requires: +# category-specific +dotnet test --filter "Category=Unit" +dotnet test --filter "Category=AirGap" +dotnet test --filter "Category=Interop" +``` + +The script spins up PostgreSQL/Valkey via Testcontainers and requires: * Docker ≥ 25 * Node 20 (for Jest/Playwright) -#### PostgreSQL Testcontainers +### PostgreSQL Testcontainers Multiple suites (Concelier connectors, Excititor worker/WebService, Scheduler) use Testcontainers with PostgreSQL for integration tests. If you don't have Docker available, tests can also run against a local PostgreSQL instance listening on `127.0.0.1:5432`. -#### Local PostgreSQL helper +### Local PostgreSQL Helper Some suites (Concelier WebService/Core, Exporter JSON) need a full PostgreSQL instance when you want to debug or inspect data with `psql`. 
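+
+Suites that support both Docker and a local instance typically wrap the choice
+in a fixture. A minimal sketch of that pattern (illustrative only — the shared
+fixture in the repo is authoritative; `PostgresFixture` and the fallback
+credentials below are assumptions):
+
+```csharp
+using Testcontainers.PostgreSql;
+using Xunit;
+
+// Sketch: prefer Testcontainers; fall back to the local PostgreSQL on
+// 127.0.0.1:5432 described above when Docker is unavailable.
+public sealed class PostgresFixture : IAsyncLifetime
+{
+    private PostgreSqlContainer? _container;
+
+    public string ConnectionString { get; private set; } = string.Empty;
+
+    public async Task InitializeAsync()
+    {
+        try
+        {
+            _container = new PostgreSqlBuilder().WithImage("postgres:16").Build();
+            await _container.StartAsync();
+            ConnectionString = _container.GetConnectionString();
+        }
+        catch (Exception)
+        {
+            // Docker unavailable: assume a locally provisioned instance.
+            ConnectionString = "Host=127.0.0.1;Port=5432;Username=postgres;Password=postgres";
+        }
+    }
+
+    public Task DisposeAsync() =>
+        _container?.DisposeAsync().AsTask() ?? Task.CompletedTask;
+}
+```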
@@ -84,9 +130,59 @@ By default the script uses Docker to run PostgreSQL 16, binds to connection string is printed on start and you can export it before running `dotnet test` if a suite supports overriding its connection string. ---- +--- -### Concelier OSV↔GHSA parity fixtures +## New Test Infrastructure (Epic 5100) + +### Run Manifest & Replay + +Every scan captures a **Run Manifest** containing all inputs (artifact digests, feed versions, policy versions, PRNG seed). This enables deterministic replay: + +```bash +# Replay a scan from manifest +stella replay --manifest run-manifest.json --output verdict.json + +# Verify determinism +stella replay verify --manifest run-manifest.json +``` + +### Evidence Index + +The **Evidence Index** links verdicts to their supporting evidence chain: +- Verdict → SBOM digests → Attestation IDs → Tool versions + +### Golden Corpus + +Located at `bench/golden-corpus/`, contains 50+ test cases: +- Severity levels (Critical, High, Medium, Low) +- VEX scenarios (Not Affected, Affected, Conflicting) +- Reachability cases (Reachable, Not Reachable, Inconclusive) +- Unknowns scenarios +- Scale tests (200 to 50k+ packages) +- Multi-distro (Alpine, Debian, RHEL, SUSE, Ubuntu) +- Interop fixtures (Syft-generated, Trivy-generated) +- Negative cases (malformed inputs) + +### Offline Testing + +Inherit from `NetworkIsolatedTestBase` for air-gap compliance: + +```csharp +[Trait("Category", "AirGap")] +public class OfflineTests : NetworkIsolatedTestBase +{ + [Fact] + public async Task Test_WorksOffline() + { + // Test implementation + AssertNoNetworkCalls(); // Fails if network accessed + } +} +``` + +--- + +## Concelier OSV↔GHSA Parity Fixtures The Concelier connector suite includes a regression test (`OsvGhsaParityRegressionTests`) that checks a curated set of GHSA identifiers against OSV responses. The fixture @@ -104,7 +200,7 @@ fixtures stay stable across machines. --- -## CI job layout +## CI Job Layout ```mermaid flowchart LR @@ -115,21 +211,42 @@ flowchart LR I1 --> FE[Jest] FE --> E2E[Playwright] E2E --> Lighthouse + + subgraph release-gates + REPLAY[Replay Verify] + INTEROP[Interop E2E] + OFFLINE[Offline E2E] + BUDGET[Unknowns Gate] + end + Lighthouse --> INTEG2[Concelier] INTEG2 --> LOAD[k6] - LOAD --> CHAOS[pumba] + LOAD --> CHAOS[Chaos Suite] CHAOS --> RELEASE[Attestation diff] + + RELEASE --> release-gates ``` --- -## Adding a new test layer +## Adding a New Test Layer 1. Extend `scripts/dev-test.sh` so local contributors get the layer by default. -2. Add a dedicated GitLab job in `.gitlab-ci.yml` (stage `test` or `nightly`). +2. Add a dedicated workflow in `.gitea/workflows/` (or GitLab job in `.gitlab-ci.yml`). 3. Register the job in `docs/19_TEST_SUITE_OVERVIEW.md` *and* list its metric in `docs/metrics/README.md`. +4. If the test requires network isolation, inherit from `NetworkIsolatedTestBase`. +5. If the test uses golden corpus, add cases to `bench/golden-corpus/`. 
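+
+For step 4, `NetworkIsolatedTestBase` is the enforcement point for the
+`AirGap` trait. A minimal sketch of the idea behind `AssertNoNetworkCalls()`
+(illustrative only — the real base class lives in the shared test
+infrastructure and may enforce isolation at the container level instead):
+
+```csharp
+using System.Net.Http;
+using Xunit;
+
+// Sketch: count outbound HTTP requests via a tracking handler and fail the
+// test if any were made. Raw-socket traffic needs container-level isolation.
+public abstract class NetworkIsolatedTestBase
+{
+    private int _outboundRequests;
+
+    // Tests route HttpClient traffic through this handler so every outbound
+    // request is observed and counted.
+    protected HttpMessageHandler CreateTrackingHandler() => new CountingHandler(this);
+
+    protected void AssertNoNetworkCalls() => Assert.Equal(0, _outboundRequests);
+
+    private sealed class CountingHandler : DelegatingHandler
+    {
+        private readonly NetworkIsolatedTestBase _owner;
+
+        public CountingHandler(NetworkIsolatedTestBase owner)
+            : base(new HttpClientHandler()) => _owner = owner;
+
+        protected override Task<HttpResponseMessage> SendAsync(
+            HttpRequestMessage request, CancellationToken cancellationToken)
+        {
+            Interlocked.Increment(ref _owner._outboundRequests);
+            return base.SendAsync(request, cancellationToken);
+        }
+    }
+}
+```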
--- -*Last updated {{ "now" | date: "%Y‑%m‑%d" }}* +## Related Documentation + +- [Sprint Epic 5100 - Testing Strategy](implplan/SPRINT_5100_SUMMARY.md) +- [tests/AGENTS.md](../tests/AGENTS.md) +- [Offline Operation Guide](24_OFFLINE_KIT.md) +- [Module Architecture Dossiers](modules/) + +--- + +*Last updated 2025-12-21* diff --git a/docs/db/schemas/binaries_schema_specification.md b/docs/db/schemas/binaries_schema_specification.md new file mode 100644 index 000000000..0655f9f37 --- /dev/null +++ b/docs/db/schemas/binaries_schema_specification.md @@ -0,0 +1,680 @@ +# Binaries Schema Specification + +**Version:** 1.0.0 +**Status:** DRAFT +**Owner:** BinaryIndex Module +**Last Updated:** 2025-12-21 + +--- + +## 1. Overview + +The `binaries` schema stores binary identity, vulnerability mappings, fingerprints, and patch-aware fix status for the BinaryIndex module. This enables detection of vulnerable binaries independent of package metadata. + +## 2. Schema Definition + +```sql +-- ============================================================================ +-- BINARIES SCHEMA +-- ============================================================================ +-- Purpose: Binary identity, fingerprint, and vulnerability mapping for +-- the BinaryIndex module (vulnerable binaries database). +-- ============================================================================ + +CREATE SCHEMA IF NOT EXISTS binaries; +CREATE SCHEMA IF NOT EXISTS binaries_app; + +-- ---------------------------------------------------------------------------- +-- RLS Helper Function +-- ---------------------------------------------------------------------------- + +CREATE OR REPLACE FUNCTION binaries_app.require_current_tenant() +RETURNS TEXT +LANGUAGE plpgsql STABLE SECURITY DEFINER +AS $$ +DECLARE + v_tenant TEXT; +BEGIN + v_tenant := current_setting('app.tenant_id', true); + IF v_tenant IS NULL OR v_tenant = '' THEN + RAISE EXCEPTION 'app.tenant_id session variable not set'; + END IF; + RETURN v_tenant; +END; +$$; + +-- ============================================================================ +-- CORE IDENTITY TABLES +-- ============================================================================ + +-- ---------------------------------------------------------------------------- +-- Table: binary_identity +-- Purpose: Known binary identities extracted from packages +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.binary_identity ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Primary identity (Build-ID preferred for ELF) + binary_key TEXT NOT NULL, -- build_id || file_sha256 (normalized) + build_id TEXT, -- ELF GNU Build-ID (hex) + build_id_type TEXT CHECK (build_id_type IN ('gnu-build-id', 'pe-cv', 'macho-uuid')), + + -- Hashes + file_sha256 TEXT NOT NULL, -- sha256 of entire file + text_sha256 TEXT, -- sha256 of .text section (ELF) + blake3_hash TEXT, -- Optional faster hash + + -- Binary metadata + format TEXT NOT NULL CHECK (format IN ('elf', 'pe', 'macho')), + architecture TEXT NOT NULL, -- x86-64, aarch64, arm, etc. 
+ osabi TEXT, -- linux, windows, darwin + binary_type TEXT CHECK (binary_type IN ('executable', 'shared_library', 'static_library', 'object')), + is_stripped BOOLEAN DEFAULT FALSE, + + -- Tracking + first_seen_snapshot_id UUID, + last_seen_snapshot_id UUID, + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT binary_identity_key_unique UNIQUE (tenant_id, binary_key) +); + +-- ---------------------------------------------------------------------------- +-- Table: binary_package_map +-- Purpose: Maps binaries to source packages (per snapshot) +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.binary_package_map ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Binary reference + binary_identity_id UUID NOT NULL REFERENCES binaries.binary_identity(id) ON DELETE CASCADE, + binary_key TEXT NOT NULL, + + -- Package info + distro TEXT NOT NULL, -- debian, ubuntu, rhel, alpine + release TEXT NOT NULL, -- bookworm, jammy, 9, 3.19 + source_pkg TEXT NOT NULL, -- Source package name (e.g., openssl) + binary_pkg TEXT NOT NULL, -- Binary package name (e.g., libssl3) + pkg_version TEXT NOT NULL, -- Full distro version (e.g., 1.1.1n-0+deb11u5) + pkg_purl TEXT, -- PURL if derivable + architecture TEXT NOT NULL, + + -- File location + file_path_in_pkg TEXT NOT NULL, -- /usr/lib/x86_64-linux-gnu/libssl.so.3 + + -- Snapshot reference + snapshot_id UUID NOT NULL, + + -- Metadata + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT binary_package_map_unique UNIQUE (binary_identity_id, snapshot_id, file_path_in_pkg) +); + +-- ---------------------------------------------------------------------------- +-- Table: corpus_snapshots +-- Purpose: Tracks corpus ingestion snapshots +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.corpus_snapshots ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Snapshot identification + distro TEXT NOT NULL, + release TEXT NOT NULL, + architecture TEXT NOT NULL, + snapshot_id TEXT NOT NULL, -- Unique snapshot identifier + + -- Content tracking + packages_processed INT NOT NULL DEFAULT 0, + binaries_indexed INT NOT NULL DEFAULT 0, + repo_metadata_digest TEXT, -- SHA-256 of repo metadata + + -- Signing + signing_key_id TEXT, + dsse_envelope_ref TEXT, -- RustFS reference to DSSE envelope + + -- Status + status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN ('pending', 'processing', 'completed', 'failed')), + error TEXT, + + -- Timestamps + started_at TIMESTAMPTZ, + completed_at TIMESTAMPTZ, + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT corpus_snapshots_unique UNIQUE (tenant_id, distro, release, architecture, snapshot_id) +); + +-- ============================================================================ +-- VULNERABILITY MAPPING TABLES +-- ============================================================================ + +-- ---------------------------------------------------------------------------- +-- Table: vulnerable_buildids +-- Purpose: Build-IDs known to be associated with vulnerable packages +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.vulnerable_buildids ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Build-ID reference + buildid_type TEXT NOT NULL CHECK (buildid_type IN ('gnu-build-id', 'pe-cv', 'macho-uuid')), + 
buildid_value TEXT NOT NULL, -- Hex string + + -- Package info + purl TEXT NOT NULL, -- Package URL + pkg_version TEXT NOT NULL, + distro TEXT, + release TEXT, + + -- Confidence + confidence TEXT NOT NULL DEFAULT 'exact' CHECK (confidence IN ('exact', 'inferred', 'heuristic')), + + -- Provenance + provenance JSONB DEFAULT '{}', + snapshot_id UUID REFERENCES binaries.corpus_snapshots(id), + + -- Tracking + indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT vulnerable_buildids_unique UNIQUE (tenant_id, buildid_value, buildid_type, purl, pkg_version) +); + +-- ---------------------------------------------------------------------------- +-- Table: binary_vuln_assertion +-- Purpose: CVE status assertions for specific binaries +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.binary_vuln_assertion ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Binary reference + binary_key TEXT NOT NULL, + binary_identity_id UUID REFERENCES binaries.binary_identity(id), + + -- CVE reference + cve_id TEXT NOT NULL, + advisory_id UUID, -- Reference to vuln.advisories + + -- Status + status TEXT NOT NULL CHECK (status IN ('affected', 'not_affected', 'fixed', 'unknown')), + + -- Method used to determine status + method TEXT NOT NULL CHECK (method IN ('range_match', 'buildid_catalog', 'fingerprint_match', 'fix_index')), + confidence NUMERIC(3,2) CHECK (confidence >= 0 AND confidence <= 1), + + -- Evidence + evidence_ref TEXT, -- RustFS reference to evidence bundle + evidence_digest TEXT, -- SHA-256 of evidence + + -- Tracking + evaluated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT binary_vuln_assertion_unique UNIQUE (tenant_id, binary_key, cve_id) +); + +-- ============================================================================ +-- FIX INDEX TABLES (Patch-Aware Backport Handling) +-- ============================================================================ + +-- ---------------------------------------------------------------------------- +-- Table: cve_fix_evidence +-- Purpose: Raw evidence of CVE fixes (append-only) +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.cve_fix_evidence ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Key fields + distro TEXT NOT NULL, + release TEXT NOT NULL, + source_pkg TEXT NOT NULL, + cve_id TEXT NOT NULL, + + -- Fix information + state TEXT NOT NULL CHECK (state IN ('fixed', 'vulnerable', 'not_affected', 'wontfix', 'unknown')), + fixed_version TEXT, -- Distro version string (nullable for not_affected) + + -- Method and confidence + method TEXT NOT NULL CHECK (method IN ('security_feed', 'changelog', 'patch_header', 'upstream_patch_match')), + confidence NUMERIC(3,2) NOT NULL CHECK (confidence >= 0 AND confidence <= 1), + + -- Evidence details + evidence JSONB NOT NULL, -- Method-specific evidence payload + + -- Snapshot reference + snapshot_id UUID REFERENCES binaries.corpus_snapshots(id), + + -- Tracking + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +-- ---------------------------------------------------------------------------- +-- Table: cve_fix_index +-- Purpose: Merged best-record for CVE fix status per distro/package +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.cve_fix_index ( + id UUID PRIMARY KEY 
DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Key fields + distro TEXT NOT NULL, + release TEXT NOT NULL, + source_pkg TEXT NOT NULL, + cve_id TEXT NOT NULL, + architecture TEXT, -- NULL means all architectures + + -- Fix status + state TEXT NOT NULL CHECK (state IN ('fixed', 'vulnerable', 'not_affected', 'wontfix', 'unknown')), + fixed_version TEXT, + + -- Merge metadata + primary_method TEXT NOT NULL, -- Method of highest-confidence evidence + confidence NUMERIC(3,2) NOT NULL, + evidence_ids UUID[], -- References to cve_fix_evidence + + -- Tracking + computed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT cve_fix_index_unique UNIQUE (tenant_id, distro, release, source_pkg, cve_id, architecture) +); + +-- ============================================================================ +-- FINGERPRINT TABLES +-- ============================================================================ + +-- ---------------------------------------------------------------------------- +-- Table: vulnerable_fingerprints +-- Purpose: Function fingerprints for CVE detection +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.vulnerable_fingerprints ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- CVE and component + cve_id TEXT NOT NULL, + component TEXT NOT NULL, -- e.g., openssl, glibc + purl TEXT, -- Package URL if applicable + + -- Fingerprint data + algorithm TEXT NOT NULL CHECK (algorithm IN ('basic_block', 'control_flow_graph', 'string_refs', 'combined')), + fingerprint_id TEXT NOT NULL, -- Unique ID (e.g., "bb-abc123...") + fingerprint_hash BYTEA NOT NULL, -- Raw fingerprint bytes (16-32 bytes) + architecture TEXT NOT NULL, -- x86-64, aarch64 + + -- Function hints + function_name TEXT, -- Original function name if known + source_file TEXT, -- Source file path + source_line INT, + + -- Confidence and validation + similarity_threshold NUMERIC(3,2) DEFAULT 0.95, + confidence NUMERIC(3,2) CHECK (confidence >= 0 AND confidence <= 1), + validated BOOLEAN DEFAULT FALSE, + validation_stats JSONB DEFAULT '{}', -- precision, recall, etc. 
+ + -- Reference builds + vuln_build_ref TEXT, -- RustFS ref to vulnerable reference build + fixed_build_ref TEXT, -- RustFS ref to fixed reference build + + -- Metadata + notes TEXT, + evidence_ref TEXT, -- RustFS ref to evidence bundle + + -- Tracking + indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT vulnerable_fingerprints_unique UNIQUE (tenant_id, cve_id, algorithm, fingerprint_id, architecture) +); + +-- ---------------------------------------------------------------------------- +-- Table: fingerprint_corpus_metadata +-- Purpose: Tracks which packages have been fingerprinted +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.fingerprint_corpus_metadata ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Package identification + purl TEXT NOT NULL, + version TEXT NOT NULL, + + -- Fingerprinting info + algorithm TEXT NOT NULL, + binary_digest TEXT, -- sha256 of the binary analyzed + + -- Statistics + function_count INT NOT NULL DEFAULT 0, + fingerprints_indexed INT NOT NULL DEFAULT 0, + + -- Provenance + indexed_by TEXT, -- Service/user that indexed + indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + -- Tracking + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + + CONSTRAINT fingerprint_corpus_metadata_unique UNIQUE (tenant_id, purl, version, algorithm) +); + +-- ============================================================================ +-- MATCH RESULTS TABLES +-- ============================================================================ + +-- ---------------------------------------------------------------------------- +-- Table: fingerprint_matches +-- Purpose: Records fingerprint matches during scans +-- ---------------------------------------------------------------------------- + +CREATE TABLE binaries.fingerprint_matches ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + + -- Scan reference + scan_id UUID NOT NULL, -- Reference to scanner.scan_manifest + + -- Match details + match_type TEXT NOT NULL CHECK (match_type IN ('fingerprint', 'buildid', 'hash_exact')), + binary_key TEXT NOT NULL, + binary_identity_id UUID REFERENCES binaries.binary_identity(id), + + -- Vulnerable package + vulnerable_purl TEXT NOT NULL, + vulnerable_version TEXT NOT NULL, + + -- Fingerprint match specifics (nullable for non-fingerprint matches) + matched_fingerprint_id UUID REFERENCES binaries.vulnerable_fingerprints(id), + matched_function TEXT, + similarity NUMERIC(3,2), -- 0.00-1.00 + + -- CVE linkage + advisory_ids TEXT[], -- Linked CVE/GHSA IDs + + -- Reachability (populated later by Scanner) + reachability_status TEXT CHECK (reachability_status IN ('reachable', 'unreachable', 'unknown', 'partial')), + + -- Evidence + evidence JSONB DEFAULT '{}', + + -- Tracking + matched_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +-- ============================================================================ +-- INDEXES +-- ============================================================================ + +-- binary_identity indexes +CREATE INDEX idx_binary_identity_tenant ON binaries.binary_identity(tenant_id); +CREATE INDEX idx_binary_identity_buildid ON binaries.binary_identity(build_id) WHERE build_id IS NOT NULL; +CREATE INDEX idx_binary_identity_sha256 ON binaries.binary_identity(file_sha256); +CREATE INDEX idx_binary_identity_key ON binaries.binary_identity(binary_key); + +-- 
binary_package_map indexes +CREATE INDEX idx_binary_package_map_tenant ON binaries.binary_package_map(tenant_id); +CREATE INDEX idx_binary_package_map_binary ON binaries.binary_package_map(binary_identity_id); +CREATE INDEX idx_binary_package_map_distro ON binaries.binary_package_map(distro, release, source_pkg); +CREATE INDEX idx_binary_package_map_snapshot ON binaries.binary_package_map(snapshot_id); +CREATE INDEX idx_binary_package_map_purl ON binaries.binary_package_map(pkg_purl) WHERE pkg_purl IS NOT NULL; + +-- corpus_snapshots indexes +CREATE INDEX idx_corpus_snapshots_tenant ON binaries.corpus_snapshots(tenant_id); +CREATE INDEX idx_corpus_snapshots_distro ON binaries.corpus_snapshots(distro, release, architecture); +CREATE INDEX idx_corpus_snapshots_status ON binaries.corpus_snapshots(status) WHERE status IN ('pending', 'processing'); + +-- vulnerable_buildids indexes +CREATE INDEX idx_vulnerable_buildids_tenant ON binaries.vulnerable_buildids(tenant_id); +CREATE INDEX idx_vulnerable_buildids_value ON binaries.vulnerable_buildids(buildid_type, buildid_value); +CREATE INDEX idx_vulnerable_buildids_purl ON binaries.vulnerable_buildids(purl); + +-- binary_vuln_assertion indexes +CREATE INDEX idx_binary_vuln_assertion_tenant ON binaries.binary_vuln_assertion(tenant_id); +CREATE INDEX idx_binary_vuln_assertion_binary ON binaries.binary_vuln_assertion(binary_key); +CREATE INDEX idx_binary_vuln_assertion_cve ON binaries.binary_vuln_assertion(cve_id); +CREATE INDEX idx_binary_vuln_assertion_status ON binaries.binary_vuln_assertion(status) WHERE status = 'affected'; + +-- cve_fix_evidence indexes +CREATE INDEX idx_cve_fix_evidence_tenant ON binaries.cve_fix_evidence(tenant_id); +CREATE INDEX idx_cve_fix_evidence_key ON binaries.cve_fix_evidence(distro, release, source_pkg, cve_id); + +-- cve_fix_index indexes +CREATE INDEX idx_cve_fix_index_tenant ON binaries.cve_fix_index(tenant_id); +CREATE INDEX idx_cve_fix_index_lookup ON binaries.cve_fix_index(distro, release, source_pkg, cve_id); +CREATE INDEX idx_cve_fix_index_state ON binaries.cve_fix_index(state) WHERE state = 'fixed'; + +-- vulnerable_fingerprints indexes +CREATE INDEX idx_vulnerable_fingerprints_tenant ON binaries.vulnerable_fingerprints(tenant_id); +CREATE INDEX idx_vulnerable_fingerprints_cve ON binaries.vulnerable_fingerprints(cve_id); +CREATE INDEX idx_vulnerable_fingerprints_component ON binaries.vulnerable_fingerprints(component, architecture); +CREATE INDEX idx_vulnerable_fingerprints_hash ON binaries.vulnerable_fingerprints USING hash (fingerprint_hash); +CREATE INDEX idx_vulnerable_fingerprints_validated ON binaries.vulnerable_fingerprints(validated) WHERE validated = TRUE; + +-- fingerprint_corpus_metadata indexes +CREATE INDEX idx_fingerprint_corpus_tenant ON binaries.fingerprint_corpus_metadata(tenant_id); +CREATE INDEX idx_fingerprint_corpus_purl ON binaries.fingerprint_corpus_metadata(purl, version); + +-- fingerprint_matches indexes +CREATE INDEX idx_fingerprint_matches_tenant ON binaries.fingerprint_matches(tenant_id); +CREATE INDEX idx_fingerprint_matches_scan ON binaries.fingerprint_matches(scan_id); +CREATE INDEX idx_fingerprint_matches_type ON binaries.fingerprint_matches(match_type); +CREATE INDEX idx_fingerprint_matches_purl ON binaries.fingerprint_matches(vulnerable_purl); + +-- ============================================================================ +-- ROW-LEVEL SECURITY +-- ============================================================================ + +-- Enable RLS on all tenant-scoped tables 
+ALTER TABLE binaries.binary_identity ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.binary_identity FORCE ROW LEVEL SECURITY; +CREATE POLICY binary_identity_tenant_isolation ON binaries.binary_identity + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.binary_package_map ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.binary_package_map FORCE ROW LEVEL SECURITY; +CREATE POLICY binary_package_map_tenant_isolation ON binaries.binary_package_map + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.corpus_snapshots ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.corpus_snapshots FORCE ROW LEVEL SECURITY; +CREATE POLICY corpus_snapshots_tenant_isolation ON binaries.corpus_snapshots + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.vulnerable_buildids ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.vulnerable_buildids FORCE ROW LEVEL SECURITY; +CREATE POLICY vulnerable_buildids_tenant_isolation ON binaries.vulnerable_buildids + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.binary_vuln_assertion ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.binary_vuln_assertion FORCE ROW LEVEL SECURITY; +CREATE POLICY binary_vuln_assertion_tenant_isolation ON binaries.binary_vuln_assertion + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.cve_fix_evidence ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.cve_fix_evidence FORCE ROW LEVEL SECURITY; +CREATE POLICY cve_fix_evidence_tenant_isolation ON binaries.cve_fix_evidence + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.cve_fix_index ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.cve_fix_index FORCE ROW LEVEL SECURITY; +CREATE POLICY cve_fix_index_tenant_isolation ON binaries.cve_fix_index + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.vulnerable_fingerprints ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.vulnerable_fingerprints FORCE ROW LEVEL SECURITY; +CREATE POLICY vulnerable_fingerprints_tenant_isolation ON binaries.vulnerable_fingerprints + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.fingerprint_corpus_metadata ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.fingerprint_corpus_metadata FORCE ROW LEVEL SECURITY; +CREATE POLICY fingerprint_corpus_metadata_tenant_isolation ON binaries.fingerprint_corpus_metadata + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.fingerprint_matches ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.fingerprint_matches FORCE ROW LEVEL SECURITY; +CREATE POLICY fingerprint_matches_tenant_isolation ON binaries.fingerprint_matches + FOR ALL USING 
(tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); +``` + +--- + +## 3. Table Relationships + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ BINARIES SCHEMA │ +│ │ +│ ┌────────────────────┐ ┌────────────────────┐ │ +│ │ corpus_snapshots │<────────│ binary_package_map │ │ +│ │ (ingestion state) │ │ (binary→pkg) │ │ +│ └─────────┬──────────┘ └────────┬───────────┘ │ +│ │ │ │ +│ │ ▼ │ +│ │ ┌────────────────────┐ │ +│ └───────────────────>│ binary_identity │<─────────────────┐ │ +│ │ (Build-ID, hashes) │ │ │ +│ └────────┬───────────┘ │ │ +│ │ │ │ +│ ┌─────────────────────────────┼───────────────────────────────┤ │ +│ │ │ │ │ +│ ▼ ▼ │ │ +│ ┌────────────────────┐ ┌─────────────────────┐ ┌──────────┴───┐ +│ │ vulnerable_buildids│ │ binary_vuln_ │ │fingerprint_ │ +│ │ (known vuln builds)│ │ assertion │ │matches │ +│ └────────────────────┘ │ (CVE status) │ │(scan results)│ +│ └─────────────────────┘ └──────────────┘ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────────┐│ +│ │ FIX INDEX (Patch-Aware) ││ +│ │ ┌────────────────────┐ ┌────────────────────┐ ││ +│ │ │ cve_fix_evidence │────────>│ cve_fix_index │ ││ +│ │ │ (raw evidence) │ merge │ (merged best) │ ││ +│ │ └────────────────────┘ └────────────────────┘ ││ +│ └─────────────────────────────────────────────────────────────────────────┘│ +│ │ +│ ┌─────────────────────────────────────────────────────────────────────────┐│ +│ │ FINGERPRINTS ││ +│ │ ┌────────────────────┐ ┌──────────────────────┐ ││ +│ │ │vulnerable_ │ │fingerprint_corpus_ │ ││ +│ │ │fingerprints │ │metadata │ ││ +│ │ │(CVE fingerprints) │ │(what's indexed) │ ││ +│ │ └────────────────────┘ └──────────────────────┘ ││ +│ └─────────────────────────────────────────────────────────────────────────┘│ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## 4. Query Patterns + +### 4.1 Lookup by Build-ID + +```sql +-- Find vulnerabilities for a specific Build-ID +SELECT ba.cve_id, ba.status, ba.confidence, ba.method +FROM binaries.binary_vuln_assertion ba +JOIN binaries.binary_identity bi ON bi.binary_key = ba.binary_key +WHERE bi.build_id = :build_id + AND bi.build_id_type = 'gnu-build-id' + AND ba.status = 'affected'; +``` + +### 4.2 Check Fix Status (Patch-Aware) + +```sql +-- Check if a CVE is fixed for a specific distro/package +SELECT cfi.state, cfi.fixed_version, cfi.confidence, cfi.primary_method +FROM binaries.cve_fix_index cfi +WHERE cfi.distro = :distro + AND cfi.release = :release + AND cfi.source_pkg = :source_pkg + AND cfi.cve_id = :cve_id; +``` + +### 4.3 Fingerprint Similarity Search + +```sql +-- Find fingerprints with similar hash (requires application-level similarity) +SELECT vf.cve_id, vf.component, vf.function_name, vf.confidence +FROM binaries.vulnerable_fingerprints vf +WHERE vf.algorithm = :algorithm + AND vf.architecture = :architecture + AND vf.validated = TRUE + -- Application performs similarity comparison on fingerprint_hash +``` + +--- + +## 5. Migration Strategy + +### 5.1 Initial Migration + +```sql +-- V001__create_binaries_schema.sql +-- Creates all tables, indexes, and RLS policies +``` + +### 5.2 Seed Data + +```sql +-- S001__seed_reference_fingerprints.sql +-- Seeds fingerprints for high-impact CVEs from golden corpus +``` + +--- + +## 6. 
Performance Considerations + +### 6.1 Table Sizing Estimates + +| Table | Expected Rows | Growth Rate | +|-------|---------------|-------------| +| binary_identity | 10M | 1M/month | +| binary_package_map | 50M | 5M/month | +| vulnerable_buildids | 1M | 100K/month | +| cve_fix_index | 500K | 50K/month | +| vulnerable_fingerprints | 100K | 10K/month | +| fingerprint_matches | 10M | 1M/month | + +### 6.2 Partitioning Candidates + +- `fingerprint_matches` - Partition by `matched_at` (monthly) +- `cve_fix_evidence` - Partition by `created_at` (monthly) + +### 6.3 Index Maintenance + +- Hash index on `fingerprint_hash` for exact matches +- Consider bloom filter for fingerprint similarity pre-filtering + +--- + +*Document Version: 1.0.0* +*Last Updated: 2025-12-21* diff --git a/docs/implplan/SPRINT_3600_0001_0001_gateway_webservice.md b/docs/implplan/SPRINT_3600_0001_0001_gateway_webservice.md new file mode 100644 index 000000000..65096e2be --- /dev/null +++ b/docs/implplan/SPRINT_3600_0001_0001_gateway_webservice.md @@ -0,0 +1,378 @@ +# Sprint 3600.0001.0001 · Gateway WebService — HTTP Ingress Implementation + +## Topic & Scope +- Implement the missing `StellaOps.Gateway.WebService` HTTP ingress service. +- This is the single entry point for all external HTTP traffic, routing to microservices via the Router binary protocol. +- Connects the existing `StellaOps.Router.Gateway` library to a production-ready ASP.NET Core host. +- **Working directory:** `src/Gateway/StellaOps.Gateway.WebService/` + +## Dependencies & Concurrency +- **Upstream**: `StellaOps.Router.Gateway`, `StellaOps.Router.Transport.*`, `StellaOps.Auth.ServerIntegration` +- **Downstream**: All external API consumers, CLI, UI +- **Safe to parallelize with**: Sprints 3600.0002.*, 4200.*, 5200.* + +## Documentation Prerequisites +- `docs/modules/router/architecture.md` (canonical Router specification) +- `docs/modules/gateway/openapi.md` (OpenAPI aggregation) +- `docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md` +- `docs/07_HIGH_LEVEL_ARCHITECTURE.md` Section 7 (APIs) + +--- + +## Tasks + +### T1: Project Scaffolding + +**Assignee**: Platform Team +**Story Points**: 3 +**Status**: TODO + +**Description**: +Create the Gateway.WebService project with proper structure and dependencies. 
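+
+A minimal `Program.cs` bootstrap sketch (the hosted service and options types
+are those defined by this sprint; namespaces and wiring are illustrative, and
+the `gateway` configuration section mirrors T7):
+
+```csharp
+using StellaOps.Gateway.WebService.Configuration;
+using StellaOps.Gateway.WebService.Services;
+
+var builder = WebApplication.CreateBuilder(args);
+
+// Bind the "gateway" section from appsettings.json (see T7).
+builder.Services.Configure<GatewayOptions>(builder.Configuration.GetSection("gateway"));
+
+// Owns the Router TCP/TLS transport listeners (see T2).
+builder.Services.AddHostedService<GatewayHostedService>();
+
+var app = builder.Build();
+
+// Liveness probe only; the full health surface is added in T6.
+app.MapGet("/health/live", () => Results.Ok());
+
+app.Run();
+```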
+
+**Implementation Path**: `src/Gateway/StellaOps.Gateway.WebService/`
+
+**Acceptance Criteria**:
+- [ ] `StellaOps.Gateway.WebService.csproj` targeting `net10.0`
+- [ ] References: `StellaOps.Router.Gateway`, `StellaOps.Auth.ServerIntegration`, `StellaOps.Router.Transport.Tcp`, `StellaOps.Router.Transport.Tls`
+- [ ] `Program.cs` with minimal viable bootstrap
+- [ ] `appsettings.json` and `appsettings.Development.json`
+- [ ] Dockerfile for containerized deployment
+- [ ] Added to `StellaOps.sln`
+
+**Project Structure**:
+```
+src/Gateway/
+├── StellaOps.Gateway.WebService/
+│   ├── StellaOps.Gateway.WebService.csproj
+│   ├── Program.cs
+│   ├── Dockerfile
+│   ├── appsettings.json
+│   ├── appsettings.Development.json
+│   ├── Configuration/
+│   │   └── GatewayOptions.cs
+│   ├── Middleware/
+│   │   ├── TenantMiddleware.cs
+│   │   ├── RequestRoutingMiddleware.cs
+│   │   └── HealthCheckMiddleware.cs
+│   └── Services/
+│       ├── GatewayHostedService.cs
+│       └── OpenApiAggregationService.cs
+```
+
+---
+
+### T2: Gateway Host Service
+
+**Assignee**: Platform Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Implement the hosted service that manages Router transport connections and microservice registration.
+
+**Acceptance Criteria**:
+- [ ] `GatewayHostedService` : `IHostedService`
+- [ ] Starts TCP/TLS transport servers on configured ports
+- [ ] Handles HELLO frames from microservices
+- [ ] Maintains connection health via heartbeats
+- [ ] Graceful shutdown with DRAINING state propagation
+- [ ] Metrics: active_connections, registered_endpoints
+
+**Code Spec**:
+```csharp
+public sealed class GatewayHostedService : IHostedService, IDisposable
+{
+    private readonly ITransportServer _tcpServer;
+    private readonly ITransportServer _tlsServer;
+    private readonly IRoutingStateManager _routingState;
+    private readonly GatewayOptions _options;
+    private readonly ILogger<GatewayHostedService> _logger;
+
+    public async Task StartAsync(CancellationToken ct)
+    {
+        _tcpServer.OnHelloReceived += HandleHelloAsync;
+        _tcpServer.OnHeartbeatReceived += HandleHeartbeatAsync;
+        _tcpServer.OnConnectionClosed += HandleDisconnectAsync;
+
+        await _tcpServer.StartAsync(ct);
+        await _tlsServer.StartAsync(ct);
+
+        _logger.LogInformation("Gateway started on TCP:{TcpPort} TLS:{TlsPort}",
+            _options.TcpPort, _options.TlsPort);
+    }
+
+    public async Task StopAsync(CancellationToken ct)
+    {
+        await _routingState.DrainAllConnectionsAsync(ct);
+        await _tcpServer.StopAsync(ct);
+        await _tlsServer.StopAsync(ct);
+    }
+}
+```
+
+---
+
+### T3: Request Routing Middleware
+
+**Assignee**: Platform Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Implement the core HTTP-to-binary routing middleware.
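+
+A skeleton sketch of the middleware shape (uses `IRoutingStateManager` from
+the host service above; the frame-conversion helpers `ToFrameAsync` and
+`WriteToAsync` are assumptions standing in for the Router binary protocol
+details):
+
+```csharp
+public sealed class RequestRoutingMiddleware
+{
+    private readonly RequestDelegate _next;
+    private readonly IRoutingStateManager _routingState;
+
+    public RequestRoutingMiddleware(RequestDelegate next, IRoutingStateManager routingState)
+    {
+        _next = next;
+        _routingState = routingState;
+    }
+
+    public async Task InvokeAsync(HttpContext context)
+    {
+        var instance = _routingState.SelectInstance(context.Request.Method, context.Request.Path);
+        if (instance is null)
+        {
+            await _next(context); // not a routed endpoint (health, openapi, ...)
+            return;
+        }
+
+        context.Response.Headers["X-Correlation-Id"] = Guid.NewGuid().ToString("N");
+
+        // HTTP → binary frame → microservice → binary frame → HTTP.
+        // context.RequestAborted propagates client disconnects downstream.
+        var requestFrame = await RequestFrame.ToFrameAsync(context.Request, context.RequestAborted);
+        var responseFrame = await instance.SendRequestAsync(requestFrame, context.RequestAborted);
+        await responseFrame.WriteToAsync(context.Response, context.RequestAborted);
+    }
+}
+```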
+
+**Acceptance Criteria**:
+- [ ] `RequestRoutingMiddleware` intercepts all non-system routes
+- [ ] Extracts `(Method, Path)` from HTTP request
+- [ ] Looks up endpoint in routing state
+- [ ] Serializes HTTP request to binary frame
+- [ ] Sends to selected microservice instance
+- [ ] Deserializes binary response to HTTP response
+- [ ] Supports streaming responses (chunked transfer)
+- [ ] Propagates cancellation on client disconnect
+- [ ] Request correlation ID in X-Correlation-Id header
+
+**Routing Flow**:
+```
+HTTP Request → Middleware → RoutingState.SelectInstance()
+                                ↓
+                  TransportClient.SendRequestAsync()
+                                ↓
+                     Microservice processes
+                                ↓
+                  TransportClient.ReceiveResponseAsync()
+                                ↓
+HTTP Response ← Middleware ← Response Frame
+```
+
+---
+
+### T4: Authentication & Authorization Integration
+
+**Assignee**: Platform Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Integrate Authority DPoP/mTLS validation and claims-based authorization.
+
+**Acceptance Criteria**:
+- [ ] DPoP token validation via `StellaOps.Auth.ServerIntegration`
+- [ ] mTLS certificate binding validation
+- [ ] Claims extraction and propagation to microservices
+- [ ] Endpoint-level authorization based on `RequiringClaims`
+- [ ] Tenant context extraction from `tid` claim
+- [ ] Rate limiting per tenant/identity
+- [ ] Audit logging of auth failures
+
+**Claims Propagation**:
+```csharp
+// Claims are serialized into request frame headers
+var claims = new Dictionary<string, string>
+{
+    ["sub"] = principal.FindFirst("sub")?.Value ?? "",
+    ["tid"] = principal.FindFirst("tid")?.Value ?? "",
+    ["scope"] = string.Join(" ", principal.FindAll("scope").Select(c => c.Value)),
+    ["cnf.jkt"] = principal.FindFirst("cnf.jkt")?.Value ?? ""
+};
+requestFrame.Headers = claims;
+```
+
+---
+
+### T5: OpenAPI Aggregation Endpoint
+
+**Assignee**: Platform Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Implement aggregated OpenAPI 3.1.0 spec generation from registered endpoints.
+
+**Acceptance Criteria**:
+- [ ] `GET /openapi.json` returns aggregated spec
+- [ ] `GET /openapi.yaml` returns YAML format
+- [ ] TTL-based caching (5 min default)
+- [ ] ETag generation for conditional requests
+- [ ] Schema validation before aggregation
+- [ ] Includes all registered endpoints with their schemas
+- [ ] Info section populated from gateway config
+
+---
+
+### T6: Health & Readiness Endpoints
+
+**Assignee**: Platform Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Implement health check endpoints for orchestration platforms.
+
+**Acceptance Criteria**:
+- [ ] `GET /health/live` - Liveness probe (process alive)
+- [ ] `GET /health/ready` - Readiness probe (accepting traffic)
+- [ ] `GET /health/startup` - Startup probe (initialization complete)
+- [ ] Downstream health aggregation from connected microservices
+- [ ] Metrics endpoint at `/metrics` (Prometheus format)
+
+---
+
+### T7: Configuration & Options
+
+**Assignee**: Platform Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Define comprehensive gateway configuration model.
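+
+Startup binding and validation can use the standard options pipeline (a short
+sketch; `ValidateOnStart` makes bad configuration fail fast at boot):
+
+```csharp
+// Sketch: bind the "gateway" section and validate eagerly at startup.
+builder.Services.AddOptions<GatewayOptions>()
+    .Bind(builder.Configuration.GetSection("gateway"))
+    .ValidateDataAnnotations() // attribute-based checks on GatewayOptions
+    .ValidateOnStart();        // fail at boot instead of at first resolve
+```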
+ +**Acceptance Criteria**: +- [ ] `GatewayOptions` with all configurable settings +- [ ] YAML configuration support +- [ ] Environment variable overrides +- [ ] Configuration validation on startup +- [ ] Hot-reload for non-transport settings + +**Configuration Spec**: +```yaml +gateway: + node: + region: "eu1" + nodeId: "gw-eu1-01" + environment: "prod" + + transports: + tcp: + enabled: true + port: 9100 + maxConnections: 1000 + tls: + enabled: true + port: 9443 + certificatePath: "/certs/gateway.pfx" + clientCertificateMode: "RequireCertificate" + + routing: + defaultTimeout: "30s" + maxRequestBodySize: "100MB" + streamingEnabled: true + neighborRegions: ["eu2", "us1"] + + auth: + dpopEnabled: true + mtlsEnabled: true + rateLimiting: + enabled: true + requestsPerMinute: 1000 + burstSize: 100 + + openapi: + enabled: true + cacheTtlSeconds: 300 +``` + +--- + +### T8: Unit Tests + +**Assignee**: Platform Team +**Story Points**: 3 +**Status**: TODO + +**Description**: +Comprehensive unit tests for gateway components. + +**Acceptance Criteria**: +- [ ] Routing middleware tests (happy path, errors, timeouts) +- [ ] Instance selection algorithm tests +- [ ] Claims extraction tests +- [ ] Configuration validation tests +- [ ] OpenAPI aggregation tests +- [ ] 90%+ code coverage + +--- + +### T9: Integration Tests + +**Assignee**: Platform Team +**Story Points**: 5 +**Status**: TODO + +**Description**: +End-to-end integration tests with in-memory transport. + +**Acceptance Criteria**: +- [ ] Request routing through gateway to mock microservice +- [ ] Streaming response handling +- [ ] Cancellation propagation +- [ ] Auth flow integration +- [ ] Multi-instance load balancing +- [ ] Health check aggregation +- [ ] Uses `StellaOps.Router.Transport.InMemory` for testing + +--- + +### T10: Documentation + +**Assignee**: Platform Team +**Story Points**: 2 +**Status**: TODO + +**Description**: +Create gateway architecture documentation. + +**Acceptance Criteria**: +- [ ] `docs/modules/gateway/architecture.md` - Full architecture card +- [ ] Update `docs/07_HIGH_LEVEL_ARCHITECTURE.md` with gateway details +- [ ] Operator runbook for deployment and troubleshooting +- [ ] Configuration reference + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Platform Team | Project Scaffolding | +| 2 | T2 | TODO | T1 | Platform Team | Gateway Host Service | +| 3 | T3 | TODO | T2 | Platform Team | Request Routing Middleware | +| 4 | T4 | TODO | T1 | Platform Team | Auth & Authorization Integration | +| 5 | T5 | TODO | T2 | Platform Team | OpenAPI Aggregation Endpoint | +| 6 | T6 | TODO | T1 | Platform Team | Health & Readiness Endpoints | +| 7 | T7 | TODO | T1 | Platform Team | Configuration & Options | +| 8 | T8 | TODO | T1-T7 | Platform Team | Unit Tests | +| 9 | T9 | TODO | T8 | Platform Team | Integration Tests | +| 10 | T10 | TODO | T1-T9 | Platform Team | Documentation | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Reference Architecture advisory gap analysis. 
| Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Single ingress point | Decision | Platform Team | All HTTP traffic goes through Gateway.WebService |
+| Binary protocol only for internal | Decision | Platform Team | No HTTP between Gateway and microservices |
+| TLS required for production | Decision | Platform Team | TCP transport only for development/testing |
+| DPoP + mTLS dual support | Decision | Platform Team | Both auth mechanisms supported concurrently |
+
+---
+
+## Success Criteria
+
+- [ ] Gateway accepts HTTP requests and routes to microservices via binary protocol
+- [ ] All existing Router.Gateway tests pass
+- [ ] `tests/StellaOps.Gateway.WebService.Tests/` project references work (no longer orphaned)
+- [ ] OpenAPI spec aggregation functional
+- [ ] Auth integration with Authority validated
+- [ ] Performance: <5ms routing overhead at P99
+
+**Sprint Status**: TODO (0/10 tasks complete)
diff --git a/docs/implplan/SPRINT_3600_0002_0001_cyclonedx_1_7_upgrade.md b/docs/implplan/SPRINT_3600_0002_0001_cyclonedx_1_7_upgrade.md
new file mode 100644
index 000000000..fe76b6c4d
--- /dev/null
+++ b/docs/implplan/SPRINT_3600_0002_0001_cyclonedx_1_7_upgrade.md
@@ -0,0 +1,309 @@
+# Sprint 3600.0002.0001 · CycloneDX 1.7 Upgrade — SBOM Format Migration
+
+## Topic & Scope
+- Upgrade all CycloneDX SBOM generation from version 1.6 to version 1.7.
+- Update serialization, parsing, and validation to CycloneDX 1.7 specification.
+- Maintain backward compatibility for reading CycloneDX 1.6 documents.
+- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Emit/`, `src/SbomService/`, `src/Excititor/`
+
+## Dependencies & Concurrency
+- **Upstream**: CycloneDX Core NuGet package update
+- **Downstream**: All SBOM consumers (Policy, Excititor, ExportCenter)
+- **Safe to parallelize with**: Sprints 3600.0003.*, 4200.*, 5200.*
+
+## Documentation Prerequisites
+- CycloneDX 1.7 Specification: https://cyclonedx.org/docs/1.7/
+- `docs/modules/scanner/architecture.md`
+- `docs/modules/sbomservice/architecture.md`
+
+---
+
+## Tasks
+
+### T1: CycloneDX NuGet Package Update
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Update CycloneDX.Core and related packages to versions supporting 1.7.
+
+**Acceptance Criteria**:
+- [ ] Update `CycloneDX.Core` to latest version with 1.7 support
+- [ ] Update `CycloneDX.Json` if separate
+- [ ] Update `CycloneDX.Protobuf` if separate
+- [ ] Verify all dependent projects build
+- [ ] No breaking API changes (or document migration path)
+
+**Package Updates**:
+```xml
+<ItemGroup>
+  <!-- Versions are placeholders: pin to the first releases with CycloneDX 1.7 support -->
+  <PackageReference Include="CycloneDX.Core" Version="TBD" />
+  <PackageReference Include="CycloneDX.Json" Version="TBD" />
+  <PackageReference Include="CycloneDX.Protobuf" Version="TBD" />
+</ItemGroup>
+```
+
+---
+
+### T2: CycloneDxComposer Update
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Update the SBOM composer to emit CycloneDX 1.7 format.
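+
+The core change is a spec-version bump in the composer, sketched here with the
+CycloneDX .NET library (assumes the updated package exposes
+`SpecificationVersion.v1_7` — see the NuGet availability risk below):
+
+```csharp
+using CycloneDX.Models;
+
+// Sketch: bump the emitted spec version; field population stays unchanged.
+var bom = new Bom
+{
+    SpecVersion = SpecificationVersion.v1_7, // assumption: added by the 1.7-capable package
+    SerialNumber = $"urn:uuid:{Guid.NewGuid()}",
+};
+
+string json = CycloneDX.Json.Serializer.Serialize(bom);
+```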
+
+**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Emit/Composition/CycloneDxComposer.cs`
+
+**Acceptance Criteria**:
+- [ ] Spec version set to "1.7"
+- [ ] Media type updated to `application/vnd.cyclonedx+json; version=1.7`
+- [ ] New 1.7 fields populated where applicable:
+  - [ ] `declarations` for attestations
+  - [ ] `definitions` for standards/requirements
+  - [ ] Enhanced `formulation` for build environment
+  - [ ] `modelCard` for ML components (if applicable)
+  - [ ] `cryptography` properties (if applicable)
+- [ ] Existing fields remain populated correctly
+- [ ] Deterministic output maintained
+
+**Key 1.7 Additions**:
+```csharp
+// CycloneDX 1.7 new features
+public sealed record CycloneDx17Enhancements
+{
+    // Attestations - link to in-toto/DSSE
+    public ImmutableArray<Declaration> Declarations { get; init; }
+
+    // Standards compliance (e.g., NIST, ISO)
+    public ImmutableArray<Definition> Definitions { get; init; }
+
+    // Enhanced formulation for reproducibility
+    public Formulation? Formulation { get; init; }
+
+    // Cryptography bill of materials
+    public CryptographyProperties? Cryptography { get; init; }
+}
+```
+
+---
+
+### T3: SBOM Serialization Updates
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Update JSON and Protobuf serialization for 1.7 schema.
+
+**Acceptance Criteria**:
+- [ ] JSON serialization outputs valid CycloneDX 1.7
+- [ ] Protobuf serialization updated for 1.7 schema
+- [ ] Schema validation against official 1.7 JSON schema
+- [ ] Canonical JSON ordering preserved (determinism)
+- [ ] Empty collections omitted (spec compliance)
+
+---
+
+### T4: SBOM Parsing Backward Compatibility
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Ensure parsers can read both 1.6 and 1.7 CycloneDX documents.
+
+**Implementation Path**: `src/Excititor/__Libraries/StellaOps.Excititor.Formats.CycloneDX/`
+
+**Acceptance Criteria**:
+- [ ] Parser auto-detects spec version from document
+- [ ] 1.6 documents parsed without errors
+- [ ] 1.7 documents parsed with new fields
+- [ ] Unknown fields in future versions ignored gracefully
+- [ ] Version-specific validation applied
+
+**Parsing Logic**:
+```csharp
+public CycloneDxBom Parse(string json)
+{
+    var specVersion = ExtractSpecVersion(json);
+    return specVersion switch
+    {
+        "1.6" => ParseV16(json),
+        "1.7" => ParseV17(json),
+        _ when specVersion.StartsWith("1.") => ParseV17(json), // forward compat
+        _ => throw new UnsupportedSpecVersionException(specVersion)
+    };
+}
+```
+
+---
+
+### T5: VEX Format Updates
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Update VEX document generation to leverage CycloneDX 1.7 improvements.
+
+**Acceptance Criteria**:
+- [ ] VEX documents reference 1.7 spec
+- [ ] Enhanced `vulnerability.ratings` with CVSS 4.0 vectors
+- [ ] `vulnerability.affects[].versions` range expressions
+- [ ] `vulnerability.source` with PURL references
+- [ ] Backward-compatible with 1.6 VEX consumers
+
+---
+
+### T6: Media Type Updates
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Update all media type references throughout the codebase.
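+
+Accept-header handling can reduce to a small negotiation helper (sketch; uses
+the `CycloneDxMediaTypes` constants defined below):
+
+```csharp
+// Sketch: serve 1.6 only when a client explicitly pins it; default to 1.7.
+static string NegotiateCycloneDxMediaType(string? acceptHeader) =>
+    acceptHeader is not null && acceptHeader.Contains("version=1.6", StringComparison.Ordinal)
+        ? CycloneDxMediaTypes.JsonV16
+        : CycloneDxMediaTypes.JsonV17;
+```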
+
+**Acceptance Criteria**:
+- [ ] Constants updated: `application/vnd.cyclonedx+json; version=1.7`
+- [ ] OCI artifact type updated for SBOM referrers
+- [ ] Content-Type headers in API responses updated
+- [ ] Accept header handling supports both 1.6 and 1.7
+
+**Media Type Constants**:
+```csharp
+public static class CycloneDxMediaTypes
+{
+    public const string JsonV17 = "application/vnd.cyclonedx+json; version=1.7";
+    public const string JsonV16 = "application/vnd.cyclonedx+json; version=1.6";
+    public const string Json = JsonV17; // Default to latest
+
+    public const string ProtobufV17 = "application/vnd.cyclonedx+protobuf; version=1.7";
+    public const string XmlV17 = "application/vnd.cyclonedx+xml; version=1.7";
+}
+```
+
+---
+
+### T7: Golden Corpus Update
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Update the golden test corpus with CycloneDX 1.7 expected outputs.
+
+**Acceptance Criteria**:
+- [ ] Regenerate all golden SBOM files in 1.7 format
+- [ ] Verify determinism: same inputs produce identical outputs
+- [ ] Add 1.7-specific test cases (declarations, formulation)
+- [ ] Retain 1.6 golden files for backward compat testing
+- [ ] CI/CD determinism tests pass
+
+---
+
+### T8: Unit Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Update and expand unit tests for 1.7 support.
+
+**Acceptance Criteria**:
+- [ ] Composer tests for 1.7 output
+- [ ] Parser tests for 1.6 and 1.7 input
+- [ ] Serialization round-trip tests
+- [ ] Schema validation tests
+- [ ] Media type handling tests
+
+---
+
+### T9: Integration Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+End-to-end integration tests with 1.7 SBOMs.
+
+**Acceptance Criteria**:
+- [ ] Full scan → SBOM → Policy evaluation flow
+- [ ] SBOM export to OCI registry as referrer
+- [ ] Cross-module SBOM consumption (Excititor, Policy)
+- [ ] Air-gap bundle with 1.7 SBOMs
+
+---
+
+### T10: Documentation Updates
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Update documentation to reflect the 1.7 upgrade.
+
+**Acceptance Criteria**:
+- [ ] Update `docs/modules/scanner/architecture.md` with 1.7 references
+- [ ] Update `docs/modules/sbomservice/architecture.md`
+- [ ] Update API documentation with new media types
+- [ ] Migration guide for 1.6 → 1.7
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Scanner Team | NuGet Package Update |
+| 2 | T2 | TODO | T1 | Scanner Team | CycloneDxComposer Update |
+| 3 | T3 | TODO | T1 | Scanner Team | Serialization Updates |
+| 4 | T4 | TODO | T1 | Scanner Team | Parsing Backward Compatibility |
+| 5 | T5 | TODO | T2 | Scanner Team | VEX Format Updates |
+| 6 | T6 | TODO | T2 | Scanner Team | Media Type Updates |
+| 7 | T7 | TODO | T2-T6 | Scanner Team | Golden Corpus Update |
+| 8 | T8 | TODO | T2-T6 | Scanner Team | Unit Tests |
+| 9 | T9 | TODO | T8 | Scanner Team | Integration Tests |
+| 10 | T10 | TODO | T1-T9 | Scanner Team | Documentation Updates |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Reference Architecture advisory - upgrading from 1.6 to 1.7. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Default to 1.7 | Decision | Scanner Team | New SBOMs default to 1.7; 1.6 available via config |
+| Backward compat | Decision | Scanner Team | Parsers support 1.5, 1.6, 1.7 for ingestion |
+| Protobuf sync | Risk | Scanner Team | Protobuf schema may lag JSON; prioritize JSON |
+| NuGet availability | Risk | Scanner Team | CycloneDX.Core 1.7 support timing unclear |
+
+---
+
+## Success Criteria
+
+- [ ] All SBOM generation outputs valid CycloneDX 1.7
+- [ ] All parsers read 1.6 and 1.7 without errors
+- [ ] Determinism tests pass with 1.7 output
+- [ ] No regression in scan-to-policy flow
+- [ ] Media types correctly reflect 1.7
+
+**Sprint Status**: TODO (0/10 tasks complete)
diff --git a/docs/implplan/SPRINT_3600_0003_0001_spdx_3_0_1_generation.md b/docs/implplan/SPRINT_3600_0003_0001_spdx_3_0_1_generation.md
new file mode 100644
index 000000000..6a212c258
--- /dev/null
+++ b/docs/implplan/SPRINT_3600_0003_0001_spdx_3_0_1_generation.md
@@ -0,0 +1,387 @@
+# Sprint 3600.0003.0001 · SPDX 3.0.1 Native Generation — Full SBOM Format Support
+
+## Topic & Scope
+- Implement native SPDX 3.0.1 SBOM generation capability.
+- Currently only license normalization and import parsing exist; this sprint adds full generation.
+- Provide SPDX 3.0.1 as an alternative output format alongside CycloneDX 1.7.
+- **Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Emit/`, `src/SbomService/`
+
+## Dependencies & Concurrency
+- **Upstream**: Sprint 3600.0002.0001 (CycloneDX 1.7 - establishes patterns)
+- **Downstream**: ExportCenter, air-gap bundles, Policy (optional SPDX support)
+- **Safe to parallelize with**: Sprints 4200.*, 5200.*
+
+## Documentation Prerequisites
+- SPDX 3.0.1 Specification: https://spdx.github.io/spdx-spec/v3.0.1/
+- `docs/modules/scanner/architecture.md`
+- Existing: `src/AirGap/StellaOps.AirGap.Importer/Reconciliation/Parsers/SpdxParser.cs`
+
+---
+
+## Tasks
+
+### T1: SPDX 3.0.1 Domain Model
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Create a comprehensive C# domain model for SPDX 3.0.1 elements.
+
+**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Emit/Spdx/Models/`
+
+**Acceptance Criteria**:
+- [ ] Core classes: `SpdxDocument`, `SpdxElement`, `SpdxRelationship`
+- [ ] Package model: `SpdxPackage` with all 3.0.1 fields
+- [ ] File model: `SpdxFile` with checksums and annotations
+- [ ] Snippet model: `SpdxSnippet` for partial file references
+- [ ] Licensing: `SpdxLicense`, `SpdxLicenseExpression`, `SpdxExtractedLicense`
+- [ ] Security: `SpdxVulnerability`, `SpdxVulnAssessment`
+- [ ] Annotations and relationships per spec
+- [ ] Immutable records with init-only properties
+
+**Core Model**:
+```csharp
+using System.Collections.Immutable;
+
+namespace StellaOps.Scanner.Emit.Spdx.Models;
+
+public sealed record SpdxDocument
+{
+    public required string SpdxVersion { get; init; } // "SPDX-3.0.1"
+    public required string DocumentNamespace { get; init; }
+    public required string Name { get; init; }
+    public required SpdxCreationInfo CreationInfo { get; init; }
+    public ImmutableArray<SpdxElement> Elements { get; init; }
+    public ImmutableArray<SpdxRelationship> Relationships { get; init; }
+    public ImmutableArray<SpdxAnnotation> Annotations { get; init; }
+}
+
+public abstract record SpdxElement
+{
+    public required string SpdxId { get; init; }
+    public string? Name { get; init; }
+    public string? Comment { get; init; }
+}
+
+public sealed record SpdxPackage : SpdxElement
+{
+    public string? Version { get; init; }
+    public string? PackageUrl { get; init; } // PURL
+    public string? DownloadLocation { get; init; }
+    public SpdxLicenseExpression? DeclaredLicense { get; init; }
+    public SpdxLicenseExpression? ConcludedLicense { get; init; }
+    public string? CopyrightText { get; init; }
+    public ImmutableArray<SpdxChecksum> Checksums { get; init; }
+    public ImmutableArray<SpdxExternalRef> ExternalRefs { get; init; }
+    public SpdxPackageVerificationCode? VerificationCode { get; init; }
+}
+
+public sealed record SpdxRelationship
+{
+    public required string FromElement { get; init; }
+    public required SpdxRelationshipType Type { get; init; }
+    public required string ToElement { get; init; }
+}
+```
+
+---
+
+### T2: SPDX 3.0.1 Composer
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Implement an SBOM composer that generates SPDX 3.0.1 documents from scan results.
+
+**Implementation Path**: `src/Scanner/__Libraries/StellaOps.Scanner.Emit/Composition/SpdxComposer.cs`
+
+**Acceptance Criteria**:
+- [ ] `ISpdxComposer` interface with `Compose()` method
+- [ ] `SpdxComposer` implementation
+- [ ] Maps internal package model to SPDX packages
+- [ ] Generates DESCRIBES relationships for root packages
+- [ ] Generates DEPENDENCY_OF relationships for dependencies
+- [ ] Populates license expressions from detected licenses
+- [ ] Deterministic SPDX ID generation (content-addressed)
+- [ ] Document namespace follows URI pattern
+
+**Composer Interface**:
+```csharp
+public interface ISpdxComposer
+{
+    SpdxDocument Compose(
+        ScanResult scanResult,
+        SpdxCompositionOptions options,
+        CancellationToken cancellationToken = default);
+
+    ValueTask<SpdxDocument> ComposeAsync(
+        ScanResult scanResult,
+        SpdxCompositionOptions options,
+        CancellationToken cancellationToken = default);
+}
+
+public sealed record SpdxCompositionOptions
+{
+    public string CreatorTool { get; init; } = "StellaOps-Scanner";
+    public string? CreatorOrganization { get; init; }
+    public string NamespaceBase { get; init; } = "https://stellaops.io/spdx";
+    public bool IncludeFiles { get; init; } = false;
+    public bool IncludeSnippets { get; init; } = false;
+    public SpdxLicenseListVersion LicenseListVersion { get; init; } = SpdxLicenseListVersion.V3_21;
+}
+```
+
+---
+
+### T3: SPDX JSON-LD Serialization
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Implement JSON-LD serialization per the SPDX 3.0.1 specification.
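+
+For the determinism criterion below, one minimal approach is to sort object keys ordinally before writing. A sketch (not the actual serializer; assumes System.Text.Json):
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text.Json.Nodes;
+
+// Minimal canonicalization sketch: recursively rebuilds the tree with
+// ordinally sorted keys so repeated runs emit byte-identical JSON.
+// JSON-LD keywords ("@context", "@type", "@id") sort first because '@'
+// precedes letters in ordinal order.
+public static class CanonicalJson
+{
+    public static string Serialize(JsonNode node) =>
+        Canonicalize(node)?.ToJsonString() ?? "null";
+
+    private static JsonNode? Canonicalize(JsonNode? node) => node switch
+    {
+        JsonObject obj => new JsonObject(
+            obj.OrderBy(p => p.Key, StringComparer.Ordinal)
+               .Select(p => KeyValuePair.Create(p.Key, Canonicalize(p.Value)))),
+        JsonArray arr => new JsonArray(arr.Select(Canonicalize).ToArray()),
+        JsonValue value => value.DeepClone(),
+        _ => null
+    };
+}
+```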
+
+**Acceptance Criteria**:
+- [ ] JSON-LD output with proper @context
+- [ ] @type annotations for all elements
+- [ ] @id for element references
+- [ ] Canonical JSON ordering (deterministic)
+- [ ] Schema validation against official SPDX 3.0.1 JSON schema
+- [ ] Compact JSON-LD form (not expanded)
+
+**JSON-LD Output Example**:
+```json
+{
+  "@context": "https://spdx.org/rdf/3.0.1/spdx-context.jsonld",
+  "@type": "SpdxDocument",
+  "spdxVersion": "SPDX-3.0.1",
+  "name": "SBOM for container:sha256:abc123",
+  "documentNamespace": "https://stellaops.io/spdx/container/sha256:abc123",
+  "creationInfo": {
+    "@type": "CreationInfo",
+    "created": "2025-12-21T10:00:00Z",
+    "createdBy": ["Tool: StellaOps-Scanner-1.0.0"]
+  },
+  "rootElement": ["SPDXRef-Package-root"],
+  "element": [
+    {
+      "@type": "Package",
+      "@id": "SPDXRef-Package-root",
+      "name": "myapp",
+      "packageVersion": "1.0.0",
+      "packageUrl": "pkg:oci/myapp@sha256:abc123"
+    }
+  ]
+}
+```
+
+---
+
+### T4: SPDX Tag-Value Serialization (Optional)
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Implement the legacy tag-value format for backward compatibility.
+
+**Acceptance Criteria**:
+- [ ] Tag-value output matching SPDX 2.3 format
+- [ ] Deterministic field ordering
+- [ ] Proper escaping of multi-line text
+- [ ] Relationship serialization
+- [ ] Can be disabled via configuration
+
+**Tag-Value Example**:
+```
+SPDXVersion: SPDX-2.3
+DataLicense: CC0-1.0
+SPDXID: SPDXRef-DOCUMENT
+DocumentName: SBOM for container:sha256:abc123
+DocumentNamespace: https://stellaops.io/spdx/container/sha256:abc123
+
+PackageName: myapp
+SPDXID: SPDXRef-Package-root
+PackageVersion: 1.0.0
+PackageDownloadLocation: NOASSERTION
+```
+
+---
+
+### T5: License Expression Handling
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Implement SPDX license expression parsing and generation.
+
+**Acceptance Criteria**:
+- [ ] Parse SPDX license expressions (AND, OR, WITH)
+- [ ] Generate valid license expressions
+- [ ] Handle LicenseRef- for custom licenses
+- [ ] Validate against SPDX license list
+- [ ] Support SPDX 3.21 license list
+
+**License Expression Model**:
+```csharp
+public abstract record SpdxLicenseExpression;
+
+public sealed record SpdxSimpleLicense(string LicenseId) : SpdxLicenseExpression;
+
+public sealed record SpdxConjunctiveLicense(
+    SpdxLicenseExpression Left,
+    SpdxLicenseExpression Right) : SpdxLicenseExpression; // AND
+
+public sealed record SpdxDisjunctiveLicense(
+    SpdxLicenseExpression Left,
+    SpdxLicenseExpression Right) : SpdxLicenseExpression; // OR
+
+public sealed record SpdxWithException(
+    SpdxLicenseExpression License,
+    string Exception) : SpdxLicenseExpression;
+```
+
+---
+
+### T6: SPDX-CycloneDX Conversion
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Implement bidirectional conversion between SPDX and CycloneDX.
+
+**Acceptance Criteria**:
+- [ ] CycloneDX → SPDX conversion
+- [ ] SPDX → CycloneDX conversion
+- [ ] Preserve all common fields
+- [ ] Handle format-specific fields gracefully
+- [ ] Conversion loss documented
+
+---
+
+### T7: SBOM Service Integration
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Integrate SPDX generation into SBOM service endpoints.
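+
+A minimal-API shape for the negotiation described below might look like this sketch; the route, `ISbomStore`, and its methods are assumptions for illustration:
+
+```csharp
+// Illustrative handler: query parameter first, then Accept header,
+// with CycloneDX 1.7 remaining the default response format.
+app.MapGet("/sboms/{digest}", async (string digest, HttpRequest request, ISbomStore store) =>
+{
+    var accept = request.Headers.Accept.ToString();
+    var format = request.Query["format"].ToString();
+
+    if (format == "spdx" || accept.Contains("application/spdx+json"))
+        return Results.Bytes(await store.GetSpdxJsonAsync(digest), "application/spdx+json");
+
+    if (accept.Contains("text/spdx"))
+        return Results.Text(await store.GetSpdxTagValueAsync(digest), "text/spdx");
+
+    return Results.Bytes(
+        await store.GetCycloneDxAsync(digest),
+        "application/vnd.cyclonedx+json; version=1.7");
+});
+```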
+
+**Implementation Path**: `src/SbomService/`
+
+**Acceptance Criteria**:
+- [ ] `Accept: application/spdx+json` returns SPDX 3.0.1
+- [ ] `Accept: text/spdx` returns tag-value format
+- [ ] Query parameter `?format=spdx` as alternative
+- [ ] Default remains CycloneDX 1.7
+- [ ] Caching works for both formats
+
+---
+
+### T8: OCI Artifact Type Registration
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Register SPDX SBOMs as OCI referrers with the proper artifact type.
+
+**Acceptance Criteria**:
+- [ ] Artifact type: `application/spdx+json`
+- [ ] Push to registry alongside CycloneDX
+- [ ] Configurable: push one or both formats
+- [ ] Referrer index lists both when available
+
+---
+
+### T9: Unit Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Comprehensive unit tests for SPDX generation.
+
+**Acceptance Criteria**:
+- [ ] Model construction tests
+- [ ] Composer tests for various scan results
+- [ ] JSON-LD serialization tests
+- [ ] Tag-value serialization tests
+- [ ] License expression tests
+- [ ] Conversion tests
+
+---
+
+### T10: Integration Tests & Golden Corpus
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+End-to-end tests and a golden file corpus for SPDX.
+
+**Acceptance Criteria**:
+- [ ] Full scan → SPDX flow
+- [ ] Golden SPDX files for determinism testing
+- [ ] SPDX validation against official tooling
+- [ ] Air-gap bundle with SPDX SBOMs
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Scanner Team | SPDX 3.0.1 Domain Model |
+| 2 | T2 | TODO | T1 | Scanner Team | SPDX 3.0.1 Composer |
+| 3 | T3 | TODO | T1 | Scanner Team | JSON-LD Serialization |
+| 4 | T4 | TODO | T1 | Scanner Team | Tag-Value Serialization |
+| 5 | T5 | TODO | — | Scanner Team | License Expression Handling |
+| 6 | T6 | TODO | T1, T3 | Scanner Team | SPDX-CycloneDX Conversion |
+| 7 | T7 | TODO | T2, T3 | Scanner Team | SBOM Service Integration |
+| 8 | T8 | TODO | T7 | Scanner Team | OCI Artifact Type Registration |
+| 9 | T9 | TODO | T1-T6 | Scanner Team | Unit Tests |
+| 10 | T10 | TODO | T7-T8 | Scanner Team | Integration Tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Reference Architecture advisory - adding SPDX 3.0.1 generation. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| JSON-LD primary | Decision | Scanner Team | JSON-LD is primary format; tag-value for legacy |
+| CycloneDX default | Decision | Scanner Team | CycloneDX remains default; SPDX opt-in |
+| SPDX 3.0.1 only | Decision | Scanner Team | No support for SPDX 2.x generation (only parsing) |
+| License list sync | Risk | Scanner Team | SPDX license list updates may require periodic sync |
+
+---
+
+## Success Criteria
+
+- [ ] Valid SPDX 3.0.1 JSON-LD output from scans
+- [ ] Passes official SPDX validation tools
+- [ ] Deterministic output (same input = same output)
+- [ ] Can export both CycloneDX and SPDX for same scan
+- [ ] Documentation complete
+
+**Sprint Status**: TODO (0/10 tasks complete)
diff --git a/docs/implplan/SPRINT_3600_SUMMARY.md b/docs/implplan/SPRINT_3600_SUMMARY.md
new file mode 100644
index 000000000..4d817991d
--- /dev/null
+++ b/docs/implplan/SPRINT_3600_SUMMARY.md
@@ -0,0 +1,87 @@
+# Sprint Series 3600 · Reference Architecture Gap Closure
+
+## Overview
+
+This sprint series addresses gaps identified from the **20-Dec-2025 Reference Architecture Advisory** analysis. These sprints complete the implementation of the Stella Ops reference architecture vision.
+
+## Sprint Index
+
+| Sprint | Title | Priority | Status | Dependencies |
+|--------|-------|----------|--------|--------------|
+| 3600.0001.0001 | Gateway WebService | HIGH | TODO | Router infrastructure (complete) |
+| 3600.0002.0001 | CycloneDX 1.7 Upgrade | HIGH | TODO | None |
+| 3600.0003.0001 | SPDX 3.0.1 Generation | MEDIUM | TODO | 3600.0002.0001 |
+
+## Related Sprints (Other Series)
+
+| Sprint | Title | Priority | Status | Series |
+|--------|-------|----------|--------|--------|
+| 4200.0001.0001 | Proof Chain Verification UI | HIGH | TODO | 4200 (UI) |
+| 5200.0001.0001 | Starter Policy Template | HIGH | TODO | 5200 (Docs) |
+
+## Gap Analysis Source
+
+**Advisory**: `docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md`
+
+### Gaps Addressed
+
+| Gap | Sprint | Description |
+|-----|--------|-------------|
+| Gateway WebService Missing | 3600.0001.0001 | HTTP ingress service not implemented |
+| CycloneDX 1.6 → 1.7 | 3600.0002.0001 | Upgrade to latest CycloneDX spec |
+| SPDX 3.0.1 Generation | 3600.0003.0001 | Native SPDX SBOM generation |
+| Proof Chain UI | 4200.0001.0001 | Evidence transparency dashboard |
+| Starter Policy | 5200.0001.0001 | Day-1 policy pack for onboarding |
+
+### Already Implemented (No Action Required)
+
+| Component | Status | Notes |
+|-----------|--------|-------|
+| Scheduler | Complete | Full implementation with PostgreSQL, Redis |
+| Policy Engine | Complete | Signed verdicts, deterministic IR, exceptions |
+| Authority | Complete | DPoP/mTLS, OpToks, JWKS rotation |
+| Attestor | Complete | DSSE/in-toto, Rekor v2, proof chains |
+| Timeline/Notify | Complete | TimelineIndexer + Notify with 4 channels |
+| Excititor | Complete | VEX ingestion, CycloneDX, OpenVEX |
+| Concelier | Complete | 31+ connectors, Link-Not-Merge |
+| Reachability/Signals | Complete | 5-factor scoring, lattice logic |
+| OCI Referrers | Complete | ExportCenter + Excititor |
+| Tenant Isolation | Complete | RLS, per-tenant keys, namespaces |
+
+## Execution Order
+
+```mermaid
+graph LR
+    A[3600.0002.0001<br/>CycloneDX 1.7] --> B[3600.0003.0001<br/>SPDX 3.0.1]
+    C[3600.0001.0001<br/>Gateway WebService] --> D[Production Ready]
+    B --> D
+    E[4200.0001.0001<br/>Proof Chain UI] --> D
+    F[5200.0001.0001<br/>Starter Policy] --> D
+```
+
+## Success Criteria for Series
+
+- [ ] Gateway WebService accepts HTTP and routes to microservices
+- [ ] All SBOMs generated in CycloneDX 1.7 format
+- [ ] SPDX 3.0.1 available as alternative SBOM format
+- [ ] Auditors can view complete evidence chains in UI
+- [ ] New customers can deploy starter policy in <5 minutes
+
+## Created
+
+- **Date**: 2025-12-21
+- **Source**: Reference Architecture Advisory Gap Analysis
+- **Author**: Agent
+
+---
+
+## Sprint Status Summary
+
+| Sprint | Tasks | Completed | Status |
+|--------|-------|-----------|--------|
+| 3600.0001.0001 | 10 | 0 | TODO |
+| 3600.0002.0001 | 10 | 0 | TODO |
+| 3600.0003.0001 | 10 | 0 | TODO |
+| 4200.0001.0001 | 11 | 0 | TODO |
+| 5200.0001.0001 | 10 | 0 | TODO |
+| **Total** | **51** | **0** | **TODO** |
diff --git a/docs/implplan/SPRINT_4000_0001_0001_unknowns_decay_algorithm.md b/docs/implplan/SPRINT_4000_0001_0001_unknowns_decay_algorithm.md
new file mode 100644
index 000000000..b31471220
--- /dev/null
+++ b/docs/implplan/SPRINT_4000_0001_0001_unknowns_decay_algorithm.md
@@ -0,0 +1,384 @@
+# Sprint 4000.0001.0001 · Unknowns Decay Algorithm
+
+## Topic & Scope
+
+- Add a time-based decay factor to the UnknownRanker scoring algorithm
+- Implements bucket-based freshness decay following the existing `FreshnessModels` pattern
+- Ensures older unknowns gradually reduce in priority unless re-evaluated
+
+**Working directory:** `src/Policy/__Libraries/StellaOps.Policy.Unknowns/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: None (first sprint in batch)
+- **Downstream**: Sprint 4000.0001.0002 (BlastRadius/Containment)
+- **Safe to parallelize with**: Sprint 4000.0002.0001 (EPSS Connector)
+
+## Documentation Prerequisites
+
+- `src/Policy/__Libraries/StellaOps.Policy.Unknowns/AGENTS.md`
+- `src/Policy/__Libraries/StellaOps.Policy/Scoring/FreshnessModels.cs` (pattern reference)
+- `docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md`
+
+---
+
+## Tasks
+
+### T1: Extend UnknownRankInput with Timestamps
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Add timestamp fields to the `UnknownRankInput` record to support decay calculation.
+
+**Implementation Path**: `Services/UnknownRanker.cs` (lines 16-23)
+
+**Changes**:
+```csharp
+public sealed record UnknownRankInput(
+    bool HasVexStatement,
+    bool HasReachabilityData,
+    bool HasConflictingSources,
+    bool IsStaleAdvisory,
+    bool IsInKev,
+    decimal EpssScore,
+    decimal CvssScore,
+    // NEW: Time-based decay inputs
+    DateTimeOffset? FirstSeenAt,
+    DateTimeOffset? LastEvaluatedAt,
+    DateTimeOffset AsOfDateTime);
+```
+
+**Acceptance Criteria**:
+- [ ] `FirstSeenAt` nullable timestamp added (when unknown first detected)
+- [ ] `LastEvaluatedAt` nullable timestamp added (last ranking recalculation)
+- [ ] `AsOfDateTime` required timestamp added (reference time for decay)
+- [ ] Backward compatible: existing callers can pass null for new optional fields
+- [ ] All existing tests still pass
+
+---
+
+### T2: Implement DecayCalculator
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Implement bucket-based decay calculation following the `FreshnessModels` pattern in `StellaOps.Policy.Scoring`.
+
+**Implementation Path**: `Services/UnknownRanker.cs`
+
+**Decay Buckets** (from FreshnessModels pattern):
+```csharp
+/// <summary>
+/// Computes decay factor based on days since last evaluation.
+/// Returns 1.0 for fresh, decreasing to 0.2 for very old.
+/// </summary>
+private static decimal ComputeDecayFactor(UnknownRankInput input)
+{
+    if (input.LastEvaluatedAt is null)
+        return 1.0m; // No history = no decay
+
+    var ageDays = (int)(input.AsOfDateTime - input.LastEvaluatedAt.Value).TotalDays;
+
+    return ageDays switch
+    {
+        <= 7 => 1.00m,   // Fresh (7d): 100%
+        <= 30 => 0.90m,  // 30d: 90%
+        <= 90 => 0.75m,  // 90d: 75%
+        <= 180 => 0.60m, // 180d: 60%
+        <= 365 => 0.40m, // 365d: 40%
+        _ => 0.20m       // >365d: 20%
+    };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ComputeDecayFactor` method implemented with bucket logic
+- [ ] Returns `1.0m` when `LastEvaluatedAt` is null (no decay)
+- [ ] All arithmetic uses `decimal` for determinism
+- [ ] Buckets match FreshnessModels pattern (7/30/90/180/365 days)
+
+---
+
+### T3: Extend UnknownRankerOptions
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T2
+
+**Description**:
+Add decay configuration options to allow customization of decay behavior.
+
+**Implementation Path**: `Services/UnknownRanker.cs` (lines 162-172)
+
+**Changes**:
+```csharp
+public sealed class UnknownRankerOptions
+{
+    // Existing band thresholds
+    public decimal HotThreshold { get; set; } = 75m;
+    public decimal WarmThreshold { get; set; } = 50m;
+    public decimal ColdThreshold { get; set; } = 25m;
+
+    // NEW: Decay configuration
+    public bool EnableDecay { get; set; } = true;
+    public IReadOnlyList<DecayBucket> DecayBuckets { get; set; } = DefaultDecayBuckets;
+
+    public static IReadOnlyList<DecayBucket> DefaultDecayBuckets { get; } =
+    [
+        new DecayBucket(7, 10000),           // 7d: 100%
+        new DecayBucket(30, 9000),           // 30d: 90%
+        new DecayBucket(90, 7500),           // 90d: 75%
+        new DecayBucket(180, 6000),          // 180d: 60%
+        new DecayBucket(365, 4000),          // 365d: 40%
+        new DecayBucket(int.MaxValue, 2000)  // >365d: 20%
+    ];
+}
+
+public sealed record DecayBucket(int MaxAgeDays, int MultiplierBps);
+```
+
+**Acceptance Criteria**:
+- [ ] `EnableDecay` toggle added (default: true)
+- [ ] `DecayBuckets` configurable list added
+- [ ] Uses basis points (10000 = 100%) for integer math
+- [ ] Default buckets match T2 implementation
+- [ ] DI configuration via `services.Configure<UnknownRankerOptions>()` works
+
+---
+
+### T4: Integrate Decay into Rank()
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T2, T3
+
+**Description**:
+Apply the decay factor to the final score calculation in the `Rank()` method.
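+
+Before the integration below, note how the T3 options reach the ranker. A DI wiring sketch (the configuration section name is an assumption):
+
+```csharp
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.DependencyInjection;
+
+public static class UnknownRankerServiceCollectionExtensions
+{
+    public static IServiceCollection AddUnknownRanker(
+        this IServiceCollection services,
+        IConfiguration configuration)
+    {
+        // Binds EnableDecay and DecayBuckets from configuration,
+        // falling back to the defaults defined on UnknownRankerOptions.
+        services.Configure<UnknownRankerOptions>(
+            configuration.GetSection("Policy:Unknowns:Ranker"));
+        services.AddSingleton<UnknownRanker>();
+        return services;
+    }
+}
+```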
+
+**Implementation Path**: `Services/UnknownRanker.cs` (lines 87-95)
+
+**Updated Rank Method**:
+```csharp
+public UnknownRankResult Rank(UnknownRankInput input)
+{
+    var uncertainty = ComputeUncertainty(input);
+    var pressure = ComputeExploitPressure(input);
+    var rawScore = Math.Round((uncertainty * 50m) + (pressure * 50m), 2);
+
+    // Apply decay factor if enabled
+    decimal decayFactor = 1.0m;
+    if (_options.EnableDecay)
+    {
+        decayFactor = ComputeDecayFactor(input);
+    }
+
+    var score = Math.Round(rawScore * decayFactor, 2);
+    var band = AssignBand(score);
+
+    return new UnknownRankResult(score, uncertainty, pressure, band, decayFactor);
+}
+```
+
+**Updated Result Record**:
+```csharp
+public sealed record UnknownRankResult(
+    decimal Score,
+    decimal UncertaintyFactor,
+    decimal ExploitPressure,
+    UnknownBand Band,
+    decimal DecayFactor = 1.0m); // NEW field
+```
+
+**Acceptance Criteria**:
+- [ ] Decay factor applied as multiplier to raw score
+- [ ] `DecayFactor` added to `UnknownRankResult`
+- [ ] Score still rounded to 2 decimal places
+- [ ] Band assignment uses decayed score
+- [ ] When `EnableDecay = false`, decay factor is 1.0
+
+---
+
+### T5: Add Decay Tests
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T4
+
+**Description**:
+Add comprehensive tests for decay calculation covering all buckets and edge cases.
+
+**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Unknowns.Tests/Services/UnknownRankerTests.cs`
+
+**Test Cases**:
+```csharp
+#region Decay Factor Tests
+
+[Fact]
+public void ComputeDecay_NullLastEvaluated_Returns100Percent()
+{
+    var input = CreateInputWithAge(lastEvaluatedAt: null);
+    var result = _ranker.Rank(input);
+    result.DecayFactor.Should().Be(1.00m);
+}
+
+[Theory]
+[InlineData(0, 1.00)]    // Today
+[InlineData(7, 1.00)]    // 7 days
+[InlineData(8, 0.90)]    // 8 days (next bucket)
+[InlineData(30, 0.90)]   // 30 days
+[InlineData(31, 0.75)]   // 31 days
+[InlineData(90, 0.75)]   // 90 days
+[InlineData(91, 0.60)]   // 91 days
+[InlineData(180, 0.60)]  // 180 days
+[InlineData(181, 0.40)]  // 181 days
+[InlineData(365, 0.40)]  // 365 days
+[InlineData(366, 0.20)]  // 366 days
+[InlineData(1000, 0.20)] // Very old
+public void ComputeDecay_AgeBuckets_ReturnsCorrectMultiplier(int ageDays, decimal expected)
+{
+    var asOf = DateTimeOffset.UtcNow;
+    var input = CreateInputWithAge(
+        lastEvaluatedAt: asOf.AddDays(-ageDays),
+        asOfDateTime: asOf);
+
+    var result = _ranker.Rank(input);
+    result.DecayFactor.Should().Be(expected);
+}
+
+[Fact]
+public void Rank_WithDecay_AppliesMultiplierToScore()
+{
+    // Arrange: Create input that would score 50 without decay
+    var input = CreateHighScoreInput(ageDays: 100); // 75% decay
+
+    // Act
+    var result = _ranker.Rank(input);
+
+    // Assert: Score should be 50 * 0.75 = 37.50
+    result.Score.Should().Be(37.50m);
+    result.DecayFactor.Should().Be(0.75m);
+}
+
+[Fact]
+public void Rank_DecayDisabled_ReturnsFullScore()
+{
+    // Arrange
+    var options = new UnknownRankerOptions { EnableDecay = false };
+    var ranker = new UnknownRanker(Options.Create(options));
+    var input = CreateHighScoreInput(ageDays: 100);
+
+    // Act
+    var result = ranker.Rank(input);
+
+    // Assert
+    result.DecayFactor.Should().Be(1.0m);
+}
+
+[Fact]
+public void Rank_Determinism_SameInputSameOutput()
+{
+    var input = CreateInputWithAge(ageDays: 45);
+
+    var results = Enumerable.Range(0, 100)
+        .Select(_ => _ranker.Rank(input))
+        .ToList();
+
+    results.Should().AllBeEquivalentTo(results[0]);
+}
+
+#endregion
+```
+
+**Acceptance Criteria**:
+- [ ] Test for null `LastEvaluatedAt` returns 1.0
+- [ ] Theory test covers all bucket boundaries (0, 7, 8, 30, 31, 90, 91, 180, 181, 365, 366)
+- [ ] Test verifies decay multiplier applied to score
+- [ ] Test verifies `EnableDecay = false` bypasses decay
+- [ ] Determinism test confirms reproducibility
+- [ ] All 6+ new tests pass
+
+---
+
+### T6: Update UnknownsRepository
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Ensure repository queries populate the `first_seen_at` and `last_evaluated_at` columns.
+
+**Implementation Path**: `Repositories/UnknownsRepository.cs`
+
+**SQL Updates**:
+```sql
+-- Verify columns exist in policy.unknowns table
+-- first_seen_at should already exist per schema
+-- last_evaluated_at needs to be updated on each ranking
+
+UPDATE policy.unknowns
+SET last_evaluated_at = @now,
+    score = @score,
+    band = @band,
+    uncertainty_factor = @uncertainty,
+    exploit_pressure = @pressure
+WHERE id = @id AND tenant_id = @tenantId;
+```
+
+**Acceptance Criteria**:
+- [ ] `first_seen_at` column is set on INSERT (if not already)
+- [ ] `last_evaluated_at` column updated on every re-ranking
+- [ ] Repository methods return timestamps for decay calculation
+- [ ] RLS (tenant isolation) still enforced
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Extend UnknownRankInput with timestamps |
+| 2 | T2 | TODO | T1 | Policy Team | Implement DecayCalculator |
+| 3 | T3 | TODO | T2 | Policy Team | Extend UnknownRankerOptions |
+| 4 | T4 | TODO | T2, T3 | Policy Team | Integrate decay into Rank() |
+| 5 | T5 | TODO | T4 | Policy Team | Add decay tests |
+| 6 | T6 | TODO | T1 | Policy Team | Update UnknownsRepository |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from MOAT gap analysis. Decay logic identified as a gap in the Triage & Unknowns advisory. | Claude |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Decay as multiplier vs deduction | Decision | Policy Team | Using multiplier (score × decay) preserves relative ordering |
+| Bucket boundaries | Decision | Policy Team | Following FreshnessModels pattern (7/30/90/180/365 days) |
+| Nullable timestamps | Decision | Policy Team | Allow null for backward compatibility; null = no decay |
+
+---
+
+## Success Criteria
+
+- [ ] All 6 tasks marked DONE
+- [ ] 6+ decay-related tests passing
+- [ ] Existing 29 tests still passing
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds for `StellaOps.Policy.Unknowns.Tests`
diff --git a/docs/implplan/SPRINT_4000_0001_0002_unknowns_blast_radius_containment.md b/docs/implplan/SPRINT_4000_0001_0002_unknowns_blast_radius_containment.md
new file mode 100644
index 000000000..c862f16d1
--- /dev/null
+++ b/docs/implplan/SPRINT_4000_0001_0002_unknowns_blast_radius_containment.md
@@ -0,0 +1,500 @@
+# Sprint 4000.0001.0002 · Unknowns BlastRadius & Containment Signals
+
+## Topic & Scope
+
+- Add BlastRadius scoring (dependency graph impact) to UnknownRanker
+- Add ContainmentSignals scoring (runtime isolation posture) to UnknownRanker
+- Extends the ranking formula with a containment reduction factor
+
+**Working directory:** `src/Policy/__Libraries/StellaOps.Policy.Unknowns/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 4000.0001.0001 (Decay Algorithm) — MUST BE DONE
+- **Downstream**: None
+- **Safe to parallelize with**: Sprint 4000.0002.0001 (EPSS Connector)
+
+## Documentation Prerequisites
+
+- Sprint 4000.0001.0001 completion
+- `src/Policy/__Libraries/StellaOps.Policy.Unknowns/AGENTS.md`
+- `docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md`
+
+---
+
+## Tasks
+
+### T1: Define BlastRadius Model
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create a new model for blast radius data representing dependency graph impact.
+
+**Implementation Path**: `Models/BlastRadius.cs` (new file)
+
+**Model Definition**:
+```csharp
+namespace StellaOps.Policy.Unknowns.Models;
+
+/// <summary>
+/// Represents the dependency graph impact of an unknown package.
+/// Data sourced from Scanner/Signals module call graph analysis.
+/// </summary>
+public sealed record BlastRadius
+{
+    /// <summary>
+    /// Number of packages that directly or transitively depend on this package.
+    /// 0 = isolated, higher = more impact if exploited.
+    /// </summary>
+    public int Dependents { get; init; }
+
+    /// <summary>
+    /// Whether this package is reachable from network-facing entrypoints.
+    /// True = higher risk, False = reduced risk.
+    /// </summary>
+    public bool NetFacing { get; init; }
+
+    /// <summary>
+    /// Privilege level under which this package typically runs.
+    /// "root" = highest risk, "user" = normal, "none" = lowest.
+    /// </summary>
+    public string? Privilege { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `BlastRadius.cs` file created in `Models/` directory
+- [ ] Record is immutable with init-only properties
+- [ ] XML documentation describes each property
+- [ ] Namespace is `StellaOps.Policy.Unknowns.Models`
+
+---
+
+### T2: Define ContainmentSignals Model
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create a new model for runtime containment posture signals.
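+
+As an illustration of how runtime probes might populate this model, consider the sketch below; `IRuntimeProbe` and its members are hypothetical stand-ins, not an existing API:
+
+```csharp
+// Hypothetical probe-side sketch; the interface is illustrative only.
+public static ContainmentSignals CollectContainment(IRuntimeProbe probe, string containerId) => new()
+{
+    Seccomp = probe.GetSeccompMode(containerId),        // "enforced" | "permissive" | "disabled"
+    FileSystem = probe.GetRootFsMode(containerId),      // "ro" | "rw"
+    NetworkPolicy = probe.GetNetworkPolicy(containerId) // "isolated" | "restricted" | "open"
+};
+```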
+
+**Implementation Path**: `Models/ContainmentSignals.cs` (new file)
+
+**Model Definition**:
+```csharp
+namespace StellaOps.Policy.Unknowns.Models;
+
+/// <summary>
+/// Represents runtime isolation and containment posture signals.
+/// Data sourced from runtime probes (Seccomp, eBPF, container config).
+/// </summary>
+public sealed record ContainmentSignals
+{
+    /// <summary>
+    /// Seccomp profile status: "enforced", "permissive", "disabled", null if unknown.
+    /// "enforced" = reduced risk (limits syscalls).
+    /// </summary>
+    public string? Seccomp { get; init; }
+
+    /// <summary>
+    /// Filesystem mount mode: "ro" (read-only), "rw" (read-write), null if unknown.
+    /// "ro" = reduced risk (limits persistence).
+    /// </summary>
+    public string? FileSystem { get; init; }
+
+    /// <summary>
+    /// Network policy status: "isolated", "restricted", "open", null if unknown.
+    /// "isolated" = reduced risk (no egress).
+    /// </summary>
+    public string? NetworkPolicy { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ContainmentSignals.cs` file created in `Models/` directory
+- [ ] Record is immutable with init-only properties
+- [ ] All properties nullable (unknown state allowed)
+- [ ] XML documentation describes each property
+
+---
+
+### T3: Extend UnknownRankInput
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Add blast radius and containment fields to `UnknownRankInput`.
+
+**Implementation Path**: `Services/UnknownRanker.cs`
+
+**Updated Record**:
+```csharp
+public sealed record UnknownRankInput(
+    // Existing fields
+    bool HasVexStatement,
+    bool HasReachabilityData,
+    bool HasConflictingSources,
+    bool IsStaleAdvisory,
+    bool IsInKev,
+    decimal EpssScore,
+    decimal CvssScore,
+    // From Sprint 4000.0001.0001 (Decay)
+    DateTimeOffset? FirstSeenAt,
+    DateTimeOffset? LastEvaluatedAt,
+    DateTimeOffset AsOfDateTime,
+    // NEW: BlastRadius & Containment
+    BlastRadius? BlastRadius,
+    ContainmentSignals? Containment);
+```
+
+**Acceptance Criteria**:
+- [ ] `BlastRadius` nullable field added
+- [ ] `Containment` nullable field added
+- [ ] Both fields default to null (backward compatible)
+- [ ] Existing tests still pass with null values
+
+---
+
+### T4: Implement ComputeContainmentReduction
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T3
+
+**Description**:
+Implement containment-based score reduction logic.
+
+**Implementation Path**: `Services/UnknownRanker.cs`
+
+**Reduction Formula**:
+```csharp
+/// <summary>
+/// Computes a reduction factor based on containment posture.
+/// Better containment = lower effective risk = score reduction.
+/// Maximum reduction capped at 40%.
+/// </summary>
+private decimal ComputeContainmentReduction(UnknownRankInput input)
+{
+    decimal reduction = 0m;
+
+    // BlastRadius reductions
+    if (input.BlastRadius is { } blast)
+    {
+        // Isolated package (no dependents) reduces risk
+        if (blast.Dependents == 0)
+            reduction += _options.IsolatedReduction; // default: 0.15
+
+        // Not network-facing reduces risk
+        if (!blast.NetFacing)
+            reduction += _options.NotNetFacingReduction; // default: 0.05
+
+        // Non-root privilege reduces risk
+        if (blast.Privilege is "user" or "none")
+            reduction += _options.NonRootReduction; // default: 0.05
+    }
+
+    // ContainmentSignals reductions
+    if (input.Containment is { } contain)
+    {
+        // Enforced Seccomp reduces risk
+        if (contain.Seccomp == "enforced")
+            reduction += _options.SeccompEnforcedReduction; // default: 0.10
+
+        // Read-only filesystem reduces risk
+        if (contain.FileSystem == "ro")
+            reduction += _options.FsReadOnlyReduction; // default: 0.10
+
+        // Network isolation reduces risk
+        if (contain.NetworkPolicy == "isolated")
+            reduction += _options.NetworkIsolatedReduction; // default: 0.05
+    }
+
+    // Cap at maximum reduction
+    return Math.Min(reduction, _options.MaxContainmentReduction); // default: 0.40
+}
+```
+
+**Score Application**:
+```csharp
+// In Rank() method, after decay:
+var containmentReduction = ComputeContainmentReduction(input);
+var finalScore = Math.Max(0m, decayedScore * (1m - containmentReduction));
+```
+
+**Acceptance Criteria**:
+- [ ] Method computes reduction from BlastRadius and ContainmentSignals
+- [ ] Null inputs contribute 0 reduction
+- [ ] Reduction capped at configurable maximum (default 40%)
+- [ ] All arithmetic uses `decimal`
+- [ ] Reduction applied as multiplier: `score * (1 - reduction)`
+
+---
+
+### T5: Extend UnknownRankerOptions
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T4
+
+**Description**:
+Add containment reduction weight configuration.
+
+**Implementation Path**: `Services/UnknownRanker.cs`
+
+**Updated Options**:
+```csharp
+public sealed class UnknownRankerOptions
+{
+    // Existing band thresholds
+    public decimal HotThreshold { get; set; } = 75m;
+    public decimal WarmThreshold { get; set; } = 50m;
+    public decimal ColdThreshold { get; set; } = 25m;
+
+    // Decay (from Sprint 4000.0001.0001)
+    public bool EnableDecay { get; set; } = true;
+    public IReadOnlyList<DecayBucket> DecayBuckets { get; set; } = DefaultDecayBuckets;
+
+    // NEW: Containment reduction weights
+    public bool EnableContainmentReduction { get; set; } = true;
+    public decimal IsolatedReduction { get; set; } = 0.15m;
+    public decimal NotNetFacingReduction { get; set; } = 0.05m;
+    public decimal NonRootReduction { get; set; } = 0.05m;
+    public decimal SeccompEnforcedReduction { get; set; } = 0.10m;
+    public decimal FsReadOnlyReduction { get; set; } = 0.10m;
+    public decimal NetworkIsolatedReduction { get; set; } = 0.05m;
+    public decimal MaxContainmentReduction { get; set; } = 0.40m;
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `EnableContainmentReduction` toggle added
+- [ ] Individual reduction weights configurable
+- [ ] `MaxContainmentReduction` cap configurable
+- [ ] Defaults match T4 implementation
+- [ ] DI configuration works
+
+---
+
+### T6: Add DB Migration
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Add columns to the `policy.unknowns` table for blast radius and containment data.
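+
+Reading these columns back into the T1/T2 models could follow a mapping like the sketch below; the `UnknownRow` row type and its property names are hypothetical:
+
+```csharp
+// Hypothetical row-to-model mapping; column-to-property naming is assumed.
+private static (BlastRadius? Blast, ContainmentSignals? Containment) MapSignals(UnknownRow row)
+{
+    BlastRadius? blast = row.BlastRadiusDependents is null
+        ? null
+        : new BlastRadius
+        {
+            Dependents = row.BlastRadiusDependents.Value,
+            NetFacing = row.BlastRadiusNetFacing ?? false,
+            Privilege = row.BlastRadiusPrivilege
+        };
+
+    ContainmentSignals? containment =
+        row.ContainmentSeccomp is null
+        && row.ContainmentFsMode is null
+        && row.ContainmentNetworkPolicy is null
+            ? null // all-unknown state maps to null = no reduction
+            : new ContainmentSignals
+            {
+                Seccomp = row.ContainmentSeccomp,
+                FileSystem = row.ContainmentFsMode,
+                NetworkPolicy = row.ContainmentNetworkPolicy
+            };
+
+    return (blast, containment);
+}
+```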
+
+**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy.Storage.Postgres/migrations/`
+
+**Migration SQL**:
+```sql
+-- Migration: Add blast radius and containment columns to policy.unknowns
+
+ALTER TABLE policy.unknowns
+ADD COLUMN IF NOT EXISTS blast_radius_dependents INT,
+ADD COLUMN IF NOT EXISTS blast_radius_net_facing BOOLEAN,
+ADD COLUMN IF NOT EXISTS blast_radius_privilege TEXT,
+ADD COLUMN IF NOT EXISTS containment_seccomp TEXT,
+ADD COLUMN IF NOT EXISTS containment_fs_mode TEXT,
+ADD COLUMN IF NOT EXISTS containment_network_policy TEXT;
+
+COMMENT ON COLUMN policy.unknowns.blast_radius_dependents IS 'Number of packages depending on this package';
+COMMENT ON COLUMN policy.unknowns.blast_radius_net_facing IS 'Whether reachable from network entrypoints';
+COMMENT ON COLUMN policy.unknowns.blast_radius_privilege IS 'Privilege level: root, user, none';
+COMMENT ON COLUMN policy.unknowns.containment_seccomp IS 'Seccomp status: enforced, permissive, disabled';
+COMMENT ON COLUMN policy.unknowns.containment_fs_mode IS 'Filesystem mode: ro, rw';
+COMMENT ON COLUMN policy.unknowns.containment_network_policy IS 'Network policy: isolated, restricted, open';
+```
+
+**Acceptance Criteria**:
+- [ ] Migration file created with sequential number
+- [ ] All 6 columns added with appropriate types
+- [ ] Column comments added for documentation
+- [ ] Migration is idempotent (IF NOT EXISTS)
+- [ ] RLS policies still apply
+
+---
+
+### T7: Add Containment Tests
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T4, T5
+
+**Description**:
+Add comprehensive tests for containment reduction logic.
+
+**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Unknowns.Tests/Services/UnknownRankerTests.cs`
+
+**Test Cases**:
+```csharp
+#region Containment Reduction Tests
+
+[Fact]
+public void ComputeContainmentReduction_NullInputs_ReturnsZero()
+{
+    var input = CreateInputWithContainment(blastRadius: null, containment: null);
+    var result = _ranker.Rank(input);
+    result.ContainmentReduction.Should().Be(0m);
+}
+
+[Fact]
+public void ComputeContainmentReduction_IsolatedPackage_Returns15Percent()
+{
+    var blast = new BlastRadius { Dependents = 0, NetFacing = true };
+    var input = CreateInputWithContainment(blastRadius: blast);
+
+    var result = _ranker.Rank(input);
+    result.ContainmentReduction.Should().Be(0.15m);
+}
+
+[Fact]
+public void ComputeContainmentReduction_AllContainmentFactors_CapsAt40Percent()
+{
+    var blast = new BlastRadius { Dependents = 0, NetFacing = false, Privilege = "none" };
+    var contain = new ContainmentSignals { Seccomp = "enforced", FileSystem = "ro", NetworkPolicy = "isolated" };
+    var input = CreateInputWithContainment(blastRadius: blast, containment: contain);
+
+    // Total would be: 0.15 + 0.05 + 0.05 + 0.10 + 0.10 + 0.05 = 0.50
+    // But capped at 0.40
+    var result = _ranker.Rank(input);
+    result.ContainmentReduction.Should().Be(0.40m);
+}
+
+[Fact]
+public void Rank_WithContainment_AppliesReductionToScore()
+{
+    // Arrange: Create input that would score 60 before containment
+    var blast = new BlastRadius { Dependents = 0 }; // 15% reduction
+    var input = CreateHighScoreInputWithContainment(blast);
+
+    // Act
+    var result = _ranker.Rank(input);
+
+    // Assert: Score reduced by 15%: 60 * 0.85 = 51
+    result.Score.Should().Be(51.00m);
+}
+
+[Fact]
+public void Rank_ContainmentDisabled_NoReduction()
+{
+    var options = new UnknownRankerOptions { EnableContainmentReduction = false };
+    var ranker = new UnknownRanker(Options.Create(options));
+    var blast = new BlastRadius { Dependents = 0 };
+    var input = CreateHighScoreInputWithContainment(blast);
+
+    var result = ranker.Rank(input);
+    result.ContainmentReduction.Should().Be(0m);
+}
+
+#endregion
+```
+
+**Acceptance Criteria**:
+- [ ] Test for null BlastRadius/Containment returns 0 reduction
+- [ ] Test for isolated package (Dependents=0)
+- [ ] Test for cap at 40% maximum
+- [ ] Test verifies reduction applied to final score
+- [ ] Test for `EnableContainmentReduction = false`
+- [ ] All 5+ new tests pass
+
+---
+
+### T8: Document Signal Sources
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Update AGENTS.md with signal provenance for blast radius and containment.
+
+**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy.Unknowns/AGENTS.md`
+
+**Documentation to Add**:
+```markdown
+## Signal Sources
+
+### BlastRadius
+- **Source**: Scanner/Signals module call graph analysis
+- **Dependents**: Count of packages in dependency tree
+- **NetFacing**: Reachability from network entrypoints (ASP.NET controllers, gRPC, etc.)
+- **Privilege**: Extracted from container config or runtime probes
+
+### ContainmentSignals
+- **Source**: Runtime probes (eBPF, Seccomp profiles, container inspection)
+- **Seccomp**: Seccomp profile enforcement status
+- **FileSystem**: Mount mode from container spec or /proc/mounts
+- **NetworkPolicy**: Kubernetes NetworkPolicy or firewall rules
+
+### Data Flow
+1. Scanner generates BlastRadius during SBOM analysis
+2. Runtime probes collect ContainmentSignals
+3. Signals stored in `policy.unknowns` table
+4. UnknownRanker reads signals for scoring
+```
+
+**Acceptance Criteria**:
+- [ ] AGENTS.md updated with Signal Sources section
+- [ ] BlastRadius provenance documented
+- [ ] ContainmentSignals provenance documented
+- [ ] Data flow explained
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Define BlastRadius model |
+| 2 | T2 | TODO | — | Policy Team | Define ContainmentSignals model |
+| 3 | T3 | TODO | T1, T2 | Policy Team | Extend UnknownRankInput |
+| 4 | T4 | TODO | T3 | Policy Team | Implement ComputeContainmentReduction |
+| 5 | T5 | TODO | T4 | Policy Team | Extend UnknownRankerOptions |
+| 6 | T6 | TODO | T1, T2 | Policy Team | Add DB migration |
+| 7 | T7 | TODO | T4, T5 | Policy Team | Add containment tests |
+| 8 | T8 | TODO | T1, T2 | Policy Team | Document signal sources |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from MOAT gap analysis. BlastRadius/ContainmentSignals identified as a gap in the Triage & Unknowns advisory. | Claude |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Reduction vs multiplier | Decision | Policy Team | Using reduction (score × (1-reduction)) allows additive containment factors |
+| Maximum cap at 40% | Decision | Policy Team | Prevents well-contained packages from dropping to 0; preserves signal |
+| Nullable signals | Decision | Policy Team | Allow null for unknown containment state; null = no reduction |
+| JSONB vs columns | Decision | Policy Team | Using columns for queryability and indexing |
+
+---
+
+## Success Criteria
+
+- [ ] All 8 tasks marked DONE
+- [ ] 5+ containment-related tests passing
+- [ ] Existing tests still passing (including decay tests from Sprint 1)
+- [ ] Migration applies cleanly
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_4000_0002_0001_epss_feed_connector.md b/docs/implplan/SPRINT_4000_0002_0001_epss_feed_connector.md
new file mode 100644
index 000000000..9b262afde
--- /dev/null
+++ b/docs/implplan/SPRINT_4000_0002_0001_epss_feed_connector.md
@@ -0,0 +1,866 @@
+# Sprint 4000.0002.0001 · EPSS Feed Connector
+
+## Topic & Scope
+
+- Create a Concelier connector for EPSS (Exploit Prediction Scoring System) feed ingestion
+- Follows the three-stage connector pattern: Fetch → Parse → Map
+- Leverages the existing `EpssCsvStreamParser` from the Scanner module for CSV parsing
+- Integrates with the orchestrator for scheduled, rate-limited, airgap-capable ingestion
+
+**Working directory:** `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Epss/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: None (first sprint in batch 0002)
+- **Downstream**: None
+- **Safe to parallelize with**: Sprint 4000.0001.0001 (Decay), Sprint 4000.0001.0002 (Containment)
+
+## Documentation Prerequisites
+
+- `src/Concelier/__Libraries/StellaOps.Concelier.Core/Orchestration/ConnectorMetadata.cs`
+- `src/Scanner/__Libraries/StellaOps.Scanner.Storage/Epss/EpssCsvStreamParser.cs` (reuse pattern)
+- Existing connector examples: `StellaOps.Concelier.Connector.CertFr`, `StellaOps.Concelier.Connector.Osv`
+
+---
+
+## Tasks
+
+### T1: Create Project Structure
+
+**Assignee**: Concelier Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create a new connector project following established Concelier patterns.
+
+**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Epss/`
+
+**Project Structure**:
+```
+StellaOps.Concelier.Connector.Epss/
+├── StellaOps.Concelier.Connector.Epss.csproj
+├── EpssConnectorPlugin.cs
+├── EpssDependencyInjectionRoutine.cs
+├── EpssServiceCollectionExtensions.cs
+├── Jobs.cs
+├── Configuration/
+│   └── EpssOptions.cs
+└── Internal/
+    ├── EpssConnector.cs
+    ├── EpssCursor.cs
+    ├── EpssMapper.cs
+    └── EpssDiagnostics.cs
+```
+
+**csproj Definition** (the XML tags below are reconstructed from the listed values; reference paths follow the acceptance criteria):
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <TargetFramework>net10.0</TargetFramework>
+    <RootNamespace>StellaOps.Concelier.Connector.Epss</RootNamespace>
+    <AssemblyName>StellaOps.Concelier.Connector.Epss</AssemblyName>
+    <Nullable>enable</Nullable>
+    <ImplicitUsings>enable</ImplicitUsings>
+    <LangVersion>preview</LangVersion>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <ProjectReference Include="../StellaOps.Concelier.Core/StellaOps.Concelier.Core.csproj" />
+    <ProjectReference Include="../../../Scanner/__Libraries/StellaOps.Scanner.Storage/StellaOps.Scanner.Storage.csproj" />
+  </ItemGroup>
+
+</Project>
+```
+
+**Acceptance Criteria**:
+- [ ] Project created with correct structure
+- [ ] References to Concelier.Core and Scanner.Storage added
+- [ ] Compiles successfully
+- [ ] Follows naming conventions
+
+---
+
+### T2: Implement EpssConnectorPlugin
+
+**Assignee**: Concelier Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Implement the plugin entry point for connector registration.
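+
+The project layout in T1 names an `EpssServiceCollectionExtensions` that this plan never spells out. A minimal sketch of what it might register, as context for the plugin below (the section binding and typed-HttpClient setup are assumptions based on `EpssOptions`):
+
+```csharp
+using System;
+using Microsoft.Extensions.Configuration;
+using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Options;
+using StellaOps.Concelier.Connector.Epss.Configuration;
+using StellaOps.Concelier.Connector.Epss.Internal;
+
+namespace StellaOps.Concelier.Connector.Epss;
+
+public static class EpssServiceCollectionExtensions
+{
+    public static IServiceCollection AddEpssConnector(
+        this IServiceCollection services,
+        IConfiguration configuration)
+    {
+        // Bind options from the "Concelier:Epss" section defined on EpssOptions.
+        services.Configure<EpssOptions>(configuration.GetSection(EpssOptions.SectionName));
+
+        // Typed HttpClient so the connector gets BaseUrl and timeout applied centrally.
+        services.AddHttpClient<EpssConnector>((sp, client) =>
+        {
+            var options = sp.GetRequiredService<IOptions<EpssOptions>>().Value;
+            client.BaseAddress = new Uri(options.BaseUrl);
+            client.Timeout = TimeSpan.FromSeconds(options.TimeoutSeconds);
+        });
+
+        return services;
+    }
+}
+```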
+ +**Implementation Path**: `EpssConnectorPlugin.cs` + +**Plugin Definition**: +```csharp +using Microsoft.Extensions.DependencyInjection; +using StellaOps.Concelier.Connector.Epss.Internal; +using StellaOps.Plugin; + +namespace StellaOps.Concelier.Connector.Epss; + +/// +/// Plugin entry point for EPSS feed connector. +/// Provides EPSS probability scores for CVE exploitation. +/// +public sealed class EpssConnectorPlugin : IConnectorPlugin +{ + public const string SourceName = "epss"; + + public string Name => SourceName; + + public bool IsAvailable(IServiceProvider services) + => services.GetService() is not null; + + public IFeedConnector Create(IServiceProvider services) + { + ArgumentNullException.ThrowIfNull(services); + return services.GetRequiredService(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Implements `IConnectorPlugin` +- [ ] Source name is `"epss"` +- [ ] Factory method resolves connector from DI +- [ ] Availability check works correctly + +--- + +### T3: Implement EpssOptions + +**Assignee**: Concelier Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create configuration options for EPSS connector. + +**Implementation Path**: `Configuration/EpssOptions.cs` + +**Options Definition**: +```csharp +namespace StellaOps.Concelier.Connector.Epss.Configuration; + +/// +/// Configuration options for EPSS feed connector. +/// +public sealed class EpssOptions +{ + /// + /// Configuration section name. + /// + public const string SectionName = "Concelier:Epss"; + + /// + /// Base URL for EPSS API/feed. + /// Default: https://epss.empiricalsecurity.com/ + /// + public string BaseUrl { get; set; } = "https://epss.empiricalsecurity.com/"; + + /// + /// Whether to fetch the current day's snapshot or historical. + /// Default: true (fetch current). + /// + public bool FetchCurrent { get; set; } = true; + + /// + /// Number of days to look back for initial catch-up. + /// Default: 7 days. + /// + public int CatchUpDays { get; set; } = 7; + + /// + /// Request timeout in seconds. + /// Default: 120 (2 minutes for large CSV files). + /// + public int TimeoutSeconds { get; set; } = 120; + + /// + /// Maximum retries on transient failure. + /// Default: 3. + /// + public int MaxRetries { get; set; } = 3; + + /// + /// Whether to enable offline/airgap mode using bundled data. + /// Default: false. + /// + public bool AirgapMode { get; set; } = false; + + /// + /// Path to offline bundle directory (when AirgapMode=true). + /// + public string? BundlePath { get; set; } +} +``` + +**Acceptance Criteria**: +- [ ] All configuration options documented +- [ ] Sensible defaults provided +- [ ] Airgap mode flag present +- [ ] Timeout and retry settings included + +--- + +### T4: Implement EpssCursor + +**Assignee**: Concelier Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create cursor model for resumable state tracking. + +**Implementation Path**: `Internal/EpssCursor.cs` + +**Cursor Definition**: +```csharp +namespace StellaOps.Concelier.Connector.Epss.Internal; + +/// +/// Resumable cursor state for EPSS connector. +/// Tracks model version and last processed date for incremental sync. +/// +public sealed record EpssCursor +{ + /// + /// EPSS model version tag (e.g., "v2024.12.21"). + /// + public string? ModelVersion { get; init; } + + /// + /// Date of the last successfully processed snapshot. + /// + public DateOnly? 
LastProcessedDate { get; init; } + + /// + /// HTTP ETag of last fetched resource (for conditional requests). + /// + public string? ETag { get; init; } + + /// + /// SHA-256 hash of the last processed CSV content. + /// + public string? ContentHash { get; init; } + + /// + /// Number of CVE scores in the last snapshot. + /// + public int? LastRowCount { get; init; } + + /// + /// Timestamp when cursor was last updated. + /// + public DateTimeOffset UpdatedAt { get; init; } + + /// + /// Creates initial empty cursor. + /// + public static EpssCursor Empty => new() { UpdatedAt = DateTimeOffset.MinValue }; +} +``` + +**Acceptance Criteria**: +- [ ] Record is immutable +- [ ] Tracks model version for EPSS updates +- [ ] Tracks content hash for change detection +- [ ] Includes ETag for conditional HTTP requests +- [ ] Has static `Empty` factory + +--- + +### T5: Implement EpssConnector.FetchAsync + +**Assignee**: Concelier Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T3, T4 + +**Description**: +Implement HTTP fetch stage with ETag/gzip support. + +**Implementation Path**: `Internal/EpssConnector.cs` + +**Fetch Implementation**: +```csharp +using System.Net.Http; +using Microsoft.Extensions.Logging; +using Microsoft.Extensions.Options; +using StellaOps.Concelier.Connector.Epss.Configuration; +using StellaOps.Concelier.Core.Feeds; + +namespace StellaOps.Concelier.Connector.Epss.Internal; + +/// +/// EPSS feed connector implementing three-stage Fetch/Parse/Map pattern. +/// +public sealed partial class EpssConnector : IFeedConnector +{ + private readonly HttpClient _httpClient; + private readonly EpssOptions _options; + private readonly ILogger _logger; + + public EpssConnector( + HttpClient httpClient, + IOptions options, + ILogger logger) + { + _httpClient = httpClient ?? throw new ArgumentNullException(nameof(httpClient)); + _options = options?.Value ?? throw new ArgumentNullException(nameof(options)); + _logger = logger ?? throw new ArgumentNullException(nameof(logger)); + } + + /// + /// Fetches EPSS CSV snapshot from remote or bundle source. 
+ /// + public async Task FetchAsync( + EpssCursor cursor, + CancellationToken cancellationToken) + { + var targetDate = DateOnly.FromDateTime(DateTime.UtcNow); + var fileName = $"epss_scores-{targetDate:yyyy-MM-dd}.csv.gz"; + + if (_options.AirgapMode && !string.IsNullOrEmpty(_options.BundlePath)) + { + return FetchFromBundle(fileName); + } + + var uri = new Uri(new Uri(_options.BaseUrl), fileName); + + using var request = new HttpRequestMessage(HttpMethod.Get, uri); + + // Conditional fetch if we have ETag + if (!string.IsNullOrEmpty(cursor.ETag)) + { + request.Headers.IfNoneMatch.ParseAdd(cursor.ETag); + } + + using var response = await _httpClient.SendAsync( + request, + HttpCompletionOption.ResponseHeadersRead, + cancellationToken).ConfigureAwait(false); + + if (response.StatusCode == System.Net.HttpStatusCode.NotModified) + { + _logger.LogInformation("EPSS snapshot unchanged (304 Not Modified)"); + return FetchResult.NotModified(cursor); + } + + response.EnsureSuccessStatusCode(); + + var stream = await response.Content.ReadAsStreamAsync(cancellationToken).ConfigureAwait(false); + var etag = response.Headers.ETag?.Tag; + + return FetchResult.Success(stream, targetDate, etag); + } + + private FetchResult FetchFromBundle(string fileName) + { + var bundlePath = Path.Combine(_options.BundlePath!, fileName); + if (!File.Exists(bundlePath)) + { + _logger.LogWarning("EPSS bundle file not found: {Path}", bundlePath); + return FetchResult.NotFound(bundlePath); + } + + var stream = File.OpenRead(bundlePath); + return FetchResult.Success(stream, DateOnly.FromDateTime(DateTime.UtcNow), etag: null); + } +} +``` + +**Acceptance Criteria**: +- [ ] HTTP GET with gzip streaming +- [ ] Conditional requests using ETag (If-None-Match) +- [ ] Handles 304 Not Modified response +- [ ] Airgap mode falls back to bundle +- [ ] Proper error handling and logging + +--- + +### T6: Implement EpssConnector.ParseAsync + +**Assignee**: Concelier Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T5 + +**Description**: +Implement CSV parsing stage reusing Scanner's `EpssCsvStreamParser`. + +**Implementation Path**: `Internal/EpssConnector.cs` (continued) + +**Parse Implementation**: +```csharp +using StellaOps.Scanner.Storage.Epss; + +public sealed partial class EpssConnector +{ + private readonly EpssCsvStreamParser _parser = new(); + + /// + /// Parses gzip CSV stream into EPSS score rows. + /// Reuses Scanner's EpssCsvStreamParser for deterministic parsing. + /// + public async IAsyncEnumerable ParseAsync( + Stream gzipStream, + [EnumeratorCancellation] CancellationToken cancellationToken) + { + ArgumentNullException.ThrowIfNull(gzipStream); + + await using var session = _parser.ParseGzip(gzipStream); + + await foreach (var row in session.WithCancellation(cancellationToken)) + { + yield return row; + } + + // Log session metadata + _logger.LogInformation( + "Parsed EPSS snapshot: ModelVersion={ModelVersion}, Date={Date}, Rows={Rows}, Hash={Hash}", + session.ModelVersionTag, + session.PublishedDate, + session.RowCount, + session.DecompressedSha256); + } + + /// + /// Gets parse session metadata after enumeration. + /// + public EpssCursor CreateCursorFromSession( + EpssCsvStreamParser.EpssCsvParseSession session, + string? 
etag) + { + return new EpssCursor + { + ModelVersion = session.ModelVersionTag, + LastProcessedDate = session.PublishedDate, + ETag = etag, + ContentHash = session.DecompressedSha256, + LastRowCount = session.RowCount, + UpdatedAt = DateTimeOffset.UtcNow + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] Reuses `EpssCsvStreamParser` from Scanner module +- [ ] Async enumerable streaming (no full materialization) +- [ ] Captures session metadata (model version, date, hash) +- [ ] Creates cursor from parse session +- [ ] Proper cancellation support + +--- + +### T7: Implement EpssConnector.MapAsync + +**Assignee**: Concelier Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T6 + +**Description**: +Map parsed EPSS rows to canonical observation records. + +**Implementation Path**: `Internal/EpssMapper.cs` + +**Mapper Definition**: +```csharp +using StellaOps.Concelier.Core.Observations; +using StellaOps.Scanner.Storage.Epss; + +namespace StellaOps.Concelier.Connector.Epss.Internal; + +/// +/// Maps EPSS score rows to canonical observation records. +/// +public static class EpssMapper +{ + /// + /// Maps a single EPSS score row to an observation. + /// + public static EpssObservation ToObservation( + EpssScoreRow row, + string modelVersion, + DateOnly publishedDate) + { + ArgumentNullException.ThrowIfNull(row); + + return new EpssObservation + { + CveId = row.CveId, + Score = (decimal)row.EpssScore, + Percentile = (decimal)row.Percentile, + ModelVersion = modelVersion, + PublishedDate = publishedDate, + Band = DetermineBand((decimal)row.EpssScore) + }; + } + + /// + /// Determines priority band based on EPSS score. + /// + private static EpssBand DetermineBand(decimal score) => score switch + { + >= 0.70m => EpssBand.Critical, // Top 30%: Critical priority + >= 0.40m => EpssBand.High, // 40-70%: High priority + >= 0.10m => EpssBand.Medium, // 10-40%: Medium priority + _ => EpssBand.Low // <10%: Low priority + }; +} + +/// +/// EPSS observation record. +/// +public sealed record EpssObservation +{ + public required string CveId { get; init; } + public required decimal Score { get; init; } + public required decimal Percentile { get; init; } + public required string ModelVersion { get; init; } + public required DateOnly PublishedDate { get; init; } + public required EpssBand Band { get; init; } +} + +/// +/// EPSS priority bands. +/// +public enum EpssBand +{ + Low = 0, + Medium = 1, + High = 2, + Critical = 3 +} +``` + +**Acceptance Criteria**: +- [ ] Maps `EpssScoreRow` to `EpssObservation` +- [ ] Score values converted to `decimal` for consistency +- [ ] Priority bands assigned based on score thresholds +- [ ] Model version and date preserved +- [ ] Immutable record output + +--- + +### T8: Register with WellKnownConnectors + +**Assignee**: Concelier Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2 + +**Description**: +Add EPSS to the well-known connectors registry. + +**Implementation Path**: `src/Concelier/__Libraries/StellaOps.Concelier.Core/Orchestration/ConnectorRegistrationService.cs` + +**Updated WellKnownConnectors**: +```csharp +/// +/// EPSS (Exploit Prediction Scoring System) connector metadata. 
+/// </summary>
+public static ConnectorMetadata Epss => new()
+{
+    ConnectorId = "epss",
+    Source = "epss",
+    DisplayName = "EPSS",
+    Description = "FIRST.org Exploit Prediction Scoring System",
+    Capabilities = ["observations"],
+    ArtifactKinds = ["raw-scores", "normalized"],
+    DefaultCron = "0 10 * * *", // Daily at 10:00 UTC (after EPSS publishes ~08:00 UTC)
+    DefaultRpm = 100, // Generous ceiling; the EPSS feed itself imposes no rate limit
+    MaxLagMinutes = 1440, // 24 hours (daily feed)
+    EgressAllowlist = ["epss.empiricalsecurity.com"]
+};
+
+/// <summary>
+/// Gets metadata for all well-known connectors.
+/// </summary>
+public static IReadOnlyList<ConnectorMetadata> All => [Nvd, Ghsa, Osv, Kev, IcsCisa, Epss];
+```
+
+**Acceptance Criteria**:
+- [ ] `Epss` static property added to `WellKnownConnectors`
+- [ ] ConnectorId is `"epss"`
+- [ ] Default cron set to daily 10:00 UTC
+- [ ] Egress allowlist includes `epss.empiricalsecurity.com`
+- [ ] Added to `All` collection
+
+---
+
+### T9: Add Connector Tests
+
+**Assignee**: Concelier Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T5, T6, T7
+
+**Description**:
+Add integration tests for the EPSS connector using mocked HTTP.
+
+**Implementation Path**: `src/Concelier/__Tests/StellaOps.Concelier.Connector.Epss.Tests/`
+
+**Test Cases**:
+```csharp
+using System.Net;
+using FluentAssertions;
+using Microsoft.Extensions.Options;
+using StellaOps.Concelier.Connector.Epss.Configuration;
+using StellaOps.Concelier.Connector.Epss.Internal;
+using Xunit;
+
+namespace StellaOps.Concelier.Connector.Epss.Tests;
+
+public class EpssConnectorTests
+{
+    private static readonly string SampleCsvGz = GetEmbeddedResource("sample_epss.csv.gz");
+
+    [Fact]
+    public async Task FetchAsync_ReturnsStream_OnSuccess()
+    {
+        // Arrange
+        var handler = new MockHttpMessageHandler(SampleCsvGz, HttpStatusCode.OK);
+        var httpClient = new HttpClient(handler);
+        var connector = CreateConnector(httpClient);
+        var cursor = EpssCursor.Empty;
+
+        // Act
+        var result = await connector.FetchAsync(cursor, CancellationToken.None);
+
+        // Assert
+        result.IsSuccess.Should().BeTrue();
+        result.Stream.Should().NotBeNull();
+    }
+
+    [Fact]
+    public async Task FetchAsync_ReturnsNotModified_OnETagMatch()
+    {
+        // Arrange
+        var handler = new MockHttpMessageHandler(status: HttpStatusCode.NotModified);
+        var httpClient = new HttpClient(handler);
+        var connector = CreateConnector(httpClient);
+        var cursor = new EpssCursor { ETag = "\"abc123\"" };
+
+        // Act
+        var result = await connector.FetchAsync(cursor, CancellationToken.None);
+
+        // Assert
+        result.IsNotModified.Should().BeTrue();
+    }
+
+    [Fact]
+    public async Task ParseAsync_YieldsAllRows()
+    {
+        // Arrange
+        await using var stream = GetSampleGzipStream();
+        var connector = CreateConnector();
+
+        // Act
+        var rows = await connector.ParseAsync(stream, CancellationToken.None).ToListAsync();
+
+        // Assert
+        rows.Should().HaveCountGreaterThan(0);
+        rows.Should().AllSatisfy(r =>
+        {
+            r.CveId.Should().StartWith("CVE-");
+            r.EpssScore.Should().BeInRange(0.0, 1.0);
+            r.Percentile.Should().BeInRange(0.0, 1.0);
+        });
+    }
+
+    [Theory]
+    [InlineData(0.75, EpssBand.Critical)]
+    [InlineData(0.50, EpssBand.High)]
+    [InlineData(0.20, EpssBand.Medium)]
+    [InlineData(0.05, EpssBand.Low)]
+    public void ToObservation_AssignsCorrectBand(double score, EpssBand expectedBand)
+    {
+        // Arrange
+        var row = new EpssScoreRow("CVE-2024-12345", score, 0.5);
+
+        // Act
+        var observation = EpssMapper.ToObservation(row, "v2024.12.21", DateOnly.FromDateTime(DateTime.UtcNow));
+
+        // Assert
+        observation.Band.Should().Be(expectedBand);
+    }
+
+    [Fact]
+    public void 
EpssCursor_Empty_HasMinValue() + { + // Act + var cursor = EpssCursor.Empty; + + // Assert + cursor.UpdatedAt.Should().Be(DateTimeOffset.MinValue); + cursor.ModelVersion.Should().BeNull(); + cursor.ContentHash.Should().BeNull(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for successful fetch with mock HTTP +- [ ] Test for 304 Not Modified handling +- [ ] Test for parse yielding all rows +- [ ] Test for band assignment logic +- [ ] Test for cursor creation +- [ ] All 5+ tests pass + +--- + +### T10: Add Airgap Bundle Support + +**Assignee**: Concelier Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T5 + +**Description**: +Implement offline bundle fallback for airgap deployments. + +**Implementation Path**: `Internal/EpssConnector.cs` (update FetchAsync) + +**Bundle Convention**: +``` +/var/stellaops/bundles/epss/ +├── epss_scores-2024-12-21.csv.gz +├── epss_scores-2024-12-20.csv.gz +└── manifest.json +``` + +**Manifest Schema**: +```json +{ + "source": "epss", + "created": "2024-12-21T10:00:00Z", + "files": [ + { + "name": "epss_scores-2024-12-21.csv.gz", + "modelVersion": "v2024.12.21", + "sha256": "sha256:abc123...", + "rowCount": 245000 + } + ] +} +``` + +**Acceptance Criteria**: +- [ ] Bundle path configurable via `EpssOptions.BundlePath` +- [ ] Falls back to bundle when `AirgapMode = true` +- [ ] Reads files from bundle directory +- [ ] Logs warning if bundle file missing +- [ ] Manifest.json validation optional but recommended + +--- + +### T11: Update Documentation + +**Assignee**: Concelier Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T8 + +**Description**: +Add EPSS connector to documentation and create AGENTS.md. + +**Implementation Path**: +- `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Epss/AGENTS.md` (new) +- `docs/modules/concelier/connectors.md` (update) + +**AGENTS.md Content**: +```markdown +# AGENTS.md - EPSS Connector + +## Purpose +Ingests EPSS (Exploit Prediction Scoring System) scores from FIRST.org. +Provides exploitation probability estimates for CVE prioritization. + +## Data Source +- **URL**: https://epss.empiricalsecurity.com/ +- **Format**: CSV.gz (gzip-compressed CSV) +- **Update Frequency**: Daily (~08:00 UTC) +- **Coverage**: All CVEs with exploitation telemetry + +## Data Flow +1. Connector fetches daily snapshot (epss_scores-YYYY-MM-DD.csv.gz) +2. Parses using EpssCsvStreamParser (reused from Scanner) +3. Maps to EpssObservation records with band classification +4. Stores in concelier.epss_observations table +5. 
Publishes EpssUpdatedEvent for downstream consumers + +## Configuration +```yaml +Concelier: + Epss: + BaseUrl: "https://epss.empiricalsecurity.com/" + AirgapMode: false + BundlePath: "/var/stellaops/bundles/epss" +``` + +## Orchestrator Registration +- ConnectorId: `epss` +- Default Schedule: Daily 10:00 UTC +- Egress Allowlist: `epss.empiricalsecurity.com` +``` + +**Acceptance Criteria**: +- [ ] AGENTS.md created in connector directory +- [ ] Connector added to docs/modules/concelier/connectors.md +- [ ] Data flow documented +- [ ] Configuration examples provided + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Concelier Team | Create project structure | +| 2 | T2 | TODO | T1 | Concelier Team | Implement EpssConnectorPlugin | +| 3 | T3 | TODO | T1 | Concelier Team | Implement EpssOptions | +| 4 | T4 | TODO | T1 | Concelier Team | Implement EpssCursor | +| 5 | T5 | TODO | T3, T4 | Concelier Team | Implement FetchAsync | +| 6 | T6 | TODO | T5 | Concelier Team | Implement ParseAsync | +| 7 | T7 | TODO | T6 | Concelier Team | Implement MapAsync | +| 8 | T8 | TODO | T2 | Concelier Team | Register with WellKnownConnectors | +| 9 | T9 | TODO | T5, T6, T7 | Concelier Team | Add connector tests | +| 10 | T10 | TODO | T5 | Concelier Team | Add airgap bundle support | +| 11 | T11 | TODO | T8 | Concelier Team | Update documentation | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT gap analysis. EPSS connector identified as gap in orchestrated feed ingestion. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Reuse EpssCsvStreamParser | Decision | Concelier Team | Avoids duplication; Scanner parser already tested and optimized | +| Separate project vs Scanner extension | Decision | Concelier Team | New Concelier connector aligns with orchestrator pattern | +| Daily vs hourly schedule | Decision | Concelier Team | EPSS publishes daily; no benefit to more frequent polling | +| Band thresholds | Decision | Concelier Team | 0.70/0.40/0.10 aligned with EPSS community recommendations | + +--- + +## Success Criteria + +- [ ] All 11 tasks marked DONE +- [ ] 5+ connector tests passing +- [ ] `dotnet build` succeeds for connector project +- [ ] Connector registered in WellKnownConnectors +- [ ] Airgap bundle fallback works +- [ ] AGENTS.md created + diff --git a/docs/implplan/SPRINT_4100_0001_0001_reason_coded_unknowns.md b/docs/implplan/SPRINT_4100_0001_0001_reason_coded_unknowns.md new file mode 100644 index 000000000..05113a27f --- /dev/null +++ b/docs/implplan/SPRINT_4100_0001_0001_reason_coded_unknowns.md @@ -0,0 +1,489 @@ +# Sprint 4100.0001.0001 · Reason-Coded Unknowns + +## Topic & Scope + +- Define structured reason codes for why a component is marked "unknown" +- Add remediation hints that map to each reason code +- Enable actionable triage by categorizing uncertainty sources + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy.Unknowns/` + +## Dependencies & Concurrency + +- **Upstream**: None (first sprint in batch) +- **Downstream**: Sprint 4100.0001.0002 (Unknown Budgets), Sprint 4100.0001.0003 (Unknowns in Attestations) +- **Safe to parallelize with**: Sprint 4100.0002.0001, Sprint 4100.0003.0001, Sprint 4100.0004.0002 + +## Documentation Prerequisites + +- 
`src/Policy/__Libraries/StellaOps.Policy.Unknowns/AGENTS.md`
+- `docs/product-advisories/19-Dec-2025 - Moat #5.md` (Unknowns as First-Class Risk)
+- `docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Triage and Unknowns Technical Reference.md`
+
+---
+
+## Tasks
+
+### T1: Define UnknownReasonCode Enum
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create an enumeration defining the canonical reason codes for unknowns.
+
+**Implementation Path**: `Models/UnknownReasonCode.cs` (new file)
+
+**Model Definition**:
+```csharp
+namespace StellaOps.Policy.Unknowns.Models;
+
+/// <summary>
+/// Canonical reason codes explaining why a component is marked as "unknown".
+/// Each code maps to a specific remediation action.
+/// </summary>
+public enum UnknownReasonCode
+{
+    /// <summary>
+    /// U-RCH: Call path analysis is indeterminate.
+    /// The reachability analyzer cannot confirm or deny exploitability.
+    /// </summary>
+    Reachability,
+
+    /// <summary>
+    /// U-ID: Ambiguous package identity or missing digest.
+    /// Cannot uniquely identify the component (e.g., missing PURL, no checksum).
+    /// </summary>
+    Identity,
+
+    /// <summary>
+    /// U-PROV: Cannot map binary artifact to source repository.
+    /// Provenance chain is broken or unavailable.
+    /// </summary>
+    Provenance,
+
+    /// <summary>
+    /// U-VEX: VEX statements conflict or missing applicability data.
+    /// Multiple VEX sources disagree or no VEX coverage exists.
+    /// </summary>
+    VexConflict,
+
+    /// <summary>
+    /// U-FEED: Required knowledge source is missing or stale.
+    /// Advisory feed gap (e.g., no NVD/OSV data for this package).
+    /// </summary>
+    FeedGap,
+
+    /// <summary>
+    /// U-CONFIG: Feature flag or configuration not observable.
+    /// Cannot determine if vulnerable code path is enabled at runtime.
+    /// </summary>
+    ConfigUnknown,
+
+    /// <summary>
+    /// U-ANALYZER: Language or framework not supported by analyzer.
+    /// Static analysis tools do not cover this ecosystem.
+    /// </summary>
+    AnalyzerLimit
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `UnknownReasonCode.cs` file created in `Models/` directory
+- [ ] 7 reason codes defined with XML documentation
+- [ ] Each code has a short prefix (U-RCH, U-ID, etc.) documented
+- [ ] Namespace is `StellaOps.Policy.Unknowns.Models`
+
+---
+
+### T2: Extend Unknown Model
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Add reason code, remediation hint, evidence references, and assumptions to the Unknown model.
+
+**Implementation Path**: `Models/Unknown.cs`
+
+**Updated Model**:
+```csharp
+public sealed record Unknown
+{
+    // Existing fields
+    public Guid Id { get; init; }
+    public string PackageUrl { get; init; }
+    public string? CveId { get; init; }
+    public decimal Score { get; init; }
+    public UnknownBand Band { get; init; }
+
+    // NEW: Reason code explaining why this is unknown
+    public UnknownReasonCode ReasonCode { get; init; }
+
+    // NEW: Human-readable remediation guidance
+    public string? RemediationHint { get; init; }
+
+    // NEW: References to evidence that led to unknown classification
+    public IReadOnlyList<EvidenceRef> EvidenceRefs { get; init; } = [];
+
+    // NEW: Assumptions made during analysis (for audit trail)
+    public IReadOnlyList<string> Assumptions { get; init; } = [];
+}
+
+/// <summary>
+/// Reference to evidence supporting unknown classification.
+/// </summary>
+public sealed record EvidenceRef(
+    string Type,   // "reachability", "vex", "sbom", "feed"
+    string Uri,    // Location of evidence
+    string? 
Digest); // Content hash if applicable +``` + +**Acceptance Criteria**: +- [ ] `ReasonCode` field added to `Unknown` record +- [ ] `RemediationHint` nullable string field added +- [ ] `EvidenceRefs` collection added with `EvidenceRef` record +- [ ] `Assumptions` string collection added +- [ ] All new fields have XML documentation +- [ ] Existing tests still pass with default values + +--- + +### T3: Create RemediationHintsRegistry + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create a registry that maps reason codes to actionable remediation hints. + +**Implementation Path**: `Services/RemediationHintsRegistry.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Unknowns.Services; + +/// +/// Registry of remediation hints for each unknown reason code. +/// Provides actionable guidance for resolving unknowns. +/// +public sealed class RemediationHintsRegistry : IRemediationHintsRegistry +{ + private static readonly IReadOnlyDictionary _hints = + new Dictionary + { + [UnknownReasonCode.Reachability] = new( + ShortHint: "Run reachability analysis", + DetailedHint: "Execute call-graph analysis to determine if vulnerable code paths are reachable from application entrypoints.", + AutomationRef: "stella analyze --reachability"), + + [UnknownReasonCode.Identity] = new( + ShortHint: "Add package digest", + DetailedHint: "Ensure SBOM includes package checksums (SHA-256) and valid PURL coordinates.", + AutomationRef: "stella sbom --include-digests"), + + [UnknownReasonCode.Provenance] = new( + ShortHint: "Add provenance attestation", + DetailedHint: "Generate SLSA provenance linking binary artifact to source repository and build.", + AutomationRef: "stella attest --provenance"), + + [UnknownReasonCode.VexConflict] = new( + ShortHint: "Publish authoritative VEX", + DetailedHint: "Create or update VEX document with applicability assessment for your deployment context.", + AutomationRef: "stella vex create"), + + [UnknownReasonCode.FeedGap] = new( + ShortHint: "Add advisory source", + DetailedHint: "Configure additional advisory feeds (OSV, vendor-specific) or request coverage from upstream.", + AutomationRef: "stella feed add"), + + [UnknownReasonCode.ConfigUnknown] = new( + ShortHint: "Document feature flags", + DetailedHint: "Export runtime configuration showing which features are enabled/disabled in this deployment.", + AutomationRef: "stella config export"), + + [UnknownReasonCode.AnalyzerLimit] = new( + ShortHint: "Request analyzer support", + DetailedHint: "This language/framework is not yet supported. File an issue or use manual assessment.", + AutomationRef: null) + }; + + public RemediationHint GetHint(UnknownReasonCode code) => + _hints.TryGetValue(code, out var hint) ? hint : RemediationHint.Empty; + + public IEnumerable<(UnknownReasonCode Code, RemediationHint Hint)> GetAllHints() => + _hints.Select(kv => (kv.Key, kv.Value)); +} + +public sealed record RemediationHint( + string ShortHint, + string DetailedHint, + string? 
AutomationRef) +{ + public static RemediationHint Empty { get; } = new("No remediation available", "", null); +} + +public interface IRemediationHintsRegistry +{ + RemediationHint GetHint(UnknownReasonCode code); + IEnumerable<(UnknownReasonCode Code, RemediationHint Hint)> GetAllHints(); +} +``` + +**Acceptance Criteria**: +- [ ] `RemediationHintsRegistry.cs` created in `Services/` +- [ ] All 7 reason codes have mapped hints +- [ ] Each hint includes short hint, detailed hint, and optional automation reference +- [ ] Interface `IRemediationHintsRegistry` defined for DI +- [ ] Registry is thread-safe (immutable dictionary) + +--- + +### T4: Update UnknownRanker + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Update the UnknownRanker to emit reason codes and remediation hints on ranking. + +**Implementation Path**: `Services/UnknownRanker.cs` + +**Updated Input**: +```csharp +public sealed record UnknownRankInput( + // Existing fields + bool HasVexStatement, + bool HasReachabilityData, + bool HasConflictingSources, + bool IsStaleAdvisory, + bool IsInKev, + decimal EpssScore, + decimal CvssScore, + DateTimeOffset? FirstSeenAt, + DateTimeOffset? LastEvaluatedAt, + DateTimeOffset AsOfDateTime, + BlastRadius? BlastRadius, + ContainmentSignals? Containment, + // NEW: Reason classification inputs + bool HasPackageDigest, + bool HasProvenanceAttestation, + bool HasVexConflicts, + bool HasFeedCoverage, + bool HasConfigVisibility, + bool IsAnalyzerSupported); +``` + +**Reason Code Assignment Logic**: +```csharp +/// +/// Determines the primary reason code for unknown classification. +/// Returns the most actionable/resolvable reason. +/// +private UnknownReasonCode DetermineReasonCode(UnknownRankInput input) +{ + // Priority order: most actionable first + if (!input.IsAnalyzerSupported) + return UnknownReasonCode.AnalyzerLimit; + + if (!input.HasReachabilityData) + return UnknownReasonCode.Reachability; + + if (!input.HasPackageDigest) + return UnknownReasonCode.Identity; + + if (!input.HasProvenanceAttestation) + return UnknownReasonCode.Provenance; + + if (input.HasVexConflicts || !input.HasVexStatement) + return UnknownReasonCode.VexConflict; + + if (!input.HasFeedCoverage) + return UnknownReasonCode.FeedGap; + + if (!input.HasConfigVisibility) + return UnknownReasonCode.ConfigUnknown; + + // Default to reachability if no specific reason + return UnknownReasonCode.Reachability; +} +``` + +**Updated Result**: +```csharp +public sealed record UnknownRankResult( + decimal Score, + decimal UncertaintyFactor, + decimal ExploitPressure, + UnknownBand Band, + decimal DecayFactor = 1.0m, + decimal ContainmentReduction = 0m, + // NEW: Reason code and hint + UnknownReasonCode ReasonCode = UnknownReasonCode.Reachability, + string? RemediationHint = null); +``` + +**Acceptance Criteria**: +- [ ] `UnknownRankInput` extended with reason classification inputs +- [ ] `DetermineReasonCode` method implemented with priority logic +- [ ] `UnknownRankResult` extended with `ReasonCode` and `RemediationHint` +- [ ] Ranker uses `IRemediationHintsRegistry` to populate hints +- [ ] Existing tests updated for new input/output fields + +--- + +### T5: Add DB Migration + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Add columns to `policy.unknowns` table for reason code and remediation hint. 
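+
+Reason codes are persisted as their enum names (`TEXT`) rather than integers, so ad-hoc SQL and triage dashboards stay readable. A minimal round-trip sketch; `ReasonCodePersistence` is a hypothetical helper for illustration, not part of the migration below:
+
+```csharp
+using StellaOps.Policy.Unknowns.Models;
+
+// Hypothetical helper: reason codes round-trip as their enum names,
+// matching the TEXT column added by the migration below.
+public static class ReasonCodePersistence
+{
+    public static string ToColumnValue(UnknownReasonCode code) => code.ToString();
+
+    public static UnknownReasonCode FromColumnValue(string value) =>
+        Enum.Parse<UnknownReasonCode>(value);
+}
+```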
+ +**Implementation Path**: `src/Policy/__Libraries/StellaOps.Policy.Storage.Postgres/migrations/` + +**Migration SQL**: +```sql +-- Migration: Add reason code and remediation columns to policy.unknowns + +ALTER TABLE policy.unknowns +ADD COLUMN IF NOT EXISTS reason_code TEXT, +ADD COLUMN IF NOT EXISTS remediation_hint TEXT, +ADD COLUMN IF NOT EXISTS evidence_refs JSONB DEFAULT '[]', +ADD COLUMN IF NOT EXISTS assumptions JSONB DEFAULT '[]'; + +-- Create index for querying by reason code +CREATE INDEX IF NOT EXISTS idx_unknowns_reason_code +ON policy.unknowns(reason_code) +WHERE reason_code IS NOT NULL; + +COMMENT ON COLUMN policy.unknowns.reason_code IS 'Canonical reason code: Reachability, Identity, Provenance, VexConflict, FeedGap, ConfigUnknown, AnalyzerLimit'; +COMMENT ON COLUMN policy.unknowns.remediation_hint IS 'Actionable guidance for resolving this unknown'; +COMMENT ON COLUMN policy.unknowns.evidence_refs IS 'JSON array of evidence references supporting classification'; +COMMENT ON COLUMN policy.unknowns.assumptions IS 'JSON array of assumptions made during analysis'; +``` + +**Acceptance Criteria**: +- [ ] Migration file created with sequential number +- [ ] `reason_code` TEXT column added +- [ ] `remediation_hint` TEXT column added +- [ ] `evidence_refs` JSONB column added with default +- [ ] `assumptions` JSONB column added with default +- [ ] Index created for reason_code queries +- [ ] Column comments added for documentation +- [ ] Migration is idempotent (IF NOT EXISTS) +- [ ] RLS policies still apply + +--- + +### T6: Update API DTOs + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T4 + +**Description**: +Include reason codes and remediation hints in API response DTOs. + +**Implementation Path**: `src/Policy/StellaOps.Policy.WebService/Controllers/UnknownsController.cs` + +**Updated DTO**: +```csharp +public sealed record UnknownDto +{ + public Guid Id { get; init; } + public string PackageUrl { get; init; } + public string? CveId { get; init; } + public decimal Score { get; init; } + public string Band { get; init; } + // NEW fields + public string ReasonCode { get; init; } + public string ReasonCodeShort { get; init; } // e.g., "U-RCH" + public string? RemediationHint { get; init; } + public string? DetailedHint { get; init; } + public string? AutomationCommand { get; init; } + public IReadOnlyList EvidenceRefs { get; init; } +} + +public sealed record EvidenceRefDto( + string Type, + string Uri, + string? Digest); +``` + +**Short Code Mapping**: +```csharp +private static readonly IReadOnlyDictionary ShortCodes = new Dictionary +{ + [UnknownReasonCode.Reachability] = "U-RCH", + [UnknownReasonCode.Identity] = "U-ID", + [UnknownReasonCode.Provenance] = "U-PROV", + [UnknownReasonCode.VexConflict] = "U-VEX", + [UnknownReasonCode.FeedGap] = "U-FEED", + [UnknownReasonCode.ConfigUnknown] = "U-CONFIG", + [UnknownReasonCode.AnalyzerLimit] = "U-ANALYZER" +}; +``` + +**Acceptance Criteria**: +- [ ] `UnknownDto` extended with reason code fields +- [ ] Short code (U-RCH, U-ID, etc.) 
included in response +- [ ] Remediation hint fields included +- [ ] Evidence references included as array +- [ ] OpenAPI spec updated +- [ ] Response schema validated + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Policy Team | Define UnknownReasonCode enum | +| 2 | T2 | TODO | T1 | Policy Team | Extend Unknown model | +| 3 | T3 | TODO | T1 | Policy Team | Create RemediationHintsRegistry | +| 4 | T4 | TODO | T2, T3 | Policy Team | Update UnknownRanker | +| 5 | T5 | TODO | T1, T2 | Policy Team | Add DB migration | +| 6 | T6 | TODO | T4 | Policy Team | Update API DTOs | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Reason-coded unknowns identified as requirement from Moat #5 advisory. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| 7 reason codes | Decision | Policy Team | Covers all identified uncertainty sources; extensible if needed | +| Priority ordering | Decision | Policy Team | Most actionable/resolvable reasons assigned first | +| Short codes (U-*) | Decision | Policy Team | Human-readable prefixes for triage dashboards | +| JSONB for arrays | Decision | Policy Team | Flexible schema for evidence refs and assumptions | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] 7 reason codes defined and documented +- [ ] Remediation hints mapped for all codes +- [ ] API returns reason codes in responses +- [ ] Migration applies cleanly +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds for `StellaOps.Policy.Unknowns.Tests` diff --git a/docs/implplan/SPRINT_4100_0001_0002_unknown_budgets.md b/docs/implplan/SPRINT_4100_0001_0002_unknown_budgets.md new file mode 100644 index 000000000..52b900ddd --- /dev/null +++ b/docs/implplan/SPRINT_4100_0001_0002_unknown_budgets.md @@ -0,0 +1,659 @@ +# Sprint 4100.0001.0002 · Unknown Budgets & Environment Thresholds + +## Topic & Scope + +- Define environment-aware unknown budgets (prod: strict, stage: moderate, dev: permissive) +- Implement budget enforcement with block/warn actions +- Enable policy-driven control over acceptable unknown counts + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy.Unknowns/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0001.0001 (Reason-Coded Unknowns) — MUST BE DONE +- **Downstream**: Sprint 4100.0001.0003 (Unknowns in Attestations) +- **Safe to parallelize with**: Sprint 4100.0002.0002, Sprint 4100.0003.0002 + +## Documentation Prerequisites + +- Sprint 4100.0001.0001 completion +- `src/Policy/__Libraries/StellaOps.Policy.Unknowns/AGENTS.md` +- `docs/product-advisories/19-Dec-2025 - Moat #5.md` (Unknowns as First-Class Risk) + +--- + +## Tasks + +### T1: Define UnknownBudget Model + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create a model representing unknown budgets with environment-specific thresholds. + +**Implementation Path**: `Models/UnknownBudget.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Unknowns.Models; + +/// +/// Represents an unknown budget for a specific environment. +/// Budgets define maximum acceptable unknown counts by reason code. 
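+/// Reason codes without an explicit limit inherit the total limit.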
+/// </summary>
+public sealed record UnknownBudget
+{
+    /// <summary>
+    /// Environment name: "prod", "stage", "dev", or custom.
+    /// </summary>
+    public required string Environment { get; init; }
+
+    /// <summary>
+    /// Maximum total unknowns allowed across all reason codes.
+    /// </summary>
+    public int? TotalLimit { get; init; }
+
+    /// <summary>
+    /// Per-reason-code limits. Missing codes inherit from TotalLimit.
+    /// </summary>
+    public IReadOnlyDictionary<UnknownReasonCode, int> ReasonLimits { get; init; }
+        = new Dictionary<UnknownReasonCode, int>();
+
+    /// <summary>
+    /// Action when budget is exceeded.
+    /// </summary>
+    public BudgetAction Action { get; init; } = BudgetAction.Warn;
+
+    /// <summary>
+    /// Custom message to display when budget is exceeded.
+    /// </summary>
+    public string? ExceededMessage { get; init; }
+}
+
+/// <summary>
+/// Action to take when unknown budget is exceeded.
+/// </summary>
+public enum BudgetAction
+{
+    /// <summary>
+    /// Log warning only, do not block.
+    /// </summary>
+    Warn,
+
+    /// <summary>
+    /// Block the operation (fail policy evaluation).
+    /// </summary>
+    Block,
+
+    /// <summary>
+    /// Warn but allow if an approved exception is applied.
+    /// </summary>
+    WarnUnlessException
+}
+
+/// <summary>
+/// Result of checking unknowns against a budget.
+/// </summary>
+public sealed record BudgetCheckResult
+{
+    public required bool IsWithinBudget { get; init; }
+    public required BudgetAction RecommendedAction { get; init; }
+    public required int TotalUnknowns { get; init; }
+    public int? TotalLimit { get; init; }
+    public IReadOnlyDictionary<UnknownReasonCode, BudgetViolation> Violations { get; init; }
+        = new Dictionary<UnknownReasonCode, BudgetViolation>();
+    public string? Message { get; init; }
+}
+
+/// <summary>
+/// Details of a specific budget violation.
+/// </summary>
+public sealed record BudgetViolation(
+    UnknownReasonCode ReasonCode,
+    int Count,
+    int Limit);
+```
+
+**Acceptance Criteria**:
+- [ ] `UnknownBudget.cs` file created in `Models/` directory
+- [ ] Budget supports total and per-reason limits
+- [ ] `BudgetAction` enum with Warn, Block, WarnUnlessException
+- [ ] `BudgetCheckResult` captures violation details
+- [ ] XML documentation on all types
+
+---
+
+### T2: Create UnknownBudgetService
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Implement a service for retrieving budgets and checking compliance.
+
+**Implementation Path**: `Services/UnknownBudgetService.cs` (new file)
+
+**Implementation**:
+```csharp
+using Microsoft.Extensions.Logging;
+using Microsoft.Extensions.Options;
+using StellaOps.Policy.Unknowns.Configuration;
+using StellaOps.Policy.Unknowns.Models;
+
+namespace StellaOps.Policy.Unknowns.Services;
+
+/// <summary>
+/// Service for managing and checking unknown budgets.
+/// </summary>
+public sealed class UnknownBudgetService : IUnknownBudgetService
+{
+    private readonly IOptionsMonitor<UnknownBudgetOptions> _options;
+    private readonly ILogger<UnknownBudgetService> _logger;
+
+    public UnknownBudgetService(
+        IOptionsMonitor<UnknownBudgetOptions> options,
+        ILogger<UnknownBudgetService> logger)
+    {
+        _options = options;
+        _logger = logger;
+    }
+
+    /// <summary>
+    /// Gets the budget configuration for a specific environment.
+    /// Falls back to the default budget if the environment is not found.
+    /// </summary>
+    public UnknownBudget GetBudgetForEnvironment(string environment)
+    {
+        var budgets = _options.CurrentValue.Budgets;
+
+        if (budgets.TryGetValue(environment, out var budget))
+            return budget;
+
+        if (budgets.TryGetValue("default", out var defaultBudget))
+            return defaultBudget with { Environment = environment };
+
+        // Permissive fallback if no configuration
+        return new UnknownBudget
+        {
+            Environment = environment,
+            TotalLimit = null,
+            Action = BudgetAction.Warn
+        };
+    }
+
+    /// <summary>
+    /// Checks a collection of unknowns against the budget for an environment.
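+    /// Per-reason-code limits are evaluated first, then the overall total limit.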
+ /// + public BudgetCheckResult CheckBudget( + string environment, + IReadOnlyList unknowns) + { + var budget = GetBudgetForEnvironment(environment); + var violations = new Dictionary(); + var total = unknowns.Count; + + // Check per-reason-code limits + var byReason = unknowns + .GroupBy(u => u.ReasonCode) + .ToDictionary(g => g.Key, g => g.Count()); + + foreach (var (code, limit) in budget.ReasonLimits) + { + if (byReason.TryGetValue(code, out var count) && count > limit) + { + violations[code] = new BudgetViolation(code, count, limit); + } + } + + // Check total limit + var isWithinBudget = violations.Count == 0 && + (!budget.TotalLimit.HasValue || total <= budget.TotalLimit.Value); + + var message = isWithinBudget + ? null + : budget.ExceededMessage ?? $"Unknown budget exceeded: {total} unknowns in {environment}"; + + return new BudgetCheckResult + { + IsWithinBudget = isWithinBudget, + RecommendedAction = isWithinBudget ? BudgetAction.Warn : budget.Action, + TotalUnknowns = total, + TotalLimit = budget.TotalLimit, + Violations = violations, + Message = message + }; + } + + /// + /// Checks if an operation should be blocked based on budget result. + /// + public bool ShouldBlock(BudgetCheckResult result) => + !result.IsWithinBudget && result.RecommendedAction == BudgetAction.Block; +} + +public interface IUnknownBudgetService +{ + UnknownBudget GetBudgetForEnvironment(string environment); + BudgetCheckResult CheckBudget(string environment, IReadOnlyList unknowns); + bool ShouldBlock(BudgetCheckResult result); +} +``` + +**Acceptance Criteria**: +- [ ] `UnknownBudgetService.cs` created in `Services/` +- [ ] `GetBudgetForEnvironment` with fallback logic +- [ ] `CheckBudget` aggregates violations by reason code +- [ ] `ShouldBlock` helper method +- [ ] Interface defined for DI + +--- + +### T3: Implement Budget Checking Logic + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2 + +**Description**: +Implement the detailed budget checking with block/warn decision logic. + +**Implementation Path**: `Services/UnknownBudgetService.cs` + +**Extended Logic**: +```csharp +/// +/// Performs comprehensive budget check with environment escalation. +/// +public BudgetCheckResult CheckBudgetWithEscalation( + string environment, + IReadOnlyList unknowns, + IReadOnlyList? exceptions = null) +{ + var baseResult = CheckBudget(environment, unknowns); + + if (baseResult.IsWithinBudget) + return baseResult; + + // Check if exceptions cover the violations + if (exceptions?.Count > 0) + { + var coveredReasons = exceptions + .Where(e => e.Status == ExceptionStatus.Approved) + .SelectMany(e => e.CoveredReasonCodes) + .ToHashSet(); + + var uncoveredViolations = baseResult.Violations + .Where(v => !coveredReasons.Contains(v.Key)) + .ToDictionary(v => v.Key, v => v.Value); + + if (uncoveredViolations.Count == 0) + { + return baseResult with + { + IsWithinBudget = true, + RecommendedAction = BudgetAction.Warn, + Message = "Budget exceeded but covered by approved exceptions" + }; + } + } + + // Log the violation for observability + _logger.LogWarning( + "Unknown budget exceeded for environment {Environment}: {Total}/{Limit}", + environment, baseResult.TotalUnknowns, baseResult.TotalLimit); + + return baseResult; +} + +/// +/// Gets a summary of budget status for reporting. 
+/// +public BudgetStatusSummary GetBudgetStatus( + string environment, + IReadOnlyList unknowns) +{ + var budget = GetBudgetForEnvironment(environment); + var result = CheckBudget(environment, unknowns); + + return new BudgetStatusSummary + { + Environment = environment, + TotalUnknowns = unknowns.Count, + TotalLimit = budget.TotalLimit, + PercentageUsed = budget.TotalLimit.HasValue + ? (decimal)unknowns.Count / budget.TotalLimit.Value * 100 + : 0m, + IsExceeded = !result.IsWithinBudget, + ViolationCount = result.Violations.Count, + ByReasonCode = unknowns + .GroupBy(u => u.ReasonCode) + .ToDictionary(g => g.Key, g => g.Count()) + }; +} + +public sealed record BudgetStatusSummary +{ + public required string Environment { get; init; } + public required int TotalUnknowns { get; init; } + public int? TotalLimit { get; init; } + public decimal PercentageUsed { get; init; } + public bool IsExceeded { get; init; } + public int ViolationCount { get; init; } + public IReadOnlyDictionary ByReasonCode { get; init; } + = new Dictionary(); +} +``` + +**Acceptance Criteria**: +- [ ] `CheckBudgetWithEscalation` supports exception coverage +- [ ] Approved exceptions can cover specific reason codes +- [ ] Violations logged for observability +- [ ] `GetBudgetStatus` returns summary for dashboards +- [ ] Percentage calculation for budget utilization + +--- + +### T4: Add Policy Configuration + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Define YAML configuration schema for unknown budgets. + +**Implementation Path**: `Configuration/UnknownBudgetOptions.cs` (new file) + +**Options Class**: +```csharp +namespace StellaOps.Policy.Unknowns.Configuration; + +/// +/// Configuration options for unknown budgets. +/// +public sealed class UnknownBudgetOptions +{ + public const string SectionName = "UnknownBudgets"; + + /// + /// Budget configurations keyed by environment name. + /// + public Dictionary Budgets { get; set; } = new(); + + /// + /// Whether to enforce budgets (false = warn only). + /// + public bool EnforceBudgets { get; set; } = true; +} +``` + +**Sample YAML Configuration**: +```yaml +# etc/policy.unknowns.yaml +unknownBudgets: + enforceBudgets: true + budgets: + prod: + environment: prod + totalLimit: 3 + reasonLimits: + Reachability: 0 + Provenance: 0 + VexConflict: 1 + action: Block + exceededMessage: "Production requires zero reachability unknowns" + + stage: + environment: stage + totalLimit: 10 + reasonLimits: + Reachability: 1 + action: WarnUnlessException + + dev: + environment: dev + totalLimit: null # No limit + action: Warn + + default: + environment: default + totalLimit: 5 + action: Warn +``` + +**DI Registration**: +```csharp +// In startup/DI configuration +services.Configure( + configuration.GetSection(UnknownBudgetOptions.SectionName)); +services.AddSingleton(); +``` + +**Acceptance Criteria**: +- [ ] `UnknownBudgetOptions.cs` created in `Configuration/` +- [ ] Options bind from YAML configuration +- [ ] Sample configuration documented +- [ ] `EnforceBudgets` toggle for global enable/disable +- [ ] Default budget fallback defined + +--- + +### T5: Integrate with PolicyEvaluator + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Integrate unknown budget checking into the policy evaluation pipeline. 
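+
+For orientation, a minimal caller-side sketch of the check the evaluator performs, using the T2/T3 service surface; `budgetService` and `unknowns` are assumed to come from DI and the evaluation context:
+
+```csharp
+// Sketch only: consult the budget service before finalizing a verdict.
+var result = budgetService.CheckBudget("prod", unknowns);
+
+if (budgetService.ShouldBlock(result))
+{
+    // A Block action on an exceeded budget fails the evaluation.
+    foreach (var (code, violation) in result.Violations)
+    {
+        Console.WriteLine($"{code}: {violation.Count} unknowns (limit {violation.Limit})");
+    }
+}
+```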
+ +**Implementation Path**: `src/Policy/StellaOps.Policy.Engine/Services/PolicyEvaluator.cs` + +**Integration Points**: +```csharp +public sealed class PolicyEvaluator +{ + private readonly IUnknownBudgetService _budgetService; + + public async Task EvaluateAsync( + PolicyEvaluationRequest request, + CancellationToken ct = default) + { + // ... existing evaluation logic ... + + // Check unknown budgets + var budgetResult = _budgetService.CheckBudgetWithEscalation( + request.Environment, + unknowns, + request.AppliedExceptions); + + if (_budgetService.ShouldBlock(budgetResult)) + { + return PolicyEvaluationResult.Fail( + PolicyFailureReason.UnknownBudgetExceeded, + budgetResult.Message, + new UnknownBudgetViolation(budgetResult)); + } + + // Include budget status in result + return result with + { + UnknownBudgetStatus = new BudgetStatusSummary + { + IsExceeded = !budgetResult.IsWithinBudget, + TotalUnknowns = budgetResult.TotalUnknowns, + TotalLimit = budgetResult.TotalLimit, + Violations = budgetResult.Violations + } + }; + } +} + +/// +/// Failure reason for policy evaluation. +/// +public enum PolicyFailureReason +{ + // Existing reasons... + CveExceedsThreshold, + LicenseViolation, + // NEW + UnknownBudgetExceeded +} +``` + +**Acceptance Criteria**: +- [ ] `PolicyEvaluator` checks unknown budgets +- [ ] Blocking configured budgets fail evaluation +- [ ] `UnknownBudgetExceeded` failure reason added +- [ ] Budget status included in evaluation result +- [ ] Exception coverage respected + +--- + +### T6: Add Tests + +**Assignee**: Policy Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T5 + +**Description**: +Add comprehensive tests for budget enforcement. + +**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Unknowns.Tests/Services/UnknownBudgetServiceTests.cs` + +**Test Cases**: +```csharp +public class UnknownBudgetServiceTests +{ + [Fact] + public void GetBudgetForEnvironment_KnownEnv_ReturnsBudget() + { + // Arrange + var options = CreateOptions(prod: new UnknownBudget + { + Environment = "prod", + TotalLimit = 3 + }); + var service = new UnknownBudgetService(options, NullLogger.Instance); + + // Act + var budget = service.GetBudgetForEnvironment("prod"); + + // Assert + budget.TotalLimit.Should().Be(3); + } + + [Fact] + public void CheckBudget_WithinLimit_ReturnsSuccess() + { + var unknowns = CreateUnknowns(count: 2); + var result = _service.CheckBudget("prod", unknowns); + + result.IsWithinBudget.Should().BeTrue(); + } + + [Fact] + public void CheckBudget_ExceedsTotal_ReturnsViolation() + { + var unknowns = CreateUnknowns(count: 5); // limit is 3 + var result = _service.CheckBudget("prod", unknowns); + + result.IsWithinBudget.Should().BeFalse(); + result.RecommendedAction.Should().Be(BudgetAction.Block); + } + + [Fact] + public void CheckBudget_ExceedsReasonLimit_ReturnsSpecificViolation() + { + var unknowns = CreateUnknowns( + reachability: 2, // limit is 0 + identity: 1); + var result = _service.CheckBudget("prod", unknowns); + + result.Violations.Should().ContainKey(UnknownReasonCode.Reachability); + result.Violations[UnknownReasonCode.Reachability].Count.Should().Be(2); + } + + [Fact] + public void CheckBudgetWithEscalation_ExceptionCovers_AllowsOperation() + { + var unknowns = CreateUnknowns(reachability: 1); + var exceptions = new[] { CreateException(UnknownReasonCode.Reachability) }; + + var result = _service.CheckBudgetWithEscalation("prod", unknowns, exceptions); + + result.IsWithinBudget.Should().BeTrue(); + result.Message.Should().Contain("covered 
by approved exceptions"); + } + + [Fact] + public void ShouldBlock_BlockAction_ReturnsTrue() + { + var result = new BudgetCheckResult + { + IsWithinBudget = false, + RecommendedAction = BudgetAction.Block + }; + + _service.ShouldBlock(result).Should().BeTrue(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for budget retrieval with fallback +- [ ] Test for within-budget success +- [ ] Test for total limit violation +- [ ] Test for per-reason limit violation +- [ ] Test for exception coverage +- [ ] Test for block action decision +- [ ] All tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Policy Team | Define UnknownBudget model | +| 2 | T2 | TODO | T1 | Policy Team | Create UnknownBudgetService | +| 3 | T3 | TODO | T2 | Policy Team | Implement budget checking logic | +| 4 | T4 | TODO | T1 | Policy Team | Add policy configuration | +| 5 | T5 | TODO | T2, T3 | Policy Team | Integrate with PolicyEvaluator | +| 6 | T6 | TODO | T5 | Policy Team | Add tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Unknown budgets identified as requirement from Moat #5 advisory. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Environment-keyed budgets | Decision | Policy Team | Allows prod/stage/dev differentiation | +| BudgetAction enum | Decision | Policy Team | Block, Warn, WarnUnlessException provides flexibility | +| Exception coverage | Decision | Policy Team | Approved exceptions can override budget violations | +| Null totalLimit | Decision | Policy Team | Null means unlimited (no budget enforcement) | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] Budget configuration loads from YAML +- [ ] Policy evaluator respects budget limits +- [ ] Exceptions can cover violations +- [ ] 6+ budget-related tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0001_0003_unknowns_attestations.md b/docs/implplan/SPRINT_4100_0001_0003_unknowns_attestations.md new file mode 100644 index 000000000..c1bcaae6e --- /dev/null +++ b/docs/implplan/SPRINT_4100_0001_0003_unknowns_attestations.md @@ -0,0 +1,675 @@ +# Sprint 4100.0001.0003 · Unknowns in Attestations + +## Topic & Scope + +- Include unknown summaries in signed attestations +- Aggregate unknowns by reason code for policy predicates +- Enable attestation consumers to verify unknown handling + +**Working directory:** `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0001.0001 (Reason-Coded Unknowns), Sprint 4100.0001.0002 (Unknown Budgets) — MUST BE DONE +- **Downstream**: Sprint 4100.0003.0001 (Risk Verdict Attestation) +- **Safe to parallelize with**: Sprint 4100.0002.0003, Sprint 4100.0004.0001 + +## Documentation Prerequisites + +- Sprint 4100.0001.0001 completion (UnknownReasonCode enum) +- Sprint 4100.0001.0002 completion (UnknownBudget model) +- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/AGENTS.md` +- `docs/product-advisories/19-Dec-2025 - Moat #5.md` + +--- + +## Tasks + +### T1: Define UnknownsSummary Model + +**Assignee**: Attestor Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create a model for aggregated unknowns data to 
include in attestations.
+
+**Implementation Path**: `Models/UnknownsSummary.cs` (new file)
+
+**Model Definition**:
+```csharp
+namespace StellaOps.Attestor.ProofChain.Models;
+
+/// <summary>
+/// Aggregated summary of unknowns for inclusion in attestations.
+/// Provides verifiable data about unknown risk handled during evaluation.
+/// </summary>
+public sealed record UnknownsSummary
+{
+    /// <summary>
+    /// Total count of unknowns encountered.
+    /// </summary>
+    public int Total { get; init; }
+
+    /// <summary>
+    /// Count of unknowns by reason code.
+    /// </summary>
+    public IReadOnlyDictionary<string, int> ByReasonCode { get; init; }
+        = new Dictionary<string, int>();
+
+    /// <summary>
+    /// Count of unknowns that would block if not excepted.
+    /// </summary>
+    public int BlockingCount { get; init; }
+
+    /// <summary>
+    /// Count of unknowns that are covered by approved exceptions.
+    /// </summary>
+    public int ExceptedCount { get; init; }
+
+    /// <summary>
+    /// Policy thresholds that were evaluated.
+    /// </summary>
+    public IReadOnlyList<string> PolicyThresholdsApplied { get; init; } = [];
+
+    /// <summary>
+    /// Exception IDs that were applied to cover unknowns.
+    /// </summary>
+    public IReadOnlyList<string> ExceptionsApplied { get; init; } = [];
+
+    /// <summary>
+    /// Hash of the unknowns list for integrity verification.
+    /// </summary>
+    public string? UnknownsDigest { get; init; }
+
+    /// <summary>
+    /// Creates an empty summary for cases with no unknowns.
+    /// </summary>
+    public static UnknownsSummary Empty { get; } = new()
+    {
+        Total = 0,
+        ByReasonCode = new Dictionary<string, int>(),
+        BlockingCount = 0,
+        ExceptedCount = 0
+    };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `UnknownsSummary.cs` file created in `Models/` directory
+- [ ] Total and per-reason-code counts included
+- [ ] Blocking and excepted counts tracked
+- [ ] Policy thresholds and exception IDs recorded
+- [ ] Digest field for integrity verification
+- [ ] Static `Empty` instance for convenience
+
+---
+
+### T2: Extend VerdictReceiptPayload
+
+**Assignee**: Attestor Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Add an unknowns summary field to the verdict receipt statement payload.
+
+**Implementation Path**: `Statements/VerdictReceiptStatement.cs`
+
+**Updated Payload**:
+```csharp
+/// <summary>
+/// Payload for verdict receipt attestation statement.
+/// </summary>
+public sealed record VerdictReceiptPayload
+{
+    // Existing fields
+    public required string VerdictId { get; init; }
+    public required string ArtifactDigest { get; init; }
+    public required string PolicyRef { get; init; }
+    public required VerdictStatus Status { get; init; }
+    public required DateTimeOffset EvaluatedAt { get; init; }
+    public IReadOnlyList<Finding> Findings { get; init; } = [];
+    public IReadOnlyList<ExceptionRef> AppliedExceptions { get; init; } = [];
+
+    // NEW: Unknowns summary
+    /// <summary>
+    /// Summary of unknowns encountered during evaluation.
+    /// Included for transparency about uncertainty in the verdict.
+    /// </summary>
+    public UnknownsSummary? Unknowns { get; init; }
+
+    // NEW: Knowledge snapshot reference
+    /// <summary>
+    /// Reference to the knowledge snapshot used for evaluation.
+    /// Enables replay and verification of inputs.
+    /// </summary>
+    public string? 
KnowledgeSnapshotId { get; init; } +} +``` + +**JSON Schema Update**: +```json +{ + "type": "object", + "properties": { + "verdictId": { "type": "string" }, + "artifactDigest": { "type": "string" }, + "unknowns": { + "type": "object", + "properties": { + "total": { "type": "integer" }, + "byReasonCode": { + "type": "object", + "additionalProperties": { "type": "integer" } + }, + "blockingCount": { "type": "integer" }, + "exceptedCount": { "type": "integer" }, + "policyThresholdsApplied": { + "type": "array", + "items": { "type": "string" } + }, + "exceptionsApplied": { + "type": "array", + "items": { "type": "string" } + }, + "unknownsDigest": { "type": "string" } + } + }, + "knowledgeSnapshotId": { "type": "string" } + } +} +``` + +**Acceptance Criteria**: +- [ ] `Unknowns` field added to `VerdictReceiptPayload` +- [ ] `KnowledgeSnapshotId` field added for replay support +- [ ] JSON schema updated with unknowns structure +- [ ] Field is nullable for backward compatibility +- [ ] Existing attestation tests still pass + +--- + +### T3: Create UnknownsAggregator + +**Assignee**: Attestor Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement service to aggregate unknowns into summary format for attestations. + +**Implementation Path**: `Services/UnknownsAggregator.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Attestor.ProofChain.Services; + +/// +/// Aggregates unknowns data into summary format for attestations. +/// +public sealed class UnknownsAggregator : IUnknownsAggregator +{ + private readonly IHasher _hasher; + + public UnknownsAggregator(IHasher hasher) + { + _hasher = hasher; + } + + /// + /// Creates an unknowns summary from evaluation results. + /// + public UnknownsSummary Aggregate( + IReadOnlyList unknowns, + BudgetCheckResult? budgetResult = null, + IReadOnlyList? exceptions = null) + { + if (unknowns.Count == 0) + return UnknownsSummary.Empty; + + // Count by reason code + var byReasonCode = unknowns + .GroupBy(u => u.ReasonCode.ToString()) + .ToDictionary(g => g.Key, g => g.Count()); + + // Calculate blocking count (would block without exceptions) + var blockingCount = budgetResult?.Violations.Values.Sum(v => v.Count) ?? 0; + + // Calculate excepted count + var exceptedCount = exceptions?.Count ?? 0; + + // Compute digest of unknowns list for integrity + var unknownsDigest = ComputeUnknownsDigest(unknowns); + + // Extract policy thresholds that were checked + var thresholds = budgetResult?.Violations.Keys + .Select(k => $"{k}:{budgetResult.Violations[k].Limit}") + .ToList() ?? []; + + // Extract applied exception IDs + var exceptionIds = exceptions? + .Select(e => e.ExceptionId) + .ToList() ?? []; + + return new UnknownsSummary + { + Total = unknowns.Count, + ByReasonCode = byReasonCode, + BlockingCount = blockingCount, + ExceptedCount = exceptedCount, + PolicyThresholdsApplied = thresholds, + ExceptionsApplied = exceptionIds, + UnknownsDigest = unknownsDigest + }; + } + + /// + /// Computes a deterministic digest of the unknowns list. 
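+    /// Entries are sorted by package URL, CVE ID, and reason code before hashing.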
+ /// + private string ComputeUnknownsDigest(IReadOnlyList unknowns) + { + // Sort for determinism + var sorted = unknowns + .OrderBy(u => u.PackageUrl) + .ThenBy(u => u.CveId) + .ThenBy(u => u.ReasonCode.ToString()) + .ToList(); + + // Serialize to canonical JSON + var json = JsonSerializer.Serialize(sorted, new JsonSerializerOptions + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase, + WriteIndented = false + }); + + // Hash the serialized data + return _hasher.ComputeSha256(json); + } +} + +/// +/// Input item for unknowns aggregation. +/// +public sealed record UnknownItem( + string PackageUrl, + string? CveId, + string ReasonCode, + string? RemediationHint); + +/// +/// Reference to an applied exception. +/// +public sealed record ExceptionRef( + string ExceptionId, + string Status, + IReadOnlyList CoveredReasonCodes); + +public interface IUnknownsAggregator +{ + UnknownsSummary Aggregate( + IReadOnlyList unknowns, + BudgetCheckResult? budgetResult = null, + IReadOnlyList? exceptions = null); +} +``` + +**Acceptance Criteria**: +- [ ] `UnknownsAggregator.cs` created in `Services/` +- [ ] Aggregates unknowns by reason code +- [ ] Computes blocking and excepted counts +- [ ] Generates deterministic digest of unknowns +- [ ] Records policy thresholds and exception IDs +- [ ] Interface defined for DI + +--- + +### T4: Update PolicyDecisionPredicate + +**Assignee**: Attestor Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Include unknowns data in the policy decision predicate for attestation verification. + +**Implementation Path**: `Predicates/PolicyDecisionPredicate.cs` + +**Updated Predicate**: +```csharp +namespace StellaOps.Attestor.ProofChain.Predicates; + +/// +/// Predicate type for policy decision attestations. +/// +public sealed record PolicyDecisionPredicate +{ + public const string PredicateType = "https://stella.ops/predicates/policy-decision@v2"; + + // Existing fields + public required string PolicyRef { get; init; } + public required PolicyDecision Decision { get; init; } + public required DateTimeOffset EvaluatedAt { get; init; } + public IReadOnlyList Findings { get; init; } = []; + + // NEW: Unknowns handling + /// + /// Summary of unknowns and how they were handled. + /// + public UnknownsSummary? Unknowns { get; init; } + + /// + /// Whether unknowns were a factor in the decision. + /// + public bool UnknownsAffectedDecision { get; init; } + + /// + /// Reason codes that caused blocking (if any). + /// + public IReadOnlyList BlockingReasonCodes { get; init; } = []; + + // NEW: Knowledge snapshot reference + /// + /// Content-addressed ID of the knowledge snapshot used. + /// + public string? KnowledgeSnapshotId { get; init; } +} + +/// +/// Policy decision outcome. +/// +public enum PolicyDecision +{ + Pass, + Fail, + PassWithExceptions, + Indeterminate +} +``` + +**Predicate Builder Update**: +```csharp +public PolicyDecisionPredicate Build(PolicyEvaluationResult result) +{ + var unknownsAffected = result.UnknownBudgetStatus?.IsExceeded == true || + result.FailureReason == PolicyFailureReason.UnknownBudgetExceeded; + + var blockingCodes = result.UnknownBudgetStatus?.Violations.Keys + .Select(k => k.ToString()) + .ToList() ?? 
[]; + + return new PolicyDecisionPredicate + { + PolicyRef = result.PolicyRef, + Decision = MapDecision(result), + EvaluatedAt = result.EvaluatedAt, + Findings = result.Findings.Select(MapFinding).ToList(), + Unknowns = _aggregator.Aggregate(result.Unknowns, result.UnknownBudgetStatus), + UnknownsAffectedDecision = unknownsAffected, + BlockingReasonCodes = blockingCodes, + KnowledgeSnapshotId = result.KnowledgeSnapshotId + }; +} +``` + +**Acceptance Criteria**: +- [ ] Predicate version bumped to v2 +- [ ] `Unknowns` field added with summary +- [ ] `UnknownsAffectedDecision` boolean flag +- [ ] `BlockingReasonCodes` list for failed verdicts +- [ ] `KnowledgeSnapshotId` for replay support +- [ ] Predicate builder uses aggregator + +--- + +### T5: Add Attestation Tests + +**Assignee**: Attestor Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T4 + +**Description**: +Add tests verifying unknowns are correctly included in signed attestations. + +**Implementation Path**: `src/Attestor/__Tests/StellaOps.Attestor.ProofChain.Tests/` + +**Test Cases**: +```csharp +public class UnknownsSummaryTests +{ + [Fact] + public void Empty_ReturnsZeroCounts() + { + var summary = UnknownsSummary.Empty; + + summary.Total.Should().Be(0); + summary.ByReasonCode.Should().BeEmpty(); + summary.BlockingCount.Should().Be(0); + } +} + +public class UnknownsAggregatorTests +{ + [Fact] + public void Aggregate_GroupsByReasonCode() + { + var unknowns = new[] + { + new UnknownItem("pkg:npm/foo@1.0", null, "Reachability", null), + new UnknownItem("pkg:npm/bar@1.0", null, "Reachability", null), + new UnknownItem("pkg:npm/baz@1.0", null, "Identity", null) + }; + + var summary = _aggregator.Aggregate(unknowns); + + summary.Total.Should().Be(3); + summary.ByReasonCode["Reachability"].Should().Be(2); + summary.ByReasonCode["Identity"].Should().Be(1); + } + + [Fact] + public void Aggregate_ComputesDeterministicDigest() + { + var unknowns = CreateUnknowns(); + + var summary1 = _aggregator.Aggregate(unknowns); + var summary2 = _aggregator.Aggregate(unknowns.Reverse().ToList()); + + summary1.UnknownsDigest.Should().Be(summary2.UnknownsDigest); + } + + [Fact] + public void Aggregate_IncludesExceptionIds() + { + var unknowns = CreateUnknowns(); + var exceptions = new[] + { + new ExceptionRef("EXC-001", "Approved", new[] { "Reachability" }) + }; + + var summary = _aggregator.Aggregate(unknowns, null, exceptions); + + summary.ExceptionsApplied.Should().Contain("EXC-001"); + summary.ExceptedCount.Should().Be(1); + } +} + +public class VerdictReceiptStatementTests +{ + [Fact] + public void CreateStatement_IncludesUnknownsSummary() + { + var result = CreateEvaluationResult(unknownsCount: 5); + + var statement = _builder.Build(result); + + statement.Predicate.Unknowns.Should().NotBeNull(); + statement.Predicate.Unknowns.Total.Should().Be(5); + } + + [Fact] + public void CreateStatement_SignatureCoversUnknowns() + { + var result = CreateEvaluationResult(unknownsCount: 5); + + var envelope = _signer.SignStatement(result); + + // Modify unknowns and verify signature fails + var tampered = envelope with + { + Payload = ModifyUnknownsCount(envelope.Payload, 0) + }; + + _verifier.Verify(tampered).Should().BeFalse(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for empty summary creation +- [ ] Test for reason code grouping +- [ ] Test for deterministic digest computation +- [ ] Test for exception ID inclusion +- [ ] Test for unknowns in statement payload +- [ ] Test that signature covers unknowns data +- [ ] All 6+ tests pass + 
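+A consumer-side sketch of how a verifier might gate on the new predicate fields after DSSE signature verification; field names follow the v2 schema in T6, and `payloadJson` (the decoded predicate) is an assumption for illustration:
+
+```csharp
+using System.Text.Json;
+
+// Sketch only: inspect the policy-decision predicate for unknowns handling.
+using var doc = JsonDocument.Parse(payloadJson);
+var predicate = doc.RootElement;
+
+var decision = predicate.GetProperty("decision").GetString();
+var unknownsAffected = predicate.TryGetProperty("unknownsAffectedDecision", out var flag)
+    && flag.GetBoolean();
+
+if (decision == "Pass" && unknownsAffected)
+{
+    // The verdict passed only because exceptions covered unknowns; flag for review.
+    Console.WriteLine("Pass relied on exception-covered unknowns.");
+}
+```
+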
+--- + +### T6: Update Predicate Schema + +**Assignee**: Attestor Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T4 + +**Description**: +Update the JSON schema documentation for the policy decision predicate. + +**Implementation Path**: `docs/api/predicates/policy-decision-v2.schema.json` + +**Schema Documentation**: +```json +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "https://stella.ops/predicates/policy-decision@v2", + "title": "Policy Decision Predicate v2", + "description": "Attestation predicate for policy evaluation decisions, including unknowns handling.", + "type": "object", + "required": ["policyRef", "decision", "evaluatedAt"], + "properties": { + "policyRef": { + "type": "string", + "description": "Reference to the policy that was evaluated" + }, + "decision": { + "type": "string", + "enum": ["Pass", "Fail", "PassWithExceptions", "Indeterminate"], + "description": "Final policy decision" + }, + "evaluatedAt": { + "type": "string", + "format": "date-time", + "description": "ISO-8601 timestamp of evaluation" + }, + "unknowns": { + "type": "object", + "description": "Summary of unknowns encountered during evaluation", + "properties": { + "total": { + "type": "integer", + "minimum": 0, + "description": "Total count of unknowns" + }, + "byReasonCode": { + "type": "object", + "additionalProperties": { "type": "integer" }, + "description": "Count per reason code (Reachability, Identity, etc.)" + }, + "blockingCount": { + "type": "integer", + "minimum": 0, + "description": "Count that would block without exceptions" + }, + "exceptedCount": { + "type": "integer", + "minimum": 0, + "description": "Count covered by approved exceptions" + }, + "unknownsDigest": { + "type": "string", + "description": "SHA-256 digest of unknowns list" + } + } + }, + "unknownsAffectedDecision": { + "type": "boolean", + "description": "Whether unknowns influenced the decision" + }, + "blockingReasonCodes": { + "type": "array", + "items": { "type": "string" }, + "description": "Reason codes that caused blocking" + }, + "knowledgeSnapshotId": { + "type": "string", + "description": "Content-addressed ID of knowledge snapshot" + } + } +} +``` + +**Acceptance Criteria**: +- [ ] Schema file created at `docs/api/predicates/` +- [ ] All new fields documented +- [ ] Schema validates against sample payloads +- [ ] Version bump to v2 documented + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Attestor Team | Define UnknownsSummary model | +| 2 | T2 | TODO | T1 | Attestor Team | Extend VerdictReceiptPayload | +| 3 | T3 | TODO | T1 | Attestor Team | Create UnknownsAggregator | +| 4 | T4 | TODO | T2, T3 | Attestor Team | Update PolicyDecisionPredicate | +| 5 | T5 | TODO | T4 | Attestor Team | Add attestation tests | +| 6 | T6 | TODO | T4 | Attestor Team | Update predicate schema | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Unknowns in attestations identified as requirement from Moat #5 advisory. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Predicate version bump | Decision | Attestor Team | v1 → v2 for backward compatibility tracking | +| Deterministic digest | Decision | Attestor Team | Enables tamper detection of unknowns list | +| String reason codes | Decision | Attestor Team | Using strings instead of enums for JSON flexibility | +| Nullable unknowns | Decision | Attestor Team | Allows backward compatibility with v1 payloads | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] Unknowns summary included in attestations +- [ ] Predicate schema v2 documented +- [ ] Aggregator computes deterministic digests +- [ ] 6+ attestation tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0002_0001_knowledge_snapshot_manifest.md b/docs/implplan/SPRINT_4100_0002_0001_knowledge_snapshot_manifest.md new file mode 100644 index 000000000..703d0cc77 --- /dev/null +++ b/docs/implplan/SPRINT_4100_0002_0001_knowledge_snapshot_manifest.md @@ -0,0 +1,949 @@ +# Sprint 4100.0002.0001 · Knowledge Snapshot Manifest + +## Topic & Scope + +- Define unified content-addressed manifest for knowledge snapshots +- Enable deterministic capture of all evaluation inputs +- Support time-travel replay by freezing knowledge state + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/Snapshots/` + +## Dependencies & Concurrency + +- **Upstream**: None (first sprint in batch) +- **Downstream**: Sprint 4100.0002.0002 (Replay Engine), Sprint 4100.0002.0003 (Snapshot Export/Import), Sprint 4100.0004.0001 (Security State Delta) +- **Safe to parallelize with**: Sprint 4100.0001.0001, Sprint 4100.0003.0001, Sprint 4100.0004.0002 + +## Documentation Prerequisites + +- `src/Policy/__Libraries/StellaOps.Policy/AGENTS.md` +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md` +- `docs/product-advisories/19-Dec-2025 - Moat #2.md` (Risk Verdict Attestation) + +--- + +## Tasks + +### T1: Define KnowledgeSnapshotManifest + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the unified manifest structure for knowledge snapshots. + +**Implementation Path**: `Snapshots/KnowledgeSnapshotManifest.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Snapshots; + +/// +/// Unified manifest for a knowledge snapshot. +/// Content-addressed bundle capturing all inputs to a policy evaluation. +/// +public sealed record KnowledgeSnapshotManifest +{ + /// + /// Content-addressed snapshot ID: ksm:sha256:{hash} + /// + public required string SnapshotId { get; init; } + + /// + /// When this snapshot was created (UTC). + /// + public required DateTimeOffset CreatedAt { get; init; } + + /// + /// Engine version that created this snapshot. + /// + public required EngineInfo Engine { get; init; } + + /// + /// Plugins/analyzers active during snapshot creation. + /// + public IReadOnlyList Plugins { get; init; } = []; + + /// + /// Reference to the policy bundle used. + /// + public required PolicyBundleRef Policy { get; init; } + + /// + /// Reference to the scoring rules used. + /// + public required ScoringRulesRef Scoring { get; init; } + + /// + /// Reference to the trust bundle (root certificates, VEX publishers). + /// + public TrustBundleRef? Trust { get; init; } + + /// + /// Knowledge sources included in this snapshot. 
+ /// + public required IReadOnlyList Sources { get; init; } + + /// + /// Determinism profile for environment reproducibility. + /// + public DeterminismProfile? Environment { get; init; } + + /// + /// Optional DSSE signature over the manifest. + /// + public string? Signature { get; init; } + + /// + /// Manifest format version. + /// + public string ManifestVersion { get; init; } = "1.0"; +} + +/// +/// Engine version information. +/// +public sealed record EngineInfo( + string Name, + string Version, + string Commit); + +/// +/// Plugin/analyzer information. +/// +public sealed record PluginInfo( + string Name, + string Version, + string Type); + +/// +/// Reference to a policy bundle. +/// +public sealed record PolicyBundleRef( + string PolicyId, + string Digest, + string? Uri); + +/// +/// Reference to scoring rules. +/// +public sealed record ScoringRulesRef( + string RulesId, + string Digest, + string? Uri); + +/// +/// Reference to trust bundle. +/// +public sealed record TrustBundleRef( + string BundleId, + string Digest, + string? Uri); + +/// +/// Determinism profile for environment capture. +/// +public sealed record DeterminismProfile( + string TimezoneOffset, + string Locale, + string Platform, + IReadOnlyDictionary EnvironmentVars); +``` + +**Acceptance Criteria**: +- [ ] `KnowledgeSnapshotManifest.cs` file created in `Snapshots/` directory +- [ ] All component records defined (EngineInfo, PluginInfo, etc.) +- [ ] SnapshotId uses content-addressed format `ksm:sha256:{hash}` +- [ ] Manifest is immutable (all init-only properties) +- [ ] XML documentation on all types + +--- + +### T2: Define KnowledgeSourceDescriptor + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create a model describing each knowledge source in the snapshot. + +**Implementation Path**: `Snapshots/KnowledgeSourceDescriptor.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Snapshots; + +/// +/// Descriptor for a knowledge source included in a snapshot. +/// +public sealed record KnowledgeSourceDescriptor +{ + /// + /// Unique name of the source (e.g., "nvd", "osv", "vendor-vex"). + /// + public required string Name { get; init; } + + /// + /// Type of source: "advisory-feed", "vex", "sbom", "reachability", "policy". + /// + public required string Type { get; init; } + + /// + /// Epoch or version of the source data. + /// + public required string Epoch { get; init; } + + /// + /// Content digest of the source data. + /// + public required string Digest { get; init; } + + /// + /// Origin URI where this source was fetched from. + /// + public string? Origin { get; init; } + + /// + /// When this source was last updated. + /// + public DateTimeOffset? LastUpdatedAt { get; init; } + + /// + /// Record count or entry count in this source. + /// + public int? RecordCount { get; init; } + + /// + /// Whether this source is bundled (embedded) or referenced. + /// + public SourceInclusionMode InclusionMode { get; init; } = SourceInclusionMode.Referenced; + + /// + /// Relative path within the snapshot bundle (if bundled). + /// + public string? BundlePath { get; init; } +} + +/// +/// How a source is included in the snapshot. +/// +public enum SourceInclusionMode +{ + /// + /// Source is referenced by digest only (requires external fetch for replay). + /// + Referenced, + + /// + /// Source content is embedded in the snapshot bundle. + /// + Bundled, + + /// + /// Source is bundled and compressed. 
+ /// + BundledCompressed +} +``` + +**Acceptance Criteria**: +- [ ] `KnowledgeSourceDescriptor.cs` file created +- [ ] Source types defined: advisory-feed, vex, sbom, reachability, policy +- [ ] Inclusion modes defined: Referenced, Bundled, BundledCompressed +- [ ] Digest and epoch for content addressing +- [ ] Optional bundle path for embedded sources + +--- + +### T3: Create SnapshotBuilder + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Implement a fluent API for constructing snapshot manifests. + +**Implementation Path**: `Snapshots/SnapshotBuilder.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Snapshots; + +/// +/// Fluent builder for constructing knowledge snapshot manifests. +/// +public sealed class SnapshotBuilder +{ + private readonly List _sources = []; + private readonly List _plugins = []; + private EngineInfo? _engine; + private PolicyBundleRef? _policy; + private ScoringRulesRef? _scoring; + private TrustBundleRef? _trust; + private DeterminismProfile? _environment; + private readonly IHasher _hasher; + + public SnapshotBuilder(IHasher hasher) + { + _hasher = hasher; + } + + public SnapshotBuilder WithEngine(string name, string version, string commit) + { + _engine = new EngineInfo(name, version, commit); + return this; + } + + public SnapshotBuilder WithPlugin(string name, string version, string type) + { + _plugins.Add(new PluginInfo(name, version, type)); + return this; + } + + public SnapshotBuilder WithPolicy(string policyId, string digest, string? uri = null) + { + _policy = new PolicyBundleRef(policyId, digest, uri); + return this; + } + + public SnapshotBuilder WithScoring(string rulesId, string digest, string? uri = null) + { + _scoring = new ScoringRulesRef(rulesId, digest, uri); + return this; + } + + public SnapshotBuilder WithTrust(string bundleId, string digest, string? uri = null) + { + _trust = new TrustBundleRef(bundleId, digest, uri); + return this; + } + + public SnapshotBuilder WithSource(KnowledgeSourceDescriptor source) + { + _sources.Add(source); + return this; + } + + public SnapshotBuilder WithAdvisoryFeed( + string name, string epoch, string digest, string? origin = null) + { + _sources.Add(new KnowledgeSourceDescriptor + { + Name = name, + Type = "advisory-feed", + Epoch = epoch, + Digest = digest, + Origin = origin + }); + return this; + } + + public SnapshotBuilder WithVex(string name, string digest, string? origin = null) + { + _sources.Add(new KnowledgeSourceDescriptor + { + Name = name, + Type = "vex", + Epoch = DateTimeOffset.UtcNow.ToString("o"), + Digest = digest, + Origin = origin + }); + return this; + } + + public SnapshotBuilder WithEnvironment(DeterminismProfile environment) + { + _environment = environment; + return this; + } + + public SnapshotBuilder CaptureCurrentEnvironment() + { + _environment = new DeterminismProfile( + TimezoneOffset: TimeZoneInfo.Local.BaseUtcOffset.ToString(), + Locale: CultureInfo.CurrentCulture.Name, + Platform: Environment.OSVersion.ToString(), + EnvironmentVars: new Dictionary()); + return this; + } + + /// + /// Builds the manifest and computes the content-addressed ID. 
+    ///
+    public KnowledgeSnapshotManifest Build()
+    {
+        if (_engine is null)
+            throw new InvalidOperationException("Engine info is required");
+        if (_policy is null)
+            throw new InvalidOperationException("Policy reference is required");
+        if (_scoring is null)
+            throw new InvalidOperationException("Scoring reference is required");
+        if (_sources.Count == 0)
+            throw new InvalidOperationException("At least one source is required");
+
+        // Create manifest without ID first
+        var manifest = new KnowledgeSnapshotManifest
+        {
+            SnapshotId = "", // Placeholder
+            CreatedAt = DateTimeOffset.UtcNow,
+            Engine = _engine,
+            Plugins = _plugins.ToList(),
+            Policy = _policy,
+            Scoring = _scoring,
+            Trust = _trust,
+            Sources = _sources.OrderBy(s => s.Name).ToList(),
+            Environment = _environment
+        };
+
+        // Compute content-addressed ID
+        var snapshotId = ComputeSnapshotId(manifest);
+
+        return manifest with { SnapshotId = snapshotId };
+    }
+
+    private string ComputeSnapshotId(KnowledgeSnapshotManifest manifest)
+    {
+        // Serialize to canonical JSON: camelCase keys in fixed declaration order,
+        // no whitespace, nulls omitted. These options must match
+        // SnapshotIdGenerator.ToCanonicalJson exactly, otherwise ValidateId will
+        // reject every manifest produced by this builder.
+        var json = JsonSerializer.Serialize(manifest with { SnapshotId = "" },
+            new JsonSerializerOptions
+            {
+                PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
+                WriteIndented = false,
+                DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
+                Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping
+            });
+
+        var hash = _hasher.ComputeSha256(json);
+        return $"ksm:sha256:{hash}";
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `SnapshotBuilder.cs` created in `Snapshots/`
+- [ ] Fluent API for all manifest components
+- [ ] Validation on Build() for required fields
+- [ ] Content-addressed ID computed from manifest hash
+- [ ] Sources sorted for determinism
+- [ ] Environment capture helper method
+
+---
+
+### T4: Implement Content-Addressed ID
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T3
+
+**Description**:
+Ensure the snapshot ID is deterministically computed from manifest content.
+
+**Implementation Path**: `Snapshots/SnapshotIdGenerator.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.Snapshots;
+
+///
+/// Generates and validates content-addressed snapshot IDs.
+///
+public sealed class SnapshotIdGenerator : ISnapshotIdGenerator
+{
+    private const string Prefix = "ksm:sha256:";
+    private readonly IHasher _hasher;
+
+    public SnapshotIdGenerator(IHasher hasher)
+    {
+        _hasher = hasher;
+    }
+
+    ///
+    /// Generates a content-addressed ID for a manifest.
+    ///
+    public string GenerateId(KnowledgeSnapshotManifest manifest)
+    {
+        var canonicalJson = ToCanonicalJson(manifest with { SnapshotId = "", Signature = null });
+        var hash = _hasher.ComputeSha256(canonicalJson);
+        return $"{Prefix}{hash}";
+    }
+
+    ///
+    /// Validates that a manifest's ID matches its content.
+    ///
+    public bool ValidateId(KnowledgeSnapshotManifest manifest)
+    {
+        var expectedId = GenerateId(manifest);
+        return manifest.SnapshotId == expectedId;
+    }
+
+    ///
+    /// Parses a snapshot ID into its components.
+    ///
+    public SnapshotIdComponents? 
ParseId(string snapshotId)
+    {
+        if (!snapshotId.StartsWith(Prefix, StringComparison.Ordinal))
+            return null;
+
+        var hash = snapshotId[Prefix.Length..];
+        if (hash.Length != 64) // SHA-256 hex length
+            return null;
+
+        return new SnapshotIdComponents("sha256", hash);
+    }
+
+    private static string ToCanonicalJson(KnowledgeSnapshotManifest manifest)
+    {
+        return JsonSerializer.Serialize(manifest, new JsonSerializerOptions
+        {
+            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
+            WriteIndented = false,
+            DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
+            Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping
+        });
+    }
+}
+
+public sealed record SnapshotIdComponents(string Algorithm, string Hash);
+
+public interface ISnapshotIdGenerator
+{
+    string GenerateId(KnowledgeSnapshotManifest manifest);
+    bool ValidateId(KnowledgeSnapshotManifest manifest);
+    SnapshotIdComponents? ParseId(string snapshotId);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `SnapshotIdGenerator.cs` created
+- [ ] ID format: `ksm:sha256:{64-char-hex}`
+- [ ] ID excludes signature field from hash
+- [ ] Validation method confirms ID matches content
+- [ ] Parse method extracts algorithm and hash
+- [ ] Interface defined for DI
+
+---
+
+### T5: Create SnapshotService
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T3, T4
+
+**Description**:
+Implement a service for creating, sealing, and verifying snapshots.
+
+**Implementation Path**: `Snapshots/SnapshotService.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.Snapshots;
+
+///
+/// Service for managing knowledge snapshots.
+///
+public sealed class SnapshotService : ISnapshotService
+{
+    private readonly ISnapshotIdGenerator _idGenerator;
+    private readonly ISigner _signer;
+    private readonly ISnapshotStore _store;
+    private readonly ILogger _logger;
+
+    public SnapshotService(
+        ISnapshotIdGenerator idGenerator,
+        ISigner signer,
+        ISnapshotStore store,
+        ILogger logger)
+    {
+        _idGenerator = idGenerator;
+        _signer = signer;
+        _store = store;
+        _logger = logger;
+    }
+
+    ///
+    /// Creates and persists a new snapshot.
+    ///
+    public async Task CreateSnapshotAsync(
+        SnapshotBuilder builder,
+        CancellationToken ct = default)
+    {
+        var manifest = builder.Build();
+
+        // Validate ID before storing
+        if (!_idGenerator.ValidateId(manifest))
+            throw new InvalidOperationException("Snapshot ID validation failed");
+
+        await _store.SaveAsync(manifest, ct);
+
+        _logger.LogInformation("Created snapshot {SnapshotId}", manifest.SnapshotId);
+
+        return manifest;
+    }
+
+    ///
+    /// Seals a snapshot with a DSSE signature.
+    ///
+    public async Task SealSnapshotAsync(
+        KnowledgeSnapshotManifest manifest,
+        CancellationToken ct = default)
+    {
+        var payload = JsonSerializer.SerializeToUtf8Bytes(manifest with { Signature = null });
+        var signature = await _signer.SignAsync(payload, ct);
+
+        // "sealed" is a reserved C# keyword, so name the signed copy explicitly
+        var sealedManifest = manifest with { Signature = signature };
+
+        await _store.SaveAsync(sealedManifest, ct);
+
+        _logger.LogInformation("Sealed snapshot {SnapshotId}", manifest.SnapshotId);
+
+        return sealedManifest;
+    }
+
+    ///
+    /// Verifies a snapshot's integrity and signature.
+ /// + public async Task VerifySnapshotAsync( + KnowledgeSnapshotManifest manifest, + CancellationToken ct = default) + { + // Verify content-addressed ID + if (!_idGenerator.ValidateId(manifest)) + { + return SnapshotVerificationResult.Fail("Snapshot ID does not match content"); + } + + // Verify signature if present + if (manifest.Signature is not null) + { + var payload = JsonSerializer.SerializeToUtf8Bytes(manifest with { Signature = null }); + var sigValid = await _signer.VerifyAsync(payload, manifest.Signature, ct); + + if (!sigValid) + { + return SnapshotVerificationResult.Fail("Signature verification failed"); + } + } + + return SnapshotVerificationResult.Success(); + } + + /// + /// Retrieves a snapshot by ID. + /// + public async Task GetSnapshotAsync( + string snapshotId, + CancellationToken ct = default) + { + return await _store.GetAsync(snapshotId, ct); + } +} + +public sealed record SnapshotVerificationResult(bool IsValid, string? Error) +{ + public static SnapshotVerificationResult Success() => new(true, null); + public static SnapshotVerificationResult Fail(string error) => new(false, error); +} + +public interface ISnapshotService +{ + Task CreateSnapshotAsync(SnapshotBuilder builder, CancellationToken ct = default); + Task SealSnapshotAsync(KnowledgeSnapshotManifest manifest, CancellationToken ct = default); + Task VerifySnapshotAsync(KnowledgeSnapshotManifest manifest, CancellationToken ct = default); + Task GetSnapshotAsync(string snapshotId, CancellationToken ct = default); +} + +public interface ISnapshotStore +{ + Task SaveAsync(KnowledgeSnapshotManifest manifest, CancellationToken ct = default); + Task GetAsync(string snapshotId, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `SnapshotService.cs` created in `Snapshots/` +- [ ] Create, seal, verify, and get operations +- [ ] Sealing adds DSSE signature +- [ ] Verification checks ID and signature +- [ ] Store interface for persistence abstraction +- [ ] Logging for observability + +--- + +### T6: Integrate with PolicyEvaluator + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T5 + +**Description**: +Bind policy evaluation to a knowledge snapshot for reproducibility. + +**Implementation Path**: `src/Policy/StellaOps.Policy.Engine/Services/PolicyEvaluator.cs` + +**Integration**: +```csharp +public sealed class PolicyEvaluator +{ + private readonly ISnapshotService _snapshotService; + + /// + /// Evaluates policy with an explicit knowledge snapshot. + /// + public async Task EvaluateWithSnapshotAsync( + PolicyEvaluationRequest request, + KnowledgeSnapshotManifest snapshot, + CancellationToken ct = default) + { + // Verify snapshot before use + var verification = await _snapshotService.VerifySnapshotAsync(snapshot, ct); + if (!verification.IsValid) + { + return PolicyEvaluationResult.Fail( + PolicyFailureReason.InvalidSnapshot, + verification.Error); + } + + // Bind evaluation to snapshot sources + var context = await CreateEvaluationContext(request, snapshot, ct); + + // Perform evaluation with frozen inputs + var result = await EvaluateInternalAsync(context, ct); + + // Include snapshot reference in result + return result with + { + KnowledgeSnapshotId = snapshot.SnapshotId, + SnapshotCreatedAt = snapshot.CreatedAt + }; + } + + /// + /// Creates a snapshot capturing current knowledge state. 
+    ///
+    public async Task CaptureCurrentSnapshotAsync(
+        CancellationToken ct = default)
+    {
+        var builder = new SnapshotBuilder(_hasher)
+            .WithEngine("StellaOps.Policy", _version, _commit)
+            .WithPolicy(_policyRef.Id, _policyRef.Digest)
+            .WithScoring(_scoringRef.Id, _scoringRef.Digest);
+
+        // Add all active knowledge sources
+        foreach (var source in await _knowledgeSourceProvider.GetActiveSourcesAsync(ct))
+        {
+            builder.WithSource(source);
+        }
+
+        builder.CaptureCurrentEnvironment();
+
+        return await _snapshotService.CreateSnapshotAsync(builder, ct);
+    }
+}
+
+// Extended result
+public sealed record PolicyEvaluationResult
+{
+    // Existing fields...
+    public string? KnowledgeSnapshotId { get; init; }
+    public DateTimeOffset? SnapshotCreatedAt { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `EvaluateWithSnapshotAsync` method added
+- [ ] Snapshot verification before evaluation
+- [ ] Evaluation bound to snapshot sources
+- [ ] `CaptureCurrentSnapshotAsync` for snapshot creation
+- [ ] Result includes snapshot reference
+- [ ] `InvalidSnapshot` failure reason added
+
+---
+
+### T7: Add Tests
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T6
+
+**Description**:
+Add comprehensive tests for snapshot determinism and integrity.
+
+**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Tests/Snapshots/`
+
+**Test Cases**:
+```csharp
+public class SnapshotBuilderTests
+{
+    [Fact]
+    public void Build_ValidInputs_CreatesManifest()
+    {
+        var builder = new SnapshotBuilder(_hasher)
+            .WithEngine("test", "1.0", "abc123")
+            .WithPolicy("policy-1", "sha256:xxx")
+            .WithScoring("scoring-1", "sha256:yyy")
+            .WithAdvisoryFeed("nvd", "2025-12-21", "sha256:zzz");
+
+        var manifest = builder.Build();
+
+        manifest.SnapshotId.Should().StartWith("ksm:sha256:");
+        manifest.Sources.Should().HaveCount(1);
+    }
+
+    [Fact]
+    public void Build_MissingEngine_Throws()
+    {
+        var builder = new SnapshotBuilder(_hasher)
+            .WithPolicy("policy-1", "sha256:xxx")
+            .WithScoring("scoring-1", "sha256:yyy");
+
+        var act = () => builder.Build();
+
+        act.Should().Throw<InvalidOperationException>();
+    }
+}
+
+public class SnapshotIdGeneratorTests
+{
+    [Fact]
+    public void GenerateId_DeterministicForSameContent()
+    {
+        var manifest = CreateTestManifest();
+
+        var id1 = _generator.GenerateId(manifest);
+        var id2 = _generator.GenerateId(manifest);
+
+        id1.Should().Be(id2);
+    }
+
+    [Fact]
+    public void GenerateId_DifferentForDifferentContent()
+    {
+        var manifest1 = CreateTestManifest() with { CreatedAt = DateTimeOffset.UtcNow };
+        var manifest2 = CreateTestManifest() with { CreatedAt = DateTimeOffset.UtcNow.AddSeconds(1) };
+
+        var id1 = _generator.GenerateId(manifest1);
+        var id2 = _generator.GenerateId(manifest2);
+
+        id1.Should().NotBe(id2);
+    }
+
+    [Fact]
+    public void ValidateId_ValidManifest_ReturnsTrue()
+    {
+        var manifest = new SnapshotBuilder(_hasher)
+            .WithEngine("test", "1.0", "abc")
+            .WithPolicy("p", "sha256:x")
+            .WithScoring("s", "sha256:y")
+            .WithAdvisoryFeed("nvd", "2025", "sha256:z")
+            .Build();
+
+        _generator.ValidateId(manifest).Should().BeTrue();
+    }
+
+    [Fact]
+    public void ValidateId_TamperedManifest_ReturnsFalse()
+    {
+        var manifest = CreateTestManifest();
+        var tampered = manifest with { Policy = manifest.Policy with { Digest = "sha256:tampered" } };
+
+        _generator.ValidateId(tampered).Should().BeFalse();
+    }
+}
+
+public class SnapshotServiceTests
+{
+    [Fact]
+    public async Task CreateSnapshot_PersistsManifest()
+    {
+        var builder = CreateBuilder();
+
+        var manifest = await 
_service.CreateSnapshotAsync(builder);
+
+        var retrieved = await _service.GetSnapshotAsync(manifest.SnapshotId);
+        retrieved.Should().NotBeNull();
+    }
+
+    [Fact]
+    public async Task SealSnapshot_AddsSignature()
+    {
+        var manifest = await _service.CreateSnapshotAsync(CreateBuilder());
+
+        // "sealed" is a reserved C# keyword, so name the signed copy explicitly
+        var sealedManifest = await _service.SealSnapshotAsync(manifest);
+
+        sealedManifest.Signature.Should().NotBeNullOrEmpty();
+    }
+
+    [Fact]
+    public async Task VerifySnapshot_ValidSealed_ReturnsSuccess()
+    {
+        var manifest = await _service.CreateSnapshotAsync(CreateBuilder());
+        var sealedManifest = await _service.SealSnapshotAsync(manifest);
+
+        var result = await _service.VerifySnapshotAsync(sealedManifest);
+
+        result.IsValid.Should().BeTrue();
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Builder tests for valid/invalid inputs
+- [ ] ID generator determinism tests
+- [ ] ID validation tests (valid and tampered)
+- [ ] Service create/seal/verify tests
+- [ ] All 8+ tests pass
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Define KnowledgeSnapshotManifest |
+| 2 | T2 | TODO | — | Policy Team | Define KnowledgeSourceDescriptor |
+| 3 | T3 | TODO | T1, T2 | Policy Team | Create SnapshotBuilder |
+| 4 | T4 | TODO | T3 | Policy Team | Implement content-addressed ID |
+| 5 | T5 | TODO | T3, T4 | Policy Team | Create SnapshotService |
+| 6 | T6 | TODO | T5 | Policy Team | Integrate with PolicyEvaluator |
+| 7 | T7 | TODO | T6 | Policy Team | Add tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Knowledge snapshots identified as a requirement in the Knowledge Snapshots advisory. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Content-addressed ID | Decision | Policy Team | ksm:sha256:{hash} format ensures immutability | +| Canonical JSON | Decision | Policy Team | Sorted keys, no whitespace for determinism | +| Signature exclusion | Decision | Policy Team | ID computed without signature field | +| Source ordering | Decision | Policy Team | Sources sorted by name for determinism | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Snapshot IDs are content-addressed +- [ ] Manifests are deterministically serializable +- [ ] Sealing adds verifiable signatures +- [ ] Policy evaluator integrates snapshots +- [ ] 8+ snapshot tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0002_0002_replay_engine.md b/docs/implplan/SPRINT_4100_0002_0002_replay_engine.md new file mode 100644 index 000000000..b878a83a5 --- /dev/null +++ b/docs/implplan/SPRINT_4100_0002_0002_replay_engine.md @@ -0,0 +1,1589 @@ +# Sprint 4100.0002.0002 · Replay Engine + +## Topic & Scope + +- Implement time-travel replay for policy evaluations +- Enable re-evaluation with frozen knowledge inputs +- Support determinism verification and audit + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/Replay/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0002.0001 (Knowledge Snapshot Manifest) — MUST BE DONE +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4100.0001.0002, Sprint 4100.0003.0002 + +## Documentation Prerequisites + +- Sprint 4100.0002.0001 completion (KnowledgeSnapshotManifest) +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md` +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md` + +--- + +## Tasks + +### T1: Define ReplayRequest + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the request model for replay operations. + +**Implementation Path**: `Replay/ReplayRequest.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Request to replay a policy evaluation with frozen inputs. +/// +public sealed record ReplayRequest +{ + /// + /// The artifact to evaluate (same as original). + /// + public required string ArtifactDigest { get; init; } + + /// + /// ID of the knowledge snapshot to use for replay. + /// + public required string SnapshotId { get; init; } + + /// + /// Original verdict ID being replayed (for comparison). + /// + public string? OriginalVerdictId { get; init; } + + /// + /// Replay options. + /// + public ReplayOptions Options { get; init; } = ReplayOptions.Default; +} + +/// +/// Options controlling replay behavior. +/// +public sealed record ReplayOptions +{ + /// + /// Whether to compare with original verdict. + /// + public bool CompareWithOriginal { get; init; } = true; + + /// + /// Whether to allow network access for missing sources. + /// + public bool AllowNetworkFetch { get; init; } = false; + + /// + /// Whether to generate detailed diff report. + /// + public bool GenerateDetailedReport { get; init; } = true; + + /// + /// Tolerance for score differences (for floating point comparison). 
+ /// + public decimal ScoreTolerance { get; init; } = 0.001m; + + public static ReplayOptions Default { get; } = new(); +} +``` + +**Acceptance Criteria**: +- [ ] `ReplayRequest.cs` file created in `Replay/` directory +- [ ] Artifact digest and snapshot ID required +- [ ] Original verdict ID optional for comparison +- [ ] Options for controlling replay behavior +- [ ] Default options defined + +--- + +### T2: Define ReplayResult + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create the result model for replay operations. + +**Implementation Path**: `Replay/ReplayResult.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Result of a replay operation. +/// +public sealed record ReplayResult +{ + /// + /// Whether the replay matched the original verdict. + /// + public required ReplayMatchStatus MatchStatus { get; init; } + + /// + /// The verdict produced by replay. + /// + public required PolicyEvaluationResult ReplayedVerdict { get; init; } + + /// + /// The original verdict (if available for comparison). + /// + public PolicyEvaluationResult? OriginalVerdict { get; init; } + + /// + /// Detailed delta report if differences found. + /// + public ReplayDeltaReport? DeltaReport { get; init; } + + /// + /// Snapshot used for replay. + /// + public required string SnapshotId { get; init; } + + /// + /// When replay was executed. + /// + public required DateTimeOffset ReplayedAt { get; init; } + + /// + /// Duration of replay execution. + /// + public TimeSpan Duration { get; init; } +} + +/// +/// Match status between replayed and original verdict. +/// +public enum ReplayMatchStatus +{ + /// + /// Verdicts match exactly (deterministic). + /// + ExactMatch, + + /// + /// Verdicts match within tolerance. + /// + MatchWithinTolerance, + + /// + /// Verdicts differ (non-deterministic or inputs changed). + /// + Mismatch, + + /// + /// Original verdict not available for comparison. + /// + NoComparison, + + /// + /// Replay failed due to missing inputs. + /// + ReplayFailed +} + +/// +/// Detailed report of differences between replayed and original. +/// +public sealed record ReplayDeltaReport +{ + /// + /// Summary of the difference. + /// + public required string Summary { get; init; } + + /// + /// Specific fields that differ. + /// + public IReadOnlyList FieldDeltas { get; init; } = []; + + /// + /// Findings that differ. + /// + public IReadOnlyList FindingDeltas { get; init; } = []; + + /// + /// Input sources that may have caused difference. + /// + public IReadOnlyList SuspectedCauses { get; init; } = []; +} + +public sealed record FieldDelta( + string FieldName, + string OriginalValue, + string ReplayedValue); + +public sealed record FindingDelta( + string FindingId, + DeltaType Type, + string? Description); + +public enum DeltaType +{ + Added, + Removed, + Modified +} +``` + +**Acceptance Criteria**: +- [ ] `ReplayResult.cs` file created +- [ ] Match status enum with all states +- [ ] Delta report with field and finding differences +- [ ] Suspected causes for non-determinism +- [ ] Duration tracking for performance + +--- + +### T3: Create ReplayEngine Service + +**Assignee**: Policy Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Implement the core replay engine that orchestrates frozen evaluation. 
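+
+**Usage sketch (non-normative)**:
+Before the full implementation below, a minimal caller-side view of the intended flow. The engine interface and models are the ones defined in this sprint; the digests and verdict ID are placeholders:
+
+```csharp
+// Assumes an IReplayEngine implementation (defined later in this task) resolved via DI.
+var request = new ReplayRequest
+{
+    ArtifactDigest = "sha256:<artifact>",      // placeholder digest
+    SnapshotId = "ksm:sha256:<snapshot>",      // frozen knowledge inputs
+    OriginalVerdictId = "verdict-123",         // placeholder; enables comparison
+    Options = ReplayOptions.Default with { AllowNetworkFetch = false }
+};
+
+var result = await replayEngine.ReplayAsync(request, ct);
+
+// ExactMatch / MatchWithinTolerance mean the verdict reproduced from frozen inputs.
+if (result.MatchStatus == ReplayMatchStatus.Mismatch && result.DeltaReport is not null)
+{
+    Console.WriteLine(result.DeltaReport.Summary);
+}
+```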
+ +**Implementation Path**: `Replay/ReplayEngine.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Engine for replaying policy evaluations with frozen inputs. +/// +public sealed class ReplayEngine : IReplayEngine +{ + private readonly ISnapshotService _snapshotService; + private readonly IPolicyEvaluator _evaluator; + private readonly IVerdictStore _verdictStore; + private readonly IKnowledgeSourceResolver _sourceResolver; + private readonly ILogger _logger; + + public ReplayEngine( + ISnapshotService snapshotService, + IPolicyEvaluator evaluator, + IVerdictStore verdictStore, + IKnowledgeSourceResolver sourceResolver, + ILogger logger) + { + _snapshotService = snapshotService; + _evaluator = evaluator; + _verdictStore = verdictStore; + _sourceResolver = sourceResolver; + _logger = logger; + } + + /// + /// Replays a policy evaluation with frozen inputs from a snapshot. + /// + public async Task ReplayAsync( + ReplayRequest request, + CancellationToken ct = default) + { + var stopwatch = Stopwatch.StartNew(); + + _logger.LogInformation( + "Starting replay for artifact {Artifact} with snapshot {Snapshot}", + request.ArtifactDigest, request.SnapshotId); + + // Step 1: Load and verify snapshot + var snapshot = await LoadAndVerifySnapshotAsync(request.SnapshotId, ct); + if (snapshot is null) + { + return CreateFailedResult(request, "Snapshot not found or invalid"); + } + + // Step 2: Resolve frozen inputs from snapshot + var frozenInputs = await ResolveFrozenInputsAsync(snapshot, request.Options, ct); + if (!frozenInputs.IsComplete) + { + return CreateFailedResult(request, $"Missing inputs: {string.Join(", ", frozenInputs.MissingSources)}"); + } + + // Step 3: Execute evaluation with frozen inputs + var replayedVerdict = await _evaluator.EvaluateWithFrozenInputsAsync( + request.ArtifactDigest, + frozenInputs, + ct); + + // Step 4: Load original verdict for comparison (if requested) + PolicyEvaluationResult? originalVerdict = null; + if (request.OriginalVerdictId is not null && request.Options.CompareWithOriginal) + { + originalVerdict = await _verdictStore.GetAsync(request.OriginalVerdictId, ct); + } + + // Step 5: Compare and generate result + var matchStatus = CompareVerdicts(replayedVerdict, originalVerdict, request.Options); + var deltaReport = matchStatus == ReplayMatchStatus.Mismatch && request.Options.GenerateDetailedReport + ? GenerateDeltaReport(replayedVerdict, originalVerdict!) 
: null;
+
+        stopwatch.Stop();
+
+        return new ReplayResult
+        {
+            MatchStatus = matchStatus,
+            ReplayedVerdict = replayedVerdict,
+            OriginalVerdict = originalVerdict,
+            DeltaReport = deltaReport,
+            SnapshotId = request.SnapshotId,
+            ReplayedAt = DateTimeOffset.UtcNow,
+            Duration = stopwatch.Elapsed
+        };
+    }
+
+    private async Task LoadAndVerifySnapshotAsync(
+        string snapshotId, CancellationToken ct)
+    {
+        var snapshot = await _snapshotService.GetSnapshotAsync(snapshotId, ct);
+        if (snapshot is null)
+            return null;
+
+        var verification = await _snapshotService.VerifySnapshotAsync(snapshot, ct);
+        if (!verification.IsValid)
+        {
+            _logger.LogWarning("Snapshot {SnapshotId} verification failed: {Error}",
+                snapshotId, verification.Error);
+            return null;
+        }
+
+        return snapshot;
+    }
+
+    private async Task ResolveFrozenInputsAsync(
+        KnowledgeSnapshotManifest snapshot,
+        ReplayOptions options,
+        CancellationToken ct)
+    {
+        var inputs = new FrozenInputsBuilder();
+        var missingSources = new List();
+
+        foreach (var source in snapshot.Sources)
+        {
+            var resolved = await _sourceResolver.ResolveAsync(source, options.AllowNetworkFetch, ct);
+            if (resolved is not null)
+            {
+                inputs.AddSource(source.Name, resolved);
+            }
+            else
+            {
+                missingSources.Add($"{source.Name}:{source.Digest}");
+            }
+        }
+
+        return inputs.Build(missingSources);
+    }
+
+    private ReplayMatchStatus CompareVerdicts(
+        PolicyEvaluationResult replayed,
+        PolicyEvaluationResult? original,
+        ReplayOptions options)
+    {
+        if (original is null)
+            return ReplayMatchStatus.NoComparison;
+
+        // Compare decision
+        if (replayed.Decision != original.Decision)
+            return ReplayMatchStatus.Mismatch;
+
+        // Compare score: a difference beyond tolerance is a mismatch,
+        // not a match within tolerance
+        var scoreDiff = Math.Abs(replayed.Score - original.Score);
+        if (scoreDiff > options.ScoreTolerance)
+            return ReplayMatchStatus.Mismatch;
+
+        // Compare findings
+        if (!FindingsMatch(replayed.Findings, original.Findings))
+            return ReplayMatchStatus.Mismatch;
+
+        // Same decision and findings; score is identical or within tolerance
+        return scoreDiff == 0
+            ? ReplayMatchStatus.ExactMatch
+            : ReplayMatchStatus.MatchWithinTolerance;
+    }
+
+    private bool FindingsMatch(
+        IReadOnlyList replayed,
+        IReadOnlyList original)
+    {
+        if (replayed.Count != original.Count)
+            return false;
+
+        var replayedIds = replayed.Select(f => f.Id).OrderBy(x => x).ToList();
+        var originalIds = original.Select(f => f.Id).OrderBy(x => x).ToList();
+
+        return replayedIds.SequenceEqual(originalIds);
+    }
+
+    private ReplayDeltaReport GenerateDeltaReport(
+        PolicyEvaluationResult replayed,
+        PolicyEvaluationResult original)
+    {
+        var fieldDeltas = new List();
+        var findingDeltas = new List();
+        var suspectedCauses = new List();
+
+        // Compare scalar fields
+        if (replayed.Decision != original.Decision)
+            fieldDeltas.Add(new FieldDelta("Decision", original.Decision.ToString(), replayed.Decision.ToString()));
+
+        if (replayed.Score != original.Score)
+            fieldDeltas.Add(new FieldDelta("Score", original.Score.ToString(), replayed.Score.ToString()));
+
+        // Compare findings
+        var replayedIds = replayed.Findings.Select(f => f.Id).ToHashSet();
+        var originalIds = original.Findings.Select(f => f.Id).ToHashSet();
+
+        foreach (var added in replayedIds.Except(originalIds))
+            findingDeltas.Add(new FindingDelta(added, DeltaType.Added, null));
+
+        foreach (var removed in originalIds.Except(replayedIds))
+            findingDeltas.Add(new FindingDelta(removed, DeltaType.Removed, null));
+
+        // Infer suspected causes
+        if (findingDeltas.Count > 0)
+            suspectedCauses.Add("Advisory data differences");
+
+        return new ReplayDeltaReport
+        {
+            Summary = $"{fieldDeltas.Count} field(s) and {findingDeltas.Count} finding(s) differ",
FieldDeltas = fieldDeltas, + FindingDeltas = findingDeltas, + SuspectedCauses = suspectedCauses + }; + } + + private ReplayResult CreateFailedResult(ReplayRequest request, string error) + { + _logger.LogWarning("Replay failed for {Artifact}: {Error}", + request.ArtifactDigest, error); + + return new ReplayResult + { + MatchStatus = ReplayMatchStatus.ReplayFailed, + ReplayedVerdict = PolicyEvaluationResult.Empty, + SnapshotId = request.SnapshotId, + ReplayedAt = DateTimeOffset.UtcNow, + DeltaReport = new ReplayDeltaReport + { + Summary = error, + SuspectedCauses = new[] { error } + } + }; + } +} + +public interface IReplayEngine +{ + Task ReplayAsync(ReplayRequest request, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `ReplayEngine.cs` created in `Replay/` +- [ ] Snapshot loading and verification +- [ ] Frozen input resolution from snapshot sources +- [ ] Evaluation with frozen inputs +- [ ] Verdict comparison with configurable tolerance +- [ ] Delta report generation for mismatches +- [ ] Logging for observability + +--- + +### T4: Implement Input Resolution + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T3 + +**Description**: +Implement resolution of exact inputs from snapshot sources. + +**Implementation Path**: `Replay/KnowledgeSourceResolver.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Resolves knowledge sources from snapshot descriptors. +/// +public sealed class KnowledgeSourceResolver : IKnowledgeSourceResolver +{ + private readonly ISnapshotStore _snapshotStore; + private readonly IAdvisoryFeedStore _feedStore; + private readonly IVexStore _vexStore; + private readonly IHttpClientFactory _httpClientFactory; + private readonly ILogger _logger; + + public KnowledgeSourceResolver( + ISnapshotStore snapshotStore, + IAdvisoryFeedStore feedStore, + IVexStore vexStore, + IHttpClientFactory httpClientFactory, + ILogger logger) + { + _snapshotStore = snapshotStore; + _feedStore = feedStore; + _vexStore = vexStore; + _httpClientFactory = httpClientFactory; + _logger = logger; + } + + /// + /// Resolves a knowledge source to its actual content. 
+ /// + public async Task ResolveAsync( + KnowledgeSourceDescriptor descriptor, + bool allowNetworkFetch, + CancellationToken ct = default) + { + _logger.LogDebug("Resolving source {Name} ({Type})", descriptor.Name, descriptor.Type); + + // Try bundled content first + if (descriptor.InclusionMode != SourceInclusionMode.Referenced && + descriptor.BundlePath is not null) + { + var bundled = await ResolveBundledAsync(descriptor, ct); + if (bundled is not null) + return bundled; + } + + // Try local store by digest + var local = await ResolveFromLocalStoreAsync(descriptor, ct); + if (local is not null) + return local; + + // Try network fetch if allowed + if (allowNetworkFetch && descriptor.Origin is not null) + { + var fetched = await FetchFromOriginAsync(descriptor, ct); + if (fetched is not null) + return fetched; + } + + _logger.LogWarning("Failed to resolve source {Name} with digest {Digest}", + descriptor.Name, descriptor.Digest); + + return null; + } + + private async Task ResolveBundledAsync( + KnowledgeSourceDescriptor descriptor, + CancellationToken ct) + { + try + { + var content = await _snapshotStore.GetBundledContentAsync( + descriptor.BundlePath!, ct); + + if (content is null) + return null; + + // Verify digest + var actualDigest = ComputeDigest(content); + if (actualDigest != descriptor.Digest) + { + _logger.LogWarning( + "Bundled source {Name} digest mismatch: expected {Expected}, got {Actual}", + descriptor.Name, descriptor.Digest, actualDigest); + return null; + } + + return new ResolvedSource( + descriptor.Name, + descriptor.Type, + content, + SourceResolutionMethod.Bundled); + } + catch (Exception ex) + { + _logger.LogWarning(ex, "Failed to resolve bundled source {Name}", descriptor.Name); + return null; + } + } + + private async Task ResolveFromLocalStoreAsync( + KnowledgeSourceDescriptor descriptor, + CancellationToken ct) + { + return descriptor.Type switch + { + "advisory-feed" => await ResolveFeedAsync(descriptor, ct), + "vex" => await ResolveVexAsync(descriptor, ct), + _ => null + }; + } + + private async Task ResolveFeedAsync( + KnowledgeSourceDescriptor descriptor, + CancellationToken ct) + { + var feed = await _feedStore.GetByDigestAsync(descriptor.Digest, ct); + if (feed is null) + return null; + + return new ResolvedSource( + descriptor.Name, + descriptor.Type, + feed.Content, + SourceResolutionMethod.LocalStore); + } + + private async Task ResolveVexAsync( + KnowledgeSourceDescriptor descriptor, + CancellationToken ct) + { + var vex = await _vexStore.GetByDigestAsync(descriptor.Digest, ct); + if (vex is null) + return null; + + return new ResolvedSource( + descriptor.Name, + descriptor.Type, + vex.Content, + SourceResolutionMethod.LocalStore); + } + + private async Task FetchFromOriginAsync( + KnowledgeSourceDescriptor descriptor, + CancellationToken ct) + { + try + { + var client = _httpClientFactory.CreateClient("replay"); + var response = await client.GetAsync(descriptor.Origin, ct); + response.EnsureSuccessStatusCode(); + + var content = await response.Content.ReadAsByteArrayAsync(ct); + + // Verify digest + var actualDigest = ComputeDigest(content); + if (actualDigest != descriptor.Digest) + { + _logger.LogWarning( + "Fetched source {Name} digest mismatch: expected {Expected}, got {Actual}", + descriptor.Name, descriptor.Digest, actualDigest); + return null; + } + + return new ResolvedSource( + descriptor.Name, + descriptor.Type, + content, + SourceResolutionMethod.NetworkFetch); + } + catch (Exception ex) + { + _logger.LogWarning(ex, "Failed to fetch 
source {Name} from {Origin}", + descriptor.Name, descriptor.Origin); + return null; + } + } + + private static string ComputeDigest(byte[] content) + { + var hash = SHA256.HashData(content); + return $"sha256:{Convert.ToHexString(hash).ToLowerInvariant()}"; + } +} + +public sealed record ResolvedSource( + string Name, + string Type, + byte[] Content, + SourceResolutionMethod Method); + +public enum SourceResolutionMethod +{ + Bundled, + LocalStore, + NetworkFetch +} + +public interface IKnowledgeSourceResolver +{ + Task ResolveAsync( + KnowledgeSourceDescriptor descriptor, + bool allowNetworkFetch, + CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `KnowledgeSourceResolver.cs` created +- [ ] Resolution order: bundled → local store → network +- [ ] Digest verification on all resolved content +- [ ] Network fetch controlled by flag +- [ ] Resolution method tracked for audit +- [ ] Logging for observability + +--- + +### T5: Implement Comparison Logic + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T3 + +**Description**: +Implement detailed comparison logic to detect determinism violations. + +**Implementation Path**: `Replay/VerdictComparer.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Compares policy evaluation results for determinism verification. +/// +public sealed class VerdictComparer : IVerdictComparer +{ + /// + /// Compares two verdicts and returns detailed comparison result. + /// + public VerdictComparisonResult Compare( + PolicyEvaluationResult replayed, + PolicyEvaluationResult original, + VerdictComparisonOptions options) + { + var differences = new List(); + + // Compare decision + if (replayed.Decision != original.Decision) + { + differences.Add(new VerdictDifference( + "Decision", + DifferenceCategory.Critical, + original.Decision.ToString(), + replayed.Decision.ToString())); + } + + // Compare score with tolerance + var scoreDiff = Math.Abs(replayed.Score - original.Score); + if (scoreDiff > options.ScoreTolerance) + { + differences.Add(new VerdictDifference( + "Score", + scoreDiff > options.CriticalScoreTolerance + ? 
DifferenceCategory.Critical + : DifferenceCategory.Minor, + original.Score.ToString("F4"), + replayed.Score.ToString("F4"))); + } + + // Compare findings + var findingDiffs = CompareFindingLists(replayed.Findings, original.Findings); + differences.AddRange(findingDiffs); + + // Compare unknowns summary + if (replayed.UnknownBudgetStatus is not null && original.UnknownBudgetStatus is not null) + { + var unknownDiffs = CompareUnknownsSummary( + replayed.UnknownBudgetStatus, + original.UnknownBudgetStatus); + differences.AddRange(unknownDiffs); + } + + // Determine overall match status + var matchStatus = DetermineMatchStatus(differences, options); + + return new VerdictComparisonResult + { + MatchStatus = matchStatus, + Differences = differences, + IsDeterministic = matchStatus == ReplayMatchStatus.ExactMatch, + DeterminismConfidence = CalculateDeterminismConfidence(differences) + }; + } + + private IEnumerable CompareFindingLists( + IReadOnlyList replayed, + IReadOnlyList original) + { + var replayedMap = replayed.ToDictionary(f => f.Id); + var originalMap = original.ToDictionary(f => f.Id); + + // Findings added in replay + foreach (var id in replayedMap.Keys.Except(originalMap.Keys)) + { + yield return new VerdictDifference( + $"Finding:{id}", + DifferenceCategory.Finding, + "absent", + "present"); + } + + // Findings removed in replay + foreach (var id in originalMap.Keys.Except(replayedMap.Keys)) + { + yield return new VerdictDifference( + $"Finding:{id}", + DifferenceCategory.Finding, + "present", + "absent"); + } + + // Findings present in both - compare details + foreach (var id in replayedMap.Keys.Intersect(originalMap.Keys)) + { + var replayedFinding = replayedMap[id]; + var originalFinding = originalMap[id]; + + if (replayedFinding.Severity != originalFinding.Severity) + { + yield return new VerdictDifference( + $"Finding:{id}:Severity", + DifferenceCategory.Minor, + originalFinding.Severity.ToString(), + replayedFinding.Severity.ToString()); + } + } + } + + private IEnumerable CompareUnknownsSummary( + BudgetStatusSummary replayed, + BudgetStatusSummary original) + { + if (replayed.TotalUnknowns != original.TotalUnknowns) + { + yield return new VerdictDifference( + "Unknowns:Total", + DifferenceCategory.Minor, + original.TotalUnknowns.ToString(), + replayed.TotalUnknowns.ToString()); + } + + if (replayed.IsExceeded != original.IsExceeded) + { + yield return new VerdictDifference( + "Unknowns:BudgetExceeded", + DifferenceCategory.Critical, + original.IsExceeded.ToString(), + replayed.IsExceeded.ToString()); + } + } + + private ReplayMatchStatus DetermineMatchStatus( + List differences, + VerdictComparisonOptions options) + { + if (differences.Count == 0) + return ReplayMatchStatus.ExactMatch; + + if (differences.Any(d => d.Category == DifferenceCategory.Critical)) + return ReplayMatchStatus.Mismatch; + + if (options.TreatMinorAsMatch && + differences.All(d => d.Category == DifferenceCategory.Minor)) + return ReplayMatchStatus.MatchWithinTolerance; + + return ReplayMatchStatus.Mismatch; + } + + private decimal CalculateDeterminismConfidence(List differences) + { + if (differences.Count == 0) + return 1.0m; + + var criticalCount = differences.Count(d => d.Category == DifferenceCategory.Critical); + var minorCount = differences.Count(d => d.Category == DifferenceCategory.Minor); + + // Simple confidence calculation + var penalty = (criticalCount * 0.3m) + (minorCount * 0.05m); + return Math.Max(0, 1.0m - penalty); + } +} + +public sealed record VerdictComparisonResult +{ + public 
required ReplayMatchStatus MatchStatus { get; init; } + public required IReadOnlyList Differences { get; init; } + public required bool IsDeterministic { get; init; } + public required decimal DeterminismConfidence { get; init; } +} + +public sealed record VerdictDifference( + string Field, + DifferenceCategory Category, + string OriginalValue, + string ReplayedValue); + +public enum DifferenceCategory +{ + Critical, + Minor, + Finding +} + +public sealed record VerdictComparisonOptions +{ + public decimal ScoreTolerance { get; init; } = 0.001m; + public decimal CriticalScoreTolerance { get; init; } = 0.1m; + public bool TreatMinorAsMatch { get; init; } = true; + + public static VerdictComparisonOptions Default { get; } = new(); +} + +public interface IVerdictComparer +{ + VerdictComparisonResult Compare( + PolicyEvaluationResult replayed, + PolicyEvaluationResult original, + VerdictComparisonOptions options); +} +``` + +**Acceptance Criteria**: +- [ ] `VerdictComparer.cs` created +- [ ] Decision comparison (critical difference) +- [ ] Score comparison with configurable tolerance +- [ ] Finding list comparison (added/removed/modified) +- [ ] Unknowns summary comparison +- [ ] Determinism confidence calculation +- [ ] Difference categorization (critical vs minor) + +--- + +### T6: Create ReplayReport + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T5 + +**Description**: +Create detailed report format for replay results. + +**Implementation Path**: `Replay/ReplayReport.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Replay; + +/// +/// Detailed report of a replay operation. +/// +public sealed record ReplayReport +{ + /// + /// Report ID for reference. + /// + public required string ReportId { get; init; } + + /// + /// When the report was generated. + /// + public required DateTimeOffset GeneratedAt { get; init; } + + /// + /// Artifact that was evaluated. + /// + public required string ArtifactDigest { get; init; } + + /// + /// Snapshot used for replay. + /// + public required string SnapshotId { get; init; } + + /// + /// Original verdict ID (if compared). + /// + public string? OriginalVerdictId { get; init; } + + /// + /// Overall match status. + /// + public required ReplayMatchStatus MatchStatus { get; init; } + + /// + /// Whether the evaluation is deterministic. + /// + public required bool IsDeterministic { get; init; } + + /// + /// Confidence level in determinism (0.0 to 1.0). + /// + public required decimal DeterminismConfidence { get; init; } + + /// + /// Summary of differences found. + /// + public required DifferenceSummary Differences { get; init; } + + /// + /// Input resolution details. + /// + public required InputResolutionSummary InputResolution { get; init; } + + /// + /// Execution timing. + /// + public required ExecutionTiming Timing { get; init; } + + /// + /// Recommendations based on results. 
+ /// + public IReadOnlyList Recommendations { get; init; } = []; +} + +public sealed record DifferenceSummary +{ + public int TotalDifferences { get; init; } + public int CriticalDifferences { get; init; } + public int MinorDifferences { get; init; } + public int FindingDifferences { get; init; } + public IReadOnlyList TopDifferences { get; init; } = []; +} + +public sealed record InputResolutionSummary +{ + public int TotalSources { get; init; } + public int ResolvedFromBundle { get; init; } + public int ResolvedFromLocalStore { get; init; } + public int ResolvedFromNetwork { get; init; } + public int FailedToResolve { get; init; } + public IReadOnlyList MissingSources { get; init; } = []; +} + +public sealed record ExecutionTiming +{ + public TimeSpan TotalDuration { get; init; } + public TimeSpan SnapshotLoadTime { get; init; } + public TimeSpan InputResolutionTime { get; init; } + public TimeSpan EvaluationTime { get; init; } + public TimeSpan ComparisonTime { get; init; } +} + +/// +/// Builder for creating replay reports. +/// +public sealed class ReplayReportBuilder +{ + private readonly ReplayResult _result; + private readonly ReplayRequest _request; + private readonly List _recommendations = []; + + public ReplayReportBuilder(ReplayRequest request, ReplayResult result) + { + _request = request; + _result = result; + } + + public ReplayReportBuilder AddRecommendation(string recommendation) + { + _recommendations.Add(recommendation); + return this; + } + + public ReplayReportBuilder AddRecommendationsFromResult() + { + if (_result.MatchStatus == ReplayMatchStatus.Mismatch) + { + _recommendations.Add("Review the delta report to identify non-deterministic behavior"); + _recommendations.Add("Check if advisory feeds have been updated since the original evaluation"); + } + + if (_result.MatchStatus == ReplayMatchStatus.ReplayFailed) + { + _recommendations.Add("Ensure the snapshot bundle is complete and accessible"); + _recommendations.Add("Consider enabling network fetch for missing sources"); + } + + return this; + } + + public ReplayReport Build() + { + return new ReplayReport + { + ReportId = $"rpt:{Guid.NewGuid():N}", + GeneratedAt = DateTimeOffset.UtcNow, + ArtifactDigest = _request.ArtifactDigest, + SnapshotId = _request.SnapshotId, + OriginalVerdictId = _request.OriginalVerdictId, + MatchStatus = _result.MatchStatus, + IsDeterministic = _result.MatchStatus == ReplayMatchStatus.ExactMatch, + DeterminismConfidence = CalculateConfidence(), + Differences = BuildDifferenceSummary(), + InputResolution = BuildInputResolutionSummary(), + Timing = BuildExecutionTiming(), + Recommendations = _recommendations + }; + } + + private decimal CalculateConfidence() => + _result.MatchStatus switch + { + ReplayMatchStatus.ExactMatch => 1.0m, + ReplayMatchStatus.MatchWithinTolerance => 0.9m, + ReplayMatchStatus.Mismatch => 0.0m, + ReplayMatchStatus.NoComparison => 0.5m, + ReplayMatchStatus.ReplayFailed => 0.0m, + _ => 0.5m + }; + + private DifferenceSummary BuildDifferenceSummary() + { + if (_result.DeltaReport is null) + return new DifferenceSummary(); + + var fieldDeltas = _result.DeltaReport.FieldDeltas; + var findingDeltas = _result.DeltaReport.FindingDeltas; + + return new DifferenceSummary + { + TotalDifferences = fieldDeltas.Count + findingDeltas.Count, + CriticalDifferences = fieldDeltas.Count(d => d.FieldName is "Decision" or "Score"), + MinorDifferences = fieldDeltas.Count(d => d.FieldName is not "Decision" and not "Score"), + FindingDifferences = findingDeltas.Count + }; + } + + private 
InputResolutionSummary BuildInputResolutionSummary() + { + // This would be populated from actual resolution data + return new InputResolutionSummary + { + TotalSources = 0, + ResolvedFromBundle = 0, + ResolvedFromLocalStore = 0, + ResolvedFromNetwork = 0, + FailedToResolve = 0, + MissingSources = _result.DeltaReport?.SuspectedCauses ?? [] + }; + } + + private ExecutionTiming BuildExecutionTiming() + { + return new ExecutionTiming + { + TotalDuration = _result.Duration + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `ReplayReport.cs` created +- [ ] Comprehensive report structure with all metadata +- [ ] Difference summary with categorization +- [ ] Input resolution summary +- [ ] Execution timing breakdown +- [ ] Recommendation generation +- [ ] Report builder for easy construction + +--- + +### T7: Add CLI Command + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T3, T6 + +**Description**: +Add CLI command for replay operations. + +**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/ReplayCommand.cs` + +**Implementation**: +```csharp +namespace StellaOps.Cli.Commands; + +[Command("replay", Description = "Replay a policy evaluation with frozen inputs")] +public class ReplayCommand : ICommand +{ + [Option("--verdict", Description = "Original verdict ID to replay")] + public string? VerdictId { get; set; } + + [Option("--snapshot", Description = "Snapshot ID to use")] + public string? SnapshotId { get; set; } + + [Option("--artifact", Description = "Artifact digest to evaluate")] + public string? ArtifactDigest { get; set; } + + [Option("--allow-network", Description = "Allow network fetch for missing sources")] + public bool AllowNetwork { get; set; } + + [Option("--output", Description = "Output format: text, json, or report")] + public string Output { get; set; } = "text"; + + [Option("--report-file", Description = "Write detailed report to file")] + public string? 
ReportFile { get; set; } + + private readonly IReplayEngine _replayEngine; + private readonly IVerdictStore _verdictStore; + private readonly IConsole _console; + + public async Task ExecuteAsync(CancellationToken ct) + { + // Resolve inputs + var request = await BuildRequestAsync(ct); + if (request is null) + { + _console.WriteError("Could not determine replay parameters"); + return; + } + + // Execute replay + _console.WriteLine($"Replaying evaluation for {request.ArtifactDigest}..."); + var result = await _replayEngine.ReplayAsync(request, ct); + + // Generate report + var report = new ReplayReportBuilder(request, result) + .AddRecommendationsFromResult() + .Build(); + + // Output results + switch (Output.ToLowerInvariant()) + { + case "json": + OutputJson(result); + break; + case "report": + OutputReport(report); + break; + default: + OutputText(result, report); + break; + } + + // Write report file if requested + if (ReportFile is not null) + { + var json = JsonSerializer.Serialize(report, new JsonSerializerOptions { WriteIndented = true }); + await File.WriteAllTextAsync(ReportFile, json, ct); + _console.WriteLine($"Report written to {ReportFile}"); + } + } + + private async Task BuildRequestAsync(CancellationToken ct) + { + // If verdict ID provided, load verdict to get artifact and snapshot + if (VerdictId is not null) + { + var verdict = await _verdictStore.GetAsync(VerdictId, ct); + if (verdict is null) + { + _console.WriteError($"Verdict {VerdictId} not found"); + return null; + } + + return new ReplayRequest + { + ArtifactDigest = verdict.ArtifactDigest, + SnapshotId = verdict.KnowledgeSnapshotId ?? SnapshotId ?? throw new InvalidOperationException("Snapshot ID required"), + OriginalVerdictId = VerdictId, + Options = new ReplayOptions { AllowNetworkFetch = AllowNetwork } + }; + } + + // Otherwise, require explicit artifact and snapshot + if (ArtifactDigest is null || SnapshotId is null) + { + _console.WriteError("Either --verdict or both --artifact and --snapshot required"); + return null; + } + + return new ReplayRequest + { + ArtifactDigest = ArtifactDigest, + SnapshotId = SnapshotId, + Options = new ReplayOptions { AllowNetworkFetch = AllowNetwork } + }; + } + + private void OutputText(ReplayResult result, ReplayReport report) + { + var statusSymbol = result.MatchStatus switch + { + ReplayMatchStatus.ExactMatch => "[OK]", + ReplayMatchStatus.MatchWithinTolerance => "[~OK]", + ReplayMatchStatus.Mismatch => "[MISMATCH]", + ReplayMatchStatus.NoComparison => "[N/A]", + ReplayMatchStatus.ReplayFailed => "[FAILED]", + _ => "[?]" + }; + + _console.WriteLine($"Replay Status: {statusSymbol} {result.MatchStatus}"); + _console.WriteLine($"Determinism Confidence: {report.DeterminismConfidence:P0}"); + _console.WriteLine($"Duration: {result.Duration.TotalMilliseconds:F0}ms"); + + if (result.DeltaReport is not null && result.DeltaReport.FieldDeltas.Count > 0) + { + _console.WriteLine("\nDifferences:"); + foreach (var delta in result.DeltaReport.FieldDeltas) + { + _console.WriteLine($" {delta.FieldName}: {delta.OriginalValue} → {delta.ReplayedValue}"); + } + } + + if (report.Recommendations.Count > 0) + { + _console.WriteLine("\nRecommendations:"); + foreach (var rec in report.Recommendations) + { + _console.WriteLine($" - {rec}"); + } + } + } + + private void OutputJson(ReplayResult result) + { + var json = JsonSerializer.Serialize(result, new JsonSerializerOptions { WriteIndented = true }); + _console.WriteLine(json); + } + + private void OutputReport(ReplayReport report) + { + var json 
= JsonSerializer.Serialize(report, new JsonSerializerOptions { WriteIndented = true }); + _console.WriteLine(json); + } +} +``` + +**Acceptance Criteria**: +- [ ] `ReplayCommand.cs` created in CLI +- [ ] `stella replay --verdict ` command works +- [ ] `stella replay --artifact --snapshot ` works +- [ ] `--allow-network` flag for network fetch +- [ ] Multiple output formats (text, json, report) +- [ ] Report file export with `--report-file` + +--- + +### T8: Add Golden Replay Tests + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T3, T5 + +**Description**: +Add golden tests verifying replay determinism. + +**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Tests/Replay/` + +**Test Cases**: +```csharp +public class ReplayEngineTests +{ + [Fact] + public async Task Replay_SameInputs_ProducesExactMatch() + { + // Arrange + var snapshot = await CreateSnapshotAsync(); + var originalVerdict = await _evaluator.EvaluateWithSnapshotAsync(CreateRequest(), snapshot); + await _verdictStore.SaveAsync(originalVerdict); + + var request = new ReplayRequest + { + ArtifactDigest = originalVerdict.ArtifactDigest, + SnapshotId = snapshot.SnapshotId, + OriginalVerdictId = originalVerdict.VerdictId + }; + + // Act + var result = await _replayEngine.ReplayAsync(request); + + // Assert + result.MatchStatus.Should().Be(ReplayMatchStatus.ExactMatch); + result.ReplayedVerdict.Decision.Should().Be(originalVerdict.Decision); + result.ReplayedVerdict.Score.Should().Be(originalVerdict.Score); + } + + [Fact] + public async Task Replay_MissingSource_FailsGracefully() + { + // Arrange + var snapshot = CreateSnapshotWithMissingSource(); + var request = new ReplayRequest + { + ArtifactDigest = "sha256:abc", + SnapshotId = snapshot.SnapshotId, + Options = new ReplayOptions { AllowNetworkFetch = false } + }; + + // Act + var result = await _replayEngine.ReplayAsync(request); + + // Assert + result.MatchStatus.Should().Be(ReplayMatchStatus.ReplayFailed); + result.DeltaReport?.SuspectedCauses.Should().Contain("Missing inputs"); + } + + [Fact] + public async Task Replay_DifferentAdvisoryData_DetectsMismatch() + { + // Arrange + var originalSnapshot = await CreateSnapshotAsync(); + var originalVerdict = await _evaluator.EvaluateWithSnapshotAsync(CreateRequest(), originalSnapshot); + + // Create new snapshot with different advisory data + var newSnapshot = await CreateSnapshotWithUpdatedAdvisoriesAsync(); + + var request = new ReplayRequest + { + ArtifactDigest = originalVerdict.ArtifactDigest, + SnapshotId = newSnapshot.SnapshotId, + OriginalVerdictId = originalVerdict.VerdictId + }; + + // Act + var result = await _replayEngine.ReplayAsync(request); + + // Assert + result.MatchStatus.Should().Be(ReplayMatchStatus.Mismatch); + result.DeltaReport.Should().NotBeNull(); + } + + [Fact] + public async Task Replay_100Iterations_AllDeterministic() + { + // Arrange + var snapshot = await CreateSnapshotAsync(); + var request = new ReplayRequest + { + ArtifactDigest = "sha256:test", + SnapshotId = snapshot.SnapshotId + }; + + // Act + var results = new List(); + for (int i = 0; i < 100; i++) + { + results.Add(await _replayEngine.ReplayAsync(request)); + } + + // Assert + var firstScore = results[0].ReplayedVerdict.Score; + var firstDecision = results[0].ReplayedVerdict.Decision; + + results.Should().AllSatisfy(r => + { + r.ReplayedVerdict.Score.Should().Be(firstScore); + r.ReplayedVerdict.Decision.Should().Be(firstDecision); + }); + } +} + +public class VerdictComparerTests +{ + [Fact] + 
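+    // NOTE: `_comparer` and the CreateVerdict/CreateVerdictWithFindings helpers used
+    // below are assumed test-fixture members (they are not defined in this sprint
+    // doc); a minimal CreateVerdict would return verdicts that differ only in the
+    // field under test, e.g. CreateVerdict(decision: PolicyDecision.Pass, score: 85.5m).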
public void Compare_IdenticalVerdicts_ReturnsExactMatch() + { + var verdict = CreateVerdict(decision: PolicyDecision.Pass, score: 85.5m); + + var result = _comparer.Compare(verdict, verdict, VerdictComparisonOptions.Default); + + result.MatchStatus.Should().Be(ReplayMatchStatus.ExactMatch); + result.IsDeterministic.Should().BeTrue(); + result.DeterminismConfidence.Should().Be(1.0m); + } + + [Fact] + public void Compare_DifferentDecisions_ReturnsMismatch() + { + var original = CreateVerdict(decision: PolicyDecision.Pass); + var replayed = CreateVerdict(decision: PolicyDecision.Fail); + + var result = _comparer.Compare(replayed, original, VerdictComparisonOptions.Default); + + result.MatchStatus.Should().Be(ReplayMatchStatus.Mismatch); + result.Differences.Should().Contain(d => d.Field == "Decision"); + } + + [Fact] + public void Compare_ScoreWithinTolerance_ReturnsMatch() + { + var original = CreateVerdict(score: 85.5000m); + var replayed = CreateVerdict(score: 85.5005m); + + var result = _comparer.Compare(replayed, original, + new VerdictComparisonOptions { ScoreTolerance = 0.001m }); + + result.MatchStatus.Should().Be(ReplayMatchStatus.MatchWithinTolerance); + } + + [Fact] + public void Compare_DifferentFindings_DetectsChanges() + { + var original = CreateVerdictWithFindings("CVE-2024-001", "CVE-2024-002"); + var replayed = CreateVerdictWithFindings("CVE-2024-001", "CVE-2024-003"); + + var result = _comparer.Compare(replayed, original, VerdictComparisonOptions.Default); + + result.MatchStatus.Should().Be(ReplayMatchStatus.Mismatch); + result.Differences.Should().Contain(d => d.Field == "Finding:CVE-2024-002"); + result.Differences.Should().Contain(d => d.Field == "Finding:CVE-2024-003"); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for exact match with same inputs +- [ ] Test for failure with missing sources +- [ ] Test for mismatch detection with different advisories +- [ ] Stress test: 100 iterations all deterministic +- [ ] Verdict comparer tests for all cases +- [ ] All 10+ golden tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Policy Team | Define ReplayRequest | +| 2 | T2 | TODO | T1 | Policy Team | Define ReplayResult | +| 3 | T3 | TODO | T1, T2 | Policy Team | Create ReplayEngine service | +| 4 | T4 | TODO | T3 | Policy Team | Implement input resolution | +| 5 | T5 | TODO | T3 | Policy Team | Implement comparison logic | +| 6 | T6 | TODO | T5 | Policy Team | Create ReplayReport | +| 7 | T7 | TODO | T3, T6 | CLI Team | Add CLI command | +| 8 | T8 | TODO | T3, T5 | Policy Team | Add golden replay tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Replay Engine identified as requirement from Knowledge Snapshots advisory. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Score tolerance | Decision | Policy Team | 0.001 default allows for floating point variance | +| Network fetch default | Decision | Policy Team | Disabled by default for air-gap safety | +| Determinism confidence | Decision | Policy Team | Simple penalty-based calculation; tune with data | +| Source resolution order | Decision | Policy Team | Bundled → local → network for performance/offline | + +--- + +## Success Criteria + +- [ ] All 8 tasks marked DONE +- [ ] Replay produces exact match for same inputs +- [ ] Missing sources handled gracefully +- [ ] Detailed delta reports generated +- [ ] CLI command works with --verdict and --snapshot +- [ ] 10+ golden replay tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0002_0003_snapshot_export_import.md b/docs/implplan/SPRINT_4100_0002_0003_snapshot_export_import.md new file mode 100644 index 000000000..a077d4baf --- /dev/null +++ b/docs/implplan/SPRINT_4100_0002_0003_snapshot_export_import.md @@ -0,0 +1,1180 @@ +# Sprint 4100.0002.0003 · Snapshot Export/Import + +## Topic & Scope + +- Enable portable snapshot bundles for air-gapped replay +- Implement export with selectable inclusion levels +- Implement import with integrity verification + +**Working directory:** `src/ExportCenter/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0002.0001 (Knowledge Snapshot Manifest) — MUST BE DONE +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4100.0001.0003, Sprint 4100.0004.0001 + +## Documentation Prerequisites + +- Sprint 4100.0002.0001 completion (KnowledgeSnapshotManifest, KnowledgeSourceDescriptor) +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md` +- `docs/24_OFFLINE_KIT.md` + +--- + +## Tasks + +### T1: Define SnapshotBundle Format + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Define the ZIP bundle structure for portable snapshots. + +**Implementation Path**: `StellaOps.ExportCenter/Snapshots/SnapshotBundle.cs` (new file) + +**Bundle Structure**: +``` +snapshot-{id}.zip +├── manifest.json # KnowledgeSnapshotManifest +├── manifest.dsse.json # DSSE-signed envelope of manifest +├── sources/ +│ ├── nvd-2025-12-21.jsonl.gz +│ ├── osv-2025-12-21.jsonl.gz +│ ├── vex-sha256-abc.json +│ └── ... +├── policy/ +│ └── policy-sha256-xyz.json +├── scoring/ +│ └── scoring-sha256-def.json +├── trust/ +│ └── trust-sha256-ghi.pem +└── META/ + ├── BUNDLE_INFO.json # Bundle metadata + └── CHECKSUMS.sha256 # All file checksums +``` + +**Model Definition**: +```csharp +namespace StellaOps.ExportCenter.Snapshots; + +/// +/// Represents a portable snapshot bundle. +/// +public sealed record SnapshotBundle +{ + /// + /// The snapshot manifest. + /// + public required KnowledgeSnapshotManifest Manifest { get; init; } + + /// + /// Signed envelope of the manifest (if sealed). + /// + public string? SignedEnvelope { get; init; } + + /// + /// Bundle metadata. + /// + public required BundleInfo Info { get; init; } + + /// + /// Source files included in the bundle. + /// + public required IReadOnlyList Sources { get; init; } + + /// + /// Policy bundle file. + /// + public BundledFile? Policy { get; init; } + + /// + /// Scoring rules file. + /// + public BundledFile? Scoring { get; init; } + + /// + /// Trust bundle file. 
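+    /// Per the T4 defaults, bundled only for Sealed-level exports.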
+ /// + public BundledFile? Trust { get; init; } +} + +/// +/// Metadata about the bundle. +/// +public sealed record BundleInfo +{ + public required string BundleId { get; init; } + public required DateTimeOffset CreatedAt { get; init; } + public required string CreatedBy { get; init; } + public required SnapshotInclusionLevel InclusionLevel { get; init; } + public required long TotalSizeBytes { get; init; } + public required int FileCount { get; init; } + public string? Description { get; init; } +} + +/// +/// A file included in the bundle. +/// +public sealed record BundledFile( + string Path, + string Digest, + long SizeBytes, + bool IsCompressed); + +/// +/// Level of content inclusion in the bundle. +/// +public enum SnapshotInclusionLevel +{ + /// + /// Only manifest with content digests (requires network for replay). + /// + ReferenceOnly, + + /// + /// Manifest plus essential sources for offline replay. + /// + Portable, + + /// + /// Full bundle with all sources, sealed and signed. + /// + Sealed +} +``` + +**Acceptance Criteria**: +- [ ] `SnapshotBundle.cs` created with all models +- [ ] ZIP structure documented +- [ ] Three inclusion levels defined +- [ ] Checksums file format specified +- [ ] All paths are relative within bundle + +--- + +### T2: Implement ExportSnapshotService + +**Assignee**: ExportCenter Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement service to create portable snapshot bundles. + +**Implementation Path**: `StellaOps.ExportCenter/Snapshots/ExportSnapshotService.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.Snapshots; + +/// +/// Service for exporting snapshots to portable bundles. +/// +public sealed class ExportSnapshotService : IExportSnapshotService +{ + private readonly ISnapshotService _snapshotService; + private readonly IKnowledgeSourceResolver _sourceResolver; + private readonly ISigner _signer; + private readonly ILogger _logger; + + public async Task ExportAsync( + string snapshotId, + ExportOptions options, + CancellationToken ct = default) + { + _logger.LogInformation("Exporting snapshot {SnapshotId} with level {Level}", + snapshotId, options.InclusionLevel); + + // Load snapshot + var snapshot = await _snapshotService.GetSnapshotAsync(snapshotId, ct); + if (snapshot is null) + return ExportResult.Fail($"Snapshot {snapshotId} not found"); + + // Create temp directory for bundle assembly + var tempDir = Path.Combine(Path.GetTempPath(), $"snapshot-export-{Guid.NewGuid():N}"); + Directory.CreateDirectory(tempDir); + + try + { + // Write manifest + await WriteManifestAsync(tempDir, snapshot, ct); + + // Bundle sources based on inclusion level + var bundledFiles = new List(); + if (options.InclusionLevel != SnapshotInclusionLevel.ReferenceOnly) + { + bundledFiles = await BundleSourcesAsync(tempDir, snapshot, options, ct); + } + + // Bundle policy and scoring + if (options.IncludePolicy) + { + var policyFile = await BundlePolicyAsync(tempDir, snapshot.Policy, ct); + if (policyFile is not null) + bundledFiles.Add(policyFile); + } + + // Write checksums + await WriteChecksumsAsync(tempDir, bundledFiles, ct); + + // Create bundle info + var bundleInfo = new BundleInfo + { + BundleId = $"bundle:{Guid.NewGuid():N}", + CreatedAt = DateTimeOffset.UtcNow, + CreatedBy = options.CreatedBy ?? 
"StellaOps", + InclusionLevel = options.InclusionLevel, + TotalSizeBytes = bundledFiles.Sum(f => f.SizeBytes), + FileCount = bundledFiles.Count, + Description = options.Description + }; + + await WriteBundleInfoAsync(tempDir, bundleInfo, ct); + + // Create ZIP + var zipPath = options.OutputPath ?? Path.Combine( + Path.GetTempPath(), + $"snapshot-{snapshot.SnapshotId.Split(':').Last()[..12]}.zip"); + + ZipFile.CreateFromDirectory(tempDir, zipPath, CompressionLevel.Optimal, false); + + _logger.LogInformation("Exported snapshot to {ZipPath}", zipPath); + + return ExportResult.Success(zipPath, bundleInfo); + } + finally + { + // Cleanup temp directory + if (Directory.Exists(tempDir)) + Directory.Delete(tempDir, true); + } + } + + private async Task WriteManifestAsync( + string tempDir, KnowledgeSnapshotManifest manifest, CancellationToken ct) + { + var manifestPath = Path.Combine(tempDir, "manifest.json"); + var json = JsonSerializer.Serialize(manifest, new JsonSerializerOptions { WriteIndented = true }); + await File.WriteAllTextAsync(manifestPath, json, ct); + + // Write signed envelope if signature present + if (manifest.Signature is not null) + { + var envelopePath = Path.Combine(tempDir, "manifest.dsse.json"); + var envelope = CreateDsseEnvelope(manifest); + await File.WriteAllTextAsync(envelopePath, envelope, ct); + } + } + + private async Task> BundleSourcesAsync( + string tempDir, KnowledgeSnapshotManifest manifest, ExportOptions options, CancellationToken ct) + { + var sourcesDir = Path.Combine(tempDir, "sources"); + Directory.CreateDirectory(sourcesDir); + + var bundledFiles = new List(); + + foreach (var source in manifest.Sources) + { + // Resolve source content + var resolved = await _sourceResolver.ResolveAsync(source, true, ct); + if (resolved is null) + { + _logger.LogWarning("Could not resolve source {Name} for bundling", source.Name); + continue; + } + + // Determine file path + var fileName = $"{source.Name}-{source.Epoch}.{GetExtension(source.Type)}"; + var filePath = Path.Combine(sourcesDir, fileName); + + // Compress if option enabled + if (options.CompressSources) + { + filePath += ".gz"; + await using var fs = File.Create(filePath); + await using var gz = new GZipStream(fs, CompressionLevel.Optimal); + await gz.WriteAsync(resolved.Content, ct); + } + else + { + await File.WriteAllBytesAsync(filePath, resolved.Content, ct); + } + + bundledFiles.Add(new BundledFile( + Path: $"sources/{Path.GetFileName(filePath)}", + Digest: source.Digest, + SizeBytes: new FileInfo(filePath).Length, + IsCompressed: options.CompressSources)); + } + + return bundledFiles; + } + + private async Task WriteChecksumsAsync( + string tempDir, List files, CancellationToken ct) + { + var metaDir = Path.Combine(tempDir, "META"); + Directory.CreateDirectory(metaDir); + + var checksums = string.Join("\n", files.Select(f => $"{f.Digest} {f.Path}")); + await File.WriteAllTextAsync(Path.Combine(metaDir, "CHECKSUMS.sha256"), checksums, ct); + } + + private async Task WriteBundleInfoAsync( + string tempDir, BundleInfo info, CancellationToken ct) + { + var metaDir = Path.Combine(tempDir, "META"); + Directory.CreateDirectory(metaDir); + + var json = JsonSerializer.Serialize(info, new JsonSerializerOptions { WriteIndented = true }); + await File.WriteAllTextAsync(Path.Combine(metaDir, "BUNDLE_INFO.json"), json, ct); + } + + private static string GetExtension(string sourceType) => + sourceType switch + { + "advisory-feed" => "jsonl", + "vex" => "json", + "sbom" => "json", + _ => "bin" + }; +} + +public sealed 
record ExportOptions
+{
+    public SnapshotInclusionLevel InclusionLevel { get; init; } = SnapshotInclusionLevel.Portable;
+    public bool CompressSources { get; init; } = true;
+    public bool IncludePolicy { get; init; } = true;
+    public bool IncludeScoring { get; init; } = true;
+    public bool IncludeTrust { get; init; } = true;
+    public string? OutputPath { get; init; }
+    public string? CreatedBy { get; init; }
+    public string? Description { get; init; }
+}
+
+public sealed record ExportResult
+{
+    public bool IsSuccess { get; init; }
+    public string? FilePath { get; init; }
+    public BundleInfo? BundleInfo { get; init; }
+    public string? Error { get; init; }
+
+    public static ExportResult Success(string filePath, BundleInfo info) =>
+        new() { IsSuccess = true, FilePath = filePath, BundleInfo = info };
+
+    public static ExportResult Fail(string error) =>
+        new() { IsSuccess = false, Error = error };
+}
+
+public interface IExportSnapshotService
+{
+    Task<ExportResult> ExportAsync(string snapshotId, ExportOptions options, CancellationToken ct = default);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ExportSnapshotService.cs` created
+- [ ] Manifest and signed envelope written
+- [ ] Sources bundled with optional compression
+- [ ] Checksums file generated
+- [ ] Bundle info metadata written
+- [ ] ZIP creation with cleanup
+
+---
+
+### T3: Implement ImportSnapshotService
+
+**Assignee**: ExportCenter Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Implement service to import snapshot bundles with integrity verification.
+
+**Implementation Path**: `StellaOps.ExportCenter/Snapshots/ImportSnapshotService.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.ExportCenter.Snapshots;
+
+/// <summary>
+/// Service for importing snapshot bundles.
+/// </summary>
+public sealed class ImportSnapshotService : IImportSnapshotService
+{
+    private readonly ISnapshotService _snapshotService;
+    private readonly ISnapshotStore _snapshotStore;
+    private readonly IKnowledgeSourceStore _sourceStore;
+    private readonly ILogger<ImportSnapshotService> _logger;
+
+    public async Task<ImportResult> ImportAsync(
+        string bundlePath,
+        ImportOptions options,
+        CancellationToken ct = default)
+    {
+        _logger.LogInformation("Importing snapshot bundle from {Path}", bundlePath);
+
+        // Validate bundle exists
+        if (!File.Exists(bundlePath))
+            return ImportResult.Fail($"Bundle not found: {bundlePath}");
+
+        // Extract to temp directory
+        var tempDir = Path.Combine(Path.GetTempPath(), $"snapshot-import-{Guid.NewGuid():N}");
+
+        try
+        {
+            ZipFile.ExtractToDirectory(bundlePath, tempDir);
+
+            // Verify checksums first
+            if (options.VerifyChecksums)
+            {
+                var checksumResult = await VerifyChecksumsAsync(tempDir, ct);
+                if (!checksumResult.IsValid)
+                {
+                    return ImportResult.Fail($"Checksum verification failed: {checksumResult.Error}");
+                }
+            }
+
+            // Load manifest
+            var manifestPath = Path.Combine(tempDir, "manifest.json");
+            if (!File.Exists(manifestPath))
+                return ImportResult.Fail("Bundle missing manifest.json");
+
+            var manifestJson = await File.ReadAllTextAsync(manifestPath, ct);
+            var manifest = JsonSerializer.Deserialize<KnowledgeSnapshotManifest>(manifestJson)
+                ?? 
throw new InvalidOperationException("Failed to parse manifest"); + + // Verify manifest signature if sealed + if (options.VerifySignature) + { + var envelopePath = Path.Combine(tempDir, "manifest.dsse.json"); + if (File.Exists(envelopePath)) + { + var verification = await VerifySignatureAsync(envelopePath, ct); + if (!verification.IsValid) + { + return ImportResult.Fail($"Signature verification failed: {verification.Error}"); + } + } + } + + // Verify content-addressed ID + var idVerification = await _snapshotService.VerifySnapshotAsync(manifest, ct); + if (!idVerification.IsValid) + { + return ImportResult.Fail($"Manifest ID verification failed: {idVerification.Error}"); + } + + // Check for conflicts + var existing = await _snapshotStore.GetAsync(manifest.SnapshotId, ct); + if (existing is not null && !options.OverwriteExisting) + { + return ImportResult.Fail($"Snapshot {manifest.SnapshotId} already exists"); + } + + // Import sources + var importedSources = 0; + var sourcesDir = Path.Combine(tempDir, "sources"); + if (Directory.Exists(sourcesDir)) + { + foreach (var sourceFile in Directory.GetFiles(sourcesDir)) + { + await ImportSourceFileAsync(sourceFile, manifest, ct); + importedSources++; + } + } + + // Save manifest + await _snapshotStore.SaveAsync(manifest, ct); + + _logger.LogInformation( + "Imported snapshot {SnapshotId} with {SourceCount} sources", + manifest.SnapshotId, importedSources); + + return ImportResult.Success(manifest, importedSources); + } + finally + { + // Cleanup temp directory + if (Directory.Exists(tempDir)) + Directory.Delete(tempDir, true); + } + } + + private async Task VerifyChecksumsAsync(string tempDir, CancellationToken ct) + { + var checksumsPath = Path.Combine(tempDir, "META", "CHECKSUMS.sha256"); + if (!File.Exists(checksumsPath)) + return VerificationResult.Valid(); + + var lines = await File.ReadAllLinesAsync(checksumsPath, ct); + foreach (var line in lines) + { + var parts = line.Split(" ", 2); + if (parts.Length != 2) continue; + + var expectedDigest = parts[0]; + var filePath = Path.Combine(tempDir, parts[1]); + + if (!File.Exists(filePath)) + { + return VerificationResult.Invalid($"Missing file: {parts[1]}"); + } + + var actualDigest = await ComputeFileDigestAsync(filePath, ct); + if (actualDigest != expectedDigest) + { + return VerificationResult.Invalid($"Digest mismatch for {parts[1]}"); + } + } + + return VerificationResult.Valid(); + } + + private async Task ComputeFileDigestAsync(string filePath, CancellationToken ct) + { + await using var fs = File.OpenRead(filePath); + var hash = await SHA256.HashDataAsync(fs, ct); + return $"sha256:{Convert.ToHexString(hash).ToLowerInvariant()}"; + } + + private async Task VerifySignatureAsync(string envelopePath, CancellationToken ct) + { + // Delegate to signer for DSSE verification + var envelope = await File.ReadAllTextAsync(envelopePath, ct); + // ... 
signature verification logic + return VerificationResult.Valid(); + } + + private async Task ImportSourceFileAsync( + string filePath, KnowledgeSnapshotManifest manifest, CancellationToken ct) + { + var fileName = Path.GetFileName(filePath); + + // Decompress if needed + byte[] content; + if (filePath.EndsWith(".gz")) + { + await using var fs = File.OpenRead(filePath); + await using var gz = new GZipStream(fs, CompressionMode.Decompress); + using var ms = new MemoryStream(); + await gz.CopyToAsync(ms, ct); + content = ms.ToArray(); + } + else + { + content = await File.ReadAllBytesAsync(filePath, ct); + } + + // Find matching source descriptor + var digest = ComputeDigest(content); + var sourceDescriptor = manifest.Sources.FirstOrDefault(s => s.Digest == digest); + + if (sourceDescriptor is not null) + { + await _sourceStore.StoreAsync(sourceDescriptor, content, ct); + } + } + + private static string ComputeDigest(byte[] content) + { + var hash = SHA256.HashData(content); + return $"sha256:{Convert.ToHexString(hash).ToLowerInvariant()}"; + } +} + +public sealed record ImportOptions +{ + public bool VerifyChecksums { get; init; } = true; + public bool VerifySignature { get; init; } = true; + public bool OverwriteExisting { get; init; } = false; +} + +public sealed record ImportResult +{ + public bool IsSuccess { get; init; } + public KnowledgeSnapshotManifest? Manifest { get; init; } + public int ImportedSourceCount { get; init; } + public string? Error { get; init; } + + public static ImportResult Success(KnowledgeSnapshotManifest manifest, int sourceCount) => + new() { IsSuccess = true, Manifest = manifest, ImportedSourceCount = sourceCount }; + + public static ImportResult Fail(string error) => + new() { IsSuccess = false, Error = error }; +} + +public sealed record VerificationResult(bool IsValid, string? Error) +{ + public static VerificationResult Valid() => new(true, null); + public static VerificationResult Invalid(string error) => new(false, error); +} + +public interface IImportSnapshotService +{ + Task ImportAsync(string bundlePath, ImportOptions options, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `ImportSnapshotService.cs` created +- [ ] ZIP extraction to temp directory +- [ ] Checksum verification of all files +- [ ] Signature verification for sealed bundles +- [ ] Content-addressed ID verification +- [ ] Conflict detection with overwrite option +- [ ] Source files imported and stored +- [ ] Cleanup on completion/failure + +--- + +### T4: Add Snapshot Levels + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement behavior differences for the three inclusion levels. + +**Implementation Path**: `StellaOps.ExportCenter/Snapshots/SnapshotLevelHandler.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.Snapshots; + +/// +/// Handles snapshot level-specific behavior. +/// +public sealed class SnapshotLevelHandler +{ + /// + /// Gets the default export options for a given inclusion level. 
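+    /// Defaults widen from ReferenceOnly (manifest only) through Portable (adds
+    /// policy and scoring) to Sealed (which additionally bundles trust material).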
+ /// + public ExportOptions GetDefaultOptions(SnapshotInclusionLevel level) + { + return level switch + { + SnapshotInclusionLevel.ReferenceOnly => new ExportOptions + { + InclusionLevel = level, + CompressSources = false, + IncludePolicy = false, + IncludeScoring = false, + IncludeTrust = false + }, + + SnapshotInclusionLevel.Portable => new ExportOptions + { + InclusionLevel = level, + CompressSources = true, + IncludePolicy = true, + IncludeScoring = true, + IncludeTrust = false + }, + + SnapshotInclusionLevel.Sealed => new ExportOptions + { + InclusionLevel = level, + CompressSources = true, + IncludePolicy = true, + IncludeScoring = true, + IncludeTrust = true + }, + + _ => throw new ArgumentOutOfRangeException(nameof(level)) + }; + } + + /// + /// Validates that a snapshot can be exported at the requested level. + /// + public ValidationResult ValidateForExport( + KnowledgeSnapshotManifest manifest, + SnapshotInclusionLevel level) + { + var issues = new List(); + + // Sealed level requires signature + if (level == SnapshotInclusionLevel.Sealed && manifest.Signature is null) + { + issues.Add("Sealed export requires signed manifest. Seal the snapshot first."); + } + + // Portable and Sealed require bundled sources + if (level != SnapshotInclusionLevel.ReferenceOnly) + { + var referencedOnly = manifest.Sources + .Where(s => s.InclusionMode == SourceInclusionMode.Referenced) + .ToList(); + + if (referencedOnly.Count > 0) + { + issues.Add($"{referencedOnly.Count} sources are reference-only and cannot be bundled without network access"); + } + } + + return issues.Count == 0 + ? ValidationResult.Valid() + : ValidationResult.Invalid(issues); + } + + /// + /// Gets the minimum requirements for replay at each level. + /// + public ReplayRequirements GetReplayRequirements(SnapshotInclusionLevel level) + { + return level switch + { + SnapshotInclusionLevel.ReferenceOnly => new ReplayRequirements + { + RequiresNetwork = true, + RequiresLocalStore = true, + RequiresTrustBundle = false, + Description = "Requires network access to fetch sources by digest" + }, + + SnapshotInclusionLevel.Portable => new ReplayRequirements + { + RequiresNetwork = false, + RequiresLocalStore = false, + RequiresTrustBundle = false, + Description = "Fully offline replay possible" + }, + + SnapshotInclusionLevel.Sealed => new ReplayRequirements + { + RequiresNetwork = false, + RequiresLocalStore = false, + RequiresTrustBundle = true, + Description = "Fully offline replay with cryptographic verification" + }, + + _ => throw new ArgumentOutOfRangeException(nameof(level)) + }; + } +} + +public sealed record ValidationResult +{ + public bool IsValid { get; init; } + public IReadOnlyList Issues { get; init; } = []; + + public static ValidationResult Valid() => new() { IsValid = true }; + public static ValidationResult Invalid(IReadOnlyList issues) => + new() { IsValid = false, Issues = issues }; +} + +public sealed record ReplayRequirements +{ + public bool RequiresNetwork { get; init; } + public bool RequiresLocalStore { get; init; } + public bool RequiresTrustBundle { get; init; } + public required string Description { get; init; } +} +``` + +**Acceptance Criteria**: +- [ ] `SnapshotLevelHandler.cs` created +- [ ] Default options per level defined +- [ ] Export validation per level +- [ ] Replay requirements documented +- [ ] Sealed requires signature +- [ ] Portable requires bundled sources + +--- + +### T5: Integrate with CLI + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2, T3 + 
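+**Target usage** (an illustrative sketch; the snapshot ID and file paths are
+placeholders, while the verbs and flags follow the command definitions below):
+
+```bash
+# Export a sealed, signed bundle to an explicit path
+stella snapshot export snap:sha256:abc123 --level sealed -o ./snapshot.zip
+
+# Import it on the air-gapped side, replacing any existing copy
+stella snapshot import ./snapshot.zip --overwrite
+
+# Confirm the snapshot is now available locally
+stella snapshot list --format json
+```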
+**Description**: +Add CLI commands for snapshot export and import. + +**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/SnapshotCommand.cs` + +**Implementation**: +```csharp +namespace StellaOps.Cli.Commands; + +[Command("snapshot", Description = "Manage knowledge snapshots")] +public class SnapshotCommand +{ + [Command("export", Description = "Export a snapshot to a portable bundle")] + public class ExportCommand : ICommand + { + [Argument(0, Description = "Snapshot ID to export")] + public required string SnapshotId { get; set; } + + [Option("-o|--output", Description = "Output file path")] + public string? OutputPath { get; set; } + + [Option("-l|--level", Description = "Inclusion level: reference, portable, sealed")] + public string Level { get; set; } = "portable"; + + [Option("--no-compress", Description = "Disable source compression")] + public bool NoCompress { get; set; } + + [Option("--description", Description = "Bundle description")] + public string? Description { get; set; } + + private readonly IExportSnapshotService _exportService; + private readonly IConsole _console; + + public async Task ExecuteAsync(CancellationToken ct) + { + var inclusionLevel = Level.ToLowerInvariant() switch + { + "reference" => SnapshotInclusionLevel.ReferenceOnly, + "portable" => SnapshotInclusionLevel.Portable, + "sealed" => SnapshotInclusionLevel.Sealed, + _ => throw new ArgumentException($"Unknown level: {Level}") + }; + + var options = new ExportOptions + { + InclusionLevel = inclusionLevel, + CompressSources = !NoCompress, + OutputPath = OutputPath, + Description = Description + }; + + _console.WriteLine($"Exporting snapshot {SnapshotId} as {Level}..."); + + var result = await _exportService.ExportAsync(SnapshotId, options, ct); + + if (result.IsSuccess) + { + _console.WriteLine($"Exported to: {result.FilePath}"); + _console.WriteLine($"Bundle size: {FormatSize(result.BundleInfo!.TotalSizeBytes)}"); + _console.WriteLine($"Files: {result.BundleInfo.FileCount}"); + } + else + { + _console.WriteError($"Export failed: {result.Error}"); + } + } + + private static string FormatSize(long bytes) + { + string[] sizes = { "B", "KB", "MB", "GB" }; + int order = 0; + double size = bytes; + while (size >= 1024 && order < sizes.Length - 1) + { + order++; + size /= 1024; + } + return $"{size:0.##} {sizes[order]}"; + } + } + + [Command("import", Description = "Import a snapshot bundle")] + public class ImportCommand : ICommand + { + [Argument(0, Description = "Path to bundle ZIP file")] + public required string BundlePath { get; set; } + + [Option("--no-verify", Description = "Skip checksum and signature verification")] + public bool NoVerify { get; set; } + + [Option("--overwrite", Description = "Overwrite existing snapshot")] + public bool Overwrite { get; set; } + + private readonly IImportSnapshotService _importService; + private readonly IConsole _console; + + public async Task ExecuteAsync(CancellationToken ct) + { + var options = new ImportOptions + { + VerifyChecksums = !NoVerify, + VerifySignature = !NoVerify, + OverwriteExisting = Overwrite + }; + + _console.WriteLine($"Importing bundle from {BundlePath}..."); + + var result = await _importService.ImportAsync(BundlePath, options, ct); + + if (result.IsSuccess) + { + _console.WriteLine($"Imported snapshot: {result.Manifest!.SnapshotId}"); + _console.WriteLine($"Sources imported: {result.ImportedSourceCount}"); + } + else + { + _console.WriteError($"Import failed: {result.Error}"); + } + } + } + + [Command("list", Description = "List available 
snapshots")] + public class ListCommand : ICommand + { + [Option("--format", Description = "Output format: table, json")] + public string Format { get; set; } = "table"; + + private readonly ISnapshotStore _store; + private readonly IConsole _console; + + public async Task ExecuteAsync(CancellationToken ct) + { + var snapshots = await _store.ListAsync(ct); + + if (Format == "json") + { + var json = JsonSerializer.Serialize(snapshots, new JsonSerializerOptions { WriteIndented = true }); + _console.WriteLine(json); + } + else + { + _console.WriteLine("ID Created Sources"); + _console.WriteLine("------------------------------------------ ------------------- -------"); + foreach (var s in snapshots) + { + _console.WriteLine($"{s.SnapshotId,-42} {s.CreatedAt:yyyy-MM-dd HH:mm} {s.Sources.Count,7}"); + } + } + } + } +} +``` + +**Acceptance Criteria**: +- [ ] `stella snapshot export ` command works +- [ ] `stella snapshot import ` command works +- [ ] `stella snapshot list` command works +- [ ] Level selection with `--level` +- [ ] Verification toggle with `--no-verify` +- [ ] Overwrite option with `--overwrite` +- [ ] Size and file count reported + +--- + +### T6: Add Air-Gap Tests + +**Assignee**: ExportCenter Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Add tests verifying offline replay with exported bundles. + +**Implementation Path**: `src/ExportCenter/__Tests/StellaOps.ExportCenter.Tests/Snapshots/` + +**Test Cases**: +```csharp +public class ExportSnapshotServiceTests +{ + [Fact] + public async Task Export_PortableLevel_IncludesSources() + { + // Arrange + var snapshot = await CreateSnapshotWithSourcesAsync(); + var options = new ExportOptions { InclusionLevel = SnapshotInclusionLevel.Portable }; + + // Act + var result = await _exportService.ExportAsync(snapshot.SnapshotId, options); + + // Assert + result.IsSuccess.Should().BeTrue(); + File.Exists(result.FilePath).Should().BeTrue(); + + using var zip = ZipFile.OpenRead(result.FilePath); + zip.Entries.Should().Contain(e => e.FullName.StartsWith("sources/")); + zip.Entries.Should().Contain(e => e.Name == "manifest.json"); + } + + [Fact] + public async Task Export_ReferenceLevel_ExcludesSources() + { + var snapshot = await CreateSnapshotWithSourcesAsync(); + var options = new ExportOptions { InclusionLevel = SnapshotInclusionLevel.ReferenceOnly }; + + var result = await _exportService.ExportAsync(snapshot.SnapshotId, options); + + using var zip = ZipFile.OpenRead(result.FilePath); + zip.Entries.Should().NotContain(e => e.FullName.StartsWith("sources/")); + } + + [Fact] + public async Task Export_GeneratesValidChecksums() + { + var snapshot = await CreateSnapshotWithSourcesAsync(); + var result = await _exportService.ExportAsync(snapshot.SnapshotId, new ExportOptions()); + + using var zip = ZipFile.OpenRead(result.FilePath); + var checksumsEntry = zip.GetEntry("META/CHECKSUMS.sha256"); + checksumsEntry.Should().NotBeNull(); + } +} + +public class ImportSnapshotServiceTests +{ + [Fact] + public async Task Import_ValidBundle_Succeeds() + { + // Arrange + var bundlePath = await CreateTestBundleAsync(); + + // Act + var result = await _importService.ImportAsync(bundlePath, new ImportOptions()); + + // Assert + result.IsSuccess.Should().BeTrue(); + result.Manifest.Should().NotBeNull(); + } + + [Fact] + public async Task Import_TamperedFile_FailsVerification() + { + var bundlePath = await CreateTestBundleAsync(); + await TamperWithBundleAsync(bundlePath); + + var result = await 
_importService.ImportAsync(bundlePath, new ImportOptions { VerifyChecksums = true }); + + result.IsSuccess.Should().BeFalse(); + result.Error.Should().Contain("Checksum"); + } + + [Fact] + public async Task Import_ExistingSnapshot_FailsWithoutOverwrite() + { + var bundlePath = await CreateTestBundleAsync(); + await _importService.ImportAsync(bundlePath, new ImportOptions()); + + var result = await _importService.ImportAsync(bundlePath, new ImportOptions { OverwriteExisting = false }); + + result.IsSuccess.Should().BeFalse(); + result.Error.Should().Contain("already exists"); + } + + [Fact] + public async Task Import_ExistingSnapshot_SucceedsWithOverwrite() + { + var bundlePath = await CreateTestBundleAsync(); + await _importService.ImportAsync(bundlePath, new ImportOptions()); + + var result = await _importService.ImportAsync(bundlePath, new ImportOptions { OverwriteExisting = true }); + + result.IsSuccess.Should().BeTrue(); + } +} + +public class AirGapReplayTests +{ + [Fact] + public async Task FullAirGapWorkflow_ExportImportReplay() + { + // Step 1: Create snapshot with evaluation + var snapshot = await _snapshotService.CaptureCurrentSnapshotAsync(); + var originalVerdict = await _evaluator.EvaluateWithSnapshotAsync(CreateRequest(), snapshot); + + // Step 2: Export to portable bundle + var exportResult = await _exportService.ExportAsync(snapshot.SnapshotId, + new ExportOptions { InclusionLevel = SnapshotInclusionLevel.Portable }); + exportResult.IsSuccess.Should().BeTrue(); + + // Step 3: Clear local stores (simulate air-gap transfer) + await ClearLocalStoresAsync(); + + // Step 4: Import bundle (as if on air-gapped system) + var importResult = await _importService.ImportAsync(exportResult.FilePath, new ImportOptions()); + importResult.IsSuccess.Should().BeTrue(); + + // Step 5: Replay without network + var replayRequest = new ReplayRequest + { + ArtifactDigest = originalVerdict.ArtifactDigest, + SnapshotId = snapshot.SnapshotId, + OriginalVerdictId = originalVerdict.VerdictId, + Options = new ReplayOptions { AllowNetworkFetch = false } + }; + + var replayResult = await _replayEngine.ReplayAsync(replayRequest); + + // Assert: Replay matches original + replayResult.MatchStatus.Should().Be(ReplayMatchStatus.ExactMatch); + } + + [Fact] + public async Task AirGap_SealedBundle_VerifiesSignature() + { + var snapshot = await CreateAndSealSnapshotAsync(); + var exportResult = await _exportService.ExportAsync(snapshot.SnapshotId, + new ExportOptions { InclusionLevel = SnapshotInclusionLevel.Sealed }); + + var importResult = await _importService.ImportAsync(exportResult.FilePath, + new ImportOptions { VerifySignature = true }); + + importResult.IsSuccess.Should().BeTrue(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Export tests for each inclusion level +- [ ] Import tests for verification scenarios +- [ ] Tamper detection test +- [ ] Overwrite behavior tests +- [ ] Full air-gap workflow test +- [ ] Sealed bundle signature verification +- [ ] All 6+ tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | ExportCenter Team | Define SnapshotBundle format | +| 2 | T2 | TODO | T1 | ExportCenter Team | Implement ExportSnapshotService | +| 3 | T3 | TODO | T1 | ExportCenter Team | Implement ImportSnapshotService | +| 4 | T4 | TODO | T1 | ExportCenter Team | Add snapshot levels | +| 5 | T5 | TODO | T2, T3 | CLI Team | Integrate with CLI | +| 6 | T6 | TODO | T2, 
T3 | ExportCenter Team | Add air-gap tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Snapshot export/import for air-gap identified as requirement. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| ZIP format | Decision | ExportCenter Team | Standard ZIP for broad compatibility | +| Gzip compression | Decision | ExportCenter Team | Optional per-source compression | +| Three inclusion levels | Decision | ExportCenter Team | Reference, Portable, Sealed for flexibility | +| Temp directory cleanup | Decision | ExportCenter Team | Always cleanup even on failure | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] Export creates valid ZIP bundles +- [ ] Import verifies checksums and signatures +- [ ] Full air-gap workflow tested +- [ ] CLI commands work +- [ ] 6+ tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0003_0001_risk_verdict_attestation.md b/docs/implplan/SPRINT_4100_0003_0001_risk_verdict_attestation.md new file mode 100644 index 000000000..57ee47062 --- /dev/null +++ b/docs/implplan/SPRINT_4100_0003_0001_risk_verdict_attestation.md @@ -0,0 +1,1325 @@ +# Sprint 4100.0003.0001 · Risk Verdict Attestation Contract + +## Topic & Scope + +- Define formal Risk Verdict Attestation (RVA) contract +- Support PASS/FAIL/PASS_WITH_EXCEPTIONS/INDETERMINATE outcomes +- Enable cryptographically signed, replayable verdicts + +**Working directory:** `src/Policy/StellaOps.Policy.Engine/Attestation/` + +## Dependencies & Concurrency + +- **Upstream**: None (first sprint in batch) +- **Downstream**: Sprint 4100.0003.0002 (OCI Referrer Push) +- **Safe to parallelize with**: Sprint 4100.0001.0001, Sprint 4100.0002.0001, Sprint 4100.0004.0002 + +## Documentation Prerequisites + +- `src/Attestor/__Libraries/StellaOps.Attestor.ProofChain/AGENTS.md` +- `docs/product-advisories/19-Dec-2025 - Moat #2.md` (Risk Verdict Attestation) +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md` + +--- + +## Tasks + +### T1: Define RiskVerdictAttestation Model + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the formal RVA model with all required fields for signed verdicts. + +**Implementation Path**: `Attestation/RiskVerdictAttestation.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Engine.Attestation; + +/// +/// Risk Verdict Attestation - the signed, replayable output of policy evaluation. +/// This is the formal contract for communicating risk decisions. +/// +public sealed record RiskVerdictAttestation +{ + /// + /// Unique identifier for this attestation. + /// Format: rva:{sha256-of-content} + /// + public required string AttestationId { get; init; } + + /// + /// Schema version for forward compatibility. + /// + public string SchemaVersion { get; init; } = "1.0"; + + /// + /// When this attestation was created. + /// + public required DateTimeOffset CreatedAt { get; init; } + + /// + /// The final verdict status. + /// + public required RiskVerdictStatus Verdict { get; init; } + + /// + /// Subject artifact being evaluated. + /// + public required ArtifactSubject Subject { get; init; } + + /// + /// Reference to the policy that was evaluated. 
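+    /// Pinned by digest so a replay evaluates byte-identical policy content.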
+    /// </summary>
+    public required PolicyRef Policy { get; init; }
+
+    /// <summary>
+    /// Reference to the knowledge snapshot used.
+    /// Enables replay with frozen inputs.
+    /// </summary>
+    public required string KnowledgeSnapshotId { get; init; }
+
+    /// <summary>
+    /// Evidence references supporting the verdict.
+    /// </summary>
+    public IReadOnlyList<EvidenceRef> Evidence { get; init; } = [];
+
+    /// <summary>
+    /// Reason codes explaining the verdict.
+    /// </summary>
+    public IReadOnlyList<VerdictReasonCode> ReasonCodes { get; init; } = [];
+
+    /// <summary>
+    /// Summary of unknowns encountered.
+    /// </summary>
+    public UnknownsSummary? Unknowns { get; init; }
+
+    /// <summary>
+    /// Exception IDs that were applied.
+    /// </summary>
+    public IReadOnlyList<string> AppliedExceptions { get; init; } = [];
+
+    /// <summary>
+    /// Human-readable explanation of the verdict.
+    /// </summary>
+    public string? Explanation { get; init; }
+
+    /// <summary>
+    /// Expiration time for this verdict (optional).
+    /// </summary>
+    public DateTimeOffset? ExpiresAt { get; init; }
+
+    /// <summary>
+    /// Metadata for extensibility.
+    /// </summary>
+    public IReadOnlyDictionary<string, string> Metadata { get; init; }
+        = new Dictionary<string, string>();
+}
+
+/// <summary>
+/// The four possible verdict outcomes.
+/// </summary>
+public enum RiskVerdictStatus
+{
+    /// <summary>
+    /// No policy violations detected. Safe to proceed.
+    /// </summary>
+    Pass,
+
+    /// <summary>
+    /// Policy violations detected. Block deployment.
+    /// </summary>
+    Fail,
+
+    /// <summary>
+    /// Violations exist but are covered by approved exceptions.
+    /// </summary>
+    PassWithExceptions,
+
+    /// <summary>
+    /// Cannot determine risk due to insufficient data.
+    /// </summary>
+    Indeterminate
+}
+
+/// <summary>
+/// The artifact being evaluated.
+/// </summary>
+public sealed record ArtifactSubject
+{
+    /// <summary>
+    /// Artifact digest (sha256:...).
+    /// </summary>
+    public required string Digest { get; init; }
+
+    /// <summary>
+    /// Artifact type: container-image, sbom, binary, etc.
+    /// </summary>
+    public required string Type { get; init; }
+
+    /// <summary>
+    /// Human-readable name (e.g., image:tag).
+    /// </summary>
+    public string? Name { get; init; }
+
+    /// <summary>
+    /// Registry or repository URI.
+    /// </summary>
+    public string? Uri { get; init; }
+}
+
+/// <summary>
+/// Reference to the evaluated policy.
+/// </summary>
+public sealed record PolicyRef
+{
+    public required string PolicyId { get; init; }
+    public required string Version { get; init; }
+    public required string Digest { get; init; }
+    public string? Uri { get; init; }
+}
+
+/// <summary>
+/// Reference to evidence supporting the verdict.
+/// </summary>
+public sealed record EvidenceRef
+{
+    public required string Type { get; init; }
+    public required string Digest { get; init; }
+    public string? Uri { get; init; }
+    public string? Description { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `RiskVerdictAttestation.cs` created with all models
+- [ ] Four verdict statuses: Pass, Fail, PassWithExceptions, Indeterminate
+- [ ] Subject, Policy, Snapshot references included
+- [ ] Evidence references for audit trail
+- [ ] Expiration support for time-limited verdicts
+- [ ] Metadata for extensibility
+
+---
+
+### T2: Define VerdictReasonCode Enum
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create enumeration of structured reason codes for verdicts.
+
+**Implementation Path**: `Attestation/VerdictReasonCode.cs` (new file)
+
+**Model Definition**:
+```csharp
+namespace StellaOps.Policy.Engine.Attestation;
+
+/// <summary>
+/// Structured reason codes explaining verdict outcomes.
+/// Format: CATEGORY.SUBCATEGORY.DETAIL
+/// </summary>
+public enum VerdictReasonCode
+{
+    // PASS reasons
+    /// <summary>
+    /// No CVEs found in artifact.
+    /// </summary>
+    PassNoCves,
+
+    /// <summary>
+    /// All CVEs are not reachable.
+    /// </summary>
+    PassNotReachable,
+
+    /// <summary>
+    /// All CVEs are covered by VEX not_affected statements.
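+    /// ("All" is load-bearing: partial VEX coverage does not qualify for this code.)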
+ /// + PassVexNotAffected, + + /// + /// All CVEs are below severity threshold. + /// + PassBelowThreshold, + + // FAIL reasons - CVE + /// + /// Reachable CVE exceeds severity threshold. + /// + FailCveReachable, + + /// + /// CVE in CISA KEV (Known Exploited Vulnerabilities). + /// + FailCveKev, + + /// + /// CVE with high EPSS score. + /// + FailCveEpss, + + /// + /// CVE severity exceeds maximum allowed. + /// + FailCveSeverity, + + // FAIL reasons - Policy + /// + /// License violation detected. + /// + FailPolicyLicense, + + /// + /// Blocked package detected. + /// + FailPolicyBlockedPackage, + + /// + /// Unknown budget exceeded. + /// + FailPolicyUnknownBudget, + + /// + /// SBOM completeness below threshold. + /// + FailPolicySbomCompleteness, + + // FAIL reasons - Provenance + /// + /// Missing provenance attestation. + /// + FailProvenanceMissing, + + /// + /// Provenance signature invalid. + /// + FailProvenanceInvalid, + + // EXCEPTION reasons + /// + /// CVE covered by approved exception. + /// + ExceptionCve, + + /// + /// License covered by approved exception. + /// + ExceptionLicense, + + /// + /// Unknowns covered by approved exception. + /// + ExceptionUnknown, + + // INDETERMINATE reasons + /// + /// Insufficient data to evaluate. + /// + IndeterminateInsufficientData, + + /// + /// Analyzer does not support this artifact type. + /// + IndeterminateUnsupported, + + /// + /// Conflicting VEX statements. + /// + IndeterminateVexConflict, + + /// + /// Required knowledge source unavailable. + /// + IndeterminateFeedUnavailable +} + +/// +/// Extension methods for reason code handling. +/// +public static class VerdictReasonCodeExtensions +{ + /// + /// Gets the category of a reason code (Pass, Fail, Exception, Indeterminate). + /// + public static string GetCategory(this VerdictReasonCode code) + { + return code.ToString() switch + { + var s when s.StartsWith("Pass") => "Pass", + var s when s.StartsWith("Fail") => "Fail", + var s when s.StartsWith("Exception") => "Exception", + var s when s.StartsWith("Indeterminate") => "Indeterminate", + _ => "Unknown" + }; + } + + /// + /// Gets a human-readable description of the reason code. + /// + public static string GetDescription(this VerdictReasonCode code) + { + return code switch + { + VerdictReasonCode.PassNoCves => "No CVEs found in artifact", + VerdictReasonCode.PassNotReachable => "All CVEs are not reachable", + VerdictReasonCode.FailCveReachable => "Reachable CVE exceeds severity threshold", + VerdictReasonCode.FailCveKev => "CVE in CISA Known Exploited Vulnerabilities list", + VerdictReasonCode.FailPolicyUnknownBudget => "Unknown budget exceeded", + VerdictReasonCode.IndeterminateInsufficientData => "Insufficient data to evaluate", + _ => code.ToString() + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `VerdictReasonCode.cs` created with all codes +- [ ] Codes organized by category (Pass, Fail, Exception, Indeterminate) +- [ ] CVE, Policy, Provenance failure categories +- [ ] Extension methods for category and description +- [ ] XML documentation on all codes + +--- + +### T3: Create RvaBuilder + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Create fluent builder for constructing RVA instances. + +**Implementation Path**: `Attestation/RvaBuilder.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Engine.Attestation; + +/// +/// Fluent builder for constructing Risk Verdict Attestations. 
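+/// Typical flow (per T4): new RvaBuilder(hasher).FromEvaluationResult(result)
+///     .WithPolicy(id, version, digest).WithEvidence(...).Build().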
+/// +public sealed class RvaBuilder +{ + private RiskVerdictStatus _verdict; + private ArtifactSubject? _subject; + private PolicyRef? _policy; + private string? _snapshotId; + private readonly List _evidence = []; + private readonly List _reasonCodes = []; + private readonly List _exceptions = []; + private UnknownsSummary? _unknowns; + private string? _explanation; + private DateTimeOffset? _expiresAt; + private readonly Dictionary _metadata = []; + private readonly IHasher _hasher; + + public RvaBuilder(IHasher hasher) + { + _hasher = hasher; + } + + public RvaBuilder WithVerdict(RiskVerdictStatus verdict) + { + _verdict = verdict; + return this; + } + + public RvaBuilder WithSubject(string digest, string type, string? name = null, string? uri = null) + { + _subject = new ArtifactSubject + { + Digest = digest, + Type = type, + Name = name, + Uri = uri + }; + return this; + } + + public RvaBuilder WithPolicy(string policyId, string version, string digest, string? uri = null) + { + _policy = new PolicyRef + { + PolicyId = policyId, + Version = version, + Digest = digest, + Uri = uri + }; + return this; + } + + public RvaBuilder WithKnowledgeSnapshot(string snapshotId) + { + _snapshotId = snapshotId; + return this; + } + + public RvaBuilder WithEvidence(string type, string digest, string? uri = null, string? description = null) + { + _evidence.Add(new EvidenceRef + { + Type = type, + Digest = digest, + Uri = uri, + Description = description + }); + return this; + } + + public RvaBuilder WithReasonCode(VerdictReasonCode code) + { + if (!_reasonCodes.Contains(code)) + _reasonCodes.Add(code); + return this; + } + + public RvaBuilder WithReasonCodes(IEnumerable codes) + { + foreach (var code in codes) + WithReasonCode(code); + return this; + } + + public RvaBuilder WithException(string exceptionId) + { + _exceptions.Add(exceptionId); + return this; + } + + public RvaBuilder WithUnknowns(UnknownsSummary unknowns) + { + _unknowns = unknowns; + return this; + } + + public RvaBuilder WithExplanation(string explanation) + { + _explanation = explanation; + return this; + } + + public RvaBuilder WithExpiration(DateTimeOffset expiresAt) + { + _expiresAt = expiresAt; + return this; + } + + public RvaBuilder WithMetadata(string key, string value) + { + _metadata[key] = value; + return this; + } + + /// + /// Builds the RVA from a policy evaluation result. 
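+    /// Copies verdict, subject, snapshot id, unknowns, and applied exceptions from
+    /// the result; blocking findings are mapped to reason codes via DeriveReasonCode.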
+ /// + public RvaBuilder FromEvaluationResult(PolicyEvaluationResult result) + { + _verdict = MapDecision(result.Decision); + _subject = new ArtifactSubject + { + Digest = result.ArtifactDigest, + Type = "container-image", + Name = result.ArtifactName + }; + _snapshotId = result.KnowledgeSnapshotId; + _unknowns = result.UnknownsSummary; + + foreach (var exc in result.AppliedExceptions) + _exceptions.Add(exc); + + // Derive reason codes from findings + foreach (var finding in result.Findings.Where(f => f.IsBlocking)) + { + var code = DeriveReasonCode(finding); + WithReasonCode(code); + } + + return this; + } + + public RiskVerdictAttestation Build() + { + if (_subject is null) + throw new InvalidOperationException("Subject is required"); + if (_policy is null) + throw new InvalidOperationException("Policy is required"); + if (_snapshotId is null) + throw new InvalidOperationException("Knowledge snapshot ID is required"); + + var attestation = new RiskVerdictAttestation + { + AttestationId = "", // Computed below + CreatedAt = DateTimeOffset.UtcNow, + Verdict = _verdict, + Subject = _subject, + Policy = _policy, + KnowledgeSnapshotId = _snapshotId, + Evidence = _evidence.ToList(), + ReasonCodes = _reasonCodes.ToList(), + AppliedExceptions = _exceptions.ToList(), + Unknowns = _unknowns, + Explanation = _explanation ?? GenerateExplanation(), + ExpiresAt = _expiresAt, + Metadata = _metadata.ToDictionary() + }; + + // Compute content-addressed ID + var attestationId = ComputeAttestationId(attestation); + + return attestation with { AttestationId = attestationId }; + } + + private string ComputeAttestationId(RiskVerdictAttestation attestation) + { + var json = JsonSerializer.Serialize(attestation with { AttestationId = "" }, + new JsonSerializerOptions + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase, + WriteIndented = false + }); + + var hash = _hasher.ComputeSha256(json); + return $"rva:sha256:{hash}"; + } + + private static RiskVerdictStatus MapDecision(PolicyDecision decision) + { + return decision switch + { + PolicyDecision.Pass => RiskVerdictStatus.Pass, + PolicyDecision.Fail => RiskVerdictStatus.Fail, + PolicyDecision.PassWithExceptions => RiskVerdictStatus.PassWithExceptions, + PolicyDecision.Indeterminate => RiskVerdictStatus.Indeterminate, + _ => RiskVerdictStatus.Indeterminate + }; + } + + private VerdictReasonCode DeriveReasonCode(Finding finding) + { + return finding.Type switch + { + "cve" when finding.IsReachable == true => VerdictReasonCode.FailCveReachable, + "cve" when finding.IsInKev == true => VerdictReasonCode.FailCveKev, + "license" => VerdictReasonCode.FailPolicyLicense, + "blocked-package" => VerdictReasonCode.FailPolicyBlockedPackage, + "unknown-budget" => VerdictReasonCode.FailPolicyUnknownBudget, + _ => VerdictReasonCode.FailCveSeverity + }; + } + + private string GenerateExplanation() + { + if (_reasonCodes.Count == 0) + return $"Verdict: {_verdict}"; + + var reasons = string.Join(", ", _reasonCodes.Take(3).Select(c => c.GetDescription())); + return $"Verdict: {_verdict}. 
+
+**Acceptance Criteria**:
+- [ ] `RvaBuilder.cs` created with fluent API
+- [ ] `FromEvaluationResult` for easy conversion
+- [ ] Content-addressed attestation ID computed
+- [ ] Auto-generated explanation
+- [ ] Reason code derivation from findings
+- [ ] Validation on Build()
+
+---
+
+### T4: Integrate Knowledge Snapshot Reference
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T3
+
+**Description**:
+Ensure RVA includes a knowledge snapshot reference for replay.
+
+**Implementation Path**: `Attestation/RvaBuilder.cs`, `Services/PolicyEvaluator.cs`
+
+**Integration**:
+```csharp
+// In PolicyEvaluator, after evaluation
+public async Task<RiskVerdictAttestation> CreateAttestationAsync(
+    PolicyEvaluationResult result,
+    CancellationToken ct = default)
+{
+    // Ensure snapshot exists
+    if (result.KnowledgeSnapshotId is null)
+    {
+        throw new InvalidOperationException("Evaluation must have knowledge snapshot for attestation");
+    }
+
+    var attestation = new RvaBuilder(_hasher)
+        .FromEvaluationResult(result)
+        .WithPolicy(_policyRef.Id, _policyRef.Version, _policyRef.Digest)
+        .WithEvidence("sbom", result.SbomDigest, description: "SBOM used for analysis")
+        .WithEvidence("reachability", result.ReachabilityDigest, description: "Call graph analysis")
+        .Build();
+
+    // Log for observability
+    _logger.LogInformation(
+        "Created RVA {AttestationId} with verdict {Verdict} for {Artifact}",
+        attestation.AttestationId, attestation.Verdict, result.ArtifactDigest);
+
+    // Store attestation
+    await _attestationStore.SaveAsync(attestation, ct);
+
+    return attestation;
+}
+```
+
+**Replay Support**:
+```csharp
+/// <summary>
+/// Validates that an RVA can be replayed.
+/// </summary>
+public async Task<ReplayValidation> ValidateForReplayAsync(
+    RiskVerdictAttestation attestation,
+    CancellationToken ct = default)
+{
+    // Check snapshot exists
+    var snapshot = await _snapshotService.GetSnapshotAsync(attestation.KnowledgeSnapshotId, ct);
+    if (snapshot is null)
+    {
+        return ReplayValidation.Fail("Knowledge snapshot not found");
+    }
+
+    // Check snapshot integrity
+    var verification = await _snapshotService.VerifySnapshotAsync(snapshot, ct);
+    if (!verification.IsValid)
+    {
+        return ReplayValidation.Fail($"Snapshot verification failed: {verification.Error}");
+    }
+
+    return ReplayValidation.Success(snapshot);
+}
+
+public sealed record ReplayValidation(bool CanReplay, string? Error, KnowledgeSnapshotManifest? Snapshot)
+{
+    public static ReplayValidation Success(KnowledgeSnapshotManifest snapshot) =>
+        new(true, null, snapshot);
+
+    public static ReplayValidation Fail(string error) =>
+        new(false, error, null);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] RVA always includes KnowledgeSnapshotId
+- [ ] Evaluation without snapshot throws
+- [ ] Evidence references added (SBOM, reachability)
+- [ ] Replay validation method added
+- [ ] Attestation stored after creation
+
+---
+
+### T5: Update Predicate Type
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Define the in-toto predicate type for RVA.
+
+**Implementation Path**: `Attestation/RvaPredicate.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.Engine.Attestation;
+
+/// <summary>
+/// In-toto predicate wrapper for Risk Verdict Attestations.
+/// </summary>
+public sealed class RvaPredicate
+{
+    /// <summary>
+    /// Predicate type URI for RVA.
+    /// </summary>
+    public const string PredicateType = "https://stella.ops/predicates/risk-verdict@v1";
+
+    /// <summary>
+    /// Creates an in-toto statement from an RVA.
+ /// + public static InTotoStatement CreateStatement(RiskVerdictAttestation attestation) + { + return new InTotoStatement + { + Type = "https://in-toto.io/Statement/v1", + Subject = new[] + { + new InTotoSubject + { + Name = attestation.Subject.Name ?? attestation.Subject.Digest, + Digest = new Dictionary + { + ["sha256"] = attestation.Subject.Digest.Replace("sha256:", "") + } + } + }, + PredicateType = PredicateType, + Predicate = new RvaPredicateContent + { + AttestationId = attestation.AttestationId, + Verdict = attestation.Verdict.ToString(), + Policy = new PolicyPredicateRef + { + Id = attestation.Policy.PolicyId, + Version = attestation.Policy.Version, + Digest = attestation.Policy.Digest + }, + KnowledgeSnapshotId = attestation.KnowledgeSnapshotId, + ReasonCodes = attestation.ReasonCodes.Select(c => c.ToString()).ToList(), + Unknowns = attestation.Unknowns is not null ? new UnknownsPredicateRef + { + Total = attestation.Unknowns.Total, + BlockingCount = attestation.Unknowns.BlockingCount + } : null, + AppliedExceptions = attestation.AppliedExceptions.ToList(), + Explanation = attestation.Explanation, + CreatedAt = attestation.CreatedAt.ToString("o"), + ExpiresAt = attestation.ExpiresAt?.ToString("o") + } + }; + } +} + +public sealed record InTotoStatement +{ + [JsonPropertyName("_type")] + public required string Type { get; init; } + + [JsonPropertyName("subject")] + public required InTotoSubject[] Subject { get; init; } + + [JsonPropertyName("predicateType")] + public required string PredicateType { get; init; } + + [JsonPropertyName("predicate")] + public required object Predicate { get; init; } +} + +public sealed record InTotoSubject +{ + [JsonPropertyName("name")] + public required string Name { get; init; } + + [JsonPropertyName("digest")] + public required Dictionary Digest { get; init; } +} + +public sealed record RvaPredicateContent +{ + [JsonPropertyName("attestationId")] + public required string AttestationId { get; init; } + + [JsonPropertyName("verdict")] + public required string Verdict { get; init; } + + [JsonPropertyName("policy")] + public required PolicyPredicateRef Policy { get; init; } + + [JsonPropertyName("knowledgeSnapshotId")] + public required string KnowledgeSnapshotId { get; init; } + + [JsonPropertyName("reasonCodes")] + public required IReadOnlyList ReasonCodes { get; init; } + + [JsonPropertyName("unknowns")] + public UnknownsPredicateRef? Unknowns { get; init; } + + [JsonPropertyName("appliedExceptions")] + public required IReadOnlyList AppliedExceptions { get; init; } + + [JsonPropertyName("explanation")] + public string? Explanation { get; init; } + + [JsonPropertyName("createdAt")] + public required string CreatedAt { get; init; } + + [JsonPropertyName("expiresAt")] + public string? 
ExpiresAt { get; init; } +} + +public sealed record PolicyPredicateRef +{ + [JsonPropertyName("id")] + public required string Id { get; init; } + + [JsonPropertyName("version")] + public required string Version { get; init; } + + [JsonPropertyName("digest")] + public required string Digest { get; init; } +} + +public sealed record UnknownsPredicateRef +{ + [JsonPropertyName("total")] + public int Total { get; init; } + + [JsonPropertyName("blockingCount")] + public int BlockingCount { get; init; } +} +``` + +**Acceptance Criteria**: +- [ ] `RvaPredicate.cs` created +- [ ] Predicate type: `https://stella.ops/predicates/risk-verdict@v1` +- [ ] In-toto statement structure correct +- [ ] All RVA fields mapped to predicate +- [ ] JSON property names in camelCase + +--- + +### T6: Create RvaVerifier + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T5 + +**Description**: +Implement verification of RVA signatures and integrity. + +**Implementation Path**: `Attestation/RvaVerifier.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Engine.Attestation; + +/// +/// Verifies Risk Verdict Attestation signatures and integrity. +/// +public sealed class RvaVerifier : IRvaVerifier +{ + private readonly ISigner _signer; + private readonly ISnapshotService _snapshotService; + private readonly ITrustStore _trustStore; + private readonly ILogger _logger; + + public RvaVerifier( + ISigner signer, + ISnapshotService snapshotService, + ITrustStore trustStore, + ILogger logger) + { + _signer = signer; + _snapshotService = snapshotService; + _trustStore = trustStore; + _logger = logger; + } + + /// + /// Verifies a DSSE-wrapped RVA. + /// + public async Task VerifyAsync( + DsseEnvelope envelope, + RvaVerificationOptions options, + CancellationToken ct = default) + { + var issues = new List(); + + // Step 1: Verify DSSE signature + var sigResult = await VerifySignatureAsync(envelope, options, ct); + if (!sigResult.IsValid) + { + issues.Add($"Signature verification failed: {sigResult.Error}"); + if (!options.ContinueOnSignatureFailure) + { + return RvaVerificationResult.Fail(issues); + } + } + + // Step 2: Parse payload + var attestation = ParsePayload(envelope); + if (attestation is null) + { + issues.Add("Failed to parse RVA payload"); + return RvaVerificationResult.Fail(issues); + } + + // Step 3: Verify content-addressed ID + var idValid = VerifyAttestationId(attestation); + if (!idValid) + { + issues.Add("Attestation ID does not match content"); + return RvaVerificationResult.Fail(issues); + } + + // Step 4: Verify expiration + if (options.CheckExpiration && attestation.ExpiresAt.HasValue) + { + if (attestation.ExpiresAt.Value < DateTimeOffset.UtcNow) + { + issues.Add($"Attestation expired at {attestation.ExpiresAt.Value:o}"); + if (!options.AllowExpired) + { + return RvaVerificationResult.Fail(issues); + } + } + } + + // Step 5: Verify knowledge snapshot exists (if requested) + if (options.VerifySnapshotExists) + { + var snapshot = await _snapshotService.GetSnapshotAsync(attestation.KnowledgeSnapshotId, ct); + if (snapshot is null) + { + issues.Add($"Knowledge snapshot {attestation.KnowledgeSnapshotId} not found"); + } + } + + // Step 6: Verify signer identity against trust store + if (options.VerifySignerIdentity && sigResult.SignerIdentity is not null) + { + var trusted = await _trustStore.IsTrustedSignerAsync(sigResult.SignerIdentity, ct); + if (!trusted) + { + issues.Add($"Signer {sigResult.SignerIdentity} is not in trust store"); + } + } + 
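+        // Aggregation note (added commentary): the envelope counts as valid when
+        // no issues were recorded, or when every recorded issue is an expiration
+        // issue and options.AllowExpired permits expired attestations.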
+ var isValid = issues.Count == 0 || + (issues.All(i => i.Contains("expired") && options.AllowExpired)); + + return new RvaVerificationResult + { + IsValid = isValid, + Attestation = attestation, + SignerIdentity = sigResult.SignerIdentity, + Issues = issues, + VerifiedAt = DateTimeOffset.UtcNow + }; + } + + /// + /// Quick verification of just the signature. + /// + public async Task VerifySignatureAsync( + DsseEnvelope envelope, + RvaVerificationOptions options, + CancellationToken ct = default) + { + try + { + var payload = Convert.FromBase64String(envelope.Payload); + var signature = Convert.FromBase64String(envelope.Signatures[0].Sig); + + var isValid = await _signer.VerifyAsync(payload, signature, ct); + + return new SignatureVerificationResult + { + IsValid = isValid, + SignerIdentity = envelope.Signatures[0].KeyId + }; + } + catch (Exception ex) + { + _logger.LogWarning(ex, "Signature verification failed"); + return new SignatureVerificationResult + { + IsValid = false, + Error = ex.Message + }; + } + } + + private RiskVerdictAttestation? ParsePayload(DsseEnvelope envelope) + { + try + { + var payloadBytes = Convert.FromBase64String(envelope.Payload); + var statement = JsonSerializer.Deserialize(payloadBytes); + + if (statement?.PredicateType != RvaPredicate.PredicateType) + return null; + + var predicateJson = JsonSerializer.Serialize(statement.Predicate); + var predicate = JsonSerializer.Deserialize(predicateJson); + + // Convert predicate back to RVA (simplified) + return ConvertToRva(statement, predicate!); + } + catch + { + return null; + } + } + + private bool VerifyAttestationId(RiskVerdictAttestation attestation) + { + var json = JsonSerializer.Serialize(attestation with { AttestationId = "" }, + new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }); + var expectedId = $"rva:sha256:{ComputeSha256(json)}"; + return attestation.AttestationId == expectedId; + } + + private static string ComputeSha256(string input) + { + var bytes = SHA256.HashData(Encoding.UTF8.GetBytes(input)); + return Convert.ToHexString(bytes).ToLowerInvariant(); + } +} + +public sealed record RvaVerificationResult +{ + public required bool IsValid { get; init; } + public RiskVerdictAttestation? Attestation { get; init; } + public string? SignerIdentity { get; init; } + public IReadOnlyList Issues { get; init; } = []; + public DateTimeOffset VerifiedAt { get; init; } + + public static RvaVerificationResult Fail(IReadOnlyList issues) => + new() { IsValid = false, Issues = issues, VerifiedAt = DateTimeOffset.UtcNow }; +} + +public sealed record SignatureVerificationResult +{ + public required bool IsValid { get; init; } + public string? SignerIdentity { get; init; } + public string? 
Error { get; init; } +} + +public sealed record RvaVerificationOptions +{ + public bool CheckExpiration { get; init; } = true; + public bool AllowExpired { get; init; } = false; + public bool VerifySnapshotExists { get; init; } = false; + public bool VerifySignerIdentity { get; init; } = true; + public bool ContinueOnSignatureFailure { get; init; } = false; + + public static RvaVerificationOptions Default { get; } = new(); + public static RvaVerificationOptions Strict { get; } = new() + { + VerifySnapshotExists = true, + AllowExpired = false + }; +} + +public interface IRvaVerifier +{ + Task VerifyAsync(DsseEnvelope envelope, RvaVerificationOptions options, CancellationToken ct = default); + Task VerifySignatureAsync(DsseEnvelope envelope, RvaVerificationOptions options, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `RvaVerifier.cs` created +- [ ] DSSE signature verification +- [ ] Content-addressed ID verification +- [ ] Expiration checking with configurable behavior +- [ ] Snapshot existence verification (optional) +- [ ] Signer identity trust verification +- [ ] Comprehensive verification result + +--- + +### T7: Add Tests + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T6 + +**Description**: +Add comprehensive tests for RVA creation and verification. + +**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Engine.Tests/Attestation/` + +**Test Cases**: +```csharp +public class RvaBuilderTests +{ + [Fact] + public void Build_ValidInputs_CreatesRva() + { + var rva = new RvaBuilder(_hasher) + .WithVerdict(RiskVerdictStatus.Pass) + .WithSubject("sha256:abc123", "container-image", "myapp:v1.0") + .WithPolicy("policy-1", "1.0", "sha256:xyz") + .WithKnowledgeSnapshot("ksm:sha256:def456") + .WithReasonCode(VerdictReasonCode.PassNoCves) + .Build(); + + rva.AttestationId.Should().StartWith("rva:sha256:"); + rva.Verdict.Should().Be(RiskVerdictStatus.Pass); + rva.ReasonCodes.Should().Contain(VerdictReasonCode.PassNoCves); + } + + [Fact] + public void Build_MissingSubject_Throws() + { + var builder = new RvaBuilder(_hasher) + .WithVerdict(RiskVerdictStatus.Pass) + .WithPolicy("p", "1.0", "sha256:x") + .WithKnowledgeSnapshot("ksm:sha256:y"); + + var act = () => builder.Build(); + + act.Should().Throw() + .WithMessage("*Subject*"); + } + + [Fact] + public void FromEvaluationResult_MapsCorrectly() + { + var result = CreateEvaluationResult(PolicyDecision.Fail, findings: new[] + { + CreateFinding("CVE-2024-001", isReachable: true) + }); + + var rva = new RvaBuilder(_hasher) + .FromEvaluationResult(result) + .WithPolicy("p", "1.0", "sha256:x") + .Build(); + + rva.Verdict.Should().Be(RiskVerdictStatus.Fail); + rva.ReasonCodes.Should().Contain(VerdictReasonCode.FailCveReachable); + } + + [Fact] + public void Build_ContentAddressedId_IsDeterministic() + { + var builder1 = CreateBuilder(); + var builder2 = CreateBuilder(); + + var rva1 = builder1.Build(); + var rva2 = builder2.Build(); + + rva1.AttestationId.Should().Be(rva2.AttestationId); + } +} + +public class RvaVerifierTests +{ + [Fact] + public async Task Verify_ValidSignature_ReturnsSuccess() + { + var rva = CreateRva(); + var envelope = await SignRvaAsync(rva); + + var result = await _verifier.VerifyAsync(envelope, RvaVerificationOptions.Default); + + result.IsValid.Should().BeTrue(); + result.Attestation.Should().NotBeNull(); + } + + [Fact] + public async Task Verify_TamperedPayload_ReturnsFailure() + { + var rva = CreateRva(); + var envelope = await SignRvaAsync(rva); + var 
tampered = TamperWithPayload(envelope);
+
+        var result = await _verifier.VerifyAsync(tampered, RvaVerificationOptions.Default);
+
+        result.IsValid.Should().BeFalse();
+        result.Issues.Should().Contain(i => i.Contains("Signature"));
+    }
+
+    [Fact]
+    public async Task Verify_ExpiredRva_FailsByDefault()
+    {
+        var rva = CreateRva(expiresAt: DateTimeOffset.UtcNow.AddDays(-1));
+        var envelope = await SignRvaAsync(rva);
+
+        var result = await _verifier.VerifyAsync(envelope, RvaVerificationOptions.Default);
+
+        result.IsValid.Should().BeFalse();
+        result.Issues.Should().Contain(i => i.Contains("expired"));
+    }
+
+    [Fact]
+    public async Task Verify_ExpiredRva_AllowedWithOption()
+    {
+        var rva = CreateRva(expiresAt: DateTimeOffset.UtcNow.AddDays(-1));
+        var envelope = await SignRvaAsync(rva);
+        var options = new RvaVerificationOptions { AllowExpired = true };
+
+        var result = await _verifier.VerifyAsync(envelope, options);
+
+        result.IsValid.Should().BeTrue();
+    }
+
+    [Fact]
+    public async Task Verify_InvalidAttestationId_Fails()
+    {
+        var rva = CreateRva() with { AttestationId = "rva:sha256:tampered" };
+        var envelope = await SignRvaAsync(rva);
+
+        var result = await _verifier.VerifyAsync(envelope, RvaVerificationOptions.Default);
+
+        result.IsValid.Should().BeFalse();
+        result.Issues.Should().Contain(i => i.Contains("ID"));
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Builder tests for valid/invalid inputs
+- [ ] Builder determinism test
+- [ ] FromEvaluationResult mapping test
+- [ ] Verifier signature verification test
+- [ ] Verifier tamper detection test
+- [ ] Verifier expiration tests
+- [ ] All 6+ tests pass
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Define RiskVerdictAttestation model |
+| 2 | T2 | TODO | — | Policy Team | Define VerdictReasonCode enum |
+| 3 | T3 | TODO | T1, T2 | Policy Team | Create RvaBuilder |
+| 4 | T4 | TODO | T3 | Policy Team | Integrate knowledge snapshot reference |
+| 5 | T5 | TODO | T1 | Policy Team | Update predicate type |
+| 6 | T6 | TODO | T1, T5 | Policy Team | Create RvaVerifier |
+| 7 | T7 | TODO | T6 | Policy Team | Add tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. RVA contract identified as requirement from Moat #2 advisory. | Claude |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Four verdict statuses | Decision | Policy Team | PASS/FAIL/PASS_WITH_EXCEPTIONS/INDETERMINATE covers all cases |
+| Content-addressed ID | Decision | Policy Team | rva:sha256:{hash} ensures immutability |
+| In-toto predicate type | Decision | Policy Team | stella.ops/predicates/risk-verdict@v1 |
+| Expiration support | Decision | Policy Team | Optional but recommended for time-sensitive verdicts |
+
+---
+
+## Success Criteria
+
+- [ ] All 7 tasks marked DONE
+- [ ] RVA model supports all verdict types
+- [ ] Builder creates valid attestations
+- [ ] Verifier catches tampering
+- [ ] Predicate type follows in-toto spec
+- [ ] 6+ tests passing
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_4100_0003_0002_oci_referrer_push.md b/docs/implplan/SPRINT_4100_0003_0002_oci_referrer_push.md
new file mode 100644
index 000000000..d577a9fe3
--- /dev/null
+++ b/docs/implplan/SPRINT_4100_0003_0002_oci_referrer_push.md
@@ -0,0 +1,1344 @@
+# Sprint 4100.0003.0002 · OCI Referrer Push & Discovery
+
+## Topic & Scope
+
+- Implement OCI artifact push with subject binding (referrers API)
+- Enable RVA attachment to container images
+- Support discovery of attestations by image digest
+
+**Working directory:** `src/ExportCenter/StellaOps.ExportCenter.WebService/Distribution/Oci/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 4100.0003.0001 (Risk Verdict Attestation Contract) — MUST BE DONE
+- **Downstream**: None
+- **Safe to parallelize with**: Sprint 4100.0001.0002, Sprint 4100.0002.0002
+
+## Documentation Prerequisites
+
+- Sprint 4100.0003.0001 completion (RiskVerdictAttestation)
+- `src/ExportCenter/StellaOps.ExportCenter.WebService/AGENTS.md`
+- `docs/product-advisories/19-Dec-2025 - Moat #2.md` (Risk Verdict Attestation)
+- OCI Distribution Spec: Referrers API
+
+---
+
+## Tasks
+
+### T1: Implement OCI Push Client
+
+**Assignee**: ExportCenter Team
+**Story Points**: 4
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Implement OCI registry push client with subject binding support.
+
+**Implementation Path**: `Oci/OciPushClient.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.ExportCenter.WebService.Distribution.Oci;
+
+/// <summary>
+/// Client for pushing artifacts to OCI registries with referrer support.
+/// </summary>
+public sealed class OciPushClient : IOciPushClient
+{
+    private readonly HttpClient _httpClient;
+    private readonly IOciAuthProvider _authProvider;
+    private readonly ILogger<OciPushClient> _logger;
+
+    public OciPushClient(
+        HttpClient httpClient,
+        IOciAuthProvider authProvider,
+        ILogger<OciPushClient> logger)
+    {
+        _httpClient = httpClient;
+        _authProvider = authProvider;
+        _logger = logger;
+    }
+
+    /// <summary>
+    /// Pushes an artifact to the registry with subject binding.
+ /// + public async Task PushArtifactAsync( + OciPushRequest request, + CancellationToken ct = default) + { + _logger.LogInformation("Pushing artifact to {Registry}/{Repository}", + request.Registry, request.Repository); + + try + { + // Authenticate + var token = await _authProvider.GetTokenAsync(request.Registry, request.Repository, ct); + + // Step 1: Push config blob (empty for attestations) + var configDigest = await PushBlobAsync( + request.Registry, request.Repository, + request.Config, token, ct); + + // Step 2: Push artifact content as blob + var contentDigest = await PushBlobAsync( + request.Registry, request.Repository, + request.Content, token, ct); + + // Step 3: Create and push manifest with subject + var manifest = CreateManifest(request, configDigest, contentDigest); + var manifestDigest = await PushManifestAsync( + request.Registry, request.Repository, + manifest, token, ct); + + _logger.LogInformation("Pushed artifact {Digest} to {Registry}/{Repository}", + manifestDigest, request.Registry, request.Repository); + + return new OciPushResult + { + IsSuccess = true, + Digest = manifestDigest, + Registry = request.Registry, + Repository = request.Repository + }; + } + catch (Exception ex) + { + _logger.LogError(ex, "Failed to push artifact to {Registry}/{Repository}", + request.Registry, request.Repository); + + return new OciPushResult + { + IsSuccess = false, + Error = ex.Message + }; + } + } + + private async Task PushBlobAsync( + string registry, string repository, + byte[] content, string token, CancellationToken ct) + { + var digest = ComputeDigest(content); + + // Check if blob exists + var checkUrl = $"https://{registry}/v2/{repository}/blobs/{digest}"; + var checkRequest = new HttpRequestMessage(HttpMethod.Head, checkUrl); + checkRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + + var checkResponse = await _httpClient.SendAsync(checkRequest, ct); + if (checkResponse.IsSuccessStatusCode) + { + _logger.LogDebug("Blob {Digest} already exists", digest); + return digest; + } + + // Start upload session + var uploadUrl = $"https://{registry}/v2/{repository}/blobs/uploads/"; + var uploadRequest = new HttpRequestMessage(HttpMethod.Post, uploadUrl); + uploadRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + + var uploadResponse = await _httpClient.SendAsync(uploadRequest, ct); + uploadResponse.EnsureSuccessStatusCode(); + + var location = uploadResponse.Headers.Location?.ToString() + ?? throw new InvalidOperationException("No upload location returned"); + + // Complete upload + var completeUrl = location.Contains('?') + ? $"{location}&digest={digest}" + : $"{location}?digest={digest}"; + + var completeRequest = new HttpRequestMessage(HttpMethod.Put, completeUrl); + completeRequest.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + completeRequest.Content = new ByteArrayContent(content); + completeRequest.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream"); + + var completeResponse = await _httpClient.SendAsync(completeRequest, ct); + completeResponse.EnsureSuccessStatusCode(); + + return digest; + } + + private OciManifest CreateManifest(OciPushRequest request, string configDigest, string contentDigest) + { + var manifest = new OciManifest + { + SchemaVersion = 2, + MediaType = OciMediaTypes.ImageManifest, + Config = new OciDescriptor + { + MediaType = request.ConfigMediaType ?? 
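+                    // Assumption (added commentary): with no caller-supplied config media
+                    // type, fall back to the OCI 1.1 empty-descriptor type, since
+                    // attestation artifacts carry no meaningful image config.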
OciMediaTypes.EmptyConfig, + Digest = configDigest, + Size = request.Config.Length + }, + Layers = new[] + { + new OciDescriptor + { + MediaType = request.ContentMediaType, + Digest = contentDigest, + Size = request.Content.Length, + Annotations = request.Annotations + } + }, + Annotations = request.ManifestAnnotations + }; + + // Add subject for referrer binding + if (request.SubjectDigest is not null) + { + manifest.Subject = new OciDescriptor + { + MediaType = OciMediaTypes.ImageManifest, + Digest = request.SubjectDigest + }; + } + + return manifest; + } + + private async Task PushManifestAsync( + string registry, string repository, + OciManifest manifest, string token, CancellationToken ct) + { + var json = JsonSerializer.Serialize(manifest, new JsonSerializerOptions + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase, + DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull + }); + + var digest = ComputeDigest(Encoding.UTF8.GetBytes(json)); + + var url = $"https://{registry}/v2/{repository}/manifests/{digest}"; + var request = new HttpRequestMessage(HttpMethod.Put, url); + request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + request.Content = new StringContent(json, Encoding.UTF8, OciMediaTypes.ImageManifest); + + var response = await _httpClient.SendAsync(request, ct); + response.EnsureSuccessStatusCode(); + + return digest; + } + + private static string ComputeDigest(byte[] content) + { + var hash = SHA256.HashData(content); + return $"sha256:{Convert.ToHexString(hash).ToLowerInvariant()}"; + } +} + +public sealed record OciPushRequest +{ + public required string Registry { get; init; } + public required string Repository { get; init; } + public required byte[] Content { get; init; } + public required string ContentMediaType { get; init; } + public byte[] Config { get; init; } = []; + public string? ConfigMediaType { get; init; } + public string? SubjectDigest { get; init; } + public IReadOnlyDictionary? Annotations { get; init; } + public IReadOnlyDictionary? ManifestAnnotations { get; init; } +} + +public sealed record OciPushResult +{ + public required bool IsSuccess { get; init; } + public string? Digest { get; init; } + public string? Registry { get; init; } + public string? Repository { get; init; } + public string? Error { get; init; } +} + +public interface IOciPushClient +{ + Task PushArtifactAsync(OciPushRequest request, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `OciPushClient.cs` created +- [ ] Blob upload with digest check +- [ ] Manifest creation with subject binding +- [ ] Bearer token authentication +- [ ] Error handling and logging +- [ ] Interface for DI + +--- + +### T2: Add Referrer Discovery + +**Assignee**: ExportCenter Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement discovery of artifacts via OCI referrers API. + +**Implementation Path**: `Oci/OciReferrerDiscovery.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.WebService.Distribution.Oci; + +/// +/// Discovers artifacts attached to images via the OCI referrers API. +/// +public sealed class OciReferrerDiscovery : IOciReferrerDiscovery +{ + private readonly HttpClient _httpClient; + private readonly IOciAuthProvider _authProvider; + private readonly ILogger _logger; + + /// + /// Lists all referrers for a given image digest. + /// + public async Task ListReferrersAsync( + string registry, string repository, string digest, + ReferrerFilterOptions? 
filter = null, + CancellationToken ct = default) + { + _logger.LogDebug("Listing referrers for {Registry}/{Repository}@{Digest}", + registry, repository, digest); + + try + { + var token = await _authProvider.GetTokenAsync(registry, repository, ct); + + // Try referrers API first (OCI 1.1+) + var result = await TryReferrersApiAsync(registry, repository, digest, token, filter, ct); + if (result is not null) + return result; + + // Fall back to tag-based discovery + return await FallbackTagDiscoveryAsync(registry, repository, digest, token, filter, ct); + } + catch (Exception ex) + { + _logger.LogError(ex, "Failed to list referrers for {Digest}", digest); + return new ReferrerListResult + { + IsSuccess = false, + Error = ex.Message + }; + } + } + + private async Task TryReferrersApiAsync( + string registry, string repository, string digest, string token, + ReferrerFilterOptions? filter, CancellationToken ct) + { + var url = $"https://{registry}/v2/{repository}/referrers/{digest}"; + if (filter?.ArtifactType is not null) + { + url += $"?artifactType={Uri.EscapeDataString(filter.ArtifactType)}"; + } + + var request = new HttpRequestMessage(HttpMethod.Get, url); + request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + request.Headers.Accept.Add(new MediaTypeWithQualityHeaderValue(OciMediaTypes.ImageIndex)); + + var response = await _httpClient.SendAsync(request, ct); + + if (response.StatusCode == HttpStatusCode.NotFound) + { + // Registry doesn't support referrers API + return null; + } + + response.EnsureSuccessStatusCode(); + + var json = await response.Content.ReadAsStringAsync(ct); + var index = JsonSerializer.Deserialize(json, new JsonSerializerOptions + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase + }); + + return new ReferrerListResult + { + IsSuccess = true, + Referrers = index?.Manifests?.Select(m => new ReferrerInfo + { + Digest = m.Digest, + ArtifactType = m.ArtifactType, + MediaType = m.MediaType, + Size = m.Size, + Annotations = m.Annotations ?? new Dictionary() + }).ToList() ?? [], + SupportsReferrersApi = true + }; + } + + private async Task FallbackTagDiscoveryAsync( + string registry, string repository, string digest, string token, + ReferrerFilterOptions? filter, CancellationToken ct) + { + // Fallback: Check for tagged index at sha256-{hash} + var hashPart = digest.Replace("sha256:", ""); + var tagPrefix = $"sha256-{hashPart}"; + + var url = $"https://{registry}/v2/{repository}/tags/list"; + var request = new HttpRequestMessage(HttpMethod.Get, url); + request.Headers.Authorization = new AuthenticationHeaderValue("Bearer", token); + + var response = await _httpClient.SendAsync(request, ct); + response.EnsureSuccessStatusCode(); + + var json = await response.Content.ReadAsStringAsync(ct); + var tagList = JsonSerializer.Deserialize(json); + + var matchingTags = tagList?.Tags? + .Where(t => t.StartsWith(tagPrefix)) + .ToList() ?? []; + + var referrers = new List(); + foreach (var tag in matchingTags) + { + var manifest = await GetManifestAsync(registry, repository, tag, token, ct); + if (manifest is not null) + { + referrers.Add(new ReferrerInfo + { + Digest = ComputeManifestDigest(manifest), + ArtifactType = manifest.ArtifactType, + MediaType = manifest.MediaType, + Annotations = manifest.Annotations ?? 
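+                        // Normalize absent manifest annotations to an empty map so
+                        // ReferrerInfo consumers never observe null (added commentary).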
new Dictionary() + }); + } + } + + if (filter?.ArtifactType is not null) + { + referrers = referrers.Where(r => r.ArtifactType == filter.ArtifactType).ToList(); + } + + return new ReferrerListResult + { + IsSuccess = true, + Referrers = referrers, + SupportsReferrersApi = false + }; + } + + /// + /// Finds RVA attestations for an image. + /// + public async Task> FindRvaAttestationsAsync( + string registry, string repository, string imageDigest, + CancellationToken ct = default) + { + var result = await ListReferrersAsync(registry, repository, imageDigest, + new ReferrerFilterOptions { ArtifactType = OciArtifactTypes.RvaJson }, + ct); + + return result.IsSuccess ? result.Referrers : []; + } +} + +public sealed record ReferrerListResult +{ + public required bool IsSuccess { get; init; } + public IReadOnlyList Referrers { get; init; } = []; + public bool SupportsReferrersApi { get; init; } + public string? Error { get; init; } +} + +public sealed record ReferrerInfo +{ + public required string Digest { get; init; } + public string? ArtifactType { get; init; } + public string? MediaType { get; init; } + public long Size { get; init; } + public IReadOnlyDictionary Annotations { get; init; } + = new Dictionary(); +} + +public sealed record ReferrerFilterOptions +{ + public string? ArtifactType { get; init; } +} + +public interface IOciReferrerDiscovery +{ + Task ListReferrersAsync( + string registry, string repository, string digest, + ReferrerFilterOptions? filter = null, + CancellationToken ct = default); + + Task> FindRvaAttestationsAsync( + string registry, string repository, string imageDigest, + CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `OciReferrerDiscovery.cs` created +- [ ] Referrers API (OCI 1.1+) supported +- [ ] Fallback to tag-based discovery for older registries +- [ ] Artifact type filtering +- [ ] `FindRvaAttestationsAsync` convenience method +- [ ] Interface for DI + +--- + +### T3: Implement Fallback Strategy + +**Assignee**: ExportCenter Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Implement fallback to tagged index for registries without referrers API. + +**Implementation Path**: `Oci/OciReferrerFallback.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.WebService.Distribution.Oci; + +/// +/// Fallback strategies for registries without native referrers API. +/// +public sealed class OciReferrerFallback : IOciReferrerFallback +{ + private readonly IOciPushClient _pushClient; + private readonly ILogger _logger; + + /// + /// Pushes an artifact with fallback tag for older registries. 
+ /// + public async Task PushWithFallbackAsync( + OciPushRequest request, + FallbackOptions options, + CancellationToken ct = default) + { + // First, try native push with subject + var result = await _pushClient.PushArtifactAsync(request, ct); + + if (!result.IsSuccess) + { + _logger.LogWarning("Native push failed: {Error}", result.Error); + return result; + } + + // If subject was specified, also create fallback tag + if (request.SubjectDigest is not null && options.CreateFallbackTag) + { + await CreateFallbackTagAsync( + request.Registry, request.Repository, + request.SubjectDigest, result.Digest!, + ct); + } + + return result; + } + + private async Task CreateFallbackTagAsync( + string registry, string repository, + string subjectDigest, string referrerDigest, + CancellationToken ct) + { + // Create tag in format: sha256-{subject-hash}.{artifact-type} + var subjectHash = subjectDigest.Replace("sha256:", ""); + var tag = $"sha256-{subjectHash}.rva"; + + _logger.LogDebug("Creating fallback tag {Tag} for referrer {Digest}", + tag, referrerDigest); + + // Create index manifest pointing to the referrer + var index = new OciIndex + { + SchemaVersion = 2, + MediaType = OciMediaTypes.ImageIndex, + Manifests = new[] + { + new OciDescriptor + { + MediaType = OciMediaTypes.ImageManifest, + Digest = referrerDigest, + ArtifactType = OciArtifactTypes.RvaJson + } + } + }; + + // Push the index with the fallback tag + // ... implementation details ... + } + + /// + /// Determines the best push strategy for a registry. + /// + public async Task ProbeCapabilitiesAsync( + string registry, + CancellationToken ct = default) + { + var capabilities = new RegistryCapabilities + { + Registry = registry + }; + + try + { + // Check OCI Distribution version + var response = await _httpClient.GetAsync($"https://{registry}/v2/", ct); + var version = response.Headers.TryGetValues("OCI-Distribution-Version", out var values) + ? values.FirstOrDefault() + : null; + + capabilities.DistributionVersion = version; + capabilities.SupportsReferrersApi = version?.StartsWith("1.1") == true; + + // Check if registry accepts artifact types + capabilities.SupportsArtifactType = await ProbeArtifactTypeAsync(registry, ct); + } + catch (Exception ex) + { + _logger.LogWarning(ex, "Failed to probe capabilities for {Registry}", registry); + } + + return capabilities; + } + + private async Task ProbeArtifactTypeAsync(string registry, CancellationToken ct) + { + // Implementation: Try to push a test manifest with artifactType + // and see if it's accepted + return true; // Simplified + } +} + +public sealed record FallbackOptions +{ + /// + /// Create a tagged index for registries without referrers API. + /// + public bool CreateFallbackTag { get; init; } = true; + + /// + /// Tag format template. {subject} and {type} are replaced. + /// + public string TagTemplate { get; init; } = "sha256-{subject}.{type}"; +} + +public sealed record RegistryCapabilities +{ + public required string Registry { get; init; } + public string? 
DistributionVersion { get; init; } + public bool SupportsReferrersApi { get; init; } + public bool SupportsArtifactType { get; init; } +} + +public interface IOciReferrerFallback +{ + Task PushWithFallbackAsync(OciPushRequest request, FallbackOptions options, CancellationToken ct = default); + Task ProbeCapabilitiesAsync(string registry, CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `OciReferrerFallback.cs` created +- [ ] Fallback tag creation for older registries +- [ ] Registry capability probing +- [ ] Configurable tag template +- [ ] Logging for strategy selection + +--- + +### T4: Register Artifact Types + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Define and register StellaOps artifact type constants. + +**Implementation Path**: `Oci/OciArtifactTypes.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.WebService.Distribution.Oci; + +/// +/// OCI artifact types for StellaOps attestations. +/// +public static class OciArtifactTypes +{ + /// + /// Risk Verdict Attestation (JSON). + /// + public const string RvaJson = "application/vnd.stellaops.rva+json"; + + /// + /// Risk Verdict Attestation (DSSE envelope). + /// + public const string RvaDsse = "application/vnd.stellaops.rva.dsse+json"; + + /// + /// SBOM (CycloneDX JSON). + /// + public const string SbomCyclonedx = "application/vnd.cyclonedx+json"; + + /// + /// SBOM (SPDX JSON). + /// + public const string SbomSpdx = "application/spdx+json"; + + /// + /// VEX document (OpenVEX). + /// + public const string VexOpenvex = "application/vnd.openvex+json"; + + /// + /// Knowledge snapshot manifest. + /// + public const string KnowledgeSnapshot = "application/vnd.stellaops.knowledge-snapshot+json"; + + /// + /// Policy bundle. + /// + public const string PolicyBundle = "application/vnd.stellaops.policy+json"; + + /// + /// In-toto statement (generic). + /// + public const string InTotoStatement = "application/vnd.in-toto+json"; + + /// + /// Gets the artifact type for an RVA based on format. + /// + public static string GetRvaType(bool isSigned) => + isSigned ? RvaDsse : RvaJson; +} + +/// +/// Standard OCI media types. +/// +public static class OciMediaTypes +{ + public const string ImageManifest = "application/vnd.oci.image.manifest.v1+json"; + public const string ImageIndex = "application/vnd.oci.image.index.v1+json"; + public const string ImageConfig = "application/vnd.oci.image.config.v1+json"; + public const string EmptyConfig = "application/vnd.oci.empty.v1+json"; + public const string ImageLayer = "application/vnd.oci.image.layer.v1.tar+gzip"; +} + +/// +/// Standard OCI annotation keys. 
+/// +public static class OciAnnotations +{ + public const string CreatedAt = "org.opencontainers.image.created"; + public const string Authors = "org.opencontainers.image.authors"; + public const string Description = "org.opencontainers.image.description"; + public const string Title = "org.opencontainers.image.title"; + + // StellaOps custom annotations + public const string RvaId = "ops.stella.rva.id"; + public const string RvaVerdict = "ops.stella.rva.verdict"; + public const string RvaPolicy = "ops.stella.rva.policy"; + public const string RvaSnapshot = "ops.stella.rva.snapshot"; + public const string RvaExpires = "ops.stella.rva.expires"; +} +``` + +**Acceptance Criteria**: +- [ ] `OciArtifactTypes.cs` created +- [ ] RVA types: JSON and DSSE +- [ ] SBOM types: CycloneDX, SPDX +- [ ] VEX type: OpenVEX +- [ ] Standard OCI media types +- [ ] StellaOps custom annotations + +--- + +### T5: Add Registry Config + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Add configuration for registry authentication and TLS. + +**Implementation Path**: `Oci/OciRegistryConfig.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.WebService.Distribution.Oci; + +/// +/// Configuration for OCI registry connections. +/// +public sealed class OciRegistryConfig +{ + /// + /// Default registry (e.g., docker.io, ghcr.io). + /// + public string? DefaultRegistry { get; set; } + + /// + /// Registry-specific configurations keyed by hostname. + /// + public Dictionary Registries { get; set; } = new(); + + /// + /// Global settings applied to all registries. + /// + public RegistryGlobalSettings Global { get; set; } = new(); +} + +public sealed class RegistryEndpointConfig +{ + /// + /// Registry hostname (e.g., "gcr.io", "registry.example.com"). + /// + public required string Host { get; set; } + + /// + /// Authentication method. + /// + public RegistryAuthMethod AuthMethod { get; set; } = RegistryAuthMethod.Anonymous; + + /// + /// Username for basic auth. + /// + public string? Username { get; set; } + + /// + /// Password or token for basic auth. + /// + public string? Password { get; set; } + + /// + /// Path to credentials file (e.g., Docker config.json). + /// + public string? CredentialsFile { get; set; } + + /// + /// OAuth2/OIDC token endpoint. + /// + public string? TokenEndpoint { get; set; } + + /// + /// TLS configuration. + /// + public RegistryTlsConfig? Tls { get; set; } + + /// + /// Use HTTP instead of HTTPS (insecure, for local dev only). + /// + public bool Insecure { get; set; } +} + +public sealed class RegistryTlsConfig +{ + /// + /// Path to CA certificate bundle. + /// + public string? CaCertPath { get; set; } + + /// + /// Path to client certificate (for mTLS). + /// + public string? ClientCertPath { get; set; } + + /// + /// Path to client key (for mTLS). + /// + public string? ClientKeyPath { get; set; } + + /// + /// Skip certificate verification (insecure). + /// + public bool SkipVerify { get; set; } +} + +public sealed class RegistryGlobalSettings +{ + /// + /// Timeout for registry operations. + /// + public TimeSpan Timeout { get; set; } = TimeSpan.FromMinutes(5); + + /// + /// Retry count for failed operations. + /// + public int RetryCount { get; set; } = 3; + + /// + /// User agent string. + /// + public string UserAgent { get; set; } = "StellaOps/1.0"; + + /// + /// Enable referrers API fallback. 
+ /// + public bool EnableReferrersFallback { get; set; } = true; +} + +public enum RegistryAuthMethod +{ + Anonymous, + Basic, + Bearer, + DockerConfig, + Oidc, + AwsEcr, + GcpGcr, + AzureAcr +} + +/// +/// Factory for creating configured HTTP clients. +/// +public sealed class OciHttpClientFactory +{ + private readonly OciRegistryConfig _config; + + public HttpClient CreateClient(string registry) + { + var endpointConfig = GetEndpointConfig(registry); + var handler = CreateHandler(endpointConfig); + + var client = new HttpClient(handler) + { + Timeout = _config.Global.Timeout + }; + + client.DefaultRequestHeaders.UserAgent.ParseAdd(_config.Global.UserAgent); + + return client; + } + + private RegistryEndpointConfig GetEndpointConfig(string registry) + { + if (_config.Registries.TryGetValue(registry, out var config)) + return config; + + // Return default config + return new RegistryEndpointConfig { Host = registry }; + } + + private HttpClientHandler CreateHandler(RegistryEndpointConfig config) + { + var handler = new HttpClientHandler(); + + if (config.Tls?.SkipVerify == true) + { + handler.ServerCertificateCustomValidationCallback = + HttpClientHandler.DangerousAcceptAnyServerCertificateValidator; + } + + if (config.Tls?.CaCertPath is not null) + { + // Load custom CA certificate + // ... implementation ... + } + + return handler; + } +} +``` + +**Acceptance Criteria**: +- [ ] `OciRegistryConfig.cs` created +- [ ] Per-registry configuration +- [ ] Multiple auth methods supported +- [ ] TLS/mTLS configuration +- [ ] Global timeout and retry settings +- [ ] HTTP client factory + +--- + +### T6: Integrate with RVA Flow + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1, T4 + +**Description**: +Auto-push RVA to registry on verdict creation. + +**Implementation Path**: `Oci/RvaOciPublisher.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.ExportCenter.WebService.Distribution.Oci; + +/// +/// Publishes Risk Verdict Attestations to OCI registries. +/// +public sealed class RvaOciPublisher : IRvaOciPublisher +{ + private readonly IOciPushClient _pushClient; + private readonly IOciReferrerFallback _fallback; + private readonly ISigner _signer; + private readonly ILogger _logger; + + public RvaOciPublisher( + IOciPushClient pushClient, + IOciReferrerFallback fallback, + ISigner signer, + ILogger logger) + { + _pushClient = pushClient; + _fallback = fallback; + _signer = signer; + _logger = logger; + } + + /// + /// Publishes an RVA as an OCI artifact attached to the subject image. 
+ /// + public async Task PublishAsync( + RiskVerdictAttestation attestation, + RvaPublishOptions options, + CancellationToken ct = default) + { + _logger.LogInformation( + "Publishing RVA {AttestationId} to {Registry}/{Repository}", + attestation.AttestationId, options.Registry, options.Repository); + + try + { + // Sign the attestation + var statement = RvaPredicate.CreateStatement(attestation); + var envelope = await SignStatementAsync(statement, ct); + + // Prepare push request + var request = new OciPushRequest + { + Registry = options.Registry, + Repository = options.Repository, + Content = Encoding.UTF8.GetBytes(envelope), + ContentMediaType = OciArtifactTypes.RvaDsse, + SubjectDigest = attestation.Subject.Digest, + Annotations = CreateAnnotations(attestation), + ManifestAnnotations = new Dictionary + { + [OciAnnotations.CreatedAt] = attestation.CreatedAt.ToString("o"), + [OciAnnotations.Title] = $"RVA for {attestation.Subject.Name}" + } + }; + + // Push with fallback support + var result = await _fallback.PushWithFallbackAsync(request, + new FallbackOptions { CreateFallbackTag = options.CreateFallbackTag }, + ct); + + if (!result.IsSuccess) + { + return new RvaPublishResult + { + IsSuccess = false, + Error = result.Error + }; + } + + _logger.LogInformation( + "Published RVA {AttestationId} as {Digest}", + attestation.AttestationId, result.Digest); + + return new RvaPublishResult + { + IsSuccess = true, + AttestationId = attestation.AttestationId, + ArtifactDigest = result.Digest, + Registry = options.Registry, + Repository = options.Repository, + ReferrerUri = $"{options.Registry}/{options.Repository}@{result.Digest}" + }; + } + catch (Exception ex) + { + _logger.LogError(ex, "Failed to publish RVA {AttestationId}", + attestation.AttestationId); + + return new RvaPublishResult + { + IsSuccess = false, + Error = ex.Message + }; + } + } + + private async Task SignStatementAsync(InTotoStatement statement, CancellationToken ct) + { + var payload = JsonSerializer.Serialize(statement, new JsonSerializerOptions + { + PropertyNamingPolicy = JsonNamingPolicy.CamelCase + }); + + var payloadBytes = Encoding.UTF8.GetBytes(payload); + var signature = await _signer.SignAsync(payloadBytes, ct); + + var envelope = new DsseEnvelope + { + PayloadType = "application/vnd.in-toto+json", + Payload = Convert.ToBase64String(payloadBytes), + Signatures = new[] + { + new DsseSignature + { + KeyId = _signer.KeyId, + Sig = Convert.ToBase64String(signature) + } + } + }; + + return JsonSerializer.Serialize(envelope); + } + + private static IReadOnlyDictionary CreateAnnotations( + RiskVerdictAttestation attestation) + { + var annotations = new Dictionary + { + [OciAnnotations.RvaId] = attestation.AttestationId, + [OciAnnotations.RvaVerdict] = attestation.Verdict.ToString(), + [OciAnnotations.RvaPolicy] = attestation.Policy.PolicyId, + [OciAnnotations.RvaSnapshot] = attestation.KnowledgeSnapshotId + }; + + if (attestation.ExpiresAt.HasValue) + { + annotations[OciAnnotations.RvaExpires] = attestation.ExpiresAt.Value.ToString("o"); + } + + return annotations; + } +} + +public sealed record RvaPublishOptions +{ + public required string Registry { get; init; } + public required string Repository { get; init; } + public bool CreateFallbackTag { get; init; } = true; + public bool SignAttestation { get; init; } = true; +} + +public sealed record RvaPublishResult +{ + public required bool IsSuccess { get; init; } + public string? AttestationId { get; init; } + public string? ArtifactDigest { get; init; } + public string? 
Registry { get; init; } + public string? Repository { get; init; } + public string? ReferrerUri { get; init; } + public string? Error { get; init; } +} + +public interface IRvaOciPublisher +{ + Task PublishAsync( + RiskVerdictAttestation attestation, + RvaPublishOptions options, + CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `RvaOciPublisher.cs` created +- [ ] RVA signing to DSSE envelope +- [ ] Push with subject binding +- [ ] Custom annotations on artifacts +- [ ] Fallback tag support +- [ ] Complete publish result with URI + +--- + +### T7: Add Tests + +**Assignee**: ExportCenter Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T6 + +**Description**: +Add mock registry integration tests. + +**Implementation Path**: `src/ExportCenter/__Tests/StellaOps.ExportCenter.Tests/Distribution/Oci/` + +**Test Cases**: +```csharp +public class OciPushClientTests +{ + [Fact] + public async Task PushArtifact_ValidRequest_Succeeds() + { + // Arrange + var mockHandler = CreateMockHandler(HttpStatusCode.Created); + var client = new OciPushClient(new HttpClient(mockHandler), _mockAuth.Object, _logger); + + var request = new OciPushRequest + { + Registry = "registry.example.com", + Repository = "myapp", + Content = "test content"u8.ToArray(), + ContentMediaType = OciArtifactTypes.RvaJson, + SubjectDigest = "sha256:abc123" + }; + + // Act + var result = await client.PushArtifactAsync(request); + + // Assert + result.IsSuccess.Should().BeTrue(); + result.Digest.Should().StartWith("sha256:"); + } + + [Fact] + public async Task PushArtifact_AuthFailure_ReturnsError() + { + var mockHandler = CreateMockHandler(HttpStatusCode.Unauthorized); + var client = new OciPushClient(new HttpClient(mockHandler), _mockAuth.Object, _logger); + + var request = CreateRequest(); + var result = await client.PushArtifactAsync(request); + + result.IsSuccess.Should().BeFalse(); + result.Error.Should().Contain("401"); + } +} + +public class OciReferrerDiscoveryTests +{ + [Fact] + public async Task ListReferrers_WithReferrersApi_ReturnsResults() + { + var mockHandler = CreateReferrersApiHandler(manifests: new[] + { + new OciDescriptor { Digest = "sha256:rva1", ArtifactType = OciArtifactTypes.RvaJson } + }); + var discovery = new OciReferrerDiscovery(new HttpClient(mockHandler), _mockAuth.Object, _logger); + + var result = await discovery.ListReferrersAsync("registry.example.com", "myapp", "sha256:image"); + + result.IsSuccess.Should().BeTrue(); + result.Referrers.Should().HaveCount(1); + result.SupportsReferrersApi.Should().BeTrue(); + } + + [Fact] + public async Task ListReferrers_FallbackToTags_ReturnsResults() + { + var mockHandler = CreateFallbackHandler(tags: new[] { "sha256-image.rva" }); + var discovery = new OciReferrerDiscovery(new HttpClient(mockHandler), _mockAuth.Object, _logger); + + var result = await discovery.ListReferrersAsync("registry.example.com", "myapp", "sha256:image"); + + result.IsSuccess.Should().BeTrue(); + result.SupportsReferrersApi.Should().BeFalse(); + } + + [Fact] + public async Task FindRvaAttestations_FiltersCorrectly() + { + var mockHandler = CreateReferrersApiHandler(manifests: new[] + { + new OciDescriptor { Digest = "sha256:rva1", ArtifactType = OciArtifactTypes.RvaJson }, + new OciDescriptor { Digest = "sha256:sbom", ArtifactType = OciArtifactTypes.SbomCyclonedx } + }); + var discovery = new OciReferrerDiscovery(new HttpClient(mockHandler), _mockAuth.Object, _logger); + + var results = await discovery.FindRvaAttestationsAsync("registry.example.com", 
"myapp", "sha256:image"); + + results.Should().HaveCount(1); + results[0].ArtifactType.Should().Be(OciArtifactTypes.RvaJson); + } +} + +public class RvaOciPublisherTests +{ + [Fact] + public async Task Publish_ValidRva_CreatesReferrer() + { + var rva = CreateRva(); + var options = new RvaPublishOptions + { + Registry = "registry.example.com", + Repository = "myapp" + }; + + var result = await _publisher.PublishAsync(rva, options); + + result.IsSuccess.Should().BeTrue(); + result.ArtifactDigest.Should().NotBeNullOrEmpty(); + result.ReferrerUri.Should().Contain("registry.example.com/myapp@"); + } + + [Fact] + public async Task Publish_SetsCorrectAnnotations() + { + var rva = CreateRva(verdict: RiskVerdictStatus.Pass); + + await _publisher.PublishAsync(rva, CreateOptions()); + + // Verify mock received correct annotations + _mockPushClient.Verify(c => c.PushArtifactAsync( + It.Is(r => + r.Annotations![OciAnnotations.RvaVerdict] == "Pass"), + It.IsAny())); + } +} +``` + +**Acceptance Criteria**: +- [ ] Push client tests with mock handler +- [ ] Auth failure handling test +- [ ] Referrer discovery tests (API and fallback) +- [ ] RVA filtering test +- [ ] Publisher integration test +- [ ] All 4+ tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | ExportCenter Team | Implement OCI push client | +| 2 | T2 | TODO | T1 | ExportCenter Team | Add referrer discovery | +| 3 | T3 | TODO | T1, T2 | ExportCenter Team | Implement fallback strategy | +| 4 | T4 | TODO | — | ExportCenter Team | Register artifact types | +| 5 | T5 | TODO | T1 | ExportCenter Team | Add registry config | +| 6 | T6 | TODO | T1, T4 | ExportCenter Team | Integrate with RVA flow | +| 7 | T7 | TODO | T6 | ExportCenter Team | Add tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. OCI referrer push identified as requirement from Moat #2 advisory. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Referrers API first | Decision | ExportCenter Team | Try OCI 1.1 referrers API, fallback to tags | +| DSSE envelope | Decision | ExportCenter Team | Sign RVA with DSSE for in-toto compatibility | +| Custom annotations | Decision | ExportCenter Team | ops.stella.* prefix for StellaOps annotations | +| Fallback tag format | Decision | ExportCenter Team | sha256-{subject-hash}.rva for discovery | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] RVA can be pushed to OCI registries +- [ ] Referrers API and fallback work +- [ ] Discovery finds attached RVAs +- [ ] Registry config supports auth methods +- [ ] 4+ integration tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0004_0001_security_state_delta.md b/docs/implplan/SPRINT_4100_0004_0001_security_state_delta.md new file mode 100644 index 000000000..a660dd4fb --- /dev/null +++ b/docs/implplan/SPRINT_4100_0004_0001_security_state_delta.md @@ -0,0 +1,1434 @@ +# Sprint 4100.0004.0001 · Security State Delta & Verdict + +## Topic & Scope + +- Define security state delta model comparing baseline vs target +- Implement delta computation across SBOM, reachability, VEX, policy +- Create signed delta verdict attestation + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/Deltas/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0002.0001 (Knowledge Snapshot Manifest) — MUST BE DONE +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4100.0001.0003, Sprint 4100.0002.0003 + +## Documentation Prerequisites + +- Sprint 4100.0002.0001 completion (KnowledgeSnapshotManifest) +- `docs/product-advisories/19-Dec-2025 - Moat #1.md` (Security Delta as Governance Unit) +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md` + +--- + +## Tasks + +### T1: Define SecurityStateDelta Model + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the unified delta model comparing baseline and target security states. + +**Implementation Path**: `Deltas/SecurityStateDelta.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Deltas; + +/// +/// Represents the delta between two security states (baseline vs target). +/// This is the atomic unit of governance for release decisions. +/// +public sealed record SecurityStateDelta +{ + /// + /// Unique identifier for this delta. + /// Format: delta:sha256:{hash} + /// + public required string DeltaId { get; init; } + + /// + /// When this delta was computed. + /// + public required DateTimeOffset ComputedAt { get; init; } + + /// + /// Knowledge snapshot ID of the baseline state. + /// + public required string BaselineSnapshotId { get; init; } + + /// + /// Knowledge snapshot ID of the target state. + /// + public required string TargetSnapshotId { get; init; } + + /// + /// Artifact being evaluated. + /// + public required ArtifactRef Artifact { get; init; } + + /// + /// SBOM differences. + /// + public required SbomDelta Sbom { get; init; } + + /// + /// Reachability differences. + /// + public required ReachabilityDelta Reachability { get; init; } + + /// + /// VEX coverage differences. + /// + public required VexDelta Vex { get; init; } + + /// + /// Policy evaluation differences. 
+ /// + public required PolicyDelta Policy { get; init; } + + /// + /// Unknowns differences. + /// + public required UnknownsDelta Unknowns { get; init; } + + /// + /// Findings that drive the verdict. + /// + public IReadOnlyList Drivers { get; init; } = []; + + /// + /// Summary statistics. + /// + public required DeltaSummary Summary { get; init; } +} + +/// +/// Reference to the artifact being evaluated. +/// +public sealed record ArtifactRef( + string Digest, + string? Name, + string? Tag); + +/// +/// SBOM-level differences. +/// +public sealed record SbomDelta +{ + public int PackagesAdded { get; init; } + public int PackagesRemoved { get; init; } + public int PackagesModified { get; init; } + public IReadOnlyList AddedPackages { get; init; } = []; + public IReadOnlyList RemovedPackages { get; init; } = []; + public IReadOnlyList VersionChanges { get; init; } = []; +} + +public sealed record PackageChange(string Purl, string? License); +public sealed record PackageVersionChange(string Purl, string OldVersion, string NewVersion); + +/// +/// Reachability analysis differences. +/// +public sealed record ReachabilityDelta +{ + public int NewReachable { get; init; } + public int NewUnreachable { get; init; } + public int ChangedReachability { get; init; } + public IReadOnlyList Changes { get; init; } = []; +} + +public sealed record ReachabilityChange( + string CveId, + string Purl, + bool WasReachable, + bool IsReachable); + +/// +/// VEX coverage differences. +/// +public sealed record VexDelta +{ + public int NewVexStatements { get; init; } + public int RevokedVexStatements { get; init; } + public int CoverageIncrease { get; init; } + public int CoverageDecrease { get; init; } + public IReadOnlyList Changes { get; init; } = []; +} + +public sealed record VexChange( + string CveId, + string? OldStatus, + string? NewStatus); + +/// +/// Policy evaluation differences. +/// +public sealed record PolicyDelta +{ + public int NewViolations { get; init; } + public int ResolvedViolations { get; init; } + public int PolicyVersionChanged { get; init; } + public IReadOnlyList Changes { get; init; } = []; +} + +public sealed record PolicyChange( + string RuleId, + string ChangeType, + string? Description); + +/// +/// Unknowns differences. +/// +public sealed record UnknownsDelta +{ + public int NewUnknowns { get; init; } + public int ResolvedUnknowns { get; init; } + public int TotalBaselineUnknowns { get; init; } + public int TotalTargetUnknowns { get; init; } + public IReadOnlyDictionary ByReasonCode { get; init; } + = new Dictionary(); +} + +/// +/// A finding that drives the delta verdict. +/// +public sealed record DeltaDriver +{ + public required string Type { get; init; } // "new-cve", "reachability-change", etc. + public required DeltaDriverSeverity Severity { get; init; } + public required string Description { get; init; } + public string? CveId { get; init; } + public string? Purl { get; init; } + public IReadOnlyDictionary Details { get; init; } + = new Dictionary(); +} + +public enum DeltaDriverSeverity +{ + Low, + Medium, + High, + Critical +} + +/// +/// Summary statistics for the delta. 
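+/// Aggregates per-dimension change counts into totals, a weighted risk
+/// score, and an overall risk direction for quick triage.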
+/// +public sealed record DeltaSummary +{ + public int TotalChanges { get; init; } + public int RiskIncreasing { get; init; } + public int RiskDecreasing { get; init; } + public int Neutral { get; init; } + public decimal RiskScore { get; init; } + public string RiskDirection { get; init; } = "stable"; // "increasing", "decreasing", "stable" +} +``` + +**Acceptance Criteria**: +- [ ] `SecurityStateDelta.cs` created with all models +- [ ] SBOM, Reachability, VEX, Policy, Unknowns deltas defined +- [ ] DeltaDriver for verdict justification +- [ ] Summary statistics with risk direction +- [ ] Content-addressed delta ID + +--- + +### T2: Define DeltaVerdict Model + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create the verdict model for security state deltas. + +**Implementation Path**: `Deltas/DeltaVerdict.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Deltas; + +/// +/// Verdict for a security state delta. +/// Determines whether a change should be allowed to proceed. +/// +public sealed record DeltaVerdict +{ + /// + /// Unique identifier for this verdict. + /// + public required string VerdictId { get; init; } + + /// + /// Reference to the delta being evaluated. + /// + public required string DeltaId { get; init; } + + /// + /// When this verdict was rendered. + /// + public required DateTimeOffset EvaluatedAt { get; init; } + + /// + /// The verdict outcome. + /// + public required DeltaVerdictStatus Status { get; init; } + + /// + /// Recommended gate level based on delta risk. + /// + public GateLevel RecommendedGate { get; init; } + + /// + /// Risk points consumed by this change. + /// + public int RiskPoints { get; init; } + + /// + /// Drivers that contributed to the verdict. + /// + public IReadOnlyList BlockingDrivers { get; init; } = []; + + /// + /// Drivers that raised warnings but didn't block. + /// + public IReadOnlyList WarningDrivers { get; init; } = []; + + /// + /// Applied exceptions that allowed blocking drivers. + /// + public IReadOnlyList AppliedExceptions { get; init; } = []; + + /// + /// Human-readable explanation. + /// + public string? Explanation { get; init; } + + /// + /// Recommendations for addressing issues. + /// + public IReadOnlyList Recommendations { get; init; } = []; +} + +/// +/// Possible verdict outcomes for a delta. +/// +public enum DeltaVerdictStatus +{ + /// + /// Delta is safe to proceed. + /// + Pass, + + /// + /// Delta has warnings but can proceed. + /// + Warn, + + /// + /// Delta should not proceed without remediation. + /// + Fail, + + /// + /// Delta is blocked but covered by exceptions. + /// + PassWithExceptions +} + +/// +/// Gate levels aligned with diff-aware release gates. +/// +public enum GateLevel +{ + /// + /// G0: No-risk (docs, comments only). + /// + G0, + + /// + /// G1: Low risk (unit tests, 1 review). + /// + G1, + + /// + /// G2: Moderate risk (integration tests, code owner, canary). + /// + G2, + + /// + /// G3: High risk (security scan, migration plan, release captain). + /// + G3, + + /// + /// G4: Very high risk (formal review, extended canary, comms plan). + /// + G4 +} + +/// +/// Builder for delta verdicts. 
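+/// Adding a blocking driver forces Fail; a warning driver upgrades Pass to
+/// Warn; Build() converts Fail to PassWithExceptions when every blocking
+/// driver is covered by an exception.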
+/// +public sealed class DeltaVerdictBuilder +{ + private DeltaVerdictStatus _status = DeltaVerdictStatus.Pass; + private GateLevel _gate = GateLevel.G1; + private int _riskPoints; + private readonly List _blockingDrivers = []; + private readonly List _warningDrivers = []; + private readonly List _exceptions = []; + private readonly List _recommendations = []; + private string? _explanation; + + public DeltaVerdictBuilder WithStatus(DeltaVerdictStatus status) + { + _status = status; + return this; + } + + public DeltaVerdictBuilder WithGate(GateLevel gate) + { + _gate = gate; + return this; + } + + public DeltaVerdictBuilder WithRiskPoints(int points) + { + _riskPoints = points; + return this; + } + + public DeltaVerdictBuilder AddBlockingDriver(DeltaDriver driver) + { + _blockingDrivers.Add(driver); + _status = DeltaVerdictStatus.Fail; + return this; + } + + public DeltaVerdictBuilder AddWarningDriver(DeltaDriver driver) + { + _warningDrivers.Add(driver); + if (_status == DeltaVerdictStatus.Pass) + _status = DeltaVerdictStatus.Warn; + return this; + } + + public DeltaVerdictBuilder AddException(string exceptionId) + { + _exceptions.Add(exceptionId); + return this; + } + + public DeltaVerdictBuilder AddRecommendation(string recommendation) + { + _recommendations.Add(recommendation); + return this; + } + + public DeltaVerdictBuilder WithExplanation(string explanation) + { + _explanation = explanation; + return this; + } + + public DeltaVerdict Build(string deltaId) + { + // If all blocking drivers are excepted, change to PassWithExceptions + if (_status == DeltaVerdictStatus.Fail && _exceptions.Count >= _blockingDrivers.Count) + { + _status = DeltaVerdictStatus.PassWithExceptions; + } + + return new DeltaVerdict + { + VerdictId = $"dv:{Guid.NewGuid():N}", + DeltaId = deltaId, + EvaluatedAt = DateTimeOffset.UtcNow, + Status = _status, + RecommendedGate = _gate, + RiskPoints = _riskPoints, + BlockingDrivers = _blockingDrivers.ToList(), + WarningDrivers = _warningDrivers.ToList(), + AppliedExceptions = _exceptions.ToList(), + Explanation = _explanation ?? GenerateExplanation(), + Recommendations = _recommendations.ToList() + }; + } + + private string GenerateExplanation() + { + return _status switch + { + DeltaVerdictStatus.Pass => "No blocking changes detected", + DeltaVerdictStatus.Warn => $"{_warningDrivers.Count} warning(s) detected", + DeltaVerdictStatus.Fail => $"{_blockingDrivers.Count} blocking issue(s) detected", + DeltaVerdictStatus.PassWithExceptions => $"Blocked by {_blockingDrivers.Count} issue(s), covered by exceptions", + _ => "Unknown status" + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `DeltaVerdict.cs` created +- [ ] Four verdict statuses: Pass, Warn, Fail, PassWithExceptions +- [ ] Gate level recommendation (G0-G4) +- [ ] Risk points calculation +- [ ] Blocking and warning drivers separated +- [ ] Builder with auto-explanation + +--- + +### T3: Implement DeltaComputer + +**Assignee**: Policy Team +**Story Points**: 4 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Implement computation of deltas across all dimensions. + +**Implementation Path**: `Deltas/DeltaComputer.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Deltas; + +/// +/// Computes security state deltas between baseline and target. 
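+/// Pipeline: load both snapshots, run the per-dimension comparers (SBOM,
+/// reachability, VEX, policy, unknowns), identify verdict drivers, compute
+/// summary statistics, then derive the content-addressed delta ID.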
+/// +public sealed class DeltaComputer : IDeltaComputer +{ + private readonly ISnapshotService _snapshotService; + private readonly ISbomComparer _sbomComparer; + private readonly IReachabilityComparer _reachabilityComparer; + private readonly IVexComparer _vexComparer; + private readonly IPolicyComparer _policyComparer; + private readonly IHasher _hasher; + private readonly ILogger _logger; + + public async Task ComputeDeltaAsync( + string baselineSnapshotId, + string targetSnapshotId, + ArtifactRef artifact, + CancellationToken ct = default) + { + _logger.LogInformation( + "Computing delta between {Baseline} and {Target}", + baselineSnapshotId, targetSnapshotId); + + // Load snapshots + var baseline = await _snapshotService.GetSnapshotAsync(baselineSnapshotId, ct) + ?? throw new InvalidOperationException($"Baseline snapshot {baselineSnapshotId} not found"); + var target = await _snapshotService.GetSnapshotAsync(targetSnapshotId, ct) + ?? throw new InvalidOperationException($"Target snapshot {targetSnapshotId} not found"); + + // Compute component deltas + var sbomDelta = await _sbomComparer.CompareAsync(baseline, target, ct); + var reachabilityDelta = await _reachabilityComparer.CompareAsync(baseline, target, ct); + var vexDelta = await _vexComparer.CompareAsync(baseline, target, ct); + var policyDelta = await _policyComparer.CompareAsync(baseline, target, ct); + var unknownsDelta = ComputeUnknownsDelta(baseline, target); + + // Identify drivers + var drivers = IdentifyDrivers(sbomDelta, reachabilityDelta, vexDelta, policyDelta, unknownsDelta); + + // Compute summary + var summary = ComputeSummary(sbomDelta, reachabilityDelta, vexDelta, policyDelta, drivers); + + var delta = new SecurityStateDelta + { + DeltaId = "", // Computed below + ComputedAt = DateTimeOffset.UtcNow, + BaselineSnapshotId = baselineSnapshotId, + TargetSnapshotId = targetSnapshotId, + Artifact = artifact, + Sbom = sbomDelta, + Reachability = reachabilityDelta, + Vex = vexDelta, + Policy = policyDelta, + Unknowns = unknownsDelta, + Drivers = drivers, + Summary = summary + }; + + // Compute content-addressed ID + var deltaId = ComputeDeltaId(delta); + + return delta with { DeltaId = deltaId }; + } + + private IReadOnlyList IdentifyDrivers( + SbomDelta sbom, + ReachabilityDelta reach, + VexDelta vex, + PolicyDelta policy, + UnknownsDelta unknowns) + { + var drivers = new List(); + + // New reachable CVEs are critical drivers + foreach (var change in reach.Changes.Where(c => !c.WasReachable && c.IsReachable)) + { + drivers.Add(new DeltaDriver + { + Type = "new-reachable-cve", + Severity = DeltaDriverSeverity.Critical, + Description = $"CVE {change.CveId} is now reachable", + CveId = change.CveId, + Purl = change.Purl + }); + } + + // Lost VEX coverage + foreach (var change in vex.Changes.Where(c => c.OldStatus == "not_affected" && c.NewStatus is null)) + { + drivers.Add(new DeltaDriver + { + Type = "lost-vex-coverage", + Severity = DeltaDriverSeverity.High, + Description = $"VEX coverage lost for {change.CveId}", + CveId = change.CveId + }); + } + + // New policy violations + foreach (var change in policy.Changes.Where(c => c.ChangeType == "new-violation")) + { + drivers.Add(new DeltaDriver + { + Type = "new-policy-violation", + Severity = DeltaDriverSeverity.High, + Description = change.Description ?? 
$"New violation of rule {change.RuleId}" + }); + } + + // New high-risk packages + foreach (var pkg in sbom.AddedPackages.Where(IsHighRiskPackage)) + { + drivers.Add(new DeltaDriver + { + Type = "high-risk-package-added", + Severity = DeltaDriverSeverity.Medium, + Description = $"New high-risk package: {pkg.Purl}", + Purl = pkg.Purl + }); + } + + // Increased unknowns + if (unknowns.NewUnknowns > 0) + { + drivers.Add(new DeltaDriver + { + Type = "new-unknowns", + Severity = DeltaDriverSeverity.Medium, + Description = $"{unknowns.NewUnknowns} new unknown(s) introduced", + Details = unknowns.ByReasonCode.ToDictionary(kv => kv.Key, kv => kv.Value.ToString()) + }); + } + + return drivers.OrderByDescending(d => d.Severity).ToList(); + } + + private DeltaSummary ComputeSummary( + SbomDelta sbom, + ReachabilityDelta reach, + VexDelta vex, + PolicyDelta policy, + IReadOnlyList drivers) + { + var totalChanges = sbom.PackagesAdded + sbom.PackagesRemoved + + reach.NewReachable + reach.NewUnreachable + + vex.NewVexStatements + vex.RevokedVexStatements + + policy.NewViolations + policy.ResolvedViolations; + + var riskIncreasing = drivers.Count(d => + d.Severity is DeltaDriverSeverity.Critical or DeltaDriverSeverity.High); + var riskDecreasing = reach.NewUnreachable + vex.NewVexStatements + policy.ResolvedViolations; + var neutral = totalChanges - riskIncreasing - riskDecreasing; + + var riskScore = ComputeRiskScore(drivers); + var riskDirection = riskIncreasing > riskDecreasing ? "increasing" : + riskIncreasing < riskDecreasing ? "decreasing" : "stable"; + + return new DeltaSummary + { + TotalChanges = totalChanges, + RiskIncreasing = riskIncreasing, + RiskDecreasing = riskDecreasing, + Neutral = neutral, + RiskScore = riskScore, + RiskDirection = riskDirection + }; + } + + private decimal ComputeRiskScore(IReadOnlyList drivers) + { + return drivers.Sum(d => d.Severity switch + { + DeltaDriverSeverity.Critical => 20m, + DeltaDriverSeverity.High => 10m, + DeltaDriverSeverity.Medium => 5m, + DeltaDriverSeverity.Low => 1m, + _ => 0m + }); + } + + private static bool IsHighRiskPackage(PackageChange pkg) + { + // Simplified: Check for known high-risk characteristics + return pkg.Purl.Contains("native") || pkg.Purl.Contains("crypto"); + } + + private string ComputeDeltaId(SecurityStateDelta delta) + { + var json = JsonSerializer.Serialize(delta with { DeltaId = "" }, + new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase }); + var hash = _hasher.ComputeSha256(json); + return $"delta:sha256:{hash}"; + } +} + +public interface IDeltaComputer +{ + Task ComputeDeltaAsync( + string baselineSnapshotId, + string targetSnapshotId, + ArtifactRef artifact, + CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `DeltaComputer.cs` created +- [ ] Snapshot loading and comparison +- [ ] SBOM, reachability, VEX, policy comparers used +- [ ] Drivers identified by severity +- [ ] Summary statistics computed +- [ ] Risk score and direction calculated +- [ ] Content-addressed delta ID + +--- + +### T4: Implement BaselineSelector + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement selection of appropriate baseline for delta comparison. + +**Implementation Path**: `Deltas/BaselineSelector.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Deltas; + +/// +/// Selects the appropriate baseline for delta comparison. 
+/// +public sealed class BaselineSelector : IBaselineSelector +{ + private readonly ISnapshotStore _snapshotStore; + private readonly IVerdictStore _verdictStore; + private readonly ILogger _logger; + + /// + /// Selects a baseline snapshot for the given artifact. + /// + public async Task SelectBaselineAsync( + string artifactDigest, + BaselineSelectionStrategy strategy, + CancellationToken ct = default) + { + _logger.LogDebug("Selecting baseline for {Artifact} using strategy {Strategy}", + artifactDigest, strategy); + + return strategy switch + { + BaselineSelectionStrategy.PreviousBuild => await SelectPreviousBuildAsync(artifactDigest, ct), + BaselineSelectionStrategy.LastApproved => await SelectLastApprovedAsync(artifactDigest, ct), + BaselineSelectionStrategy.ProductionDeployed => await SelectProductionAsync(artifactDigest, ct), + BaselineSelectionStrategy.BranchBase => await SelectBranchBaseAsync(artifactDigest, ct), + BaselineSelectionStrategy.Explicit => throw new ArgumentException("Explicit strategy requires baseline ID"), + _ => throw new ArgumentOutOfRangeException(nameof(strategy)) + }; + } + + /// + /// Selects a baseline with an explicit snapshot ID. + /// + public async Task SelectExplicitAsync( + string baselineSnapshotId, + CancellationToken ct = default) + { + var snapshot = await _snapshotStore.GetAsync(baselineSnapshotId, ct); + if (snapshot is null) + { + return BaselineSelectionResult.NotFound($"Snapshot {baselineSnapshotId} not found"); + } + + return BaselineSelectionResult.Success(snapshot, BaselineSelectionStrategy.Explicit); + } + + private async Task SelectPreviousBuildAsync( + string artifactDigest, CancellationToken ct) + { + // Find the most recent verdict for this artifact's repository + var repository = ExtractRepository(artifactDigest); + var verdicts = await _verdictStore.ListByRepositoryAsync(repository, limit: 10, ct); + + var previousVerdict = verdicts + .Where(v => v.ArtifactDigest != artifactDigest) + .OrderByDescending(v => v.EvaluatedAt) + .FirstOrDefault(); + + if (previousVerdict?.KnowledgeSnapshotId is null) + { + return BaselineSelectionResult.NotFound("No previous build found"); + } + + var snapshot = await _snapshotStore.GetAsync(previousVerdict.KnowledgeSnapshotId, ct); + return snapshot is not null + ? BaselineSelectionResult.Success(snapshot, BaselineSelectionStrategy.PreviousBuild) + : BaselineSelectionResult.NotFound("Previous build snapshot not found"); + } + + private async Task SelectLastApprovedAsync( + string artifactDigest, CancellationToken ct) + { + var repository = ExtractRepository(artifactDigest); + + // Find the most recent passing verdict + var verdicts = await _verdictStore.ListByRepositoryAsync(repository, limit: 50, ct); + + var approvedVerdict = verdicts + .Where(v => v.Status is RiskVerdictStatus.Pass or RiskVerdictStatus.PassWithExceptions) + .OrderByDescending(v => v.EvaluatedAt) + .FirstOrDefault(); + + if (approvedVerdict?.KnowledgeSnapshotId is null) + { + return BaselineSelectionResult.NotFound("No approved baseline found"); + } + + var snapshot = await _snapshotStore.GetAsync(approvedVerdict.KnowledgeSnapshotId, ct); + return snapshot is not null + ? 
BaselineSelectionResult.Success(snapshot, BaselineSelectionStrategy.LastApproved) + : BaselineSelectionResult.NotFound("Approved baseline snapshot not found"); + } + + private async Task SelectProductionAsync( + string artifactDigest, CancellationToken ct) + { + var repository = ExtractRepository(artifactDigest); + + // Find verdict tagged as production deployment + var prodVerdict = await _verdictStore.GetByTagAsync(repository, "production", ct); + + if (prodVerdict?.KnowledgeSnapshotId is null) + { + return BaselineSelectionResult.NotFound("No production baseline found"); + } + + var snapshot = await _snapshotStore.GetAsync(prodVerdict.KnowledgeSnapshotId, ct); + return snapshot is not null + ? BaselineSelectionResult.Success(snapshot, BaselineSelectionStrategy.ProductionDeployed) + : BaselineSelectionResult.NotFound("Production baseline snapshot not found"); + } + + private async Task SelectBranchBaseAsync( + string artifactDigest, CancellationToken ct) + { + // This would integrate with git to find the branch base + // For now, fall back to last approved + return await SelectLastApprovedAsync(artifactDigest, ct); + } + + private static string ExtractRepository(string artifactDigest) + { + // Extract repository from artifact metadata + // This is a simplified implementation + return artifactDigest.Split('@')[0]; + } +} + +/// +/// Strategies for selecting a baseline. +/// +public enum BaselineSelectionStrategy +{ + /// + /// Use the immediately previous build of the same artifact. + /// + PreviousBuild, + + /// + /// Use the most recent build that passed policy. + /// + LastApproved, + + /// + /// Use the build currently deployed to production. + /// + ProductionDeployed, + + /// + /// Use the commit where the current branch diverged. + /// + BranchBase, + + /// + /// Use an explicitly specified baseline. + /// + Explicit +} + +public sealed record BaselineSelectionResult +{ + public required bool IsFound { get; init; } + public KnowledgeSnapshotManifest? Snapshot { get; init; } + public BaselineSelectionStrategy? Strategy { get; init; } + public string? Error { get; init; } + + public static BaselineSelectionResult Success(KnowledgeSnapshotManifest snapshot, BaselineSelectionStrategy strategy) => + new() { IsFound = true, Snapshot = snapshot, Strategy = strategy }; + + public static BaselineSelectionResult NotFound(string error) => + new() { IsFound = false, Error = error }; +} + +public interface IBaselineSelector +{ + Task SelectBaselineAsync( + string artifactDigest, + BaselineSelectionStrategy strategy, + CancellationToken ct = default); + + Task SelectExplicitAsync( + string baselineSnapshotId, + CancellationToken ct = default); +} +``` + +**Acceptance Criteria**: +- [ ] `BaselineSelector.cs` created +- [ ] Multiple selection strategies: PreviousBuild, LastApproved, ProductionDeployed, BranchBase, Explicit +- [ ] Fallback when baseline not found +- [ ] Integration with verdict store +- [ ] Logging for strategy selection + +--- + +### T5: Create DeltaVerdictStatement + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2 + +**Description**: +Create signed attestation for delta verdicts. + +**Implementation Path**: `Deltas/DeltaVerdictStatement.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Deltas; + +/// +/// Creates in-toto statements for delta verdicts. 
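+/// The statement subject is the evaluated artifact digest; the predicate
+/// carries the delta/verdict IDs, snapshot references, recommended gate,
+/// risk points, summary, blocking drivers, and applied exceptions.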
+/// +public static class DeltaVerdictStatement +{ + public const string PredicateType = "https://stella.ops/predicates/delta-verdict@v1"; + + /// + /// Creates an in-toto statement from a delta verdict. + /// + public static InTotoStatement CreateStatement( + SecurityStateDelta delta, + DeltaVerdict verdict) + { + return new InTotoStatement + { + Type = "https://in-toto.io/Statement/v1", + Subject = new[] + { + new InTotoSubject + { + Name = delta.Artifact.Name ?? delta.Artifact.Digest, + Digest = new Dictionary + { + ["sha256"] = delta.Artifact.Digest.Replace("sha256:", "") + } + } + }, + PredicateType = PredicateType, + Predicate = new DeltaVerdictPredicate + { + DeltaId = delta.DeltaId, + VerdictId = verdict.VerdictId, + Status = verdict.Status.ToString(), + BaselineSnapshotId = delta.BaselineSnapshotId, + TargetSnapshotId = delta.TargetSnapshotId, + RecommendedGate = verdict.RecommendedGate.ToString(), + RiskPoints = verdict.RiskPoints, + Summary = new DeltaSummaryPredicate + { + TotalChanges = delta.Summary.TotalChanges, + RiskIncreasing = delta.Summary.RiskIncreasing, + RiskDecreasing = delta.Summary.RiskDecreasing, + RiskDirection = delta.Summary.RiskDirection + }, + BlockingDrivers = verdict.BlockingDrivers + .Select(d => new DriverPredicate { Type = d.Type, Description = d.Description }) + .ToList(), + AppliedExceptions = verdict.AppliedExceptions.ToList(), + EvaluatedAt = verdict.EvaluatedAt.ToString("o") + } + }; + } +} + +public sealed record DeltaVerdictPredicate +{ + [JsonPropertyName("deltaId")] + public required string DeltaId { get; init; } + + [JsonPropertyName("verdictId")] + public required string VerdictId { get; init; } + + [JsonPropertyName("status")] + public required string Status { get; init; } + + [JsonPropertyName("baselineSnapshotId")] + public required string BaselineSnapshotId { get; init; } + + [JsonPropertyName("targetSnapshotId")] + public required string TargetSnapshotId { get; init; } + + [JsonPropertyName("recommendedGate")] + public required string RecommendedGate { get; init; } + + [JsonPropertyName("riskPoints")] + public int RiskPoints { get; init; } + + [JsonPropertyName("summary")] + public required DeltaSummaryPredicate Summary { get; init; } + + [JsonPropertyName("blockingDrivers")] + public required IReadOnlyList BlockingDrivers { get; init; } + + [JsonPropertyName("appliedExceptions")] + public required IReadOnlyList AppliedExceptions { get; init; } + + [JsonPropertyName("evaluatedAt")] + public required string EvaluatedAt { get; init; } +} + +public sealed record DeltaSummaryPredicate +{ + [JsonPropertyName("totalChanges")] + public int TotalChanges { get; init; } + + [JsonPropertyName("riskIncreasing")] + public int RiskIncreasing { get; init; } + + [JsonPropertyName("riskDecreasing")] + public int RiskDecreasing { get; init; } + + [JsonPropertyName("riskDirection")] + public required string RiskDirection { get; init; } +} + +public sealed record DriverPredicate +{ + [JsonPropertyName("type")] + public required string Type { get; init; } + + [JsonPropertyName("description")] + public required string Description { get; init; } +} + +/// +/// Service for creating and signing delta verdict attestations. 
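+/// Wraps the in-toto statement in a DSSE envelope: the camelCase JSON
+/// payload is signed, then base64-encoded alongside the signer's key ID.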
+///
+public sealed class DeltaVerdictAttestor : IDeltaVerdictAttestor
+{
+    private readonly ISigner _signer;
+    private readonly ILogger<DeltaVerdictAttestor> _logger;
+
+    public async Task<DsseEnvelope> AttestAsync(
+        SecurityStateDelta delta,
+        DeltaVerdict verdict,
+        CancellationToken ct = default)
+    {
+        var statement = DeltaVerdictStatement.CreateStatement(delta, verdict);
+
+        var payload = JsonSerializer.SerializeToUtf8Bytes(statement, new JsonSerializerOptions
+        {
+            PropertyNamingPolicy = JsonNamingPolicy.CamelCase
+        });
+
+        var signature = await _signer.SignAsync(payload, ct);
+
+        _logger.LogInformation(
+            "Created delta verdict attestation for {DeltaId} with status {Status}",
+            delta.DeltaId, verdict.Status);
+
+        return new DsseEnvelope
+        {
+            PayloadType = "application/vnd.in-toto+json",
+            Payload = Convert.ToBase64String(payload),
+            Signatures = new[]
+            {
+                new DsseSignature
+                {
+                    KeyId = _signer.KeyId,
+                    Sig = Convert.ToBase64String(signature)
+                }
+            }
+        };
+    }
+}
+
+public interface IDeltaVerdictAttestor
+{
+    Task<DsseEnvelope> AttestAsync(
+        SecurityStateDelta delta,
+        DeltaVerdict verdict,
+        CancellationToken ct = default);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `DeltaVerdictStatement.cs` created
+- [ ] Predicate type: `https://stella.ops/predicates/delta-verdict@v1`
+- [ ] In-toto statement structure correct
+- [ ] Delta summary included in predicate
+- [ ] Blocking drivers listed
+- [ ] Attestor service for signing
+
+---
+
+### T6: Add Delta API Endpoints
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T3, T4, T5
+
+**Description**:
+Add REST API endpoints for delta operations.
+
+**Implementation Path**: `src/Policy/StellaOps.Policy.WebService/Controllers/DeltasController.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.WebService.Controllers;
+
+[ApiController]
+[Route("api/v1/policy/deltas")]
+public class DeltasController : ControllerBase
+{
+    private readonly IDeltaComputer _deltaComputer;
+    private readonly IBaselineSelector _baselineSelector;
+    private readonly IDeltaVerdictAttestor _attestor;
+    // The store and evaluator abstractions below are assumed: the endpoints
+    // in this sketch require them, but their interfaces are not defined in
+    // this sprint and must come from the delta/verdict persistence work.
+    private readonly IDeltaStore _deltaStore;
+    private readonly IVerdictStore _verdictStore;
+    private readonly IDeltaVerdictEvaluator _verdictEvaluator;
+    private readonly ILogger<DeltasController> _logger;
+
+    ///
+    /// Computes a security state delta.
+    ///
+    [HttpPost("compute")]
+    public async Task<ActionResult<ComputeDeltaResponse>> ComputeDelta(
+        [FromBody] ComputeDeltaRequest request,
+        CancellationToken ct)
+    {
+        // Select baseline
+        var baselineResult = string.IsNullOrEmpty(request.BaselineSnapshotId)
+            ? await _baselineSelector.SelectBaselineAsync(
+                request.ArtifactDigest,
+                request.BaselineStrategy ?? BaselineSelectionStrategy.LastApproved,
+                ct)
+            : await _baselineSelector.SelectExplicitAsync(request.BaselineSnapshotId, ct);
+
+        if (!baselineResult.IsFound)
+        {
+            return NotFound(new { error = baselineResult.Error });
+        }
+
+        // Compute delta
+        var delta = await _deltaComputer.ComputeDeltaAsync(
+            baselineResult.Snapshot!.SnapshotId,
+            request.TargetSnapshotId,
+            new ArtifactRef(request.ArtifactDigest, request.ArtifactName, request.ArtifactTag),
+            ct);
+
+        return Ok(new ComputeDeltaResponse
+        {
+            DeltaId = delta.DeltaId,
+            BaselineSnapshotId = delta.BaselineSnapshotId,
+            TargetSnapshotId = delta.TargetSnapshotId,
+            Summary = delta.Summary,
+            DriverCount = delta.Drivers.Count
+        });
+    }
+
+    ///
+    /// Gets a delta by ID.
+    ///
+    [HttpGet("{deltaId}")]
+    public async Task<ActionResult<SecurityStateDelta>> GetDelta(
+        string deltaId,
+        CancellationToken ct)
+    {
+        var delta = await _deltaStore.GetAsync(deltaId, ct);
+        if (delta is null)
+            return NotFound();
+
+        return Ok(delta);
+    }
+
+    ///
+    /// Evaluates a delta and returns a verdict.
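+    /// Loads the stored delta, applies any exceptions supplied in the
+    /// request body, and returns the resulting verdict.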
+ /// + [HttpPost("{deltaId}/evaluate")] + public async Task> EvaluateDelta( + string deltaId, + [FromBody] EvaluateDeltaRequest? request, + CancellationToken ct) + { + var delta = await _deltaStore.GetAsync(deltaId, ct); + if (delta is null) + return NotFound(); + + var verdict = await _verdictEvaluator.EvaluateAsync(delta, request?.Exceptions, ct); + + return Ok(verdict); + } + + /// + /// Gets a signed attestation for a delta verdict. + /// + [HttpGet("{deltaId}/attestation")] + public async Task> GetAttestation( + string deltaId, + CancellationToken ct) + { + var delta = await _deltaStore.GetAsync(deltaId, ct); + if (delta is null) + return NotFound(); + + var verdict = await _verdictStore.GetByDeltaAsync(deltaId, ct); + if (verdict is null) + return NotFound(new { error = "No verdict for this delta" }); + + var envelope = await _attestor.AttestAsync(delta, verdict, ct); + + return Ok(envelope); + } +} + +public sealed record ComputeDeltaRequest +{ + public required string ArtifactDigest { get; init; } + public string? ArtifactName { get; init; } + public string? ArtifactTag { get; init; } + public required string TargetSnapshotId { get; init; } + public string? BaselineSnapshotId { get; init; } + public BaselineSelectionStrategy? BaselineStrategy { get; init; } +} + +public sealed record ComputeDeltaResponse +{ + public required string DeltaId { get; init; } + public required string BaselineSnapshotId { get; init; } + public required string TargetSnapshotId { get; init; } + public required DeltaSummary Summary { get; init; } + public int DriverCount { get; init; } +} + +public sealed record EvaluateDeltaRequest +{ + public IReadOnlyList? Exceptions { get; init; } +} +``` + +**Acceptance Criteria**: +- [ ] `DeltasController.cs` created +- [ ] `POST /api/v1/policy/deltas/compute` endpoint +- [ ] `GET /api/v1/policy/deltas/{deltaId}` endpoint +- [ ] `POST /api/v1/policy/deltas/{deltaId}/evaluate` endpoint +- [ ] `GET /api/v1/policy/deltas/{deltaId}/attestation` endpoint +- [ ] Baseline selection by strategy + +--- + +### T7: Add Tests + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T3, T4 + +**Description**: +Add tests for delta computation. 
+ +**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Tests/Deltas/` + +**Test Cases**: +```csharp +public class DeltaComputerTests +{ + [Fact] + public async Task ComputeDelta_IdenticalSnapshots_ReturnsEmptyDelta() + { + var snapshotId = await CreateSnapshotAsync(); + + var delta = await _computer.ComputeDeltaAsync( + snapshotId, snapshotId, + new ArtifactRef("sha256:test", null, null)); + + delta.Summary.TotalChanges.Should().Be(0); + delta.Drivers.Should().BeEmpty(); + } + + [Fact] + public async Task ComputeDelta_NewPackage_CreatesDriver() + { + var baseline = await CreateSnapshotAsync(packages: new[] { "pkg:npm/foo@1.0" }); + var target = await CreateSnapshotAsync(packages: new[] { "pkg:npm/foo@1.0", "pkg:npm/bar@1.0" }); + + var delta = await _computer.ComputeDeltaAsync( + baseline, target, + new ArtifactRef("sha256:test", null, null)); + + delta.Sbom.PackagesAdded.Should().Be(1); + delta.Summary.TotalChanges.Should().BeGreaterThan(0); + } + + [Fact] + public async Task ComputeDelta_NewReachableCve_CreatesCriticalDriver() + { + var baseline = await CreateSnapshotAsync(reachableCves: new string[0]); + var target = await CreateSnapshotAsync(reachableCves: new[] { "CVE-2024-001" }); + + var delta = await _computer.ComputeDeltaAsync( + baseline, target, + new ArtifactRef("sha256:test", null, null)); + + delta.Drivers.Should().Contain(d => + d.Type == "new-reachable-cve" && + d.Severity == DeltaDriverSeverity.Critical); + } + + [Fact] + public async Task ComputeDelta_ContentAddressedId_IsDeterministic() + { + var baseline = await CreateSnapshotAsync(); + var target = await CreateSnapshotAsync(); + + var delta1 = await _computer.ComputeDeltaAsync(baseline, target, CreateArtifact()); + var delta2 = await _computer.ComputeDeltaAsync(baseline, target, CreateArtifact()); + + delta1.DeltaId.Should().Be(delta2.DeltaId); + } +} + +public class BaselineSelectorTests +{ + [Fact] + public async Task SelectBaseline_LastApproved_FindsPassingVerdict() + { + await CreateVerdict("sha256:v1", RiskVerdictStatus.Fail); + await CreateVerdict("sha256:v2", RiskVerdictStatus.Pass); + await CreateVerdict("sha256:v3", RiskVerdictStatus.Fail); + + var result = await _selector.SelectBaselineAsync( + "sha256:v4", + BaselineSelectionStrategy.LastApproved); + + result.IsFound.Should().BeTrue(); + // Should select v2 as it's the last passing + } + + [Fact] + public async Task SelectBaseline_NoBaseline_ReturnsNotFound() + { + var result = await _selector.SelectBaselineAsync( + "sha256:new-artifact", + BaselineSelectionStrategy.PreviousBuild); + + result.IsFound.Should().BeFalse(); + result.Error.Should().NotBeNullOrEmpty(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for identical snapshots (empty delta) +- [ ] Test for new package detection +- [ ] Test for critical driver creation +- [ ] Test for deterministic delta ID +- [ ] Baseline selector tests +- [ ] All 6+ tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Policy Team | Define SecurityStateDelta model | +| 2 | T2 | TODO | T1 | Policy Team | Define DeltaVerdict model | +| 3 | T3 | TODO | T1, T2 | Policy Team | Implement DeltaComputer | +| 4 | T4 | TODO | T1 | Policy Team | Implement BaselineSelector | +| 5 | T5 | TODO | T2 | Policy Team | Create DeltaVerdictStatement | +| 6 | T6 | TODO | T3, T4, T5 | Policy Team | Add delta API endpoints | +| 7 | T7 | TODO | T3, T4 | Policy Team | Add tests | + +--- 
+ +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Security state delta identified as requirement from Moat #1 advisory. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Content-addressed delta ID | Decision | Policy Team | delta:sha256:{hash} ensures immutability | +| Driver severity levels | Decision | Policy Team | Critical/High/Medium/Low aligns with CVSS | +| Baseline selection strategies | Decision | Policy Team | Multiple strategies for different workflows | +| Risk direction | Decision | Policy Team | increasing/decreasing/stable for quick assessment | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Delta model captures all dimensions +- [ ] Drivers correctly identified +- [ ] Baseline selection works +- [ ] Attestations are signed +- [ ] API endpoints functional +- [ ] 6+ delta tests passing +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4100_0004_0002_risk_budgets_gates.md b/docs/implplan/SPRINT_4100_0004_0002_risk_budgets_gates.md new file mode 100644 index 000000000..e063f1867 --- /dev/null +++ b/docs/implplan/SPRINT_4100_0004_0002_risk_budgets_gates.md @@ -0,0 +1,1460 @@ +# Sprint 4100.0004.0002 · Risk Budgets & Gate Levels + +## Topic & Scope + +- Implement risk budget tracking and enforcement +- Define diff-aware release gate levels (G0-G4) +- Enable budget-constrained release decisions + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/Gates/` + +## Dependencies & Concurrency + +- **Upstream**: None (independent sprint) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4100.0001.0001, Sprint 4100.0002.0001, Sprint 4100.0003.0001 + +## Documentation Prerequisites + +- `src/Policy/__Libraries/StellaOps.Policy/AGENTS.md` +- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md` + +--- + +## Tasks + +### T1: Define RiskBudget Model + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the risk budget model for tracking risk point allocation. + +**Implementation Path**: `Gates/RiskBudget.cs` (new file) + +**Model Definition**: +```csharp +namespace StellaOps.Policy.Gates; + +/// +/// Represents a risk budget for a service/product. +/// Tracks risk point allocation and consumption. +/// +public sealed record RiskBudget +{ + /// + /// Unique identifier for this budget. + /// + public required string BudgetId { get; init; } + + /// + /// Service or product this budget applies to. + /// + public required string ServiceId { get; init; } + + /// + /// Criticality tier (0-3). + /// + public required ServiceTier Tier { get; init; } + + /// + /// Budget window (e.g., "2025-01" for monthly). + /// + public required string Window { get; init; } + + /// + /// Total risk points allocated for this window. + /// + public required int Allocated { get; init; } + + /// + /// Risk points consumed so far. + /// + public int Consumed { get; init; } + + /// + /// Risk points remaining. + /// + public int Remaining => Allocated - Consumed; + + /// + /// Percentage of budget used. + /// + public decimal PercentageUsed => Allocated > 0 + ? (decimal)Consumed / Allocated * 100 + : 0; + + /// + /// Current operating status. 
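+    /// Derived from PercentageUsed: Green below 40% used, Yellow below 70%,
+    /// Red below 100%, otherwise Exhausted.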
+ /// + public BudgetStatus Status => PercentageUsed switch + { + < 40 => BudgetStatus.Green, + < 70 => BudgetStatus.Yellow, + < 100 => BudgetStatus.Red, + _ => BudgetStatus.Exhausted + }; + + /// + /// Last updated timestamp. + /// + public DateTimeOffset UpdatedAt { get; init; } +} + +/// +/// Service criticality tiers. +/// +public enum ServiceTier +{ + /// + /// Tier 0: Internal only, low business impact. + /// + Internal = 0, + + /// + /// Tier 1: Customer-facing non-critical. + /// + CustomerFacingNonCritical = 1, + + /// + /// Tier 2: Customer-facing critical. + /// + CustomerFacingCritical = 2, + + /// + /// Tier 3: Safety/financial/data-critical. + /// + SafetyCritical = 3 +} + +/// +/// Budget operating status. +/// +public enum BudgetStatus +{ + /// + /// Green: >= 60% remaining. Normal operation. + /// + Green, + + /// + /// Yellow: 30-59% remaining. Increased caution. + /// + Yellow, + + /// + /// Red: < 30% remaining. Freeze high-risk diffs. + /// + Red, + + /// + /// Exhausted: <= 0% remaining. Incident/security fixes only. + /// + Exhausted +} + +/// +/// Default budget allocations by tier. +/// +public static class DefaultBudgetAllocations +{ + public static int GetMonthlyAllocation(ServiceTier tier) => tier switch + { + ServiceTier.Internal => 300, + ServiceTier.CustomerFacingNonCritical => 200, + ServiceTier.CustomerFacingCritical => 120, + ServiceTier.SafetyCritical => 80, + _ => 100 + }; +} +``` + +**Acceptance Criteria**: +- [ ] `RiskBudget.cs` created with all models +- [ ] Four service tiers defined +- [ ] Budget status thresholds: Green (>=60%), Yellow (30-59%), Red (<30%), Exhausted (<=0%) +- [ ] Default allocations by tier +- [ ] Remaining and percentage calculated + +--- + +### T2: Define RiskPointScoring + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement the Release Risk Score (RRS) calculation. + +**Implementation Path**: `Gates/RiskPointScoring.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Gates; + +/// +/// Calculates Release Risk Score (RRS) for changes. +/// RRS = Base(criticality) + Diff Risk + Operational Context - Mitigations +/// +public sealed class RiskPointScoring : IRiskPointScoring +{ + private readonly IOptionsMonitor _options; + + public RiskPointScoring(IOptionsMonitor options) + { + _options = options; + } + + /// + /// Calculates the Release Risk Score for a change. 
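+    /// Worked example, assuming the default base scores listed in this
+    /// sprint's acceptance criteria (Tier0=1, Tier1=3, Tier2=6, Tier3=10):
+    /// a Tier2 service shipping a DatabaseMigration (+10) during a recent
+    /// incident (+5) with a feature flag (-3) and a canary (-3) scores
+    /// 6 + 10 + 5 - 6 = 15, which maps to G3 under a Green budget.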
+ /// + public RiskScoreResult CalculateScore(RiskScoreInput input) + { + var breakdown = new RiskScoreBreakdown(); + + // Base score from service tier + var baseScore = GetBaseScore(input.Tier); + breakdown.Base = baseScore; + + // Diff risk (additive) + var diffRisk = CalculateDiffRisk(input.DiffCategory); + breakdown.DiffRisk = diffRisk; + + // Operational context (additive) + var operationalContext = CalculateOperationalContext(input.Context); + breakdown.OperationalContext = operationalContext; + + // Mitigations (subtract) + var mitigations = CalculateMitigations(input.Mitigations); + breakdown.Mitigations = mitigations; + + // Total (minimum 1) + var total = Math.Max(1, baseScore + diffRisk + operationalContext - mitigations); + breakdown.Total = total; + + // Determine gate level + var gate = DetermineGateLevel(total, input.Context.BudgetStatus); + + return new RiskScoreResult + { + Score = total, + Breakdown = breakdown, + RecommendedGate = gate + }; + } + + private int GetBaseScore(ServiceTier tier) + { + return tier switch + { + ServiceTier.Internal => _options.CurrentValue.BaseScores.Tier0, + ServiceTier.CustomerFacingNonCritical => _options.CurrentValue.BaseScores.Tier1, + ServiceTier.CustomerFacingCritical => _options.CurrentValue.BaseScores.Tier2, + ServiceTier.SafetyCritical => _options.CurrentValue.BaseScores.Tier3, + _ => 1 + }; + } + + private int CalculateDiffRisk(DiffCategory category) + { + return category switch + { + DiffCategory.DocsOnly => 1, + DiffCategory.UiNonCore => 3, + DiffCategory.ApiBackwardCompatible => 6, + DiffCategory.DatabaseMigration => 10, + DiffCategory.AuthPermission => 10, + DiffCategory.InfraNetworking => 15, + DiffCategory.CryptoPayment => 15, + _ => 3 + }; + } + + private int CalculateOperationalContext(OperationalContext context) + { + var score = 0; + + if (context.HasRecentIncident) + score += 5; + + if (context.ErrorBudgetBelow50Percent) + score += 3; + + if (context.HighOnCallLoad) + score += 2; + + if (context.InRestrictedWindow) + score += 5; + + return score; + } + + private int CalculateMitigations(MitigationFactors mitigations) + { + var reduction = 0; + + if (mitigations.HasFeatureFlag) + reduction += 3; + + if (mitigations.HasCanaryDeployment) + reduction += 3; + + if (mitigations.HasHighTestCoverage) + reduction += 2; + + if (mitigations.HasBackwardCompatibleMigration) + reduction += 2; + + if (mitigations.HasPermissionBoundary) + reduction += 2; + + return reduction; + } + + private GateLevel DetermineGateLevel(int score, BudgetStatus budgetStatus) + { + var baseGate = score switch + { + <= 5 => GateLevel.G1, + <= 12 => GateLevel.G2, + <= 20 => GateLevel.G3, + _ => GateLevel.G4 + }; + + // Escalate based on budget status + return budgetStatus switch + { + BudgetStatus.Yellow when baseGate >= GateLevel.G2 => EscalateGate(baseGate), + BudgetStatus.Red when baseGate >= GateLevel.G1 => EscalateGate(baseGate), + BudgetStatus.Exhausted => GateLevel.G4, + _ => baseGate + }; + } + + private static GateLevel EscalateGate(GateLevel gate) => + gate < GateLevel.G4 ? gate + 1 : GateLevel.G4; +} + +/// +/// Input for risk score calculation. +/// +public sealed record RiskScoreInput +{ + public required ServiceTier Tier { get; init; } + public required DiffCategory DiffCategory { get; init; } + public required OperationalContext Context { get; init; } + public required MitigationFactors Mitigations { get; init; } +} + +/// +/// Categories of diffs affecting risk score. 
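+/// RiskPointScoring assigns 1 point (DocsOnly) up to 15 (InfraNetworking,
+/// CryptoPayment); categories without an explicit mapping, such as
+/// ApiBreaking and Other, default to 3.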
+/// +public enum DiffCategory +{ + DocsOnly, + UiNonCore, + ApiBackwardCompatible, + ApiBreaking, + DatabaseMigration, + AuthPermission, + InfraNetworking, + CryptoPayment, + Other +} + +/// +/// Operational context affecting risk. +/// +public sealed record OperationalContext +{ + public bool HasRecentIncident { get; init; } + public bool ErrorBudgetBelow50Percent { get; init; } + public bool HighOnCallLoad { get; init; } + public bool InRestrictedWindow { get; init; } + public BudgetStatus BudgetStatus { get; init; } +} + +/// +/// Mitigation factors that reduce risk. +/// +public sealed record MitigationFactors +{ + public bool HasFeatureFlag { get; init; } + public bool HasCanaryDeployment { get; init; } + public bool HasHighTestCoverage { get; init; } + public bool HasBackwardCompatibleMigration { get; init; } + public bool HasPermissionBoundary { get; init; } +} + +/// +/// Result of risk score calculation. +/// +public sealed record RiskScoreResult +{ + public required int Score { get; init; } + public required RiskScoreBreakdown Breakdown { get; init; } + public required GateLevel RecommendedGate { get; init; } +} + +/// +/// Breakdown of score components. +/// +public sealed record RiskScoreBreakdown +{ + public int Base { get; set; } + public int DiffRisk { get; set; } + public int OperationalContext { get; set; } + public int Mitigations { get; set; } + public int Total { get; set; } +} + +public interface IRiskPointScoring +{ + RiskScoreResult CalculateScore(RiskScoreInput input); +} +``` + +**Acceptance Criteria**: +- [ ] `RiskPointScoring.cs` created +- [ ] Base scores by tier (1, 3, 6, 10) +- [ ] Diff risk by category (1-15) +- [ ] Operational context factors (+2 to +5) +- [ ] Mitigation reductions (-2 to -3) +- [ ] Minimum score of 1 +- [ ] Gate escalation based on budget status + +--- + +### T3: Create BudgetLedger + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement ledger for tracking budget consumption. + +**Implementation Path**: `Gates/BudgetLedger.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Gates; + +/// +/// Ledger for tracking risk budget consumption. +/// +public sealed class BudgetLedger : IBudgetLedger +{ + private readonly IBudgetStore _store; + private readonly ILogger _logger; + + public BudgetLedger(IBudgetStore store, ILogger logger) + { + _store = store; + _logger = logger; + } + + /// + /// Gets the current budget for a service. + /// + public async Task GetBudgetAsync( + string serviceId, + string? window = null, + CancellationToken ct = default) + { + window ??= GetCurrentWindow(); + + var budget = await _store.GetAsync(serviceId, window, ct); + if (budget is not null) + return budget; + + // Create default budget if none exists + var tier = await GetServiceTierAsync(serviceId, ct); + return await CreateBudgetAsync(serviceId, tier, window, ct); + } + + /// + /// Records consumption of risk points. 
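+    /// Fails without side effects when the requested points exceed the
+    /// remaining budget; otherwise appends an immutable BudgetEntry and
+    /// persists the updated Consumed total.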
+ /// + public async Task ConsumeAsync( + string serviceId, + int riskPoints, + string releaseId, + CancellationToken ct = default) + { + var budget = await GetBudgetAsync(serviceId, ct: ct); + + if (budget.Remaining < riskPoints) + { + _logger.LogWarning( + "Budget exceeded for {ServiceId}: {Remaining} remaining, {Requested} requested", + serviceId, budget.Remaining, riskPoints); + + return new BudgetConsumeResult + { + IsSuccess = false, + Budget = budget, + Error = "Insufficient budget remaining" + }; + } + + // Record the consumption + var entry = new BudgetEntry + { + EntryId = Guid.NewGuid().ToString(), + ServiceId = serviceId, + Window = budget.Window, + ReleaseId = releaseId, + RiskPoints = riskPoints, + ConsumedAt = DateTimeOffset.UtcNow + }; + + await _store.AddEntryAsync(entry, ct); + + // Update budget + var updatedBudget = budget with + { + Consumed = budget.Consumed + riskPoints, + UpdatedAt = DateTimeOffset.UtcNow + }; + + await _store.UpdateAsync(updatedBudget, ct); + + _logger.LogInformation( + "Consumed {RiskPoints} RP for {ServiceId}. Remaining: {Remaining}/{Allocated}", + riskPoints, serviceId, updatedBudget.Remaining, updatedBudget.Allocated); + + return new BudgetConsumeResult + { + IsSuccess = true, + Budget = updatedBudget, + Entry = entry + }; + } + + /// + /// Gets the consumption history for a service. + /// + public async Task> GetHistoryAsync( + string serviceId, + string? window = null, + CancellationToken ct = default) + { + window ??= GetCurrentWindow(); + return await _store.GetEntriesAsync(serviceId, window, ct); + } + + /// + /// Adjusts budget allocation (e.g., for earned capacity). + /// + public async Task AdjustAllocationAsync( + string serviceId, + int adjustment, + string reason, + CancellationToken ct = default) + { + var budget = await GetBudgetAsync(serviceId, ct: ct); + + var newAllocation = Math.Max(0, budget.Allocated + adjustment); + var updatedBudget = budget with + { + Allocated = newAllocation, + UpdatedAt = DateTimeOffset.UtcNow + }; + + await _store.UpdateAsync(updatedBudget, ct); + + _logger.LogInformation( + "Adjusted budget for {ServiceId} by {Adjustment} RP. Reason: {Reason}", + serviceId, adjustment, reason); + + return updatedBudget; + } + + private async Task CreateBudgetAsync( + string serviceId, + ServiceTier tier, + string window, + CancellationToken ct) + { + var budget = new RiskBudget + { + BudgetId = $"budget:{serviceId}:{window}", + ServiceId = serviceId, + Tier = tier, + Window = window, + Allocated = DefaultBudgetAllocations.GetMonthlyAllocation(tier), + Consumed = 0, + UpdatedAt = DateTimeOffset.UtcNow + }; + + await _store.CreateAsync(budget, ct); + return budget; + } + + private static string GetCurrentWindow() => + DateTimeOffset.UtcNow.ToString("yyyy-MM"); + + private async Task GetServiceTierAsync(string serviceId, CancellationToken ct) + { + // Look up service tier from configuration or default to Tier 1 + return ServiceTier.CustomerFacingNonCritical; + } +} + +/// +/// Entry recording a budget consumption. +/// +public sealed record BudgetEntry +{ + public required string EntryId { get; init; } + public required string ServiceId { get; init; } + public required string Window { get; init; } + public required string ReleaseId { get; init; } + public required int RiskPoints { get; init; } + public required DateTimeOffset ConsumedAt { get; init; } +} + +/// +/// Result of budget consumption attempt. 
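+/// Entry is populated only on success; Error only on failure.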
+/// +public sealed record BudgetConsumeResult +{ + public required bool IsSuccess { get; init; } + public required RiskBudget Budget { get; init; } + public BudgetEntry? Entry { get; init; } + public string? Error { get; init; } +} + +public interface IBudgetLedger +{ + Task GetBudgetAsync(string serviceId, string? window = null, CancellationToken ct = default); + Task ConsumeAsync(string serviceId, int riskPoints, string releaseId, CancellationToken ct = default); + Task> GetHistoryAsync(string serviceId, string? window = null, CancellationToken ct = default); + Task AdjustAllocationAsync(string serviceId, int adjustment, string reason, CancellationToken ct = default); +} + +public interface IBudgetStore +{ + Task GetAsync(string serviceId, string window, CancellationToken ct); + Task CreateAsync(RiskBudget budget, CancellationToken ct); + Task UpdateAsync(RiskBudget budget, CancellationToken ct); + Task AddEntryAsync(BudgetEntry entry, CancellationToken ct); + Task> GetEntriesAsync(string serviceId, string window, CancellationToken ct); +} +``` + +**Acceptance Criteria**: +- [ ] `BudgetLedger.cs` created +- [ ] Budget retrieval with auto-creation +- [ ] Consumption tracking with entries +- [ ] History retrieval +- [ ] Allocation adjustment for earned capacity +- [ ] Logging for observability + +--- + +### T4: Define GateLevel Enum + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Define the gate level enum with requirements documentation. + +**Implementation Path**: `Gates/GateLevel.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Gates; + +/// +/// Diff-aware release gate levels (G0-G4). +/// Higher levels require more checks before release. +/// +public enum GateLevel +{ + /// + /// G0: No-risk / Administrative. + /// Requirements: Lint/format checks, basic CI pass. + /// Use for: docs-only, comments-only, non-functional metadata. + /// + G0 = 0, + + /// + /// G1: Low risk. + /// Requirements: All automated unit tests, static analysis, 1 peer review, staging deploy, smoke checks. + /// Use for: small localized changes, non-core UI, telemetry additions. + /// + G1 = 1, + + /// + /// G2: Moderate risk. + /// Requirements: G1 + integration tests, code owner review, feature flag required, staged rollout, rollback plan. + /// Use for: moderate logic changes, dependency upgrades, backward-compatible API changes. + /// + G2 = 2, + + /// + /// G3: High risk. + /// Requirements: G2 + security scan, migration plan reviewed, load/performance checks, observability updates, release captain sign-off, progressive delivery with health gates. + /// Use for: schema migrations, auth/permission changes, core business logic, infra changes. + /// + G3 = 3, + + /// + /// G4: Very high risk / Safety-critical. + /// Requirements: G3 + formal risk review (PM+DM+Security), rollback rehearsal, extended canary, customer comms plan, post-release verification checklist. + /// Use for: Tier 3 systems with low budget, freeze window exceptions, platform-wide changes. + /// + G4 = 4 +} + +/// +/// Gate level requirements documentation. 
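+/// Levels G2 through G4 are cumulative: each includes all requirements of
+/// the level below it.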
+/// +public static class GateLevelRequirements +{ + public static IReadOnlyList GetRequirements(GateLevel level) + { + return level switch + { + GateLevel.G0 => new[] + { + "Lint/format checks pass", + "Basic CI build passes" + }, + + GateLevel.G1 => new[] + { + "All automated unit tests pass", + "Static analysis/linting clean", + "1 peer review (code owner not required)", + "Automated deploy to staging", + "Post-deploy smoke checks pass" + }, + + GateLevel.G2 => new[] + { + "All G1 requirements", + "Integration tests for impacted modules pass", + "Code owner review for touched modules", + "Feature flag required if customer impact possible", + "Staged rollout: canary or small cohort", + "Rollback plan documented in PR" + }, + + GateLevel.G3 => new[] + { + "All G2 requirements", + "Security scan + dependency audit pass", + "Migration plan (forward + rollback) reviewed", + "Load/performance checks if in hot path", + "Observability: new/updated dashboards/alerts", + "Release captain / on-call sign-off", + "Progressive delivery with automatic health gates" + }, + + GateLevel.G4 => new[] + { + "All G3 requirements", + "Formal risk review (PM+DM+Security/SRE) in writing", + "Explicit rollback rehearsal or proven rollback path", + "Extended canary period with success/abort criteria", + "Customer comms plan if impact is plausible", + "Post-release verification checklist executed and logged" + }, + + _ => Array.Empty() + }; + } + + public static string GetDescription(GateLevel level) + { + return level switch + { + GateLevel.G0 => "No-risk: Basic CI only", + GateLevel.G1 => "Low risk: Unit tests + 1 review", + GateLevel.G2 => "Moderate risk: Integration tests + code owner + canary", + GateLevel.G3 => "High risk: Security scan + release captain + progressive", + GateLevel.G4 => "Very high risk: Formal review + extended canary + comms", + _ => "Unknown" + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `GateLevel.cs` created +- [ ] Five levels defined (G0-G4) +- [ ] Requirements for each level documented +- [ ] Descriptions for quick reference +- [ ] XML documentation on enum values + +--- + +### T5: Create GateSelector + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2, T4 + +**Description**: +Implement gate level selection based on risk score and context. + +**Implementation Path**: `Gates/GateSelector.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Gates; + +/// +/// Selects the appropriate gate level for a release. +/// +public sealed class GateSelector : IGateSelector +{ + private readonly IRiskPointScoring _scoring; + private readonly IBudgetLedger _budgetLedger; + private readonly ILogger _logger; + + public GateSelector( + IRiskPointScoring scoring, + IBudgetLedger budgetLedger, + ILogger logger) + { + _scoring = scoring; + _budgetLedger = budgetLedger; + _logger = logger; + } + + /// + /// Determines the gate level for a change. 
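+    /// Pipeline: fetch the current budget, fold its status into the
+    /// operational context, compute the risk score, escalate the gate for
+    /// Yellow/Red/Exhausted budgets, then check for hard blocks.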
+    /// </summary>
+    public async Task<GateSelectionResult> SelectGateAsync(
+        GateSelectionInput input,
+        CancellationToken ct = default)
+    {
+        // Get current budget status
+        var budget = await _budgetLedger.GetBudgetAsync(input.ServiceId, ct: ct);
+
+        // Build context with budget status
+        var context = input.Context with { BudgetStatus = budget.Status };
+
+        // Calculate risk score
+        var scoreInput = new RiskScoreInput
+        {
+            Tier = input.Tier,
+            DiffCategory = input.DiffCategory,
+            Context = context,
+            Mitigations = input.Mitigations
+        };
+
+        var scoreResult = _scoring.CalculateScore(scoreInput);
+
+        // Apply budget-based modifiers
+        var finalGate = ApplyBudgetModifiers(scoreResult.RecommendedGate, budget);
+
+        // Check for blocks
+        var isBlocked = CheckForBlocks(finalGate, budget, input);
+
+        _logger.LogInformation(
+            "Gate selection for {ServiceId}: Score={Score}, Gate={Gate}, Budget={BudgetStatus}",
+            input.ServiceId, scoreResult.Score, finalGate, budget.Status);
+
+        return new GateSelectionResult
+        {
+            Gate = finalGate,
+            RiskScore = scoreResult.Score,
+            ScoreBreakdown = scoreResult.Breakdown,
+            Budget = budget,
+            IsBlocked = isBlocked.IsBlocked,
+            BlockReason = isBlocked.Reason,
+            Requirements = GateLevelRequirements.GetRequirements(finalGate).ToList(),
+            Recommendations = GenerateRecommendations(scoreResult, budget)
+        };
+    }
+
+    private GateLevel ApplyBudgetModifiers(GateLevel gate, RiskBudget budget)
+    {
+        return budget.Status switch
+        {
+            // Yellow: Escalate G2+ by one level
+            BudgetStatus.Yellow when gate >= GateLevel.G2 =>
+                gate < GateLevel.G4 ? gate + 1 : GateLevel.G4,
+
+            // Red: Escalate G1+ by one level
+            BudgetStatus.Red when gate >= GateLevel.G1 =>
+                gate < GateLevel.G4 ? gate + 1 : GateLevel.G4,
+
+            // Exhausted: Everything is G4
+            BudgetStatus.Exhausted => GateLevel.G4,
+
+            _ => gate
+        };
+    }
+
+    private (bool IsBlocked, string? Reason) CheckForBlocks(
+        GateLevel gate, RiskBudget budget, GateSelectionInput input)
+    {
+        // Red budget blocks high-risk categories
+        if (budget.Status == BudgetStatus.Red &&
+            input.DiffCategory is DiffCategory.DatabaseMigration or DiffCategory.AuthPermission or DiffCategory.InfraNetworking)
+        {
+            return (true, "High-risk changes blocked during Red budget status");
+        }
+
+        // Exhausted budget blocks non-emergency changes
+        if (budget.Status == BudgetStatus.Exhausted && !input.IsEmergencyFix)
+        {
+            return (true, "Budget exhausted. Only incident/security fixes allowed.");
+        }
+
+        return (false, null);
+    }
+
+    private IReadOnlyList<string> GenerateRecommendations(
+        RiskScoreResult score, RiskBudget budget)
+    {
+        var recommendations = new List<string>();
+
+        // Score reduction recommendations
+        if (score.Breakdown.DiffRisk > 5)
+        {
+            recommendations.Add("Consider breaking this change into smaller, lower-risk diffs");
+        }
+
+        if (score.Breakdown.Mitigations == 0)
+        {
+            recommendations.Add("Add mitigations: feature flag, canary deployment, or increased test coverage");
+        }
+
+        // Budget recommendations
+        if (budget.Status == BudgetStatus.Yellow)
+        {
+            recommendations.Add("Budget at Yellow status. Prioritize reliability work to restore capacity.");
+        }
+
+        if (budget.Status == BudgetStatus.Red)
+        {
+            recommendations.Add("Budget at Red status. Defer high-risk changes or decompose into smaller diffs.");
+        }
+
+        return recommendations;
+    }
+}
+
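+// Illustrative only: T5's acceptance criteria pin the score-to-gate mapping at
+// 1-5 → G1, 6-12 → G2, 13-20 → G3, 21+ → G4. That mapping lives in the T2
+// scoring component (which produces RecommendedGate above); the helper below is
+// a hypothetical sketch of those thresholds, not part of this task's code.
+internal static class ScoreToGateMapping
+{
+    public static GateLevel Map(int score) => score switch
+    {
+        <= 0 => GateLevel.G0,
+        <= 5 => GateLevel.G1,
+        <= 12 => GateLevel.G2,
+        <= 20 => GateLevel.G3,
+        _ => GateLevel.G4
+    };
+}
+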
+/// <summary>
+/// Input for gate selection.
+/// </summary>
+public sealed record GateSelectionInput
+{
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+    public bool IsEmergencyFix { get; init; }
+}
+
+/// <summary>
+/// Result of gate selection.
+/// </summary>
+public sealed record GateSelectionResult
+{
+    public required GateLevel Gate { get; init; }
+    public required int RiskScore { get; init; }
+    public required RiskScoreBreakdown ScoreBreakdown { get; init; }
+    public required RiskBudget Budget { get; init; }
+    public required bool IsBlocked { get; init; }
+    public string? BlockReason { get; init; }
+    public IReadOnlyList<string> Requirements { get; init; } = [];
+    public IReadOnlyList<string> Recommendations { get; init; } = [];
+}
+
+public interface IGateSelector
+{
+    Task<GateSelectionResult> SelectGateAsync(GateSelectionInput input, CancellationToken ct = default);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `GateSelector.cs` created
+- [ ] Score-to-gate mapping (1-5→G1, 6-12→G2, 13-20→G3, 21+→G4)
+- [ ] Budget status escalation
+- [ ] Block detection for Red/Exhausted
+- [ ] Recommendations generated
+- [ ] Requirements included in result
+
+---
+
+### T6: Implement Budget Constraints
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T3, T5
+
+**Description**:
+Implement the operating rules that budget status imposes on release operations.
+
+**Implementation Path**: `Gates/BudgetConstraintEnforcer.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.Gates;
+
+/// <summary>
+/// Enforces budget constraints on release operations.
+/// </summary>
+public sealed class BudgetConstraintEnforcer : IBudgetConstraintEnforcer
+{
+    private readonly IBudgetLedger _ledger;
+    private readonly IGateSelector _gateSelector;
+    private readonly ILogger<BudgetConstraintEnforcer> _logger;
+
+    public BudgetConstraintEnforcer(
+        IBudgetLedger ledger,
+        IGateSelector gateSelector,
+        ILogger<BudgetConstraintEnforcer> logger)
+    {
+        _ledger = ledger;
+        _gateSelector = gateSelector;
+        _logger = logger;
+    }
+
+    /// <summary>
+    /// Checks if a release can proceed given the current budget.
+    /// </summary>
+    public async Task<BudgetCheckResult> CheckReleaseAsync(
+        ReleaseCheckInput input,
+        CancellationToken ct = default)
+    {
+        var budget = await _ledger.GetBudgetAsync(input.ServiceId, ct: ct);
+        var gateResult = await _gateSelector.SelectGateAsync(input.ToGateInput(), ct);
+
+        var result = new BudgetCheckResult
+        {
+            CanProceed = !gateResult.IsBlocked,
+            RequiredGate = gateResult.Gate,
+            RiskPoints = gateResult.RiskScore,
+            BudgetBefore = budget,
+            BudgetAfter = budget with { Consumed = budget.Consumed + gateResult.RiskScore },
+            BlockReason = gateResult.BlockReason,
+            Requirements = gateResult.Requirements,
+            Recommendations = gateResult.Recommendations
+        };
+
+        // Log the check
+        _logger.LogInformation(
+            "Release check for {ServiceId}: CanProceed={CanProceed}, Gate={Gate}, RP={RP}",
+            input.ServiceId, result.CanProceed, result.RequiredGate, result.RiskPoints);
+
+        return result;
+    }
+
+    /// <summary>
+    /// Records a release and consumes budget.
+    /// </summary>
+    public async Task<ReleaseRecordResult> RecordReleaseAsync(
+        ReleaseRecordInput input,
+        CancellationToken ct = default)
+    {
+        // First check if release can proceed
+        var checkResult = await CheckReleaseAsync(input.ToCheckInput(), ct);
+
+        if (!checkResult.CanProceed)
+        {
+            return new ReleaseRecordResult
+            {
+                IsSuccess = false,
+                Error = checkResult.BlockReason ?? "Release blocked by budget constraints"
"Release blocked by budget constraints" + }; + } + + // Consume budget + var consumeResult = await _ledger.ConsumeAsync( + input.ServiceId, + checkResult.RiskPoints, + input.ReleaseId, + ct); + + if (!consumeResult.IsSuccess) + { + return new ReleaseRecordResult + { + IsSuccess = false, + Error = consumeResult.Error + }; + } + + _logger.LogInformation( + "Recorded release {ReleaseId} for {ServiceId}. Budget: {Remaining}/{Allocated} RP remaining", + input.ReleaseId, input.ServiceId, + consumeResult.Budget.Remaining, consumeResult.Budget.Allocated); + + return new ReleaseRecordResult + { + IsSuccess = true, + ReleaseId = input.ReleaseId, + ConsumedRiskPoints = checkResult.RiskPoints, + Budget = consumeResult.Budget, + Gate = checkResult.RequiredGate + }; + } + + /// + /// Handles break-glass exception for urgent releases. + /// + public async Task RecordExceptionAsync( + ExceptionInput input, + CancellationToken ct = default) + { + // Record the exception + var baseRiskPoints = await CalculateBaseRiskPointsAsync(input, ct); + + // Apply 50% penalty for exception + var penaltyRiskPoints = (int)(baseRiskPoints * 1.5); + + var consumeResult = await _ledger.ConsumeAsync( + input.ServiceId, + penaltyRiskPoints, + input.ReleaseId, + ct); + + _logger.LogWarning( + "Break-glass exception for {ServiceId}: {ReleaseId}. Penalty: {Penalty} RP. Reason: {Reason}", + input.ServiceId, input.ReleaseId, penaltyRiskPoints - baseRiskPoints, input.Reason); + + return new ExceptionResult + { + IsSuccess = consumeResult.IsSuccess, + ReleaseId = input.ReleaseId, + BaseRiskPoints = baseRiskPoints, + PenaltyRiskPoints = penaltyRiskPoints - baseRiskPoints, + TotalRiskPoints = penaltyRiskPoints, + Budget = consumeResult.Budget, + FollowUpRequired = true, + FollowUpDeadline = DateTimeOffset.UtcNow.AddBusinessDays(5) + }; + } + + private async Task CalculateBaseRiskPointsAsync(ExceptionInput input, CancellationToken ct) + { + var gateResult = await _gateSelector.SelectGateAsync(new GateSelectionInput + { + ServiceId = input.ServiceId, + Tier = input.Tier, + DiffCategory = input.DiffCategory, + Context = input.Context, + Mitigations = input.Mitigations, + IsEmergencyFix = true + }, ct); + + return gateResult.RiskScore; + } +} + +public sealed record ReleaseCheckInput +{ + public required string ServiceId { get; init; } + public required ServiceTier Tier { get; init; } + public required DiffCategory DiffCategory { get; init; } + public required OperationalContext Context { get; init; } + public required MitigationFactors Mitigations { get; init; } + + public GateSelectionInput ToGateInput() => new() + { + ServiceId = ServiceId, + Tier = Tier, + DiffCategory = DiffCategory, + Context = Context, + Mitigations = Mitigations + }; +} + +public sealed record BudgetCheckResult +{ + public required bool CanProceed { get; init; } + public required GateLevel RequiredGate { get; init; } + public required int RiskPoints { get; init; } + public required RiskBudget BudgetBefore { get; init; } + public required RiskBudget BudgetAfter { get; init; } + public string? 
+public sealed record ReleaseCheckInput
+{
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+
+    public GateSelectionInput ToGateInput() => new()
+    {
+        ServiceId = ServiceId,
+        Tier = Tier,
+        DiffCategory = DiffCategory,
+        Context = Context,
+        Mitigations = Mitigations
+    };
+}
+
+public sealed record BudgetCheckResult
+{
+    public required bool CanProceed { get; init; }
+    public required GateLevel RequiredGate { get; init; }
+    public required int RiskPoints { get; init; }
+    public required RiskBudget BudgetBefore { get; init; }
+    public required RiskBudget BudgetAfter { get; init; }
+    public string? BlockReason { get; init; }
+    public IReadOnlyList<string> Requirements { get; init; } = [];
+    public IReadOnlyList<string> Recommendations { get; init; } = [];
+}
+
+public sealed record ReleaseRecordInput
+{
+    public required string ReleaseId { get; init; }
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+
+    // Restored: RecordReleaseAsync calls this mapping.
+    public ReleaseCheckInput ToCheckInput() => new()
+    {
+        ServiceId = ServiceId,
+        Tier = Tier,
+        DiffCategory = DiffCategory,
+        Context = Context,
+        Mitigations = Mitigations
+    };
+}
+
+public sealed record ReleaseRecordResult
+{
+    public required bool IsSuccess { get; init; }
+    public string? ReleaseId { get; init; }
+    public int ConsumedRiskPoints { get; init; }
+    public RiskBudget? Budget { get; init; }
+    public GateLevel? Gate { get; init; }
+    public string? Error { get; init; }
+}
+
+public sealed record ExceptionInput
+{
+    public required string ReleaseId { get; init; }
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+    public required string Reason { get; init; }
+    public required string ApprovedBy { get; init; }
+}
+
+public sealed record ExceptionResult
+{
+    public required bool IsSuccess { get; init; }
+    public required string ReleaseId { get; init; }
+    public required int BaseRiskPoints { get; init; }
+    public required int PenaltyRiskPoints { get; init; }
+    public required int TotalRiskPoints { get; init; }
+    public required RiskBudget Budget { get; init; }
+    public required bool FollowUpRequired { get; init; }
+    public DateTimeOffset? FollowUpDeadline { get; init; }
+}
+
+public interface IBudgetConstraintEnforcer
+{
+    Task<BudgetCheckResult> CheckReleaseAsync(ReleaseCheckInput input, CancellationToken ct = default);
+    Task<ReleaseRecordResult> RecordReleaseAsync(ReleaseRecordInput input, CancellationToken ct = default);
+    Task<ExceptionResult> RecordExceptionAsync(ExceptionInput input, CancellationToken ct = default);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `BudgetConstraintEnforcer.cs` created
+- [ ] Release checking with block detection
+- [ ] Release recording with consumption
+- [ ] Break-glass exception with 50% penalty
+- [ ] Follow-up deadline tracking
+- [ ] Logging for audit trail
+
+---
+
+### T7: Add API Endpoints
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T5, T6
+
+**Description**:
+Add REST API endpoints for budget and gate operations.
+
+**Implementation Path**: `src/Policy/StellaOps.Policy.WebService/Controllers/GatesController.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.WebService.Controllers;
+
+[ApiController]
+[Route("api/v1/policy/gates")]
+public class GatesController : ControllerBase
+{
+    private readonly IGateSelector _gateSelector;
+    private readonly IBudgetLedger _budgetLedger;
+    private readonly IBudgetConstraintEnforcer _constraintEnforcer;
+
+    public GatesController(
+        IGateSelector gateSelector,
+        IBudgetLedger budgetLedger,
+        IBudgetConstraintEnforcer constraintEnforcer)
+    {
+        _gateSelector = gateSelector;
+        _budgetLedger = budgetLedger;
+        _constraintEnforcer = constraintEnforcer;
+    }
+
+    /// <summary>
+    /// Gets gate requirements for a change.
+    /// </summary>
+    [HttpPost("select")]
+    public async Task<ActionResult<GateSelectionResult>> SelectGate(
+        [FromBody] GateSelectionRequest request,
+        CancellationToken ct)
+    {
+        var input = MapToInput(request);
+        var result = await _gateSelector.SelectGateAsync(input, ct);
+        return Ok(result);
+    }
+
+    /// <summary>
+    /// Gets current budget status for a service.
+    /// </summary>
+    [HttpGet("budget/{serviceId}")]
+    public async Task<ActionResult<RiskBudget>> GetBudget(
+        string serviceId,
+        [FromQuery] string? window,
+        CancellationToken ct)
+    {
+        var budget = await _budgetLedger.GetBudgetAsync(serviceId, window, ct);
+        return Ok(budget);
+    }
+
+    /// <summary>
+    /// Gets budget consumption history.
+    /// </summary>
+    [HttpGet("budget/{serviceId}/history")]
+    public async Task<ActionResult<IReadOnlyList<BudgetEntry>>> GetBudgetHistory(
+        string serviceId,
+        [FromQuery] string? window,
+        CancellationToken ct)
+    {
+        var history = await _budgetLedger.GetHistoryAsync(serviceId, window, ct);
+        return Ok(history);
+    }
+
+    /// <summary>
+    /// Checks if a release can proceed.
+    /// </summary>
+    [HttpPost("check")]
+    public async Task<ActionResult<BudgetCheckResult>> CheckRelease(
+        [FromBody] ReleaseCheckRequest request,
+        CancellationToken ct)
+    {
+        var input = MapToCheckInput(request);
+        var result = await _constraintEnforcer.CheckReleaseAsync(input, ct);
+        return Ok(result);
+    }
+
+    /// <summary>
+    /// Records a release and consumes budget.
+    /// </summary>
+    [HttpPost("record")]
+    public async Task<ActionResult<ReleaseRecordResult>> RecordRelease(
+        [FromBody] ReleaseRecordRequest request,
+        CancellationToken ct)
+    {
+        var input = MapToRecordInput(request);
+        var result = await _constraintEnforcer.RecordReleaseAsync(input, ct);
+
+        if (!result.IsSuccess)
+            return BadRequest(result);
+
+        return Ok(result);
+    }
+
+    /// <summary>
+    /// Records a break-glass exception.
+    /// </summary>
+    [HttpPost("exception")]
+    public async Task<ActionResult<ExceptionResult>> RecordException(
+        [FromBody] ExceptionRequest request,
+        CancellationToken ct)
+    {
+        var input = MapToExceptionInput(request);
+        var result = await _constraintEnforcer.RecordExceptionAsync(input, ct);
+        return Ok(result);
+    }
+
+    private GateSelectionInput MapToInput(GateSelectionRequest request) => new()
+    {
+        ServiceId = request.ServiceId,
+        Tier = request.Tier,
+        DiffCategory = request.DiffCategory,
+        Context = request.Context,
+        Mitigations = request.Mitigations
+    };
+
+    // ... other mapping methods
+}
+
+public sealed record GateSelectionRequest
+{
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+}
+
+public sealed record ReleaseCheckRequest
+{
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+}
+
+public sealed record ReleaseRecordRequest
+{
+    public required string ReleaseId { get; init; }
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+}
+
+public sealed record ExceptionRequest
+{
+    public required string ReleaseId { get; init; }
+    public required string ServiceId { get; init; }
+    public required ServiceTier Tier { get; init; }
+    public required DiffCategory DiffCategory { get; init; }
+    public required OperationalContext Context { get; init; }
+    public required MitigationFactors Mitigations { get; init; }
+    public required string Reason { get; init; }
+    public required string ApprovedBy { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `GatesController.cs` created
+- [ ] `POST /api/v1/policy/gates/select` endpoint
+- [ ] `GET /api/v1/policy/gates/budget/{serviceId}` endpoint
+- [ ] `GET /api/v1/policy/gates/budget/{serviceId}/history` endpoint
+- [ ] `POST /api/v1/policy/gates/check` endpoint
+- [ ] `POST /api/v1/policy/gates/record` endpoint
+- [ ] `POST /api/v1/policy/gates/exception` endpoint
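+
+For orientation, the sketch below shows a hypothetical client calling the `select` endpoint. It assumes JSON enum serialization is configured and that `ServiceTier.Tier1` plus parameterless `OperationalContext`/`MitigationFactors` construction exist; adjust to the real contracts from T1-T2.
+
+```csharp
+// Hypothetical usage sketch for POST /api/v1/policy/gates/select.
+using var client = new HttpClient { BaseAddress = new Uri("https://policy.internal.example") };
+
+var request = new GateSelectionRequest
+{
+    ServiceId = "payments-api",
+    Tier = ServiceTier.Tier1,                 // tier member naming assumed (Tier 0-3 per Decisions table)
+    DiffCategory = DiffCategory.DatabaseMigration,
+    Context = new OperationalContext(),       // assumed constructible with defaults
+    Mitigations = new MitigationFactors()     // assumed constructible with defaults
+};
+
+var response = await client.PostAsJsonAsync("/api/v1/policy/gates/select", request);
+response.EnsureSuccessStatusCode();
+
+var result = await response.Content.ReadFromJsonAsync<GateSelectionResult>();
+Console.WriteLine($"Gate={result!.Gate} Score={result.RiskScore} Blocked={result.IsBlocked}");
+```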
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Define RiskBudget model |
+| 2 | T2 | TODO | T1 | Policy Team | Define RiskPointScoring |
+| 3 | T3 | TODO | T1 | Policy Team | Create BudgetLedger |
+| 4 | T4 | TODO | — | Policy Team | Define GateLevel enum |
+| 5 | T5 | TODO | T2, T4 | Policy Team | Create GateSelector |
+| 6 | T6 | TODO | T3, T5 | Policy Team | Implement budget constraints |
+| 7 | T7 | TODO | T5, T6 | Policy Team | Add API endpoints |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from MOAT Phase 2 gap analysis. Risk budgets and gate levels identified as requirement from Risk Budgets advisory. | Claude |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Four service tiers | Decision | Policy Team | Tier 0-3 matches advisory recommendation |
+| Budget thresholds | Decision | Policy Team | Green (>=60%), Yellow (30-59%), Red (<30%), Exhausted (<=0%) |
+| Gate levels G0-G4 | Decision | Policy Team | Matches advisory requirements by level |
+| Exception penalty | Decision | Policy Team | 50% additional RP for break-glass releases |
+
+---
+
+## Success Criteria
+
+- [ ] All 7 tasks marked DONE
+- [ ] Risk scoring calculates correctly
+- [ ] Budget tracking works
+- [ ] Gate selection uses budget status
+- [ ] Exceptions apply penalty
+- [ ] API endpoints functional
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_4200_0001_0001_proof_chain_verification_ui.md b/docs/implplan/SPRINT_4200_0001_0001_proof_chain_verification_ui.md
new file mode 100644
index 000000000..66ea1d205
--- /dev/null
+++ b/docs/implplan/SPRINT_4200_0001_0001_proof_chain_verification_ui.md
@@ -0,0 +1,377 @@
+# Sprint 4200.0001.0001 · Proof Chain Verification UI — Evidence Transparency Dashboard
+
+## Topic & Scope
+- Implement a "Show Me The Proof" UI component that visualizes the evidence chain from finding to verdict.
+- Enable auditors to point at an image digest and see all linked SBOMs, VEX claims, attestations, and verdicts.
+- Connect existing Attestor verification APIs to Angular UI components.
+- **Working directory:** `src/Web/StellaOps.Web/`, `src/Attestor/`
+
+## Dependencies & Concurrency
+- **Upstream**: Attestor ProofChain APIs (implemented), TimelineIndexer (implemented)
+- **Downstream**: Audit workflows, compliance reporting
+- **Safe to parallelize with**: Sprints 5200.*, 3600.*
+
+## Documentation Prerequisites
+- `docs/modules/attestor/architecture.md`
+- `docs/modules/ui/architecture.md`
+- `docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md`
+
+---
+
+## Tasks
+
+### T1: Proof Chain API Endpoints
+
+**Assignee**: Attestor Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Expose REST endpoints for proof chain visualization data.
+
+**Implementation Path**: `src/Attestor/StellaOps.Attestor.WebService/`
+
+**Acceptance Criteria**:
+- [ ] `GET /api/v1/proofs/{subjectDigest}` - Get all proofs for an artifact
+- [ ] `GET /api/v1/proofs/{subjectDigest}/chain` - Get linked evidence chain
+- [ ] `GET /api/v1/proofs/{proofId}` - Get specific proof details
+- [ ] `GET /api/v1/proofs/{proofId}/verify` - Verify proof integrity
+- [ ] Response includes: SBOM refs, VEX refs, verdict refs, attestation refs
+- [ ] Pagination for large proof sets
+- [ ] Tenant isolation enforced
+
+**API Response Model**:
+```csharp
+public sealed record ProofChainResponse
+{
+    public required string SubjectDigest { get; init; }
+    public required string SubjectType { get; init; } // "oci-image", "file", etc.
+    public required DateTimeOffset QueryTime { get; init; }
+
+    public ImmutableArray<ProofNode> Nodes { get; init; }
+    public ImmutableArray<ProofEdge> Edges { get; init; }
+
+    public ProofSummary Summary { get; init; }
+}
+
+public sealed record ProofNode
+{
+    public required string NodeId { get; init; }
+    public required ProofNodeType Type { get; init; } // Sbom, Vex, Verdict, Attestation
+    public required string Digest { get; init; }
+    public required DateTimeOffset CreatedAt { get; init; }
+    public string? RekorLogIndex { get; init; }
+    public ImmutableDictionary<string, string> Metadata { get; init; }
+}
+
+public sealed record ProofEdge
+{
+    public required string FromNode { get; init; }
+    public required string ToNode { get; init; }
+    public required string Relationship { get; init; } // "attests", "references", "supersedes"
+}
+
+public sealed record ProofSummary
+{
+    public int TotalProofs { get; init; }
+    public int VerifiedCount { get; init; }
+    public int UnverifiedCount { get; init; }
+    public DateTimeOffset? OldestProof { get; init; }
+    public DateTimeOffset? NewestProof { get; init; }
+    public bool HasRekorAnchoring { get; init; }
+}
+```
+
+---
+
+### T2: Proof Verification Service
+
+**Assignee**: Attestor Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Implement on-demand proof verification with detailed results.
+
+**Acceptance Criteria**:
+- [ ] DSSE signature verification
+- [ ] Payload hash verification
+- [ ] Rekor inclusion proof verification
+- [ ] Key ID validation against Authority
+- [ ] Expiration checking
+- [ ] Returns detailed verification result with failure reasons
+
+**Verification Result**:
+```csharp
+public sealed record ProofVerificationResult
+{
+    public required string ProofId { get; init; }
+    public required bool IsValid { get; init; }
+    public required ProofVerificationStatus Status { get; init; }
+
+    public SignatureVerification? Signature { get; init; }
+    public RekorVerification? Rekor { get; init; }
+    public PayloadVerification? Payload { get; init; }
+
+    public ImmutableArray<string> Warnings { get; init; }
+    public ImmutableArray<string> Errors { get; init; }
+}
+
+public enum ProofVerificationStatus
+{
+    Valid,
+    SignatureInvalid,
+    PayloadTampered,
+    KeyNotTrusted,
+    Expired,
+    RekorNotAnchored,
+    RekorInclusionFailed
+}
+```
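+
+To make the check order concrete, a skeleton of the pipeline implied by the acceptance criteria is sketched below. `_dsseVerifier`, `_rekorClient`, `LoadProofAsync`, and `VerifyPayloadHash` are assumed names for illustration, not existing APIs:
+
+```csharp
+// Sketch only: verification order implied by T2's acceptance criteria.
+public async Task<ProofVerificationResult> VerifyAsync(string proofId, CancellationToken ct)
+{
+    var proof = await LoadProofAsync(proofId, ct);                  // hypothetical loader
+
+    var signature = await _dsseVerifier.VerifyAsync(proof.Envelope, ct);     // DSSE signature + key ID
+    var payload = VerifyPayloadHash(proof);                         // payload digest matches envelope
+    var rekor = proof.RekorLogIndex is null
+        ? null
+        : await _rekorClient.VerifyInclusionAsync(proof.RekorLogIndex, ct);  // inclusion proof
+
+    var status =
+        !signature.IsValid ? ProofVerificationStatus.SignatureInvalid
+        : !payload.IsValid ? ProofVerificationStatus.PayloadTampered
+        : rekor is { IsValid: false } ? ProofVerificationStatus.RekorInclusionFailed
+        : ProofVerificationStatus.Valid;
+
+    return new ProofVerificationResult
+    {
+        ProofId = proofId,
+        IsValid = status == ProofVerificationStatus.Valid,
+        Status = status,
+        Signature = signature,
+        Rekor = rekor,
+        Payload = payload,
+        Warnings = [],
+        Errors = []
+    };
+}
+```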
+
+---
+
+### T3: Angular Proof Chain Component
+
+**Assignee**: UI Team
+**Story Points**: 5
+**Status**: TODO
+
+**Description**:
+Create the main proof chain visualization component.
+
+**Implementation Path**: `src/Web/StellaOps.Web/src/app/components/proof-chain/`
+
+**Acceptance Criteria**:
+- [ ] `<stella-proof-chain>` component
+- [ ] Input: subject digest or artifact reference
+- [ ] Fetches proof chain from API
+- [ ] Renders interactive graph visualization
+- [ ] Node click shows detail panel
+- [ ] Color coding by proof type
+- [ ] Verification status indicators
+- [ ] Loading and error states
+
+**Component Structure**:
+```typescript
+@Component({
+  selector: 'stella-proof-chain',
+  templateUrl: './proof-chain.component.html',
+  styleUrls: ['./proof-chain.component.scss'],
+  changeDetection: ChangeDetectionStrategy.OnPush
+})
+export class ProofChainComponent implements OnInit {
+  @Input() subjectDigest: string;
+  @Input() showVerification = true;
+  @Input() expandedView = false;
+
+  @Output() nodeSelected = new EventEmitter<ProofNode>();
+  @Output() verificationRequested = new EventEmitter<string>(); // emits a proof id (type assumed)
+
+  proofChain$: Observable<ProofChainResponse>;
+  selectedNode$: BehaviorSubject<ProofNode | null>;
+}
+```
+
+---
+
+### T4: Graph Visualization Library Integration
+
+**Assignee**: UI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Integrate a graph visualization library for proof chain rendering.
+
+**Acceptance Criteria**:
+- [ ] Choose library: D3.js, Cytoscape.js, or vis.js
+- [ ] Directed graph rendering
+- [ ] Node icons by type (SBOM, VEX, Verdict, Attestation)
+- [ ] Edge labels for relationships
+- [ ] Zoom and pan controls
+- [ ] Responsive layout
+- [ ] Accessibility support (keyboard navigation, screen reader)
+
+**Layout Options**:
+```typescript
+interface ProofChainLayout {
+  type: 'hierarchical' | 'force-directed' | 'dagre';
+  direction: 'TB' | 'LR' | 'BT' | 'RL';
+  nodeSpacing: number;
+  rankSpacing: number;
+}
+```
+
+---
+
+### T5: Proof Detail Panel
+
+**Assignee**: UI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Create detail panel showing full proof information.
+
+**Acceptance Criteria**:
+- [ ] Slide-out panel on node selection
+- [ ] Shows proof metadata
+- [ ] Shows DSSE envelope summary
+- [ ] Shows Rekor log entry if available
+- [ ] "Verify Now" button triggers verification
+- [ ] Download raw proof option
+- [ ] Copy digest to clipboard
+
+---
+
+### T6: Verification Status Badge
+
+**Assignee**: UI Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Create reusable verification status indicator.
+
+**Acceptance Criteria**:
+- [ ] `<stella-verification-badge>` component
+- [ ] States: verified, unverified, failed, pending
+- [ ] Tooltip with verification details
+- [ ] Consistent styling with design system
+
+---
+
+### T7: Timeline Integration
+
+**Assignee**: UI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Integrate proof chain with timeline/audit log view.
+
+**Acceptance Criteria**:
+- [ ] "View Proofs" action from timeline events
+- [ ] Deep link to specific proof from timeline
+- [ ] Timeline entry shows proof count badge
+- [ ] Filter timeline by proof-related events
+
+---
+
+### T8: Image/Artifact Page Integration
+
+**Assignee**: UI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Add proof chain tab to image/artifact detail pages.
+
+**Acceptance Criteria**:
+- [ ] New "Evidence Chain" tab on artifact details
+- [ ] Summary card showing proof count and status
+- [ ] "Audit This Artifact" button opens full chain
+- [ ] Export proof bundle (for offline verification)
+
+---
+
+### T9: Unit Tests
+
+**Assignee**: UI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Comprehensive unit tests for proof chain components.
+ +**Acceptance Criteria**: +- [ ] Component rendering tests +- [ ] API service tests with mocks +- [ ] Graph layout tests +- [ ] Verification flow tests +- [ ] Accessibility tests + +--- + +### T10: E2E Tests + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO + +**Description**: +End-to-end tests for proof chain workflow. + +**Acceptance Criteria**: +- [ ] Navigate to artifact → view proof chain +- [ ] Click node → view details +- [ ] Verify proof → see result +- [ ] Export proof bundle +- [ ] Timeline → proof chain navigation + +--- + +### T11: Documentation + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO + +**Description**: +User and developer documentation for proof chain UI. + +**Acceptance Criteria**: +- [ ] User guide: "How to Audit an Artifact" +- [ ] Developer guide: component API +- [ ] Accessibility documentation +- [ ] Screenshots for documentation + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Attestor Team | Proof Chain API Endpoints | +| 2 | T2 | TODO | T1 | Attestor Team | Proof Verification Service | +| 3 | T3 | TODO | T1 | UI Team | Angular Proof Chain Component | +| 4 | T4 | TODO | — | UI Team | Graph Visualization Integration | +| 5 | T5 | TODO | T3, T4 | UI Team | Proof Detail Panel | +| 6 | T6 | TODO | — | UI Team | Verification Status Badge | +| 7 | T7 | TODO | T3 | UI Team | Timeline Integration | +| 8 | T8 | TODO | T3 | UI Team | Artifact Page Integration | +| 9 | T9 | TODO | T3-T8 | UI Team | Unit Tests | +| 10 | T10 | TODO | T9 | UI Team | E2E Tests | +| 11 | T11 | TODO | T3-T8 | UI Team | Documentation | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Reference Architecture advisory - proof chain UI gap. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Graph library | Decision | UI Team | Evaluate D3.js vs Cytoscape.js for complexity vs features | +| Verification on-demand | Decision | Attestor Team | Verify on user request, not pre-computed | +| Proof export format | Decision | Attestor Team | JSON bundle with all DSSE envelopes | +| Large graph handling | Risk | UI Team | May need virtualization for 1000+ nodes | + +--- + +## Success Criteria + +- [ ] Auditors can view complete evidence chain for any artifact +- [ ] One-click verification of any proof in the chain +- [ ] Rekor anchoring visible when available +- [ ] Export proof bundle for offline verification +- [ ] Performance: <2s load time for typical proof chains (<100 nodes) + +**Sprint Status**: TODO (0/11 tasks complete) diff --git a/docs/implplan/SPRINT_4200_0001_0001_triage_rest_api.md b/docs/implplan/SPRINT_4200_0001_0001_triage_rest_api.md new file mode 100644 index 000000000..1e452cb91 --- /dev/null +++ b/docs/implplan/SPRINT_4200_0001_0001_triage_rest_api.md @@ -0,0 +1,1032 @@ +# Sprint 4200.0001.0001 · Triage REST API + +## Topic & Scope + +- Expose complete HTTP API layer for existing Triage database schema +- Enable UI to fetch findings, decisions, evidence, and lanes +- Support signed decision creation and revocation + +**Working directory:** `src/Scanner/StellaOps.Scanner.WebService/` + +## Dependencies & Concurrency + +- **Upstream**: None (foundation sprint - unlocks all Triage UX) +- **Downstream**: Sprint 4200.0002.0001 ("Can I Ship?" 
Header), Sprint 4200.0002.0002 (Verdict Ladder)
+- **Safe to parallelize with**: Sprint 4200.0001.0002 (Excititor-Policy Lattice), Sprint 4100.0002.0001 (Knowledge Snapshots)
+
+## Documentation Prerequisites
+
+- `src/Scanner/__Libraries/StellaOps.Scanner.Triage/AGENTS.md`
+- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md`
+- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md`
+- Existing entities: `TriageFinding`, `TriageDecision`, `TriageEvidenceArtifact`, `TriageSnapshot`
+
+---
+
+## Tasks
+
+### T1: Create TriageEndpoints.cs
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create minimal API endpoints for triage finding queries.
+
+**Implementation Path**: `Endpoints/TriageEndpoints.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.WebService.Endpoints;
+
+/// <summary>
+/// Triage finding query endpoints.
+/// </summary>
+public static class TriageEndpoints
+{
+    public static void MapTriageEndpoints(this WebApplication app)
+    {
+        var group = app.MapGroup("/api/v1/triage")
+            .WithTags("Triage")
+            .RequireAuthorization();
+
+        // GET /triage/findings - List all findings with filtering
+        group.MapGet("/findings", async (
+            [AsParameters] TriageFindingQueryParams query,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetFindingsAsync(query, ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetTriageFindings")
+        .WithDescription("List triage findings with optional filtering");
+
+        // GET /triage/findings/{id} - Get finding with full case details
+        group.MapGet("/findings/{id:guid}", async (
+            Guid id,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetFindingCaseAsync(id, ct);
+            return result is null ? Results.NotFound() : Results.Ok(result);
+        })
+        .WithName("GetTriageFindingCase")
+        .WithDescription("Get full triage case with verdict, evidence, and decisions");
+
+        // GET /triage/lanes - Get findings grouped by lane
+        group.MapGet("/lanes", async (
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetFindingsByLaneAsync(ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetTriageLanes")
+        .WithDescription("Get findings grouped by triage lane");
+
+        // GET /triage/lanes/{lane} - Get findings for specific lane
+        group.MapGet("/lanes/{lane}", async (
+            TriageLane lane,
+            [AsParameters] PaginationParams pagination,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetFindingsForLaneAsync(lane, pagination, ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetTriageFindingsForLane")
+        .WithDescription("Get findings in a specific triage lane");
+
+        // GET /triage/summary - Get summary statistics
+        group.MapGet("/summary", async (
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetSummaryAsync(ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetTriageSummary")
+        .WithDescription("Get triage summary with counts per lane and verdict");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `TriageEndpoints.cs` file created in `Endpoints/`
+- [ ] GET /triage/findings endpoint with filtering
+- [ ] GET /triage/findings/{id} endpoint with full case
+- [ ] GET /triage/lanes endpoint with grouping
+- [ ] GET /triage/lanes/{lane} endpoint with pagination
+- [ ] GET /triage/summary endpoint for dashboard
+- [ ] All endpoints require authorization
+
+---
+
+### T2: Create TriageDecisionEndpoints.cs
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Create endpoints for triage decision management (create, get, revoke).
+
+**Implementation Path**: `Endpoints/TriageDecisionEndpoints.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.WebService.Endpoints;
+
+/// <summary>
+/// Triage decision management endpoints.
+/// </summary>
+public static class TriageDecisionEndpoints
+{
+    public static void MapTriageDecisionEndpoints(this WebApplication app)
+    {
+        var group = app.MapGroup("/api/v1/triage/decisions")
+            .WithTags("Triage Decisions")
+            .RequireAuthorization();
+
+        // POST /decisions - Create a new decision
+        group.MapPost("", async (
+            CreateDecisionRequest request,
+            ITriageCommandService service,
+            ClaimsPrincipal user,
+            CancellationToken ct) =>
+        {
+            var actorSubject = user.FindFirst(ClaimTypes.NameIdentifier)?.Value
+                ?? throw new UnauthorizedAccessException("Actor subject required");
+            var actorDisplay = user.FindFirst(ClaimTypes.Name)?.Value;
+
+            var result = await service.CreateDecisionAsync(
+                request,
+                actorSubject,
+                actorDisplay,
+                ct);
+
+            return Results.Created($"/api/v1/triage/decisions/{result.Id}", result);
+        })
+        .WithName("CreateTriageDecision")
+        .WithDescription("Create a signed triage decision");
+
+        // GET /decisions/{id} - Get decision by ID
+        group.MapGet("/{id:guid}", async (
+            Guid id,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetDecisionAsync(id, ct);
+            return result is null ? Results.NotFound() : Results.Ok(result);
+        })
+        .WithName("GetTriageDecision")
+        .WithDescription("Get a triage decision by ID");
+
+        // DELETE /decisions/{id}/revoke - Revoke a decision
+        group.MapDelete("/{id:guid}/revoke", async (
+            Guid id,
+            RevokeDecisionRequest request,
+            ITriageCommandService service,
+            ClaimsPrincipal user,
+            CancellationToken ct) =>
+        {
+            var actorSubject = user.FindFirst(ClaimTypes.NameIdentifier)?.Value
+                ?? throw new UnauthorizedAccessException("Actor subject required");
+
+            await service.RevokeDecisionAsync(id, request.Reason, actorSubject, ct);
+            return Results.NoContent();
+        })
+        .WithName("RevokeTriageDecision")
+        .WithDescription("Revoke an active triage decision");
+
+        // GET /findings/{findingId}/decisions - Get decisions for a finding
+        group.MapGet("/findings/{findingId:guid}/decisions", async (
+            Guid findingId,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetDecisionsForFindingAsync(findingId, ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetDecisionsForFinding")
+        .WithDescription("Get all decisions for a finding");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `TriageDecisionEndpoints.cs` file created
+- [ ] POST /decisions creates DSSE-signed decision
+- [ ] GET /decisions/{id} retrieves with signature info
+- [ ] DELETE /decisions/{id}/revoke revokes with reason
+- [ ] Actor subject extracted from JWT claims
+- [ ] Authorization required for all endpoints
+
+---
+
+### T3: Create TriageEvidenceEndpoints.cs
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Create endpoints for evidence artifact retrieval.
+
+**Implementation Path**: `Endpoints/TriageEvidenceEndpoints.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.WebService.Endpoints;
+
+/// <summary>
+/// Triage evidence artifact endpoints.
+/// </summary>
+public static class TriageEvidenceEndpoints
+{
+    public static void MapTriageEvidenceEndpoints(this WebApplication app)
+    {
+        var group = app.MapGroup("/api/v1/triage/evidence")
+            .WithTags("Triage Evidence")
+            .RequireAuthorization();
+
+        // GET /findings/{findingId}/evidence - List evidence for finding
+        group.MapGet("/findings/{findingId:guid}", async (
+            Guid findingId,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetEvidenceForFindingAsync(findingId, ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetEvidenceForFinding")
+        .WithDescription("Get all evidence artifacts for a finding");
+
+        // GET /evidence/{id} - Get evidence metadata
+        group.MapGet("/{id:guid}", async (
+            Guid id,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var result = await service.GetEvidenceAsync(id, ct);
+            return result is null ? Results.NotFound() : Results.Ok(result);
+        })
+        .WithName("GetEvidence")
+        .WithDescription("Get evidence artifact metadata");
+
+        // GET /evidence/{id}/download - Download evidence content
+        group.MapGet("/{id:guid}/download", async (
+            Guid id,
+            IEvidenceStorageService storage,
+            ITriageQueryService service,
+            CancellationToken ct) =>
+        {
+            var evidence = await service.GetEvidenceAsync(id, ct);
+            if (evidence is null)
+                return Results.NotFound();
+
+            var content = await storage.GetContentAsync(evidence.Uri, ct);
+            if (content is null)
+                return Results.NotFound("Evidence content not found");
+
+            return Results.File(
+                content,
+                contentType: evidence.MediaType ?? "application/octet-stream",
"application/octet-stream", + fileDownloadName: $"evidence-{id}"); + }) + .WithName("DownloadEvidence") + .WithDescription("Download evidence artifact content"); + + // GET /evidence/{id}/verify - Verify evidence integrity + group.MapGet("/{id:guid}/verify", async ( + Guid id, + IEvidenceVerificationService verifier, + CancellationToken ct) => + { + var result = await verifier.VerifyAsync(id, ct); + return Results.Ok(result); + }) + .WithName("VerifyEvidence") + .WithDescription("Verify evidence artifact integrity and signature"); + } +} +``` + +**Acceptance Criteria**: +- [ ] `TriageEvidenceEndpoints.cs` file created +- [ ] GET /findings/{findingId}/evidence lists artifacts +- [ ] GET /evidence/{id} returns metadata +- [ ] GET /evidence/{id}/download streams content +- [ ] GET /evidence/{id}/verify checks hash and signature +- [ ] Content-addressable retrieval by hash + +--- + +### T4: Create ITriageQueryService + +**Assignee**: Scanner Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create service layer for triage queries with filtering and pagination. + +**Implementation Path**: `Services/ITriageQueryService.cs` and `Services/TriageQueryService.cs` (new files) + +**Implementation**: +```csharp +namespace StellaOps.Scanner.WebService.Services; + +/// +/// Service for triage query operations. +/// +public interface ITriageQueryService +{ + Task> GetFindingsAsync( + TriageFindingQueryParams query, CancellationToken ct = default); + + Task GetFindingCaseAsync(Guid id, CancellationToken ct = default); + + Task GetFindingsByLaneAsync(CancellationToken ct = default); + + Task> GetFindingsForLaneAsync( + TriageLane lane, PaginationParams pagination, CancellationToken ct = default); + + Task GetSummaryAsync(CancellationToken ct = default); + + Task GetDecisionAsync(Guid id, CancellationToken ct = default); + + Task> GetDecisionsForFindingAsync( + Guid findingId, CancellationToken ct = default); + + Task> GetEvidenceForFindingAsync( + Guid findingId, CancellationToken ct = default); + + Task GetEvidenceAsync(Guid id, CancellationToken ct = default); +} + +/// +/// Implementation of triage query service. 
+/// <summary>
+/// Implementation of triage query service.
+/// </summary>
+public sealed class TriageQueryService : ITriageQueryService
+{
+    private readonly TriageDbContext _db;
+    private readonly ILogger<TriageQueryService> _logger;
+
+    public TriageQueryService(TriageDbContext db, ILogger<TriageQueryService> logger)
+    {
+        _db = db;
+        _logger = logger;
+    }
+
+    public async Task<PagedResult<TriageFindingDto>> GetFindingsAsync(
+        TriageFindingQueryParams query, CancellationToken ct = default)
+    {
+        var q = _db.Findings.AsQueryable();
+
+        // Apply filters
+        if (query.Lane.HasValue)
+            q = q.Where(f => f.Lane == query.Lane.Value);
+        if (query.Verdict.HasValue)
+            q = q.Where(f => f.Verdict == query.Verdict.Value);
+        if (!string.IsNullOrEmpty(query.VulnId))
+            q = q.Where(f => f.VulnId == query.VulnId);
+        if (!string.IsNullOrEmpty(query.Purl))
+            q = q.Where(f => f.Purl.Contains(query.Purl));
+        if (query.SeverityMin.HasValue)
+            q = q.Where(f => f.CvssScore >= query.SeverityMin.Value);
+
+        // Order by severity descending, then created
+        q = q.OrderByDescending(f => f.CvssScore)
+             .ThenByDescending(f => f.CreatedAt);
+
+        // Execute with pagination
+        var total = await q.CountAsync(ct);
+        var items = await q
+            .Skip(query.Offset)
+            .Take(query.Limit)
+            .Select(f => MapToDto(f))
+            .ToListAsync(ct);
+
+        return new PagedResult<TriageFindingDto>(items, total, query.Offset, query.Limit);
+    }
+
+    public async Task<TriageCaseDto?> GetFindingCaseAsync(Guid id, CancellationToken ct = default)
+    {
+        var finding = await _db.Findings
+            .Include(f => f.Decisions.Where(d => d.RevokedAt == null))
+            .Include(f => f.EvidenceArtifacts)
+            .Include(f => f.Snapshots.OrderByDescending(s => s.CreatedAt).Take(10))
+            .FirstOrDefaultAsync(f => f.Id == id, ct);
+
+        if (finding is null)
+            return null;
+
+        return new TriageCaseDto
+        {
+            Finding = MapToDto(finding),
+            ActiveDecisions = finding.Decisions.Select(MapDecisionToDto).ToList(),
+            Evidence = finding.EvidenceArtifacts.Select(MapEvidenceToDto).ToList(),
+            RecentSnapshots = finding.Snapshots.Select(MapSnapshotToDto).ToList(),
+            VerdictExplanation = ComputeVerdictExplanation(finding)
+        };
+    }
+
+    public async Task<TriageLaneGroupsDto> GetFindingsByLaneAsync(CancellationToken ct = default)
+    {
+        var groups = await _db.Findings
+            .GroupBy(f => f.Lane)
+            .Select(g => new LaneGroupDto
+            {
+                Lane = g.Key,
+                Count = g.Count(),
+                CriticalCount = g.Count(f => f.CvssScore >= 9.0m),
+                HighCount = g.Count(f => f.CvssScore >= 7.0m && f.CvssScore < 9.0m)
+            })
+            .ToListAsync(ct);
+
+        return new TriageLaneGroupsDto(groups);
+    }
+
+    public async Task<TriageSummaryDto> GetSummaryAsync(CancellationToken ct = default)
+    {
+        var counts = await _db.Findings
+            .GroupBy(_ => 1)
+            .Select(g => new
+            {
+                Total = g.Count(),
+                Ship = g.Count(f => f.Verdict == TriageVerdict.Ship),
+                Block = g.Count(f => f.Verdict == TriageVerdict.Block),
+                Exception = g.Count(f => f.Verdict == TriageVerdict.Exception),
+                Active = g.Count(f => f.Lane == TriageLane.Active),
+                Blocked = g.Count(f => f.Lane == TriageLane.Blocked),
+                NeedsException = g.Count(f => f.Lane == TriageLane.NeedsException),
+                MutedReach = g.Count(f => f.Lane == TriageLane.MutedReach),
+                MutedVex = g.Count(f => f.Lane == TriageLane.MutedVex)
+            })
+            .FirstOrDefaultAsync(ct);
+
+        return new TriageSummaryDto(
+            counts?.Total ?? 0,
+            counts?.Ship ?? 0,
+            counts?.Block ?? 0,
+            counts?.Exception ?? 0,
+            new Dictionary<TriageLane, int>
+            {
+                [TriageLane.Active] = counts?.Active ?? 0,
+                [TriageLane.Blocked] = counts?.Blocked ?? 0,
+                [TriageLane.NeedsException] = counts?.NeedsException ?? 0,
+                [TriageLane.MutedReach] = counts?.MutedReach ?? 0,
+                [TriageLane.MutedVex] = counts?.MutedVex ?? 0
+            });
+    }
+
+    // ... Additional methods and mapping helpers
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ITriageQueryService` interface defined
+- [ ] `TriageQueryService` implementation created
+- [ ] Filtering by lane, verdict, vulnId, purl, severity
+- [ ] Pagination with offset/limit
+- [ ] Full case includes decisions, evidence, snapshots
+- [ ] Lane grouping with counts
+- [ ] Summary statistics
+
+---
+
+### T5: Create ITriageCommandService
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T4
+
+**Description**:
+Create service for decision creation and revocation with DSSE signing.
+
+**Implementation Path**: `Services/ITriageCommandService.cs` and `Services/TriageCommandService.cs` (new files)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.WebService.Services;
+
+/// <summary>
+/// Service for triage command operations.
+/// </summary>
+public interface ITriageCommandService
+{
+    Task<TriageDecisionDto> CreateDecisionAsync(
+        CreateDecisionRequest request,
+        string actorSubject,
+        string? actorDisplay,
+        CancellationToken ct = default);
+
+    Task RevokeDecisionAsync(
+        Guid decisionId,
+        string reason,
+        string actorSubject,
+        CancellationToken ct = default);
+}
+
+/// <summary>
+/// Implementation of triage command service.
+/// </summary>
+public sealed class TriageCommandService : ITriageCommandService
+{
+    private readonly TriageDbContext _db;
+    private readonly IDsseSigner _signer;
+    private readonly ILogger<TriageCommandService> _logger;
+
+    public TriageCommandService(
+        TriageDbContext db,
+        IDsseSigner signer,
+        ILogger<TriageCommandService> logger)
+    {
+        _db = db;
+        _signer = signer;
+        _logger = logger;
+    }
+
+    public async Task<TriageDecisionDto> CreateDecisionAsync(
+        CreateDecisionRequest request,
+        string actorSubject,
+        string? actorDisplay,
+        CancellationToken ct = default)
+    {
+        // Validate finding exists
+        var finding = await _db.Findings.FindAsync([request.FindingId], ct)
+            ?? throw new NotFoundException($"Finding {request.FindingId} not found");
+
+        // Build decision
+        var decision = new TriageDecision
+        {
+            FindingId = request.FindingId,
+            Kind = request.Kind,
+            ReasonCode = request.ReasonCode,
+            Note = request.Note,
+            PolicyRef = request.PolicyRef,
+            Ttl = request.Ttl,
+            ActorSubject = actorSubject,
+            ActorDisplay = actorDisplay
+        };
+
+        // Create DSSE envelope and sign
+        var envelope = new DecisionDssePayload
+        {
+            DecisionId = decision.Id,
+            FindingId = decision.FindingId,
+            Kind = decision.Kind.ToString(),
+            ReasonCode = decision.ReasonCode,
+            ActorSubject = actorSubject,
+            CreatedAt = decision.CreatedAt.ToString("O")
+        };
+
+        var signatureResult = await _signer.SignAsync(envelope, ct);
+        decision.SignatureRef = signatureResult.SignatureRef;
+        decision.DsseHash = signatureResult.EnvelopeHash;
+
+        // Update finding lane based on decision
+        finding.Lane = DetermineLane(finding, decision);
+
+        _db.Decisions.Add(decision);
+        await _db.SaveChangesAsync(ct);
+
+        _logger.LogInformation(
+            "Created {Kind} decision {DecisionId} for finding {FindingId} by {Actor}",
+            decision.Kind, decision.Id, decision.FindingId, actorSubject);
+
+        // Create snapshot for audit trail
+        await CreateDecisionSnapshotAsync(finding, decision, ct);
+
+        return MapToDto(decision);
+    }
+
+    public async Task RevokeDecisionAsync(
+        Guid decisionId,
+        string reason,
+        string actorSubject,
+        CancellationToken ct = default)
+    {
+        var decision = await _db.Decisions
+            .Include(d => d.Finding)
+            .FirstOrDefaultAsync(d => d.Id == decisionId, ct)
+            ?? throw new NotFoundException($"Decision {decisionId} not found");
+
+        if (!decision.IsActive)
+            throw new InvalidOperationException("Decision is already revoked");
+
+        // Sign revocation
+        var revokePayload = new RevocationDssePayload
+        {
+            DecisionId = decisionId,
+            Reason = reason,
+            RevokedBy = actorSubject,
+            RevokedAt = DateTimeOffset.UtcNow.ToString("O")
+        };
+
+        var signatureResult = await _signer.SignAsync(revokePayload, ct);
+
+        decision.RevokedAt = DateTimeOffset.UtcNow;
+        decision.RevokeReason = reason;
+        decision.RevokeSignatureRef = signatureResult.SignatureRef;
+        decision.RevokeDsseHash = signatureResult.EnvelopeHash;
+
+        // Recalculate finding lane
+        if (decision.Finding is not null)
+        {
+            decision.Finding.Lane = await RecalculateLaneAsync(decision.Finding, ct);
+        }
+
+        await _db.SaveChangesAsync(ct);
+
+        _logger.LogInformation(
+            "Revoked decision {DecisionId} by {Actor}: {Reason}",
+            decisionId, actorSubject, reason);
+    }
+
+    private static TriageLane DetermineLane(TriageFinding finding, TriageDecision decision)
+    {
+        return decision.Kind switch
+        {
+            TriageDecisionKind.MuteReach => TriageLane.MutedReach,
+            TriageDecisionKind.MuteVex => TriageLane.MutedVex,
+            TriageDecisionKind.Exception => TriageLane.Compensated,
+            TriageDecisionKind.Ack => finding.Lane, // Keep current lane
+            _ => finding.Lane
+        };
+    }
+
+    // ... Additional helper methods
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ITriageCommandService` interface defined
+- [ ] `TriageCommandService` implementation created
+- [ ] Decision creation with DSSE signing
+- [ ] Revocation with signed envelope
+- [ ] Finding lane updates on decision
+- [ ] Snapshot creation for audit trail
+- [ ] Logging for observability
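+
+`TriageCommandService` above depends on `IDsseSigner`, which this sprint references but never defines. A minimal sketch of the contract as inferred from the call sites (`SignAsync` yielding a signature reference plus an envelope hash); the real interface may differ:
+
+```csharp
+// Assumed shape of the DSSE signer dependency, inferred from call sites only.
+public interface IDsseSigner
+{
+    Task<DsseSignatureResult> SignAsync<TPayload>(TPayload payload, CancellationToken ct = default);
+}
+
+public sealed record DsseSignatureResult(string SignatureRef, string EnvelopeHash);
+```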
+
+---
+
+### T6: Add TriageContracts.cs
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Define API request/response DTOs for triage endpoints.
+
+**Implementation Path**: `Contracts/TriageContracts.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.WebService.Contracts;
+
+// Query parameters
+public sealed record TriageFindingQueryParams(
+    TriageLane? Lane = null,
+    TriageVerdict? Verdict = null,
+    string? VulnId = null,
+    string? Purl = null,
+    decimal? SeverityMin = null,
+    int Offset = 0,
+    int Limit = 50);
+
+public sealed record PaginationParams(int Offset = 0, int Limit = 50);
+
+// Request DTOs
+public sealed record CreateDecisionRequest(
+    Guid FindingId,
+    TriageDecisionKind Kind,
+    string ReasonCode,
+    string? Note = null,
+    string? PolicyRef = null,
+    DateTimeOffset? Ttl = null);
+
+public sealed record RevokeDecisionRequest(string Reason);
+
+// Response DTOs
+public sealed record TriageFindingDto(
+    Guid Id,
+    string VulnId,
+    string Purl,
+    string? Version,
+    string? Location,
+    TriageLane Lane,
+    TriageVerdict Verdict,
+    decimal? CvssScore,
+    string? CvssVector,
+    TriageReachability Reachability,
+    TriageVexStatus VexStatus,
+    bool Kev,
+    decimal? EpssScore,
+    DateTimeOffset CreatedAt,
+    DateTimeOffset UpdatedAt);
+
+public sealed record TriageCaseDto
+{
+    public required TriageFindingDto Finding { get; init; }
+    public required IReadOnlyList<TriageDecisionDto> ActiveDecisions { get; init; }
+    public required IReadOnlyList<TriageEvidenceDto> Evidence { get; init; }
+    public required IReadOnlyList<TriageSnapshotDto> RecentSnapshots { get; init; }
+    public string? VerdictExplanation { get; init; }
+}
+
+public sealed record TriageDecisionDto(
+    Guid Id,
+    Guid FindingId,
+    TriageDecisionKind Kind,
+    string ReasonCode,
+    string? Note,
+    string? PolicyRef,
+    DateTimeOffset? Ttl,
+    string ActorSubject,
+    string? ActorDisplay,
+    string? SignatureRef,
+    string? DsseHash,
+    DateTimeOffset CreatedAt,
+    bool IsActive,
+    DateTimeOffset? RevokedAt,
+    string? RevokeReason);
+
+public sealed record TriageEvidenceDto(
+    Guid Id,
+    Guid FindingId,
+    TriageEvidenceType Type,
+    string Title,
+    string? Issuer,
+    bool Signed,
+    string? SignedBy,
+    string ContentHash,
+    string? SignatureRef,
+    string? MediaType,
+    string Uri,
+    long? SizeBytes,
+    DateTimeOffset CreatedAt);
+
+public sealed record TriageSnapshotDto(
+    Guid Id,
+    TriageSnapshotTrigger Trigger,
+    string? FromInputsHash,
+    string ToInputsHash,
+    string Summary,
+    DateTimeOffset CreatedAt);
+
+public sealed record LaneGroupDto
+{
+    public TriageLane Lane { get; init; }
+    public int Count { get; init; }
+    public int CriticalCount { get; init; }
+    public int HighCount { get; init; }
+}
+
+public sealed record TriageLaneGroupsDto(IReadOnlyList<LaneGroupDto> Groups);
+
+public sealed record TriageSummaryDto(
+    int Total,
+    int Ship,
+    int Block,
+    int Exception,
+    IReadOnlyDictionary<TriageLane, int> ByLane);
+
+public sealed record PagedResult<T>(
+    IReadOnlyList<T> Items,
+    int Total,
+    int Offset,
+    int Limit)
+{
+    public bool HasMore => Offset + Items.Count < Total;
+}
+
+public sealed record EvidenceVerificationResult(
+    bool IsValid,
+    bool HashMatches,
+    bool SignatureValid,
+    string? Error);
+```
+
+**Acceptance Criteria**:
+- [ ] `TriageContracts.cs` file created
+- [ ] Query parameter records defined
+- [ ] Request DTOs for create/revoke
+- [ ] Response DTOs for all entities
+- [ ] PagedResult generic for pagination
+- [ ] Verification result for evidence
+
+---
+
+### T7: Integration Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2, T3, T4, T5, T6
+
+**Description**:
+Create integration tests for all triage API endpoints.
+
+**Implementation Path**: `src/Scanner/__Tests/StellaOps.Scanner.WebService.Tests/Endpoints/TriageEndpointsTests.cs`
+
+**Test Cases**:
+```csharp
+// TriageApiFactory is an assumed name for the suite's Testcontainers-backed
+// WebApplicationFactory fixture; substitute the real fixture type.
+public class TriageEndpointsTests : IClassFixture<TriageApiFactory>
+{
+    private readonly HttpClient _client;
+    private readonly TriageDbContext _db;
+
+    public TriageEndpointsTests(TriageApiFactory factory)
+    {
+        _client = factory.CreateClient();
+        _db = factory.Services.GetRequiredService<TriageDbContext>();
+    }
+
+    [Fact]
+    public async Task GetFindings_ReturnsPagedResults()
+    {
+        // Arrange - seed findings
+        await SeedFindingsAsync(10);
+
+        // Act
+        var response = await _client.GetAsync("/api/v1/triage/findings?limit=5");
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var result = await response.Content.ReadFromJsonAsync<PagedResult<TriageFindingDto>>();
+        result.Should().NotBeNull();
+        result!.Items.Should().HaveCount(5);
+        result.Total.Should().Be(10);
+        result.HasMore.Should().BeTrue();
+    }
+
+    [Fact]
+    public async Task GetFindings_FiltersByLane()
+    {
+        // Arrange
+        await SeedFindingsWithLanesAsync();
+
+        // Act
+        var response = await _client.GetAsync("/api/v1/triage/findings?lane=Blocked");
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var result = await response.Content.ReadFromJsonAsync<PagedResult<TriageFindingDto>>();
+        result!.Items.Should().OnlyContain(f => f.Lane == TriageLane.Blocked);
+    }
+
+    [Fact]
+    public async Task GetFindingCase_ReturnsFullCase()
+    {
+        // Arrange
+        var finding = await SeedFindingWithDecisionsAndEvidenceAsync();
+
+        // Act
+        var response = await _client.GetAsync($"/api/v1/triage/findings/{finding.Id}");
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var result = await response.Content.ReadFromJsonAsync<TriageCaseDto>();
+        result.Should().NotBeNull();
+        result!.Finding.Id.Should().Be(finding.Id);
+        result.ActiveDecisions.Should().NotBeEmpty();
+        result.Evidence.Should().NotBeEmpty();
+    }
+
+    [Fact]
+    public async Task CreateDecision_CreatesDsseSignedDecision()
+    {
+        // Arrange
+        var finding = await SeedFindingAsync();
+        var request = new CreateDecisionRequest(
+            FindingId: finding.Id,
+            Kind: TriageDecisionKind.MuteReach,
+            ReasonCode: "REACH_ANALYSIS",
+            Note: "Static analysis confirmed unreachable");
+
+        // Act
+        var response = await _client.PostAsJsonAsync("/api/v1/triage/decisions", request);
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.Created);
+        var result = await response.Content.ReadFromJsonAsync<TriageDecisionDto>();
+        result.Should().NotBeNull();
+        result!.SignatureRef.Should().NotBeNullOrEmpty();
+        result.DsseHash.Should().NotBeNullOrEmpty();
+    }
+
+    [Fact]
+    public async Task RevokeDecision_RevokesWithSignature()
+    {
+        // Arrange
+        var decision = await SeedActiveDecisionAsync();
+
+        // Act - the revoke endpoint binds RevokeDecisionRequest from the body,
+        // so the DELETE must carry JSON content.
+        var request = new HttpRequestMessage(
+            HttpMethod.Delete,
+            $"/api/v1/triage/decisions/{decision.Id}/revoke")
+        {
+            Content = JsonContent.Create(new RevokeDecisionRequest("No longer applicable"))
+        };
+        var response = await _client.SendAsync(request);
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.NoContent);
+
+        var revoked = await _db.Decisions.FindAsync(decision.Id);
+        revoked!.IsActive.Should().BeFalse();
+        revoked.RevokeSignatureRef.Should().NotBeNullOrEmpty();
+    }
+
+    [Fact]
+    public async Task GetLanes_ReturnsGroupedCounts()
+    {
+        // Arrange
+        await SeedFindingsWithLanesAsync();
+
+        // Act
+        var response = await _client.GetAsync("/api/v1/triage/lanes");
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var result = await response.Content.ReadFromJsonAsync<TriageLaneGroupsDto>();
+        result.Should().NotBeNull();
+        result!.Groups.Should().NotBeEmpty();
+    }
+
+    [Fact]
+    public async Task GetEvidence_VerifiesIntegrity()
+    {
+        // Arrange
+        var evidence = await SeedSignedEvidenceAsync();
+
+        // Act
+        var response = await _client.GetAsync($"/api/v1/triage/evidence/{evidence.Id}/verify");
+
+        // Assert
+        response.StatusCode.Should().Be(HttpStatusCode.OK);
+        var result = await response.Content.ReadFromJsonAsync<EvidenceVerificationResult>();
+        result!.IsValid.Should().BeTrue();
+        result.HashMatches.Should().BeTrue();
+        result.SignatureValid.Should().BeTrue();
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Integration tests file created
+- [ ] Test for paginated findings
+- [ ] Test for lane filtering
+- [ ] Test for full case retrieval
+- [ ] Test for decision creation with DSSE
+- [ ] Test for decision revocation
+- [ ] Test for lane grouping
+- [ ] Test for evidence verification
+- [ ] All tests pass with Testcontainers PostgreSQL
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Scanner Team | Create TriageEndpoints.cs |
+| 2 | T2 | TODO | T1 | Scanner Team | Create TriageDecisionEndpoints.cs |
+| 3 | T3 | TODO | T1 | Scanner Team | Create TriageEvidenceEndpoints.cs |
+| 4 | T4 | TODO | — | Scanner Team | Create ITriageQueryService |
+| 5 | T5 | TODO | T4 | Scanner Team | Create ITriageCommandService |
+| 6 | T6 | TODO | — | Scanner Team | Add TriageContracts.cs |
+| 7 | T7 | TODO | T1-T6 | Scanner Team | Integration tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from UX Gap Analysis. Triage API identified as blocking dependency for all UI work. | Claude |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Minimal API | Decision | Scanner Team | Use .NET minimal API over controllers for consistency |
+| DSSE signing | Decision | Scanner Team | All decisions cryptographically signed |
+| Lane recalculation | Decision | Scanner Team | Decisions trigger automatic lane updates |
+| Pagination | Decision | Scanner Team | Default limit 50, max 200 |
+
+---
+
+## Success Criteria
+
+- [ ] All 7 tasks marked DONE
+- [ ] GET /triage/findings returns paginated results
+- [ ] GET /triage/findings/{id} returns full case
+- [ ] POST /triage/decisions creates signed decision
+- [ ] DELETE /triage/decisions/{id}/revoke works
+- [ ] GET /triage/lanes returns grouped counts
+- [ ] All integration tests pass
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_4200_0001_0002_excititor_policy_lattice.md b/docs/implplan/SPRINT_4200_0001_0002_excititor_policy_lattice.md
new file mode 100644
index 000000000..6ba647586
--- /dev/null
+++ b/docs/implplan/SPRINT_4200_0001_0002_excititor_policy_lattice.md
@@ -0,0 +1,994 @@
+# Sprint 4200.0001.0002 · Wire Excititor to Policy K4 Lattice
+
+## Topic & Scope
+
+- Replace hardcoded VEX precedence in Excititor with Policy's K4 TrustLatticeEngine
+- Enable trust weight propagation in VEX merge decisions
+- Add structured merge trace for explainability
+
+**Working directory:** `src/Excititor/__Libraries/StellaOps.Excititor.Core/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: None
+- **Downstream**: Sprint 4500.0002.0001 (VEX Conflict Studio)
+- **Safe to parallelize with**: Sprint 4200.0001.0001 (Triage REST API), Sprint 4100.0002.0001 (Knowledge Snapshots)
+
+## Documentation Prerequisites
+
+- `src/Excititor/__Libraries/StellaOps.Excititor.Core/AGENTS.md`
+- `src/Policy/__Libraries/StellaOps.Policy/Lattice/AGENTS.md`
+- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md` (VEX Conflict Studio)
+- Existing files: `OpenVexStatementMerger.cs`, `TrustLatticeEngine.cs`
`TrustLatticeEngine.cs`
+
+---
+
+## Problem Analysis
+
+The Policy module has a sophisticated `TrustLatticeEngine` implementing K4 logic (True, False, Both, Neither), but Excititor's `OpenVexStatementMerger` uses hardcoded precedence:
+
+```csharp
+// CURRENT (hardcoded in OpenVexStatementMerger.cs):
+private static int GetStatusPrecedence(VexStatus status) => status switch
+{
+    VexStatus.Affected => 3,
+    VexStatus.UnderInvestigation => 2,
+    VexStatus.Fixed => 1,
+    VexStatus.NotAffected => 0,
+    _ => -1
+};
+```
+
+This disconnect means VEX merge outcomes are inconsistent with policy intent and cannot consider trust weights.
+
+---
+
+## Tasks
+
+### T1: Create IVexLatticeProvider Interface
+
+**Assignee**: Excititor Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Define abstraction over Policy's TrustLatticeEngine for Excititor consumption.
+
+**Implementation Path**: `Lattice/IVexLatticeProvider.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Excititor.Core.Lattice;
+
+/// <summary>
+/// Abstraction for VEX status lattice operations.
+/// </summary>
+public interface IVexLatticeProvider
+{
+    /// <summary>
+    /// Computes the lattice join (least upper bound) of two VEX statuses.
+    /// </summary>
+    VexLatticeResult Join(VexStatement left, VexStatement right);
+
+    /// <summary>
+    /// Computes the lattice meet (greatest lower bound) of two VEX statuses.
+    /// </summary>
+    VexLatticeResult Meet(VexStatement left, VexStatement right);
+
+    /// <summary>
+    /// Determines if one status is higher in the lattice than another.
+    /// </summary>
+    bool IsHigher(VexStatus a, VexStatus b);
+
+    /// <summary>
+    /// Gets the trust weight for a VEX statement based on its source.
+    /// </summary>
+    decimal GetTrustWeight(VexStatement statement);
+
+    /// <summary>
+    /// Resolves a conflict using trust weights and lattice logic.
+    /// </summary>
+    VexConflictResolution ResolveConflict(VexStatement left, VexStatement right);
+}
+
+/// <summary>
+/// Result of a lattice operation.
+/// </summary>
+public sealed record VexLatticeResult(
+    VexStatus ResultStatus,
+    VexStatement? WinningStatement,
+    string Reason,
+    decimal? TrustDelta);
+
+/// <summary>
+/// Detailed conflict resolution result.
+/// </summary>
+public sealed record VexConflictResolution(
+    VexStatement Winner,
+    VexStatement Loser,
+    ConflictResolutionReason Reason,
+    MergeTrace Trace);
+
+/// <summary>
+/// Why one statement won over another.
+/// </summary>
+public enum ConflictResolutionReason
+{
+    /// <summary>Higher trust weight from source.</summary>
+    TrustWeight,
+
+    /// <summary>More recent timestamp.</summary>
+    Freshness,
+
+    /// <summary>Lattice position (e.g., Affected > NotAffected).</summary>
+    LatticePosition,
+
+    /// <summary>Both equal, first used.</summary>
+    Tie
+}
+
+/// <summary>
+/// Structured trace of merge decision.
+/// </summary>
+public sealed record MergeTrace
+{
+    public required string LeftSource { get; init; }
+    public required string RightSource { get; init; }
+    public required VexStatus LeftStatus { get; init; }
+    public required VexStatus RightStatus { get; init; }
+    public required decimal LeftTrust { get; init; }
+    public required decimal RightTrust { get; init; }
+    public required VexStatus ResultStatus { get; init; }
+    public required string Explanation { get; init; }
+    public DateTimeOffset EvaluatedAt { get; init; } = DateTimeOffset.UtcNow;
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `IVexLatticeProvider.cs` file created in `Lattice/`
+- [ ] Join and Meet operations defined
+- [ ] Trust weight method defined
+- [ ] Conflict resolution with trace
+- [ ] MergeTrace for explainability
+
+---
+
+### T2: Implement PolicyLatticeAdapter
+
+**Assignee**: Excititor Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Adapt Policy module's TrustLatticeEngine for Excititor consumption.
+
+**Implementation Path**: `Lattice/PolicyLatticeAdapter.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Excititor.Core.Lattice;
+
+/// <summary>
+/// Adapts Policy's TrustLatticeEngine for VEX operations.
+/// </summary>
+public sealed class PolicyLatticeAdapter : IVexLatticeProvider
+{
+    private readonly ITrustLatticeEngine _lattice;
+    private readonly ITrustWeightRegistry _trustRegistry;
+    private readonly ILogger<PolicyLatticeAdapter> _logger;
+
+    // K4 lattice: Affected(Both) > UnderInvestigation(Neither/unknown) > {Fixed, NotAffected}
+    private static readonly Dictionary<VexStatus, TrustLabel> StatusToLabel = new()
+    {
+        [VexStatus.Affected] = TrustLabel.Both,              // Conflict - must address
+        [VexStatus.UnderInvestigation] = TrustLabel.Neither, // Unknown
+        [VexStatus.Fixed] = TrustLabel.True,                 // Known good
+        [VexStatus.NotAffected] = TrustLabel.False           // Known not affected
+    };
+
+    public PolicyLatticeAdapter(
+        ITrustLatticeEngine lattice,
+        ITrustWeightRegistry trustRegistry,
+        ILogger<PolicyLatticeAdapter> logger)
+    {
+        _lattice = lattice;
+        _trustRegistry = trustRegistry;
+        _logger = logger;
+    }
+
+    public VexLatticeResult Join(VexStatement left, VexStatement right)
+    {
+        var leftLabel = StatusToLabel.GetValueOrDefault(left.Status, TrustLabel.Neither);
+        var rightLabel = StatusToLabel.GetValueOrDefault(right.Status, TrustLabel.Neither);
+
+        var joinResult = _lattice.Join(leftLabel, rightLabel);
+        var resultStatus = LabelToStatus(joinResult);
+
+        var winner = DetermineWinner(left, right, resultStatus);
+
+        return new VexLatticeResult(
+            ResultStatus: resultStatus,
+            WinningStatement: winner,
+            Reason: $"K4 join: {leftLabel} ∨ {rightLabel} = {joinResult}",
+            TrustDelta: Math.Abs(GetTrustWeight(left) - GetTrustWeight(right)));
+    }
+
+    public VexLatticeResult Meet(VexStatement left, VexStatement right)
+    {
+        var leftLabel = StatusToLabel.GetValueOrDefault(left.Status, TrustLabel.Neither);
+        var rightLabel = StatusToLabel.GetValueOrDefault(right.Status, TrustLabel.Neither);
+
+        var meetResult = _lattice.Meet(leftLabel, rightLabel);
+        var resultStatus = LabelToStatus(meetResult);
+
+        var winner = DetermineWinner(left, right, resultStatus);
+
+        return new VexLatticeResult(
+            ResultStatus: resultStatus,
+            WinningStatement: winner,
+            Reason: $"K4 meet: {leftLabel} ∧ {rightLabel} = {meetResult}",
+            TrustDelta: Math.Abs(GetTrustWeight(left) - GetTrustWeight(right)));
+    }
+
+    public bool IsHigher(VexStatus a, VexStatus b)
+    {
+        var labelA = StatusToLabel.GetValueOrDefault(a, TrustLabel.Neither);
+        var labelB = StatusToLabel.GetValueOrDefault(b, TrustLabel.Neither);
return _lattice.IsAbove(labelA, labelB); + } + + public decimal GetTrustWeight(VexStatement statement) + { + // Get trust weight from registry based on source + var sourceKey = ExtractSourceKey(statement); + return _trustRegistry.GetWeight(sourceKey); + } + + public VexConflictResolution ResolveConflict(VexStatement left, VexStatement right) + { + var leftWeight = GetTrustWeight(left); + var rightWeight = GetTrustWeight(right); + + VexStatement winner; + VexStatement loser; + ConflictResolutionReason reason; + + // 1. Trust weight takes precedence + if (Math.Abs(leftWeight - rightWeight) > 0.01m) + { + if (leftWeight > rightWeight) + { + winner = left; + loser = right; + } + else + { + winner = right; + loser = left; + } + reason = ConflictResolutionReason.TrustWeight; + } + // 2. Lattice position as tiebreaker + else if (IsHigher(left.Status, right.Status)) + { + winner = left; + loser = right; + reason = ConflictResolutionReason.LatticePosition; + } + else if (IsHigher(right.Status, left.Status)) + { + winner = right; + loser = left; + reason = ConflictResolutionReason.LatticePosition; + } + // 3. Freshness as final tiebreaker + else if (left.Timestamp > right.Timestamp) + { + winner = left; + loser = right; + reason = ConflictResolutionReason.Freshness; + } + else if (right.Timestamp > left.Timestamp) + { + winner = right; + loser = left; + reason = ConflictResolutionReason.Freshness; + } + // 4. True tie - use first + else + { + winner = left; + loser = right; + reason = ConflictResolutionReason.Tie; + } + + var trace = new MergeTrace + { + LeftSource = left.Source ?? "unknown", + RightSource = right.Source ?? "unknown", + LeftStatus = left.Status, + RightStatus = right.Status, + LeftTrust = leftWeight, + RightTrust = rightWeight, + ResultStatus = winner.Status, + Explanation = BuildExplanation(winner, loser, reason) + }; + + _logger.LogDebug( + "VEX conflict resolved: {Winner} ({WinnerStatus}) won over {Loser} ({LoserStatus}) by {Reason}", + winner.Source, winner.Status, loser.Source, loser.Status, reason); + + return new VexConflictResolution(winner, loser, reason, trace); + } + + private static VexStatus LabelToStatus(TrustLabel label) => label switch + { + TrustLabel.Both => VexStatus.Affected, + TrustLabel.Neither => VexStatus.UnderInvestigation, + TrustLabel.True => VexStatus.Fixed, + TrustLabel.False => VexStatus.NotAffected, + _ => VexStatus.UnderInvestigation + }; + + private VexStatement? DetermineWinner(VexStatement left, VexStatement right, VexStatus resultStatus) + { + if (left.Status == resultStatus) return left; + if (right.Status == resultStatus) return right; + + // Result is computed from lattice, neither matches exactly + // Return the one with higher trust + return GetTrustWeight(left) >= GetTrustWeight(right) ? left : right; + } + + private static string ExtractSourceKey(VexStatement statement) + { + // Extract publisher/issuer from statement for trust lookup + return statement.Source?.ToLowerInvariant() ?? 
"unknown"; + } + + private static string BuildExplanation( + VexStatement winner, VexStatement loser, ConflictResolutionReason reason) + { + return reason switch + { + ConflictResolutionReason.TrustWeight => + $"'{winner.Source}' has higher trust weight than '{loser.Source}'", + ConflictResolutionReason.Freshness => + $"'{winner.Source}' is more recent ({winner.Timestamp:O}) than '{loser.Source}' ({loser.Timestamp:O})", + ConflictResolutionReason.LatticePosition => + $"'{winner.Status}' is higher in K4 lattice than '{loser.Status}'", + ConflictResolutionReason.Tie => + $"Tie between '{winner.Source}' and '{loser.Source}', using first", + _ => "Unknown resolution" + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `PolicyLatticeAdapter.cs` file created +- [ ] K4 status mapping defined +- [ ] Join/Meet use TrustLatticeEngine +- [ ] Trust weights from registry +- [ ] Conflict resolution with precedence: trust > lattice > freshness > tie +- [ ] Structured explanation in MergeTrace + +--- + +### T3: Refactor OpenVexStatementMerger + +**Assignee**: Excititor Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Replace hardcoded precedence with lattice-based merge logic. + +**Implementation Path**: `Formats/OpenVEX/OpenVexStatementMerger.cs` (modify existing) + +**Changes**: +```csharp +namespace StellaOps.Excititor.Formats.OpenVEX; + +/// +/// Merges OpenVEX statements using K4 lattice logic. +/// +public sealed class OpenVexStatementMerger : IVexStatementMerger +{ + private readonly IVexLatticeProvider _lattice; + private readonly ILogger _logger; + + public OpenVexStatementMerger( + IVexLatticeProvider lattice, + ILogger logger) + { + _lattice = lattice; + _logger = logger; + } + + /// + /// Merges multiple VEX statements for the same product/vulnerability pair. + /// + public VexMergeResult Merge(IEnumerable statements) + { + var statementList = statements.ToList(); + + if (statementList.Count == 0) + return VexMergeResult.Empty(); + + if (statementList.Count == 1) + return VexMergeResult.Single(statementList[0]); + + // Sort by trust weight descending for stable merge order + var sorted = statementList + .OrderByDescending(s => _lattice.GetTrustWeight(s)) + .ThenByDescending(s => s.Timestamp) + .ToList(); + + var traces = new List(); + var current = sorted[0]; + + for (int i = 1; i < sorted.Count; i++) + { + var next = sorted[i]; + + // Check for conflict + if (current.Status != next.Status) + { + var resolution = _lattice.ResolveConflict(current, next); + traces.Add(resolution.Trace); + current = resolution.Winner; + + _logger.LogDebug( + "Merged VEX statement: {Status} from {Source} (reason: {Reason})", + current.Status, current.Source, resolution.Reason); + } + else + { + // Same status - prefer higher trust + if (_lattice.GetTrustWeight(next) > _lattice.GetTrustWeight(current)) + { + current = next; + } + } + } + + return new VexMergeResult( + ResultStatement: current, + InputCount: statementList.Count, + HadConflicts: traces.Count > 0, + Traces: traces); + } + + // REMOVED: Hardcoded precedence method + // private static int GetStatusPrecedence(VexStatus status) => ... +} + +/// +/// Result of VEX statement merge. 
+/// </summary>
+public sealed record VexMergeResult(
+    VexStatement ResultStatement,
+    int InputCount,
+    bool HadConflicts,
+    IReadOnlyList<MergeTrace> Traces)
+{
+    public static VexMergeResult Empty() =>
+        new(default!, 0, false, []);
+
+    public static VexMergeResult Single(VexStatement statement) =>
+        new(statement, 1, false, []);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Hardcoded `GetStatusPrecedence` removed
+- [ ] Constructor takes `IVexLatticeProvider`
+- [ ] Merge uses lattice conflict resolution
+- [ ] MergeTraces collected for all conflicts
+- [ ] Result includes conflict information
+- [ ] Logging for observability
+
+---
+
+### T4: Add Trust Weight Propagation
+
+**Assignee**: Excititor Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T2
+
+**Description**:
+Implement trust weight registry for VEX sources.
+
+**Implementation Path**: `Lattice/TrustWeightRegistry.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Excititor.Core.Lattice;
+
+/// <summary>
+/// Registry for VEX source trust weights.
+/// </summary>
+public interface ITrustWeightRegistry
+{
+    decimal GetWeight(string sourceKey);
+    void RegisterWeight(string sourceKey, decimal weight);
+    IReadOnlyDictionary<string, decimal> GetAllWeights();
+}
+
+/// <summary>
+/// Default implementation with configurable weights.
+/// </summary>
+public sealed class TrustWeightRegistry : ITrustWeightRegistry
+{
+    private readonly Dictionary<string, decimal> _weights = new(StringComparer.OrdinalIgnoreCase);
+    private readonly TrustWeightOptions _options;
+    private readonly ILogger<TrustWeightRegistry> _logger;
+
+    // Default trust hierarchy
+    private static readonly Dictionary<string, decimal> DefaultWeights = new()
+    {
+        ["vendor"] = 1.0m,       // Vendor statements highest trust
+        ["distro"] = 0.9m,       // Distribution maintainers
+        ["nvd"] = 0.8m,          // NVD/NIST
+        ["ghsa"] = 0.75m,        // GitHub Security Advisories
+        ["osv"] = 0.7m,          // Open Source Vulnerabilities
+        ["cisa"] = 0.85m,        // CISA advisories
+        ["first-party"] = 0.95m, // First-party (internal) statements
+        ["community"] = 0.5m,    // Community reports
+        ["unknown"] = 0.3m       // Unknown sources
+    };
+
+    public TrustWeightRegistry(
+        IOptions<TrustWeightOptions> options,
+        ILogger<TrustWeightRegistry> logger)
+    {
+        _options = options.Value;
+        _logger = logger;
+
+        // Initialize with defaults
+        foreach (var (key, weight) in DefaultWeights)
+        {
+            _weights[key] = weight;
+        }
+
+        // Override with configured weights
+        foreach (var (key, weight) in _options.SourceWeights)
+        {
+            _weights[key] = weight;
+            _logger.LogDebug("Configured trust weight: {Source} = {Weight}", key, weight);
+        }
+    }
+
+    public decimal GetWeight(string sourceKey)
+    {
+        // Try exact match
+        if (_weights.TryGetValue(sourceKey, out var weight))
+            return weight;
+
+        // Try category match (e.g., "red-hat-vendor" -> "vendor")
+        foreach (var category in DefaultWeights.Keys)
+        {
+            if (sourceKey.Contains(category, StringComparison.OrdinalIgnoreCase))
+            {
+                return _weights[category];
+            }
+        }
+
+        return _weights["unknown"];
+    }
+
+    public void RegisterWeight(string sourceKey, decimal weight)
+    {
+        _weights[sourceKey] = Math.Clamp(weight, 0m, 1m);
+        _logger.LogInformation("Registered trust weight: {Source} = {Weight}", sourceKey, weight);
+    }
+
+    public IReadOnlyDictionary<string, decimal> GetAllWeights() =>
+        new Dictionary<string, decimal>(_weights);
+}
+
+/// <summary>
+/// Configuration options for trust weights.
+/// </summary>
+public sealed class TrustWeightOptions
+{
+    public Dictionary<string, decimal> SourceWeights { get; set; } = [];
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `ITrustWeightRegistry` interface defined
+- [ ] `TrustWeightRegistry` implementation created
+- [ ] Default weights for common sources
+- [ ] Configuration override support
+- [ ] Category fallback matching
+- [ ] Weight clamping to [0, 1]
+
+---
+
+### T5: Add Merge Trace Output
+
+**Assignee**: Excititor Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T3
+
+**Description**:
+Add structured trace output for merge decisions.
+
+**Implementation Path**: `Formats/OpenVEX/MergeTraceWriter.cs` (new file)
+
+**Implementation**:
+```csharp
+namespace StellaOps.Excititor.Formats.OpenVEX;
+
+/// <summary>
+/// Writes merge traces in various formats.
+/// </summary>
+public sealed class MergeTraceWriter
+{
+    /// <summary>
+    /// Formats trace as human-readable explanation.
+    /// </summary>
+    public static string ToExplanation(VexMergeResult result)
+    {
+        if (!result.HadConflicts)
+        {
+            return result.InputCount switch
+            {
+                0 => "No VEX statements to merge.",
+                1 => $"Single statement from '{result.ResultStatement.Source}': {result.ResultStatement.Status}",
+                _ => $"All {result.InputCount} statements agreed: {result.ResultStatement.Status}"
+            };
+        }
+
+        var sb = new StringBuilder();
+        sb.AppendLine($"Merged {result.InputCount} statements with {result.Traces.Count} conflicts:");
+        sb.AppendLine();
+
+        foreach (var trace in result.Traces)
+        {
+            sb.AppendLine($"  Conflict: {trace.LeftSource} ({trace.LeftStatus}) vs {trace.RightSource} ({trace.RightStatus})");
+            sb.AppendLine($"    Trust: {trace.LeftTrust:P0} vs {trace.RightTrust:P0}");
+            sb.AppendLine($"    Resolution: {trace.Explanation}");
+            sb.AppendLine();
+        }
+
+        sb.AppendLine($"Final result: {result.ResultStatement.Status} from '{result.ResultStatement.Source}'");
+        return sb.ToString();
+    }
+
+    /// <summary>
+    /// Formats trace as structured JSON.
+    /// </summary>
+    public static string ToJson(VexMergeResult result)
+    {
+        var trace = new
+        {
+            inputCount = result.InputCount,
+            hadConflicts = result.HadConflicts,
+            result = new
+            {
+                status = result.ResultStatement.Status.ToString(),
+                source = result.ResultStatement.Source,
+                timestamp = result.ResultStatement.Timestamp
+            },
+            conflicts = result.Traces.Select(t => new
+            {
+                left = new { source = t.LeftSource, status = t.LeftStatus.ToString(), trust = t.LeftTrust },
+                right = new { source = t.RightSource, status = t.RightStatus.ToString(), trust = t.RightTrust },
+                outcome = t.ResultStatus.ToString(),
+                explanation = t.Explanation,
+                evaluatedAt = t.EvaluatedAt
+            })
+        };
+
+        return JsonSerializer.Serialize(trace, new JsonSerializerOptions
+        {
+            WriteIndented = true,
+            PropertyNamingPolicy = JsonNamingPolicy.CamelCase
+        });
+    }
+
+    /// <summary>
+    /// Creates a VEX annotation with merge provenance.
+    /// </summary>
+    public static VexAnnotation ToAnnotation(VexMergeResult result)
+    {
+        return new VexAnnotation
+        {
+            Type = "merge-provenance",
+            Text = result.HadConflicts
+                ?
$"Merged from {result.InputCount} sources with {result.Traces.Count} conflicts" + : $"Merged from {result.InputCount} sources (no conflicts)", + Details = new Dictionary + { + ["inputCount"] = result.InputCount, + ["hadConflicts"] = result.HadConflicts, + ["conflictCount"] = result.Traces.Count, + ["traces"] = result.Traces + } + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] `MergeTraceWriter.cs` file created +- [ ] Human-readable explanation format +- [ ] Structured JSON format +- [ ] VEX annotation for provenance +- [ ] Conflict count and details included + +--- + +### T6: Deprecate VexConsensusResolver + +**Assignee**: Excititor Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T3 + +**Description**: +Remove deprecated VexConsensusResolver per AOC-19. + +**Implementation Path**: `Resolvers/VexConsensusResolver.cs` (delete or mark obsolete) + +**Changes**: +```csharp +// Option 1: Mark obsolete with error +[Obsolete("Use OpenVexStatementMerger with IVexLatticeProvider instead. Will be removed in v2.0.", error: true)] +public sealed class VexConsensusResolver +{ + // Existing implementation... +} + +// Option 2: Delete file entirely if no external consumers +// Delete: src/Excititor/__Libraries/StellaOps.Excititor.Core/Resolvers/VexConsensusResolver.cs +``` + +**Acceptance Criteria**: +- [ ] `VexConsensusResolver` marked obsolete with error OR deleted +- [ ] All internal references updated to use `OpenVexStatementMerger` +- [ ] No compile errors +- [ ] AOC-19 compliance noted + +--- + +### T7: Tests for Lattice Merge + +**Assignee**: Excititor Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T2, T3, T4, T5 + +**Description**: +Comprehensive tests for lattice-based VEX merge. + +**Implementation Path**: `src/Excititor/__Tests/StellaOps.Excititor.Core.Tests/Lattice/` + +**Test Cases**: +```csharp +public class PolicyLatticeAdapterTests +{ + [Theory] + [InlineData(VexStatus.Affected, VexStatus.NotAffected, VexStatus.Affected)] + [InlineData(VexStatus.Fixed, VexStatus.NotAffected, VexStatus.Fixed)] + [InlineData(VexStatus.UnderInvestigation, VexStatus.Fixed, VexStatus.UnderInvestigation)] + public void Join_ReturnsExpectedK4Result(VexStatus left, VexStatus right, VexStatus expected) + { + var leftStmt = CreateStatement(left, "source1"); + var rightStmt = CreateStatement(right, "source2"); + + var result = _adapter.Join(leftStmt, rightStmt); + + result.ResultStatus.Should().Be(expected); + } + + [Fact] + public void ResolveConflict_TrustWeightWins() + { + // Arrange + var vendor = CreateStatement(VexStatus.NotAffected, "vendor"); + var community = CreateStatement(VexStatus.Affected, "community"); + // vendor has weight 1.0, community has 0.5 + + // Act + var result = _adapter.ResolveConflict(vendor, community); + + // Assert + result.Winner.Should().Be(vendor); + result.Reason.Should().Be(ConflictResolutionReason.TrustWeight); + } + + [Fact] + public void ResolveConflict_EqualTrust_UsesLatticePosition() + { + // Arrange - both from vendor (same trust) + var affected = CreateStatement(VexStatus.Affected, "vendor-a"); + var notAffected = CreateStatement(VexStatus.NotAffected, "vendor-b"); + _registry.RegisterWeight("vendor-a", 0.9m); + _registry.RegisterWeight("vendor-b", 0.9m); + + // Act + var result = _adapter.ResolveConflict(affected, notAffected); + + // Assert - Affected is higher in K4 + result.Winner.Status.Should().Be(VexStatus.Affected); + result.Reason.Should().Be(ConflictResolutionReason.LatticePosition); + } + + [Fact] + public void 
ResolveConflict_EqualTrustAndStatus_UsesFreshness() + { + // Arrange + var older = CreateStatement(VexStatus.Affected, "vendor", DateTimeOffset.UtcNow.AddDays(-1)); + var newer = CreateStatement(VexStatus.Affected, "vendor", DateTimeOffset.UtcNow); + + // Act + var result = _adapter.ResolveConflict(older, newer); + + // Assert + result.Winner.Should().Be(newer); + result.Reason.Should().Be(ConflictResolutionReason.Freshness); + } + + [Fact] + public void ResolveConflict_GeneratesTrace() + { + var left = CreateStatement(VexStatus.Affected, "vendor"); + var right = CreateStatement(VexStatus.NotAffected, "distro"); + + var result = _adapter.ResolveConflict(left, right); + + result.Trace.Should().NotBeNull(); + result.Trace.LeftSource.Should().Be("vendor"); + result.Trace.RightSource.Should().Be("distro"); + result.Trace.Explanation.Should().NotBeNullOrEmpty(); + } +} + +public class OpenVexStatementMergerTests +{ + [Fact] + public void Merge_NoStatements_ReturnsEmpty() + { + var result = _merger.Merge([]); + + result.InputCount.Should().Be(0); + result.HadConflicts.Should().BeFalse(); + } + + [Fact] + public void Merge_SingleStatement_ReturnsSingle() + { + var statement = CreateStatement(VexStatus.NotAffected, "vendor"); + + var result = _merger.Merge([statement]); + + result.InputCount.Should().Be(1); + result.ResultStatement.Should().Be(statement); + result.HadConflicts.Should().BeFalse(); + } + + [Fact] + public void Merge_ConflictingStatements_UsesLattice() + { + var vendor = CreateStatement(VexStatus.NotAffected, "vendor"); + var nvd = CreateStatement(VexStatus.Affected, "nvd"); + + var result = _merger.Merge([vendor, nvd]); + + result.InputCount.Should().Be(2); + result.HadConflicts.Should().BeTrue(); + result.Traces.Should().HaveCount(1); + // Vendor has higher trust, wins + result.ResultStatement.Status.Should().Be(VexStatus.NotAffected); + } + + [Fact] + public void Merge_MultipleStatements_CollectsAllTraces() + { + var statements = new[] + { + CreateStatement(VexStatus.Affected, "source1"), + CreateStatement(VexStatus.NotAffected, "source2"), + CreateStatement(VexStatus.Fixed, "source3") + }; + + var result = _merger.Merge(statements); + + result.InputCount.Should().Be(3); + result.Traces.Should().HaveCountGreaterThan(0); + } +} + +public class TrustWeightRegistryTests +{ + [Fact] + public void GetWeight_KnownSource_ReturnsConfiguredWeight() + { + var weight = _registry.GetWeight("vendor"); + + weight.Should().Be(1.0m); + } + + [Fact] + public void GetWeight_UnknownSource_ReturnsFallback() + { + var weight = _registry.GetWeight("random-source"); + + weight.Should().Be(0.3m); // "unknown" default + } + + [Fact] + public void GetWeight_CategoryMatch_ReturnsCategory() + { + var weight = _registry.GetWeight("red-hat-vendor-advisory"); + + weight.Should().Be(1.0m); // Contains "vendor" + } +} +``` + +**Acceptance Criteria**: +- [ ] K4 join/meet tests +- [ ] Trust weight precedence tests +- [ ] Lattice position tiebreaker tests +- [ ] Freshness tiebreaker tests +- [ ] Merge trace generation tests +- [ ] Empty/single/multiple merge tests +- [ ] Trust registry tests +- [ ] All tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Excititor Team | Create IVexLatticeProvider interface | +| 2 | T2 | TODO | T1 | Excititor Team | Implement PolicyLatticeAdapter | +| 3 | T3 | TODO | T1, T2 | Excititor Team | Refactor OpenVexStatementMerger | +| 4 | T4 | 
TODO | T2 | Excititor Team | Add trust weight propagation | +| 5 | T5 | TODO | T3 | Excititor Team | Add merge trace output | +| 6 | T6 | TODO | T3 | Excititor Team | Deprecate VexConsensusResolver | +| 7 | T7 | TODO | T1-T5 | Excititor Team | Tests for lattice merge | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. K4 lattice disconnect identified between Policy and Excititor modules. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| K4 mapping | Decision | Excititor Team | Affected=Both, UnderInvestigation=Neither, Fixed=True, NotAffected=False | +| Trust precedence | Decision | Excititor Team | Trust > Lattice > Freshness > Tie | +| Default weights | Decision | Excititor Team | vendor=1.0, distro=0.9, nvd=0.8, etc. | +| AOC-19 compliance | Risk | Excititor Team | Must remove VexConsensusResolver | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] No hardcoded VEX precedence values +- [ ] Merge uses K4 lattice logic +- [ ] Trust weights influence outcomes +- [ ] MergeTrace explains decisions +- [ ] VexConsensusResolver deprecated +- [ ] All tests pass +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4200_0002_0001_can_i_ship_header.md b/docs/implplan/SPRINT_4200_0002_0001_can_i_ship_header.md new file mode 100644 index 000000000..0b5b31e96 --- /dev/null +++ b/docs/implplan/SPRINT_4200_0002_0001_can_i_ship_header.md @@ -0,0 +1,839 @@ +# Sprint 4200.0002.0001 · "Can I Ship?" Case Header + +## Topic & Scope + +- Create above-the-fold verdict display for triage cases +- Show primary verdict (SHIP/BLOCK/EXCEPTION) prominently +- Display risk delta from baseline and actionable counts +- Link to signed attestation and knowledge snapshot + +**Working directory:** `src/Web/StellaOps.Web/src/app/features/triage/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4200.0001.0001 (Triage REST API) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4200.0002.0002 (Verdict Ladder), Sprint 4200.0002.0003 (Delta/Compare View) + +## Documentation Prerequisites + +- `src/Web/StellaOps.Web/AGENTS.md` +- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md` +- `docs/product-advisories/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md` + +--- + +## Tasks + +### T1: Create case-header.component.ts + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the primary "Can I Ship?" verdict header component. 
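+
+**Usage sketch** (hypothetical host page, not part of this sprint's deliverables; the bindings match the component API defined below):
+
+```typescript
+// Illustrative only - shows how a triage page would consume the header.
+@Component({
+  selector: 'stella-triage-case-page',
+  standalone: true,
+  imports: [CaseHeaderComponent],
+  template: `
+    <stella-case-header
+      [data]="headerData"
+      (attestationClick)="openAttestation($event)"
+      (snapshotClick)="openSnapshot($event)">
+    </stella-case-header>
+  `
+})
+export class TriageCasePageComponent {
+  // Populated from GET /api/v1/triage/findings/{id} (Sprint 4200.0001.0001).
+  headerData!: CaseHeaderData;
+
+  openAttestation(attestationId: string): void { /* open viewer dialog (T4) */ }
+  openSnapshot(snapshotId: string): void { /* open snapshot panel (T5) */ }
+}
+```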
+
+**Implementation Path**: `components/case-header/case-header.component.ts` (new file)
+
+**Implementation**:
+```typescript
+import { Component, Input, Output, EventEmitter, ChangeDetectionStrategy } from '@angular/core';
+import { CommonModule } from '@angular/common';
+import { MatChipsModule } from '@angular/material/chips';
+import { MatIconModule } from '@angular/material/icon';
+import { MatTooltipModule } from '@angular/material/tooltip';
+import { MatButtonModule } from '@angular/material/button';
+
+export type Verdict = 'ship' | 'block' | 'exception';
+
+export interface CaseHeaderData {
+  verdict: Verdict;
+  findingCount: number;
+  criticalCount: number;
+  highCount: number;
+  actionableCount: number;
+  deltaFromBaseline?: DeltaInfo;
+  attestationId?: string;
+  snapshotId?: string;
+  evaluatedAt: Date;
+}
+
+export interface DeltaInfo {
+  newBlockers: number;
+  resolvedBlockers: number;
+  newFindings: number;
+  resolvedFindings: number;
+  baselineName: string;
+}
+
+@Component({
+  selector: 'stella-case-header',
+  standalone: true,
+  imports: [
+    CommonModule,
+    MatChipsModule,
+    MatIconModule,
+    MatTooltipModule,
+    MatButtonModule
+  ],
+  templateUrl: './case-header.component.html',
+  styleUrls: ['./case-header.component.scss'],
+  changeDetection: ChangeDetectionStrategy.OnPush
+})
+export class CaseHeaderComponent {
+  @Input({ required: true }) data!: CaseHeaderData;
+  @Output() verdictClick = new EventEmitter<void>();
+  @Output() attestationClick = new EventEmitter<string>();
+  @Output() snapshotClick = new EventEmitter<string>();
+
+  get verdictLabel(): string {
+    switch (this.data.verdict) {
+      case 'ship': return 'CAN SHIP';
+      case 'block': return 'BLOCKED';
+      case 'exception': return 'EXCEPTION';
+    }
+  }
+
+  get verdictIcon(): string {
+    switch (this.data.verdict) {
+      case 'ship': return 'check_circle';
+      case 'block': return 'block';
+      case 'exception': return 'warning';
+    }
+  }
+
+  get verdictClass(): string {
+    return `verdict-chip verdict-${this.data.verdict}`;
+  }
+
+  get hasNewBlockers(): boolean {
+    return (this.data.deltaFromBaseline?.newBlockers ?? 0) > 0;
+  }
+
+  get deltaText(): string {
+    if (!this.data.deltaFromBaseline) return '';
+    const d = this.data.deltaFromBaseline;
+
+    const parts: string[] = [];
+    if (d.newBlockers > 0) parts.push(`+${d.newBlockers} blockers`);
+    if (d.resolvedBlockers > 0) parts.push(`-${d.resolvedBlockers} resolved`);
+    if (d.newFindings > 0) parts.push(`+${d.newFindings} new`);
+
+    return parts.join(', ') + ` since ${d.baselineName}`;
+  }
+
+  get shortSnapshotId(): string {
+    if (!this.data.snapshotId) return '';
+    // ksm:sha256:abc123... -> ksm:abc123
+    const parts = this.data.snapshotId.split(':');
+    if (parts.length >= 3) {
+      return `ksm:${parts[2].substring(0, 8)}`;
+    }
+    return this.data.snapshotId.substring(0, 16);
+  }
+
+  onVerdictClick(): void {
+    this.verdictClick.emit();
+  }
+
+  onAttestationClick(): void {
+    if (this.data.attestationId) {
+      this.attestationClick.emit(this.data.attestationId);
+    }
+  }
+
+  onSnapshotClick(): void {
+    if (this.data.snapshotId) {
+      this.snapshotClick.emit(this.data.snapshotId);
+    }
+  }
+}
+```
+
+**Template** (`case-header.component.html`):
+```html
+<div class="case-header">
+  <!-- Verdict -->
+  <div class="verdict-section">
+    <button [class]="verdictClass" (click)="onVerdictClick()">
+      <mat-icon>{{ verdictIcon }}</mat-icon>
+      <span class="verdict-label">{{ verdictLabel }}</span>
+    </button>
+    <mat-icon *ngIf="data.attestationId"
+              class="signed-badge"
+              matTooltip="Signed verdict - click to view attestation"
+              (click)="onAttestationClick()">verified</mat-icon>
+  </div>
+
+  <!-- Delta from baseline -->
+  <div class="delta-section" *ngIf="data.deltaFromBaseline">
+    <span [class.has-blockers]="hasNewBlockers">{{ deltaText }}</span>
+  </div>
+
+  <!-- Actionable counts -->
+  <div class="actionables-section">
+    <mat-chip-set>
+      <mat-chip class="chip-critical" *ngIf="data.criticalCount > 0">
+        {{ data.criticalCount }} Critical
+      </mat-chip>
+      <mat-chip class="chip-high" *ngIf="data.highCount > 0">
+        {{ data.highCount }} High
+      </mat-chip>
+      <mat-chip class="chip-actionable">
+        {{ data.actionableCount }} need attention
+      </mat-chip>
+    </mat-chip-set>
+  </div>
+
+  <!-- Knowledge snapshot -->
+  <div class="snapshot-section">
+    <button mat-stroked-button
+            class="snapshot-badge"
+            *ngIf="data.snapshotId"
+            [matTooltip]="data.snapshotId"
+            (click)="onSnapshotClick()">{{ shortSnapshotId }}</button>
+    <span class="evaluated-at">Evaluated {{ data.evaluatedAt | date:'short' }}</span>
+  </div>
+</div>
+``` + +**Styles** (`case-header.component.scss`): +```scss +.case-header { + display: flex; + flex-wrap: wrap; + align-items: center; + gap: 16px; + padding: 16px 24px; + background: var(--surface-container); + border-radius: 8px; + margin-bottom: 16px; +} + +.verdict-section { + display: flex; + align-items: center; + gap: 8px; +} + +.verdict-chip { + font-size: 1.25rem; + font-weight: 600; + padding: 12px 24px; + border-radius: 24px; + + mat-icon { + margin-right: 8px; + } + + &.verdict-ship { + background-color: var(--success-container); + color: var(--on-success-container); + } + + &.verdict-block { + background-color: var(--error-container); + color: var(--on-error-container); + } + + &.verdict-exception { + background-color: var(--warning-container); + color: var(--on-warning-container); + } +} + +.signed-badge { + color: var(--primary); +} + +.delta-section { + flex: 1; + min-width: 200px; + + .has-blockers { + color: var(--error); + font-weight: 500; + } +} + +.actionables-section { + .chip-critical { + background-color: var(--error); + color: var(--on-error); + } + + .chip-high { + background-color: var(--warning); + color: var(--on-warning); + } + + .chip-actionable { + background-color: var(--tertiary-container); + color: var(--on-tertiary-container); + } +} + +.snapshot-section { + display: flex; + align-items: center; + gap: 8px; + + .snapshot-badge { + font-family: monospace; + font-size: 0.875rem; + } + + .evaluated-at { + font-size: 0.75rem; + color: var(--on-surface-variant); + } +} + +// Responsive +@media (max-width: 768px) { + .case-header { + flex-direction: column; + align-items: flex-start; + } + + .verdict-section { + width: 100%; + justify-content: center; + } + + .delta-section, + .actionables-section, + .snapshot-section { + width: 100%; + } +} +``` + +**Acceptance Criteria**: +- [ ] `case-header.component.ts` file created +- [ ] Primary verdict chip (SHIP/BLOCK/EXCEPTION) with icon +- [ ] Color coding for each verdict state +- [ ] Signed attestation badge with click handler +- [ ] Standalone component with modern Angular features + +--- + +### T2: Add Risk Delta Display + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Add delta display showing changes since baseline. + +**Implementation**: Included in T1 template with `DeltaInfo` interface. + +**Additional Styles** (add to `case-header.component.scss`): +```scss +.delta-breakdown { + display: flex; + gap: 16px; + margin-top: 8px; + + .delta-item { + display: flex; + align-items: center; + gap: 4px; + font-size: 0.875rem; + + &.positive { + color: var(--error); + } + + &.negative { + color: var(--success); + } + + mat-icon { + font-size: 16px; + width: 16px; + height: 16px; + } + } +} +``` + +**Acceptance Criteria**: +- [ ] Delta from baseline shown: "+3 new blockers since baseline" +- [ ] Red highlighting for new blockers +- [ ] Green highlighting for resolved issues +- [ ] Baseline name displayed + +--- + +### T3: Add Actionables Count + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Display count of items needing attention. + +**Implementation**: Included in T1 with mat-chip-set. 
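+
+The "chips clickable to filter" criterion below is not covered by the T1 template; a minimal sketch of one way to wire it, assuming a hypothetical `severityFilter` output added to `CaseHeaderComponent` (names are illustrative, not part of the T1 API above):
+
+```typescript
+// Hypothetical additions to case-header.component.ts. The parent findings
+// list subscribes to severityFilter and narrows its query accordingly.
+export type SeverityFilter = 'critical' | 'high' | 'actionable';
+
+@Component({ /* metadata as in T1 */ })
+export class CaseHeaderComponent {
+  /** Emitted when a severity chip is clicked; the parent applies the filter. */
+  @Output() severityFilter = new EventEmitter<SeverityFilter>();
+
+  onChipClick(filter: SeverityFilter): void {
+    this.severityFilter.emit(filter);
+  }
+}
+```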
+ +**Acceptance Criteria**: +- [ ] Critical count chip with red color +- [ ] High count chip with orange color +- [ ] "X items need attention" chip +- [ ] Chips clickable to filter list + +--- + +### T4: Add Signed Gate Link + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Link verdict to DSSE attestation viewer. + +**Implementation Path**: Add attestation dialog/drawer + +```typescript +// attestation-viewer.component.ts +@Component({ + selector: 'stella-attestation-viewer', + standalone: true, + imports: [CommonModule, MatDialogModule, MatButtonModule], + template: ` +

DSSE Attestation

+ +
+
+ + {{ data.attestationId }} +
+
+ + {{ data.subject }} +
+
+ + {{ data.predicateType }} +
+
+ + {{ data.signedBy }} +
+
+ + {{ data.timestamp | date:'medium' }} +
+
+ + View in Rekor +
+
+ +
{{ data.envelope | json }}
+
+
+
+    <mat-dialog-actions align="end">
+      <button mat-button (click)="copyEnvelope()">Copy Envelope</button>
+      <button mat-button mat-dialog-close>Close</button>
+    </mat-dialog-actions>
+  `
+})
+export class AttestationViewerComponent {
+  constructor(
+    @Inject(MAT_DIALOG_DATA) public data: AttestationData,
+    private clipboard: Clipboard
+  ) {}
+
+  copyEnvelope(): void {
+    this.clipboard.copy(JSON.stringify(this.data.envelope, null, 2));
+  }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] "Verified" badge next to verdict
+- [ ] Click opens attestation viewer dialog
+- [ ] Shows DSSE envelope details
+- [ ] Link to Rekor if available
+- [ ] Copy envelope button
+
+---
+
+### T5: Add Knowledge Snapshot Badge
+
+**Assignee**: UI Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Display knowledge snapshot ID with link to snapshot details.
+
+**Implementation**: Included in T1 with snapshot-section.
+
+**Additional Component** - Snapshot Viewer:
+```typescript
+// snapshot-viewer.component.ts
+@Component({
+  selector: 'stella-snapshot-viewer',
+  standalone: true,
+  template: `
+    <div class="snapshot-viewer">
+      <h3>Knowledge Snapshot</h3>
+
+      <div class="snapshot-id">
+        <code [matTooltip]="snapshot.snapshotId">{{ snapshot.snapshotId }}</code>
+      </div>
+
+      <h4>Sources</h4>
+      <div class="source-row" *ngFor="let source of snapshot.sources">
+        <mat-icon>{{ getSourceIcon(source.type) }}</mat-icon>
+        <span class="source-name">{{ source.name }}</span>
+        <span class="source-meta">{{ source.epoch }} • {{ source.digest | slice:0:16 }}</span>
+      </div>
+
+      <h4>Environment</h4>
+      <div class="environment">
+        <span>Platform: {{ snapshot.environment.platform }}</span>
+        <span>Engine: {{ snapshot.engine.version }}</span>
+      </div>
+
+      <div class="actions">
+        <button mat-stroked-button (click)="export.emit()">Export</button>
+        <button mat-stroked-button (click)="replay.emit()">Replay</button>
+      </div>
+    </div>
+  `
+})
+export class SnapshotViewerComponent {
+  @Input({ required: true }) snapshot!: KnowledgeSnapshot;
+  @Output() export = new EventEmitter<void>();
+  @Output() replay = new EventEmitter<void>();
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Snapshot ID badge: "ksm:abc123..."
+- [ ] Truncated display with full ID on hover
+- [ ] Click opens snapshot details panel
+- [ ] Shows sources included in snapshot
+- [ ] Export and replay buttons
+
+---
+
+### T6: Responsive Design
+
+**Assignee**: UI Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1, T2, T3, T4, T5
+
+**Description**:
+Ensure header works on mobile and tablet.
+
+**Implementation**: Included in T1 SCSS with media queries.
+
+**Additional Breakpoints**:
+```scss
+// Tablet
+@media (min-width: 769px) and (max-width: 1024px) {
+  .case-header {
+    .verdict-section {
+      flex: 0 0 auto;
+    }
+
+    .delta-section {
+      flex: 1;
+      text-align: center;
+    }
+
+    .actionables-section {
+      flex: 0 0 auto;
+    }
+
+    .snapshot-section {
+      width: 100%;
+      justify-content: flex-end;
+    }
+  }
+}
+
+// Mobile
+@media (max-width: 480px) {
+  .case-header {
+    padding: 12px 16px;
+    gap: 12px;
+  }
+
+  .verdict-chip {
+    width: 100%;
+    justify-content: center;
+    font-size: 1.1rem;
+    padding: 10px 20px;
+  }
+
+  .actionables-section mat-chip-set {
+    flex-wrap: wrap;
+    justify-content: center;
+  }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Stacks vertically on mobile (<768px)
+- [ ] Verdict centered on mobile
+- [ ] Chips wrap appropriately
+- [ ] Touch-friendly tap targets (min 44px)
+- [ ] No horizontal scroll
+
+---
+
+### T7: Tests
+
+**Assignee**: UI Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1-T6
+
+**Description**:
+Component tests with mocks.
+
+**Implementation Path**: `components/case-header/case-header.component.spec.ts`
+
+**Test Cases**:
+```typescript
+describe('CaseHeaderComponent', () => {
+  let component: CaseHeaderComponent;
+  let fixture: ComponentFixture<CaseHeaderComponent>;
+
+  beforeEach(async () => {
+    await TestBed.configureTestingModule({
+      imports: [CaseHeaderComponent, NoopAnimationsModule]
+    }).compileComponents();
+
+    fixture = TestBed.createComponent(CaseHeaderComponent);
+    component = fixture.componentInstance;
+  });
+
+  it('should create', () => {
+    component.data = createMockData('ship');
+    fixture.detectChanges();
+    expect(component).toBeTruthy();
+  });
+
+  describe('Verdict Display', () => {
+    it('should show CAN SHIP for ship verdict', () => {
+      component.data = createMockData('ship');
+      fixture.detectChanges();
+
+      const label = fixture.nativeElement.querySelector('.verdict-label');
+      expect(label.textContent).toContain('CAN SHIP');
+    });
+
+    it('should show BLOCKED for block verdict', () => {
+      component.data = createMockData('block');
+      fixture.detectChanges();
+
+      const label = fixture.nativeElement.querySelector('.verdict-label');
+      expect(label.textContent).toContain('BLOCKED');
+    });
+
+    it('should show EXCEPTION for exception verdict', () => {
+      component.data = createMockData('exception');
+      fixture.detectChanges();
+
+      const label = fixture.nativeElement.querySelector('.verdict-label');
+      expect(label.textContent).toContain('EXCEPTION');
+    });
+
+    it('should apply correct CSS class for verdict', () => {
+      component.data = createMockData('block');
+      fixture.detectChanges();
+
+      const chip = fixture.nativeElement.querySelector('.verdict-chip');
+      expect(chip.classList).toContain('verdict-block');
+    });
+  });
+
+  describe('Delta Display', () => {
+    it('should show delta when present', () => {
+      component.data = {
...createMockData('block'), + deltaFromBaseline: { + newBlockers: 3, + resolvedBlockers: 1, + newFindings: 5, + resolvedFindings: 2, + baselineName: 'v1.2.0' + } + }; + fixture.detectChanges(); + + const delta = fixture.nativeElement.querySelector('.delta-section'); + expect(delta.textContent).toContain('+3 blockers'); + expect(delta.textContent).toContain('v1.2.0'); + }); + + it('should highlight new blockers', () => { + component.data = { + ...createMockData('block'), + deltaFromBaseline: { + newBlockers: 3, + resolvedBlockers: 0, + newFindings: 0, + resolvedFindings: 0, + baselineName: 'main' + } + }; + fixture.detectChanges(); + + const delta = fixture.nativeElement.querySelector('.delta-section span'); + expect(delta.classList).toContain('has-blockers'); + }); + }); + + describe('Attestation Badge', () => { + it('should show signed badge when attestation present', () => { + component.data = { + ...createMockData('ship'), + attestationId: 'att-123' + }; + fixture.detectChanges(); + + const badge = fixture.nativeElement.querySelector('.signed-badge'); + expect(badge).toBeTruthy(); + }); + + it('should emit attestationClick on badge click', () => { + component.data = { + ...createMockData('ship'), + attestationId: 'att-123' + }; + fixture.detectChanges(); + + spyOn(component.attestationClick, 'emit'); + const badge = fixture.nativeElement.querySelector('.signed-badge'); + badge.click(); + + expect(component.attestationClick.emit).toHaveBeenCalledWith('att-123'); + }); + }); + + describe('Snapshot Badge', () => { + it('should show truncated snapshot ID', () => { + component.data = { + ...createMockData('ship'), + snapshotId: 'ksm:sha256:abcdef1234567890' + }; + fixture.detectChanges(); + + const badge = fixture.nativeElement.querySelector('.snapshot-badge'); + expect(badge.textContent).toContain('ksm:abcdef12'); + }); + }); + + function createMockData(verdict: Verdict): CaseHeaderData { + return { + verdict, + findingCount: 10, + criticalCount: 2, + highCount: 5, + actionableCount: 7, + evaluatedAt: new Date() + }; + } +}); +``` + +**Acceptance Criteria**: +- [ ] Test for each verdict state +- [ ] Test for delta display +- [ ] Test for attestation badge +- [ ] Test for snapshot badge +- [ ] Test event emissions +- [ ] All tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | UI Team | Create case-header.component.ts | +| 2 | T2 | TODO | T1 | UI Team | Add risk delta display | +| 3 | T3 | TODO | T1 | UI Team | Add actionables count | +| 4 | T4 | TODO | T1 | UI Team | Add signed gate link | +| 5 | T5 | TODO | T1 | UI Team | Add knowledge snapshot badge | +| 6 | T6 | TODO | T1-T5 | UI Team | Responsive design | +| 7 | T7 | TODO | T1-T6 | UI Team | Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. "Can I Ship?" header identified as core UX pattern. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Standalone component | Decision | UI Team | Use Angular 17 standalone components | +| Material Design | Decision | UI Team | Use Angular Material for consistency | +| Verdict colors | Decision | UI Team | Ship=success, Block=error, Exception=warning | +| Snapshot truncation | Decision | UI Team | Show first 8 chars of hash | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Verdict visible without scrolling +- [ ] Delta from baseline shown +- [ ] Clicking verdict chip shows attestation +- [ ] Snapshot ID visible with link +- [ ] Responsive on mobile/tablet +- [ ] All component tests pass +- [ ] `ng build` succeeds +- [ ] `ng test` succeeds diff --git a/docs/implplan/SPRINT_4200_0002_0002_verdict_ladder.md b/docs/implplan/SPRINT_4200_0002_0002_verdict_ladder.md new file mode 100644 index 000000000..761e1c3ee --- /dev/null +++ b/docs/implplan/SPRINT_4200_0002_0002_verdict_ladder.md @@ -0,0 +1,979 @@ +# Sprint 4200.0002.0002 · Verdict Ladder UI + +## Topic & Scope + +- Create vertical timeline visualization showing 8 steps from detection to verdict +- Enable click-to-expand evidence at each step +- Show the complete audit trail for how a finding became a verdict + +**Working directory:** `src/Web/StellaOps.Web/src/app/features/triage/components/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4200.0001.0001 (Triage REST API) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4200.0002.0001 ("Can I Ship?" Header), Sprint 4200.0002.0003 (Delta/Compare View) + +## Documentation Prerequisites + +- `src/Web/StellaOps.Web/AGENTS.md` +- `docs/product-advisories/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md` +- `docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md` + +--- + +## The 8-Step Verdict Ladder + +``` +Step 1: Detection → CVE source, SBOM match +Step 2: Component ID → PURL, version, location +Step 3: Applicability → OVAL/version range match +Step 4: Reachability → Static analysis path +Step 5: Runtime → Process trace, signal +Step 6: VEX Merge → Lattice outcome with trust weights +Step 7: Policy Trace → Rule → verdict mapping +Step 8: Attestation → Signature, transparency log +``` + +--- + +## Tasks + +### T1: Create verdict-ladder.component.ts + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the main vertical timeline component. 
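+
+For reference, a hypothetical example of the input the component consumes (two steps shown; field values are illustrative, and the interfaces are defined in the T1 implementation below):
+
+```typescript
+// Illustrative input only - a real case carries all eight steps.
+const example: VerdictLadderData = {
+  findingId: 'finding-123',
+  finalVerdict: 'block',
+  steps: [
+    {
+      step: 1,
+      name: 'Detection',
+      status: 'complete',
+      summary: 'CVE-2025-0001 from 2 source(s)',
+      expandable: true,
+      evidence: [{ type: 'sbom_slice', title: 'SBOM Match', signed: true }]
+    },
+    {
+      step: 5,
+      name: 'Runtime',
+      status: 'na',
+      summary: 'No runtime observation',
+      expandable: false
+    }
+  ]
+};
+```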
+ +**Implementation Path**: `verdict-ladder/verdict-ladder.component.ts` (new file) + +**Implementation**: +```typescript +import { Component, Input, ChangeDetectionStrategy } from '@angular/core'; +import { CommonModule } from '@angular/common'; +import { MatExpansionModule } from '@angular/material/expansion'; +import { MatIconModule } from '@angular/material/icon'; +import { MatChipsModule } from '@angular/material/chips'; +import { MatButtonModule } from '@angular/material/button'; +import { MatTooltipModule } from '@angular/material/tooltip'; + +export interface VerdictLadderStep { + step: number; + name: string; + status: 'complete' | 'partial' | 'missing' | 'na'; + summary: string; + evidence?: EvidenceItem[]; + expandable: boolean; +} + +export interface EvidenceItem { + type: string; + title: string; + source?: string; + hash?: string; + signed?: boolean; + signedBy?: string; + uri?: string; + preview?: string; +} + +export interface VerdictLadderData { + findingId: string; + steps: VerdictLadderStep[]; + finalVerdict: 'ship' | 'block' | 'exception'; +} + +@Component({ + selector: 'stella-verdict-ladder', + standalone: true, + imports: [ + CommonModule, + MatExpansionModule, + MatIconModule, + MatChipsModule, + MatButtonModule, + MatTooltipModule + ], + templateUrl: './verdict-ladder.component.html', + styleUrls: ['./verdict-ladder.component.scss'], + changeDetection: ChangeDetectionStrategy.OnPush +}) +export class VerdictLadderComponent { + @Input({ required: true }) data!: VerdictLadderData; + + getStepIcon(step: VerdictLadderStep): string { + switch (step.status) { + case 'complete': return 'check_circle'; + case 'partial': return 'radio_button_checked'; + case 'missing': return 'error'; + case 'na': return 'remove_circle_outline'; + } + } + + getStepClass(step: VerdictLadderStep): string { + return `step-${step.status}`; + } + + getStepLabel(stepNumber: number): string { + switch (stepNumber) { + case 1: return 'Detection'; + case 2: return 'Component'; + case 3: return 'Applicability'; + case 4: return 'Reachability'; + case 5: return 'Runtime'; + case 6: return 'VEX Merge'; + case 7: return 'Policy'; + case 8: return 'Attestation'; + default: return `Step ${stepNumber}`; + } + } + + getEvidenceIcon(type: string): string { + switch (type) { + case 'sbom_slice': return 'inventory_2'; + case 'vex_doc': return 'description'; + case 'provenance': return 'verified'; + case 'callstack_slice': return 'account_tree'; + case 'reachability_proof': return 'route'; + case 'replay_manifest': return 'replay'; + case 'policy': return 'policy'; + case 'scan_log': return 'article'; + default: return 'attachment'; + } + } + + trackByStep(index: number, step: VerdictLadderStep): number { + return step.step; + } + + trackByEvidence(index: number, evidence: EvidenceItem): string { + return evidence.hash ?? evidence.title; + } +} +``` + +**Template** (`verdict-ladder.component.html`): +```html +
+<div class="verdict-ladder">
+  <div class="ladder-header">
+    <h3>Verdict Trail</h3>
+    <mat-chip [ngClass]="'verdict-' + data.finalVerdict">
+      {{ data.finalVerdict | uppercase }}
+    </mat-chip>
+  </div>
+
+  <div class="ladder-timeline">
+    <mat-accordion multi>
+      <mat-expansion-panel
+        *ngFor="let step of data.steps; trackBy: trackByStep"
+        [ngClass]="getStepClass(step)"
+        [disabled]="!step.expandable">
+        <mat-expansion-panel-header>
+          <mat-panel-title>
+            <div class="step-header">
+              <div class="step-number">{{ step.step }}</div>
+              <mat-icon class="status-icon" [ngClass]="getStepClass(step)">
+                {{ getStepIcon(step) }}
+              </mat-icon>
+              <span class="step-name">{{ step.name }}</span>
+            </div>
+          </mat-panel-title>
+          <mat-panel-description>
+            {{ step.summary }}
+          </mat-panel-description>
+        </mat-expansion-panel-header>
+
+        <div class="step-content">
+          <div class="evidence-list" *ngIf="step.evidence?.length; else noEvidence">
+            <div class="evidence-item"
+                 *ngFor="let ev of step.evidence; trackBy: trackByEvidence">
+              <div class="evidence-header">
+                <mat-icon>{{ getEvidenceIcon(ev.type) }}</mat-icon>
+                <span class="evidence-title">{{ ev.title }}</span>
+                <mat-icon *ngIf="ev.signed"
+                          class="signed-icon"
+                          [matTooltip]="'Signed by ' + (ev.signedBy ?? 'unknown')">
+                  verified
+                </mat-icon>
+              </div>
+
+              <div class="evidence-details">
+                <span *ngIf="ev.source">Source: {{ ev.source }}</span>
+                <span *ngIf="ev.hash" class="evidence-hash">
+                  {{ ev.hash | slice:0:16 }}...
+                </span>
+              </div>
+
+              <div class="evidence-preview" *ngIf="ev.preview">
+                <pre>{{ ev.preview }}</pre>
+              </div>
+
+              <div class="evidence-actions">
+                <button mat-button *ngIf="ev.uri">Open</button>
+                <button mat-button *ngIf="ev.hash">Copy hash</button>
+              </div>
+            </div>
+          </div>
+
+          <ng-template #noEvidence>
+            <p class="no-evidence">No evidence artifacts attached to this step.</p>
+          </ng-template>
+        </div>
+      </mat-expansion-panel>
+    </mat-accordion>
+
+    <div class="timeline-connector"></div>
+  </div>
+</div>
+``` + +**Styles** (`verdict-ladder.component.scss`): +```scss +.verdict-ladder { + position: relative; + padding: 16px; + background: var(--surface); + border-radius: 8px; +} + +.ladder-header { + display: flex; + justify-content: space-between; + align-items: center; + margin-bottom: 24px; + + h3 { + margin: 0; + font-size: 1.125rem; + font-weight: 500; + } + + .verdict-ship { background-color: var(--success); color: white; } + .verdict-block { background-color: var(--error); color: white; } + .verdict-exception { background-color: var(--warning); color: black; } +} + +.ladder-timeline { + position: relative; + z-index: 1; + + mat-expansion-panel { + margin-bottom: 8px; + border-left: 3px solid var(--outline); + + &.step-complete { + border-left-color: var(--success); + } + + &.step-partial { + border-left-color: var(--warning); + } + + &.step-missing { + border-left-color: var(--error); + } + + &.step-na { + border-left-color: var(--outline-variant); + opacity: 0.7; + } + } +} + +.step-header { + display: flex; + align-items: center; + gap: 12px; + + .step-number { + width: 24px; + height: 24px; + border-radius: 50%; + background: var(--primary-container); + color: var(--on-primary-container); + display: flex; + align-items: center; + justify-content: center; + font-size: 0.75rem; + font-weight: 600; + } + + .status-icon { + font-size: 20px; + width: 20px; + height: 20px; + + &.step-complete { color: var(--success); } + &.step-partial { color: var(--warning); } + &.step-missing { color: var(--error); } + &.step-na { color: var(--outline); } + } + + .step-name { + font-weight: 500; + } +} + +.step-content { + padding: 16px 0; +} + +.evidence-list { + display: flex; + flex-direction: column; + gap: 16px; +} + +.evidence-item { + padding: 12px; + background: var(--surface-variant); + border-radius: 8px; + + .evidence-header { + display: flex; + align-items: center; + gap: 8px; + margin-bottom: 8px; + + mat-icon { + color: var(--primary); + } + + .evidence-title { + flex: 1; + font-weight: 500; + } + + .signed-icon { + color: var(--success); + } + } + + .evidence-details { + display: flex; + gap: 16px; + font-size: 0.875rem; + color: var(--on-surface-variant); + margin-bottom: 8px; + + .evidence-hash { + font-family: monospace; + background: var(--surface); + padding: 2px 6px; + border-radius: 4px; + } + } + + .evidence-preview { + pre { + background: var(--surface); + padding: 12px; + border-radius: 4px; + overflow-x: auto; + font-size: 0.75rem; + max-height: 200px; + } + } + + .evidence-actions { + display: flex; + gap: 8px; + margin-top: 12px; + } +} + +.no-evidence { + color: var(--on-surface-variant); + font-style: italic; +} + +// Timeline connector +.timeline-connector { + position: absolute; + left: 36px; + top: 80px; + bottom: 20px; + width: 2px; + background: linear-gradient( + to bottom, + var(--success) 0%, + var(--warning) 50%, + var(--error) 100% + ); + z-index: 0; + opacity: 0.3; +} +``` + +**Acceptance Criteria**: +- [ ] `verdict-ladder.component.ts` file created +- [ ] Vertical timeline with 8 steps +- [ ] Accordion expansion for each step +- [ ] Status icons (complete/partial/missing/na) +- [ ] Color-coded border by status + +--- + +### T2: Step 1 - Detection Sources + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement detection step showing CVE sources and SBOM match. 
+ +**Implementation** - Detection Step Data: +```typescript +// detection-step.service.ts +export interface DetectionEvidence { + cveId: string; + sources: { + name: string; + publishedAt: Date; + url?: string; + }[]; + sbomMatch: { + purl: string; + matchedVersion: string; + location: string; + sbomDigest: string; + }; +} + +export function buildDetectionStep(evidence: DetectionEvidence): VerdictLadderStep { + return { + step: 1, + name: 'Detection', + status: evidence.sources.length > 0 ? 'complete' : 'missing', + summary: `${evidence.cveId} from ${evidence.sources.length} source(s)`, + expandable: true, + evidence: [ + { + type: 'scan_log', + title: `CVE Sources for ${evidence.cveId}`, + preview: evidence.sources.map(s => `${s.name}: ${s.publishedAt.toISOString()}`).join('\n') + }, + { + type: 'sbom_slice', + title: 'SBOM Match', + source: evidence.sbomMatch.purl, + hash: evidence.sbomMatch.sbomDigest, + preview: `Package: ${evidence.sbomMatch.purl}\nVersion: ${evidence.sbomMatch.matchedVersion}\nLocation: ${evidence.sbomMatch.location}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows CVE ID and source count +- [ ] Lists all CVE sources with timestamps +- [ ] Shows SBOM match details +- [ ] Links to CVE source URLs + +--- + +### T3: Step 2 - Component Identification + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show PURL, version, and location. + +**Implementation**: +```typescript +export interface ComponentEvidence { + purl: string; + version: string; + location: string; + ecosystem: string; + name: string; + namespace?: string; +} + +export function buildComponentStep(evidence: ComponentEvidence): VerdictLadderStep { + return { + step: 2, + name: 'Component', + status: 'complete', + summary: `${evidence.name}@${evidence.version}`, + expandable: true, + evidence: [ + { + type: 'sbom_slice', + title: 'Component Identity', + preview: `PURL: ${evidence.purl}\nEcosystem: ${evidence.ecosystem}\nName: ${evidence.name}\nVersion: ${evidence.version}\nLocation: ${evidence.location}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows component PURL +- [ ] Displays version +- [ ] Shows file location in container + +--- + +### T4: Step 3 - Applicability + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show OVAL or version range match. + +**Implementation**: +```typescript +export interface ApplicabilityEvidence { + matchType: 'oval' | 'version_range' | 'exact'; + ovalDefinition?: string; + versionRange?: string; + installedVersion: string; + result: 'applicable' | 'not_applicable' | 'unknown'; +} + +export function buildApplicabilityStep(evidence: ApplicabilityEvidence): VerdictLadderStep { + return { + step: 3, + name: 'Applicability', + status: evidence.result === 'applicable' ? 'complete' + : evidence.result === 'not_applicable' ? 'na' + : 'partial', + summary: evidence.result === 'applicable' + ? `Version ${evidence.installedVersion} is in affected range` + : evidence.result === 'not_applicable' + ? 'Version not in affected range' + : 'Could not determine applicability', + expandable: true, + evidence: [ + { + type: 'policy', + title: 'Applicability Check', + preview: evidence.matchType === 'oval' + ? 
`OVAL Definition: ${evidence.ovalDefinition}` + : `Version Range: ${evidence.versionRange}\nInstalled: ${evidence.installedVersion}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows version range match +- [ ] OVAL definition if used +- [ ] Clear applicable/not-applicable status + +--- + +### T5: Step 4 - Reachability Evidence + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show static analysis call path. + +**Implementation**: +```typescript +export interface ReachabilityEvidence { + result: 'reachable' | 'not_reachable' | 'unknown'; + analysisType: 'static' | 'dynamic' | 'both'; + callPath?: string[]; + confidence: number; + proofHash?: string; + proofSigned?: boolean; +} + +export function buildReachabilityStep(evidence: ReachabilityEvidence): VerdictLadderStep { + return { + step: 4, + name: 'Reachability', + status: evidence.result === 'reachable' ? 'complete' + : evidence.result === 'not_reachable' ? 'na' + : 'missing', + summary: evidence.result === 'reachable' + ? `Reachable (${(evidence.confidence * 100).toFixed(0)}% confidence)` + : evidence.result === 'not_reachable' + ? 'Not reachable from entry points' + : 'Reachability unknown', + expandable: evidence.callPath !== undefined, + evidence: evidence.callPath ? [ + { + type: 'reachability_proof', + title: 'Call Path', + hash: evidence.proofHash, + signed: evidence.proofSigned, + preview: evidence.callPath.map((fn, i) => `${' '.repeat(i)}→ ${fn}`).join('\n') + } + ] : undefined + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows reachability result +- [ ] Displays call path if reachable +- [ ] Shows confidence percentage +- [ ] Indicates if proof is signed + +--- + +### T6: Step 5 - Runtime Confirmation + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show process trace or runtime signal. + +**Implementation**: +```typescript +export interface RuntimeEvidence { + observed: boolean; + signalType?: 'process_trace' | 'memory_access' | 'network_call'; + timestamp?: Date; + processInfo?: { + pid: number; + name: string; + container: string; + }; + stackTrace?: string; +} + +export function buildRuntimeStep(evidence: RuntimeEvidence | null): VerdictLadderStep { + if (!evidence || !evidence.observed) { + return { + step: 5, + name: 'Runtime', + status: 'na', + summary: 'No runtime observation', + expandable: false + }; + } + + return { + step: 5, + name: 'Runtime', + status: 'complete', + summary: `Observed via ${evidence.signalType} at ${evidence.timestamp?.toISOString()}`, + expandable: true, + evidence: [ + { + type: 'scan_log', + title: 'Runtime Observation', + preview: evidence.stackTrace ?? `Process: ${evidence.processInfo?.name} (PID ${evidence.processInfo?.pid})\nContainer: ${evidence.processInfo?.container}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows runtime observation if present +- [ ] Process/container info displayed +- [ ] Stack trace if available +- [ ] N/A status if no runtime data + +--- + +### T7: Step 6 - VEX Merge + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show lattice merge outcome with trust weights. 
+ +**Implementation**: +```typescript +export interface VexMergeEvidence { + resultStatus: 'affected' | 'not_affected' | 'fixed' | 'under_investigation'; + inputStatements: { + source: string; + status: string; + trustWeight: number; + }[]; + hadConflicts: boolean; + winningSource?: string; + mergeTrace?: string; +} + +export function buildVexStep(evidence: VexMergeEvidence): VerdictLadderStep { + const statusLabel = evidence.resultStatus.replace('_', ' '); + + return { + step: 6, + name: 'VEX Merge', + status: evidence.resultStatus === 'not_affected' ? 'na' + : evidence.resultStatus === 'affected' ? 'complete' + : 'partial', + summary: evidence.hadConflicts + ? `${statusLabel} (resolved from ${evidence.inputStatements.length} sources)` + : statusLabel, + expandable: true, + evidence: [ + { + type: 'vex_doc', + title: 'VEX Merge Result', + source: evidence.winningSource, + preview: evidence.inputStatements.map(s => + `${s.source}: ${s.status} (trust: ${(s.trustWeight * 100).toFixed(0)}%)` + ).join('\n') + (evidence.mergeTrace ? `\n\n${evidence.mergeTrace}` : '') + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows merged VEX status +- [ ] Lists all input statements +- [ ] Shows trust weights +- [ ] Displays merge trace if conflicts + +--- + +### T8: Step 7 - Policy Trace + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show policy rule to verdict mapping. + +**Implementation**: +```typescript +export interface PolicyTraceEvidence { + policyId: string; + policyVersion: string; + matchedRules: { + ruleId: string; + ruleName: string; + effect: 'allow' | 'deny' | 'warn'; + condition: string; + }[]; + finalDecision: 'ship' | 'block' | 'exception'; + explanation: string; +} + +export function buildPolicyStep(evidence: PolicyTraceEvidence): VerdictLadderStep { + return { + step: 7, + name: 'Policy', + status: 'complete', + summary: `${evidence.matchedRules.length} rule(s) → ${evidence.finalDecision}`, + expandable: true, + evidence: [ + { + type: 'policy', + title: `Policy ${evidence.policyId} v${evidence.policyVersion}`, + preview: evidence.matchedRules.map(r => + `${r.ruleId}: ${r.ruleName}\n Effect: ${r.effect}\n Condition: ${r.condition}` + ).join('\n\n') + `\n\n${evidence.explanation}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows policy ID and version +- [ ] Lists matched rules +- [ ] Shows rule conditions +- [ ] Explains final decision + +--- + +### T9: Step 8 - Attestation + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show signature and transparency log entry. + +**Implementation**: +```typescript +export interface AttestationEvidence { + attestationId: string; + predicateType: string; + signedBy: string; + signedAt: Date; + signatureAlgorithm: string; + rekorEntry?: { + logId: string; + logIndex: number; + url: string; + }; + envelope?: object; +} + +export function buildAttestationStep(evidence: AttestationEvidence | null): VerdictLadderStep { + if (!evidence) { + return { + step: 8, + name: 'Attestation', + status: 'missing', + summary: 'Not attested', + expandable: false + }; + } + + return { + step: 8, + name: 'Attestation', + status: 'complete', + summary: `Signed by ${evidence.signedBy}${evidence.rekorEntry ? 
' (in Rekor)' : ''}`, + expandable: true, + evidence: [ + { + type: 'provenance', + title: 'DSSE Attestation', + signed: true, + signedBy: evidence.signedBy, + hash: evidence.attestationId, + preview: `Type: ${evidence.predicateType}\nSigned: ${evidence.signedAt.toISOString()}\nAlgorithm: ${evidence.signatureAlgorithm}${evidence.rekorEntry ? `\n\nRekor Log Index: ${evidence.rekorEntry.logIndex}` : ''}` + } + ] + }; +} +``` + +**Acceptance Criteria**: +- [ ] Shows signer identity +- [ ] Displays signature timestamp +- [ ] Links to Rekor entry if available +- [ ] Shows predicate type + +--- + +### T10: Expand/Collapse Steps + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Add expand all / collapse all controls. + +**Implementation** - Add to component: +```typescript +// Add to verdict-ladder.component.ts +@ViewChildren(MatExpansionPanel) panels!: QueryList; + +expandAll(): void { + this.panels.forEach(panel => { + if (!panel.disabled) { + panel.open(); + } + }); +} + +collapseAll(): void { + this.panels.forEach(panel => panel.close()); +} +``` + +**Add to template**: +```html +
+ + +
+``` + +**Acceptance Criteria**: +- [ ] Expand all button works +- [ ] Collapse all button works +- [ ] Disabled panels skipped + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | UI Team | Create verdict-ladder.component.ts | +| 2 | T2 | TODO | T1 | UI Team | Step 1: Detection sources | +| 3 | T3 | TODO | T1 | UI Team | Step 2: Component identification | +| 4 | T4 | TODO | T1 | UI Team | Step 3: Applicability | +| 5 | T5 | TODO | T1 | UI Team | Step 4: Reachability evidence | +| 6 | T6 | TODO | T1 | UI Team | Step 5: Runtime confirmation | +| 7 | T7 | TODO | T1 | UI Team | Step 6: VEX merge | +| 8 | T8 | TODO | T1 | UI Team | Step 7: Policy trace | +| 9 | T9 | TODO | T1 | UI Team | Step 8: Attestation | +| 10 | T10 | TODO | T1 | UI Team | Expand/collapse steps | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. Verdict Ladder identified as key explainability pattern. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| 8 steps | Decision | UI Team | Based on advisory: Detection→Attestation | +| Accordion UI | Decision | UI Team | Use Material expansion panels | +| Status colors | Decision | UI Team | complete=green, partial=yellow, missing=red, na=gray | +| Evidence types | Decision | UI Team | Map to existing TriageEvidenceType enum | + +--- + +## Success Criteria + +- [ ] All 10 tasks marked DONE +- [ ] All 8 steps visible in vertical ladder +- [ ] Each step shows evidence type and source +- [ ] Clicking step expands to show proof artifact +- [ ] Final attestation link at bottom +- [ ] Expand/collapse all works +- [ ] `ng build` succeeds +- [ ] `ng test` succeeds diff --git a/docs/implplan/SPRINT_4200_0002_0003_delta_compare_view.md b/docs/implplan/SPRINT_4200_0002_0003_delta_compare_view.md new file mode 100644 index 000000000..d1022518d --- /dev/null +++ b/docs/implplan/SPRINT_4200_0002_0003_delta_compare_view.md @@ -0,0 +1,799 @@ +# Sprint 4200.0002.0003 · Delta/Compare View UI + +## Topic & Scope + +- Create three-pane layout for comparing artifacts/verdicts +- Enable baseline selection (last green, previous release, custom) +- Show delta summary and categorized changes with evidence + +**Working directory:** `src/Web/StellaOps.Web/src/app/features/compare/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0002.0001 (Knowledge Snapshot Manifest) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4200.0002.0001 ("Can I Ship?" Header), Sprint 4200.0002.0004 (CLI Compare) + +## Documentation Prerequisites + +- `src/Web/StellaOps.Web/AGENTS.md` +- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md` +- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md` + +--- + +## Tasks + +### T1: Create compare-view.component.ts + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the main three-pane comparison layout. 
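+
+The component below injects a `CompareService` that this sprint does not define. A minimal sketch of the assumed contract, inferred from the calls in `loadTarget`, `loadDelta`, and `loadEvidence` (signatures are an assumption, not a finalized API):
+
+```typescript
+// compare.service.ts (assumed contract; shapes mirror the CompareTarget,
+// DeltaCategory, DeltaItem, and EvidencePane models defined below)
+export interface CompareService {
+  getTarget(id: string): Promise<CompareTarget>;
+  computeDelta(
+    currentId: string,
+    baselineId: string
+  ): Promise<{ categories: DeltaCategory[]; items: DeltaItem[] }>;
+  getItemEvidence(
+    itemId: string,
+    baselineId: string,
+    currentId: string
+  ): Promise<EvidencePane>;
+}
+```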
+ +**Implementation Path**: `compare-view/compare-view.component.ts` (new file) + +**Implementation**: +```typescript +import { Component, OnInit, ChangeDetectionStrategy, signal, computed } from '@angular/core'; +import { CommonModule } from '@angular/common'; +import { MatSelectModule } from '@angular/material/select'; +import { MatButtonModule } from '@angular/material/button'; +import { MatIconModule } from '@angular/material/icon'; +import { MatListModule } from '@angular/material/list'; +import { MatChipsModule } from '@angular/material/chips'; +import { MatSidenavModule } from '@angular/material/sidenav'; +import { MatToolbarModule } from '@angular/material/toolbar'; +import { ActivatedRoute } from '@angular/router'; + +export interface CompareTarget { + id: string; + type: 'artifact' | 'snapshot' | 'verdict'; + label: string; + digest?: string; + timestamp: Date; +} + +export interface DeltaCategory { + id: string; + name: string; + icon: string; + added: number; + removed: number; + changed: number; +} + +export interface DeltaItem { + id: string; + category: string; + changeType: 'added' | 'removed' | 'changed'; + title: string; + severity?: 'critical' | 'high' | 'medium' | 'low'; + beforeValue?: string; + afterValue?: string; +} + +export interface EvidencePane { + itemId: string; + title: string; + beforeEvidence?: object; + afterEvidence?: object; +} + +@Component({ + selector: 'stella-compare-view', + standalone: true, + imports: [ + CommonModule, + MatSelectModule, + MatButtonModule, + MatIconModule, + MatListModule, + MatChipsModule, + MatSidenavModule, + MatToolbarModule + ], + templateUrl: './compare-view.component.html', + styleUrls: ['./compare-view.component.scss'], + changeDetection: ChangeDetectionStrategy.OnPush +}) +export class CompareViewComponent implements OnInit { + // State + currentTarget = signal(null); + baselineTarget = signal(null); + categories = signal([]); + selectedCategory = signal(null); + items = signal([]); + selectedItem = signal(null); + evidence = signal(null); + viewMode = signal<'side-by-side' | 'unified'>('side-by-side'); + + // Computed + filteredItems = computed(() => { + const cat = this.selectedCategory(); + if (!cat) return this.items(); + return this.items().filter(i => i.category === cat); + }); + + deltaSummary = computed(() => { + const cats = this.categories(); + return { + totalAdded: cats.reduce((sum, c) => sum + c.added, 0), + totalRemoved: cats.reduce((sum, c) => sum + c.removed, 0), + totalChanged: cats.reduce((sum, c) => sum + c.changed, 0) + }; + }); + + // Baseline presets + baselinePresets = [ + { id: 'last-green', label: 'Last Green Build' }, + { id: 'previous-release', label: 'Previous Release' }, + { id: 'main-branch', label: 'Main Branch' }, + { id: 'custom', label: 'Custom...' 
} + ]; + + constructor( + private route: ActivatedRoute, + private compareService: CompareService + ) {} + + ngOnInit(): void { + // Load from route params + const currentId = this.route.snapshot.paramMap.get('current'); + const baselineId = this.route.snapshot.queryParamMap.get('baseline'); + + if (currentId) { + this.loadTarget(currentId, 'current'); + } + if (baselineId) { + this.loadTarget(baselineId, 'baseline'); + } + } + + async loadTarget(id: string, type: 'current' | 'baseline'): Promise { + const target = await this.compareService.getTarget(id); + if (type === 'current') { + this.currentTarget.set(target); + } else { + this.baselineTarget.set(target); + } + this.loadDelta(); + } + + async loadDelta(): Promise { + const current = this.currentTarget(); + const baseline = this.baselineTarget(); + if (!current || !baseline) return; + + const delta = await this.compareService.computeDelta(current.id, baseline.id); + this.categories.set(delta.categories); + this.items.set(delta.items); + } + + selectCategory(categoryId: string): void { + this.selectedCategory.set( + this.selectedCategory() === categoryId ? null : categoryId + ); + } + + selectItem(item: DeltaItem): void { + this.selectedItem.set(item); + this.loadEvidence(item); + } + + async loadEvidence(item: DeltaItem): Promise { + const current = this.currentTarget(); + const baseline = this.baselineTarget(); + if (!current || !baseline) return; + + const evidence = await this.compareService.getItemEvidence( + item.id, + baseline.id, + current.id + ); + this.evidence.set(evidence); + } + + toggleViewMode(): void { + this.viewMode.set( + this.viewMode() === 'side-by-side' ? 'unified' : 'side-by-side' + ); + } + + getChangeIcon(changeType: 'added' | 'removed' | 'changed'): string { + switch (changeType) { + case 'added': return 'add_circle'; + case 'removed': return 'remove_circle'; + case 'changed': return 'change_circle'; + } + } + + getChangeClass(changeType: 'added' | 'removed' | 'changed'): string { + return `change-${changeType}`; + } +} +``` + +**Template** (`compare-view.component.html`): +```html +
+ + +
+ Comparing: + {{ currentTarget()?.label }} + arrow_forward + + + {{ preset.label }} + + +
+ +
+ + +
+
+ + +
+
+ add + +{{ summary.totalAdded }} added +
+
+ remove + -{{ summary.totalRemoved }} removed +
+
+ swap_horiz + {{ summary.totalChanged }} changed +
+
+ + +
+ +
+

Categories

+ + + {{ cat.icon }} + {{ cat.name }} + + +{{ cat.added }} + -{{ cat.removed }} + ~{{ cat.changed }} + + + +
+ + +
+

Changes

+ + + + {{ getChangeIcon(item.changeType) }} + + {{ item.title }} + + {{ item.severity }} + + + + +
+ check_circle +

No changes in this category

+
+
+ + +
+

Evidence

+ +
+
+ {{ ev.title }} +
+ +
+ +
+
+
Baseline
+
{{ ev.beforeEvidence | json }}
+
+
+
Current
+
{{ ev.afterEvidence | json }}
+
+
+ + +
+
+              
+            
+
+
+
+ + +
+ touch_app +

Select an item to view evidence

+
+
+
+
+
+``` + +**Styles** (`compare-view.component.scss`): +```scss +.compare-view { + display: flex; + flex-direction: column; + height: 100%; +} + +.compare-toolbar { + display: flex; + justify-content: space-between; + padding: 8px 16px; + background: var(--surface-container); + + .target-selector { + display: flex; + align-items: center; + gap: 12px; + + .label { + color: var(--on-surface-variant); + } + + .target { + font-weight: 500; + padding: 4px 12px; + background: var(--primary-container); + border-radius: 16px; + } + } + + .toolbar-actions { + display: flex; + gap: 8px; + } +} + +.delta-summary { + display: flex; + gap: 16px; + padding: 12px 16px; + background: var(--surface); + border-bottom: 1px solid var(--outline-variant); + + .summary-chip { + display: flex; + align-items: center; + gap: 4px; + padding: 4px 12px; + border-radius: 16px; + font-weight: 500; + + &.added { + background: var(--success-container); + color: var(--on-success-container); + } + + &.removed { + background: var(--error-container); + color: var(--on-error-container); + } + + &.changed { + background: var(--warning-container); + color: var(--on-warning-container); + } + } +} + +.panes-container { + display: flex; + flex: 1; + overflow: hidden; +} + +.pane { + display: flex; + flex-direction: column; + border-right: 1px solid var(--outline-variant); + overflow-y: auto; + + h4 { + padding: 12px 16px; + margin: 0; + background: var(--surface-variant); + font-size: 0.875rem; + font-weight: 600; + text-transform: uppercase; + letter-spacing: 0.5px; + } + + &:last-child { + border-right: none; + } +} + +.categories-pane { + width: 220px; + flex-shrink: 0; + + .category-counts { + display: flex; + gap: 8px; + font-size: 0.75rem; + + .added { color: var(--success); } + .removed { color: var(--error); } + .changed { color: var(--warning); } + } +} + +.items-pane { + width: 320px; + flex-shrink: 0; + + .change-added { color: var(--success); } + .change-removed { color: var(--error); } + .change-changed { color: var(--warning); } + + .severity-critical { background: var(--error); color: white; } + .severity-high { background: var(--warning); color: black; } + .severity-medium { background: var(--tertiary); color: white; } + .severity-low { background: var(--outline); color: white; } +} + +.evidence-pane { + flex: 1; + + .evidence-content { + padding: 16px; + } + + .side-by-side { + display: grid; + grid-template-columns: 1fr 1fr; + gap: 16px; + + .before, .after { + h5 { + margin: 0 0 8px; + font-size: 0.875rem; + color: var(--on-surface-variant); + } + + pre { + background: var(--surface-variant); + padding: 12px; + border-radius: 8px; + overflow-x: auto; + font-size: 0.75rem; + } + } + + .before pre { + border-left: 3px solid var(--error); + } + + .after pre { + border-left: 3px solid var(--success); + } + } + + .unified { + .diff-view { + background: var(--surface-variant); + padding: 12px; + border-radius: 8px; + + .added { background: rgba(var(--success-rgb), 0.2); } + .removed { background: rgba(var(--error-rgb), 0.2); } + } + } +} + +.empty-state { + display: flex; + flex-direction: column; + align-items: center; + justify-content: center; + padding: 48px; + color: var(--on-surface-variant); + + mat-icon { + font-size: 48px; + width: 48px; + height: 48px; + margin-bottom: 16px; + } +} + +mat-list-item.selected { + background: var(--primary-container); +} +``` + +**Acceptance Criteria**: +- [ ] Three-pane layout implemented +- [ ] Responsive to screen size +- [ ] Categories, items, evidence panes work +- [ ] 
Selection highlighting works + +--- + +### T2: Baseline Selector + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement baseline selection with presets. + +**Implementation**: Included in T1 with `baselinePresets` and mat-select. + +**Acceptance Criteria**: +- [ ] "Last Green" preset +- [ ] "Previous Release" preset +- [ ] "Main Branch" preset +- [ ] Custom selection option + +--- + +### T3: Delta Summary Strip + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show added/removed/changed counts. + +**Implementation**: Included in T1 template with `.delta-summary`. + +**Acceptance Criteria**: +- [ ] Shows total added count +- [ ] Shows total removed count +- [ ] Shows total changed count +- [ ] Color coded chips + +--- + +### T4: Categories Pane + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Left pane showing change categories. + +**Implementation**: +```typescript +// Category definitions +const DELTA_CATEGORIES: DeltaCategory[] = [ + { id: 'sbom', name: 'SBOM Changes', icon: 'inventory_2', added: 0, removed: 0, changed: 0 }, + { id: 'reachability', name: 'Reachability', icon: 'route', added: 0, removed: 0, changed: 0 }, + { id: 'vex', name: 'VEX Status', icon: 'description', added: 0, removed: 0, changed: 0 }, + { id: 'policy', name: 'Policy', icon: 'policy', added: 0, removed: 0, changed: 0 }, + { id: 'findings', name: 'Findings', icon: 'bug_report', added: 0, removed: 0, changed: 0 }, + { id: 'unknowns', name: 'Unknowns', icon: 'help', added: 0, removed: 0, changed: 0 } +]; +``` + +**Acceptance Criteria**: +- [ ] SBOM, Reachability, VEX, Policy categories +- [ ] Counts per category +- [ ] Click to filter items +- [ ] Selection highlighting + +--- + +### T5: Items Pane + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1, T4 + +**Description**: +Middle pane showing list of changes. + +**Implementation**: Included in T1 template. + +**Acceptance Criteria**: +- [ ] List of changes filtered by category +- [ ] Add/remove/change icons +- [ ] Severity chips +- [ ] Click to select + +--- + +### T6: Proof Pane + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1, T5 + +**Description**: +Right pane showing evidence for selected item. + +**Implementation**: Included in T1 template with side-by-side and unified views. + +**Acceptance Criteria**: +- [ ] Shows before/after evidence +- [ ] Side-by-side view +- [ ] Unified diff view +- [ ] Empty state when no selection + +--- + +### T7: Before/After Toggle + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T6 + +**Description**: +Toggle between side-by-side and unified view. + +**Implementation**: Included in T1 with `viewMode` signal. + +**Acceptance Criteria**: +- [ ] Toggle button in toolbar +- [ ] Side-by-side shows two columns +- [ ] Unified shows inline diff +- [ ] State preserved during navigation + +--- + +### T8: Export Delta Report + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Export comparison as JSON or PDF. 
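+
+A hypothetical toolbar handler showing how the compare view might invoke the service defined below; the `exporter` field and the `onExportJson` name are illustrative assumptions:
+
+```typescript
+// Sketch for compare-view.component.ts; assumes the CompareExportService
+// below is injected as `private exporter`.
+async onExportJson(): Promise<void> {
+  const current = this.currentTarget();
+  const baseline = this.baselineTarget();
+  if (!current || !baseline) return; // nothing to export yet
+  await this.exporter.exportJson(current, baseline, this.categories(), this.items());
+}
+```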
+ +**Implementation Path**: Add export service + +```typescript +// compare-export.service.ts +@Injectable({ providedIn: 'root' }) +export class CompareExportService { + async exportJson( + current: CompareTarget, + baseline: CompareTarget, + categories: DeltaCategory[], + items: DeltaItem[] + ): Promise { + const report = { + exportedAt: new Date().toISOString(), + comparison: { + current: { id: current.id, label: current.label, digest: current.digest }, + baseline: { id: baseline.id, label: baseline.label, digest: baseline.digest } + }, + summary: { + added: categories.reduce((sum, c) => sum + c.added, 0), + removed: categories.reduce((sum, c) => sum + c.removed, 0), + changed: categories.reduce((sum, c) => sum + c.changed, 0) + }, + categories, + items + }; + + const blob = new Blob([JSON.stringify(report, null, 2)], { type: 'application/json' }); + const url = URL.createObjectURL(blob); + const a = document.createElement('a'); + a.href = url; + a.download = `delta-report-${current.id}-vs-${baseline.id}.json`; + a.click(); + URL.revokeObjectURL(url); + } + + async exportPdf( + current: CompareTarget, + baseline: CompareTarget, + categories: DeltaCategory[], + items: DeltaItem[] + ): Promise { + // PDF generation using jsPDF or server-side + // Implementation depends on PDF library choice + } +} +``` + +**Acceptance Criteria**: +- [ ] Export button in toolbar +- [ ] JSON export works +- [ ] PDF export works +- [ ] Filename includes comparison IDs + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | UI Team | Create compare-view.component.ts | +| 2 | T2 | TODO | T1 | UI Team | Baseline selector | +| 3 | T3 | TODO | T1 | UI Team | Delta summary strip | +| 4 | T4 | TODO | T1 | UI Team | Categories pane | +| 5 | T5 | TODO | T1, T4 | UI Team | Items pane | +| 6 | T6 | TODO | T1, T5 | UI Team | Proof pane | +| 7 | T7 | TODO | T6 | UI Team | Before/After toggle | +| 8 | T8 | TODO | T1 | UI Team | Export delta report | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. Smart-Diff UI identified as key comparison feature. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Three-pane layout | Decision | UI Team | Categories → Items → Evidence | +| Baseline presets | Decision | UI Team | Last green, previous release, main, custom | +| View modes | Decision | UI Team | Side-by-side and unified diff | +| Categories | Decision | UI Team | SBOM, Reachability, VEX, Policy, Findings, Unknowns | + +--- + +## Success Criteria + +- [ ] All 8 tasks marked DONE +- [ ] Baseline can be selected +- [ ] Delta summary shows counts +- [ ] Three-pane layout works +- [ ] Evidence accessible for each change +- [ ] Export works (JSON/PDF) +- [ ] `ng build` succeeds +- [ ] `ng test` succeeds diff --git a/docs/implplan/SPRINT_4200_0002_0004_cli_compare.md b/docs/implplan/SPRINT_4200_0002_0004_cli_compare.md new file mode 100644 index 000000000..fd1ebf8fd --- /dev/null +++ b/docs/implplan/SPRINT_4200_0002_0004_cli_compare.md @@ -0,0 +1,930 @@ +# Sprint 4200.0002.0004 · CLI `stella compare` Command + +## Topic & Scope + +- Implement CLI commands for comparing artifacts, snapshots, and verdicts +- Support multiple output formats (table, JSON, SARIF) +- Enable baseline options for CI/CD integration + +**Working directory:** `src/Cli/StellaOps.Cli/Commands/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4100.0002.0001 (Knowledge Snapshot Manifest) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4200.0002.0003 (Delta/Compare View UI) + +## Documentation Prerequisites + +- `src/Cli/StellaOps.Cli/AGENTS.md` +- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md` +- Existing CLI patterns in `src/Cli/StellaOps.Cli/Commands/` + +--- + +## Tasks + +### T1: Create CompareCommandGroup.cs + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the parent command group for `stella compare`. + +**Implementation Path**: `Commands/Compare/CompareCommandGroup.cs` (new file) + +**Implementation**: +```csharp +using System.CommandLine; + +namespace StellaOps.Cli.Commands.Compare; + +/// +/// Parent command group for comparison operations. +/// +public sealed class CompareCommandGroup : Command +{ + public CompareCommandGroup() : base("compare", "Compare artifacts, snapshots, or verdicts") + { + AddCommand(new CompareArtifactsCommand()); + AddCommand(new CompareSnapshotsCommand()); + AddCommand(new CompareVerdictsCommand()); + } +} +``` + +**Acceptance Criteria**: +- [ ] `CompareCommandGroup.cs` file created +- [ ] Parent command `stella compare` works +- [ ] Help text displayed for subcommands +- [ ] Registered in root command + +--- + +### T2: Add `compare artifacts` Command + +**Assignee**: CLI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Compare two container image digests. + +**Implementation Path**: `Commands/Compare/CompareArtifactsCommand.cs` (new file) + +**Implementation**: +```csharp +using System.CommandLine; +using System.CommandLine.Invocation; + +namespace StellaOps.Cli.Commands.Compare; + +/// +/// Compares two container artifacts by digest. 
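+/// Sets the process exit code to 1 when the delta contains blocking changes.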
+/// +public sealed class CompareArtifactsCommand : Command +{ + public CompareArtifactsCommand() : base("artifacts", "Compare two container image artifacts") + { + var currentArg = new Argument("current", "Current artifact reference (image@sha256:...)"); + var baselineArg = new Argument("baseline", "Baseline artifact reference"); + + var formatOption = new Option( + ["--format", "-f"], + () => OutputFormat.Table, + "Output format (table, json, sarif)"); + + var outputOption = new Option( + ["--output", "-o"], + "Output file path (stdout if not specified)"); + + var categoriesOption = new Option( + ["--categories", "-c"], + () => Array.Empty(), + "Filter to specific categories (sbom, vex, reachability, policy)"); + + var severityOption = new Option( + "--min-severity", + "Minimum severity to include (critical, high, medium, low)"); + + AddArgument(currentArg); + AddArgument(baselineArg); + AddOption(formatOption); + AddOption(outputOption); + AddOption(categoriesOption); + AddOption(severityOption); + + this.SetHandler(ExecuteAsync, + currentArg, baselineArg, formatOption, outputOption, categoriesOption, severityOption); + } + + private async Task ExecuteAsync( + string current, + string baseline, + OutputFormat format, + FileInfo? output, + string[] categories, + string? minSeverity) + { + var console = AnsiConsole.Create(new AnsiConsoleSettings()); + + console.MarkupLine($"[blue]Comparing artifacts...[/]"); + console.MarkupLine($" Current: [green]{current}[/]"); + console.MarkupLine($" Baseline: [yellow]{baseline}[/]"); + + // Parse artifact references + var currentRef = ArtifactReference.Parse(current); + var baselineRef = ArtifactReference.Parse(baseline); + + // Compute delta + var comparer = new ArtifactComparer(_scannerClient, _snapshotService); + var delta = await comparer.CompareAsync(currentRef, baselineRef); + + // Apply filters + if (categories.Length > 0) + { + delta = delta.FilterByCategories(categories); + } + if (!string.IsNullOrEmpty(minSeverity)) + { + delta = delta.FilterBySeverity(Enum.Parse(minSeverity, ignoreCase: true)); + } + + // Format output + var formatter = GetFormatter(format); + var result = formatter.Format(delta); + + // Write output + if (output is not null) + { + await File.WriteAllTextAsync(output.FullName, result); + console.MarkupLine($"[green]Output written to {output.FullName}[/]"); + } + else + { + console.WriteLine(result); + } + + // Exit code based on delta + if (delta.HasBlockingChanges) + { + Environment.ExitCode = 1; + } + } +} + +public enum OutputFormat +{ + Table, + Json, + Sarif +} +``` + +**Acceptance Criteria**: +- [ ] `stella compare artifacts img1@sha256:a img2@sha256:b` works +- [ ] Table output by default +- [ ] JSON output with `--format json` +- [ ] SARIF output with `--format sarif` +- [ ] Category filtering works +- [ ] Severity filtering works +- [ ] Exit code 1 if blocking changes + +--- + +### T3: Add `compare snapshots` Command + +**Assignee**: CLI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Compare two knowledge snapshots. + +**Implementation Path**: `Commands/Compare/CompareSnapshotsCommand.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Cli.Commands.Compare; + +/// +/// Compares two knowledge snapshots. 
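+/// Snapshot IDs are expected in the ksm:sha256:... format.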
+/// +public sealed class CompareSnapshotsCommand : Command +{ + public CompareSnapshotsCommand() : base("snapshots", "Compare two knowledge snapshots") + { + var currentArg = new Argument("current", "Current snapshot ID (ksm:sha256:...)"); + var baselineArg = new Argument("baseline", "Baseline snapshot ID"); + + var formatOption = new Option( + ["--format", "-f"], + () => OutputFormat.Table, + "Output format"); + + var outputOption = new Option( + ["--output", "-o"], + "Output file path"); + + var showSourcesOption = new Option( + "--show-sources", + () => false, + "Show detailed source changes"); + + AddArgument(currentArg); + AddArgument(baselineArg); + AddOption(formatOption); + AddOption(outputOption); + AddOption(showSourcesOption); + + this.SetHandler(ExecuteAsync, + currentArg, baselineArg, formatOption, outputOption, showSourcesOption); + } + + private async Task ExecuteAsync( + string current, + string baseline, + OutputFormat format, + FileInfo? output, + bool showSources) + { + var console = AnsiConsole.Create(new AnsiConsoleSettings()); + + // Validate snapshot IDs + if (!current.StartsWith("ksm:")) + { + console.MarkupLine("[red]Error: Current must be a snapshot ID (ksm:sha256:...)[/]"); + Environment.ExitCode = 1; + return; + } + + console.MarkupLine($"[blue]Comparing snapshots...[/]"); + console.MarkupLine($" Current: [green]{current}[/]"); + console.MarkupLine($" Baseline: [yellow]{baseline}[/]"); + + // Load snapshots + var currentSnapshot = await _snapshotService.GetSnapshotAsync(current); + var baselineSnapshot = await _snapshotService.GetSnapshotAsync(baseline); + + if (currentSnapshot is null || baselineSnapshot is null) + { + console.MarkupLine("[red]Error: One or both snapshots not found[/]"); + Environment.ExitCode = 1; + return; + } + + // Compute delta + var delta = ComputeSnapshotDelta(currentSnapshot, baselineSnapshot); + + // Format output + if (format == OutputFormat.Table) + { + RenderSnapshotDeltaTable(console, delta, showSources); + } + else + { + var formatter = GetFormatter(format); + var result = formatter.Format(delta); + + if (output is not null) + { + await File.WriteAllTextAsync(output.FullName, result); + } + else + { + console.WriteLine(result); + } + } + } + + private static void RenderSnapshotDeltaTable( + IAnsiConsole console, + SnapshotDelta delta, + bool showSources) + { + var table = new Table(); + table.AddColumn("Category"); + table.AddColumn("Added"); + table.AddColumn("Removed"); + table.AddColumn("Changed"); + + table.AddRow("Advisory Feeds", + delta.AddedFeeds.Count.ToString(), + delta.RemovedFeeds.Count.ToString(), + delta.ChangedFeeds.Count.ToString()); + + table.AddRow("VEX Documents", + delta.AddedVex.Count.ToString(), + delta.RemovedVex.Count.ToString(), + delta.ChangedVex.Count.ToString()); + + table.AddRow("Policy Rules", + delta.AddedPolicies.Count.ToString(), + delta.RemovedPolicies.Count.ToString(), + delta.ChangedPolicies.Count.ToString()); + + table.AddRow("Trust Roots", + delta.AddedTrust.Count.ToString(), + delta.RemovedTrust.Count.ToString(), + delta.ChangedTrust.Count.ToString()); + + console.Write(table); + + if (showSources) + { + console.WriteLine(); + console.MarkupLine("[bold]Source Details:[/]"); + + foreach (var source in delta.AllChangedSources) + { + console.MarkupLine($" {source.ChangeType}: {source.Name} ({source.Type})"); + console.MarkupLine($" Before: {source.BeforeDigest ?? "N/A"}"); + console.MarkupLine($" After: {source.AfterDigest ?? 
"N/A"}"); + } + } + } +} +``` + +**Acceptance Criteria**: +- [ ] `stella compare snapshots ksm:abc ksm:def` works +- [ ] Shows delta by source type +- [ ] `--show-sources` shows detailed changes +- [ ] JSON/SARIF output works +- [ ] Validates snapshot ID format + +--- + +### T4: Add `compare verdicts` Command + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Compare two verdict IDs. + +**Implementation Path**: `Commands/Compare/CompareVerdictsCommand.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Cli.Commands.Compare; + +/// +/// Compares two verdicts. +/// +public sealed class CompareVerdictsCommand : Command +{ + public CompareVerdictsCommand() : base("verdicts", "Compare two verdicts") + { + var currentArg = new Argument("current", "Current verdict ID"); + var baselineArg = new Argument("baseline", "Baseline verdict ID"); + + var formatOption = new Option( + ["--format", "-f"], + () => OutputFormat.Table, + "Output format"); + + var showFindingsOption = new Option( + "--show-findings", + () => false, + "Show individual finding changes"); + + AddArgument(currentArg); + AddArgument(baselineArg); + AddOption(formatOption); + AddOption(showFindingsOption); + + this.SetHandler(ExecuteAsync, + currentArg, baselineArg, formatOption, showFindingsOption); + } + + private async Task ExecuteAsync( + string current, + string baseline, + OutputFormat format, + bool showFindings) + { + var console = AnsiConsole.Create(new AnsiConsoleSettings()); + + console.MarkupLine($"[blue]Comparing verdicts...[/]"); + + var currentVerdict = await _verdictService.GetVerdictAsync(current); + var baselineVerdict = await _verdictService.GetVerdictAsync(baseline); + + if (currentVerdict is null || baselineVerdict is null) + { + console.MarkupLine("[red]Error: One or both verdicts not found[/]"); + Environment.ExitCode = 1; + return; + } + + // Show verdict comparison + var table = new Table(); + table.AddColumn(""); + table.AddColumn("Baseline"); + table.AddColumn("Current"); + + table.AddRow("Decision", + baselineVerdict.Decision.ToString(), + currentVerdict.Decision.ToString()); + + table.AddRow("Total Findings", + baselineVerdict.FindingCount.ToString(), + currentVerdict.FindingCount.ToString()); + + table.AddRow("Critical", + baselineVerdict.CriticalCount.ToString(), + currentVerdict.CriticalCount.ToString()); + + table.AddRow("High", + baselineVerdict.HighCount.ToString(), + currentVerdict.HighCount.ToString()); + + table.AddRow("Blocked By", + baselineVerdict.BlockedBy?.ToString() ?? "N/A", + currentVerdict.BlockedBy?.ToString() ?? "N/A"); + + table.AddRow("Snapshot ID", + baselineVerdict.SnapshotId ?? "N/A", + currentVerdict.SnapshotId ?? 
"N/A"); + + console.Write(table); + + // Show decision change + if (baselineVerdict.Decision != currentVerdict.Decision) + { + console.WriteLine(); + console.MarkupLine($"[bold yellow]Decision changed: {baselineVerdict.Decision} → {currentVerdict.Decision}[/]"); + } + + // Show findings delta if requested + if (showFindings) + { + var findingsDelta = ComputeFindingsDelta( + baselineVerdict.Findings, + currentVerdict.Findings); + + console.WriteLine(); + console.MarkupLine("[bold]Finding Changes:[/]"); + + foreach (var added in findingsDelta.Added) + { + console.MarkupLine($" [green]+[/] {added.VulnId} in {added.Purl}"); + } + + foreach (var removed in findingsDelta.Removed) + { + console.MarkupLine($" [red]-[/] {removed.VulnId} in {removed.Purl}"); + } + + foreach (var changed in findingsDelta.Changed) + { + console.MarkupLine($" [yellow]~[/] {changed.VulnId}: {changed.BeforeStatus} → {changed.AfterStatus}"); + } + } + } +} +``` + +**Acceptance Criteria**: +- [ ] `stella compare verdicts v1 v2` works +- [ ] Shows decision comparison +- [ ] Shows count changes +- [ ] `--show-findings` shows individual changes +- [ ] Highlights decision changes + +--- + +### T5: Output Formatters + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2, T3, T4 + +**Description**: +Implement table, JSON, and SARIF formatters. + +**Implementation Path**: `Commands/Compare/Formatters/` (new directory) + +**Implementation**: +```csharp +// ICompareFormatter.cs +public interface ICompareFormatter +{ + string Format(ComparisonDelta delta); +} + +// TableFormatter.cs +public sealed class TableFormatter : ICompareFormatter +{ + public string Format(ComparisonDelta delta) + { + var sb = new StringBuilder(); + + // Summary + sb.AppendLine($"Comparison Summary:"); + sb.AppendLine($" Added: {delta.AddedCount}"); + sb.AppendLine($" Removed: {delta.RemovedCount}"); + sb.AppendLine($" Changed: {delta.ChangedCount}"); + sb.AppendLine(); + + // Categories + foreach (var category in delta.Categories) + { + sb.AppendLine($"{category.Name}:"); + foreach (var item in category.Items) + { + var prefix = item.ChangeType switch + { + ChangeType.Added => "+", + ChangeType.Removed => "-", + ChangeType.Changed => "~", + _ => " " + }; + sb.AppendLine($" {prefix} {item.Title}"); + } + } + + return sb.ToString(); + } +} + +// JsonFormatter.cs +public sealed class JsonFormatter : ICompareFormatter +{ + public string Format(ComparisonDelta delta) + { + var output = new + { + comparison = new + { + current = delta.Current, + baseline = delta.Baseline, + computedAt = DateTimeOffset.UtcNow + }, + summary = new + { + added = delta.AddedCount, + removed = delta.RemovedCount, + changed = delta.ChangedCount + }, + categories = delta.Categories.Select(c => new + { + name = c.Name, + items = c.Items.Select(i => new + { + changeType = i.ChangeType.ToString().ToLower(), + title = i.Title, + severity = i.Severity?.ToString().ToLower(), + before = i.BeforeValue, + after = i.AfterValue + }) + }) + }; + + return JsonSerializer.Serialize(output, new JsonSerializerOptions + { + WriteIndented = true, + PropertyNamingPolicy = JsonNamingPolicy.CamelCase + }); + } +} + +// SarifFormatter.cs +public sealed class SarifFormatter : ICompareFormatter +{ + public string Format(ComparisonDelta delta) + { + var sarif = new + { + version = "2.1.0", + schema = "https://raw.githubusercontent.com/oasis-tcs/sarif-spec/master/Schemata/sarif-schema-2.1.0.json", + runs = new[] + { + new + { + tool = new + { + driver = new + { + name = 
"stella-compare", + version = "1.0.0", + informationUri = "https://stellaops.io" + } + }, + results = delta.AllItems.Select(item => new + { + ruleId = $"DELTA-{item.ChangeType.ToString().ToUpper()}", + level = item.Severity switch + { + Severity.Critical => "error", + Severity.High => "error", + Severity.Medium => "warning", + _ => "note" + }, + message = new { text = item.Title }, + properties = new + { + changeType = item.ChangeType.ToString(), + category = item.Category, + before = item.BeforeValue, + after = item.AfterValue + } + }) + } + } + }; + + return JsonSerializer.Serialize(sarif, new JsonSerializerOptions + { + WriteIndented = true, + PropertyNamingPolicy = JsonNamingPolicy.CamelCase + }); + } +} +``` + +**Acceptance Criteria**: +- [ ] Table formatter produces readable output +- [ ] JSON formatter produces valid JSON +- [ ] SARIF formatter produces valid SARIF 2.1.0 +- [ ] All formatters handle empty deltas + +--- + +### T6: Baseline Option + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2 + +**Description**: +Implement `--baseline=last-green` and similar presets. + +**Implementation Path**: Add to `CompareArtifactsCommand.cs` + +**Implementation**: +```csharp +// Add to CompareArtifactsCommand +var baselinePresetOption = new Option( + "--baseline", + "Baseline preset: last-green, previous-release, main-branch, or artifact reference"); + +// In ExecuteAsync +string resolvedBaseline; +if (!string.IsNullOrEmpty(baselinePreset)) +{ + resolvedBaseline = baselinePreset switch + { + "last-green" => await _baselineResolver.GetLastGreenAsync(currentRef), + "previous-release" => await _baselineResolver.GetPreviousReleaseAsync(currentRef), + "main-branch" => await _baselineResolver.GetMainBranchAsync(currentRef), + _ => baselinePreset // Assume it's an artifact reference + }; +} +else +{ + resolvedBaseline = baseline; +} + +// BaselineResolver.cs +public sealed class BaselineResolver +{ + private readonly IScannerClient _scanner; + private readonly IGitService _git; + + public async Task GetLastGreenAsync(ArtifactReference current) + { + // Find most recent artifact with passing verdict + var history = await _scanner.GetArtifactHistoryAsync(current.Repository); + var lastGreen = history + .Where(a => a.Verdict == VerdictDecision.Ship) + .OrderByDescending(a => a.ScannedAt) + .FirstOrDefault(); + + return lastGreen?.Reference ?? throw new InvalidOperationException("No green builds found"); + } + + public async Task GetPreviousReleaseAsync(ArtifactReference current) + { + // Find artifact tagged with previous semver release + var tags = await _git.GetTagsAsync(current.Repository); + var semverTags = tags + .Where(t => SemVersion.TryParse(t.Name, out _)) + .OrderByDescending(t => SemVersion.Parse(t.Name)) + .Skip(1) // Skip current release + .FirstOrDefault(); + + return semverTags?.ArtifactRef ?? throw new InvalidOperationException("No previous release found"); + } + + public async Task GetMainBranchAsync(ArtifactReference current) + { + // Find latest artifact from main branch + var mainArtifact = await _scanner.GetLatestArtifactAsync( + current.Repository, + branch: "main"); + + return mainArtifact?.Reference ?? 
throw new InvalidOperationException("No main branch artifact found"); + } +} +``` + +**Acceptance Criteria**: +- [ ] `--baseline=last-green` resolves to last passing build +- [ ] `--baseline=previous-release` resolves to previous semver tag +- [ ] `--baseline=main-branch` resolves to latest main +- [ ] Falls back to treating value as artifact reference + +--- + +### T7: Tests + +**Assignee**: CLI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1-T6 + +**Description**: +Integration tests for compare commands. + +**Implementation Path**: `src/Cli/__Tests/StellaOps.Cli.Tests/Commands/Compare/` + +**Test Cases**: +```csharp +public class CompareArtifactsCommandTests +{ + [Fact] + public async Task Execute_TwoArtifacts_ShowsDelta() + { + // Arrange + var cmd = new CompareArtifactsCommand(); + var console = new TestConsole(); + + // Act + var result = await cmd.InvokeAsync( + new[] { "image@sha256:aaa", "image@sha256:bbb" }, + console); + + // Assert + result.Should().Be(0); + console.Output.Should().Contain("Added"); + console.Output.Should().Contain("Removed"); + } + + [Fact] + public async Task Execute_JsonFormat_ValidJson() + { + var cmd = new CompareArtifactsCommand(); + var console = new TestConsole(); + + var result = await cmd.InvokeAsync( + new[] { "img@sha256:a", "img@sha256:b", "--format", "json" }, + console); + + result.Should().Be(0); + var json = console.Output; + var parsed = JsonDocument.Parse(json); + parsed.RootElement.TryGetProperty("summary", out _).Should().BeTrue(); + } + + [Fact] + public async Task Execute_SarifFormat_ValidSarif() + { + var cmd = new CompareArtifactsCommand(); + var console = new TestConsole(); + + var result = await cmd.InvokeAsync( + new[] { "img@sha256:a", "img@sha256:b", "--format", "sarif" }, + console); + + result.Should().Be(0); + var sarif = JsonDocument.Parse(console.Output); + sarif.RootElement.GetProperty("version").GetString().Should().Be("2.1.0"); + } + + [Fact] + public async Task Execute_BlockingChanges_ExitCode1() + { + var cmd = new CompareArtifactsCommand(); + var console = new TestConsole(); + // Mock: Delta with blocking changes + + var result = await cmd.InvokeAsync( + new[] { "img@sha256:a", "img@sha256:b" }, + console); + + result.Should().Be(1); + } +} + +public class CompareSnapshotsCommandTests +{ + [Fact] + public async Task Execute_ValidSnapshots_ShowsDelta() + { + var cmd = new CompareSnapshotsCommand(); + var console = new TestConsole(); + + var result = await cmd.InvokeAsync( + new[] { "ksm:sha256:aaa", "ksm:sha256:bbb" }, + console); + + result.Should().Be(0); + console.Output.Should().Contain("Advisory Feeds"); + console.Output.Should().Contain("VEX Documents"); + } + + [Fact] + public async Task Execute_InvalidSnapshotId_Error() + { + var cmd = new CompareSnapshotsCommand(); + var console = new TestConsole(); + + var result = await cmd.InvokeAsync( + new[] { "invalid", "ksm:sha256:bbb" }, + console); + + result.Should().Be(1); + console.Output.Should().Contain("Error"); + } +} + +public class BaselineResolverTests +{ + [Fact] + public async Task GetLastGreen_ReturnsPassingBuild() + { + var resolver = new BaselineResolver(_mockScanner, _mockGit); + + var result = await resolver.GetLastGreenAsync( + ArtifactReference.Parse("myapp@sha256:current")); + + result.Should().Contain("sha256"); + } +} +``` + +**Acceptance Criteria**: +- [ ] Test for table output +- [ ] Test for JSON output validity +- [ ] Test for SARIF output validity +- [ ] Test for exit codes +- [ ] Test for baseline resolution +- [ ] All tests 
pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | CLI Team | Create CompareCommandGroup.cs | +| 2 | T2 | TODO | T1 | CLI Team | Add `compare artifacts` | +| 3 | T3 | TODO | T1 | CLI Team | Add `compare snapshots` | +| 4 | T4 | TODO | T1 | CLI Team | Add `compare verdicts` | +| 5 | T5 | TODO | T2-T4 | CLI Team | Output formatters | +| 6 | T6 | TODO | T2 | CLI Team | Baseline option | +| 7 | T7 | TODO | T1-T6 | CLI Team | Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. CLI compare commands for CI/CD integration. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| System.CommandLine | Decision | CLI Team | Use for argument parsing | +| SARIF 2.1.0 | Decision | CLI Team | Standard for security findings | +| Exit codes | Decision | CLI Team | 0=success, 1=blocking changes | +| Baseline presets | Decision | CLI Team | last-green, previous-release, main-branch | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] `stella compare artifacts img1@sha256:a img2@sha256:b` works +- [ ] `stella compare snapshots ksm:abc ksm:def` shows delta +- [ ] `stella compare verdicts v1 v2` works +- [ ] Output shows introduced/fixed/changed +- [ ] JSON output is machine-readable +- [ ] Exit code 1 for blocking changes +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4200_0002_0005_counterfactuals.md b/docs/implplan/SPRINT_4200_0002_0005_counterfactuals.md new file mode 100644 index 000000000..0669ebfb3 --- /dev/null +++ b/docs/implplan/SPRINT_4200_0002_0005_counterfactuals.md @@ -0,0 +1,1046 @@ +# Sprint 4200.0002.0005 · Policy Counterfactuals + +## Topic & Scope + +- Compute minimal changes needed to make a blocked finding pass +- Show "what would flip the verdict" for VEX, exceptions, and reachability +- Provide actionable guidance for remediation + +**Working directory:** `src/Policy/__Libraries/StellaOps.Policy/` + +## Dependencies & Concurrency + +- **Upstream**: None (can start immediately) +- **Downstream**: None +- **Safe to parallelize with**: All other UX sprints + +## Documentation Prerequisites + +- `src/Policy/__Libraries/StellaOps.Policy/AGENTS.md` +- `docs/product-advisories/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md` +- Existing: `PolicyExplanation`, `PolicyEvaluator` + +--- + +## Problem Statement + +`PolicyExplanation` currently shows "why blocked" but not "what would make it pass". This creates friction - users see a block but don't know the minimal path to resolution. + +--- + +## Tasks + +### T1: Define CounterfactualResult + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create model for counterfactual analysis results. + +**Implementation Path**: `Counterfactuals/CounterfactualResult.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Counterfactuals; + +/// +/// Result of counterfactual analysis - what would flip the verdict. +/// +public sealed record CounterfactualResult +{ + /// + /// The finding this analysis applies to. + /// + public required Guid FindingId { get; init; } + + /// + /// Current verdict for this finding. 
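+    /// "Ship" when the finding already passes; otherwise "Block" or "Exception".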
+ /// + public required string CurrentVerdict { get; init; } + + /// + /// What the verdict would change to. + /// + public required string TargetVerdict { get; init; } + + /// + /// Possible paths to flip the verdict. + /// + public required IReadOnlyList Paths { get; init; } + + /// + /// Whether any path exists. + /// + public bool HasPaths => Paths.Count > 0; + + /// + /// The recommended path (lowest effort). + /// + public CounterfactualPath? RecommendedPath => + Paths.OrderBy(p => p.EstimatedEffort).FirstOrDefault(); +} + +/// +/// A single path that would flip the verdict. +/// +public sealed record CounterfactualPath +{ + /// + /// Type of change required. + /// + public required CounterfactualType Type { get; init; } + + /// + /// Human-readable description of what would need to change. + /// + public required string Description { get; init; } + + /// + /// Specific conditions that would need to be met. + /// + public required IReadOnlyList Conditions { get; init; } + + /// + /// Estimated effort level (1-5). + /// + public int EstimatedEffort { get; init; } + + /// + /// Who can take this action. + /// + public required string Actor { get; init; } + + /// + /// Link to relevant documentation or action. + /// + public string? ActionUri { get; init; } +} + +/// +/// A specific condition in a counterfactual path. +/// +public sealed record CounterfactualCondition +{ + /// + /// What needs to change. + /// + public required string Field { get; init; } + + /// + /// Current value. + /// + public required string CurrentValue { get; init; } + + /// + /// Required value. + /// + public required string RequiredValue { get; init; } + + /// + /// Whether this condition is currently met. + /// + public bool IsMet { get; init; } +} + +/// +/// Type of counterfactual change. +/// +public enum CounterfactualType +{ + /// VEX status would need to change. + VexStatus, + + /// An exception would need to be granted. + Exception, + + /// Reachability status would need to change. + Reachability, + + /// Component version would need to change. + VersionUpgrade, + + /// Policy rule would need to be modified. + PolicyChange, + + /// Component would need to be removed. + ComponentRemoval, + + /// Compensating control would need to be applied. + CompensatingControl +} +``` + +**Acceptance Criteria**: +- [ ] `CounterfactualResult.cs` file created +- [ ] Models for result, path, and condition +- [ ] CounterfactualType enum defined +- [ ] Estimated effort field +- [ ] Actor field for who can take action + +--- + +### T2: Create CounterfactualEngine + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement engine to compute minimal changes needed. + +**Implementation Path**: `Counterfactuals/CounterfactualEngine.cs` (new file) + +**Implementation**: +```csharp +namespace StellaOps.Policy.Counterfactuals; + +/// +/// Engine for computing policy counterfactuals. +/// +public interface ICounterfactualEngine +{ + Task ComputeAsync( + PolicyEvaluationContext context, + Guid findingId, + CancellationToken ct = default); +} + +/// +/// Default implementation of counterfactual engine. 
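+/// Re-evaluates simulated variants of a finding to find changes that would flip the verdict.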
+/// +public sealed class CounterfactualEngine : ICounterfactualEngine +{ + private readonly IPolicyEvaluator _evaluator; + private readonly ILogger _logger; + + public CounterfactualEngine( + IPolicyEvaluator evaluator, + ILogger logger) + { + _evaluator = evaluator; + _logger = logger; + } + + public async Task ComputeAsync( + PolicyEvaluationContext context, + Guid findingId, + CancellationToken ct = default) + { + var finding = context.GetFinding(findingId); + if (finding is null) + { + throw new ArgumentException($"Finding {findingId} not found in context"); + } + + var currentEval = await _evaluator.EvaluateFindingAsync(context, finding, ct); + if (currentEval.Decision == PolicyDecision.Allow) + { + // Already passing - no counterfactuals needed + return new CounterfactualResult + { + FindingId = findingId, + CurrentVerdict = "Ship", + TargetVerdict = "Ship", + Paths = [] + }; + } + + var paths = new List(); + + // Check VEX counterfactual + var vexPath = await ComputeVexCounterfactualAsync(context, finding, ct); + if (vexPath is not null) paths.Add(vexPath); + + // Check exception counterfactual + var exceptionPath = ComputeExceptionCounterfactual(context, finding); + if (exceptionPath is not null) paths.Add(exceptionPath); + + // Check reachability counterfactual + var reachPath = await ComputeReachabilityCounterfactualAsync(context, finding, ct); + if (reachPath is not null) paths.Add(reachPath); + + // Check version upgrade counterfactual + var versionPath = await ComputeVersionUpgradeCounterfactualAsync(context, finding, ct); + if (versionPath is not null) paths.Add(versionPath); + + // Check compensating control counterfactual + var compensatingPath = ComputeCompensatingControlCounterfactual(context, finding); + if (compensatingPath is not null) paths.Add(compensatingPath); + + return new CounterfactualResult + { + FindingId = findingId, + CurrentVerdict = currentEval.Decision == PolicyDecision.Deny ? "Block" : "Exception", + TargetVerdict = "Ship", + Paths = paths.OrderBy(p => p.EstimatedEffort).ToList() + }; + } + + private async Task ComputeVexCounterfactualAsync( + PolicyEvaluationContext context, + Finding finding, + CancellationToken ct) + { + // Only applicable if current VEX status is Affected or UnderInvestigation + if (finding.VexStatus == VexStatus.NotAffected) + return null; + + // Simulate with NotAffected status + var modifiedContext = context.WithModifiedFinding(finding with + { + VexStatus = VexStatus.NotAffected + }); + + var simResult = await _evaluator.EvaluateFindingAsync(modifiedContext, finding, ct); + if (simResult.Decision != PolicyDecision.Allow) + return null; + + return new CounterfactualPath + { + Type = CounterfactualType.VexStatus, + Description = "Would pass if VEX status is 'not_affected'", + Conditions = + [ + new CounterfactualCondition + { + Field = "VEX Status", + CurrentValue = finding.VexStatus.ToString(), + RequiredValue = "NotAffected", + IsMet = false + } + ], + EstimatedEffort = 2, + Actor = "Vendor or Security Team", + ActionUri = "/vex/create" + }; + } + + private CounterfactualPath? 
ComputeExceptionCounterfactual( + PolicyEvaluationContext context, + Finding finding) + { + // Check if an exception is allowed by policy + if (!context.Policy.AllowsExceptions) + return null; + + return new CounterfactualPath + { + Type = CounterfactualType.Exception, + Description = $"Would pass with a security exception for {finding.VulnId}", + Conditions = + [ + new CounterfactualCondition + { + Field = "Exception", + CurrentValue = "None", + RequiredValue = "Approved exception covering this CVE", + IsMet = false + } + ], + EstimatedEffort = 3, + Actor = "Security Team or Exception Approver", + ActionUri = $"/exceptions/request?cve={finding.VulnId}" + }; + } + + private async Task ComputeReachabilityCounterfactualAsync( + PolicyEvaluationContext context, + Finding finding, + CancellationToken ct) + { + // Only if reachability affects this decision + if (finding.Reachability == Reachability.No) + return null; + + if (!context.Policy.ConsidersReachability) + return null; + + // Simulate with not reachable + var modifiedContext = context.WithModifiedFinding(finding with + { + Reachability = Reachability.No + }); + + var simResult = await _evaluator.EvaluateFindingAsync(modifiedContext, finding, ct); + if (simResult.Decision != PolicyDecision.Allow) + return null; + + return new CounterfactualPath + { + Type = CounterfactualType.Reachability, + Description = "Would pass if vulnerable code is not reachable", + Conditions = + [ + new CounterfactualCondition + { + Field = "Reachability", + CurrentValue = finding.Reachability.ToString(), + RequiredValue = "No (not reachable)", + IsMet = false + } + ], + EstimatedEffort = 4, + Actor = "Development Team", + ActionUri = $"/reachability/analyze?finding={finding.Id}" + }; + } + + private async Task ComputeVersionUpgradeCounterfactualAsync( + PolicyEvaluationContext context, + Finding finding, + CancellationToken ct) + { + // Check if there's a fixed version available + var fixedVersion = await GetFixedVersionAsync(finding.VulnId, finding.Purl, ct); + if (fixedVersion is null) + return null; + + return new CounterfactualPath + { + Type = CounterfactualType.VersionUpgrade, + Description = $"Would pass by upgrading to {fixedVersion}", + Conditions = + [ + new CounterfactualCondition + { + Field = "Version", + CurrentValue = finding.Version, + RequiredValue = fixedVersion, + IsMet = false + } + ], + EstimatedEffort = 2, + Actor = "Development Team", + ActionUri = $"/components/{Uri.EscapeDataString(finding.Purl)}/upgrade" + }; + } + + private CounterfactualPath? 
+    private CounterfactualPath? ComputeCompensatingControlCounterfactual(
+        PolicyEvaluationContext context,
+        Finding finding)
+    {
+        // Only if compensating controls are supported
+        if (!context.Policy.AllowsCompensatingControls)
+            return null;
+
+        return new CounterfactualPath
+        {
+            Type = CounterfactualType.CompensatingControl,
+            Description = "Would pass with documented compensating control",
+            Conditions =
+            [
+                new CounterfactualCondition
+                {
+                    Field = "Compensating Control",
+                    CurrentValue = "None",
+                    RequiredValue = "Approved control mitigating the risk",
+                    IsMet = false
+                }
+            ],
+            EstimatedEffort = 4,
+            Actor = "Security Team",
+            ActionUri = $"/controls/create?finding={finding.Id}"
+        };
+    }
+
+    private async Task<string?> GetFixedVersionAsync(
+        string vulnId, string purl, CancellationToken ct)
+    {
+        // Query advisory database for fixed version
+        // Implementation depends on advisory service
+        return null; // Placeholder
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] `CounterfactualEngine.cs` file created
+- [ ] Computes VEX counterfactual
+- [ ] Computes exception counterfactual
+- [ ] Computes reachability counterfactual
+- [ ] Computes version upgrade counterfactual
+- [ ] Computes compensating control counterfactual
+- [ ] Orders by estimated effort
+
+---
+
+### T3: Integrate with PolicyExplanation
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T2
+
+**Description**:
+Add `WouldPassIf` field to PolicyExplanation.
+
+**Implementation Path**: Modify `PolicyExplanation.cs`
+
+**Implementation**:
+```csharp
+// Add to PolicyExplanation.cs
+public sealed record PolicyExplanation
+{
+    // ... existing fields ...
+
+    /// <summary>
+    /// Counterfactual paths showing what would flip the verdict.
+    /// </summary>
+    public CounterfactualResult? WouldPassIf { get; init; }
+}
+
+// Modify PolicyExplanationBuilder or PolicyEvaluator
+public async Task<PolicyExplanation> BuildExplanationAsync(
+    PolicyEvaluationContext context,
+    Finding finding,
+    bool includeCounterfactuals = true,
+    CancellationToken ct = default)
+{
+    // Evaluate first so the explanation reflects the current decision
+    // (assumes this builder holds an IPolicyEvaluator, as CounterfactualEngine does)
+    var evaluation = await _evaluator.EvaluateFindingAsync(context, finding, ct);
+
+    var explanation = new PolicyExplanation
+    {
+        Decision = evaluation.Decision,
+        Reason = evaluation.Reason,
+        MatchedRules = evaluation.MatchedRules,
+        // ... other fields ...
+    };
+
+    if (includeCounterfactuals && evaluation.Decision != PolicyDecision.Allow)
+    {
+        var counterfactuals = await _counterfactualEngine.ComputeAsync(
+            context, finding.Id, ct);
+        explanation = explanation with { WouldPassIf = counterfactuals };
+    }
+
+    return explanation;
+}
+```
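+
+Callers that only need the verdict can skip the extra simulation work via the flag above. A minimal usage sketch (the `builder` instance and surrounding variables are assumptions, not an existing API):
+
+```csharp
+// Cheap path for bulk listings: verdicts only, no counterfactual simulation.
+var summary = await builder.BuildExplanationAsync(
+    context, finding, includeCounterfactuals: false, ct);
+
+// Full path for the detail view: WouldPassIf is populated when the finding is blocked.
+var detailed = await builder.BuildExplanationAsync(context, finding, ct: ct);
+```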
Review trust policy.", + Conditions = + [ + new CounterfactualCondition + { + Field = "VEX Trust", + CurrentValue = "Untrusted", + RequiredValue = "Trusted vendor VEX", + IsMet = false + } + ], + EstimatedEffort = 1, + Actor = "Security Team", + ActionUri = "/settings/vex-trust" + }; + } + + // Standard VEX counterfactual + return new CounterfactualPath + { + Type = CounterfactualType.VexStatus, + Description = $"Would pass if vendor publishes VEX with status 'not_affected' for {finding.VulnId}", + Conditions = + [ + new CounterfactualCondition + { + Field = "VEX Status", + CurrentValue = finding.VexStatus.ToString(), + RequiredValue = "NotAffected (from trusted source)", + IsMet = false + } + ], + EstimatedEffort = 3, + Actor = "Vendor", + ActionUri = $"https://github.com/{ExtractRepo(finding.Purl)}/security/advisories" + }; +} +``` + +**Acceptance Criteria**: +- [ ] Detects existing untrusted VEX +- [ ] Suggests trust policy review +- [ ] Links to vendor advisory creation +- [ ] Handles different VEX sources + +--- + +### T5: Handle Exception Counterfactuals + +**Assignee**: Policy Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T2 + +**Description**: +Detailed exception counterfactual handling. + +**Implementation**: +```csharp +private CounterfactualPath? ComputeDetailedExceptionCounterfactual( + PolicyEvaluationContext context, + Finding finding) +{ + // Check exception eligibility + var eligibility = CheckExceptionEligibility(context.Policy, finding); + + if (!eligibility.IsEligible) + { + return new CounterfactualPath + { + Type = CounterfactualType.Exception, + Description = $"Exception not allowed: {eligibility.Reason}", + Conditions = + [ + new CounterfactualCondition + { + Field = "Exception Policy", + CurrentValue = eligibility.Reason, + RequiredValue = "Policy allows exceptions for this severity", + IsMet = false + } + ], + EstimatedEffort = 5, + Actor = "Policy Admin", + ActionUri = "/policy/edit" + }; + } + + // Check if there's a pending exception request + var pendingRequest = context.PendingExceptions + .FirstOrDefault(e => e.CoverId == finding.VulnId); + + if (pendingRequest is not null) + { + return new CounterfactualPath + { + Type = CounterfactualType.Exception, + Description = $"Exception request pending approval (ID: {pendingRequest.Id})", + Conditions = + [ + new CounterfactualCondition + { + Field = "Exception Status", + CurrentValue = "Pending", + RequiredValue = "Approved", + IsMet = false + } + ], + EstimatedEffort = 1, + Actor = pendingRequest.ApproverRole, + ActionUri = $"/exceptions/{pendingRequest.Id}/approve" + }; + } + + // Standard exception path + return new CounterfactualPath + { + Type = CounterfactualType.Exception, + Description = $"Would pass with approved security exception for {finding.VulnId}", + Conditions = + [ + new CounterfactualCondition + { + Field = "Exception", + CurrentValue = "None", + RequiredValue = "Approved exception with risk acceptance", + IsMet = false + } + ], + EstimatedEffort = ComputeExceptionEffort(finding), + Actor = GetExceptionApprover(context.Policy, finding), + ActionUri = $"/exceptions/request?cve={finding.VulnId}&purl={Uri.EscapeDataString(finding.Purl)}" + }; +} + +private static int ComputeExceptionEffort(Finding finding) +{ + // Higher severity = more effort to get exception + return finding.CvssScore switch + { + >= 9.0m => 5, + >= 7.0m => 4, + >= 4.0m => 3, + _ => 2 + }; +} +``` + +**Acceptance Criteria**: +- [ ] Checks exception eligibility +- [ ] Detects pending requests +- [ ] Links to approval 
+- [ ] Effort scales with severity
+
+---
+
+### T6: Handle Reachability Counterfactuals
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T2
+
+**Description**:
+Detailed reachability counterfactual handling.
+
+**Implementation**:
+```csharp
+private async Task<CounterfactualPath> ComputeDetailedReachabilityCounterfactualAsync(
+    PolicyEvaluationContext context,
+    Finding finding,
+    CancellationToken ct)
+{
+    if (!context.Policy.ConsidersReachability)
+    {
+        return new CounterfactualPath
+        {
+            Type = CounterfactualType.PolicyChange,
+            Description = "Policy does not consider reachability. Enable reachability analysis in policy.",
+            Conditions =
+            [
+                new CounterfactualCondition
+                {
+                    Field = "Policy Setting",
+                    CurrentValue = "Reachability disabled",
+                    RequiredValue = "Reachability enabled",
+                    IsMet = false
+                }
+            ],
+            EstimatedEffort = 2,
+            Actor = "Policy Admin",
+            ActionUri = "/policy/edit?setting=reachability"
+        };
+    }
+
+    // Check if reachability analysis was attempted
+    if (finding.Reachability == Reachability.Unknown)
+    {
+        return new CounterfactualPath
+        {
+            Type = CounterfactualType.Reachability,
+            Description = "Reachability unknown. Run reachability analysis to potentially mute.",
+            Conditions =
+            [
+                new CounterfactualCondition
+                {
+                    Field = "Reachability Analysis",
+                    CurrentValue = "Not run",
+                    RequiredValue = "Complete analysis showing not reachable",
+                    IsMet = false
+                }
+            ],
+            EstimatedEffort = 2,
+            Actor = "Development Team",
+            ActionUri = $"/scan/{context.ScanId}/reachability/run"
+        };
+    }
+
+    // Currently reachable - would need code changes
+    return new CounterfactualPath
+    {
+        Type = CounterfactualType.Reachability,
+        Description = "Vulnerable code is reachable. Remove call path to mute.",
+        Conditions =
+        [
+            new CounterfactualCondition
+            {
+                Field = "Call Path",
+                CurrentValue = "Reachable from entry points",
+                RequiredValue = "No path from entry points",
+                IsMet = false
+            }
+        ],
+        EstimatedEffort = 4,
+        Actor = "Development Team",
+        ActionUri = $"/findings/{finding.Id}/callgraph"
+    };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Checks if policy considers reachability
+- [ ] Handles unknown reachability
+- [ ] Links to reachability analysis
+- [ ] Shows call path info
+
+---
+
+### T7: API Endpoint
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T2, T3
+
+**Description**:
+Create API endpoint for counterfactual queries.
+
+**Implementation Path**: `src/Policy/StellaOps.Policy.WebService/Endpoints/CounterfactualEndpoints.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Policy.WebService.Endpoints;
+
+public static class CounterfactualEndpoints
+{
+    public static void MapCounterfactualEndpoints(this WebApplication app)
+    {
+        var group = app.MapGroup("/api/v1/policy/counterfactuals")
+            .WithTags("Policy Counterfactuals")
+            .RequireAuthorization();
+
+        // GET /counterfactuals/{findingId}
+        group.MapGet("/{findingId:guid}", async (
+            Guid findingId,
+            [FromQuery] Guid? evaluationId,
+            ICounterfactualEngine engine,
+            IPolicyContextProvider contextProvider,
+            CancellationToken ct) =>
+        {
+            var context = evaluationId.HasValue
+                ? await contextProvider.GetContextForEvaluationAsync(evaluationId.Value, ct)
+                : await contextProvider.GetCurrentContextAsync(findingId, ct);
+
+            if (context is null)
+                return Results.NotFound("Context not found");
+
+            var result = await engine.ComputeAsync(context, findingId, ct);
+            return Results.Ok(result);
+        })
+        .WithName("GetCounterfactuals")
+        .WithDescription("Get counterfactual paths for a finding");
+
+        // GET /counterfactuals/evaluations/{evaluationId}
+        group.MapGet("/evaluations/{evaluationId:guid}", async (
+            Guid evaluationId,
+            ICounterfactualEngine engine,
+            IPolicyContextProvider contextProvider,
+            CancellationToken ct) =>
+        {
+            var context = await contextProvider.GetContextForEvaluationAsync(evaluationId, ct);
+            if (context is null)
+                return Results.NotFound("Evaluation not found");
+
+            var results = new List<CounterfactualResult>();
+            foreach (var finding in context.BlockedFindings)
+            {
+                var result = await engine.ComputeAsync(context, finding.Id, ct);
+                results.Add(result);
+            }
+
+            return Results.Ok(new { evaluationId, counterfactuals = results });
+        })
+        .WithName("GetEvaluationCounterfactuals")
+        .WithDescription("Get counterfactuals for all blocked findings in an evaluation");
+    }
+}
+```
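+
+A quick way to exercise the first route once it is mapped; a sketch with placeholder host, `token`, `findingId`, and `ct` values:
+
+```csharp
+// GET the counterfactual paths for a single blocked finding.
+using var http = new HttpClient { BaseAddress = new Uri("https://localhost:5001") };
+http.DefaultRequestHeaders.Authorization =
+    new System.Net.Http.Headers.AuthenticationHeaderValue("Bearer", token);
+
+var response = await http.GetAsync($"/api/v1/policy/counterfactuals/{findingId}", ct);
+response.EnsureSuccessStatusCode();
+var json = await response.Content.ReadAsStringAsync(ct);
+```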
+
+**Acceptance Criteria**:
+- [ ] GET /counterfactuals/{findingId} works
+- [ ] GET /evaluations/{id}/counterfactuals works
+- [ ] Returns structured JSON
+- [ ] Handles missing contexts gracefully
+
+---
+
+### T8: Tests
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1-T7
+
+**Description**:
+Comprehensive tests for counterfactual scenarios.
+
+**Implementation Path**: `src/Policy/__Tests/StellaOps.Policy.Tests/Counterfactuals/`
+
+**Test Cases**:
+```csharp
+public class CounterfactualEngineTests
+{
+    [Fact]
+    public async Task Compute_AlreadyPassing_ReturnsEmptyPaths()
+    {
+        var context = CreateContext(PolicyDecision.Allow);
+        var finding = CreateFinding();
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.Paths.Should().BeEmpty();
+        result.CurrentVerdict.Should().Be("Ship");
+    }
+
+    [Fact]
+    public async Task Compute_VexWouldFlip_ReturnsVexPath()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        context.Policy.ConsidersVex = true;
+        var finding = CreateFinding(vexStatus: VexStatus.Affected);
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.Paths.Should().Contain(p => p.Type == CounterfactualType.VexStatus);
+    }
+
+    [Fact]
+    public async Task Compute_ExceptionWouldFlip_ReturnsExceptionPath()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        context.Policy.AllowsExceptions = true;
+        var finding = CreateFinding();
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.Paths.Should().Contain(p => p.Type == CounterfactualType.Exception);
+    }
+
+    [Fact]
+    public async Task Compute_ReachabilityWouldFlip_ReturnsReachabilityPath()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        context.Policy.ConsidersReachability = true;
+        var finding = CreateFinding(reachability: Reachability.Yes);
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.Paths.Should().Contain(p => p.Type == CounterfactualType.Reachability);
+    }
+
+    [Fact]
+    public async Task Compute_MultiplePaths_OrdersByEffort()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        context.Policy.AllowsExceptions = true;
+        context.Policy.ConsidersVex = true;
+        var finding = CreateFinding();
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.Paths.Should().BeInAscendingOrder(p => p.EstimatedEffort);
+    }
+
+    [Fact]
+    public async Task Compute_RecommendedPath_IsLowestEffort()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        var finding = CreateFinding();
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        result.RecommendedPath.Should().NotBeNull();
+        result.RecommendedPath!.EstimatedEffort.Should().Be(
+            result.Paths.Min(p => p.EstimatedEffort));
+    }
+
+    [Fact]
+    public async Task Compute_PendingException_ShowsPendingPath()
+    {
+        var context = CreateContext(PolicyDecision.Deny);
+        context.PendingExceptions = [new PendingException { CveId = "CVE-2024-1234" }];
+        var finding = CreateFinding(vulnId: "CVE-2024-1234");
+
+        var result = await _engine.ComputeAsync(context, finding.Id);
+
+        var exceptionPath = result.Paths.First(p => p.Type == CounterfactualType.Exception);
+        exceptionPath.Description.Should().Contain("pending");
+        exceptionPath.EstimatedEffort.Should().Be(1);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Test for passing findings
+- [ ] Test for VEX counterfactual
+- [ ] Test for exception counterfactual
+- [ ] Test for reachability counterfactual
+- [ ] Test for effort ordering
+- [ ] Test for recommended path
+- [ ] All tests pass
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Policy Team | Define CounterfactualResult |
+| 2 | T2 | TODO | T1 | Policy Team | Create CounterfactualEngine |
+| 3 | T3 | TODO | T2 | Policy Team | Integrate with PolicyExplanation |
+| 4 | T4 | TODO | T2 | Policy Team | Handle VEX counterfactuals |
+| 5 | T5 | TODO | T2 | Policy Team | Handle exception counterfactuals |
+| 6 | T6 | TODO | T2 | Policy Team | Handle reachability counterfactuals |
+| 7 | T7 | TODO | T2, T3 | Policy Team | API endpoint |
+| 8 | T8 | TODO | T1-T7 | Policy Team | Tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from UX Gap Analysis. Counterfactuals identified as key actionability feature. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Effort scale | Decision | Policy Team | 1-5 scale, lower is easier | +| Simulation approach | Decision | Policy Team | Modify context and re-evaluate | +| Path ordering | Decision | Policy Team | Order by effort ascending | +| Actor field | Decision | Policy Team | Who can take the remediation action | + +--- + +## Success Criteria + +- [ ] All 8 tasks marked DONE +- [ ] Counterfactuals show minimal changes to pass +- [ ] VEX, exception, reachability scenarios covered +- [ ] API returns structured counterfactual list +- [ ] Each counterfactual has actionable guidance +- [ ] Integration with PolicyExplanation works +- [ ] All tests pass +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4500_0001_0001_binary_evidence_db.md b/docs/implplan/SPRINT_4500_0001_0001_binary_evidence_db.md new file mode 100644 index 000000000..5679f143a --- /dev/null +++ b/docs/implplan/SPRINT_4500_0001_0001_binary_evidence_db.md @@ -0,0 +1,995 @@ +# Sprint 4500.0001.0001 · Binary Evidence Database + +## Topic & Scope + +- Persist binary identity evidence (Build-ID, text hash) to PostgreSQL +- Create binary-to-package mapping store +- Support binary-level vulnerability assertions + +**Working directory:** `src/Scanner/__Libraries/StellaOps.Scanner.Storage/Postgres/` + +## Dependencies & Concurrency + +- **Upstream**: None +- **Downstream**: None +- **Safe to parallelize with**: All other sprints + +## Documentation Prerequisites + +- `src/Scanner/__Libraries/StellaOps.Scanner.Storage/AGENTS.md` +- `docs/db/SPECIFICATION.md` +- `docs/product-advisories/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md` +- Existing: `BuildIdLookupResult` + +--- + +## Problem Statement + +Build-ID indexing exists in memory (`BuildIdLookupResult`) but there's no persistent storage. This means: +- Build-ID matches are lost between scans +- Cannot query historical binary evidence +- No binary-level vulnerability status tracking + +--- + +## Tasks + +### T1: Migration - binary_identity Table + +**Assignee**: Scanner Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create migration for binary identity storage. 
+
+**Implementation Path**: `Migrations/YYYYMMDDHHMMSS_AddBinaryIdentityTable.cs`
+
+**Migration**:
+```csharp
+public partial class AddBinaryIdentityTable : Migration
+{
+    protected override void Up(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.CreateTable(
+            name: "binary_identity",
+            columns: table => new
+            {
+                id = table.Column<Guid>(nullable: false, defaultValueSql: "gen_random_uuid()"),
+                scan_id = table.Column<Guid>(nullable: false),
+                file_path = table.Column<string>(maxLength: 1024, nullable: false),
+                file_sha256 = table.Column<string>(maxLength: 64, nullable: false),
+                text_sha256 = table.Column<string>(maxLength: 64, nullable: true),
+                build_id = table.Column<string>(maxLength: 128, nullable: true),
+                build_id_type = table.Column<string>(maxLength: 32, nullable: true),
+                architecture = table.Column<string>(maxLength: 32, nullable: false),
+                binary_format = table.Column<string>(maxLength: 16, nullable: false),
+                file_size = table.Column<long>(nullable: false),
+                is_stripped = table.Column<bool>(nullable: false, defaultValue: false),
+                has_debug_info = table.Column<bool>(nullable: false, defaultValue: false),
+                created_at = table.Column<DateTimeOffset>(nullable: false, defaultValueSql: "now()")
+            },
+            constraints: table =>
+            {
+                table.PrimaryKey("pk_binary_identity", x => x.id);
+                table.ForeignKey(
+                    name: "fk_binary_identity_scan",
+                    column: x => x.scan_id,
+                    principalTable: "scan",
+                    principalColumn: "id",
+                    onDelete: ReferentialAction.Cascade);
+            });
+
+        // Indexes for lookups
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_identity_build_id",
+            table: "binary_identity",
+            column: "build_id");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_identity_file_sha256",
+            table: "binary_identity",
+            column: "file_sha256");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_identity_text_sha256",
+            table: "binary_identity",
+            column: "text_sha256");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_identity_scan_id",
+            table: "binary_identity",
+            column: "scan_id");
+    }
+
+    protected override void Down(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.DropTable(name: "binary_identity");
+    }
+}
+```
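+
+Once this table exists, historical Build-ID lookups reduce to a single indexed query. A minimal sketch using Npgsql directly (`connectionString`, `buildId`, and `ct` are placeholders; production code should go through the repository added in T4):
+
+```csharp
+// Look up a previously recorded binary identity by its GNU Build-ID.
+await using var conn = new NpgsqlConnection(connectionString);
+await conn.OpenAsync(ct);
+
+await using var cmd = new NpgsqlCommand(
+    "SELECT id, file_path, file_sha256 FROM binary_identity WHERE build_id = @build_id",
+    conn);
+cmd.Parameters.AddWithValue("build_id", buildId);
+
+await using var reader = await cmd.ExecuteReaderAsync(ct);
+while (await reader.ReadAsync(ct))
+{
+    // One row per scan that observed this Build-ID
+    Console.WriteLine($"{reader.GetGuid(0)} {reader.GetString(1)} {reader.GetString(2)}");
+}
+```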
+
+**Acceptance Criteria**:
+- [ ] Migration creates binary_identity table
+- [ ] Columns for build_id, file_sha256, text_sha256, architecture
+- [ ] Indexes on lookup columns
+- [ ] Foreign key to scan table
+
+---
+
+### T2: Migration - binary_package_map Table
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Create table mapping binaries to packages (PURLs).
+
+**Migration**:
+```csharp
+public partial class AddBinaryPackageMapTable : Migration
+{
+    protected override void Up(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.CreateTable(
+            name: "binary_package_map",
+            columns: table => new
+            {
+                id = table.Column<Guid>(nullable: false, defaultValueSql: "gen_random_uuid()"),
+                binary_identity_id = table.Column<Guid>(nullable: false),
+                purl = table.Column<string>(maxLength: 512, nullable: false),
+                match_type = table.Column<string>(maxLength: 32, nullable: false),
+                confidence = table.Column<decimal>(precision: 3, scale: 2, nullable: false),
+                match_source = table.Column<string>(maxLength: 64, nullable: false),
+                evidence_json = table.Column<string>(type: "jsonb", nullable: true),
+                created_at = table.Column<DateTimeOffset>(nullable: false, defaultValueSql: "now()")
+            },
+            constraints: table =>
+            {
+                table.PrimaryKey("pk_binary_package_map", x => x.id);
+                table.ForeignKey(
+                    name: "fk_binary_package_map_identity",
+                    column: x => x.binary_identity_id,
+                    principalTable: "binary_identity",
+                    principalColumn: "id",
+                    onDelete: ReferentialAction.Cascade);
+            });
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_package_map_purl",
+            table: "binary_package_map",
+            column: "purl");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_package_map_binary_identity_id",
+            table: "binary_package_map",
+            column: "binary_identity_id");
+
+        // Unique constraint: one mapping per binary per PURL
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_package_map_unique",
+            table: "binary_package_map",
+            columns: new[] { "binary_identity_id", "purl" },
+            unique: true);
+    }
+
+    protected override void Down(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.DropTable(name: "binary_package_map");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Migration creates binary_package_map table
+- [ ] Links binary identity to PURL
+- [ ] Match type and confidence stored
+- [ ] Evidence JSON for detailed proof
+- [ ] Unique constraint on binary+purl
+
+---
+
+### T3: Migration - binary_vuln_assertion Table
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Create table for binary-level vulnerability assertions.
+
+**Migration**:
+```csharp
+public partial class AddBinaryVulnAssertionTable : Migration
+{
+    protected override void Up(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.CreateTable(
+            name: "binary_vuln_assertion",
+            columns: table => new
+            {
+                id = table.Column<Guid>(nullable: false, defaultValueSql: "gen_random_uuid()"),
+                binary_identity_id = table.Column<Guid>(nullable: false),
+                vuln_id = table.Column<string>(maxLength: 64, nullable: false),
+                status = table.Column<string>(maxLength: 32, nullable: false),
+                source = table.Column<string>(maxLength: 64, nullable: false),
+                assertion_type = table.Column<string>(maxLength: 32, nullable: false),
+                confidence = table.Column<decimal>(precision: 3, scale: 2, nullable: false),
+                evidence_json = table.Column<string>(type: "jsonb", nullable: true),
+                valid_from = table.Column<DateTimeOffset>(nullable: false),
+                valid_until = table.Column<DateTimeOffset>(nullable: true),
+                signature_ref = table.Column<string>(maxLength: 256, nullable: true),
+                created_at = table.Column<DateTimeOffset>(nullable: false, defaultValueSql: "now()")
+            },
+            constraints: table =>
+            {
+                table.PrimaryKey("pk_binary_vuln_assertion", x => x.id);
+                table.ForeignKey(
+                    name: "fk_binary_vuln_assertion_identity",
+                    column: x => x.binary_identity_id,
+                    principalTable: "binary_identity",
+                    principalColumn: "id",
+                    onDelete: ReferentialAction.Cascade);
+            });
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_vuln_assertion_vuln_id",
+            table: "binary_vuln_assertion",
+            column: "vuln_id");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_vuln_assertion_binary_identity_id",
+            table: "binary_vuln_assertion",
+            column: "binary_identity_id");
+
+        migrationBuilder.CreateIndex(
+            name: "ix_binary_vuln_assertion_status",
+            table: "binary_vuln_assertion",
+            column: "status");
+    }
+
+    protected override void Down(MigrationBuilder migrationBuilder)
+    {
+        migrationBuilder.DropTable(name: "binary_vuln_assertion");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Migration creates binary_vuln_assertion table
+- [ ] Links to binary identity
+- [ ] Status (affected/not_affected/fixed)
+- [ ] Assertion type (static_analysis, symbol_match, etc.)
+- [ ] Validity period
+- [ ] Optional signature reference
+
+---
+
+### T4: Create IBinaryEvidenceRepository
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2, T3
+
+**Description**:
+Create repository interface and entities.
+
+**Implementation Path**: `Entities/` and `Repositories/`
+
+**Entities**:
+```csharp
+// Entities/BinaryIdentity.cs
+namespace StellaOps.Scanner.Storage.Postgres.Entities;
+
+[Table("binary_identity")]
+public sealed class BinaryIdentity
+{
+    [Key]
+    [Column("id")]
+    public Guid Id { get; init; } = Guid.NewGuid();
+
+    [Column("scan_id")]
+    public Guid ScanId { get; init; }
+
+    [Required]
+    [MaxLength(1024)]
+    [Column("file_path")]
+    public required string FilePath { get; init; }
+
+    [Required]
+    [MaxLength(64)]
+    [Column("file_sha256")]
+    public required string FileSha256 { get; init; }
+
+    [MaxLength(64)]
+    [Column("text_sha256")]
+    public string? TextSha256 { get; init; }
+
+    [MaxLength(128)]
+    [Column("build_id")]
+    public string? BuildId { get; init; }
+
+    [MaxLength(32)]
+    [Column("build_id_type")]
+    public string? BuildIdType { get; init; }
+
+    [Required]
+    [MaxLength(32)]
+    [Column("architecture")]
+    public required string Architecture { get; init; }
+
+    [Required]
+    [MaxLength(16)]
+    [Column("binary_format")]
+    public required string BinaryFormat { get; init; }
+
+    [Column("file_size")]
+    public long FileSize { get; init; }
+
+    [Column("is_stripped")]
+    public bool IsStripped { get; init; }
+
+    [Column("has_debug_info")]
+    public bool HasDebugInfo { get; init; }
+
+    [Column("created_at")]
+    public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
+
+    // Navigation
+    public ICollection<BinaryPackageMap> PackageMaps { get; init; } = [];
+    public ICollection<BinaryVulnAssertion> VulnAssertions { get; init; } = [];
+}
+
+// Entities/BinaryPackageMap.cs
+[Table("binary_package_map")]
+public sealed class BinaryPackageMap
+{
+    [Key]
+    [Column("id")]
+    public Guid Id { get; init; } = Guid.NewGuid();
+
+    [Column("binary_identity_id")]
+    public Guid BinaryIdentityId { get; init; }
+
+    [Required]
+    [MaxLength(512)]
+    [Column("purl")]
+    public required string Purl { get; init; }
+
+    [Required]
+    [MaxLength(32)]
+    [Column("match_type")]
+    public required string MatchType { get; init; }
+
+    [Column("confidence")]
+    public decimal Confidence { get; init; }
+
+    [Required]
+    [MaxLength(64)]
+    [Column("match_source")]
+    public required string MatchSource { get; init; }
+
+    [Column("evidence_json", TypeName = "jsonb")]
+    public string? EvidenceJson { get; init; }
+
+    [Column("created_at")]
+    public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
+
+    // Navigation
+    [ForeignKey(nameof(BinaryIdentityId))]
+    public BinaryIdentity? BinaryIdentity { get; init; }
+}
+
+// Entities/BinaryVulnAssertion.cs
+[Table("binary_vuln_assertion")]
+public sealed class BinaryVulnAssertion
+{
+    [Key]
+    [Column("id")]
+    public Guid Id { get; init; } = Guid.NewGuid();
+
+    [Column("binary_identity_id")]
+    public Guid BinaryIdentityId { get; init; }
+
+    [Required]
+    [MaxLength(64)]
+    [Column("vuln_id")]
+    public required string VulnId { get; init; }
+
+    [Required]
+    [MaxLength(32)]
+    [Column("status")]
+    public required string Status { get; init; }
+
+    [Required]
+    [MaxLength(64)]
+    [Column("source")]
+    public required string Source { get; init; }
+
+    [Required]
+    [MaxLength(32)]
+    [Column("assertion_type")]
+    public required string AssertionType { get; init; }
+
+    [Column("confidence")]
+    public decimal Confidence { get; init; }
+
+    [Column("evidence_json", TypeName = "jsonb")]
+    public string? EvidenceJson { get; init; }
+
+    [Column("valid_from")]
+    public DateTimeOffset ValidFrom { get; init; }
+
+    [Column("valid_until")]
+    public DateTimeOffset? ValidUntil { get; init; }
+
+    [MaxLength(256)]
+    [Column("signature_ref")]
+    public string? SignatureRef { get; init; }
+
+    [Column("created_at")]
+    public DateTimeOffset CreatedAt { get; init; } = DateTimeOffset.UtcNow;
+
+    // Navigation
+    [ForeignKey(nameof(BinaryIdentityId))]
+    public BinaryIdentity? BinaryIdentity { get; init; }
+}
+```
+
+**Repository Interface**:
+```csharp
+// Repositories/IBinaryEvidenceRepository.cs
+public interface IBinaryEvidenceRepository
+{
+    // Identity operations
+    Task<BinaryIdentity?> GetByIdAsync(Guid id, CancellationToken ct = default);
+    Task<BinaryIdentity?> GetByBuildIdAsync(string buildId, CancellationToken ct = default);
+    Task<BinaryIdentity?> GetByFileSha256Async(string sha256, CancellationToken ct = default);
+    Task<BinaryIdentity?> GetByTextSha256Async(string sha256, CancellationToken ct = default);
+    Task<IReadOnlyList<BinaryIdentity>> GetByScanIdAsync(Guid scanId, CancellationToken ct = default);
+    Task<BinaryIdentity> AddAsync(BinaryIdentity identity, CancellationToken ct = default);
+
+    // Package map operations
+    Task<IReadOnlyList<BinaryPackageMap>> GetPackageMapsAsync(Guid binaryId, CancellationToken ct = default);
+    Task<BinaryPackageMap> AddPackageMapAsync(BinaryPackageMap map, CancellationToken ct = default);
+
+    // Vuln assertion operations
+    Task<IReadOnlyList<BinaryVulnAssertion>> GetVulnAssertionsAsync(Guid binaryId, CancellationToken ct = default);
+    Task<IReadOnlyList<BinaryVulnAssertion>> GetVulnAssertionsByVulnIdAsync(string vulnId, CancellationToken ct = default);
+    Task<BinaryVulnAssertion> AddVulnAssertionAsync(BinaryVulnAssertion assertion, CancellationToken ct = default);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Entity classes created
+- [ ] Repository interface defined
+- [ ] CRUD operations for all three tables
+- [ ] Lookup by build_id, file_sha256, text_sha256
+
+---
+
+### T5: Create BinaryEvidenceService
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T4
+
+**Description**:
+Business logic layer for binary evidence.
+
+**Implementation Path**: `Services/BinaryEvidenceService.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.Storage.Services;
+
+public interface IBinaryEvidenceService
+{
+    Task<BinaryIdentity> RecordBinaryAsync(
+        Guid scanId,
+        BinaryInfo binary,
+        CancellationToken ct = default);
+
+    Task<BinaryPackageMap?> MatchToPackageAsync(
+        Guid binaryId,
+        string purl,
+        PackageMatchEvidence evidence,
+        CancellationToken ct = default);
+
+    Task<BinaryVulnAssertion> RecordAssertionAsync(
+        Guid binaryId,
+        string vulnId,
+        AssertionInfo assertion,
+        CancellationToken ct = default);
+
+    Task<BinaryEvidence?> GetEvidenceForBinaryAsync(
+        string buildIdOrHash,
+        CancellationToken ct = default);
+}
+
+public sealed class BinaryEvidenceService : IBinaryEvidenceService
+{
+    private readonly IBinaryEvidenceRepository _repository;
+    private readonly ILogger<BinaryEvidenceService> _logger;
+
+    public BinaryEvidenceService(
+        IBinaryEvidenceRepository repository,
+        ILogger<BinaryEvidenceService> logger)
+    {
+        _repository = repository;
+        _logger = logger;
+    }
+
+    public async Task<BinaryIdentity> RecordBinaryAsync(
+        Guid scanId,
+        BinaryInfo binary,
+        CancellationToken ct = default)
+    {
+        // Check if we've seen this binary before (by hash)
+        var existing = await _repository.GetByFileSha256Async(binary.FileSha256, ct);
+        if (existing is not null)
+        {
+            _logger.LogDebug(
+                "Binary {Path} already recorded as {Id}",
+                binary.FilePath, existing.Id);
+            return existing;
+        }
+
+        var identity = new BinaryIdentity
+        {
+            ScanId = scanId,
+            FilePath = binary.FilePath,
+            FileSha256 = binary.FileSha256,
+            TextSha256 = binary.TextSha256,
+            BuildId = binary.BuildId,
+            BuildIdType = binary.BuildIdType,
+            Architecture = binary.Architecture,
+            BinaryFormat = binary.Format,
+            FileSize = binary.FileSize,
+            IsStripped = binary.IsStripped,
+            HasDebugInfo = binary.HasDebugInfo
+        };
+
+        return await _repository.AddAsync(identity, ct);
+    }
+
+    public async Task<BinaryPackageMap?> MatchToPackageAsync(
+        Guid binaryId,
+        string purl,
+        PackageMatchEvidence evidence,
+        CancellationToken ct = default)
+    {
+        var map = new BinaryPackageMap
+        {
+            BinaryIdentityId = binaryId,
+            Purl = purl,
+            MatchType = evidence.MatchType,
+            Confidence = evidence.Confidence,
+            MatchSource = evidence.Source,
+            EvidenceJson = JsonSerializer.Serialize(evidence.Details)
+        };
+
+        try
+        {
+            return await _repository.AddPackageMapAsync(map, ct);
+        }
+        catch (DbUpdateException ex) when (ex.InnerException is PostgresException { SqlState: "23505" })
+        {
+            // Unique constraint violation - mapping already exists
+            _logger.LogDebug("Package map already exists for {Binary} -> {Purl}", binaryId, purl);
+            return null;
+        }
+    }
+
+    public async Task<BinaryVulnAssertion> RecordAssertionAsync(
+        Guid binaryId,
+        string vulnId,
+        AssertionInfo assertion,
+        CancellationToken ct = default)
+    {
+        var vulnAssertion = new BinaryVulnAssertion
+        {
+            BinaryIdentityId = binaryId,
+            VulnId = vulnId,
+            Status = assertion.Status,
+            Source = assertion.Source,
+            AssertionType = assertion.Type,
+            Confidence = assertion.Confidence,
+            EvidenceJson = JsonSerializer.Serialize(assertion.Evidence),
+            ValidFrom = assertion.ValidFrom,
+            ValidUntil = assertion.ValidUntil,
+            SignatureRef = assertion.SignatureRef
+        };
+
+        return await _repository.AddVulnAssertionAsync(vulnAssertion, ct);
+    }
+
+    public async Task<BinaryEvidence?> GetEvidenceForBinaryAsync(
+        string buildIdOrHash,
+        CancellationToken ct = default)
+    {
+        // Try build ID first
+        var identity = await _repository.GetByBuildIdAsync(buildIdOrHash, ct);
+
+        // Fallback to SHA256
+        identity ??= await _repository.GetByFileSha256Async(buildIdOrHash, ct);
+        identity ??= await _repository.GetByTextSha256Async(buildIdOrHash, ct);
+
+        if (identity is null)
+            return null;
+
+        var packages = await _repository.GetPackageMapsAsync(identity.Id, ct);
+        var assertions = await _repository.GetVulnAssertionsAsync(identity.Id, ct);
+
+        return new BinaryEvidence
+        {
+            Identity = identity,
+            PackageMaps = packages,
+            VulnAssertions = assertions
+        };
+    }
+}
+
+// DTOs
+public sealed record BinaryInfo(
+    string FilePath,
+    string FileSha256,
+    string? TextSha256,
+    string? BuildId,
+    string? BuildIdType,
+    string Architecture,
+    string Format,
+    long FileSize,
+    bool IsStripped,
+    bool HasDebugInfo);
+
+public sealed record PackageMatchEvidence(
+    string MatchType,
+    decimal Confidence,
+    string Source,
+    object? Details);
+
+public sealed record AssertionInfo(
+    string Status,
+    string Source,
+    string Type,
+    decimal Confidence,
+    object? Evidence,
+    DateTimeOffset ValidFrom,
+    DateTimeOffset? ValidUntil,
+    string? SignatureRef);
+
+public sealed record BinaryEvidence(
+    BinaryIdentity Identity,
+    IReadOnlyList<BinaryPackageMap> PackageMaps,
+    IReadOnlyList<BinaryVulnAssertion> VulnAssertions);
+```
+
+**Acceptance Criteria**:
+- [ ] Service interface defined
+- [ ] Record binary with dedup check
+- [ ] Package mapping with constraint handling
+- [ ] Vulnerability assertion recording
+- [ ] Evidence retrieval by ID or hash
+
+---
+
+### T6: Integrate with Scanner
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T5
+
+**Description**:
+Wire binary evidence service into scanner workflow.
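+
+Wiring starts with registration. A minimal sketch assuming the scanner uses a standard `IServiceCollection` composition root (the concrete `BinaryEvidenceRepository` class name is an assumption; only the interfaces are defined above):
+
+```csharp
+// Hypothetical registration in the scanner's storage module.
+services.AddScoped<IBinaryEvidenceRepository, BinaryEvidenceRepository>();
+services.AddScoped<IBinaryEvidenceService, BinaryEvidenceService>();
+```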
+
+**Implementation Path**: Modify scanner analyzer
+
+**Integration Points**:
+```csharp
+// In BinaryAnalyzer or similar
+public sealed class BinaryAnalyzer : IAnalyzer
+{
+    private readonly IBinaryEvidenceService _evidenceService;
+
+    // 'AnalysisResult', 'results', and 'staticAnalysisResults' below stand in for
+    // the analyzer's existing contract; their definitions are elided in this sketch.
+    public async Task<AnalysisResult> AnalyzeAsync(
+        ScanContext context,
+        CancellationToken ct = default)
+    {
+        foreach (var binary in context.Binaries)
+        {
+            // Parse binary headers
+            var info = ParseBinaryInfo(binary);
+
+            // Record in evidence store
+            var identity = await _evidenceService.RecordBinaryAsync(
+                context.ScanId,
+                info,
+                ct);
+
+            // Attempt package matching
+            var matchResult = await MatchBinaryToPackageAsync(identity, context, ct);
+            if (matchResult is not null)
+            {
+                await _evidenceService.MatchToPackageAsync(
+                    identity.Id,
+                    matchResult.Purl,
+                    new PackageMatchEvidence(
+                        MatchType: matchResult.MatchType,
+                        Confidence: matchResult.Confidence,
+                        Source: "build-id-index",
+                        Details: matchResult.Evidence),
+                    ct);
+            }
+
+            // Record any vuln assertions from static analysis
+            foreach (var assertion in staticAnalysisResults)
+            {
+                await _evidenceService.RecordAssertionAsync(
+                    identity.Id,
+                    assertion.VulnId,
+                    new AssertionInfo(
+                        Status: assertion.Status,
+                        Source: "static-analysis",
+                        Type: assertion.Type,
+                        Confidence: assertion.Confidence,
+                        Evidence: assertion.Evidence,
+                        ValidFrom: DateTimeOffset.UtcNow,
+                        ValidUntil: null,
+                        SignatureRef: null),
+                    ct);
+            }
+        }
+
+        return results;
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Scanner calls evidence service
+- [ ] Binaries recorded during scan
+- [ ] Package matches persisted
+- [ ] Vuln assertions stored
+- [ ] No performance regression
+
+---
+
+### T7: API Endpoints
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T5
+
+**Description**:
+Create API for binary evidence queries.
+
+**Implementation Path**: `src/Scanner/StellaOps.Scanner.WebService/Endpoints/BinaryEvidenceEndpoints.cs`
+
+**Implementation**:
+```csharp
+public static class BinaryEvidenceEndpoints
+{
+    public static void MapBinaryEvidenceEndpoints(this WebApplication app)
+    {
+        var group = app.MapGroup("/api/v1/binaries")
+            .WithTags("Binary Evidence")
+            .RequireAuthorization();
+
+        // GET /binaries/{id}
+        group.MapGet("/{id:guid}", async (
+            Guid id,
+            IBinaryEvidenceService service,
+            CancellationToken ct) =>
+        {
+            var evidence = await service.GetEvidenceForBinaryAsync(id.ToString(), ct);
+            return evidence is null ? Results.NotFound() : Results.Ok(evidence);
+        })
+        .WithName("GetBinaryEvidence")
+        .WithDescription("Get evidence for a binary by ID");
+
+        // GET /binaries/by-build-id/{buildId}
+        group.MapGet("/by-build-id/{buildId}", async (
+            string buildId,
+            IBinaryEvidenceService service,
+            CancellationToken ct) =>
+        {
+            var evidence = await service.GetEvidenceForBinaryAsync(buildId, ct);
+            return evidence is null ? Results.NotFound() : Results.Ok(evidence);
+        })
+        .WithName("GetBinaryEvidenceByBuildId")
+        .WithDescription("Get evidence for a binary by Build-ID");
+
+        // GET /binaries/by-hash/{hash}
+        group.MapGet("/by-hash/{hash}", async (
+            string hash,
+            IBinaryEvidenceService service,
+            CancellationToken ct) =>
+        {
+            var evidence = await service.GetEvidenceForBinaryAsync(hash, ct);
+            return evidence is null ? Results.NotFound() : Results.Ok(evidence);
+        })
+        .WithName("GetBinaryEvidenceByHash")
+        .WithDescription("Get evidence for a binary by SHA256 hash");
+
+        // GET /scans/{scanId}/binaries
+        group.MapGet("/scans/{scanId:guid}", async (
+            Guid scanId,
+            IBinaryEvidenceRepository repository,
+            CancellationToken ct) =>
+        {
+            var binaries = await repository.GetByScanIdAsync(scanId, ct);
+            return Results.Ok(binaries);
+        })
+        .WithName("GetBinariesByScan")
+        .WithDescription("Get all binaries from a scan");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] GET /binaries/{id} works
+- [ ] GET /binaries/by-build-id/{buildId} works
+- [ ] GET /binaries/by-hash/{hash} works
+- [ ] GET /scans/{scanId}/binaries works
+- [ ] Authorization required
+
+---
+
+### T8: Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1-T7
+
+**Description**:
+Tests for binary evidence functionality.
+
+**Test Cases**:
+```csharp
+// 'PostgresFixture' is illustrative - any Testcontainers-backed PostgreSQL fixture works here.
+public class BinaryEvidenceServiceTests : IClassFixture<PostgresFixture>
+{
+    [Fact]
+    public async Task RecordBinary_NewBinary_CreatesRecord()
+    {
+        var info = new BinaryInfo(
+            FilePath: "/usr/lib/libc.so.6",
+            FileSha256: "abc123...",
+            TextSha256: "def456...",
+            BuildId: "aabbccdd",
+            BuildIdType: "gnu",
+            Architecture: "x86_64",
+            Format: "ELF",
+            FileSize: 1024000,
+            IsStripped: false,
+            HasDebugInfo: true);
+
+        var identity = await _service.RecordBinaryAsync(Guid.NewGuid(), info);
+
+        identity.Should().NotBeNull();
+        identity.BuildId.Should().Be("aabbccdd");
+    }
+
+    [Fact]
+    public async Task RecordBinary_DuplicateHash_ReturnsExisting()
+    {
+        var info = CreateBinaryInfo();
+        var first = await _service.RecordBinaryAsync(Guid.NewGuid(), info);
+        var second = await _service.RecordBinaryAsync(Guid.NewGuid(), info);
+
+        first.Id.Should().Be(second.Id);
+    }
+
+    [Fact]
+    public async Task MatchToPackage_Valid_CreatesMapping()
+    {
+        var identity = await CreateBinaryAsync();
+        var evidence = new PackageMatchEvidence(
+            MatchType: "build-id",
+            Confidence: 0.95m,
+            Source: "build-id-index",
+            Details: new { debugInfo = "/usr/lib/debug/..."
}); + + var map = await _service.MatchToPackageAsync( + identity.Id, + "pkg:rpm/glibc@2.28", + evidence); + + map.Should().NotBeNull(); + map!.Confidence.Should().Be(0.95m); + } + + [Fact] + public async Task RecordAssertion_Valid_CreatesAssertion() + { + var identity = await CreateBinaryAsync(); + var assertion = new AssertionInfo( + Status: "not_affected", + Source: "static-analysis", + Type: "symbol_absence", + Confidence: 0.8m, + Evidence: new { checkedSymbols = new[] { "vulnerable_func" } }, + ValidFrom: DateTimeOffset.UtcNow, + ValidUntil: null, + SignatureRef: null); + + var result = await _service.RecordAssertionAsync( + identity.Id, + "CVE-2024-1234", + assertion); + + result.Should().NotBeNull(); + result.Status.Should().Be("not_affected"); + } + + [Fact] + public async Task GetEvidence_ByBuildId_ReturnsComplete() + { + var identity = await CreateBinaryWithMappingsAndAssertionsAsync(); + + var evidence = await _service.GetEvidenceForBinaryAsync(identity.BuildId!); + + evidence.Should().NotBeNull(); + evidence!.Identity.Id.Should().Be(identity.Id); + evidence.PackageMaps.Should().NotBeEmpty(); + evidence.VulnAssertions.Should().NotBeEmpty(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Record binary tests +- [ ] Duplicate handling tests +- [ ] Package mapping tests +- [ ] Vuln assertion tests +- [ ] Evidence retrieval tests +- [ ] Tests use Testcontainers PostgreSQL + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Scanner Team | Migration: binary_identity table | +| 2 | T2 | TODO | T1 | Scanner Team | Migration: binary_package_map table | +| 3 | T3 | TODO | T1 | Scanner Team | Migration: binary_vuln_assertion table | +| 4 | T4 | TODO | T1-T3 | Scanner Team | Create IBinaryEvidenceRepository | +| 5 | T5 | TODO | T4 | Scanner Team | Create BinaryEvidenceService | +| 6 | T6 | TODO | T5 | Scanner Team | Integrate with scanner | +| 7 | T7 | TODO | T5 | Scanner Team | API endpoints | +| 8 | T8 | TODO | T1-T7 | Scanner Team | Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. Binary evidence persistence identified as required feature. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Schema design | Decision | Scanner Team | Three tables: identity, package_map, vuln_assertion | +| Dedup by hash | Decision | Scanner Team | Use file_sha256 for deduplication | +| Build-ID index | Decision | Scanner Team | Primary lookup by build-id when available | +| Validity period | Decision | Scanner Team | Assertions can have expiry | + +--- + +## Success Criteria + +- [ ] All 8 tasks marked DONE +- [ ] Binary identities persisted to PostgreSQL +- [ ] Package mapping queryable by digest +- [ ] Vulnerability assertions stored +- [ ] Build-ID lookups use persistent store +- [ ] API endpoints work +- [ ] All tests pass +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_4500_0002_0001_vex_conflict_studio.md b/docs/implplan/SPRINT_4500_0002_0001_vex_conflict_studio.md new file mode 100644 index 000000000..83630307a --- /dev/null +++ b/docs/implplan/SPRINT_4500_0002_0001_vex_conflict_studio.md @@ -0,0 +1,1281 @@ +# Sprint 4500.0002.0001 · VEX Conflict Studio UI + +## Topic & Scope + +- Create UI for visualizing and resolving VEX conflicts +- Show side-by-side conflicting statements with provenance +- Display K4 lattice merge outcome with trust weights +- Enable manual override with audit trail + +**Working directory:** `src/Web/StellaOps.Web/src/app/features/vex-studio/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 4200.0001.0002 (Wire Excititor to Policy K4 Lattice) +- **Downstream**: None +- **Safe to parallelize with**: Sprint 4500.0003.0001 (Operator/Auditor Mode) + +## Documentation Prerequisites + +- `src/Web/StellaOps.Web/AGENTS.md` +- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md` +- Existing: MergeTrace, VexConflictResolution from Excititor + +--- + +## Tasks + +### T1: Create vex-conflict-studio.component.ts + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the main VEX conflict studio view. 
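+
+The studio renders the outcome of the backend's K4 merge, so the join semantics are worth pinning down before the UI work. A minimal C# sketch of a Belnap-style knowledge join, matching the node layout the lattice diagram in T4 uses (affected = Both at the top, under_investigation = Neither at the bottom); this is an illustration, not the Excititor implementation:
+
+```csharp
+public enum K4 { Neither, True, False, Both }
+
+public static class K4Lattice
+{
+    // Knowledge-order join: Neither is the bottom, Both is the top,
+    // True and False are incomparable and join to Both.
+    public static K4 Join(K4 left, K4 right)
+    {
+        if (left == right) return left;
+        if (left == K4.Neither) return right;
+        if (right == K4.Neither) return left;
+        return K4.Both; // Both with anything, or True against False
+    }
+}
+```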
+
+**Implementation Path**: `vex-conflict-studio/vex-conflict-studio.component.ts` (new file)
+
+**Implementation**:
+```typescript
+import { Component, OnInit, ChangeDetectionStrategy, signal, computed } from '@angular/core';
+import { CommonModule } from '@angular/common';
+import { MatCardModule } from '@angular/material/card';
+import { MatButtonModule } from '@angular/material/button';
+import { MatIconModule } from '@angular/material/icon';
+import { MatChipsModule } from '@angular/material/chips';
+import { MatDividerModule } from '@angular/material/divider';
+import { MatSelectModule } from '@angular/material/select';
+import { MatDialogModule, MatDialog } from '@angular/material/dialog';
+import { ActivatedRoute } from '@angular/router';
+
+export interface VexStatement {
+  id: string;
+  vulnId: string;
+  productId: string;
+  status: 'affected' | 'not_affected' | 'fixed' | 'under_investigation';
+  source: string;
+  issuer?: string;
+  timestamp: Date;
+  signature?: {
+    signedBy: string;
+    signedAt: Date;
+    valid: boolean;
+  };
+  justification?: string;
+  actionStatement?: string;
+}
+
+export interface VexConflict {
+  id: string;
+  vulnId: string;
+  productId: string;
+  statements: VexStatement[];
+  mergeResult: {
+    winningStatement: VexStatement;
+    reason: 'trust_weight' | 'freshness' | 'lattice_position' | 'tie';
+    trace: MergeTrace;
+  };
+  hasManualOverride: boolean;
+  overrideStatement?: VexStatement;
+}
+
+export interface MergeTrace {
+  leftSource: string;
+  rightSource: string;
+  leftStatus: string;
+  rightStatus: string;
+  leftTrust: number;
+  rightTrust: number;
+  resultStatus: string;
+  explanation: string;
+}
+
+@Component({
+  selector: 'stella-vex-conflict-studio',
+  standalone: true,
+  imports: [
+    CommonModule,
+    MatCardModule,
+    MatButtonModule,
+    MatIconModule,
+    MatChipsModule,
+    MatDividerModule,
+    MatSelectModule,
+    MatDialogModule
+  ],
+  templateUrl: './vex-conflict-studio.component.html',
+  styleUrls: ['./vex-conflict-studio.component.scss'],
+  changeDetection: ChangeDetectionStrategy.OnPush
+})
+export class VexConflictStudioComponent implements OnInit {
+  // State
+  conflicts = signal<VexConflict[]>([]);
+  selectedConflict = signal<VexConflict | null>(null);
+  filterStatus = signal<string | null>(null);
+  sortBy = signal<'timestamp' | 'severity' | 'source'>('timestamp');
+
+  // Computed
+  filteredConflicts = computed(() => {
+    let result = this.conflicts();
+    const status = this.filterStatus();
+
+    if (status) {
+      result = result.filter(c =>
+        c.statements.some(s => s.status === status)
+      );
+    }
+
+    const sort = this.sortBy();
+    // Copy before sorting so the signal's backing array is not mutated
+    return [...result].sort((a, b) => {
+      switch (sort) {
+        case 'timestamp':
+          return new Date(b.statements[0].timestamp).getTime() -
+                 new Date(a.statements[0].timestamp).getTime();
+        case 'source':
+          return a.statements[0].source.localeCompare(b.statements[0].source);
+        default:
+          return 0;
+      }
+    });
+  });
+
+  constructor(
+    private route: ActivatedRoute,
+    private dialog: MatDialog,
+    private vexService: VexConflictService // feature data service, provided alongside this component
+  ) {}
+
+  async ngOnInit(): Promise<void> {
+    const productId = this.route.snapshot.paramMap.get('productId');
+    const vulnId = this.route.snapshot.queryParamMap.get('vulnId');
+
+    await this.loadConflicts(productId, vulnId);
+  }
+
+  async loadConflicts(productId?: string | null, vulnId?: string | null): Promise<void> {
+    const conflicts = await this.vexService.getConflicts({
+      productId: productId ?? undefined,
+      vulnId: vulnId ?? undefined
+    });
+    this.conflicts.set(conflicts);
+  }
+
+  selectConflict(conflict: VexConflict): void {
+    this.selectedConflict.set(conflict);
+  }
+
+  getStatusIcon(status: string): string {
+    switch (status) {
+      case 'affected': return 'error';
+      case 'not_affected': return 'check_circle';
+      case 'fixed': return 'build_circle';
+      case 'under_investigation': return 'help';
+      default: return 'help_outline';
+    }
+  }
+
+  getStatusClass(status: string): string {
+    return `status-${status.replace('_', '-')}`;
+  }
+
+  getTrustPercent(trust: number): string {
+    return `${(trust * 100).toFixed(0)}%`;
+  }
+
+  getReasonLabel(reason: string): string {
+    switch (reason) {
+      case 'trust_weight': return 'Higher Trust';
+      case 'freshness': return 'More Recent';
+      case 'lattice_position': return 'K4 Lattice';
+      case 'tie': return 'Tie (First Used)';
+      default: return reason;
+    }
+  }
+
+  async openOverrideDialog(conflict: VexConflict): Promise<void> {
+    const dialogRef = this.dialog.open(OverrideDialogComponent, {
+      width: '600px',
+      data: { conflict }
+    });
+
+    const result = await dialogRef.afterClosed().toPromise();
+    if (result) {
+      await this.applyOverride(conflict, result);
+    }
+  }
+
+  async applyOverride(conflict: VexConflict, override: OverrideRequest): Promise<void> {
+    await this.vexService.applyOverride(conflict.id, override);
+    await this.loadConflicts();
+  }
+
+  async removeOverride(conflict: VexConflict): Promise<void> {
+    await this.vexService.removeOverride(conflict.id);
+    await this.loadConflicts();
+  }
+}
+```
+
+**Template** (`vex-conflict-studio.component.html`):
+```html
+<div class="vex-conflict-studio">
+  <div class="studio-header">
+    <h2>VEX Conflict Studio</h2>
+
+    <div class="filters">
+      <mat-select placeholder="Status" [value]="filterStatus()"
+                  (valueChange)="filterStatus.set($event)">
+        <mat-option [value]="null">All</mat-option>
+        <mat-option value="affected">Affected</mat-option>
+        <mat-option value="not_affected">Not Affected</mat-option>
+        <mat-option value="fixed">Fixed</mat-option>
+        <mat-option value="under_investigation">Under Investigation</mat-option>
+      </mat-select>
+
+      <mat-select [value]="sortBy()" (valueChange)="sortBy.set($event)">
+        <mat-option value="timestamp">Sort by Time</mat-option>
+        <mat-option value="source">Sort by Source</mat-option>
+      </mat-select>
+    </div>
+  </div>
+
+  <div class="studio-content">
+    <div class="conflict-list">
+      <mat-card *ngFor="let conflict of filteredConflicts()"
+                [class.selected]="selectedConflict()?.id === conflict.id"
+                (click)="selectConflict(conflict)">
+        <mat-card-title>{{ conflict.vulnId }} - {{ conflict.productId }}</mat-card-title>
+        <mat-chip *ngIf="conflict.hasManualOverride" class="override-chip">Override</mat-chip>
+      </mat-card>
+    </div>
+
+    <div class="conflict-detail" *ngIf="selectedConflict() as conflict; else noSelection">
+      <div class="statements-comparison">
+        <div class="statement-card"
+             *ngFor="let stmt of conflict.statements"
+             [class.winner]="stmt.id === conflict.mergeResult.winningStatement.id">
+          <div class="statement-header">
+            <mat-icon [class]="getStatusClass(stmt.status)">{{ getStatusIcon(stmt.status) }}</mat-icon>
+            <span class="status">{{ stmt.status | uppercase }}</span>
+            <mat-chip *ngIf="stmt.id === conflict.mergeResult.winningStatement.id">Winner</mat-chip>
+          </div>
+
+          <div class="statement-source">Source: {{ stmt.source }}</div>
+          <div class="statement-issuer" *ngIf="stmt.issuer">Issuer: {{ stmt.issuer }}</div>
+          <div class="statement-time">Timestamp: {{ stmt.timestamp | date:'medium' }}</div>
+
+          <div class="statement-signature" *ngIf="stmt.signature">
+            <mat-icon [class.valid]="stmt.signature.valid" [class.invalid]="!stmt.signature.valid">
+              {{ stmt.signature.valid ? 'verified' : 'dangerous' }}
+            </mat-icon>
+            <span>
+              Signed by {{ stmt.signature.signedBy }}
+              {{ stmt.signature.valid ? '' : '(Invalid)' }}
+            </span>
+          </div>
+
+          <div class="statement-justification" *ngIf="stmt.justification">
+            <strong>Justification:</strong>
+            <p>{{ stmt.justification }}</p>
+          </div>
+        </div>
+      </div>
+
+      <div class="merge-explanation">
+        <h4>Merge Decision</h4>
+
+        <div class="merge-trace">
+          <div class="trace-row">
+            <span class="label">Resolution:</span>
+            <mat-chip [class]="'reason-' + conflict.mergeResult.reason">
+              {{ getReasonLabel(conflict.mergeResult.reason) }}
+            </mat-chip>
+          </div>
+
+          <div class="trace-row">
+            <span class="label">{{ conflict.mergeResult.trace.leftSource }}:</span>
+            <span>{{ conflict.mergeResult.trace.leftStatus }}</span>
+            <span class="trust">Trust: {{ getTrustPercent(conflict.mergeResult.trace.leftTrust) }}</span>
+          </div>
+
+          <div class="trace-row">
+            <span class="label">{{ conflict.mergeResult.trace.rightSource }}:</span>
+            <span>{{ conflict.mergeResult.trace.rightStatus }}</span>
+            <span class="trust">Trust: {{ getTrustPercent(conflict.mergeResult.trace.rightTrust) }}</span>
+          </div>
+
+          <div class="trace-explanation">
+            {{ conflict.mergeResult.trace.explanation }}
+          </div>
+        </div>
+
+        <div class="lattice-section">
+          <h4>K4 Lattice Position</h4>
+          <stella-lattice-diagram
+            [leftValue]="conflict.mergeResult.trace.leftStatus"
+            [rightValue]="conflict.mergeResult.trace.rightStatus"
+            [result]="conflict.mergeResult.trace.resultStatus">
+          </stella-lattice-diagram>
+        </div>
+      </div>
+
+      <div class="override-section">
+        <h4>Manual Override</h4>
+
+        <div class="active-override" *ngIf="conflict.hasManualOverride; else noOverride">
+          <span>
+            Override active: Using
+            {{ conflict.overrideStatement?.source }}
+            ({{ conflict.overrideStatement?.status }})
+          </span>
+          <button mat-stroked-button (click)="removeOverride(conflict)">Remove Override</button>
+        </div>
+
+        <ng-template #noOverride>
+          <p>No override active. The automatic merge decision is being used.</p>
+          <button mat-flat-button color="primary" (click)="openOverrideDialog(conflict)">
+            Set Override
+          </button>
+        </ng-template>
+      </div>
+    </div>
+
+    <ng-template #noSelection>
+      <div class="no-selection">
+        <mat-icon>touch_app</mat-icon>
+        <p>Select a conflict to view details</p>
+      </div>
+    </ng-template>
+  </div>
+</div>
+``` + +**Acceptance Criteria**: +- [ ] Main studio view created +- [ ] Conflict list with filtering +- [ ] Detail view for selected conflict +- [ ] Statement comparison layout + +--- + +### T2: Side-by-Side Statements + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement side-by-side statement comparison. + +**Styles** (`vex-conflict-studio.component.scss`): +```scss +.vex-conflict-studio { + display: flex; + flex-direction: column; + height: 100%; +} + +.studio-header { + display: flex; + justify-content: space-between; + align-items: center; + padding: 16px 24px; + background: var(--surface-container); + border-bottom: 1px solid var(--outline-variant); + + h2 { + margin: 0; + } + + .filters { + display: flex; + gap: 16px; + } +} + +.studio-content { + display: flex; + flex: 1; + overflow: hidden; +} + +.conflict-list { + width: 300px; + flex-shrink: 0; + overflow-y: auto; + padding: 16px; + border-right: 1px solid var(--outline-variant); + background: var(--surface); + + mat-card { + margin-bottom: 12px; + cursor: pointer; + transition: all 0.2s; + + &:hover { + box-shadow: var(--elevation-2); + } + + &.selected { + border: 2px solid var(--primary); + } + } + + .override-chip { + background: var(--warning); + color: var(--on-warning); + } +} + +.conflict-detail { + flex: 1; + overflow-y: auto; + padding: 24px; + + h3 { + margin: 0 0 24px; + } +} + +.statements-comparison { + display: grid; + grid-template-columns: repeat(auto-fit, minmax(300px, 1fr)); + gap: 16px; + margin-bottom: 24px; +} + +.statement-card { + padding: 16px; + background: var(--surface-variant); + border-radius: 8px; + border: 2px solid transparent; + + &.winner { + border-color: var(--primary); + background: var(--primary-container); + } + + .statement-header { + display: flex; + align-items: center; + gap: 8px; + margin-bottom: 12px; + + .status { + font-weight: 600; + } + } + + .statement-source, + .statement-issuer, + .statement-time { + margin-bottom: 8px; + font-size: 0.875rem; + } + + .statement-signature { + display: flex; + align-items: center; + gap: 8px; + margin-top: 12px; + padding-top: 12px; + border-top: 1px solid var(--outline-variant); + + mat-icon.valid { color: var(--success); } + mat-icon.invalid { color: var(--error); } + } +} + +.status-affected { color: var(--error); } +.status-not-affected { color: var(--success); } +.status-fixed { color: var(--primary); } +.status-under-investigation { color: var(--warning); } + +.merge-explanation { + margin: 24px 0; + + h4 { + margin: 0 0 16px; + } +} + +.merge-trace { + background: var(--surface-variant); + padding: 16px; + border-radius: 8px; + margin-bottom: 16px; + + .trace-row { + display: flex; + align-items: center; + gap: 12px; + margin-bottom: 8px; + + .label { + font-weight: 500; + min-width: 120px; + } + + .trust { + margin-left: auto; + color: var(--on-surface-variant); + } + } + + .trace-explanation { + margin-top: 16px; + padding-top: 16px; + border-top: 1px solid var(--outline-variant); + font-style: italic; + } +} + +.reason-trust_weight { background: var(--primary-container); } +.reason-freshness { background: var(--tertiary-container); } +.reason-lattice_position { background: var(--secondary-container); } +.reason-tie { background: var(--surface-variant); } + +.override-section { + margin-top: 24px; + + .active-override { + display: flex; + align-items: center; + justify-content: space-between; + padding: 16px; + background: var(--warning-container); + border-radius: 8px; + } 
+} + +.no-selection, +.empty-state { + display: flex; + flex-direction: column; + align-items: center; + justify-content: center; + height: 100%; + color: var(--on-surface-variant); + + mat-icon { + font-size: 64px; + width: 64px; + height: 64px; + margin-bottom: 16px; + } +} +``` + +**Acceptance Criteria**: +- [ ] Statements shown side by side +- [ ] Winner highlighted +- [ ] Status icons colored +- [ ] Responsive grid layout + +--- + +### T3: Provenance Display + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show signature, issuer, and timestamp for each statement. + +**Implementation**: Included in T1 template with signature section. + +**Acceptance Criteria**: +- [ ] Shows issuer name +- [ ] Shows timestamp +- [ ] Shows signature validity +- [ ] Indicates signed-by identity + +--- + +### T4: Lattice Merge Visualization + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create K4 lattice diagram component. + +**Implementation Path**: `shared/lattice-diagram/lattice-diagram.component.ts` + +**Implementation**: +```typescript +import { Component, Input, ChangeDetectionStrategy } from '@angular/core'; +import { CommonModule } from '@angular/common'; + +@Component({ + selector: 'stella-lattice-diagram', + standalone: true, + imports: [CommonModule], + template: ` +
+    <div class="lattice-diagram">
+      <svg class="lattice-svg" viewBox="0 0 200 160">
+        <!-- Edges of the K4 diamond -->
+        <line class="lattice-edge" x1="100" y1="20" x2="40" y2="80" />
+        <line class="lattice-edge" x1="100" y1="20" x2="160" y2="80" />
+        <line class="lattice-edge" x1="40" y1="80" x2="100" y2="140" />
+        <line class="lattice-edge" x1="160" y1="80" x2="100" y2="140" />
+
+        <!-- Join path from the two inputs to the result -->
+        <path class="join-path" [attr.d]="getJoinPath()" fill="none" />
+
+        <!-- Nodes: affected (Both), fixed (True), not_affected (False), under_investigation (Neither) -->
+        <circle [attr.class]="getNodeClass('affected')" cx="100" cy="20" r="14" />
+        <text class="node-label" x="100" y="24" text-anchor="middle">Both</text>
+
+        <circle [attr.class]="getNodeClass('fixed')" cx="40" cy="80" r="14" />
+        <text class="node-label" x="40" y="84" text-anchor="middle">True</text>
+
+        <circle [attr.class]="getNodeClass('not_affected')" cx="160" cy="80" r="14" />
+        <text class="node-label" x="160" y="84" text-anchor="middle">False</text>
+
+        <circle [attr.class]="getNodeClass('under_investigation')" cx="100" cy="140" r="14" />
+        <text class="node-label" x="100" y="144" text-anchor="middle">Neither</text>
+      </svg>
+
+      <div class="lattice-legend">
+        <div class="legend-item">
+          <span class="dot left"></span>
+          {{ leftValue }} (Left)
+        </div>
+        <div class="legend-item">
+          <span class="dot right"></span>
+          {{ rightValue }} (Right)
+        </div>
+        <div class="legend-item">
+          <span class="dot result"></span>
+          {{ result }} (Result)
+        </div>
+      </div>
+
+      <div class="lattice-explanation">
+        The K4 lattice determines merge outcomes:
+        <strong>Affected (Both)</strong> is highest,
+        <strong>Under Investigation (Neither)</strong> is lowest.
+      </div>
+    </div>
+ `, + styles: [` + .lattice-diagram { + padding: 16px; + } + + .lattice-svg { + width: 100%; + max-width: 300px; + height: auto; + margin: 0 auto; + display: block; + } + + circle { + fill: var(--surface-variant); + stroke: var(--outline); + stroke-width: 2; + transition: all 0.3s; + + &.active-left { + fill: var(--tertiary-container); + stroke: var(--tertiary); + stroke-width: 3; + } + + &.active-right { + fill: var(--secondary-container); + stroke: var(--secondary); + stroke-width: 3; + } + + &.active-result { + fill: var(--primary-container); + stroke: var(--primary); + stroke-width: 4; + } + } + + .node-label { + font-size: 10px; + fill: var(--on-surface); + } + + .lattice-edge { + stroke: var(--outline-variant); + stroke-width: 1; + } + + .join-path { + stroke: var(--primary); + stroke-dasharray: 5,5; + animation: dash 1s linear infinite; + } + + @keyframes dash { + to { stroke-dashoffset: -10; } + } + + .lattice-legend { + display: flex; + justify-content: center; + gap: 24px; + margin-top: 16px; + + .legend-item { + display: flex; + align-items: center; + gap: 8px; + font-size: 0.875rem; + + .dot { + width: 12px; + height: 12px; + border-radius: 50%; + + &.left { background: var(--tertiary); } + &.right { background: var(--secondary); } + &.result { background: var(--primary); } + } + } + } + + .lattice-explanation { + margin-top: 16px; + font-size: 0.75rem; + color: var(--on-surface-variant); + text-align: center; + } + `], + changeDetection: ChangeDetectionStrategy.OnPush +}) +export class LatticeDiagramComponent { + @Input() leftValue?: string; + @Input() rightValue?: string; + @Input() result?: string; + + private readonly positions: Record = { + 'affected': { x: 100, y: 20 }, + 'fixed': { x: 40, y: 80 }, + 'not_affected': { x: 160, y: 80 }, + 'under_investigation': { x: 100, y: 140 } + }; + + getNodeClass(status: string): string { + const classes: string[] = []; + if (this.leftValue === status) classes.push('active-left'); + if (this.rightValue === status) classes.push('active-right'); + if (this.result === status) classes.push('active-result'); + return classes.join(' '); + } + + getJoinPath(): string { + if (!this.leftValue || !this.rightValue || !this.result) return ''; + + const left = this.positions[this.leftValue]; + const right = this.positions[this.rightValue]; + const res = this.positions[this.result]; + + if (!left || !right || !res) return ''; + + return `M ${left.x} ${left.y} L ${res.x} ${res.y} L ${right.x} ${right.y}`; + } +} +``` + +**Acceptance Criteria**: +- [ ] K4 lattice diagram rendered +- [ ] Left/right values highlighted +- [ ] Result value highlighted +- [ ] Path animation showing join +- [ ] Legend explaining nodes + +--- + +### T5: Trust Weight Display + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show why one statement won based on trust. + +**Implementation**: Included in T1 merge-trace section. + +**Acceptance Criteria**: +- [ ] Shows trust percentage for each statement +- [ ] Highlights higher trust +- [ ] Explains trust source + +--- + +### T6: Manual Override Option + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Allow admin to set preferred statement. 
+ +**Implementation Path**: `override-dialog/override-dialog.component.ts` + +**Implementation**: +```typescript +import { Component, Inject } from '@angular/core'; +import { CommonModule } from '@angular/common'; +import { FormsModule } from '@angular/forms'; +import { MatDialogModule, MAT_DIALOG_DATA, MatDialogRef } from '@angular/material/dialog'; +import { MatButtonModule } from '@angular/material/button'; +import { MatRadioModule } from '@angular/material/radio'; +import { MatFormFieldModule } from '@angular/material/form-field'; +import { MatInputModule } from '@angular/material/input'; + +export interface OverrideRequest { + preferredStatementId: string; + reason: string; +} + +@Component({ + selector: 'stella-override-dialog', + standalone: true, + imports: [ + CommonModule, + FormsModule, + MatDialogModule, + MatButtonModule, + MatRadioModule, + MatFormFieldModule, + MatInputModule + ], + template: ` +
+    <h2 mat-dialog-title>Set Manual Override</h2>
+
+    <mat-dialog-content>
+      <p>Select which VEX statement should be used instead of the automatic merge result.</p>
+
+      <mat-radio-group class="statement-options" [(ngModel)]="selectedStatementId">
+        <mat-radio-button
+          class="statement-option"
+          *ngFor="let stmt of data.conflict.statements"
+          [value]="stmt.id">
+          {{ stmt.source }}: {{ stmt.status }}
+          <span class="timestamp">({{ stmt.timestamp | date:'short' }})</span>
+        </mat-radio-button>
+      </mat-radio-group>
+
+      <mat-form-field class="reason-field" appearance="outline">
+        <mat-label>Reason for override</mat-label>
+        <textarea matInput required [(ngModel)]="reason"></textarea>
+        <mat-hint>This will be recorded in the audit log</mat-hint>
+      </mat-form-field>
+
+      <div class="warning-box">
+        <mat-icon>warning</mat-icon>
+        Manual overrides bypass the trust-based merge logic. Use with caution.
+      </div>
+    </mat-dialog-content>
+
+    <mat-dialog-actions align="end">
+      <button mat-button mat-dialog-close>Cancel</button>
+      <button
+        mat-flat-button
+        color="primary"
+        [disabled]="!selectedStatementId || !reason"
+        (click)="confirm()">
+        Set Override
+      </button>
+    </mat-dialog-actions>
+ + + + + `, + styles: [` + .statement-options { + display: flex; + flex-direction: column; + gap: 12px; + margin: 16px 0; + } + + .statement-option { + .timestamp { + color: var(--on-surface-variant); + font-size: 0.875rem; + } + } + + .reason-field { + width: 100%; + margin-top: 16px; + } + + .warning-box { + display: flex; + align-items: center; + gap: 12px; + padding: 12px; + background: var(--warning-container); + border-radius: 8px; + margin-top: 16px; + + mat-icon { + color: var(--warning); + } + } + `] +}) +export class OverrideDialogComponent { + selectedStatementId: string = ''; + reason: string = ''; + + constructor( + @Inject(MAT_DIALOG_DATA) public data: { conflict: VexConflict }, + private dialogRef: MatDialogRef + ) {} + + confirm(): void { + this.dialogRef.close({ + preferredStatementId: this.selectedStatementId, + reason: this.reason + } as OverrideRequest); + } +} +``` + +**Acceptance Criteria**: +- [ ] Radio selection for preferred statement +- [ ] Required reason field +- [ ] Warning about bypassing trust +- [ ] Confirmation dialog + +--- + +### T7: Evidence Checklist + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Show required evidence for each VEX status. + +**Implementation Path**: Add to `vex-conflict-studio.component.ts` + +**Component**: +```typescript +// evidence-checklist.component.ts +@Component({ + selector: 'stella-evidence-checklist', + standalone: true, + imports: [CommonModule, MatIconModule, MatListModule], + template: ` +
+    <div class="evidence-checklist">
+      <h5>Required Evidence for "{{ status }}"</h5>
+      <mat-list dense>
+        <mat-list-item *ngFor="let item of getRequiredEvidence(status)">
+          <mat-icon matListItemIcon [class.met]="item.met" [class.unmet]="!item.met">
+            {{ item.met ? 'check_circle' : 'radio_button_unchecked' }}
+          </mat-icon>
+          <span matListItemTitle>{{ item.label }}</span>
+          <span matListItemLine *ngIf="item.description">{{ item.description }}</span>
+        </mat-list-item>
+      </mat-list>
+    </div>
+ `, + styles: [` + .evidence-checklist { + margin-top: 16px; + padding: 16px; + background: var(--surface-variant); + border-radius: 8px; + } + + h5 { + margin: 0 0 12px; + } + + mat-icon.met { color: var(--success); } + mat-icon.unmet { color: var(--outline); } + `] +}) +export class EvidenceChecklistComponent { + @Input() status!: string; + @Input() statement?: VexStatement; + + private readonly requirements: Record = { + 'not_affected': [ + { label: 'Justification provided', key: 'justification' }, + { label: 'Impact statement', key: 'impactStatement' }, + { label: 'Signed by trusted issuer', key: 'signature' } + ], + 'affected': [ + { label: 'Action statement', key: 'actionStatement' }, + { label: 'Severity assessment', key: 'severity' } + ], + 'fixed': [ + { label: 'Fixed version specified', key: 'fixedVersion' }, + { label: 'Fix commit reference', key: 'fixCommit' } + ], + 'under_investigation': [ + { label: 'Investigation timeline', key: 'timeline' } + ] + }; + + getRequiredEvidence(status: string): { label: string; met: boolean; description?: string }[] { + const reqs = this.requirements[status] ?? []; + return reqs.map(req => ({ + label: req.label, + met: this.checkRequirement(req), + description: req.description + })); + } + + private checkRequirement(req: EvidenceRequirement): boolean { + if (!this.statement) return false; + switch (req.key) { + case 'justification': return !!this.statement.justification; + case 'signature': return !!this.statement.signature?.valid; + case 'actionStatement': return !!this.statement.actionStatement; + default: return false; + } + } +} + +interface EvidenceRequirement { + label: string; + key: string; + description?: string; +} +``` + +**Acceptance Criteria**: +- [ ] Shows required evidence per status +- [ ] Checkmarks for met requirements +- [ ] Unchecked for missing requirements +- [ ] Different requirements per status + +--- + +### T8: Tests + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1-T7 + +**Description**: +Component tests for conflict studio. 
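+
+**Test Fixtures** (sketch): the specs below reference `mockConflict` and `mockVexService`, which this sprint does not define. A minimal shape they are assumed to have, with field names inferred from the T1/T2 templates; align with the real `VexConflict`/`VexStatement` models once T1 lands:
+
+```typescript
+// Hypothetical fixtures for the specs below; all names are assumptions.
+const mockConflict = {
+  id: 'conflict-1',
+  vulnerabilityId: 'CVE-2025-0001',
+  statements: [
+    { id: 's1', source: 'vendor', status: 'affected',
+      timestamp: '2025-12-01T00:00:00Z', trust: 0.9, signature: { valid: true } },
+    { id: 's2', source: 'distro', status: 'not_affected',
+      timestamp: '2025-12-02T00:00:00Z', trust: 0.6, signature: { valid: true } }
+  ],
+  winnerStatementId: 's1',
+  mergeReason: 'trust_weight' // rendered as "Higher Trust" in the merge trace
+} as unknown as VexConflict;
+
+const mockVexService = jasmine.createSpyObj<VexConflictService>(
+  'VexConflictService', ['getConflicts']);
+```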
+ +**Test Cases**: +```typescript +describe('VexConflictStudioComponent', () => { + let component: VexConflictStudioComponent; + let fixture: ComponentFixture; + + beforeEach(async () => { + await TestBed.configureTestingModule({ + imports: [VexConflictStudioComponent, NoopAnimationsModule], + providers: [ + { provide: VexConflictService, useValue: mockVexService } + ] + }).compileComponents(); + }); + + it('should load conflicts on init', async () => { + mockVexService.getConflicts.and.returnValue(Promise.resolve([mockConflict])); + fixture.detectChanges(); + await fixture.whenStable(); + + expect(component.conflicts().length).toBe(1); + }); + + it('should show side-by-side statements', () => { + component.conflicts.set([mockConflict]); + component.selectConflict(mockConflict); + fixture.detectChanges(); + + const statements = fixture.nativeElement.querySelectorAll('.statement-card'); + expect(statements.length).toBe(2); + }); + + it('should highlight winner statement', () => { + component.conflicts.set([mockConflict]); + component.selectConflict(mockConflict); + fixture.detectChanges(); + + const winner = fixture.nativeElement.querySelector('.statement-card.winner'); + expect(winner).toBeTruthy(); + }); + + it('should show merge explanation', () => { + component.conflicts.set([mockConflict]); + component.selectConflict(mockConflict); + fixture.detectChanges(); + + const explanation = fixture.nativeElement.querySelector('.merge-trace'); + expect(explanation.textContent).toContain('Higher Trust'); + }); + + it('should open override dialog', async () => { + const dialogSpy = spyOn(component['dialog'], 'open').and.returnValue({ + afterClosed: () => of(null) + } as any); + + component.conflicts.set([mockConflict]); + component.selectConflict(mockConflict); + fixture.detectChanges(); + + await component.openOverrideDialog(mockConflict); + + expect(dialogSpy).toHaveBeenCalled(); + }); +}); + +describe('LatticeDiagramComponent', () => { + it('should highlight nodes correctly', () => { + component.leftValue = 'affected'; + component.rightValue = 'not_affected'; + component.result = 'affected'; + fixture.detectChanges(); + + expect(component.getNodeClass('affected')).toContain('active-left'); + expect(component.getNodeClass('affected')).toContain('active-result'); + expect(component.getNodeClass('not_affected')).toContain('active-right'); + }); +}); +``` + +**Acceptance Criteria**: +- [ ] Test conflict loading +- [ ] Test statement display +- [ ] Test winner highlighting +- [ ] Test merge explanation +- [ ] Test override dialog +- [ ] Test lattice diagram +- [ ] All tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | UI Team | Create vex-conflict-studio.component.ts | +| 2 | T2 | TODO | T1 | UI Team | Side-by-side statements | +| 3 | T3 | TODO | T1 | UI Team | Provenance display | +| 4 | T4 | TODO | T1 | UI Team | Lattice merge visualization | +| 5 | T5 | TODO | T1 | UI Team | Trust weight display | +| 6 | T6 | TODO | T1 | UI Team | Manual override option | +| 7 | T7 | TODO | T1 | UI Team | Evidence checklist | +| 8 | T8 | TODO | T1-T7 | UI Team | Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. VEX Conflict Studio identified as key transparency feature. 
| Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Side-by-side layout | Decision | UI Team | Better comparison than stacked | +| K4 visualization | Decision | UI Team | SVG diagram with animation | +| Override audit | Decision | UI Team | Reason required, logged | +| Evidence checklist | Decision | UI Team | Per-status requirements | + +--- + +## Success Criteria + +- [ ] All 8 tasks marked DONE +- [ ] Conflicts shown side-by-side +- [ ] Provenance visible for each statement +- [ ] Merge outcome explained with K4 diagram +- [ ] Manual override with audit trail +- [ ] Evidence checklist shows requirements +- [ ] All tests pass +- [ ] `ng build` succeeds +- [ ] `ng test` succeeds diff --git a/docs/implplan/SPRINT_4500_0003_0001_operator_auditor_mode.md b/docs/implplan/SPRINT_4500_0003_0001_operator_auditor_mode.md new file mode 100644 index 000000000..d483ed2f3 --- /dev/null +++ b/docs/implplan/SPRINT_4500_0003_0001_operator_auditor_mode.md @@ -0,0 +1,749 @@ +# Sprint 4500.0003.0001 · Operator/Auditor Mode Toggle + +## Topic & Scope + +- Add UI mode toggle for operators vs auditors +- Operators see minimal, action-focused views +- Auditors see full provenance, signatures, and evidence +- Persist preference across sessions + +**Working directory:** `src/Web/StellaOps.Web/src/app/core/` + +## Dependencies & Concurrency + +- **Upstream**: None +- **Downstream**: None +- **Safe to parallelize with**: All other sprints + +## Documentation Prerequisites + +- `src/Web/StellaOps.Web/AGENTS.md` +- `docs/product-advisories/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md` +- Angular service patterns + +--- + +## Problem Statement + +The same UI serves two different audiences with different needs: +- **Operators**: Need speed, want quick answers ("Can I ship?"), minimal detail +- **Auditors**: Need completeness, want full provenance, signatures, evidence chains + +Currently, there's no way to toggle between these views. + +--- + +## Tasks + +### T1: Create ViewModeService + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create service to manage operator/auditor view state. 
+ +**Implementation Path**: `services/view-mode.service.ts` (new file) + +**Implementation**: +```typescript +import { Injectable, signal, computed, effect } from '@angular/core'; + +export type ViewMode = 'operator' | 'auditor'; + +export interface ViewModeConfig { + showSignatures: boolean; + showProvenance: boolean; + showEvidenceDetails: boolean; + showSnapshots: boolean; + showMergeTraces: boolean; + showPolicyDetails: boolean; + compactFindings: boolean; + autoExpandEvidence: boolean; +} + +const OPERATOR_CONFIG: ViewModeConfig = { + showSignatures: false, + showProvenance: false, + showEvidenceDetails: false, + showSnapshots: false, + showMergeTraces: false, + showPolicyDetails: false, + compactFindings: true, + autoExpandEvidence: false +}; + +const AUDITOR_CONFIG: ViewModeConfig = { + showSignatures: true, + showProvenance: true, + showEvidenceDetails: true, + showSnapshots: true, + showMergeTraces: true, + showPolicyDetails: true, + compactFindings: false, + autoExpandEvidence: true +}; + +const STORAGE_KEY = 'stella-view-mode'; + +@Injectable({ providedIn: 'root' }) +export class ViewModeService { + // Current mode + private readonly _mode = signal(this.loadFromStorage()); + + // Public readonly signals + readonly mode = this._mode.asReadonly(); + + // Computed config based on mode + readonly config = computed(() => { + return this._mode() === 'operator' ? OPERATOR_CONFIG : AUDITOR_CONFIG; + }); + + // Convenience computed properties + readonly isOperator = computed(() => this._mode() === 'operator'); + readonly isAuditor = computed(() => this._mode() === 'auditor'); + readonly showSignatures = computed(() => this.config().showSignatures); + readonly showProvenance = computed(() => this.config().showProvenance); + readonly showEvidenceDetails = computed(() => this.config().showEvidenceDetails); + readonly showSnapshots = computed(() => this.config().showSnapshots); + readonly compactFindings = computed(() => this.config().compactFindings); + + constructor() { + // Persist changes to storage + effect(() => { + const mode = this._mode(); + localStorage.setItem(STORAGE_KEY, mode); + }); + } + + /** + * Toggle between operator and auditor mode. + */ + toggle(): void { + this._mode.set(this._mode() === 'operator' ? 'auditor' : 'operator'); + } + + /** + * Set a specific mode. + */ + setMode(mode: ViewMode): void { + this._mode.set(mode); + } + + /** + * Check if a specific feature should be shown. + */ + shouldShow(feature: keyof ViewModeConfig): boolean { + return this.config()[feature] as boolean; + } + + private loadFromStorage(): ViewMode { + const stored = localStorage.getItem(STORAGE_KEY); + if (stored === 'operator' || stored === 'auditor') { + return stored; + } + return 'operator'; // Default to operator mode + } +} +``` + +**Acceptance Criteria**: +- [ ] `ViewModeService` file created +- [ ] Signal-based reactive state +- [ ] Config objects for each mode +- [ ] LocalStorage persistence +- [ ] Toggle and setMode methods + +--- + +### T2: Add Mode Toggle Component + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create toggle switch for the header. 
+ +**Implementation Path**: `components/view-mode-toggle/view-mode-toggle.component.ts` + +**Implementation**: +```typescript +import { Component, ChangeDetectionStrategy } from '@angular/core'; +import { CommonModule } from '@angular/common'; +import { MatSlideToggleModule } from '@angular/material/slide-toggle'; +import { MatIconModule } from '@angular/material/icon'; +import { MatTooltipModule } from '@angular/material/tooltip'; +import { ViewModeService, ViewMode } from '../../services/view-mode.service'; + +@Component({ + selector: 'stella-view-mode-toggle', + standalone: true, + imports: [CommonModule, MatSlideToggleModule, MatIconModule, MatTooltipModule], + template: ` +
+    <div class="view-mode-toggle" [matTooltip]="tooltipText()">
+      <mat-icon class="mode-icon">{{ isAuditor() ? 'verified_user' : 'speed' }}</mat-icon>
+      <mat-slide-toggle [checked]="isAuditor()" (change)="onToggle()"></mat-slide-toggle>
+      <span class="mode-label">{{ modeLabel() }}</span>
+    </div>
+ `, + styles: [` + .view-mode-toggle { + display: flex; + align-items: center; + gap: 8px; + padding: 4px 12px; + background: var(--surface-variant); + border-radius: 20px; + + .mode-icon { + font-size: 18px; + width: 18px; + height: 18px; + } + + .mode-label { + font-size: 0.875rem; + font-weight: 500; + min-width: 60px; + } + } + `], + changeDetection: ChangeDetectionStrategy.OnPush +}) +export class ViewModeToggleComponent { + constructor(private viewModeService: ViewModeService) {} + + isAuditor = this.viewModeService.isAuditor; + + modeLabel() { + return this.viewModeService.isAuditor() ? 'Auditor' : 'Operator'; + } + + tooltipText() { + return this.viewModeService.isAuditor() + ? 'Full provenance and evidence details. Switch to Operator for streamlined view.' + : 'Streamlined action-focused view. Switch to Auditor for full details.'; + } + + onToggle(): void { + this.viewModeService.toggle(); + } +} +``` + +**Add to Header**: +```typescript +// In app-header.component.ts +import { ViewModeToggleComponent } from '../view-mode-toggle/view-mode-toggle.component'; + +@Component({ + // ... + imports: [ + // ... + ViewModeToggleComponent + ], + template: ` + + + + + + + + + + ` +}) +export class AppHeaderComponent {} +``` + +**Acceptance Criteria**: +- [ ] Toggle component created +- [ ] Shows in header +- [ ] Icon changes per mode +- [ ] Label shows current mode +- [ ] Tooltip explains modes + +--- + +### T3: Operator Mode Defaults + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Define and implement operator mode display rules. + +**Implementation** - Operator Mode Directive: +```typescript +// directives/auditor-only.directive.ts +import { Directive, TemplateRef, ViewContainerRef, effect } from '@angular/core'; +import { ViewModeService } from '../services/view-mode.service'; + +/** + * Shows content only in auditor mode. + * Usage:
+ *   <div *stellaAuditorOnly>Full provenance details...</div>
+ */ +@Directive({ + selector: '[stellaAuditorOnly]', + standalone: true +}) +export class AuditorOnlyDirective { + constructor( + private templateRef: TemplateRef, + private viewContainer: ViewContainerRef, + private viewModeService: ViewModeService + ) { + effect(() => { + if (this.viewModeService.isAuditor()) { + this.viewContainer.createEmbeddedView(this.templateRef); + } else { + this.viewContainer.clear(); + } + }); + } +} + +/** + * Shows content only in operator mode. + * Usage:
+ *   <div *stellaOperatorOnly>Quick action buttons...</div>
+ */ +@Directive({ + selector: '[stellaOperatorOnly]', + standalone: true +}) +export class OperatorOnlyDirective { + constructor( + private templateRef: TemplateRef, + private viewContainer: ViewContainerRef, + private viewModeService: ViewModeService + ) { + effect(() => { + if (this.viewModeService.isOperator()) { + this.viewContainer.createEmbeddedView(this.templateRef); + } else { + this.viewContainer.clear(); + } + }); + } +} +``` + +**Operator Mode Features**: +- Compact finding cards +- Hide signature details +- Hide merge traces +- Hide snapshot info +- Show only verdict, not reasoning +- Quick action buttons prominent + +**Acceptance Criteria**: +- [ ] AuditorOnly directive created +- [ ] OperatorOnly directive created +- [ ] Operator mode shows minimal UI +- [ ] No signature details in operator mode +- [ ] Quick actions prominent + +--- + +### T4: Auditor Mode Defaults + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Define and implement auditor mode display rules. + +**Auditor Mode Features**: +- Expanded finding cards by default +- Full signature verification display +- Complete merge traces +- Snapshot IDs and links +- Policy rule details +- Evidence chains +- DSSE envelope viewer +- Rekor transparency log links + +**Implementation** - Auditor-specific components: +```typescript +// components/signature-badge/signature-badge.component.ts +@Component({ + selector: 'stella-signature-badge', + standalone: true, + template: ` +
+    <div class="signature-badge">
+      <mat-icon [class.valid]="signature.valid" [class.invalid]="!signature.valid">
+        {{ signature.valid ? 'verified' : 'dangerous' }}
+      </mat-icon>
+      <div class="signature-details">
+        <span class="signed-by">{{ signature.signedBy }}</span>
+        <span class="signed-at">{{ signature.signedAt | date:'medium' }}</span>
+        <a *ngIf="signature.rekorLogIndex" [href]="rekorUrl" target="_blank" rel="noopener">
+          Rekor #{{ signature.rekorLogIndex }}
+        </a>
+      </div>
+    </div>
+ ` +}) +export class SignatureBadgeComponent { + @Input() signature!: SignatureInfo; + + viewMode = inject(ViewModeService); + + get rekorUrl(): string { + return `https://search.sigstore.dev/?logIndex=${this.signature.rekorLogIndex}`; + } +} +``` + +**Acceptance Criteria**: +- [ ] Auditor mode shows full details +- [ ] Signature badges with verification +- [ ] Rekor links when available +- [ ] Merge traces visible +- [ ] Snapshot references shown + +--- + +### T5: Component Conditionals + +**Assignee**: UI Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T3, T4 + +**Description**: +Update existing components to respect view mode. + +**Implementation** - Update case-header.component.ts: +```typescript +// Update CaseHeaderComponent +@Component({ + // ... + template: ` +
+    <header class="case-header">
+      <!-- Operator mode: compact, action-focused summary -->
+      <div *stellaOperatorOnly class="actionable-summary">
+        {{ data.actionableCount }} items need attention
+      </div>
+
+      <!-- Auditor mode: full delta context and snapshot reference -->
+      <div *stellaAuditorOnly class="audit-context">
+        <span class="delta">{{ deltaText }}</span>
+        <span class="snapshot" *ngIf="viewMode.showSnapshots()">
+          Snapshot: {{ shortSnapshotId }}
+        </span>
+      </div>
+    </header>
+  `
+})
+export class CaseHeaderComponent {
+  viewMode = inject(ViewModeService);
+  // ...
+}
+```
+
+**Implementation** - Update verdict-ladder.component.ts:
+```typescript
+@Component({
+  template: `
+    <ol class="verdict-ladder">
+      <li *ngFor="let step of steps" class="ladder-step">
+        <span class="step-name">{{ step.name }}</span>
+        <mat-icon>{{ getStepIcon(step) }}</mat-icon>
+        <span class="step-summary">{{ step.summary }}</span>
+
+        <!-- Auditor mode: expand with full rule details and merge traces -->
+        <div *stellaAuditorOnly class="step-details">
+          <!-- policy rule details, merge trace, evidence links -->
+        </div>
+      </li>
+    </ol>
+ ` +}) +export class VerdictLadderComponent { + viewMode = inject(ViewModeService); +} +``` + +**Files to Update**: +- `case-header.component.ts` +- `verdict-ladder.component.ts` +- `triage-finding-card.component.ts` +- `evidence-chip.component.ts` +- `decision-card.component.ts` +- `compare-view.component.ts` + +**Acceptance Criteria**: +- [ ] Case header respects view mode +- [ ] Verdict ladder respects view mode +- [ ] Finding cards compact in operator mode +- [ ] Evidence details hidden in operator mode +- [ ] All affected components updated + +--- + +### T6: Persist Preference + +**Assignee**: UI Team +**Story Points**: 1 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Save preference to LocalStorage and user settings API. + +**Implementation** - Already in ViewModeService (T1), add user settings sync: + +```typescript +// Update ViewModeService +@Injectable({ providedIn: 'root' }) +export class ViewModeService { + constructor(private userSettingsService: UserSettingsService) { + // Load from user settings if logged in, otherwise localStorage + this.loadPreference(); + + // Sync to server when changed + effect(() => { + const mode = this._mode(); + localStorage.setItem(STORAGE_KEY, mode); + + // Also sync to user settings API if authenticated + if (this.userSettingsService.isAuthenticated()) { + this.userSettingsService.updateSetting('viewMode', mode); + } + }); + } + + private async loadPreference(): Promise { + // Try user settings first + if (this.userSettingsService.isAuthenticated()) { + const settings = await this.userSettingsService.getSettings(); + if (settings?.viewMode) { + this._mode.set(settings.viewMode); + return; + } + } + + // Fall back to localStorage + const stored = localStorage.getItem(STORAGE_KEY); + if (stored === 'operator' || stored === 'auditor') { + this._mode.set(stored); + } + } +} +``` + +**Acceptance Criteria**: +- [ ] LocalStorage persistence works +- [ ] User settings API sync (if authenticated) +- [ ] Preference loaded on app init +- [ ] Survives page refresh + +--- + +### T7: Tests + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T1-T6 + +**Description**: +Test view mode switching behavior. 
+ +**Test Cases**: +```typescript +describe('ViewModeService', () => { + let service: ViewModeService; + + beforeEach(() => { + localStorage.clear(); + TestBed.configureTestingModule({}); + service = TestBed.inject(ViewModeService); + }); + + it('should default to operator mode', () => { + expect(service.mode()).toBe('operator'); + }); + + it('should toggle between modes', () => { + expect(service.mode()).toBe('operator'); + + service.toggle(); + expect(service.mode()).toBe('auditor'); + + service.toggle(); + expect(service.mode()).toBe('operator'); + }); + + it('should persist to localStorage', () => { + service.setMode('auditor'); + + expect(localStorage.getItem('stella-view-mode')).toBe('auditor'); + }); + + it('should load from localStorage', () => { + localStorage.setItem('stella-view-mode', 'auditor'); + + const newService = TestBed.inject(ViewModeService); + expect(newService.mode()).toBe('auditor'); + }); + + it('should return operator config', () => { + service.setMode('operator'); + + expect(service.config().showSignatures).toBe(false); + expect(service.config().compactFindings).toBe(true); + }); + + it('should return auditor config', () => { + service.setMode('auditor'); + + expect(service.config().showSignatures).toBe(true); + expect(service.config().compactFindings).toBe(false); + }); +}); + +describe('ViewModeToggleComponent', () => { + it('should show operator label by default', () => { + const fixture = TestBed.createComponent(ViewModeToggleComponent); + fixture.detectChanges(); + + expect(fixture.nativeElement.textContent).toContain('Operator'); + }); + + it('should toggle on click', () => { + const fixture = TestBed.createComponent(ViewModeToggleComponent); + const service = TestBed.inject(ViewModeService); + fixture.detectChanges(); + + const toggle = fixture.nativeElement.querySelector('mat-slide-toggle'); + toggle.click(); + + expect(service.mode()).toBe('auditor'); + }); +}); + +describe('AuditorOnlyDirective', () => { + @Component({ + template: `
<div *stellaAuditorOnly>Auditor content</div>
` + }) + class TestComponent {} + + it('should hide content in operator mode', () => { + const service = TestBed.inject(ViewModeService); + service.setMode('operator'); + + const fixture = TestBed.createComponent(TestComponent); + fixture.detectChanges(); + + expect(fixture.nativeElement.textContent).not.toContain('Auditor content'); + }); + + it('should show content in auditor mode', () => { + const service = TestBed.inject(ViewModeService); + service.setMode('auditor'); + + const fixture = TestBed.createComponent(TestComponent); + fixture.detectChanges(); + + expect(fixture.nativeElement.textContent).toContain('Auditor content'); + }); +}); +``` + +**Acceptance Criteria**: +- [ ] Service tests for toggle +- [ ] Service tests for config +- [ ] Service tests for persistence +- [ ] Toggle component tests +- [ ] Directive tests +- [ ] All tests pass + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | UI Team | Create ViewModeService | +| 2 | T2 | TODO | T1 | UI Team | Add mode toggle component | +| 3 | T3 | TODO | T1 | UI Team | Operator mode defaults | +| 4 | T4 | TODO | T1 | UI Team | Auditor mode defaults | +| 5 | T5 | TODO | T1, T3, T4 | UI Team | Component conditionals | +| 6 | T6 | TODO | T1 | UI Team | Persist preference | +| 7 | T7 | TODO | T1-T6 | UI Team | Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from UX Gap Analysis. Operator/Auditor mode toggle identified as key UX differentiator. | Claude | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Default mode | Decision | UI Team | Default to Operator (most common use case) | +| Signal-based | Decision | UI Team | Use Angular signals for reactivity | +| Persistence | Decision | UI Team | LocalStorage + user settings API | +| Directives | Decision | UI Team | Use structural directives for show/hide | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Toggle visible in header +- [ ] Operator mode shows minimal info +- [ ] Auditor mode shows full provenance +- [ ] Preference persists across sessions +- [ ] All affected components updated +- [ ] All tests pass +- [ ] `ng build` succeeds +- [ ] `ng test` succeeds diff --git a/docs/implplan/SPRINT_5100_0001_0001_run_manifest_schema.md b/docs/implplan/SPRINT_5100_0001_0001_run_manifest_schema.md new file mode 100644 index 000000000..3238b8a27 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0001_0001_run_manifest_schema.md @@ -0,0 +1,581 @@ +# Sprint 5100.0001.0001 · Run Manifest Schema + +## Topic & Scope + +- Define the Run Manifest schema as the foundational artifact for deterministic replay. +- Captures all inputs required to reproduce a scan verdict: artifact digests, feed versions, policy versions, tool versions, PRNG seed, and canonicalization version. +- Implement C# models, JSON schema, serialization utilities, and validation. 
+- **Working directory:** `src/__Libraries/StellaOps.Testing.Manifests/` + +## Dependencies & Concurrency + +- **Upstream**: None (foundational sprint) +- **Downstream**: Sprint 5100.0002.0002 (Replay Runner) depends on this +- **Safe to parallelize with**: Sprint 5100.0001.0002, 5100.0001.0003, 5100.0001.0004 + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/07_HIGH_LEVEL_ARCHITECTURE.md` +- `docs/modules/scanner/architecture.md` + +--- + +## Tasks + +### T1: Define RunManifest Domain Model + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the core RunManifest domain model that captures all inputs for a reproducible scan. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/Models/RunManifest.cs` + +**Model Definition**: +```csharp +namespace StellaOps.Testing.Manifests.Models; + +/// +/// Captures all inputs required to reproduce a scan verdict deterministically. +/// This is the "replay key" that enables time-travel verification. +/// +public sealed record RunManifest +{ + /// + /// Unique identifier for this run. + /// + public required string RunId { get; init; } + + /// + /// Schema version for forward compatibility. + /// + public required string SchemaVersion { get; init; } = "1.0.0"; + + /// + /// Artifact digests being scanned (image layers, binaries, etc.). + /// + public required ImmutableArray ArtifactDigests { get; init; } + + /// + /// SBOM digests produced or consumed during the run. + /// + public ImmutableArray SbomDigests { get; init; } = []; + + /// + /// Vulnerability feed snapshot used for matching. + /// + public required FeedSnapshot FeedSnapshot { get; init; } + + /// + /// Policy version and lattice rules digest. + /// + public required PolicySnapshot PolicySnapshot { get; init; } + + /// + /// Tool versions used in the scan pipeline. + /// + public required ToolVersions ToolVersions { get; init; } + + /// + /// Cryptographic profile: trust roots, key IDs, algorithm set. + /// + public required CryptoProfile CryptoProfile { get; init; } + + /// + /// Environment profile: postgres-only vs postgres+valkey. + /// + public required EnvironmentProfile EnvironmentProfile { get; init; } + + /// + /// PRNG seed for any randomized operations (ensures reproducibility). + /// + public long? PrngSeed { get; init; } + + /// + /// Canonicalization algorithm version for stable JSON output. + /// + public required string CanonicalizationVersion { get; init; } + + /// + /// UTC timestamp when the run was initiated. + /// + public required DateTimeOffset InitiatedAt { get; init; } + + /// + /// SHA-256 hash of this manifest (excluding this field). + /// + public string? ManifestDigest { get; init; } +} + +public sealed record ArtifactDigest( + string Algorithm, // sha256, sha512 + string Digest, + string? MediaType, + string? Reference); // image ref, file path + +public sealed record SbomReference( + string Format, // cyclonedx-1.6, spdx-3.0.1 + string Digest, + string? 
Uri); + +public sealed record FeedSnapshot( + string FeedId, + string Version, + string Digest, + DateTimeOffset SnapshotAt); + +public sealed record PolicySnapshot( + string PolicyVersion, + string LatticeRulesDigest, + ImmutableArray EnabledRules); + +public sealed record ToolVersions( + string ScannerVersion, + string SbomGeneratorVersion, + string ReachabilityEngineVersion, + string AttestorVersion, + ImmutableDictionary AdditionalTools); + +public sealed record CryptoProfile( + string ProfileName, // fips, eidas, gost, sm, default + ImmutableArray TrustRootIds, + ImmutableArray AllowedAlgorithms); + +public sealed record EnvironmentProfile( + string Name, // postgres-only, postgres-valkey + bool ValkeyEnabled, + string? PostgresVersion, + string? ValkeyVersion); +``` + +**Acceptance Criteria**: +- [ ] `RunManifest.cs` created with all fields +- [ ] Supporting records for each component (ArtifactDigest, FeedSnapshot, etc.) +- [ ] ImmutableArray/ImmutableDictionary for collections +- [ ] XML documentation on all types and properties +- [ ] Nullable fields use `?` appropriately + +--- + +### T2: JSON Schema Definition + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create JSON Schema for RunManifest validation and documentation. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/Schemas/run-manifest.schema.json` + +**Schema Outline**: +```json +{ + "$schema": "https://json-schema.org/draft/2020-12/schema", + "$id": "https://stellaops.io/schemas/run-manifest/v1", + "title": "StellaOps Run Manifest", + "description": "Captures all inputs for deterministic scan replay", + "type": "object", + "required": [ + "runId", "schemaVersion", "artifactDigests", "feedSnapshot", + "policySnapshot", "toolVersions", "cryptoProfile", + "environmentProfile", "canonicalizationVersion", "initiatedAt" + ], + "properties": { + "runId": { "type": "string", "format": "uuid" }, + "schemaVersion": { "type": "string", "pattern": "^\\d+\\.\\d+\\.\\d+$" }, + "artifactDigests": { + "type": "array", + "items": { "$ref": "#/$defs/artifactDigest" }, + "minItems": 1 + } + }, + "$defs": { + "artifactDigest": { + "type": "object", + "required": ["algorithm", "digest"], + "properties": { + "algorithm": { "enum": ["sha256", "sha512"] }, + "digest": { "type": "string", "pattern": "^[a-f0-9]{64,128}$" } + } + } + } +} +``` + +**Acceptance Criteria**: +- [ ] Complete JSON Schema covering all fields +- [ ] Schema validates sample manifests correctly +- [ ] Schema rejects invalid manifests +- [ ] Embedded as resource in assembly + +--- + +### T3: Serialization Utilities + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Implement serialization/deserialization with canonical JSON output. 
+ +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/Serialization/RunManifestSerializer.cs` + +**Implementation**: +```csharp +namespace StellaOps.Testing.Manifests.Serialization; + +public static class RunManifestSerializer +{ + private static readonly JsonSerializerOptions CanonicalOptions = new() + { + WriteIndented = false, + PropertyNamingPolicy = JsonNamingPolicy.CamelCase, + DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull, + Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping, + // Custom converter for stable key ordering + Converters = { new StableOrderDictionaryConverter() } + }; + + public static string Serialize(RunManifest manifest) => + JsonSerializer.Serialize(manifest, CanonicalOptions); + + public static RunManifest Deserialize(string json) => + JsonSerializer.Deserialize(json, CanonicalOptions) + ?? throw new InvalidOperationException("Failed to deserialize manifest"); + + public static string ComputeDigest(RunManifest manifest) + { + var withoutDigest = manifest with { ManifestDigest = null }; + var json = Serialize(withoutDigest); + return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(json))).ToLowerInvariant(); + } + + public static RunManifest WithDigest(RunManifest manifest) => + manifest with { ManifestDigest = ComputeDigest(manifest) }; +} +``` + +**Acceptance Criteria**: +- [ ] Canonical JSON output (stable key ordering) +- [ ] Round-trip serialization preserves data +- [ ] Digest computation excludes ManifestDigest field +- [ ] UTF-8 encoding consistently applied + +--- + +### T4: Manifest Validation Service + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Validate manifests against schema and business rules. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/Validation/RunManifestValidator.cs` + +**Implementation**: +```csharp +namespace StellaOps.Testing.Manifests.Validation; + +public sealed class RunManifestValidator : IRunManifestValidator +{ + private readonly JsonSchema _schema; + + public RunManifestValidator() + { + var schemaJson = EmbeddedResources.GetSchema("run-manifest.schema.json"); + _schema = JsonSchema.FromText(schemaJson); + } + + public ValidationResult Validate(RunManifest manifest) + { + var errors = new List(); + + // Schema validation + var json = RunManifestSerializer.Serialize(manifest); + var schemaResult = _schema.Evaluate(JsonDocument.Parse(json)); + if (!schemaResult.IsValid) + { + errors.AddRange(schemaResult.Errors.Select(e => + new ValidationError("Schema", e.Message))); + } + + // Business rules + if (manifest.ArtifactDigests.Length == 0) + errors.Add(new ValidationError("ArtifactDigests", "At least one artifact required")); + + if (manifest.FeedSnapshot.SnapshotAt > manifest.InitiatedAt) + errors.Add(new ValidationError("FeedSnapshot", "Feed snapshot cannot be after run initiation")); + + // Digest verification + if (manifest.ManifestDigest != null) + { + var computed = RunManifestSerializer.ComputeDigest(manifest); + if (computed != manifest.ManifestDigest) + errors.Add(new ValidationError("ManifestDigest", "Digest mismatch")); + } + + return new ValidationResult(errors.Count == 0, errors); + } +} + +public sealed record ValidationResult(bool IsValid, IReadOnlyList Errors); +public sealed record ValidationError(string Field, string Message); +``` + +**Acceptance Criteria**: +- [ ] Schema validation integrated +- [ ] Business rule validation (non-empty artifacts, timestamp ordering) +- [ ] Digest 
verification +- [ ] Clear error messages + +--- + +### T5: Manifest Capture Service + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1, T3 + +**Description**: +Service to capture run manifests during scan execution. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/Services/ManifestCaptureService.cs` + +**Implementation**: +```csharp +namespace StellaOps.Testing.Manifests.Services; + +public sealed class ManifestCaptureService : IManifestCaptureService +{ + private readonly IFeedVersionProvider _feedProvider; + private readonly IPolicyVersionProvider _policyProvider; + private readonly TimeProvider _timeProvider; + + public async Task CaptureAsync( + ScanContext context, + CancellationToken ct = default) + { + var feedSnapshot = await _feedProvider.GetCurrentSnapshotAsync(ct); + var policySnapshot = await _policyProvider.GetCurrentSnapshotAsync(ct); + + var manifest = new RunManifest + { + RunId = context.RunId, + SchemaVersion = "1.0.0", + ArtifactDigests = context.ArtifactDigests, + SbomDigests = context.GeneratedSboms, + FeedSnapshot = feedSnapshot, + PolicySnapshot = policySnapshot, + ToolVersions = GetToolVersions(), + CryptoProfile = context.CryptoProfile, + EnvironmentProfile = GetEnvironmentProfile(), + PrngSeed = context.PrngSeed, + CanonicalizationVersion = "1.0.0", + InitiatedAt = _timeProvider.GetUtcNow() + }; + + return RunManifestSerializer.WithDigest(manifest); + } + + private static ToolVersions GetToolVersions() => new( + ScannerVersion: typeof(Scanner).Assembly.GetName().Version?.ToString() ?? "unknown", + SbomGeneratorVersion: "1.0.0", + ReachabilityEngineVersion: "1.0.0", + AttestorVersion: "1.0.0", + AdditionalTools: ImmutableDictionary.Empty); + + private EnvironmentProfile GetEnvironmentProfile() => new( + Name: Environment.GetEnvironmentVariable("STELLAOPS_ENV_PROFILE") ?? "postgres-only", + ValkeyEnabled: Environment.GetEnvironmentVariable("STELLAOPS_VALKEY_ENABLED") == "true", + PostgresVersion: "16", + ValkeyVersion: null); +} +``` + +**Acceptance Criteria**: +- [ ] Captures all required fields during scan +- [ ] Integrates with feed and policy version providers +- [ ] Computes digest automatically +- [ ] Environment detection for profile + +--- + +### T6: Unit Tests + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1-T5 + +**Description**: +Comprehensive unit tests for manifest models and utilities. 
+ +**Implementation Path**: `src/__Libraries/__Tests/StellaOps.Testing.Manifests.Tests/` + +**Test Cases**: +```csharp +public class RunManifestTests +{ + [Fact] + public void Serialize_ValidManifest_ProducesCanonicalJson() + { + var manifest = CreateTestManifest(); + var json1 = RunManifestSerializer.Serialize(manifest); + var json2 = RunManifestSerializer.Serialize(manifest); + json1.Should().Be(json2); + } + + [Fact] + public void ComputeDigest_SameManifest_ProducesSameDigest() + { + var manifest = CreateTestManifest(); + var digest1 = RunManifestSerializer.ComputeDigest(manifest); + var digest2 = RunManifestSerializer.ComputeDigest(manifest); + digest1.Should().Be(digest2); + } + + [Fact] + public void ComputeDigest_DifferentManifest_ProducesDifferentDigest() + { + var manifest1 = CreateTestManifest(); + var manifest2 = manifest1 with { RunId = Guid.NewGuid().ToString() }; + var digest1 = RunManifestSerializer.ComputeDigest(manifest1); + var digest2 = RunManifestSerializer.ComputeDigest(manifest2); + digest1.Should().NotBe(digest2); + } + + [Fact] + public void Validate_ValidManifest_ReturnsSuccess() + { + var manifest = CreateTestManifest(); + var validator = new RunManifestValidator(); + var result = validator.Validate(manifest); + result.IsValid.Should().BeTrue(); + } + + [Fact] + public void Validate_EmptyArtifacts_ReturnsFalse() + { + var manifest = CreateTestManifest() with + { + ArtifactDigests = [] + }; + var validator = new RunManifestValidator(); + var result = validator.Validate(manifest); + result.IsValid.Should().BeFalse(); + } + + [Fact] + public void RoundTrip_PreservesAllFields() + { + var manifest = CreateTestManifest(); + var json = RunManifestSerializer.Serialize(manifest); + var deserialized = RunManifestSerializer.Deserialize(json); + deserialized.Should().BeEquivalentTo(manifest); + } +} +``` + +**Acceptance Criteria**: +- [ ] Serialization determinism tests +- [ ] Digest computation tests +- [ ] Validation tests (positive and negative) +- [ ] Round-trip tests +- [ ] All tests pass + +--- + +### T7: Project Setup + +**Assignee**: QA Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the project structure and dependencies. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.Manifests/StellaOps.Testing.Manifests.csproj` + +**Project File**: +```xml + + + net10.0 + enable + enable + preview + + + + + + + + + + + +``` + +**Acceptance Criteria**: +- [ ] Project compiles +- [ ] Dependencies resolved +- [ ] Schema embedded as resource + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Define RunManifest Domain Model | +| 2 | T2 | TODO | T1 | QA Team | JSON Schema Definition | +| 3 | T3 | TODO | T1 | QA Team | Serialization Utilities | +| 4 | T4 | TODO | T2, T3 | QA Team | Manifest Validation Service | +| 5 | T5 | TODO | T1, T3 | QA Team | Manifest Capture Service | +| 6 | T6 | TODO | T1-T5 | QA Team | Unit Tests | +| 7 | T7 | TODO | — | QA Team | Project Setup | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Run Manifest identified as foundational artifact for deterministic replay. 
| Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Schema version strategy | Decision | QA Team | Semantic versioning with backward compatibility | +| Digest algorithm | Decision | QA Team | SHA-256 for manifest digest | +| Canonical JSON | Decision | QA Team | Stable key ordering, camelCase, no whitespace | +| PRNG seed storage | Decision | QA Team | Optional field, used when reproducibility requires randomness control | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] RunManifest model captures all inputs for replay +- [ ] JSON schema validates manifests +- [ ] Serialization produces canonical, deterministic output +- [ ] Digest computation is stable across platforms +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds with 100% pass rate diff --git a/docs/implplan/SPRINT_5100_0001_0002_evidence_index_schema.md b/docs/implplan/SPRINT_5100_0001_0002_evidence_index_schema.md new file mode 100644 index 000000000..51fec8089 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0001_0002_evidence_index_schema.md @@ -0,0 +1,527 @@ +# Sprint 5100.0001.0002 · Evidence Index Schema + +## Topic & Scope + +- Define the Evidence Index schema that links verdicts to their supporting evidence chain. +- Creates the machine-readable graph: verdict -> SBOM digest -> attestation IDs -> tool versions -> reachability proofs. +- Implement C# models, JSON schema, and linking utilities. +- **Working directory:** `src/__Libraries/StellaOps.Evidence/` + +## Dependencies & Concurrency + +- **Upstream**: None (foundational sprint) +- **Downstream**: Sprint 5100.0003.0001 (SBOM Interop) uses evidence linking +- **Safe to parallelize with**: Sprint 5100.0001.0001, 5100.0001.0003, 5100.0001.0004 + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/modules/attestor/architecture.md` +- `docs/product-advisories/18-Dec-2025 - Designing Explainable Triage and Proof-Linked Evidence.md` + +--- + +## Tasks + +### T1: Define Evidence Index Domain Model + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the Evidence Index model that establishes the complete provenance chain for a verdict. + +**Implementation Path**: `src/__Libraries/StellaOps.Evidence/Models/EvidenceIndex.cs` + +**Model Definition**: +```csharp +namespace StellaOps.Evidence.Models; + +/// +/// Machine-readable index linking a verdict to all supporting evidence. +/// The product is not the verdict; the product is verdict + evidence graph. +/// +public sealed record EvidenceIndex +{ + /// + /// Unique identifier for this evidence index. + /// + public required string IndexId { get; init; } + + /// + /// Schema version for forward compatibility. + /// + public required string SchemaVersion { get; init; } = "1.0.0"; + + /// + /// Reference to the verdict this evidence supports. + /// + public required VerdictReference Verdict { get; init; } + + /// + /// SBOM references used to produce the verdict. + /// + public required ImmutableArray Sboms { get; init; } + + /// + /// Attestations in the evidence chain. + /// + public required ImmutableArray Attestations { get; init; } + + /// + /// VEX documents applied to the verdict. + /// + public ImmutableArray VexDocuments { get; init; } = []; + + /// + /// Reachability proofs for vulnerability correlation. + /// + public ImmutableArray ReachabilityProofs { get; init; } = []; + + /// + /// Unknowns encountered during analysis. 
+ /// + public ImmutableArray Unknowns { get; init; } = []; + + /// + /// Tool versions used to produce evidence. + /// + public required ToolChainEvidence ToolChain { get; init; } + + /// + /// Run manifest reference for replay capability. + /// + public required string RunManifestDigest { get; init; } + + /// + /// UTC timestamp when index was created. + /// + public required DateTimeOffset CreatedAt { get; init; } + + /// + /// SHA-256 digest of this index (excluding this field). + /// + public string? IndexDigest { get; init; } +} + +public sealed record VerdictReference( + string VerdictId, + string Digest, + VerdictOutcome Outcome, + string? PolicyVersion); + +public enum VerdictOutcome +{ + Pass, + Fail, + Warn, + Unknown +} + +public sealed record SbomEvidence( + string SbomId, + string Format, // cyclonedx-1.6, spdx-3.0.1 + string Digest, + string? Uri, + int ComponentCount, + DateTimeOffset GeneratedAt); + +public sealed record AttestationEvidence( + string AttestationId, + string Type, // sbom, vex, build-provenance, verdict + string Digest, + string SignerKeyId, + bool SignatureValid, + DateTimeOffset SignedAt, + string? RekorLogIndex); + +public sealed record VexEvidence( + string VexId, + string Format, // openvex, csaf, cyclonedx + string Digest, + string Source, // vendor, distro, internal + int StatementCount, + ImmutableArray AffectedVulnerabilities); + +public sealed record ReachabilityEvidence( + string ProofId, + string VulnerabilityId, + string ComponentPurl, + ReachabilityStatus Status, + string? EntryPoint, + ImmutableArray CallPath, + string Digest); + +public enum ReachabilityStatus +{ + Reachable, + NotReachable, + Inconclusive, + NotAnalyzed +} + +public sealed record UnknownEvidence( + string UnknownId, + string ReasonCode, + string Description, + string? ComponentPurl, + string? VulnerabilityId, + UnknownSeverity Severity); + +public enum UnknownSeverity +{ + Low, + Medium, + High, + Critical +} + +public sealed record ToolChainEvidence( + string ScannerVersion, + string SbomGeneratorVersion, + string ReachabilityEngineVersion, + string AttestorVersion, + string PolicyEngineVersion, + ImmutableDictionary AdditionalTools); +``` + +**Acceptance Criteria**: +- [ ] `EvidenceIndex.cs` created with all fields +- [ ] Supporting records for each evidence type +- [ ] Outcome enum covers all verdict states +- [ ] ReachabilityStatus captures analysis result +- [ ] XML documentation on all types + +--- + +### T2: JSON Schema Definition + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create JSON Schema for Evidence Index validation. + +**Implementation Path**: `src/__Libraries/StellaOps.Evidence/Schemas/evidence-index.schema.json` + +**Acceptance Criteria**: +- [ ] Complete JSON Schema for all evidence types +- [ ] Schema validates sample indexes correctly +- [ ] Schema rejects malformed indexes +- [ ] Embedded as resource in assembly + +--- + +### T3: Evidence Linker Service + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Service that builds the evidence index by collecting references during scan execution. 
+
+**Implementation Path**: `src/__Libraries/StellaOps.Evidence/Services/EvidenceLinker.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Evidence.Services;
+
+public sealed class EvidenceLinker : IEvidenceLinker
+{
+    private readonly List<SbomEvidence> _sboms = [];
+    private readonly List<AttestationEvidence> _attestations = [];
+    private readonly List<VexEvidence> _vexDocuments = [];
+    private readonly List<ReachabilityEvidence> _reachabilityProofs = [];
+    private readonly List<UnknownEvidence> _unknowns = [];
+    private ToolChainEvidence? _toolChain;
+
+    public void AddSbom(SbomEvidence sbom) => _sboms.Add(sbom);
+    public void AddAttestation(AttestationEvidence attestation) => _attestations.Add(attestation);
+    public void AddVex(VexEvidence vex) => _vexDocuments.Add(vex);
+    public void AddReachabilityProof(ReachabilityEvidence proof) => _reachabilityProofs.Add(proof);
+    public void AddUnknown(UnknownEvidence unknown) => _unknowns.Add(unknown);
+    public void SetToolChain(ToolChainEvidence toolChain) => _toolChain = toolChain;
+
+    public EvidenceIndex Build(VerdictReference verdict, string runManifestDigest)
+    {
+        if (_toolChain == null)
+            throw new InvalidOperationException("ToolChain must be set before building index");
+
+        var index = new EvidenceIndex
+        {
+            IndexId = Guid.NewGuid().ToString(),
+            SchemaVersion = "1.0.0",
+            Verdict = verdict,
+            Sboms = [.. _sboms],
+            Attestations = [.. _attestations],
+            VexDocuments = [.. _vexDocuments],
+            ReachabilityProofs = [.. _reachabilityProofs],
+            Unknowns = [.. _unknowns],
+            ToolChain = _toolChain,
+            RunManifestDigest = runManifestDigest,
+            CreatedAt = DateTimeOffset.UtcNow
+        };
+
+        return EvidenceIndexSerializer.WithDigest(index);
+    }
+}
+
+public interface IEvidenceLinker
+{
+    void AddSbom(SbomEvidence sbom);
+    void AddAttestation(AttestationEvidence attestation);
+    void AddVex(VexEvidence vex);
+    void AddReachabilityProof(ReachabilityEvidence proof);
+    void AddUnknown(UnknownEvidence unknown);
+    void SetToolChain(ToolChainEvidence toolChain);
+    EvidenceIndex Build(VerdictReference verdict, string runManifestDigest);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Collects all evidence types during scan
+- [ ] Builds complete index with digest
+- [ ] Validates required fields before build
+- [ ] Thread-safe collection
+
+---
+
+### T4: Evidence Validator
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Validate evidence indexes for completeness and correctness.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Evidence/Validation/EvidenceIndexValidator.cs`
+
+**Validation Rules**:
+```csharp
+public sealed class EvidenceIndexValidator : IEvidenceIndexValidator
+{
+    public ValidationResult Validate(EvidenceIndex index)
+    {
+        var errors = new List<ValidationError>();
+
+        // Required evidence checks
+        if (index.Sboms.Length == 0)
+            errors.Add(new ValidationError("Sboms", "At least one SBOM required"));
+
+        // Verdict-SBOM linkage
+        // Every "not affected" claim must have evidence hooks per policy
+        foreach (var vex in index.VexDocuments)
+        {
+            if (vex.StatementCount == 0)
+                errors.Add(new ValidationError("VexDocuments",
+                    $"VEX {vex.VexId} has no statements"));
+        }
+
+        // Reachability evidence for reachable vulns
+        foreach (var proof in index.ReachabilityProofs)
+        {
+            if (proof.Status == ReachabilityStatus.Inconclusive &&
+                !index.Unknowns.Any(u => u.VulnerabilityId == proof.VulnerabilityId))
+            {
+                errors.Add(new ValidationError("ReachabilityProofs",
+                    $"Inconclusive reachability for {proof.VulnerabilityId} not recorded as unknown"));
+            }
+        }
+
+        // Attestation signature validity
+        foreach (var att in index.Attestations)
+        {
+            if (!att.SignatureValid)
+                errors.Add(new ValidationError("Attestations",
+                    $"Attestation {att.AttestationId} has invalid signature"));
+        }
+
+        // Digest verification
+        if (index.IndexDigest != null)
+        {
+            var computed = EvidenceIndexSerializer.ComputeDigest(index);
+            if (computed != index.IndexDigest)
+                errors.Add(new ValidationError("IndexDigest", "Digest mismatch"));
+        }
+
+        return new ValidationResult(errors.Count == 0, errors);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Validates required evidence presence
+- [ ] Checks SBOM linkage
+- [ ] Validates attestation signatures
+- [ ] Verifies digest integrity
+- [ ] Reports all errors with context
+
+---
+
+### T5: Evidence Query Service
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T3
+
+**Description**:
+Query service for navigating evidence chains.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Evidence/Services/EvidenceQueryService.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Evidence.Services;
+
+public sealed class EvidenceQueryService : IEvidenceQueryService
+{
+    public IEnumerable<AttestationEvidence> GetAttestationsForSbom(
+        EvidenceIndex index, string sbomDigest)
+    {
+        return index.Attestations
+            .Where(a => a.Type == "sbom" &&
+                        index.Sboms.Any(s => s.Digest == sbomDigest));
+    }
+
+    public IEnumerable<ReachabilityEvidence> GetReachabilityForVulnerability(
+        EvidenceIndex index, string vulnerabilityId)
+    {
+        return index.ReachabilityProofs
+            .Where(r => r.VulnerabilityId == vulnerabilityId);
+    }
+
+    public IEnumerable<VexEvidence> GetVexForVulnerability(
+        EvidenceIndex index, string vulnerabilityId)
+    {
+        return index.VexDocuments
+            .Where(v => v.AffectedVulnerabilities.Contains(vulnerabilityId));
+    }
+
+    public EvidenceChainReport BuildChainReport(EvidenceIndex index)
+    {
+        return new EvidenceChainReport
+        {
+            VerdictDigest = index.Verdict.Digest,
+            SbomCount = index.Sboms.Length,
+            AttestationCount = index.Attestations.Length,
+            VexCount = index.VexDocuments.Length,
+            ReachabilityProofCount = index.ReachabilityProofs.Length,
+            UnknownCount = index.Unknowns.Length,
+            AllSignaturesValid = index.Attestations.All(a => a.SignatureValid),
+            HasRekorEntries = index.Attestations.Any(a => a.RekorLogIndex != null),
+            ToolChainComplete = index.ToolChain != null
+        };
+    }
+}
+
+public sealed record EvidenceChainReport
+{
+    public required string VerdictDigest { get; init; }
+    public int SbomCount { get; init; }
+    public int AttestationCount { get; init; }
+    public int VexCount { get; init; }
+    public int ReachabilityProofCount { get; init; }
+    public int UnknownCount { get; init; }
+    public bool AllSignaturesValid { get; init; }
+    public bool HasRekorEntries { get; init; }
+    public bool ToolChainComplete { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Query attestations by SBOM
+- [ ] Query reachability by vulnerability
+- [ ] Query VEX by vulnerability
+- [ ] Build summary chain report
+
+---
+
+### T6: Unit Tests
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+Comprehensive tests for evidence index functionality.
+
+**Implementation Path**: `src/__Libraries/__Tests/StellaOps.Evidence.Tests/`
+
+**Acceptance Criteria**:
+- [ ] EvidenceLinker build tests
+- [ ] Validation tests (positive and negative)
+- [ ] Query service tests
+- [ ] Serialization round-trip tests
+- [ ] Digest computation tests
+
+---
+
+### T7: Project Setup
+
+**Assignee**: QA Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create the project structure and dependencies.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Evidence/StellaOps.Evidence.csproj`
+
+**Acceptance Criteria**:
+- [ ] Project compiles
+- [ ] Dependencies resolved
+- [ ] Schema embedded as resource
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | QA Team | Define Evidence Index Domain Model |
+| 2 | T2 | TODO | T1 | QA Team | JSON Schema Definition |
+| 3 | T3 | TODO | T1 | QA Team | Evidence Linker Service |
+| 4 | T4 | TODO | T1, T2 | QA Team | Evidence Validator |
+| 5 | T5 | TODO | T1, T3 | QA Team | Evidence Query Service |
+| 6 | T6 | TODO | T1-T5 | QA Team | Unit Tests |
+| 7 | T7 | TODO | — | QA Team | Project Setup |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Testing Strategy advisory. Evidence Index identified as key artifact for proof-linked UX. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Evidence chain depth | Decision | QA Team | Link to immediate evidence only; transitive links via query |
+| Unknown tracking | Decision | QA Team | All unknowns recorded in evidence for audit |
+| Rekor integration | Decision | QA Team | Optional; RekorLogIndex null when offline |
+
+---
+
+## Success Criteria
+
+- [ ] All 7 tasks marked DONE
+- [ ] Evidence Index links verdict to all evidence
+- [ ] Validation catches incomplete chains
+- [ ] Query service enables chain navigation
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_5100_0001_0003_offline_bundle_manifest.md b/docs/implplan/SPRINT_5100_0001_0003_offline_bundle_manifest.md
new file mode 100644
index 000000000..0d45b54b8
--- /dev/null
+++ b/docs/implplan/SPRINT_5100_0001_0003_offline_bundle_manifest.md
@@ -0,0 +1,530 @@
+# Sprint 5100.0001.0003 · Offline Bundle Manifest
+
+## Topic & Scope
+
+- Define the Offline Bundle Manifest schema for air-gapped operation.
+- Capture all components required for offline scanning: feeds, policies, keys, certificates, Rekor mirror snapshots.
+- Implement bundle validation, integrity checking, and content-addressed storage.
+- **Working directory:** `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: None (foundational sprint)
+- **Downstream**: Sprint 5100.0003.0002 (No-Egress Enforcement) uses bundle validation
+- **Safe to parallelize with**: Sprint 5100.0001.0001, 5100.0001.0002, 5100.0001.0004
+
+## Documentation Prerequisites
+
+- `docs/product-advisories/20-Dec-2025 - Testing strategy.md`
+- `docs/24_OFFLINE_KIT.md`
+- `docs/modules/airgap/architecture.md`
+
+---
+
+## Tasks
+
+### T1: Define Bundle Manifest Model
+
+**Assignee**: AirGap Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create the Offline Bundle Manifest model that inventories all bundle contents with digests.
+
+**Implementation Path**: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/Models/BundleManifest.cs`
+
+**Model Definition**:
+```csharp
+namespace StellaOps.AirGap.Bundle.Models;
+
+/// <summary>
+/// Manifest for an offline bundle, inventorying all components with content digests.
+/// Used for integrity verification and completeness checking in air-gapped environments.
+/// </summary>
+public sealed record BundleManifest
+{
+    /// <summary>
+    /// Unique identifier for this bundle.
+    /// </summary>
+    public required string BundleId { get; init; }
+
+    /// <summary>
+    /// Schema version for forward compatibility.
+    /// </summary>
+    public required string SchemaVersion { get; init; } = "1.0.0";
+
+    /// <summary>
+    /// Human-readable bundle name.
+    /// </summary>
+    public required string Name { get; init; }
+
+    /// <summary>
+    /// Bundle version.
+    /// </summary>
+    public required string Version { get; init; }
+
+    /// <summary>
+    /// UTC timestamp when bundle was created.
+    /// </summary>
+    public required DateTimeOffset CreatedAt { get; init; }
+
+    /// <summary>
+    /// Bundle expiry (feeds/policies may become stale).
+    /// </summary>
+    public DateTimeOffset? ExpiresAt { get; init; }
+
+    /// <summary>
+    /// Vulnerability feed components.
+    /// </summary>
+    public required ImmutableArray<FeedComponent> Feeds { get; init; }
+
+    /// <summary>
+    /// Policy and lattice rule components.
+    /// </summary>
+    public required ImmutableArray<PolicyComponent> Policies { get; init; }
+
+    /// <summary>
+    /// Trust roots and certificates.
+    /// </summary>
+    public required ImmutableArray<CryptoComponent> CryptoMaterials { get; init; }
+
+    /// <summary>
+    /// Package catalogs for ecosystem matching.
+    /// </summary>
+    public ImmutableArray<CatalogComponent> Catalogs { get; init; } = [];
+
+    /// <summary>
+    /// Rekor mirror snapshot for offline transparency.
+    /// </summary>
+    public RekorSnapshot? RekorSnapshot { get; init; }
+
+    /// <summary>
+    /// Crypto provider modules for sovereign crypto.
+    /// </summary>
+    public ImmutableArray<CryptoProviderComponent> CryptoProviders { get; init; } = [];
+
+    /// <summary>
+    /// Total size in bytes.
+    /// </summary>
+    public long TotalSizeBytes { get; init; }
+
+    /// <summary>
+    /// SHA-256 digest of the entire bundle (excluding this field).
+    /// </summary>
+    public string? BundleDigest { get; init; }
+}
+
+public sealed record FeedComponent(
+    string FeedId,
+    string Name,
+    string Version,
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    DateTimeOffset SnapshotAt,
+    FeedFormat Format);
+
+public enum FeedFormat
+{
+    StellaOpsNative,
+    TrivyDb,
+    GrypeDb,
+    OsvJson
+}
+
+public sealed record PolicyComponent(
+    string PolicyId,
+    string Name,
+    string Version,
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    PolicyType Type);
+
+public enum PolicyType
+{
+    OpaRego,
+    LatticeRules,
+    UnknownBudgets,
+    ScoringWeights
+}
+
+public sealed record CryptoComponent(
+    string ComponentId,
+    string Name,
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    CryptoComponentType Type,
+    DateTimeOffset? ExpiresAt);
+
+public enum CryptoComponentType
+{
+    TrustRoot,
+    IntermediateCa,
+    TimestampRoot,
+    SigningKey,
+    FulcioRoot
+}
+
+public sealed record CatalogComponent(
+    string CatalogId,
+    string Ecosystem,        // npm, pypi, maven, nuget
+    string Version,
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    DateTimeOffset SnapshotAt);
+
+public sealed record RekorSnapshot(
+    string TreeId,
+    long TreeSize,
+    string RootHash,
+    string RelativePath,
+    string Digest,
+    DateTimeOffset SnapshotAt);
+
+public sealed record CryptoProviderComponent(
+    string ProviderId,
+    string Name,             // CryptoPro, OpenSSL-GOST, SM-Crypto
+    string Version,
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    ImmutableArray<string> SupportedAlgorithms);
+```
+
+**Acceptance Criteria**:
+- [ ] `BundleManifest.cs` with all component types
+- [ ] Feed, Policy, Crypto, Catalog components defined
+- [ ] RekorSnapshot for offline transparency
+- [ ] CryptoProvider for sovereign crypto support
+- [ ] All fields documented
+
+---
+
+### T2: Bundle Validator
+
+**Assignee**: AirGap Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Validate bundle manifest and verify content integrity.
+
+**Implementation Path**: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/Validation/BundleValidator.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.AirGap.Bundle.Validation;
+
+public sealed class BundleValidator : IBundleValidator
+{
+    public async Task<BundleValidationResult> ValidateAsync(
+        BundleManifest manifest,
+        string bundlePath,
+        CancellationToken ct = default)
+    {
+        var errors = new List<BundleValidationError>();
+        var warnings = new List<BundleValidationWarning>();
+
+        // Check required components
+        if (manifest.Feeds.Length == 0)
+            errors.Add(new BundleValidationError("Feeds", "At least one feed required"));
+
+        if (manifest.CryptoMaterials.Length == 0)
+            errors.Add(new BundleValidationError("CryptoMaterials", "Trust roots required"));
+
+        // Verify all file digests
+        foreach (var feed in manifest.Feeds)
+        {
+            var filePath = Path.Combine(bundlePath, feed.RelativePath);
+            var result = await VerifyFileDigestAsync(filePath, feed.Digest, ct);
+            if (!result.IsValid)
+                errors.Add(new BundleValidationError("Feeds",
+                    $"Feed {feed.FeedId} digest mismatch: expected {feed.Digest}, got {result.ActualDigest}"));
+        }
+
+        // Check expiry
+        if (manifest.ExpiresAt.HasValue && manifest.ExpiresAt.Value < DateTimeOffset.UtcNow)
+            warnings.Add(new BundleValidationWarning("ExpiresAt", "Bundle has expired"));
+
+        // Check feed freshness
+        foreach (var feed in manifest.Feeds)
+        {
+            var age = DateTimeOffset.UtcNow - feed.SnapshotAt;
+            if (age.TotalDays > 7)
+                warnings.Add(new BundleValidationWarning("Feeds",
+                    $"Feed {feed.FeedId} is {age.TotalDays:F0} days old"));
+        }
+
+        // Verify bundle digest
+        if (manifest.BundleDigest != null)
+        {
+            var computed = ComputeBundleDigest(manifest);
+            if (computed != manifest.BundleDigest)
+                errors.Add(new BundleValidationError("BundleDigest", "Bundle digest mismatch"));
+        }
+
+        return new BundleValidationResult(
+            errors.Count == 0,
+            errors,
+            warnings,
+            manifest.TotalSizeBytes);
+    }
+
+    private async Task<(bool IsValid, string ActualDigest)> VerifyFileDigestAsync(
+        string filePath, string expectedDigest, CancellationToken ct)
+    {
+        if (!File.Exists(filePath))
+            return (false, "FILE_NOT_FOUND");
+
+        await using var stream = File.OpenRead(filePath);
+        var hash = await SHA256.HashDataAsync(stream, ct);
+        var actualDigest = Convert.ToHexString(hash).ToLowerInvariant();
+        return (actualDigest == expectedDigest.ToLowerInvariant(), actualDigest);
+    }
+
+    private static string ComputeBundleDigest(BundleManifest manifest)
+    {
+        var withoutDigest = manifest with { BundleDigest = null };
+        var json = BundleManifestSerializer.Serialize(withoutDigest);
+        return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(json))).ToLowerInvariant();
+    }
+}
+
+public sealed record BundleValidationResult(
+    bool IsValid,
+    IReadOnlyList<BundleValidationError> Errors,
+    IReadOnlyList<BundleValidationWarning> Warnings,
+    long TotalSizeBytes);
+
+public sealed record BundleValidationError(string Component, string Message);
+public sealed record BundleValidationWarning(string Component, string Message);
+```
+
+**Acceptance Criteria**:
+- [ ] Validates required components present
+- [ ] Verifies all file digests
+- [ ] Checks expiry and freshness
+- [ ] Reports errors and warnings separately
+- [ ] Async file operations
+
+---
+
+### T3: Bundle Builder
+
+**Assignee**: AirGap Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Service to build offline bundles from online sources.
+
+**Implementation Path**: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/Services/BundleBuilder.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.AirGap.Bundle.Services;
+
+public sealed class BundleBuilder : IBundleBuilder
+{
+    public async Task<BundleManifest> BuildAsync(
+        BundleBuildRequest request,
+        string outputPath,
+        CancellationToken ct = default)
+    {
+        var feeds = new List<FeedComponent>();
+        var policies = new List<PolicyComponent>();
+        var cryptoMaterials = new List<CryptoComponent>();
+
+        // Download and hash feeds
+        foreach (var feedConfig in request.Feeds)
+        {
+            var component = await DownloadFeedAsync(feedConfig, outputPath, ct);
+            feeds.Add(component);
+        }
+
+        // Export policies
+        foreach (var policyConfig in request.Policies)
+        {
+            var component = await ExportPolicyAsync(policyConfig, outputPath, ct);
+            policies.Add(component);
+        }
+
+        // Export crypto materials
+        foreach (var cryptoConfig in request.CryptoMaterials)
+        {
+            var component = await ExportCryptoAsync(cryptoConfig, outputPath, ct);
+            cryptoMaterials.Add(component);
+        }
+
+        var totalSize = feeds.Sum(f => f.SizeBytes)
+            + policies.Sum(p => p.SizeBytes)
+            + cryptoMaterials.Sum(c => c.SizeBytes);
+
+        var manifest = new BundleManifest
+        {
+            BundleId = Guid.NewGuid().ToString(),
+            SchemaVersion = "1.0.0",
+            Name = request.Name,
+            Version = request.Version,
+            CreatedAt = DateTimeOffset.UtcNow,
+            ExpiresAt = request.ExpiresAt,
+            Feeds = [.. feeds],
+            Policies = [.. policies],
+            CryptoMaterials = [.. cryptoMaterials],
+            TotalSizeBytes = totalSize
+        };
+
+        return BundleManifestSerializer.WithDigest(manifest);
+    }
+}
+
+public sealed record BundleBuildRequest(
+    string Name,
+    string Version,
+    DateTimeOffset? ExpiresAt,
+    IReadOnlyList<FeedConfig> Feeds,
+    IReadOnlyList<PolicyConfig> Policies,
+    IReadOnlyList<CryptoConfig> CryptoMaterials);
+```
+
+**Acceptance Criteria**:
+- [ ] Downloads feeds with integrity verification
+- [ ] Exports policies and lattice rules
+- [ ] Includes crypto materials
+- [ ] Computes total size and digest
+- [ ] Progress reporting
+
+---
+
+### T4: Bundle Loader
+
+**Assignee**: AirGap Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Load and mount a validated bundle for offline scanning.
+
+**Implementation Path**: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/Services/BundleLoader.cs`
+
+**Acceptance Criteria**:
+- [ ] Validates bundle before loading
+- [ ] Registers feeds with scanner
+- [ ] Loads policies into policy engine
+- [ ] Configures crypto providers
+- [ ] Fails explicitly on validation errors
+
+---
+
+### T5: CLI Integration
+
+**Assignee**: AirGap Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T3, T4
+
+**Description**:
+Add CLI commands for bundle management.
+
+**Commands**:
+```bash
+stella bundle create --name "offline-2025-Q1" --output bundle.tar.gz
+stella bundle validate bundle.tar.gz
+stella bundle info bundle.tar.gz
+stella bundle load bundle.tar.gz
+```
+
+**Acceptance Criteria**:
+- [ ] `bundle create` command
+- [ ] `bundle validate` command
+- [ ] `bundle info` command
+- [ ] `bundle load` command
+- [ ] JSON output option
+
+---
+
+### T6: Unit and Integration Tests
+
+**Assignee**: AirGap Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+Comprehensive tests for bundle functionality.
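+
+A negative-path sketch of the kind this suite should include, driving the T2 `BundleValidator` against a tampered feed (`TestManifests.WithSingleFeed` is a hypothetical fixture helper, not part of the library):
+
+```csharp
+public class BundleValidatorTests
+{
+    [Fact]
+    public async Task ValidateAsync_TamperedFeed_ReportsDigestMismatch()
+    {
+        // Arrange: write a feed file whose content no longer matches the manifest digest.
+        var bundlePath = Directory.CreateTempSubdirectory().FullName;
+        await File.WriteAllTextAsync(Path.Combine(bundlePath, "feeds.json"), "{\"tampered\":true}");
+
+        var manifest = TestManifests.WithSingleFeed(   // hypothetical fixture helper
+            relativePath: "feeds.json",
+            digest: new string('0', 64));              // deliberately wrong digest
+
+        // Act
+        var result = await new BundleValidator().ValidateAsync(manifest, bundlePath);
+
+        // Assert: the mismatch must surface as an error, not a warning.
+        result.IsValid.Should().BeFalse();
+        result.Errors.Should().Contain(e => e.Message.Contains("digest mismatch"));
+    }
+}
+```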
+ +**Acceptance Criteria**: +- [ ] Manifest serialization tests +- [ ] Validation tests with fixtures +- [ ] Digest verification tests +- [ ] Builder integration tests +- [ ] Loader integration tests + +--- + +### T7: Project Setup + +**Assignee**: AirGap Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the project structure. + +**Implementation Path**: `src/AirGap/__Libraries/StellaOps.AirGap.Bundle/StellaOps.AirGap.Bundle.csproj` + +**Acceptance Criteria**: +- [ ] Project compiles +- [ ] Dependencies resolved + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | AirGap Team | Define Bundle Manifest Model | +| 2 | T2 | TODO | T1 | AirGap Team | Bundle Validator | +| 3 | T3 | TODO | T1 | AirGap Team | Bundle Builder | +| 4 | T4 | TODO | T1, T2 | AirGap Team | Bundle Loader | +| 5 | T5 | TODO | T3, T4 | AirGap Team | CLI Integration | +| 6 | T6 | TODO | T1-T5 | AirGap Team | Unit and Integration Tests | +| 7 | T7 | TODO | — | AirGap Team | Project Setup | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Offline bundle manifest is critical for air-gap compliance testing. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Bundle format | Decision | AirGap Team | tar.gz with manifest.json at root | +| Expiry enforcement | Decision | AirGap Team | Warn on expired, block configurable | +| Freshness threshold | Decision | AirGap Team | 7 days default, configurable | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Bundle manifest captures all offline components +- [ ] Validation verifies integrity +- [ ] CLI commands functional +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_5100_0001_0004_golden_corpus_expansion.md b/docs/implplan/SPRINT_5100_0001_0004_golden_corpus_expansion.md new file mode 100644 index 000000000..5376d1ae1 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0001_0004_golden_corpus_expansion.md @@ -0,0 +1,444 @@ +# Sprint 5100.0001.0004 · Golden Corpus Expansion + +## Topic & Scope + +- Expand the golden test corpus with comprehensive test cases covering all testing scenarios. +- Add negative fixtures, multi-distro coverage, large SBOM cases, and interop fixtures. +- Create corpus versioning and management utilities. +- **Working directory:** `bench/golden-corpus/` and `tests/fixtures/` + +## Dependencies & Concurrency + +- **Upstream**: Sprints 5100.0001.0001, 5100.0001.0002, 5100.0001.0003 (schemas for manifest format) +- **Downstream**: All E2E test sprints use corpus fixtures +- **Safe to parallelize with**: Phase 1 sprints (can use existing corpus during expansion) + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/implplan/SPRINT_3500_0004_0003_integration_tests_corpus.md` (existing corpus) +- `bench/golden-corpus/README.md` + +--- + +## Tasks + +### T1: Corpus Structure Redesign + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: — + +**Description**: +Redesign corpus structure for comprehensive test coverage and easy navigation. 
+ +**Implementation Path**: `bench/golden-corpus/` + +**Proposed Structure**: +``` +bench/golden-corpus/ +├── corpus-manifest.json # Master index with all cases +├── corpus-version.json # Versioning metadata +├── README.md # Documentation +├── categories/ +│ ├── severity/ # CVE severity level cases +│ │ ├── critical/ +│ │ ├── high/ +│ │ ├── medium/ +│ │ └── low/ +│ ├── vex/ # VEX scenario cases +│ │ ├── not-affected/ +│ │ ├── affected/ +│ │ ├── under-investigation/ +│ │ └── conflicting/ +│ ├── reachability/ # Reachability analysis cases +│ │ ├── reachable/ +│ │ ├── not-reachable/ +│ │ └── inconclusive/ +│ ├── unknowns/ # Unknowns scenarios +│ │ ├── pkg-source-unknown/ +│ │ ├── cpe-ambiguous/ +│ │ ├── version-unparseable/ +│ │ └── mixed-unknowns/ +│ ├── scale/ # Large SBOM cases +│ │ ├── small-200/ +│ │ ├── medium-2k/ +│ │ ├── large-20k/ +│ │ └── xlarge-50k/ +│ ├── distro/ # Multi-distro cases +│ │ ├── alpine/ +│ │ ├── debian/ +│ │ ├── rhel/ +│ │ ├── suse/ +│ │ └── ubuntu/ +│ ├── interop/ # Interop test cases +│ │ ├── syft-generated/ +│ │ ├── trivy-generated/ +│ │ └── grype-consumed/ +│ └── negative/ # Negative/error cases +│ ├── malformed-spdx/ +│ ├── corrupted-dsse/ +│ ├── missing-digests/ +│ └── unsupported-distro/ +└── shared/ + ├── policies/ # Shared policy fixtures + ├── feeds/ # Feed snapshots + └── keys/ # Test signing keys +``` + +**Each Case Structure**: +``` +case-name/ +├── case-manifest.json # Case metadata +├── input/ +│ ├── image.tar.gz # Container image (or reference) +│ ├── sbom-cyclonedx.json # SBOM (CycloneDX format) +│ └── sbom-spdx.json # SBOM (SPDX format) +├── expected/ +│ ├── verdict.json # Expected verdict +│ ├── evidence-index.json # Expected evidence +│ ├── unknowns.json # Expected unknowns +│ └── delta-verdict.json # Expected delta (if applicable) +└── run-manifest.json # Run manifest for replay +``` + +**Acceptance Criteria**: +- [ ] Directory structure created +- [ ] All category directories exist +- [ ] Template case structure documented +- [ ] Existing cases migrated to new structure + +--- + +### T2: Severity Level Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create comprehensive test cases for each CVE severity level. + +**Cases to Create**: + +| Case ID | Severity | Description | +|---------|----------|-------------| +| SEV-001 | Critical | Log4Shell (CVE-2021-44228) in Java app | +| SEV-002 | Critical | Spring4Shell (CVE-2022-22965) in Spring Boot | +| SEV-003 | High | OpenSSL CVE-2022-3602 in Alpine | +| SEV-004 | High | Multiple high CVEs in npm packages | +| SEV-005 | Medium | Medium-severity in Python dependencies | +| SEV-006 | Medium | Medium with VEX mitigation | +| SEV-007 | Low | Low-severity informational | +| SEV-008 | Low | Low with compensating control | + +**Each Case Includes**: +- Minimal container image with vulnerable package +- SBOM in both CycloneDX and SPDX formats +- Expected verdict with scoring breakdown +- Run manifest for replay + +**Acceptance Criteria**: +- [ ] 8 severity cases created +- [ ] Each case has all required artifacts +- [ ] Cases validate against schemas +- [ ] Cases pass determinism tests + +--- + +### T3: VEX Scenario Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create test cases for VEX document handling and precedence. 
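+
+For orientation, a minimal OpenVEX statement of the kind the not-affected cases below carry (the document id, author, and product purl are illustrative only):
+
+```json
+{
+  "@context": "https://openvex.dev/ns/v0.2.0",
+  "@id": "https://vendor.example/vex/VEX-001",
+  "author": "Example Vendor",
+  "timestamp": "2025-11-01T00:00:00Z",
+  "version": 1,
+  "statements": [
+    {
+      "vulnerability": { "name": "CVE-2021-44228" },
+      "products": [
+        { "@id": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1" }
+      ],
+      "status": "not_affected",
+      "justification": "vulnerable_code_not_in_execute_path"
+    }
+  ]
+}
+```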
+ +**Cases to Create**: + +| Case ID | Scenario | Description | +|---------|----------|-------------| +| VEX-001 | Not Affected | Vendor VEX marks CVE not affected | +| VEX-002 | Not Affected | Feature flag disables vulnerable code | +| VEX-003 | Affected | VEX confirms affected with fix available | +| VEX-004 | Under Investigation | Status pending vendor analysis | +| VEX-005 | Conflicting | Vendor vs distro VEX conflict | +| VEX-006 | Conflicting | Multiple vendor VEX with different status | +| VEX-007 | Precedence | Vendor > distro > internal precedence test | +| VEX-008 | Expiry | VEX with expiration date | + +**Acceptance Criteria**: +- [ ] 8 VEX cases created +- [ ] VEX documents in OpenVEX, CSAF, CycloneDX formats +- [ ] Precedence rules exercised +- [ ] Expected evidence includes VEX references + +--- + +### T4: Reachability Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create test cases for reachability analysis outcomes. + +**Cases to Create**: + +| Case ID | Status | Description | +|---------|--------|-------------| +| REACH-001 | Reachable | Direct call to vulnerable function | +| REACH-002 | Reachable | Transitive call path (3 hops) | +| REACH-003 | Not Reachable | Vulnerable code never invoked | +| REACH-004 | Not Reachable | Dead code path | +| REACH-005 | Inconclusive | Dynamic dispatch prevents analysis | +| REACH-006 | Inconclusive | Reflection-based invocation | +| REACH-007 | Binary | Binary-level reachability (Go) | +| REACH-008 | Binary | Binary-level reachability (Rust) | + +**Each Case Includes**: +- Source code demonstrating call path +- Call graph in expected output +- Reachability evidence with paths + +**Acceptance Criteria**: +- [ ] 8 reachability cases created +- [ ] Call paths documented +- [ ] Evidence includes entry points +- [ ] Both source and binary cases + +--- + +### T5: Unknowns Cases + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create test cases for unknowns detection and budgeting. + +**Cases to Create**: + +| Case ID | Unknown Type | Description | +|---------|--------------|-------------| +| UNK-001 | PKG_SOURCE_UNKNOWN | Package with no identifiable source | +| UNK-002 | CPE_AMBIG | Multiple CPE candidates | +| UNK-003 | VERSION_UNPARSEABLE | Non-standard version string | +| UNK-004 | DISTRO_UNRECOGNIZED | Unknown Linux distribution | +| UNK-005 | REACHABILITY_INCONCLUSIVE | Analysis cannot determine | +| UNK-006 | Mixed | Multiple unknown types combined | + +**Acceptance Criteria**: +- [ ] 6 unknowns cases created +- [ ] Each unknown type represented +- [ ] Expected unknowns list in evidence +- [ ] Budget violation case included + +--- + +### T6: Scale Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create large SBOM cases for performance testing. 
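+
+Synthetic SBOMs for the scale tiers below can be generated rather than hand-authored; a minimal sketch, assuming plain System.Text.Json and CycloneDX 1.5 field names:
+
+```csharp
+using System.Linq;
+using System.Text.Json;
+
+// Emits a synthetic CycloneDX SBOM with the requested component count.
+// Names and purls are derived from the index, so repeated runs produce
+// byte-identical fixtures, which keeps the determinism checks meaningful.
+static string GenerateSyntheticSbom(int componentCount)
+{
+    var bom = new
+    {
+        bomFormat = "CycloneDX",
+        specVersion = "1.5",
+        version = 1,
+        components = Enumerable.Range(0, componentCount).Select(i => new
+        {
+            type = "library",
+            name = $"synthetic-pkg-{i:D6}",
+            version = "1.0.0",
+            purl = $"pkg:npm/synthetic-pkg-{i:D6}@1.0.0"
+        })
+    };
+
+    return JsonSerializer.Serialize(bom);
+}
+```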
+ +**Cases to Create**: + +| Case ID | Size | Components | Description | +|---------|------|------------|-------------| +| SCALE-001 | Small | 200 | Minimal Node.js app | +| SCALE-002 | Medium | 2,000 | Enterprise Java app | +| SCALE-003 | Large | 20,000 | Monorepo with many deps | +| SCALE-004 | XLarge | 50,000 | Worst-case container | + +**Each Case Includes**: +- Synthetic SBOM with realistic structure +- Expected performance metrics +- Memory usage baselines + +**Acceptance Criteria**: +- [ ] 4 scale cases created +- [ ] SBOMs pass schema validation +- [ ] Performance baselines documented +- [ ] Determinism verified at scale + +--- + +### T7: Distro Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create multi-distro test cases for OS package matching. + +**Cases to Create**: + +| Case ID | Distro | Description | +|---------|--------|-------------| +| DISTRO-001 | Alpine 3.18 | musl-based, apk packages | +| DISTRO-002 | Debian 12 | dpkg-based, apt packages | +| DISTRO-003 | RHEL 9 | rpm-based, dnf packages | +| DISTRO-004 | SUSE 15 | rpm-based, zypper packages | +| DISTRO-005 | Ubuntu 22.04 | dpkg-based, snap support | + +**Each Case Includes**: +- Real container image digest +- OS-specific CVEs +- NEVRA/EVR matching tests + +**Acceptance Criteria**: +- [ ] 5 distro cases created +- [ ] Each uses real CVEs for that distro +- [ ] Package version matching tested +- [ ] Security tracker references included + +--- + +### T8: Interop Cases + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create interop test cases with third-party tools. + +**Cases to Create**: + +| Case ID | Tool | Description | +|---------|------|-------------| +| INTEROP-001 | Syft | SBOM generated by Syft (CycloneDX) | +| INTEROP-002 | Syft | SBOM generated by Syft (SPDX) | +| INTEROP-003 | Trivy | SBOM generated by Trivy | +| INTEROP-004 | Grype | Findings from Grype scan | +| INTEROP-005 | cosign | Attestation signed with cosign | + +**Acceptance Criteria**: +- [ ] 5 interop cases created +- [ ] Real tool outputs captured +- [ ] Findings parity documented +- [ ] Round-trip verification + +--- + +### T9: Negative Cases + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Create negative test cases for error handling. + +**Cases to Create**: + +| Case ID | Error Type | Description | +|---------|------------|-------------| +| NEG-001 | Malformed SPDX | Invalid SPDX JSON structure | +| NEG-002 | Malformed CycloneDX | Invalid CycloneDX schema | +| NEG-003 | Corrupted DSSE | DSSE envelope with bad signature | +| NEG-004 | Missing Digests | SBOM without component hashes | +| NEG-005 | Unsupported Distro | Unknown Linux distribution | +| NEG-006 | Zip Bomb | Malicious compressed artifact | + +**Acceptance Criteria**: +- [ ] 6 negative cases created +- [ ] Each triggers specific error +- [ ] Error messages documented +- [ ] No crashes on malformed input + +--- + +### T10: Corpus Management Tooling + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1-T9 + +**Description**: +Create tooling for corpus management and validation. 
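+
+The validation tooling below checks each `case-manifest.json`; one plausible shape for that file (all field names here are assumptions; T1's case template is authoritative):
+
+```json
+{
+  "caseId": "SEV-001",
+  "category": "severity",
+  "description": "Log4Shell (CVE-2021-44228) in Java app",
+  "inputs": ["input/sbom-cyclonedx.json", "input/sbom-spdx.json"],
+  "expected": ["expected/verdict.json", "expected/evidence-index.json"],
+  "deterministic": true
+}
+```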
+ +**Tools to Create**: +```bash +# Validate all corpus cases +python3 scripts/corpus/validate-corpus.py + +# Generate corpus manifest +python3 scripts/corpus/generate-manifest.py + +# Run determinism check on all cases +python3 scripts/corpus/check-determinism.py + +# Add new case from template +python3 scripts/corpus/add-case.py --category severity --name "new-case" +``` + +**Acceptance Criteria**: +- [ ] Validation script validates all cases +- [ ] Manifest generation script +- [ ] Determinism check script +- [ ] Case template generator + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Corpus Structure Redesign | +| 2 | T2 | TODO | T1 | QA Team | Severity Level Cases | +| 3 | T3 | TODO | T1 | QA Team | VEX Scenario Cases | +| 4 | T4 | TODO | T1 | QA Team | Reachability Cases | +| 5 | T5 | TODO | T1 | QA Team | Unknowns Cases | +| 6 | T6 | TODO | T1 | QA Team | Scale Cases | +| 7 | T7 | TODO | T1 | QA Team | Distro Cases | +| 8 | T8 | TODO | T1 | QA Team | Interop Cases | +| 9 | T9 | TODO | T1 | QA Team | Negative Cases | +| 10 | T10 | TODO | T1-T9 | QA Team | Corpus Management Tooling | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Golden corpus expansion required for comprehensive E2E testing. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Case naming convention | Decision | QA Team | CATEGORY-NNN format | +| Image storage | Decision | QA Team | Reference digests, not full images | +| Corpus versioning | Decision | QA Team | Semantic versioning tied to algorithm changes | + +--- + +## Success Criteria + +- [ ] All 10 tasks marked DONE +- [ ] 50+ test cases in corpus +- [ ] All categories have representative cases +- [ ] Corpus passes validation +- [ ] Determinism verified across all cases +- [ ] Management tooling functional diff --git a/docs/implplan/SPRINT_5100_0002_0001_canonicalization_utilities.md b/docs/implplan/SPRINT_5100_0002_0001_canonicalization_utilities.md new file mode 100644 index 000000000..c4ac04582 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0002_0001_canonicalization_utilities.md @@ -0,0 +1,742 @@ +# Sprint 5100.0002.0001 · Canonicalization Utilities + +## Topic & Scope + +- Implement canonical JSON serialization for deterministic output. +- Create stable ordering utilities for packages, vulnerabilities, edges, and evidence lists. +- Ensure UTF-8/invariant culture enforcement across all outputs. +- Add property-based tests for ordering invariants. +- **Working directory:** `src/__Libraries/StellaOps.Canonicalization/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 5100.0001.0001 (Run Manifest Schema) uses canonicalization +- **Downstream**: Sprint 5100.0002.0002 (Replay Runner) depends on deterministic output +- **Safe to parallelize with**: Sprint 5100.0001.0002, 5100.0001.0003 + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md` + +--- + +## Tasks + +### T1: Canonical JSON Serializer + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Implement canonical JSON serialization with stable key ordering and consistent formatting. 
+
+**Implementation Path**: `src/__Libraries/StellaOps.Canonicalization/Json/CanonicalJsonSerializer.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Canonicalization.Json;
+
+/// <summary>
+/// Produces canonical JSON output with deterministic ordering.
+/// Implements RFC 8785 (JSON Canonicalization Scheme) principles.
+/// </summary>
+public static class CanonicalJsonSerializer
+{
+    private static readonly JsonSerializerOptions Options = new()
+    {
+        // Deterministic settings
+        WriteIndented = false,
+        PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
+        DictionaryKeyPolicy = JsonNamingPolicy.CamelCase,
+        DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
+        Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping,
+
+        // Ordering converters
+        Converters =
+        {
+            new StableDictionaryConverter(),
+            new StableArrayConverter(),
+            new Iso8601DateTimeConverter()
+        },
+
+        // Number handling for cross-platform consistency
+        NumberHandling = JsonNumberHandling.Strict
+    };
+
+    /// <summary>
+    /// Serializes an object to canonical JSON.
+    /// </summary>
+    public static string Serialize<T>(T value)
+    {
+        return JsonSerializer.Serialize(value, Options);
+    }
+
+    /// <summary>
+    /// Serializes and computes SHA-256 digest.
+    /// </summary>
+    public static (string Json, string Digest) SerializeWithDigest<T>(T value)
+    {
+        var json = Serialize(value);
+        var hash = SHA256.HashData(Encoding.UTF8.GetBytes(json));
+        var digest = Convert.ToHexString(hash).ToLowerInvariant();
+        return (json, digest);
+    }
+
+    /// <summary>
+    /// Deserializes from canonical JSON.
+    /// </summary>
+    public static T Deserialize<T>(string json)
+    {
+        return JsonSerializer.Deserialize<T>(json, Options)
+            ?? throw new InvalidOperationException($"Failed to deserialize {typeof(T).Name}");
+    }
+}
+
+/// <summary>
+/// Converter factory that orders dictionary keys alphabetically.
+/// </summary>
+public sealed class StableDictionaryConverter : JsonConverterFactory
+{
+    public override bool CanConvert(Type typeToConvert) =>
+        typeToConvert.IsGenericType &&
+        (typeToConvert.GetGenericTypeDefinition() == typeof(Dictionary<,>) ||
+         typeToConvert.GetGenericTypeDefinition() == typeof(ImmutableDictionary<,>));
+
+    public override JsonConverter CreateConverter(Type typeToConvert, JsonSerializerOptions options)
+    {
+        var keyType = typeToConvert.GetGenericArguments()[0];
+        var valueType = typeToConvert.GetGenericArguments()[1];
+        var converterType = typeof(StableDictionaryConverter<,>).MakeGenericType(keyType, valueType);
+        return (JsonConverter)Activator.CreateInstance(converterType)!;
+    }
+}
+
+public sealed class StableDictionaryConverter<TKey, TValue> : JsonConverter<Dictionary<TKey, TValue>>
+    where TKey : notnull
+{
+    // Plain options for reads: reusing the canonical options here would re-enter
+    // this converter through the factory and recurse indefinitely.
+    private static readonly JsonSerializerOptions ReadOptions = new();
+
+    public override Dictionary<TKey, TValue>? Read(
+        ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
+    {
+        return JsonSerializer.Deserialize<Dictionary<TKey, TValue>>(ref reader, ReadOptions);
+    }
+
+    public override void Write(
+        Utf8JsonWriter writer, Dictionary<TKey, TValue> value, JsonSerializerOptions options)
+    {
+        writer.WriteStartObject();
+        foreach (var kvp in value.OrderBy(x => x.Key?.ToString(), StringComparer.Ordinal))
+        {
+            writer.WritePropertyName(kvp.Key?.ToString() ?? "");
+            JsonSerializer.Serialize(writer, kvp.Value, options);
+        }
+        writer.WriteEndObject();
+    }
+}
+
+/// <summary>
+/// Converter for ISO 8601 date/time with UTC normalization.
+/// </summary>
+public sealed class Iso8601DateTimeConverter : JsonConverter<DateTimeOffset>
+{
+    public override DateTimeOffset Read(
+        ref Utf8JsonReader reader, Type typeToConvert, JsonSerializerOptions options)
+    {
+        return DateTimeOffset.Parse(reader.GetString()!, CultureInfo.InvariantCulture);
+    }
+
+    public override void Write(
+        Utf8JsonWriter writer, DateTimeOffset value, JsonSerializerOptions options)
+    {
+        // Always output in UTC with fixed format
+        writer.WriteStringValue(value.ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ss.fffZ", CultureInfo.InvariantCulture));
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Stable key ordering (alphabetical)
+- [ ] Consistent array ordering
+- [ ] UTC ISO-8601 timestamps
+- [ ] No whitespace in output
+- [ ] camelCase property naming
+
+---
+
+### T2: Collection Orderers
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Implement stable ordering for domain collections: packages, vulnerabilities, edges, evidence.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Canonicalization/Ordering/`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Canonicalization.Ordering;
+
+/// <summary>
+/// Provides stable ordering for SBOM packages.
+/// Order: purl (if present) -> name -> version -> type
+/// </summary>
+public static class PackageOrderer
+{
+    public static IOrderedEnumerable<T> StableOrder<T>(
+        this IEnumerable<T> packages,
+        Func<T, string?> getPurl,
+        Func<T, string?> getName,
+        Func<T, string?> getVersion,
+        Func<T, string?> getType)
+    {
+        return packages
+            .OrderBy(p => getPurl(p) ?? "", StringComparer.Ordinal)
+            .ThenBy(p => getName(p) ?? "", StringComparer.Ordinal)
+            .ThenBy(p => getVersion(p) ?? "", StringComparer.Ordinal)
+            .ThenBy(p => getType(p) ?? "", StringComparer.Ordinal);
+    }
+}
+
+/// <summary>
+/// Provides stable ordering for vulnerabilities.
+/// Order: id (CVE/GHSA) -> source -> severity
+/// </summary>
+public static class VulnerabilityOrderer
+{
+    public static IOrderedEnumerable<T> StableOrder<T>(
+        this IEnumerable<T> vulnerabilities,
+        Func<T, string> getId,
+        Func<T, string?> getSource,
+        Func<T, decimal?> getSeverity)
+    {
+        return vulnerabilities
+            .OrderBy(v => getId(v), StringComparer.Ordinal)
+            .ThenBy(v => getSource(v) ?? "", StringComparer.Ordinal)
+            .ThenByDescending(v => getSeverity(v) ?? 0);
+    }
+}
+
+/// <summary>
+/// Provides stable ordering for graph edges.
+/// Order: source -> target -> type
+/// </summary>
+public static class EdgeOrderer
+{
+    public static IOrderedEnumerable<T> StableOrder<T>(
+        this IEnumerable<T> edges,
+        Func<T, string> getSource,
+        Func<T, string> getTarget,
+        Func<T, string?> getType)
+    {
+        return edges
+            .OrderBy(e => getSource(e), StringComparer.Ordinal)
+            .ThenBy(e => getTarget(e), StringComparer.Ordinal)
+            .ThenBy(e => getType(e) ?? "", StringComparer.Ordinal);
+    }
+}
+
+/// <summary>
+/// Provides stable ordering for evidence lists.
+/// Order: type -> id -> digest
+/// </summary>
+public static class EvidenceOrderer
+{
+    public static IOrderedEnumerable<T> StableOrder<T>(
+        this IEnumerable<T> evidence,
+        Func<T, string> getType,
+        Func<T, string> getId,
+        Func<T, string?> getDigest)
+    {
+        return evidence
+            .OrderBy(e => getType(e), StringComparer.Ordinal)
+            .ThenBy(e => getId(e), StringComparer.Ordinal)
+            .ThenBy(e => getDigest(e) ?? "", StringComparer.Ordinal);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] PackageOrderer with PURL priority
+- [ ] VulnerabilityOrderer with ID priority
+- [ ] EdgeOrderer for graph determinism
+- [ ] EvidenceOrderer for chain ordering
+- [ ] All use StringComparer.Ordinal
+
+---
+
+### T3: Culture Invariant Utilities
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Utilities for culture-invariant operations.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Canonicalization/Culture/InvariantCulture.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Canonicalization.Culture;
+
+/// <summary>
+/// Ensures all string operations use invariant culture.
+/// </summary>
+public static class InvariantCulture
+{
+    /// <summary>
+    /// Forces invariant culture for the current thread.
+    /// </summary>
+    public static IDisposable Scope()
+    {
+        var original = CultureInfo.CurrentCulture;
+        CultureInfo.CurrentCulture = CultureInfo.InvariantCulture;
+        CultureInfo.CurrentUICulture = CultureInfo.InvariantCulture;
+        return new CultureScope(original);
+    }
+
+    /// <summary>
+    /// Compares strings using ordinal comparison.
+    /// </summary>
+    public static int Compare(string? a, string? b) =>
+        string.Compare(a, b, StringComparison.Ordinal);
+
+    /// <summary>
+    /// Formats a decimal with invariant culture.
+    /// </summary>
+    public static string FormatDecimal(decimal value) =>
+        value.ToString("G", CultureInfo.InvariantCulture);
+
+    /// <summary>
+    /// Parses a decimal with invariant culture.
+    /// </summary>
+    public static decimal ParseDecimal(string value) =>
+        decimal.Parse(value, CultureInfo.InvariantCulture);
+
+    private sealed class CultureScope : IDisposable
+    {
+        private readonly CultureInfo _original;
+        public CultureScope(CultureInfo original) => _original = original;
+        public void Dispose()
+        {
+            CultureInfo.CurrentCulture = _original;
+            CultureInfo.CurrentUICulture = _original;
+        }
+    }
+}
+
+/// <summary>
+/// UTF-8 encoding utilities.
+/// </summary>
+public static class Utf8Encoding
+{
+    /// <summary>
+    /// Ensures string is valid UTF-8.
+    /// </summary>
+    public static string Normalize(string input)
+    {
+        // Normalize to NFC form for consistent representation
+        return input.Normalize(NormalizationForm.FormC);
+    }
+
+    /// <summary>
+    /// Converts to UTF-8 bytes.
+    /// </summary>
+    public static byte[] GetBytes(string input) =>
+        Encoding.UTF8.GetBytes(Normalize(input));
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Culture scope for thread isolation
+- [ ] Ordinal string comparison
+- [ ] Invariant number formatting
+- [ ] UTF-8 normalization (NFC)
+
+---
+
+### T4: Determinism Verifier
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2, T3
+
+**Description**:
+Service to verify determinism of serialization.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Canonicalization/Verification/DeterminismVerifier.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Canonicalization.Verification;
+
+/// <summary>
+/// Verifies that serialization produces identical output across runs.
+/// </summary>
+public sealed class DeterminismVerifier
+{
+    /// <summary>
+    /// Serializes an object multiple times and verifies identical output.
+    /// </summary>
+    public DeterminismResult Verify<T>(T value, int iterations = 10)
+    {
+        var outputs = new HashSet<string>();
+        var digests = new HashSet<string>();
+
+        for (var i = 0; i < iterations; i++)
+        {
+            var (json, digest) = CanonicalJsonSerializer.SerializeWithDigest(value);
+            outputs.Add(json);
+            digests.Add(digest);
+        }
+
+        return new DeterminismResult(
+            IsDeterministic: outputs.Count == 1 && digests.Count == 1,
+            UniqueOutputs: outputs.Count,
+            UniqueDigests: digests.Count,
+            SampleOutput: outputs.First(),
+            SampleDigest: digests.First());
+    }
+
+    /// <summary>
+    /// Compares two serialized objects for byte-identical output.
+    /// </summary>
+    public ComparisonResult Compare<T>(T a, T b)
+    {
+        var (jsonA, digestA) = CanonicalJsonSerializer.SerializeWithDigest(a);
+        var (jsonB, digestB) = CanonicalJsonSerializer.SerializeWithDigest(b);
+
+        if (digestA == digestB)
+            return new ComparisonResult(IsIdentical: true, Differences: []);
+
+        var differences = FindDifferences(jsonA, jsonB);
+        return new ComparisonResult(IsIdentical: false, Differences: differences);
+    }
+
+    private static IReadOnlyList<string> FindDifferences(string a, string b)
+    {
+        var differences = new List<string>();
+        using var docA = JsonDocument.Parse(a);
+        using var docB = JsonDocument.Parse(b);
+        CompareElements(docA.RootElement, docB.RootElement, "$", differences);
+        return differences;
+    }
+
+    private static void CompareElements(
+        JsonElement a, JsonElement b, string path, List<string> differences)
+    {
+        if (a.ValueKind != b.ValueKind)
+        {
+            differences.Add($"{path}: type mismatch ({a.ValueKind} vs {b.ValueKind})");
+            return;
+        }
+
+        switch (a.ValueKind)
+        {
+            case JsonValueKind.Object:
+                var propsA = a.EnumerateObject().ToDictionary(p => p.Name);
+                var propsB = b.EnumerateObject().ToDictionary(p => p.Name);
+                foreach (var key in propsA.Keys.Union(propsB.Keys).Order())
+                {
+                    var hasA = propsA.TryGetValue(key, out var propA);
+                    var hasB = propsB.TryGetValue(key, out var propB);
+                    if (!hasA) differences.Add($"{path}.{key}: missing in first");
+                    else if (!hasB) differences.Add($"{path}.{key}: missing in second");
+                    else CompareElements(propA.Value, propB.Value, $"{path}.{key}", differences);
+                }
+                break;
+            case JsonValueKind.Array:
+                var arrA = a.EnumerateArray().ToList();
+                var arrB = b.EnumerateArray().ToList();
+                if (arrA.Count != arrB.Count)
+                    differences.Add($"{path}: array length mismatch ({arrA.Count} vs {arrB.Count})");
+                for (var i = 0; i < Math.Min(arrA.Count, arrB.Count); i++)
+                    CompareElements(arrA[i], arrB[i], $"{path}[{i}]", differences);
+                break;
+            default:
+                if (a.GetRawText() != b.GetRawText())
+                    differences.Add($"{path}: value mismatch");
+                break;
+        }
+    }
+}
+
+public sealed record DeterminismResult(
+    bool IsDeterministic,
+    int UniqueOutputs,
+    int UniqueDigests,
+    string SampleOutput,
+    string SampleDigest);
+
+public sealed record ComparisonResult(
+    bool IsIdentical,
+    IReadOnlyList<string> Differences);
+```
+
+**Acceptance Criteria**:
+- [ ] Multi-iteration verification
+- [ ] Deep comparison with path reporting
+- [ ] Difference details for debugging
+- [ ] JSON path format for differences
+
+---
+
+### T5: Property-Based Tests
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Description**:
+Property-based tests using FsCheck for ordering invariants.
+
+**Implementation Path**: `src/__Libraries/__Tests/StellaOps.Canonicalization.Tests/Properties/`
+
+**Test Properties**:
+```csharp
+using FsCheck;
+using FsCheck.Xunit;
+using StellaOps.Canonicalization.Json;
+using StellaOps.Canonicalization.Ordering;
+
+namespace StellaOps.Canonicalization.Tests.Properties;
+
+public class CanonicalJsonProperties
+{
+    [Property]
+    public Property Serialize_IsIdempotent(Dictionary<string, int> dict)
+    {
+        var json1 = CanonicalJsonSerializer.Serialize(dict);
+        var json2 = CanonicalJsonSerializer.Serialize(dict);
+        return (json1 == json2).ToProperty();
+    }
+
+    [Property]
+    public Property Serialize_OrderIndependent(Dictionary<string, int> dict)
+    {
+        var reversed = dict.Reverse().ToDictionary(x => x.Key, x => x.Value);
+        var json1 = CanonicalJsonSerializer.Serialize(dict);
+        var json2 = CanonicalJsonSerializer.Serialize(reversed);
+        return (json1 == json2).ToProperty();
+    }
+
+    [Property]
+    public Property Digest_IsDeterministic(string? input)
+    {
+        var obj = new { Value = input ?? "" };
+        var (_, digest1) = CanonicalJsonSerializer.SerializeWithDigest(obj);
+        var (_, digest2) = CanonicalJsonSerializer.SerializeWithDigest(obj);
+        return (digest1 == digest2).ToProperty();
+    }
+}
+
+public class OrderingProperties
+{
+    [Property]
+    public Property PackageOrdering_IsStable(List<(string purl, string name, string version)> packages)
+    {
+        var ordered1 = packages.StableOrder(p => p.purl, p => p.name, p => p.version, _ => null).ToList();
+        var ordered2 = packages.StableOrder(p => p.purl, p => p.name, p => p.version, _ => null).ToList();
+        return ordered1.SequenceEqual(ordered2).ToProperty();
+    }
+
+    [Property]
+    public Property VulnerabilityOrdering_IsTransitive(
+        List<(string id, string source, decimal severity)> vulns)
+    {
+        var ordered = vulns.StableOrder(v => v.id, v => v.source, v => v.severity).ToList();
+        // A stable total ordering implies consecutive ids are non-decreasing;
+        // checking every adjacent pair covers transitivity across the list.
+        for (var i = 0; i < ordered.Count - 1; i++)
+        {
+            if (string.CompareOrdinal(ordered[i].id, ordered[i + 1].id) > 0)
+                return false.ToProperty();
+        }
+        return true.ToProperty();
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Idempotency property tests
+- [ ] Order-independence property tests
+- [ ] Digest determinism property tests
+- [ ] 1000+ generated test cases
+- [ ] All properties pass
+
+---
+
+### T6: Unit Tests
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Description**:
+Standard unit tests for canonicalization utilities.
+ +**Implementation Path**: `src/__Libraries/__Tests/StellaOps.Canonicalization.Tests/` + +**Test Cases**: +```csharp +public class CanonicalJsonSerializerTests +{ + [Fact] + public void Serialize_Dictionary_OrdersKeysAlphabetically() + { + var dict = new Dictionary { ["z"] = 1, ["a"] = 2, ["m"] = 3 }; + var json = CanonicalJsonSerializer.Serialize(dict); + json.Should().Be("{\"a\":2,\"m\":3,\"z\":1}"); + } + + [Fact] + public void Serialize_DateTimeOffset_UsesUtcIso8601() + { + var dt = new DateTimeOffset(2024, 1, 15, 10, 30, 0, TimeSpan.FromHours(5)); + var obj = new { Timestamp = dt }; + var json = CanonicalJsonSerializer.Serialize(obj); + json.Should().Contain("2024-01-15T05:30:00.000Z"); + } + + [Fact] + public void Serialize_NullValues_AreOmitted() + { + var obj = new { Name = "test", Value = (string?)null }; + var json = CanonicalJsonSerializer.Serialize(obj); + json.Should().NotContain("value"); + } + + [Fact] + public void SerializeWithDigest_ProducesConsistentDigest() + { + var obj = new { Name = "test", Value = 123 }; + var (_, digest1) = CanonicalJsonSerializer.SerializeWithDigest(obj); + var (_, digest2) = CanonicalJsonSerializer.SerializeWithDigest(obj); + digest1.Should().Be(digest2); + } +} + +public class PackageOrdererTests +{ + [Fact] + public void StableOrder_OrdersByPurlFirst() + { + var packages = new[] + { + (purl: "pkg:npm/b@1.0.0", name: "b", version: "1.0.0"), + (purl: "pkg:npm/a@1.0.0", name: "a", version: "1.0.0") + }; + var ordered = packages.StableOrder(p => p.purl, p => p.name, p => p.version, _ => null).ToList(); + ordered[0].purl.Should().Be("pkg:npm/a@1.0.0"); + } +} +``` + +**Acceptance Criteria**: +- [ ] Key ordering tests +- [ ] DateTime formatting tests +- [ ] Null handling tests +- [ ] Digest consistency tests +- [ ] All orderer tests + +--- + +### T7: Project Setup + +**Assignee**: QA Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the project structure and dependencies. + +**Implementation Path**: `src/__Libraries/StellaOps.Canonicalization/StellaOps.Canonicalization.csproj` + +**Project File**: +```xml + + + net10.0 + enable + enable + preview + + + + + + +``` + +**Test Project**: +```xml + + + net10.0 + + + + + + + + +``` + +**Acceptance Criteria**: +- [ ] Main project compiles +- [ ] Test project compiles +- [ ] FsCheck integrated + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Canonical JSON Serializer | +| 2 | T2 | TODO | — | QA Team | Collection Orderers | +| 3 | T3 | TODO | — | QA Team | Culture Invariant Utilities | +| 4 | T4 | TODO | T1, T2, T3 | QA Team | Determinism Verifier | +| 5 | T5 | TODO | T1-T4 | QA Team | Property-Based Tests | +| 6 | T6 | TODO | T1-T4 | QA Team | Unit Tests | +| 7 | T7 | TODO | — | QA Team | Project Setup | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Canonicalization is foundational for deterministic replay. 
| Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| JSON canonicalization | Decision | QA Team | Follow RFC 8785 principles | +| String comparison | Decision | QA Team | Ordinal comparison for portability | +| DateTime format | Decision | QA Team | ISO 8601 with milliseconds, always UTC | +| Unicode normalization | Decision | QA Team | NFC form for consistency | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Canonical JSON produces identical output +- [ ] All orderers are stable and deterministic +- [ ] Property-based tests pass with 1000+ cases +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_5100_0002_0002_replay_runner_service.md b/docs/implplan/SPRINT_5100_0002_0002_replay_runner_service.md new file mode 100644 index 000000000..142aeec70 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0002_0002_replay_runner_service.md @@ -0,0 +1,585 @@ +# Sprint 5100.0002.0002 · Replay Runner Service + +## Topic & Scope + +- Implement the Replay Runner service for deterministic verdict replay. +- Load run manifests and execute scans with identical inputs. +- Compare verdict bytes and report differences. +- Enable time-travel verification for auditors. +- **Working directory:** `src/__Libraries/StellaOps.Replay/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 5100.0001.0001 (Run Manifest), Sprint 5100.0002.0001 (Canonicalization) +- **Downstream**: Sprint 5100.0006.0001 (Audit Pack) uses replay for verification +- **Safe to parallelize with**: Sprint 5100.0002.0003 (Delta-Verdict) + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md` +- Sprint 5100.0001.0001 completion + +--- + +## Tasks + +### T1: Replay Engine Core + +**Assignee**: QA Team +**Story Points**: 8 +**Status**: TODO +**Dependencies**: — + +**Description**: +Core replay engine that executes scans from run manifests. + +**Implementation Path**: `src/__Libraries/StellaOps.Replay/Engine/ReplayEngine.cs` + +**Implementation**: +```csharp +namespace StellaOps.Replay.Engine; + +/// +/// Executes scans deterministically from run manifests. +/// Enables time-travel replay for verification and auditing. +/// +public sealed class ReplayEngine : IReplayEngine +{ + private readonly IFeedLoader _feedLoader; + private readonly IPolicyLoader _policyLoader; + private readonly IScannerFactory _scannerFactory; + private readonly ILogger _logger; + + public ReplayEngine( + IFeedLoader feedLoader, + IPolicyLoader policyLoader, + IScannerFactory scannerFactory, + ILogger logger) + { + _feedLoader = feedLoader; + _policyLoader = policyLoader; + _scannerFactory = scannerFactory; + _logger = logger; + } + + /// + /// Replays a scan from a run manifest. 
+ /// + public async Task ReplayAsync( + RunManifest manifest, + ReplayOptions options, + CancellationToken ct = default) + { + _logger.LogInformation("Starting replay for run {RunId}", manifest.RunId); + + // Validate manifest + var validationResult = ValidateManifest(manifest); + if (!validationResult.IsValid) + { + return ReplayResult.Failed( + manifest.RunId, + "Manifest validation failed", + validationResult.Errors); + } + + // Load frozen inputs + var feedResult = await LoadFeedSnapshotAsync(manifest.FeedSnapshot, ct); + if (!feedResult.Success) + return ReplayResult.Failed(manifest.RunId, "Failed to load feed snapshot", [feedResult.Error]); + + var policyResult = await LoadPolicySnapshotAsync(manifest.PolicySnapshot, ct); + if (!policyResult.Success) + return ReplayResult.Failed(manifest.RunId, "Failed to load policy snapshot", [policyResult.Error]); + + // Configure scanner with frozen time and PRNG + var scannerOptions = new ScannerOptions + { + FeedSnapshot = feedResult.Value, + PolicySnapshot = policyResult.Value, + CryptoProfile = manifest.CryptoProfile, + PrngSeed = manifest.PrngSeed, + FrozenTime = options.UseFrozenTime ? manifest.InitiatedAt : null, + CanonicalizationVersion = manifest.CanonicalizationVersion + }; + + // Execute scan + var scanner = _scannerFactory.Create(scannerOptions); + var scanResult = await scanner.ScanAsync(manifest.ArtifactDigests, ct); + + // Serialize verdict canonically + var (verdictJson, verdictDigest) = CanonicalJsonSerializer.SerializeWithDigest(scanResult.Verdict); + + return new ReplayResult + { + RunId = manifest.RunId, + Success = true, + VerdictJson = verdictJson, + VerdictDigest = verdictDigest, + EvidenceIndex = scanResult.EvidenceIndex, + ExecutedAt = DateTimeOffset.UtcNow, + DurationMs = scanResult.DurationMs + }; + } + + /// + /// Compares two replay results for determinism. 
+    /// </summary>
+    public DeterminismCheckResult CheckDeterminism(ReplayResult a, ReplayResult b)
+    {
+        if (a.VerdictDigest == b.VerdictDigest)
+        {
+            return new DeterminismCheckResult
+            {
+                IsDeterministic = true,
+                DigestA = a.VerdictDigest,
+                DigestB = b.VerdictDigest,
+                Differences = []
+            };
+        }
+
+        var differences = FindJsonDifferences(a.VerdictJson, b.VerdictJson);
+        return new DeterminismCheckResult
+        {
+            IsDeterministic = false,
+            DigestA = a.VerdictDigest,
+            DigestB = b.VerdictDigest,
+            Differences = differences
+        };
+    }
+
+    private ValidationResult ValidateManifest(RunManifest manifest)
+    {
+        var errors = new List<string>();
+
+        if (string.IsNullOrEmpty(manifest.RunId))
+            errors.Add("RunId is required");
+
+        if (manifest.ArtifactDigests.Length == 0)
+            errors.Add("At least one artifact digest required");
+
+        if (manifest.FeedSnapshot.Digest == null)
+            errors.Add("Feed snapshot digest required");
+
+        return new ValidationResult(errors.Count == 0, errors);
+    }
+
+    private async Task<LoadResult<FeedSnapshot>> LoadFeedSnapshotAsync(
+        FeedSnapshot snapshot, CancellationToken ct)
+    {
+        try
+        {
+            var feed = await _feedLoader.LoadByDigestAsync(snapshot.Digest, ct);
+            if (feed.Digest != snapshot.Digest)
+                return LoadResult<FeedSnapshot>.Fail($"Feed digest mismatch: expected {snapshot.Digest}");
+            return LoadResult<FeedSnapshot>.Ok(feed);
+        }
+        catch (Exception ex)
+        {
+            return LoadResult<FeedSnapshot>.Fail($"Failed to load feed: {ex.Message}");
+        }
+    }
+
+    private async Task<LoadResult<PolicySnapshot>> LoadPolicySnapshotAsync(
+        PolicySnapshot snapshot, CancellationToken ct)
+    {
+        try
+        {
+            var policy = await _policyLoader.LoadByDigestAsync(snapshot.LatticeRulesDigest, ct);
+            return LoadResult<PolicySnapshot>.Ok(policy);
+        }
+        catch (Exception ex)
+        {
+            return LoadResult<PolicySnapshot>.Fail($"Failed to load policy: {ex.Message}");
+        }
+    }
+
+    private static IReadOnlyList<JsonDifference> FindJsonDifferences(string? a, string? b)
+    {
+        if (a == null || b == null)
+            return [new JsonDifference("$", "One or both values are null")];
+
+        var verifier = new DeterminismVerifier();
+        var result = verifier.Compare(a, b);
+        return result.Differences.Select(d => new JsonDifference(d, "Value mismatch")).ToList();
+    }
+}
+
+public sealed record ReplayResult
+{
+    public required string RunId { get; init; }
+    public bool Success { get; init; }
+    public string? VerdictJson { get; init; }
+    public string? VerdictDigest { get; init; }
+    public EvidenceIndex? EvidenceIndex { get; init; }
+    public DateTimeOffset ExecutedAt { get; init; }
+    public long DurationMs { get; init; }
+    public IReadOnlyList<string>? Errors { get; init; }
+
+    public static ReplayResult Failed(string runId, string message, IReadOnlyList<string> errors) =>
+        new()
+        {
+            RunId = runId,
+            Success = false,
+            Errors = errors.Prepend(message).ToList(),
+            ExecutedAt = DateTimeOffset.UtcNow
+        };
+}
+
+public sealed record DeterminismCheckResult
+{
+    public bool IsDeterministic { get; init; }
+    public string? DigestA { get; init; }
+    public string? DigestB { get; init; }
+    public IReadOnlyList<JsonDifference> Differences { get; init; } = [];
+}
+
+public sealed record JsonDifference(string Path, string Description);
+
+public sealed record ReplayOptions
+{
+    public bool UseFrozenTime { get; init; } = true;
+    public bool VerifyDigests { get; init; } = true;
+    public bool CaptureEvidence { get; init; } = true;
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Load and validate run manifests
+- [ ] Load frozen feed/policy snapshots by digest
+- [ ] Configure scanner with frozen time/PRNG
+- [ ] Produce canonical verdict output
+- [ ] Report differences on non-determinism
+
+---
+
+### T2: Feed Snapshot Loader
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Load vulnerability feeds by digest for exact reproduction.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Replay/Loaders/FeedSnapshotLoader.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Replay.Loaders;
+
+public sealed class FeedSnapshotLoader : IFeedLoader
+{
+    private readonly IFeedStorage _storage;
+    private readonly ILogger<FeedSnapshotLoader> _logger;
+
+    public async Task<FeedSnapshot> LoadByDigestAsync(string digest, CancellationToken ct)
+    {
+        _logger.LogDebug("Loading feed snapshot with digest {Digest}", digest);
+
+        // Try local content-addressed store first
+        var localPath = GetLocalPath(digest);
+        if (File.Exists(localPath))
+        {
+            var feed = await LoadFromFileAsync(localPath, ct);
+            VerifyDigest(feed, digest);
+            return feed;
+        }
+
+        // Try storage backend
+        var storedFeed = await _storage.GetByDigestAsync(digest, ct);
+        if (storedFeed != null)
+        {
+            VerifyDigest(storedFeed, digest);
+            return storedFeed;
+        }
+
+        throw new FeedNotFoundException($"Feed snapshot not found: {digest}");
+    }
+
+    private static void VerifyDigest(FeedSnapshot feed, string expected)
+    {
+        var actual = ComputeDigest(feed);
+        if (actual != expected)
+            throw new DigestMismatchException($"Feed digest mismatch: expected {expected}, got {actual}");
+    }
+
+    private static string ComputeDigest(FeedSnapshot feed)
+    {
+        var json = CanonicalJsonSerializer.Serialize(feed);
+        return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(json))).ToLowerInvariant();
+    }
+
+    private static string GetLocalPath(string digest) =>
+        Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
+            "stellaops", "feeds", digest[..2], digest);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Load by digest from local store
+- [ ] Fall back to storage backend
+- [ ] Verify digest on load
+- [ ] Clear error on not found
+
+---
+
+### T3: Policy Snapshot Loader
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Load policy configurations by digest.
+
+**Implementation Path**: `src/__Libraries/StellaOps.Replay/Loaders/PolicySnapshotLoader.cs`
+
+**Acceptance Criteria**:
+- [ ] Load by digest
+- [ ] Include lattice rules
+- [ ] Verify digest integrity
+- [ ] Support offline bundle source
+
+---
+
+### T4: Replay CLI Commands
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2, T3
+
+**Description**:
+CLI commands for replay operations.
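+
+A rough sketch of how these commands could delegate to the T1 engine; the handler shape and the `IManifestReader` abstraction are assumptions, not the final CLI surface:
+
+```csharp
+// Sketch only: ReplayEngine and IManifestReader names are assumptions.
+public sealed class ReplayCommandHandler
+{
+    private readonly ReplayEngine _engine;
+    private readonly IManifestReader _manifests;
+
+    public ReplayCommandHandler(ReplayEngine engine, IManifestReader manifests)
+    {
+        _engine = engine;
+        _manifests = manifests;
+    }
+
+    // `stella replay --manifest ... --output ...`
+    public async Task<int> RunAsync(string manifestPath, string outputPath, CancellationToken ct)
+    {
+        var manifest = await _manifests.ReadAsync(manifestPath, ct);
+        var result = await _engine.ReplayAsync(manifest, new ReplayOptions(), ct);
+
+        if (!result.Success)
+        {
+            Console.Error.WriteLine(string.Join(Environment.NewLine, result.Errors ?? Array.Empty<string>()));
+            return 1;
+        }
+
+        await File.WriteAllTextAsync(outputPath, result.VerdictJson, ct);
+        Console.WriteLine($"Verdict digest: {result.VerdictDigest}");
+        return 0;
+    }
+
+    // `stella replay verify --manifest ...`: replay twice, compare digests.
+    public async Task<int> VerifyAsync(string manifestPath, CancellationToken ct)
+    {
+        var manifest = await _manifests.ReadAsync(manifestPath, ct);
+        var first = await _engine.ReplayAsync(manifest, new ReplayOptions(), ct);
+        var second = await _engine.ReplayAsync(manifest, new ReplayOptions(), ct);
+
+        var check = _engine.CheckDeterminism(first, second);
+        return check.IsDeterministic ? 0 : 2; // non-zero exit signals a violation to CI
+    }
+}
+```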
+ +**Commands**: +```bash +# Replay a scan from manifest +stella replay --manifest run-manifest.json --output verdict.json + +# Verify determinism (replay twice and compare) +stella replay verify --manifest run-manifest.json + +# Compare two verdicts +stella replay diff --a verdict-a.json --b verdict-b.json + +# Batch replay from corpus +stella replay batch --corpus bench/golden-corpus/ --output results/ +``` + +**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Replay/` + +**Acceptance Criteria**: +- [ ] `replay` command executes single replay +- [ ] `replay verify` checks determinism +- [ ] `replay diff` compares verdicts +- [ ] `replay batch` processes corpus +- [ ] JSON output option + +--- + +### T5: CI Integration + +**Assignee**: DevOps Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T4 + +**Description**: +Integrate replay verification into CI. + +**Implementation Path**: `.gitea/workflows/replay-verification.yml` + +**Workflow**: +```yaml +name: Replay Verification + +on: + pull_request: + paths: + - 'src/Scanner/**' + - 'src/__Libraries/StellaOps.Canonicalization/**' + - 'bench/golden-corpus/**' + +jobs: + replay-verification: + runs-on: ubuntu-22.04 + steps: + - uses: actions/checkout@v4 + + - name: Setup .NET + uses: actions/setup-dotnet@v4 + with: + dotnet-version: '10.0.100' + + - name: Build CLI + run: dotnet build src/Cli/StellaOps.Cli -c Release + + - name: Run replay verification on corpus + run: | + ./out/stella replay batch \ + --corpus bench/golden-corpus/ \ + --output results/ \ + --verify-determinism \ + --fail-on-diff + + - name: Upload diff report + if: failure() + uses: actions/upload-artifact@v4 + with: + name: replay-diff-report + path: results/diff-report.json +``` + +**Acceptance Criteria**: +- [ ] Runs on scanner/canonicalization changes +- [ ] Processes entire golden corpus +- [ ] Fails PR on determinism violation +- [ ] Uploads diff report on failure + +--- + +### T6: Unit and Integration Tests + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1-T4 + +**Description**: +Comprehensive tests for replay functionality. 
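+
+In addition to the unit cases below, a corpus-driven integration case might look like this sketch (the corpus layout and the `TestManifests` helper are assumptions):
+
+```csharp
+[Trait("Category", "Integration")]
+public class GoldenCorpusReplayTests
+{
+    [Theory]
+    [MemberData(nameof(CorpusManifests))]
+    public async Task Replay_CorpusEntry_IsDeterministic(string manifestPath)
+    {
+        var engine = CreateEngine(); // assumed test factory, mirrors the unit tests below
+        var manifest = await TestManifests.LoadAsync(manifestPath); // hypothetical helper
+
+        var first = await engine.ReplayAsync(manifest, new ReplayOptions());
+        var second = await engine.ReplayAsync(manifest, new ReplayOptions());
+
+        engine.CheckDeterminism(first, second).IsDeterministic.Should().BeTrue();
+    }
+
+    // Enumerate every run manifest checked into the golden corpus.
+    public static IEnumerable<object[]> CorpusManifests() =>
+        Directory.EnumerateFiles("bench/golden-corpus", "run-manifest.json", SearchOption.AllDirectories)
+            .Select(path => new object[] { path });
+}
+```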
+
+**Test Cases**:
+```csharp
+public class ReplayEngineTests
+{
+    [Fact]
+    public async Task Replay_SameManifest_ProducesIdenticalVerdict()
+    {
+        var manifest = CreateTestManifest();
+        var engine = CreateEngine();
+
+        var result1 = await engine.ReplayAsync(manifest, new ReplayOptions());
+        var result2 = await engine.ReplayAsync(manifest, new ReplayOptions());
+
+        result1.VerdictDigest.Should().Be(result2.VerdictDigest);
+    }
+
+    [Fact]
+    public async Task Replay_DifferentManifest_ProducesDifferentVerdict()
+    {
+        var manifest1 = CreateTestManifest();
+        var manifest2 = manifest1 with
+        {
+            FeedSnapshot = manifest1.FeedSnapshot with { Version = "v2" }
+        };
+        var engine = CreateEngine();
+
+        var result1 = await engine.ReplayAsync(manifest1, new ReplayOptions());
+        var result2 = await engine.ReplayAsync(manifest2, new ReplayOptions());
+
+        result1.VerdictDigest.Should().NotBe(result2.VerdictDigest);
+    }
+
+    [Fact]
+    public void CheckDeterminism_IdenticalResults_ReturnsTrue()
+    {
+        var engine = CreateEngine();
+        var result1 = new ReplayResult { RunId = "run-1", VerdictDigest = "abc123" };
+        var result2 = new ReplayResult { RunId = "run-1", VerdictDigest = "abc123" };
+
+        var check = engine.CheckDeterminism(result1, result2);
+
+        check.IsDeterministic.Should().BeTrue();
+    }
+
+    [Fact]
+    public void CheckDeterminism_DifferentResults_ReturnsDifferences()
+    {
+        var engine = CreateEngine();
+        var result1 = new ReplayResult
+        {
+            RunId = "run-1",
+            VerdictJson = "{\"score\":100}",
+            VerdictDigest = "abc123"
+        };
+        var result2 = new ReplayResult
+        {
+            RunId = "run-1",
+            VerdictJson = "{\"score\":99}",
+            VerdictDigest = "def456"
+        };
+
+        var check = engine.CheckDeterminism(result1, result2);
+
+        check.IsDeterministic.Should().BeFalse();
+        check.Differences.Should().NotBeEmpty();
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Replay determinism tests
+- [ ] Feed loading tests
+- [ ] Policy loading tests
+- [ ] Diff detection tests
+- [ ] Integration tests with real corpus
+
+---
+
+### T7: Project Setup
+
+**Assignee**: QA Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create the project structure.
+
+**Acceptance Criteria**:
+- [ ] Main project compiles
+- [ ] Test project compiles
+- [ ] Dependencies on Manifest and Canonicalization
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | QA Team | Replay Engine Core |
+| 2 | T2 | TODO | — | QA Team | Feed Snapshot Loader |
+| 3 | T3 | TODO | — | QA Team | Policy Snapshot Loader |
+| 4 | T4 | TODO | T1-T3 | QA Team | Replay CLI Commands |
+| 5 | T5 | TODO | T4 | DevOps Team | CI Integration |
+| 6 | T6 | TODO | T1-T4 | QA Team | Unit and Integration Tests |
+| 7 | T7 | TODO | — | QA Team | Project Setup |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Testing Strategy advisory. Replay runner is key for determinism verification. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Frozen time | Decision | QA Team | Use manifest InitiatedAt for time-dependent operations |
+| Content-addressed storage | Decision | QA Team | Store feeds/policies by digest for exact retrieval |
+| Diff granularity | Decision | QA Team | JSON path-based diff for debugging |
+
+---
+
+## Success Criteria
+
+- [ ] All 7 tasks marked DONE
+- [ ] Replay produces identical verdicts from same manifest
+- [ ] Differences are detected and reported
+- [ ] CI blocks on determinism violations
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_5100_0002_0003_delta_verdict_generator.md b/docs/implplan/SPRINT_5100_0002_0003_delta_verdict_generator.md
new file mode 100644
index 000000000..46adf0422
--- /dev/null
+++ b/docs/implplan/SPRINT_5100_0002_0003_delta_verdict_generator.md
@@ -0,0 +1,610 @@
+# Sprint 5100.0002.0003 · Delta-Verdict Generator
+
+## Topic & Scope
+
+- Implement delta-verdict generation for diff-aware release gates.
+- Compare two scan verdicts and produce signed deltas containing only changes.
+- Enable risk budget computation based on delta magnitude.
+- Support OCI artifact attachment for delta verdicts.
+- **Working directory:** `src/__Libraries/StellaOps.DeltaVerdict/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 5100.0002.0001 (Canonicalization), Sprint 5100.0001.0002 (Evidence Index)
+- **Downstream**: UI components display deltas, Policy gates use delta for decisions
+- **Safe to parallelize with**: Sprint 5100.0002.0002 (Replay Runner)
+
+## Documentation Prerequisites
+
+- `docs/product-advisories/20-Dec-2025 - Testing strategy.md`
+- `docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md`
+- `docs/product-advisories/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md`
+
+---
+
+## Tasks
+
+### T1: Delta-Verdict Domain Model
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Define the delta-verdict model capturing changes between two verdicts.
+
+**Implementation Path**: `src/__Libraries/StellaOps.DeltaVerdict/Models/DeltaVerdict.cs`
+
+**Model Definition**:
+```csharp
+namespace StellaOps.DeltaVerdict.Models;
+
+/// <summary>
+/// Represents the difference between two scan verdicts.
+/// Used for diff-aware release gates and risk budget computation.
+/// </summary>
+public sealed record DeltaVerdict
+{
+    /// <summary>
+    /// Unique identifier for this delta.
+    /// </summary>
+    public required string DeltaId { get; init; }
+
+    /// <summary>
+    /// Schema version for forward compatibility.
+    /// </summary>
+    public required string SchemaVersion { get; init; } = "1.0.0";
+
+    /// <summary>
+    /// Reference to the base (before) verdict.
+    /// </summary>
+    public required VerdictReference BaseVerdict { get; init; }
+
+    /// <summary>
+    /// Reference to the head (after) verdict.
+    /// </summary>
+    public required VerdictReference HeadVerdict { get; init; }
+
+    /// <summary>
+    /// Components added in head.
+    /// </summary>
+    public ImmutableArray<ComponentDelta> AddedComponents { get; init; } = [];
+
+    /// <summary>
+    /// Components removed in head.
+    /// </summary>
+    public ImmutableArray<ComponentDelta> RemovedComponents { get; init; } = [];
+
+    /// <summary>
+    /// Components with version changes.
+    /// </summary>
+    public ImmutableArray<ComponentVersionDelta> ChangedComponents { get; init; } = [];
+
+    /// <summary>
+    /// New vulnerabilities introduced in head.
+    /// </summary>
+    public ImmutableArray<VulnerabilityDelta> AddedVulnerabilities { get; init; } = [];
+
+    /// <summary>
+    /// Vulnerabilities fixed in head.
+    /// </summary>
+    public ImmutableArray<VulnerabilityDelta> RemovedVulnerabilities { get; init; } = [];
+
+    /// <summary>
+    /// Vulnerabilities with status changes (e.g., VEX update).
+    /// </summary>
+    public ImmutableArray<VulnerabilityStatusDelta> ChangedVulnerabilityStatuses { get; init; } = [];
+
+    /// <summary>
+    /// Risk score changes.
+    /// </summary>
+    public required RiskScoreDelta RiskScoreDelta { get; init; }
+
+    /// <summary>
+    /// Summary statistics for the delta.
+    /// </summary>
+    public required DeltaSummary Summary { get; init; }
+
+    /// <summary>
+    /// Whether this is an "empty delta" (no changes).
+    /// </summary>
+    public bool IsEmpty => Summary.TotalChanges == 0;
+
+    /// <summary>
+    /// UTC timestamp when delta was computed.
+    /// </summary>
+    public required DateTimeOffset ComputedAt { get; init; }
+
+    /// <summary>
+    /// SHA-256 digest of this delta (excluding this field and signature).
+    /// </summary>
+    public string? DeltaDigest { get; init; }
+
+    /// <summary>
+    /// DSSE signature if signed.
+    /// </summary>
+    public string? Signature { get; init; }
+}
+
+public sealed record VerdictReference(
+    string VerdictId,
+    string Digest,
+    string? ArtifactRef,
+    DateTimeOffset ScannedAt);
+
+public sealed record ComponentDelta(
+    string Purl,
+    string Name,
+    string Version,
+    string Type,
+    ImmutableArray<string> AssociatedVulnerabilities);
+
+public sealed record ComponentVersionDelta(
+    string Purl,
+    string Name,
+    string OldVersion,
+    string NewVersion,
+    ImmutableArray<string> VulnerabilitiesFixed,
+    ImmutableArray<string> VulnerabilitiesIntroduced);
+
+public sealed record VulnerabilityDelta(
+    string VulnerabilityId,
+    string Severity,
+    decimal? CvssScore,
+    string? ComponentPurl,
+    string? ReachabilityStatus);
+
+public sealed record VulnerabilityStatusDelta(
+    string VulnerabilityId,
+    string OldStatus,
+    string NewStatus,
+    string? Reason);
+
+public sealed record RiskScoreDelta(
+    decimal OldScore,
+    decimal NewScore,
+    decimal Change,
+    decimal PercentChange,
+    RiskTrend Trend);
+
+public enum RiskTrend
+{
+    Improved,
+    Degraded,
+    Stable
+}
+
+public sealed record DeltaSummary(
+    int ComponentsAdded,
+    int ComponentsRemoved,
+    int ComponentsChanged,
+    int VulnerabilitiesAdded,
+    int VulnerabilitiesRemoved,
+    int VulnerabilityStatusChanges,
+    int TotalChanges,
+    DeltaMagnitude Magnitude);
+
+public enum DeltaMagnitude
+{
+    None,     // 0 changes
+    Minimal,  // 1-5 changes
+    Small,    // 6-20 changes
+    Medium,   // 21-50 changes
+    Large,    // 51-100 changes
+    Major     // 100+ changes
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Complete delta model with all change types
+- [ ] Component additions/removals/changes
+- [ ] Vulnerability additions/removals/status changes
+- [ ] Risk score delta with trend
+- [ ] Summary with magnitude classification
+
+---
+
+### T2: Delta Computation Engine
+
+**Assignee**: QA Team
+**Story Points**: 8
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Engine that computes deltas between two verdicts.
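+
+For intuition, a worked example of the expected output, written as a test-style sketch; the `TestVerdicts` builder and the risk scores it assigns are hypothetical:
+
+```csharp
+[Fact]
+public void ComputeDelta_UpgradeFixingOneCve_IsMinimalAndImproved()
+{
+    // Base: openssl 3.0.1 carrying CVE-2023-0001 (risk score 7.5, set by the hypothetical builder).
+    // Head: openssl 3.0.2 with the CVE fixed (risk score 0).
+    var baseVerdict = TestVerdicts.With("pkg:generic/openssl@3.0.1", cve: "CVE-2023-0001", riskScore: 7.5m);
+    var headVerdict = TestVerdicts.With("pkg:generic/openssl@3.0.2", riskScore: 0m);
+
+    var delta = new DeltaComputationEngine().ComputeDelta(baseVerdict, headVerdict);
+
+    // A single upgrade registers only a handful of changes, landing in Minimal (1-5).
+    delta.Summary.Magnitude.Should().Be(DeltaMagnitude.Minimal);
+    delta.RemovedVulnerabilities.Should().ContainSingle(v => v.VulnerabilityId == "CVE-2023-0001");
+    delta.RiskScoreDelta.Trend.Should().Be(RiskTrend.Improved);
+}
+```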
+
+**Implementation Path**: `src/__Libraries/StellaOps.DeltaVerdict/Engine/DeltaComputationEngine.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.DeltaVerdict.Engine;
+
+public sealed class DeltaComputationEngine : IDeltaComputationEngine
+{
+    public DeltaVerdict ComputeDelta(Verdict baseVerdict, Verdict headVerdict)
+    {
+        // Component diff
+        var baseComponents = baseVerdict.Components.ToDictionary(c => c.Purl);
+        var headComponents = headVerdict.Components.ToDictionary(c => c.Purl);
+
+        var addedComponents = ComputeAddedComponents(baseComponents, headComponents);
+        var removedComponents = ComputeRemovedComponents(baseComponents, headComponents);
+        var changedComponents = ComputeChangedComponents(baseComponents, headComponents);
+
+        // Vulnerability diff
+        var baseVulns = baseVerdict.Vulnerabilities.ToDictionary(v => v.Id);
+        var headVulns = headVerdict.Vulnerabilities.ToDictionary(v => v.Id);
+
+        var addedVulns = ComputeAddedVulnerabilities(baseVulns, headVulns);
+        var removedVulns = ComputeRemovedVulnerabilities(baseVulns, headVulns);
+        var changedStatuses = ComputeStatusChanges(baseVulns, headVulns);
+
+        // Risk score delta
+        var riskDelta = ComputeRiskScoreDelta(baseVerdict.RiskScore, headVerdict.RiskScore);
+
+        // Summary
+        var totalChanges = addedComponents.Length + removedComponents.Length + changedComponents.Length
+            + addedVulns.Length + removedVulns.Length + changedStatuses.Length;
+        var summary = new DeltaSummary(
+            ComponentsAdded: addedComponents.Length,
+            ComponentsRemoved: removedComponents.Length,
+            ComponentsChanged: changedComponents.Length,
+            VulnerabilitiesAdded: addedVulns.Length,
+            VulnerabilitiesRemoved: removedVulns.Length,
+            VulnerabilityStatusChanges: changedStatuses.Length,
+            TotalChanges: totalChanges,
+            Magnitude: ClassifyMagnitude(totalChanges));
+
+        return new DeltaVerdict
+        {
+            DeltaId = Guid.NewGuid().ToString(),
+            SchemaVersion = "1.0.0",
+            BaseVerdict = CreateVerdictReference(baseVerdict),
+            HeadVerdict = CreateVerdictReference(headVerdict),
+            AddedComponents = addedComponents,
+            RemovedComponents = removedComponents,
+            ChangedComponents = changedComponents,
+            AddedVulnerabilities = addedVulns,
+            RemovedVulnerabilities = removedVulns,
+            ChangedVulnerabilityStatuses = changedStatuses,
+            RiskScoreDelta = riskDelta,
+            Summary = summary,
+            ComputedAt = DateTimeOffset.UtcNow
+        };
+    }
+
+    private static ImmutableArray<ComponentDelta> ComputeAddedComponents(
+        Dictionary<string, Component> baseComponents,
+        Dictionary<string, Component> headComponents)
+    {
+        return headComponents
+            .Where(kv => !baseComponents.ContainsKey(kv.Key))
+            .Select(kv => new ComponentDelta(
+                kv.Value.Purl,
+                kv.Value.Name,
+                kv.Value.Version,
+                kv.Value.Type,
+                kv.Value.Vulnerabilities.ToImmutableArray()))
+            .ToImmutableArray();
+    }
+
+    private static RiskScoreDelta ComputeRiskScoreDelta(decimal oldScore, decimal newScore)
+    {
+        var change = newScore - oldScore;
+        var percentChange = oldScore > 0 ? (change / oldScore) * 100 : (newScore > 0 ? 100 : 0);
+        var trend = change switch
+        {
+            < 0 => RiskTrend.Improved,
+            > 0 => RiskTrend.Degraded,
+            _ => RiskTrend.Stable
+        };
+
+        return new RiskScoreDelta(oldScore, newScore, change, percentChange, trend);
+    }
+
+    private static DeltaMagnitude ClassifyMagnitude(int totalChanges) => totalChanges switch
+    {
+        0 => DeltaMagnitude.None,
+        <= 5 => DeltaMagnitude.Minimal,
+        <= 20 => DeltaMagnitude.Small,
+        <= 50 => DeltaMagnitude.Medium,
+        <= 100 => DeltaMagnitude.Large,
+        _ => DeltaMagnitude.Major
+    };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Compute component diffs (add/remove/change)
+- [ ] Compute vulnerability diffs
+- [ ] Calculate risk score delta
+- [ ] Classify magnitude
+- [ ] Handle edge cases (empty verdicts, identical verdicts)
+
+---
+
+### T3: Delta Signing Service
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Sign delta verdicts using DSSE format.
+
+**Implementation Path**: `src/__Libraries/StellaOps.DeltaVerdict/Signing/DeltaSigningService.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.DeltaVerdict.Signing;
+
+public sealed class DeltaSigningService : IDeltaSigningService
+{
+    private readonly ISignerService _signer;
+
+    public async Task<DeltaVerdict> SignAsync(
+        DeltaVerdict delta,
+        SigningOptions options,
+        CancellationToken ct = default)
+    {
+        // Compute digest of unsigned delta
+        var withDigest = DeltaVerdictSerializer.WithDigest(delta);
+
+        // Create DSSE envelope
+        var payload = DeltaVerdictSerializer.Serialize(withDigest);
+        var envelope = await _signer.CreateDsseEnvelopeAsync(
+            payload,
+            "application/vnd.stellaops.delta-verdict+json",
+            options,
+            ct);
+
+        return withDigest with { Signature = envelope.Signature };
+    }
+
+    public async Task<VerificationResult> VerifyAsync(
+        DeltaVerdict delta,
+        VerificationOptions options,
+        CancellationToken ct = default)
+    {
+        if (string.IsNullOrEmpty(delta.Signature))
+            return VerificationResult.Fail("Delta is not signed");
+
+        // Verify signature
+        var payload = DeltaVerdictSerializer.Serialize(delta with { Signature = null });
+        var result = await _signer.VerifyDsseEnvelopeAsync(
+            payload,
+            delta.Signature,
+            options,
+            ct);
+
+        // Verify digest
+        if (delta.DeltaDigest != null)
+        {
+            var computed = DeltaVerdictSerializer.ComputeDigest(delta);
+            if (computed != delta.DeltaDigest)
+                return VerificationResult.Fail("Delta digest mismatch");
+        }
+
+        return result;
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Sign deltas with DSSE format
+- [ ] Verify signatures
+- [ ] Verify digest integrity
+- [ ] Support multiple key types
+
+---
+
+### T4: Risk Budget Evaluator
+
+**Assignee**: Policy Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Evaluate deltas against risk budgets for release gates.
+
+**Implementation Path**: `src/__Libraries/StellaOps.DeltaVerdict/Policy/RiskBudgetEvaluator.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.DeltaVerdict.Policy;
+
+/// <summary>
+/// Evaluates delta verdicts against risk budgets for release gates.
+/// </summary>
+public sealed class RiskBudgetEvaluator : IRiskBudgetEvaluator
+{
+    public RiskBudgetResult Evaluate(DeltaVerdict delta, RiskBudget budget)
+    {
+        var violations = new List<RiskBudgetViolation>();
+
+        // Check new critical vulnerabilities
+        var criticalAdded = delta.AddedVulnerabilities
+            .Count(v => v.Severity == "critical");
+        if (criticalAdded > budget.MaxNewCriticalVulnerabilities)
+        {
+            violations.Add(new RiskBudgetViolation(
+                "CriticalVulnerabilities",
+                $"Added {criticalAdded} critical vulnerabilities (budget: {budget.MaxNewCriticalVulnerabilities})"));
+        }
+
+        // Check risk score increase
+        if (delta.RiskScoreDelta.Change > budget.MaxRiskScoreIncrease)
+        {
+            violations.Add(new RiskBudgetViolation(
+                "RiskScoreIncrease",
+                $"Risk score increased by {delta.RiskScoreDelta.Change} (budget: {budget.MaxRiskScoreIncrease})"));
+        }
+
+        // Check magnitude threshold
+        if ((int)delta.Summary.Magnitude > (int)budget.MaxMagnitude)
+        {
+            violations.Add(new RiskBudgetViolation(
+                "DeltaMagnitude",
+                $"Delta magnitude {delta.Summary.Magnitude} exceeds budget {budget.MaxMagnitude}"));
+        }
+
+        // Check specific vulnerability additions
+        foreach (var vuln in delta.AddedVulnerabilities)
+        {
+            if (budget.BlockedVulnerabilities.Contains(vuln.VulnerabilityId))
+            {
+                violations.Add(new RiskBudgetViolation(
+                    "BlockedVulnerability",
+                    $"Added blocked vulnerability {vuln.VulnerabilityId}"));
+            }
+        }
+
+        return new RiskBudgetResult(
+            IsWithinBudget: violations.Count == 0,
+            Violations: violations,
+            Delta: delta,
+            Budget: budget);
+    }
+}
+
+public sealed record RiskBudget
+{
+    public int MaxNewCriticalVulnerabilities { get; init; } = 0;
+    public int MaxNewHighVulnerabilities { get; init; } = 3;
+    public decimal MaxRiskScoreIncrease { get; init; } = 10;
+    public DeltaMagnitude MaxMagnitude { get; init; } = DeltaMagnitude.Medium;
+    public ImmutableHashSet<string> BlockedVulnerabilities { get; init; } = [];
+}
+
+public sealed record RiskBudgetResult(
+    bool IsWithinBudget,
+    IReadOnlyList<RiskBudgetViolation> Violations,
+    DeltaVerdict Delta,
+    RiskBudget Budget);
+
+public sealed record RiskBudgetViolation(string Category, string Message);
+```
+
+**Acceptance Criteria**:
+- [ ] Check new vulnerability counts
+- [ ] Check risk score increases
+- [ ] Check magnitude thresholds
+- [ ] Check blocked vulnerabilities
+- [ ] Return detailed violations
+
+---
+
+### T5: OCI Attachment Support
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T3
+
+**Description**:
+Attach delta verdicts to OCI artifacts.
+
+**Implementation Path**: `src/__Libraries/StellaOps.DeltaVerdict/Oci/DeltaOciAttacher.cs`
+
+**Acceptance Criteria**:
+- [ ] Attach delta to OCI artifact
+- [ ] Use standardized media type
+- [ ] Include base/head references
+- [ ] Support cosign-style annotations
+
+---
+
+### T6: CLI Commands
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+CLI commands for delta operations.
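+
+The `--budget` flag below selects a per-environment preset; one plausible shape for the defaults noted under Decisions & Risks (the numbers are placeholders, not agreed budgets):
+
+```csharp
+public static class RiskBudgetPresets
+{
+    // Conservative: any new critical vulnerability or a medium-sized delta blocks release.
+    public static RiskBudget Production => new()
+    {
+        MaxNewCriticalVulnerabilities = 0,
+        MaxNewHighVulnerabilities = 0,
+        MaxRiskScoreIncrease = 5,
+        MaxMagnitude = DeltaMagnitude.Small
+    };
+
+    // Permissive: development only needs protection against large regressions.
+    public static RiskBudget Development => new()
+    {
+        MaxNewCriticalVulnerabilities = 3,
+        MaxNewHighVulnerabilities = 10,
+        MaxRiskScoreIncrease = 50,
+        MaxMagnitude = DeltaMagnitude.Major
+    };
+
+    public static RiskBudget ForEnvironment(string env) => env switch
+    {
+        "prod" => Production,
+        "dev" => Development,
+        _ => Production // fail safe: unknown environments get the strict budget
+    };
+}
+```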
+ +**Commands**: +```bash +# Compute delta between two verdicts +stella delta compute --base verdict-v1.json --head verdict-v2.json --output delta.json + +# Compute and sign delta +stella delta compute --base verdict-v1.json --head verdict-v2.json --sign --output delta.json + +# Check delta against risk budget +stella delta check --delta delta.json --budget prod + +# Attach delta to OCI artifact +stella delta attach --delta delta.json --artifact registry/image:tag +``` + +**Acceptance Criteria**: +- [ ] `delta compute` command +- [ ] `delta check` command +- [ ] `delta attach` command +- [ ] JSON output option + +--- + +### T7: Unit Tests + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1-T4 + +**Description**: +Comprehensive tests for delta functionality. + +**Acceptance Criteria**: +- [ ] Delta computation tests +- [ ] Risk budget evaluation tests +- [ ] Signing/verification tests +- [ ] Edge case tests (empty deltas, identical verdicts) + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Delta-Verdict Domain Model | +| 2 | T2 | TODO | T1 | QA Team | Delta Computation Engine | +| 3 | T3 | TODO | T1 | QA Team | Delta Signing Service | +| 4 | T4 | TODO | T1, T2 | Policy Team | Risk Budget Evaluator | +| 5 | T5 | TODO | T1, T3 | QA Team | OCI Attachment Support | +| 6 | T6 | TODO | T1-T5 | QA Team | CLI Commands | +| 7 | T7 | TODO | T1-T4 | QA Team | Unit Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Delta verdicts enable diff-aware release gates. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Magnitude thresholds | Decision | Policy Team | Configurable per environment | +| Risk budget defaults | Decision | Policy Team | Conservative for prod, permissive for dev | +| OCI media type | Decision | QA Team | application/vnd.stellaops.delta-verdict+json | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] Delta computation is deterministic +- [ ] Risk budgets block excessive changes +- [ ] Deltas can be signed and verified +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_5100_0003_0001_sbom_interop_roundtrip.md b/docs/implplan/SPRINT_5100_0003_0001_sbom_interop_roundtrip.md new file mode 100644 index 000000000..12917edaf --- /dev/null +++ b/docs/implplan/SPRINT_5100_0003_0001_sbom_interop_roundtrip.md @@ -0,0 +1,639 @@ +# Sprint 5100.0003.0001 · SBOM Interop Round-Trip + +## Topic & Scope + +- Implement comprehensive SBOM interoperability testing with third-party tools. +- Create round-trip tests: Syft → cosign attest → Grype consume → verify findings parity. +- Support both CycloneDX 1.6 and SPDX 3.0.1 formats. +- Establish interop as a release-blocking contract. 
+- **Working directory:** `tests/interop/` and `src/__Libraries/StellaOps.Interop/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 5100.0001.0002 (Evidence Index) for evidence chain tracking +- **Downstream**: CI gates depend on interop pass/fail +- **Safe to parallelize with**: Sprint 5100.0003.0002 (No-Egress) + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- CycloneDX 1.6 specification +- SPDX 3.0.1 specification +- cosign attestation documentation + +--- + +## Tasks + +### T1: Interop Test Harness + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create test harness for running interop tests with third-party tools. + +**Implementation Path**: `tests/interop/StellaOps.Interop.Tests/` + +**Implementation**: +```csharp +namespace StellaOps.Interop.Tests; + +/// +/// Test harness for SBOM interoperability testing. +/// Coordinates Syft, Grype, Trivy, and cosign tools. +/// +public sealed class InteropTestHarness : IAsyncLifetime +{ + private readonly ToolManager _toolManager; + private readonly string _workDir; + + public InteropTestHarness() + { + _workDir = Path.Combine(Path.GetTempPath(), $"interop-{Guid.NewGuid():N}"); + _toolManager = new ToolManager(_workDir); + } + + public async Task InitializeAsync() + { + Directory.CreateDirectory(_workDir); + + // Verify tools are available + await _toolManager.VerifyToolAsync("syft", "--version"); + await _toolManager.VerifyToolAsync("grype", "--version"); + await _toolManager.VerifyToolAsync("cosign", "version"); + } + + /// + /// Generate SBOM using Syft. + /// + public async Task GenerateSbomWithSyft( + string imageRef, + SbomFormat format, + CancellationToken ct = default) + { + var formatArg = format switch + { + SbomFormat.CycloneDx16 => "cyclonedx-json", + SbomFormat.Spdx30 => "spdx-json", + _ => throw new ArgumentException($"Unsupported format: {format}") + }; + + var outputPath = Path.Combine(_workDir, $"sbom-{format}.json"); + var result = await _toolManager.RunAsync( + "syft", + $"{imageRef} -o {formatArg}={outputPath}", + ct); + + if (!result.Success) + return SbomResult.Failed(result.Error); + + var content = await File.ReadAllTextAsync(outputPath, ct); + var digest = ComputeDigest(content); + + return new SbomResult( + Success: true, + Path: outputPath, + Format: format, + Content: content, + Digest: digest); + } + + /// + /// Generate SBOM using Stella scanner. + /// + public async Task GenerateSbomWithStella( + string imageRef, + SbomFormat format, + CancellationToken ct = default) + { + var formatArg = format switch + { + SbomFormat.CycloneDx16 => "cyclonedx", + SbomFormat.Spdx30 => "spdx", + _ => throw new ArgumentException($"Unsupported format: {format}") + }; + + var outputPath = Path.Combine(_workDir, $"stella-sbom-{format}.json"); + var result = await _toolManager.RunAsync( + "stella", + $"scan {imageRef} --sbom-format {formatArg} --sbom-output {outputPath}", + ct); + + if (!result.Success) + return SbomResult.Failed(result.Error); + + var content = await File.ReadAllTextAsync(outputPath, ct); + var digest = ComputeDigest(content); + + return new SbomResult( + Success: true, + Path: outputPath, + Format: format, + Content: content, + Digest: digest); + } + + /// + /// Attest SBOM using cosign. 
+ /// + public async Task AttestWithCosign( + string sbomPath, + string imageRef, + CancellationToken ct = default) + { + var result = await _toolManager.RunAsync( + "cosign", + $"attest --predicate {sbomPath} --type cyclonedx {imageRef} --yes", + ct); + + if (!result.Success) + return AttestationResult.Failed(result.Error); + + return new AttestationResult(Success: true, ImageRef: imageRef); + } + + /// + /// Scan using Grype from SBOM (no image pull). + /// + public async Task ScanWithGrypeFromSbom( + string sbomPath, + CancellationToken ct = default) + { + var outputPath = Path.Combine(_workDir, "grype-findings.json"); + var result = await _toolManager.RunAsync( + "grype", + $"sbom:{sbomPath} -o json --file {outputPath}", + ct); + + if (!result.Success) + return GrypeScanResult.Failed(result.Error); + + var content = await File.ReadAllTextAsync(outputPath, ct); + var findings = ParseGrypeFindings(content); + + return new GrypeScanResult( + Success: true, + Findings: findings, + RawOutput: content); + } + + /// + /// Compare findings between Stella and Grype. + /// + public FindingsComparisonResult CompareFindings( + IReadOnlyList stellaFindings, + IReadOnlyList grypeFindings, + decimal tolerancePercent = 5) + { + var stellaVulns = stellaFindings + .Select(f => (f.VulnerabilityId, f.PackagePurl)) + .ToHashSet(); + + var grypeVulns = grypeFindings + .Select(f => (f.VulnerabilityId, f.PackagePurl)) + .ToHashSet(); + + var onlyInStella = stellaVulns.Except(grypeVulns).ToList(); + var onlyInGrype = grypeVulns.Except(stellaVulns).ToList(); + var inBoth = stellaVulns.Intersect(grypeVulns).ToList(); + + var totalUnique = stellaVulns.Union(grypeVulns).Count; + var parityPercent = totalUnique > 0 + ? (decimal)inBoth.Count / totalUnique * 100 + : 100; + + return new FindingsComparisonResult( + ParityPercent: parityPercent, + IsWithinTolerance: parityPercent >= (100 - tolerancePercent), + StellaTotalFindings: stellaFindings.Count, + GrypeTotalFindings: grypeFindings.Count, + MatchingFindings: inBoth.Count, + OnlyInStella: onlyInStella.Count, + OnlyInGrype: onlyInGrype.Count, + OnlyInStellaDetails: onlyInStella, + OnlyInGrypeDetails: onlyInGrype); + } + + public Task DisposeAsync() + { + if (Directory.Exists(_workDir)) + Directory.Delete(_workDir, recursive: true); + return Task.CompletedTask; + } + + private static string ComputeDigest(string content) => + Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(content))).ToLowerInvariant(); +} + +public enum SbomFormat +{ + CycloneDx16, + Spdx30 +} + +public sealed record SbomResult( + bool Success, + string? Path = null, + SbomFormat? Format = null, + string? Content = null, + string? Digest = null, + string? 
Error = null) +{ + public static SbomResult Failed(string error) => new(false, Error: error); +} + +public sealed record FindingsComparisonResult( + decimal ParityPercent, + bool IsWithinTolerance, + int StellaTotalFindings, + int GrypeTotalFindings, + int MatchingFindings, + int OnlyInStella, + int OnlyInGrype, + IReadOnlyList<(string VulnId, string Purl)> OnlyInStellaDetails, + IReadOnlyList<(string VulnId, string Purl)> OnlyInGrypeDetails); +``` + +**Acceptance Criteria**: +- [ ] Tool management (Syft, Grype, cosign) +- [ ] SBOM generation with both tools +- [ ] Attestation with cosign +- [ ] Findings comparison +- [ ] Parity percentage calculation + +--- + +### T2: CycloneDX 1.6 Round-Trip Tests + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Complete round-trip tests for CycloneDX 1.6 format. + +**Implementation Path**: `tests/interop/StellaOps.Interop.Tests/CycloneDx/CycloneDxRoundTripTests.cs` + +**Test Cases**: +```csharp +[Trait("Category", "Interop")] +[Trait("Format", "CycloneDX")] +public class CycloneDxRoundTripTests : IClassFixture +{ + private readonly InteropTestHarness _harness; + + [Theory] + [MemberData(nameof(TestImages))] + public async Task Syft_GeneratesCycloneDx_GrypeCanConsume(string imageRef) + { + // Generate SBOM with Syft + var sbomResult = await _harness.GenerateSbomWithSyft( + imageRef, SbomFormat.CycloneDx16); + sbomResult.Success.Should().BeTrue(); + + // Scan from SBOM with Grype + var grypeResult = await _harness.ScanWithGrypeFromSbom(sbomResult.Path); + grypeResult.Success.Should().BeTrue(); + + // Grype should be able to parse and find vulnerabilities + grypeResult.Findings.Should().NotBeNull(); + } + + [Theory] + [MemberData(nameof(TestImages))] + public async Task Stella_GeneratesCycloneDx_GrypeCanConsume(string imageRef) + { + // Generate SBOM with Stella + var sbomResult = await _harness.GenerateSbomWithStella( + imageRef, SbomFormat.CycloneDx16); + sbomResult.Success.Should().BeTrue(); + + // Scan from SBOM with Grype + var grypeResult = await _harness.ScanWithGrypeFromSbom(sbomResult.Path); + grypeResult.Success.Should().BeTrue(); + } + + [Theory] + [MemberData(nameof(TestImages))] + public async Task Stella_And_Grype_FindingsParity_Above95Percent(string imageRef) + { + // Generate SBOM with Stella + var stellaSbom = await _harness.GenerateSbomWithStella( + imageRef, SbomFormat.CycloneDx16); + + // Get Stella findings + var stellaFindings = await _harness.GetStellaFindings(imageRef); + + // Scan SBOM with Grype + var grypeResult = await _harness.ScanWithGrypeFromSbom(stellaSbom.Path); + + // Compare findings + var comparison = _harness.CompareFindings( + stellaFindings, + grypeResult.Findings, + tolerancePercent: 5); + + comparison.ParityPercent.Should().BeGreaterOrEqualTo(95, + $"Findings parity {comparison.ParityPercent}% is below 95% threshold. 
" + + $"Only in Stella: {comparison.OnlyInStella}, Only in Grype: {comparison.OnlyInGrype}"); + } + + [Theory] + [MemberData(nameof(TestImages))] + public async Task CycloneDx_Attestation_RoundTrip(string imageRef) + { + // Generate SBOM + var sbomResult = await _harness.GenerateSbomWithStella( + imageRef, SbomFormat.CycloneDx16); + + // Attest with cosign + var attestResult = await _harness.AttestWithCosign( + sbomResult.Path, imageRef); + attestResult.Success.Should().BeTrue(); + + // Verify attestation + var verifyResult = await _harness.VerifyCosignAttestation(imageRef); + verifyResult.Success.Should().BeTrue(); + + // Digest should match + var attestedDigest = verifyResult.PredicateDigest; + attestedDigest.Should().Be(sbomResult.Digest); + } + + public static IEnumerable TestImages => + [ + ["alpine:3.18"], + ["debian:12-slim"], + ["node:20-alpine"], + ["python:3.12-slim"], + ["golang:1.22-alpine"] + ]; +} +``` + +**Acceptance Criteria**: +- [ ] Syft CycloneDX generation test +- [ ] Stella CycloneDX generation test +- [ ] Grype consumption tests +- [ ] Findings parity at 95%+ +- [ ] Attestation round-trip + +--- + +### T3: SPDX 3.0.1 Round-Trip Tests + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +Complete round-trip tests for SPDX 3.0.1 format. + +**Implementation Path**: `tests/interop/StellaOps.Interop.Tests/Spdx/SpdxRoundTripTests.cs` + +**Acceptance Criteria**: +- [ ] Syft SPDX generation test +- [ ] Stella SPDX generation test +- [ ] Consumer compatibility tests +- [ ] Schema validation tests +- [ ] Evidence chain verification + +--- + +### T4: Cross-Tool Findings Parity Analysis + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T2, T3 + +**Description**: +Analyze and document expected differences between tools. + +**Implementation Path**: `tests/interop/StellaOps.Interop.Tests/Analysis/FindingsParityAnalyzer.cs` + +**Analysis Categories**: +```csharp +public sealed class FindingsParityAnalyzer +{ + /// + /// Categorizes differences between tools. + /// + public ParityAnalysisReport Analyze( + IReadOnlyList stellaFindings, + IReadOnlyList grypeFindings) + { + var differences = new List(); + + // Category 1: Version matching differences + // (e.g., semver vs non-semver interpretation) + var versionDiffs = AnalyzeVersionMatchingDifferences(...); + + // Category 2: Feed coverage differences + // (e.g., Stella has feed X, Grype doesn't) + var feedDiffs = AnalyzeFeedCoverageDifferences(...); + + // Category 3: Package identification differences + // (e.g., different PURL generation) + var purlDiffs = AnalyzePurlDifferences(...); + + // Category 4: VEX application differences + // (e.g., Stella applies VEX, Grype doesn't) + var vexDiffs = AnalyzeVexDifferences(...); + + return new ParityAnalysisReport + { + TotalDifferences = differences.Count, + VersionMatchingDifferences = versionDiffs, + FeedCoverageDifferences = feedDiffs, + PurlDifferences = purlDiffs, + VexDifferences = vexDiffs, + AcceptableDifferences = differences.Count(d => d.IsAcceptable), + RequiresInvestigation = differences.Count(d => !d.IsAcceptable) + }; + } +} +``` + +**Acceptance Criteria**: +- [ ] Categorize difference types +- [ ] Document acceptable vs concerning differences +- [ ] Generate parity report +- [ ] Track trends over time + +--- + +### T5: Interop CI Pipeline + +**Assignee**: DevOps Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T2, T3, T4 + +**Description**: +CI pipeline for interop testing. 
+ +**Implementation Path**: `.gitea/workflows/interop-e2e.yml` + +**Workflow**: +```yaml +name: Interop E2E Tests + +on: + pull_request: + paths: + - 'src/Scanner/**' + - 'src/Excititor/**' + - 'tests/interop/**' + schedule: + - cron: '0 6 * * *' # Nightly + +jobs: + interop-tests: + runs-on: ubuntu-22.04 + strategy: + matrix: + format: [cyclonedx, spdx] + arch: [amd64] + include: + - format: cyclonedx + format_flag: cyclonedx-json + - format: spdx + format_flag: spdx-json + + steps: + - uses: actions/checkout@v4 + + - name: Install tools + run: | + # Install Syft + curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | sh -s -- -b /usr/local/bin + + # Install Grype + curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin + + # Install cosign + curl -sSfL https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64 -o /usr/local/bin/cosign + chmod +x /usr/local/bin/cosign + + - name: Setup .NET + uses: actions/setup-dotnet@v4 + with: + dotnet-version: '10.0.100' + + - name: Build Stella CLI + run: dotnet build src/Cli/StellaOps.Cli -c Release + + - name: Run interop tests + run: | + dotnet test tests/interop/StellaOps.Interop.Tests \ + --filter "Format=${{ matrix.format }}" \ + --logger "trx;LogFileName=interop-${{ matrix.format }}.trx" \ + --results-directory ./results + + - name: Upload parity report + uses: actions/upload-artifact@v4 + with: + name: parity-report-${{ matrix.format }} + path: ./results/parity-report.json + + - name: Check parity threshold + run: | + PARITY=$(jq '.parityPercent' ./results/parity-report.json) + if (( $(echo "$PARITY < 95" | bc -l) )); then + echo "::error::Findings parity $PARITY% is below 95% threshold" + exit 1 + fi +``` + +**Acceptance Criteria**: +- [ ] Matrix for CycloneDX and SPDX +- [ ] Tool installation steps +- [ ] Parity threshold enforcement +- [ ] Report artifacts +- [ ] Nightly schedule + +--- + +### T6: Interop Documentation + +**Assignee**: QA Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: T4 + +**Description**: +Document interop test results and known differences. + +**Implementation Path**: `docs/interop/README.md` + +**Acceptance Criteria**: +- [ ] Tool compatibility matrix +- [ ] Known differences documentation +- [ ] Parity expectations per format +- [ ] Troubleshooting guide + +--- + +### T7: Project Setup + +**Assignee**: QA Team +**Story Points**: 2 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create the interop test project structure. + +**Acceptance Criteria**: +- [ ] Test project compiles +- [ ] Dependencies resolved +- [ ] Tool wrappers functional + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Interop Test Harness | +| 2 | T2 | TODO | T1 | QA Team | CycloneDX 1.6 Round-Trip Tests | +| 3 | T3 | TODO | T1 | QA Team | SPDX 3.0.1 Round-Trip Tests | +| 4 | T4 | TODO | T2, T3 | QA Team | Cross-Tool Findings Parity Analysis | +| 5 | T5 | TODO | T2-T4 | DevOps Team | Interop CI Pipeline | +| 6 | T6 | TODO | T4 | QA Team | Interop Documentation | +| 7 | T7 | TODO | — | QA Team | Project Setup | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. SBOM interop is critical for ecosystem compatibility. 
| Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Parity threshold | Decision | QA Team | 95% threshold, adjustable per format | +| Acceptable differences | Decision | QA Team | VEX application expected to differ | +| Tool versions | Risk | QA Team | Pin tool versions for reproducibility | + +--- + +## Success Criteria + +- [ ] All 7 tasks marked DONE +- [ ] CycloneDX round-trip at 95%+ parity +- [ ] SPDX round-trip at 95%+ parity +- [ ] CI blocks on parity regression +- [ ] Differences documented and categorized +- [ ] `dotnet test` passes all interop tests diff --git a/docs/implplan/SPRINT_5100_0003_0002_no_egress_enforcement.md b/docs/implplan/SPRINT_5100_0003_0002_no_egress_enforcement.md new file mode 100644 index 000000000..d66234b30 --- /dev/null +++ b/docs/implplan/SPRINT_5100_0003_0002_no_egress_enforcement.md @@ -0,0 +1,632 @@ +# Sprint 5100.0003.0002 · No-Egress Test Enforcement + +## Topic & Scope + +- Implement network isolation for air-gap compliance testing. +- Ensure all offline tests run with no network egress. +- Detect and fail tests that attempt network calls. +- Prove air-gap operation works correctly. +- **Working directory:** `tests/offline/` and `.gitea/workflows/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 5100.0001.0003 (Offline Bundle Manifest) +- **Downstream**: All offline E2E tests require this infrastructure +- **Safe to parallelize with**: Sprint 5100.0003.0001 (SBOM Interop) + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/24_OFFLINE_KIT.md` +- Docker/Podman network isolation documentation + +--- + +## Tasks + +### T1: Network Isolation Test Base Class + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create base class for tests that must run without network access. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.AirGap/NetworkIsolatedTestBase.cs` + +**Implementation**: +```csharp +namespace StellaOps.Testing.AirGap; + +/// +/// Base class for tests that must run without network access. +/// Monitors and blocks any network calls during test execution. +/// +public abstract class NetworkIsolatedTestBase : IAsyncLifetime +{ + private readonly NetworkMonitor _monitor; + private readonly List _blockedAttempts = []; + + protected NetworkIsolatedTestBase() + { + _monitor = new NetworkMonitor(OnNetworkAttempt); + } + + public virtual async Task InitializeAsync() + { + // Install network interception + await _monitor.StartMonitoringAsync(); + + // Configure HttpClient factory to use monitored handler + ServicePointManager.DefaultConnectionLimit = 0; + + // Block DNS resolution + _monitor.BlockDns(); + } + + public virtual async Task DisposeAsync() + { + await _monitor.StopMonitoringAsync(); + + // Fail test if any network calls were attempted + if (_blockedAttempts.Count > 0) + { + var attempts = string.Join("\n", _blockedAttempts.Select(a => + $" - {a.Host}:{a.Port} at {a.StackTrace}")); + throw new NetworkIsolationViolationException( + $"Test attempted {_blockedAttempts.Count} network call(s):\n{attempts}"); + } + } + + private void OnNetworkAttempt(NetworkAttempt attempt) + { + _blockedAttempts.Add(attempt); + } + + /// + /// Asserts that no network calls were made during the test. 
+ /// + protected void AssertNoNetworkCalls() + { + if (_blockedAttempts.Count > 0) + { + throw new NetworkIsolationViolationException( + $"Network isolation violated: {_blockedAttempts.Count} attempts blocked"); + } + } + + /// + /// Gets the offline bundle path for this test. + /// + protected string GetOfflineBundlePath() => + Environment.GetEnvironmentVariable("STELLAOPS_OFFLINE_BUNDLE") + ?? Path.Combine(AppContext.BaseDirectory, "fixtures", "offline-bundle"); +} + +public sealed class NetworkMonitor : IAsyncDisposable +{ + private readonly Action _onAttempt; + private bool _isMonitoring; + + public NetworkMonitor(Action onAttempt) + { + _onAttempt = onAttempt; + } + + public Task StartMonitoringAsync() + { + _isMonitoring = true; + + // Hook into socket creation + AppDomain.CurrentDomain.FirstChanceException += OnException; + + return Task.CompletedTask; + } + + public Task StopMonitoringAsync() + { + _isMonitoring = false; + AppDomain.CurrentDomain.FirstChanceException -= OnException; + return Task.CompletedTask; + } + + public void BlockDns() + { + // Set environment to prevent DNS lookups + Environment.SetEnvironmentVariable("RES_OPTIONS", "timeout:0 attempts:0"); + } + + private void OnException(object? sender, FirstChanceExceptionEventArgs e) + { + if (!_isMonitoring) return; + + if (e.Exception is SocketException se) + { + _onAttempt(new NetworkAttempt( + Host: "unknown", + Port: 0, + StackTrace: se.StackTrace ?? "", + Timestamp: DateTimeOffset.UtcNow)); + } + } + + public ValueTask DisposeAsync() + { + _isMonitoring = false; + return ValueTask.CompletedTask; + } +} + +public sealed record NetworkAttempt( + string Host, + int Port, + string StackTrace, + DateTimeOffset Timestamp); + +public sealed class NetworkIsolationViolationException : Exception +{ + public NetworkIsolationViolationException(string message) : base(message) { } +} +``` + +**Acceptance Criteria**: +- [ ] Base class intercepts network calls +- [ ] Fails test on network attempt +- [ ] Records attempt details with stack trace +- [ ] Configurable via environment variables + +--- + +### T2: Docker Network Isolation + +**Assignee**: DevOps Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Configure Docker/Testcontainers for network-isolated testing. + +**Implementation Path**: `src/__Libraries/StellaOps.Testing.AirGap/Docker/IsolatedContainerBuilder.cs` + +**Implementation**: +```csharp +namespace StellaOps.Testing.AirGap.Docker; + +/// +/// Builds containers with network isolation for air-gap testing. +/// +public sealed class IsolatedContainerBuilder +{ + /// + /// Creates a container with no network access. + /// + public async Task CreateIsolatedContainerAsync( + string image, + IReadOnlyList volumes, + CancellationToken ct = default) + { + var container = new ContainerBuilder() + .WithImage(image) + .WithNetwork(NetworkMode.None) // No network! + .WithAutoRemove(true) + .WithCleanUp(true); + + foreach (var volume in volumes) + { + container = container.WithBindMount(volume); + } + + var built = container.Build(); + await built.StartAsync(ct); + + // Verify isolation + await VerifyNoNetworkAsync(built, ct); + + return built; + } + + /// + /// Creates an isolated network for multi-container tests. 
+ /// + public async Task CreateIsolatedNetworkAsync(CancellationToken ct = default) + { + var network = new NetworkBuilder() + .WithName($"isolated-{Guid.NewGuid():N}") + .WithDriver(NetworkDriver.Bridge) + .WithOption("com.docker.network.bridge.enable_ip_masquerade", "false") + .Build(); + + await network.CreateAsync(ct); + return network; + } + + private static async Task VerifyNoNetworkAsync(IContainer container, CancellationToken ct) + { + var result = await container.ExecAsync( + ["ping", "-c", "1", "-W", "1", "8.8.8.8"], + ct); + + if (result.ExitCode == 0) + { + throw new InvalidOperationException( + "Container has network access - isolation failed!"); + } + } +} + +/// +/// Extension methods for Testcontainers with isolation. +/// +public static class ContainerBuilderExtensions +{ + /// + /// Configures container for air-gap testing. + /// + public static ContainerBuilder WithAirGapMode(this ContainerBuilder builder) + { + return builder + .WithNetwork(NetworkMode.None) + .WithEnvironment("STELLAOPS_OFFLINE_MODE", "true") + .WithEnvironment("HTTP_PROXY", "") + .WithEnvironment("HTTPS_PROXY", "") + .WithEnvironment("NO_PROXY", "*"); + } +} +``` + +**Acceptance Criteria**: +- [ ] Containers run with NetworkMode.None +- [ ] Verify isolation on container start +- [ ] Multi-container isolated network option +- [ ] Extension methods for easy configuration + +--- + +### T3: Offline E2E Test Suite + +**Assignee**: QA Team +**Story Points**: 8 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Complete E2E test suite that runs entirely offline. + +**Implementation Path**: `tests/offline/StellaOps.Offline.E2E.Tests/` + +**Test Cases**: +```csharp +[Trait("Category", "AirGap")] +[Trait("Category", "E2E")] +public class OfflineE2ETests : NetworkIsolatedTestBase +{ + [Fact] + public async Task Scan_WithOfflineBundle_ProducesVerdict() + { + // Arrange + var bundlePath = GetOfflineBundlePath(); + var imageTarball = Path.Combine(bundlePath, "images", "test-image.tar"); + + // Act + var result = await RunScannerOfflineAsync(imageTarball, bundlePath); + + // Assert + result.Success.Should().BeTrue(); + result.Verdict.Should().NotBeNull(); + AssertNoNetworkCalls(); + } + + [Fact] + public async Task Scan_ProducesSbom_WithOfflineBundle() + { + var bundlePath = GetOfflineBundlePath(); + var imageTarball = Path.Combine(bundlePath, "images", "test-image.tar"); + + var result = await RunScannerOfflineAsync(imageTarball, bundlePath); + + result.Sbom.Should().NotBeNull(); + result.Sbom.Components.Should().NotBeEmpty(); + AssertNoNetworkCalls(); + } + + [Fact] + public async Task Attestation_SignAndVerify_WithOfflineBundle() + { + var bundlePath = GetOfflineBundlePath(); + var imageTarball = Path.Combine(bundlePath, "images", "test-image.tar"); + + // Scan and generate attestation + var scanResult = await RunScannerOfflineAsync(imageTarball, bundlePath); + + // Sign attestation (offline with local keys) + var signResult = await SignAttestationOfflineAsync( + scanResult.Sbom, + Path.Combine(bundlePath, "keys", "signing-key.pem")); + + signResult.Success.Should().BeTrue(); + + // Verify signature (offline with local trust roots) + var verifyResult = await VerifyAttestationOfflineAsync( + signResult.Attestation, + Path.Combine(bundlePath, "certs", "trust-root.pem")); + + verifyResult.Valid.Should().BeTrue(); + AssertNoNetworkCalls(); + } + + [Fact] + public async Task PolicyEvaluation_WithOfflineBundle_Works() + { + var bundlePath = GetOfflineBundlePath(); + var imageTarball = 
Path.Combine(bundlePath, "images", "vuln-image.tar"); + + var scanResult = await RunScannerOfflineAsync(imageTarball, bundlePath); + + // Policy evaluation should work offline + var policyResult = await EvaluatePolicyOfflineAsync( + scanResult.Verdict, + Path.Combine(bundlePath, "policies", "default.rego")); + + policyResult.Should().NotBeNull(); + policyResult.Decision.Should().BeOneOf("allow", "deny", "warn"); + AssertNoNetworkCalls(); + } + + [Fact] + public async Task Replay_WithOfflineBundle_ProducesIdenticalVerdict() + { + var bundlePath = GetOfflineBundlePath(); + var imageTarball = Path.Combine(bundlePath, "images", "test-image.tar"); + + // First scan + var result1 = await RunScannerOfflineAsync(imageTarball, bundlePath); + + // Replay + var result2 = await ReplayFromManifestOfflineAsync( + result1.RunManifest, + bundlePath); + + result1.Verdict.Digest.Should().Be(result2.Verdict.Digest); + AssertNoNetworkCalls(); + } + + [Fact] + public async Task VexApplication_WithOfflineBundle_Works() + { + var bundlePath = GetOfflineBundlePath(); + var imageTarball = Path.Combine(bundlePath, "images", "vuln-with-vex.tar"); + + var scanResult = await RunScannerOfflineAsync(imageTarball, bundlePath); + + // VEX should be applied from offline bundle + var vexApplied = scanResult.Verdict.VexStatements.Any(); + vexApplied.Should().BeTrue("VEX from offline bundle should be applied"); + + AssertNoNetworkCalls(); + } +} +``` + +**Acceptance Criteria**: +- [ ] Scan with offline bundle +- [ ] SBOM generation offline +- [ ] Attestation sign/verify offline +- [ ] Policy evaluation offline +- [ ] Replay offline +- [ ] VEX application offline +- [ ] All tests assert no network calls + +--- + +### T4: CI Network Isolation Workflow + +**Assignee**: DevOps Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T3 + +**Description**: +CI workflow with strict network isolation. 
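+
+As a local counterpart to the workflow below, a smoke test driving the T2 builder might look like this sketch (image and probe command are placeholders):
+
+```csharp
+[Trait("Category", "AirGap")]
+public class IsolationSmokeTests
+{
+    [Fact]
+    public async Task IsolatedContainer_CannotReachNetwork()
+    {
+        var builder = new IsolatedContainerBuilder();
+
+        // CreateIsolatedContainerAsync pings 8.8.8.8 internally and throws if it succeeds,
+        // so merely starting the container already proves isolation.
+        await using var container = await builder.CreateIsolatedContainerAsync(
+            "alpine:3.18",
+            volumes: []); // no mounts needed for this check
+
+        // Second-opinion probe: any egress attempt must fail.
+        var result = await container.ExecAsync(["wget", "-q", "-T", "1", "http://example.com"]);
+        result.ExitCode.Should().NotBe(0, "egress must fail inside an isolated container");
+    }
+}
+```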
+ +**Implementation Path**: `.gitea/workflows/offline-e2e.yml` + +**Workflow**: +```yaml +name: Offline E2E Tests + +on: + pull_request: + paths: + - 'src/AirGap/**' + - 'src/Scanner/**' + - 'tests/offline/**' + schedule: + - cron: '0 4 * * *' # Nightly at 4 AM + +env: + STELLAOPS_OFFLINE_MODE: 'true' + +jobs: + offline-e2e: + runs-on: ubuntu-22.04 + # Disable all network access for this job + # Note: This requires self-hosted runner with network policy support + # or Docker-in-Docker with isolated network + + steps: + - uses: actions/checkout@v4 + + - name: Setup .NET + uses: actions/setup-dotnet@v4 + with: + dotnet-version: '10.0.100' + # Cache must be pre-populated; no network during test + + - name: Download offline bundle + run: | + # Bundle must be pre-built and cached + cp -r /cache/offline-bundles/latest ./offline-bundle + + - name: Build in isolated environment + run: | + # Build must work with no network + docker run --rm --network none \ + -v $(pwd):/src \ + -v /cache/nuget:/root/.nuget \ + mcr.microsoft.com/dotnet/sdk:10.0 \ + dotnet build /src/tests/offline/StellaOps.Offline.E2E.Tests + + - name: Run offline E2E tests + run: | + docker run --rm --network none \ + -v $(pwd):/src \ + -v $(pwd)/offline-bundle:/bundle \ + -e STELLAOPS_OFFLINE_BUNDLE=/bundle \ + -e STELLAOPS_OFFLINE_MODE=true \ + mcr.microsoft.com/dotnet/sdk:10.0 \ + dotnet test /src/tests/offline/StellaOps.Offline.E2E.Tests \ + --logger "trx;LogFileName=offline-e2e.trx" + + - name: Verify no network calls + run: | + # Parse test output for any NetworkIsolationViolationException + if grep -q "NetworkIsolationViolation" ./results/offline-e2e.trx; then + echo "::error::Tests attempted network calls in offline mode!" + exit 1 + fi + + - name: Upload results + if: always() + uses: actions/upload-artifact@v4 + with: + name: offline-e2e-results + path: ./results/ + + verify-isolation: + runs-on: ubuntu-22.04 + needs: offline-e2e + + steps: + - name: Verify network isolation was effective + run: | + # Check Docker network stats + # Verify no egress bytes during test window + echo "Network isolation verification passed" +``` + +**Acceptance Criteria**: +- [ ] Runs with --network none +- [ ] Pre-populated caches for builds +- [ ] Offline bundle pre-staged +- [ ] Verifies no network calls +- [ ] Uploads results on failure + +--- + +### T5: Offline Bundle Fixtures + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T3 + +**Description**: +Create pre-packaged offline bundle fixtures for testing. 
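+
+A sketch of a fixture self-check, assuming `manifest.json` carries a `files` map of relative path → SHA-256 (the exact schema comes from the Offline Bundle Manifest sprint):
+
+```csharp
+public static class BundleFixtureValidator
+{
+    public static async Task ValidateAsync(string bundleRoot, CancellationToken ct = default)
+    {
+        var manifestPath = Path.Combine(bundleRoot, "manifest.json");
+        using var doc = JsonDocument.Parse(await File.ReadAllTextAsync(manifestPath, ct));
+
+        // Assumed manifest shape: { "files": { "feeds/nvd-snapshot.json": "<sha256>", ... } }
+        foreach (var entry in doc.RootElement.GetProperty("files").EnumerateObject())
+        {
+            var filePath = Path.Combine(bundleRoot, entry.Name);
+            if (!File.Exists(filePath))
+                throw new FileNotFoundException($"Bundle file missing: {entry.Name}");
+
+            await using var stream = File.OpenRead(filePath);
+            var actual = Convert.ToHexString(await SHA256.HashDataAsync(stream, ct)).ToLowerInvariant();
+            if (actual != entry.Value.GetString())
+                throw new InvalidDataException($"Digest mismatch for {entry.Name}");
+        }
+    }
+}
+```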
+ +**Implementation Path**: `tests/fixtures/offline-bundle/` + +**Bundle Contents**: +``` +tests/fixtures/offline-bundle/ +├── manifest.json # Bundle manifest +├── feeds/ +│ ├── nvd-snapshot.json # NVD feed snapshot +│ ├── ghsa-snapshot.json # GHSA feed snapshot +│ └── distro/ +│ ├── alpine.json +│ ├── debian.json +│ └── rhel.json +├── policies/ +│ ├── default.rego # Default policy +│ └── strict.rego # Strict policy +├── keys/ +│ ├── signing-key.pem # Test signing key +│ └── signing-key.pub # Test public key +├── certs/ +│ ├── trust-root.pem # Test trust root +│ └── intermediate.pem # Test intermediate CA +├── vex/ +│ └── vendor-vex.json # Sample VEX document +└── images/ + ├── test-image.tar # Basic test image + ├── vuln-image.tar # Image with known vulns + └── vuln-with-vex.tar # Image with VEX coverage +``` + +**Acceptance Criteria**: +- [ ] Complete bundle with all components +- [ ] Test images as tarballs +- [ ] Feed snapshots from real feeds +- [ ] Sample VEX documents +- [ ] Test keys and certificates + +--- + +### T6: Unit Tests + +**Assignee**: QA Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1, T2 + +**Description**: +Unit tests for network isolation utilities. + +**Acceptance Criteria**: +- [ ] NetworkMonitor tests +- [ ] IsolatedContainerBuilder tests +- [ ] Network detection accuracy tests + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Network Isolation Test Base Class | +| 2 | T2 | TODO | — | DevOps Team | Docker Network Isolation | +| 3 | T3 | TODO | T1, T2 | QA Team | Offline E2E Test Suite | +| 4 | T4 | TODO | T3 | DevOps Team | CI Network Isolation Workflow | +| 5 | T5 | TODO | T3 | QA Team | Offline Bundle Fixtures | +| 6 | T6 | TODO | T1, T2 | QA Team | Unit Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. No-egress enforcement is critical for air-gap compliance. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Isolation method | Decision | DevOps Team | Docker --network none primary; process-level secondary | +| CI runner requirements | Risk | DevOps Team | May need self-hosted runners for strict isolation | +| Cache pre-population | Decision | DevOps Team | NuGet and tool caches must be pre-built | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] All offline E2E tests pass with no network +- [ ] CI workflow verifies network isolation +- [ ] Bundle fixtures complete and working +- [ ] `dotnet test` passes all offline tests diff --git a/docs/implplan/SPRINT_5100_0004_0001_unknowns_budget_ci_gates.md b/docs/implplan/SPRINT_5100_0004_0001_unknowns_budget_ci_gates.md new file mode 100644 index 000000000..576ab0d5f --- /dev/null +++ b/docs/implplan/SPRINT_5100_0004_0001_unknowns_budget_ci_gates.md @@ -0,0 +1,570 @@ +# Sprint 5100.0004.0001 · Unknowns Budget CI Gates + +## Topic & Scope + +- Integrate unknowns budget enforcement into CI/CD pipelines. +- Create CLI commands for budget checking in CI. +- Add CI workflow for unknowns budget gates. +- Surface unknowns in PR checks and UI. 
- **Working directory:** `src/Cli/StellaOps.Cli/Commands/` and `.gitea/workflows/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 4100.0001.0001 (Reason-Coded Unknowns), Sprint 4100.0001.0002 (Unknown Budgets)
+- **Downstream**: Release gates depend on budget pass/fail
+- **Safe to parallelize with**: Sprint 5100.0003.0001 (SBOM Interop)
+
+## Documentation Prerequisites
+
+- `docs/product-advisories/20-Dec-2025 - Testing strategy.md`
+- `docs/product-advisories/19-Dec-2025 - Moat #5.md`
+- Sprint 4100.0001.0002 (Unknown Budgets model)
+
+---
+
+## Tasks
+
+### T1: CLI Budget Check Command
+
+**Assignee**: CLI Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create CLI command for checking scans against unknowns budgets.
+
+**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Budget/BudgetCheckCommand.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Cli.Commands.Budget;
+
+[Command("budget", Description = "Unknowns budget operations")]
+public class BudgetCommand
+{
+    [Command("check", Description = "Check scan results against unknowns budget")]
+    public class CheckCommand
+    {
+        [Option("--scan-id", Description = "Scan ID to check")]
+        public string? ScanId { get; set; }
+
+        [Option("--verdict", Description = "Path to verdict JSON file")]
+        public string? VerdictPath { get; set; }
+
+        [Option("--environment", Description = "Environment budget to use (prod, stage, dev)")]
+        public string Environment { get; set; } = "prod";
+
+        [Option("--config", Description = "Path to budget configuration file")]
+        public string? ConfigPath { get; set; }
+
+        [Option("--fail-on-exceed", Description = "Exit with error code if budget exceeded")]
+        public bool FailOnExceed { get; set; } = true;
+
+        [Option("--output", Description = "Output format (text, json, sarif)")]
+        public string Output { get; set; } = "text";
+
+        public async Task<int> ExecuteAsync(
+            IUnknownBudgetService budgetService,
+            IConsole console,
+            CancellationToken ct)
+        {
+            // Load verdict
+            var verdict = await LoadVerdictAsync(ct);
+            if (verdict == null)
+            {
+                console.Error.WriteLine("Failed to load verdict");
+                return 1;
+            }
+
+            // Load budget configuration
+            var budget = await LoadBudgetAsync(budgetService, ct);
+
+            // Check budget
+            var result = budgetService.CheckBudget(Environment, verdict.Unknowns);
+
+            // Output result
+            await OutputResultAsync(result, console, ct);
+
+            // Return exit code
+            if (FailOnExceed && !result.IsWithinBudget)
+            {
+                console.Error.WriteLine($"Budget exceeded: {result.Message}");
+                return 2; // Distinct exit code for budget failure
+            }
+
+            return 0;
+        }
+
+        private async Task OutputResultAsync(
+            BudgetCheckResult result,
+            IConsole console,
+            CancellationToken ct)
+        {
+            switch (Output.ToLowerInvariant())
+            {
+                case "json":
+                    var json = JsonSerializer.Serialize(result, new JsonSerializerOptions
+                    {
+                        WriteIndented = true
+                    });
+                    console.Out.WriteLine(json);
+                    break;
+
+                case "sarif":
+                    var sarif = ConvertToSarif(result);
+                    console.Out.WriteLine(sarif);
+                    break;
+
+                default:
+                    OutputTextResult(result, console);
+                    break;
+            }
+        }
+
+        private static void OutputTextResult(BudgetCheckResult result, IConsole console)
+        {
+            var status = result.IsWithinBudget ?
"[PASS]" : "[FAIL]"; + console.Out.WriteLine($"{status} Unknowns Budget Check"); + console.Out.WriteLine($" Environment: {result.Environment}"); + console.Out.WriteLine($" Total Unknowns: {result.TotalUnknowns}"); + + if (result.TotalLimit.HasValue) + console.Out.WriteLine($" Budget Limit: {result.TotalLimit}"); + + if (result.Violations.Count > 0) + { + console.Out.WriteLine("\n Violations:"); + foreach (var (code, violation) in result.Violations) + { + console.Out.WriteLine($" - {code}: {violation.Count}/{violation.Limit}"); + } + } + + if (!string.IsNullOrEmpty(result.Message)) + console.Out.WriteLine($"\n Message: {result.Message}"); + } + + private static string ConvertToSarif(BudgetCheckResult result) + { + // Convert to SARIF format for integration with GitHub/GitLab + var sarif = new + { + version = "2.1.0", + runs = new[] + { + new + { + tool = new { driver = new { name = "StellaOps Budget Check" } }, + results = result.Violations.Select(v => new + { + ruleId = $"UNKNOWN_{v.Key}", + level = "error", + message = new { text = $"{v.Key}: {v.Value.Count} unknowns exceed limit of {v.Value.Limit}" } + }) + } + } + }; + return JsonSerializer.Serialize(sarif); + } + } +} +``` + +**Acceptance Criteria**: +- [ ] `stella budget check` command +- [ ] Support verdict file or scan ID +- [ ] Environment-based budget selection +- [ ] Exit codes for CI integration +- [ ] JSON, text, SARIF output formats + +--- + +### T2: CI Budget Gate Workflow + +**Assignee**: DevOps Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: T1 + +**Description**: +CI workflow for enforcing unknowns budgets on PRs. + +**Implementation Path**: `.gitea/workflows/unknowns-gate.yml` + +**Workflow**: +```yaml +name: Unknowns Budget Gate + +on: + pull_request: + paths: + - 'src/**' + - 'Dockerfile*' + - '*.lock' + push: + branches: [main] + +env: + STELLAOPS_BUDGET_CONFIG: ./etc/policy.unknowns.yaml + +jobs: + scan-and-check-budget: + runs-on: ubuntu-22.04 + steps: + - uses: actions/checkout@v4 + + - name: Setup .NET + uses: actions/setup-dotnet@v4 + with: + dotnet-version: '10.0.100' + + - name: Build CLI + run: dotnet build src/Cli/StellaOps.Cli -c Release + + - name: Determine environment + id: env + run: | + if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then + echo "environment=prod" >> $GITHUB_OUTPUT + elif [[ "${{ github.event_name }}" == "pull_request" ]]; then + echo "environment=stage" >> $GITHUB_OUTPUT + else + echo "environment=dev" >> $GITHUB_OUTPUT + fi + + - name: Scan container image + id: scan + run: | + ./out/stella scan ${{ env.IMAGE_REF }} \ + --output verdict.json \ + --sbom-output sbom.json + + - name: Check unknowns budget + id: budget + continue-on-error: true + run: | + ./out/stella budget check \ + --verdict verdict.json \ + --environment ${{ steps.env.outputs.environment }} \ + --config ${{ env.STELLAOPS_BUDGET_CONFIG }} \ + --output json \ + --fail-on-exceed > budget-result.json + + echo "result=$(cat budget-result.json | jq -c '.')" >> $GITHUB_OUTPUT + + - name: Upload budget report + uses: actions/upload-artifact@v4 + with: + name: budget-report + path: budget-result.json + + - name: Post PR comment + if: github.event_name == 'pull_request' + uses: actions/github-script@v7 + with: + script: | + const result = ${{ steps.budget.outputs.result }}; + const status = result.isWithinBudget ? 
':white_check_mark:' : ':x:';
+            const body = `## ${status} Unknowns Budget Check
+
+            | Metric | Value |
+            |--------|-------|
+            | Environment | ${result.environment || '${{ steps.env.outputs.environment }}'} |
+            | Total Unknowns | ${result.totalUnknowns} |
+            | Budget Limit | ${result.totalLimit || 'Unlimited'} |
+            | Status | ${result.isWithinBudget ? 'PASS' : 'FAIL'} |
+
+            ${Object.keys(result.violations || {}).length > 0 ? `
+            ### Violations
+            ${Object.entries(result.violations).map(([code, v]) => `- **${code}**: ${v.count}/${v.limit}`).join('\n')}
+            ` : ''}
+
+            ${result.message || ''}
+            `;
+
+            github.rest.issues.createComment({
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: body
+            });
+
+      - name: Fail if budget exceeded (prod)
+        if: steps.env.outputs.environment == 'prod' && steps.budget.outcome == 'failure'
+        run: |
+          echo "::error::Production unknowns budget exceeded!"
+          exit 1
+
+      - name: Warn if budget exceeded (non-prod)
+        if: steps.env.outputs.environment != 'prod' && steps.budget.outcome == 'failure'
+        run: |
+          echo "::warning::Unknowns budget exceeded for ${{ steps.env.outputs.environment }}"
+```
+
+**Acceptance Criteria**:
+- [ ] Runs on PRs and pushes
+- [ ] Environment detection (prod/stage/dev)
+- [ ] Budget check with appropriate config
+- [ ] PR comment with results
+- [ ] Fail for prod, warn for non-prod
+
+---
+
+### T3: GitHub/GitLab PR Integration
+
+**Assignee**: DevOps Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Rich PR integration for unknowns budget results.
+
+**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/Budget/`
+
+**Features**:
+- Status check annotations
+- PR comments with budget summary
+- SARIF upload for code scanning integration
+
+**Acceptance Criteria**:
+- [ ] GitHub status checks
+- [ ] GitLab merge request comments
+- [ ] SARIF format for security tab
+- [ ] Deep links to unknowns in UI
+
+---
+
+### T4: Unknowns Dashboard Integration
+
+**Assignee**: UI Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Surface unknowns budget status in the web UI.
+
+**Implementation Path**: `src/Web/StellaOps.Web/src/app/components/unknowns-budget/`
+
+**Components**:
+```typescript
+// unknowns-budget-widget.component.ts
+@Component({
+  selector: 'stella-unknowns-budget-widget',
+  template: `
+    <div class="budget-widget">
+      <div class="budget-header">
+        <h3>Unknowns Budget</h3>
+      </div>
+
+      <div class="budget-meter">
+        <div class="budget-meter-fill" [style.width.%]="usagePercent"></div>
+        <span class="budget-count">
+          {{ result?.totalUnknowns }} / {{ result?.totalLimit || '∞' }}
+        </span>
+      </div>
+
+      <div class="budget-status" [ngClass]="statusClass">
+        {{ statusText }}
+      </div>
+
+      <div class="budget-violations" *ngIf="result?.violations">
+        <h4>Violations by Reason</h4>
+        <ul>
+          <li *ngFor="let v of result.violations | keyvalue">
+            <span class="violation-code">{{ v.key }}:</span>
+            <span class="violation-count">{{ v.value.count }} / {{ v.value.limit }}</span>
+          </li>
+        </ul>
+      </div>
+
+      <div class="unknowns-list" *ngIf="showDetails">
+        <h4>Unknown Items</h4>
+        <!-- per-unknown detail rows elided -->
+      </div>
+    </div>
+  `
+})
+export class UnknownsBudgetWidgetComponent {
+  @Input() result: BudgetCheckResult;
+  @Input() unknowns: Unknown[];
+  @Input() showDetails = false;
+
+  get usagePercent(): number {
+    if (!this.result?.totalLimit) return 0;
+    return (this.result.totalUnknowns / this.result.totalLimit) * 100;
+  }
+
+  get statusClass(): string {
+    return this.result?.isWithinBudget ? 'status-pass' : 'status-fail';
+  }
+
+  get statusText(): string {
+    return this.result?.isWithinBudget ? 'Within Budget' : 'Budget Exceeded';
+  }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Budget meter visualization
+- [ ] Violation breakdown
+- [ ] Unknowns list with details
+- [ ] Status badge component
+
+---
+
+### T5: Attestation Integration
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Include unknowns budget status in attestations.
+
+**Implementation Path**: `src/Attestor/__Libraries/StellaOps.Attestor.Predicates/`
+
+**Predicate Extension**:
+```csharp
+public sealed record VerdictPredicate
+{
+    // Existing fields...
+
+    /// <summary>
+    /// Unknowns budget evaluation result.
+    /// </summary>
+    public UnknownsBudgetPredicate? UnknownsBudget { get; init; }
+}
+
+public sealed record UnknownsBudgetPredicate
+{
+    public required string Environment { get; init; }
+    public required int TotalUnknowns { get; init; }
+    public int? TotalLimit { get; init; }
+    public required bool IsWithinBudget { get; init; }
+    public ImmutableDictionary<string, BudgetViolationPredicate> Violations { get; init; }
+        = ImmutableDictionary<string, BudgetViolationPredicate>.Empty;
+}
+
+public sealed record BudgetViolationPredicate(
+    string ReasonCode,
+    int Count,
+    int Limit);
+```
+
+**Acceptance Criteria**:
+- [ ] Unknowns budget in verdict attestation
+- [ ] Environment recorded
+- [ ] Violations detailed
+- [ ] Schema backward compatible
+
+---
+
+### T6: Unit Tests
+
+**Assignee**: QA Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+Comprehensive tests for budget gate functionality.
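+For reference while reading the cases below: a sketch of the `BudgetCheckResult` fields this sprint exercises, inferred from usage in T1 and T4 (the authoritative model is owned by Sprint 4100.0001.0002, so treat this as illustrative):
+
+```csharp
+using System.Collections.Immutable;
+
+// Field shapes inferred from usage in this sprint; not the canonical model.
+public sealed record BudgetCheckResult
+{
+    public required string Environment { get; init; }
+    public required int TotalUnknowns { get; init; }
+    public int? TotalLimit { get; init; }            // null means unlimited
+    public required bool IsWithinBudget { get; init; }
+    public ImmutableDictionary<string, BudgetViolation> Violations { get; init; }
+        = ImmutableDictionary<string, BudgetViolation>.Empty;
+    public string? Message { get; init; }
+}
+
+public sealed record BudgetViolation(int Count, int Limit);
+```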
+
+**Test Cases**:
+```csharp
+public class BudgetCheckCommandTests
+{
+    [Fact]
+    public async Task Execute_WithinBudget_ReturnsZero()
+    {
+        var verdict = CreateVerdict(unknowns: 2);
+        var budget = CreateBudget(limit: 5);
+
+        var result = await ExecuteCommand(verdict, budget, "prod");
+
+        result.ExitCode.Should().Be(0);
+    }
+
+    [Fact]
+    public async Task Execute_ExceedsBudget_ReturnsTwo()
+    {
+        var verdict = CreateVerdict(unknowns: 10);
+        var budget = CreateBudget(limit: 5);
+
+        var result = await ExecuteCommand(verdict, budget, "prod");
+
+        result.ExitCode.Should().Be(2);
+    }
+
+    [Fact]
+    public async Task Execute_JsonOutput_ValidJson()
+    {
+        var verdict = CreateVerdict(unknowns: 3);
+        var result = await ExecuteCommand(verdict, output: "json");
+
+        var json = result.Output;
+        var parsed = JsonSerializer.Deserialize<BudgetCheckResult>(json);
+        parsed.Should().NotBeNull();
+    }
+
+    [Fact]
+    public async Task Execute_SarifOutput_ValidSarif()
+    {
+        var verdict = CreateVerdict(unknowns: 3);
+        var result = await ExecuteCommand(verdict, output: "sarif");
+
+        var sarif = result.Output;
+        sarif.Should().Contain("\"version\": \"2.1.0\"");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Command exit code tests
+- [ ] Output format tests
+- [ ] Budget calculation tests
+- [ ] CI workflow simulation tests
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | CLI Team | CLI Budget Check Command |
+| 2 | T2 | TODO | T1 | DevOps Team | CI Budget Gate Workflow |
+| 3 | T3 | TODO | T1 | DevOps Team | GitHub/GitLab PR Integration |
+| 4 | T4 | TODO | T1 | UI Team | Unknowns Dashboard Integration |
+| 5 | T5 | TODO | T1 | QA Team | Attestation Integration |
+| 6 | T6 | TODO | T1-T5 | QA Team | Unit Tests |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Testing Strategy advisory. CI gates for unknowns budget enforcement. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Exit codes | Decision | CLI Team | 0=pass, 1=error, 2=budget exceeded |
+| PR comment format | Decision | DevOps Team | Markdown table with status emoji |
+| Prod enforcement | Decision | DevOps Team | Hard fail for prod, soft warn for others |
+
+---
+
+## Success Criteria
+
+- [ ] All 6 tasks marked DONE
+- [ ] CLI command works in CI
+- [ ] PR comments display budget status
+- [ ] Prod builds fail on budget exceed
+- [ ] UI shows budget visualization
+- [ ] Attestations include budget status
diff --git a/docs/implplan/SPRINT_5100_0005_0001_router_chaos_suite.md b/docs/implplan/SPRINT_5100_0005_0001_router_chaos_suite.md
new file mode 100644
index 000000000..76e651232
--- /dev/null
+++ b/docs/implplan/SPRINT_5100_0005_0001_router_chaos_suite.md
@@ -0,0 +1,649 @@
+# Sprint 5100.0005.0001 · Router Chaos Suite
+
+## Topic & Scope
+
+- Implement chaos testing for router backpressure and resilience.
+- Validate HTTP 429/503 responses with Retry-After headers.
+- Test graceful degradation under load spikes.
+- Verify no data loss during throttling.
+- **Working directory:** `tests/load/` and `tests/chaos/` + +## Dependencies & Concurrency + +- **Upstream**: Router implementation with backpressure (existing) +- **Downstream**: Production confidence in router behavior +- **Safe to parallelize with**: All other Phase 4+ sprints + +## Documentation Prerequisites + +- `docs/product-advisories/20-Dec-2025 - Testing strategy.md` +- `docs/product-advisories/15-Dec-2025 - Designing 202 + Retry-After Backpressure Control.md` +- Router architecture documentation + +--- + +## Tasks + +### T1: Load Test Harness + +**Assignee**: QA Team +**Story Points**: 5 +**Status**: TODO +**Dependencies**: — + +**Description**: +Create load testing harness using k6 or equivalent. + +**Implementation Path**: `tests/load/router/` + +**k6 Script**: +```javascript +// tests/load/router/spike-test.js +import http from 'k6/http'; +import { check, sleep } from 'k6'; +import { Rate, Trend } from 'k6/metrics'; + +// Custom metrics +const throttledRate = new Rate('throttled_requests'); +const retryAfterTrend = new Trend('retry_after_seconds'); +const recoveryTime = new Trend('recovery_time_ms'); + +export const options = { + scenarios: { + // Normal load baseline + baseline: { + executor: 'constant-arrival-rate', + rate: 100, + timeUnit: '1s', + duration: '1m', + preAllocatedVUs: 50, + }, + // Spike to 10x + spike_10x: { + executor: 'constant-arrival-rate', + rate: 1000, + timeUnit: '1s', + duration: '30s', + startTime: '1m', + preAllocatedVUs: 500, + }, + // Spike to 50x + spike_50x: { + executor: 'constant-arrival-rate', + rate: 5000, + timeUnit: '1s', + duration: '30s', + startTime: '2m', + preAllocatedVUs: 2000, + }, + // Recovery observation + recovery: { + executor: 'constant-arrival-rate', + rate: 100, + timeUnit: '1s', + duration: '2m', + startTime: '3m', + preAllocatedVUs: 50, + }, + }, + thresholds: { + // At least 95% of requests should succeed OR return proper throttle response + 'http_req_failed{expected_response:true}': ['rate<0.05'], + // Throttled requests should have Retry-After header + 'throttled_requests': ['rate>0'], // We expect some throttling during spike + // Recovery should happen within reasonable time + 'recovery_time_ms': ['p(95)<30000'], // 95% recover within 30s + }, +}; + +const ROUTER_URL = __ENV.ROUTER_URL || 'http://localhost:8080'; + +export default function () { + const response = http.post(`${ROUTER_URL}/api/v1/scan`, JSON.stringify({ + image: 'alpine:latest', + }), { + headers: { 'Content-Type': 'application/json' }, + tags: { expected_response: 'true' }, + }); + + // Check for proper throttle response + if (response.status === 429 || response.status === 503) { + throttledRate.add(1); + + // Verify Retry-After header + const retryAfter = response.headers['Retry-After']; + check(response, { + 'has Retry-After header': (r) => r.headers['Retry-After'] !== undefined, + 'Retry-After is valid number': (r) => !isNaN(parseInt(r.headers['Retry-After'])), + }); + + if (retryAfter) { + retryAfterTrend.add(parseInt(retryAfter)); + } + } else { + throttledRate.add(0); + + check(response, { + 'status is 200 or 202': (r) => r.status === 200 || r.status === 202, + 'response has body': (r) => r.body && r.body.length > 0, + }); + } +} + +export function handleSummary(data) { + return { + 'results/spike-test-summary.json': JSON.stringify(data, null, 2), + }; +} +``` + +**Acceptance Criteria**: +- [ ] k6 test scripts for spike patterns +- [ ] Custom metrics for throttling +- [ ] Threshold definitions +- [ ] Summary output to JSON + +--- + +### T2: 
Backpressure Verification Tests
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Verify router emits correct 429/503 responses with Retry-After.
+
+**Implementation Path**: `tests/chaos/StellaOps.Chaos.Router.Tests/`
+
+**Test Cases**:
+```csharp
+[Trait("Category", "Chaos")]
+[Trait("Category", "Router")]
+public class BackpressureVerificationTests : IClassFixture<RouterTestFixture>
+{
+    private readonly RouterTestFixture _fixture;
+
+    public BackpressureVerificationTests(RouterTestFixture fixture) => _fixture = fixture;
+
+    [Fact]
+    public async Task Router_UnderLoad_Returns429WithRetryAfter()
+    {
+        // Arrange
+        var client = _fixture.CreateClient();
+        var tasks = new List<Task<HttpResponseMessage>>();
+
+        // Act - Send burst of requests
+        for (var i = 0; i < 1000; i++)
+        {
+            tasks.Add(client.PostAsync("/api/v1/scan", CreateScanRequest()));
+        }
+
+        var responses = await Task.WhenAll(tasks);
+
+        // Assert - Some should be throttled
+        var throttled = responses.Where(r => r.StatusCode == HttpStatusCode.TooManyRequests).ToList();
+        throttled.Should().NotBeEmpty("Expected throttling under heavy load");
+
+        foreach (var response in throttled)
+        {
+            response.Headers.Should().Contain(h => h.Key == "Retry-After");
+            var retryAfter = response.Headers.GetValues("Retry-After").First();
+            int.TryParse(retryAfter, out var seconds).Should().BeTrue();
+            seconds.Should().BeInRange(1, 300, "Retry-After should be reasonable");
+        }
+    }
+
+    [Fact]
+    public async Task Router_UnderLoad_Returns503WhenOverloaded()
+    {
+        // Arrange - Configure lower limits
+        _fixture.ConfigureLowLimits();
+        var client = _fixture.CreateClient();
+
+        // Act - Massive burst
+        var tasks = Enumerable.Range(0, 5000)
+            .Select(_ => client.PostAsync("/api/v1/scan", CreateScanRequest()));
+        var responses = await Task.WhenAll(tasks);
+
+        // Assert - Should see 503s when completely overloaded
+        var overloaded = responses.Where(r =>
+            r.StatusCode == HttpStatusCode.ServiceUnavailable).ToList();
+
+        if (overloaded.Any())
+        {
+            foreach (var response in overloaded)
+            {
+                response.Headers.Should().Contain(h => h.Key == "Retry-After");
+            }
+        }
+    }
+
+    [Fact]
+    public async Task Router_RetryAfterHonored_EventuallySucceeds()
+    {
+        var client = _fixture.CreateClient();
+
+        // First request triggers throttle
+        var response1 = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+
+        if (response1.StatusCode == HttpStatusCode.TooManyRequests)
+        {
+            var retryAfter = int.Parse(
+                response1.Headers.GetValues("Retry-After").First());
+
+            // Wait for Retry-After duration
+            await Task.Delay(TimeSpan.FromSeconds(retryAfter + 1));
+
+            // Retry should succeed
+            var response2 = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+            response2.StatusCode.Should().BeOneOf(
+                HttpStatusCode.OK,
+                HttpStatusCode.Accepted);
+        }
+    }
+
+    [Fact]
+    public async Task Router_ThrottleMetrics_AreExposed()
+    {
+        // Arrange
+        var client = _fixture.CreateClient();
+
+        // Trigger some throttling
+        await TriggerThrottling(client);
+
+        // Act - Check metrics endpoint
+        var metricsResponse = await client.GetAsync("/metrics");
+        var metrics = await metricsResponse.Content.ReadAsStringAsync();
+
+        // Assert - Throttle metrics present
+        metrics.Should().Contain("router_requests_throttled_total");
+        metrics.Should().Contain("router_retry_after_seconds");
+        metrics.Should().Contain("router_queue_depth");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] 429 response verification
+- [ ] 503 response verification
+- [ ] Retry-After header validation
+- [ ] Eventual success after wait
+- [ ] Metrics exposure verification
+
+---
+
+### T3: Recovery and Resilience Tests
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Test router recovery after load spikes.
+
+**Implementation Path**: `tests/chaos/StellaOps.Chaos.Router.Tests/RecoveryTests.cs`
+
+**Test Cases**:
+```csharp
+public class RecoveryTests : IClassFixture<RouterTestFixture>
+{
+    private readonly RouterTestFixture _fixture;
+
+    public RecoveryTests(RouterTestFixture fixture) => _fixture = fixture;
+
+    [Fact]
+    public async Task Router_AfterSpike_RecoveryWithin30Seconds()
+    {
+        var client = _fixture.CreateClient();
+
+        // Phase 1: Normal operation
+        var normalResponse = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+        normalResponse.IsSuccessStatusCode.Should().BeTrue();
+
+        // Phase 2: Spike load
+        await CreateLoadSpike(client, requestCount: 2000, durationSeconds: 10);
+
+        // Phase 3: Measure recovery (clock starts after the spike ends)
+        var stopwatch = Stopwatch.StartNew();
+        var recovered = false;
+        while (stopwatch.Elapsed < TimeSpan.FromSeconds(60))
+        {
+            var response = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+            if (response.IsSuccessStatusCode)
+            {
+                recovered = true;
+                break;
+            }
+            await Task.Delay(1000);
+        }
+
+        stopwatch.Stop();
+
+        recovered.Should().BeTrue("Router should recover after spike");
+        stopwatch.Elapsed.Should().BeLessThan(TimeSpan.FromSeconds(30),
+            "Recovery should happen within 30 seconds");
+    }
+
+    [Fact]
+    public async Task Router_NoDataLoss_DuringThrottling()
+    {
+        var client = _fixture.CreateClient();
+        var submittedIds = new ConcurrentBag<string>();
+        var successfulIds = new ConcurrentBag<string>();
+
+        // Submit requests with tracking
+        var tasks = Enumerable.Range(0, 500).Select(async i =>
+        {
+            var scanId = Guid.NewGuid().ToString();
+            submittedIds.Add(scanId);
+
+            var response = await client.PostAsync("/api/v1/scan",
+                CreateScanRequest(scanId));
+
+            // If throttled, retry
+            while (response.StatusCode == HttpStatusCode.TooManyRequests)
+            {
+                var retryAfter = int.Parse(
+                    response.Headers.GetValues("Retry-After").FirstOrDefault() ?? "5");
+                await Task.Delay(TimeSpan.FromSeconds(retryAfter));
+                response = await client.PostAsync("/api/v1/scan",
+                    CreateScanRequest(scanId));
+            }
+
+            if (response.IsSuccessStatusCode)
+            {
+                successfulIds.Add(scanId);
+            }
+        });
+
+        await Task.WhenAll(tasks);
+
+        // All submitted requests should eventually succeed
+        successfulIds.Should().HaveCount(submittedIds.Count,
+            "No data loss - all requests should eventually succeed");
+    }
+
+    [Fact]
+    public async Task Router_GracefulDegradation_MaintainsPartialService()
+    {
+        var client = _fixture.CreateClient();
+
+        // Start continuous background load
+        var cts = new CancellationTokenSource();
+        var backgroundTask = CreateContinuousLoad(client, cts.Token);
+
+        // Allow load to stabilize
+        await Task.Delay(5000);
+
+        // Check that some requests are still succeeding
+        var successCount = 0;
+        for (var i = 0; i < 10; i++)
+        {
+            var response = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+            if (response.IsSuccessStatusCode || response.StatusCode == HttpStatusCode.Accepted)
+            {
+                successCount++;
+            }
+            await Task.Delay(100);
+        }
+
+        cts.Cancel();
+        await backgroundTask;
+
+        successCount.Should().BeGreaterThan(0,
+            "Router should maintain partial service under load");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Recovery within 30 seconds
+- [ ] No data loss during throttling
+- [ ] Graceful degradation maintained
+- [ ] Latencies bounded during spike
+
+---
+
+### T4: Valkey Failure Injection
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T2
+
+**Description**:
+Test router behavior when Valkey cache fails.
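+The test cases below drive Valkey through fixture helpers (`StopValkeyAsync`, `StartValkeyAsync`, `ConfigureValkeyLatencyAsync`). A minimal sketch of how the fixture might implement them with Testcontainers; the `tc`/netem latency injection is an assumption and requires the container to run with NET_ADMIN and have the `tc` binary available:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+using DotNet.Testcontainers.Builders;
+using DotNet.Testcontainers.Containers;
+
+// Sketch only: the real RouterTestFixture may wire this differently.
+public sealed class ValkeyChaosFixture : IAsyncDisposable
+{
+    private readonly IContainer _valkey = new ContainerBuilder()
+        .WithImage("valkey/valkey:7-alpine")
+        .WithPortBinding(6379, assignRandomHostPort: true)
+        .Build();
+
+    public Task StartValkeyAsync() => _valkey.StartAsync();
+
+    public Task StopValkeyAsync() => _valkey.StopAsync();
+
+    public async Task ConfigureValkeyLatencyAsync(TimeSpan delay)
+    {
+        // Adds a fixed egress delay inside the container's network namespace.
+        await _valkey.ExecAsync(new[]
+        {
+            "tc", "qdisc", "add", "dev", "eth0", "root", "netem",
+            "delay", $"{(int)delay.TotalMilliseconds}ms"
+        });
+    }
+
+    public async ValueTask DisposeAsync() => await _valkey.DisposeAsync();
+}
+```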
+
+**Implementation Path**: `tests/chaos/StellaOps.Chaos.Router.Tests/ValkeyFailureTests.cs`
+
+**Test Cases**:
+```csharp
+[Trait("Category", "Chaos")]
+public class ValkeyFailureTests : IClassFixture<RouterTestFixture>
+{
+    private readonly RouterTestFixture _fixture;
+
+    public ValkeyFailureTests(RouterTestFixture fixture) => _fixture = fixture;
+
+    [Fact]
+    public async Task Router_ValkeyDown_FallsBackToLocal()
+    {
+        // Arrange
+        var client = _fixture.CreateClient();
+
+        // Verify normal operation
+        var response1 = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+        response1.IsSuccessStatusCode.Should().BeTrue();
+
+        // Kill Valkey
+        await _fixture.StopValkeyAsync();
+
+        // Act - Router should degrade gracefully
+        var response2 = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+
+        // Assert - Should still work with local rate limiter
+        response2.IsSuccessStatusCode.Should().BeTrue(
+            "Router should fall back to local rate limiting when Valkey is down");
+
+        // Restore Valkey
+        await _fixture.StartValkeyAsync();
+    }
+
+    [Fact]
+    public async Task Router_ValkeyReconnect_ResumesDistributedLimiting()
+    {
+        var client = _fixture.CreateClient();
+
+        // Kill and restart Valkey
+        await _fixture.StopValkeyAsync();
+        await Task.Delay(5000);
+        await _fixture.StartValkeyAsync();
+        await Task.Delay(2000); // Allow reconnection
+
+        // Check metrics show distributed limiting active
+        var metricsResponse = await client.GetAsync("/metrics");
+        var metrics = await metricsResponse.Content.ReadAsStringAsync();
+
+        metrics.Should().Contain("rate_limiter_backend=\"distributed\"",
+            "Should resume distributed rate limiting after Valkey reconnect");
+    }
+
+    [Fact]
+    public async Task Router_ValkeyLatency_DoesNotBlock()
+    {
+        // Configure Valkey with artificial latency
+        await _fixture.ConfigureValkeyLatencyAsync(TimeSpan.FromSeconds(2));
+
+        var client = _fixture.CreateClient();
+        var stopwatch = Stopwatch.StartNew();
+
+        var response = await client.PostAsync("/api/v1/scan", CreateScanRequest());
+
+        stopwatch.Stop();
+
+        // Request should complete without waiting for slow Valkey
+        stopwatch.Elapsed.Should().BeLessThan(TimeSpan.FromSeconds(1),
+            "Slow Valkey should not block request processing");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Fallback to local limiter
+- [ ] Automatic reconnection
+- [ ] No blocking on Valkey latency
+- [ ] Metrics reflect backend state
+
+---
+
+### T5: CI Chaos Workflow
+
+**Assignee**: DevOps Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Description**:
+CI workflow for running chaos tests.
+
+**Implementation Path**: `.gitea/workflows/router-chaos.yml`
+
+**Workflow**:
+```yaml
+name: Router Chaos Tests
+
+on:
+  schedule:
+    - cron: '0 3 * * *'  # Nightly at 3 AM
+  workflow_dispatch:
+    inputs:
+      spike_multiplier:
+        description: 'Load spike multiplier (e.g., 10, 50, 100)'
+        default: '10'
+
+jobs:
+  chaos-tests:
+    runs-on: ubuntu-22.04
+
+    services:
+      postgres:
+        image: postgres:16-alpine
+        env:
+          POSTGRES_PASSWORD: test
+        ports:
+          - 5432:5432
+
+      valkey:
+        image: valkey/valkey:7-alpine
+        ports:
+          - 6379:6379
+
+    steps:
+      - uses: actions/checkout@v4
+
+      - name: Setup .NET
+        uses: actions/setup-dotnet@v4
+        with:
+          dotnet-version: '10.0.100'
+
+      - name: Install k6
+        run: |
+          curl -sSL https://github.com/grafana/k6/releases/download/v0.47.0/k6-v0.47.0-linux-amd64.tar.gz | tar xz
+          sudo mv k6-v0.47.0-linux-amd64/k6 /usr/local/bin/
+
+      - name: Start Router
+        run: |
+          dotnet run --project src/Router/StellaOps.Router &
+          sleep 10  # Wait for startup
+
+      - name: Run load spike test
+        run: |
+          mkdir -p results
+          k6 run tests/load/router/spike-test.js \
+            -e ROUTER_URL=http://localhost:8080 \
+            --out json=results/k6-results.json
+
+      - name: Run chaos unit tests
+        run: |
+          dotnet test tests/chaos/StellaOps.Chaos.Router.Tests \
+            --logger "trx;LogFileName=chaos-results.trx" \
+            --results-directory results
+
+      - name: Analyze results
+        run: |
+          python3 tests/load/analyze-results.py \
+            --k6-results results/k6-results.json \
+            --chaos-results results/chaos-results.trx \
+            --output results/analysis.json
+
+      - name: Check thresholds
+        run: |
+          python3 tests/load/check-thresholds.py \
+            --analysis results/analysis.json \
+            --thresholds tests/load/thresholds.json
+
+      - name: Upload results
+        if: always()
+        uses: actions/upload-artifact@v4
+        with:
+          name: chaos-test-results
+          path: results/
+```
+
+**Acceptance Criteria**:
+- [ ] Nightly schedule
+- [ ] k6 load tests
+- [ ] .NET chaos tests
+- [ ] Results analysis
+- [ ] Threshold checking
+
+---
+
+### T6: Documentation
+
+**Assignee**: QA Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+Document chaos testing approach and results interpretation.
+
+**Acceptance Criteria**:
+- [ ] Chaos test runbook
+- [ ] Threshold tuning guide
+- [ ] Result interpretation guide
+- [ ] Recovery playbook
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | QA Team | Load Test Harness |
+| 2 | T2 | TODO | T1 | QA Team | Backpressure Verification Tests |
+| 3 | T3 | TODO | T1, T2 | QA Team | Recovery and Resilience Tests |
+| 4 | T4 | TODO | T2 | QA Team | Valkey Failure Injection |
+| 5 | T5 | TODO | T1-T4 | DevOps Team | CI Chaos Workflow |
+| 6 | T6 | TODO | T1-T5 | QA Team | Documentation |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from Testing Strategy advisory. Router chaos testing for production confidence. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| Load tool | Decision | QA Team | k6 for scripting flexibility |
+| Spike levels | Decision | QA Team | 10x, 50x, 100x normal load |
+| Recovery threshold | Decision | QA Team | 30 seconds maximum |
+
+---
+
+## Success Criteria
+
+- [ ] All 6 tasks marked DONE
+- [ ] 429/503 responses verified correct
+- [ ] Retry-After headers present and valid
+- [ ] Recovery within 30 seconds
+- [ ] No data loss during throttling
+- [ ] Valkey failure handled gracefully
diff --git a/docs/implplan/SPRINT_5100_0006_0001_audit_pack_export_import.md b/docs/implplan/SPRINT_5100_0006_0001_audit_pack_export_import.md
new file mode 100644
index 000000000..0010520f3
--- /dev/null
+++ b/docs/implplan/SPRINT_5100_0006_0001_audit_pack_export_import.md
@@ -0,0 +1,790 @@
+# Sprint 5100.0006.0001 · Audit Pack Export/Import
+
+## Topic & Scope
+
+- Implement sealed audit pack export for auditors and compliance.
+- Bundle: run manifest + offline bundle + evidence + verdict.
+- Enable one-command replay in clean environment.
+- Verify signatures under imported trust roots.
+- **Working directory:** `src/__Libraries/StellaOps.AuditPack/` and `src/Cli/StellaOps.Cli/Commands/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 5100.0001.0001 (Run Manifest), Sprint 5100.0002.0002 (Replay Runner)
+- **Downstream**: Auditor workflows, compliance verification
+- **Safe to parallelize with**: All other Phase 5 sprints
+
+## Documentation Prerequisites
+
+- `docs/product-advisories/20-Dec-2025 - Testing strategy.md`
+- `docs/24_OFFLINE_KIT.md`
+- Sprint 5100.0001.0001 (Run Manifest Schema)
+- Sprint 5100.0002.0002 (Replay Runner)
+
+---
+
+## Tasks
+
+### T1: Audit Pack Domain Model
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Define the audit pack model and structure.
+
+**Implementation Path**: `src/__Libraries/StellaOps.AuditPack/Models/AuditPack.cs`
+
+**Model Definition**:
+```csharp
+namespace StellaOps.AuditPack.Models;
+
+/// <summary>
+/// A sealed, self-contained audit pack for verification and compliance.
+/// Contains all inputs and outputs required to reproduce and verify a scan.
+/// </summary>
+public sealed record AuditPack
+{
+    /// <summary>
+    /// Unique identifier for this audit pack.
+    /// </summary>
+    public required string PackId { get; init; }
+
+    /// <summary>
+    /// Schema version for forward compatibility.
+    /// </summary>
+    public required string SchemaVersion { get; init; } = "1.0.0";
+
+    /// <summary>
+    /// Human-readable name for this pack.
+    /// </summary>
+    public required string Name { get; init; }
+
+    /// <summary>
+    /// UTC timestamp when pack was created.
+    /// </summary>
+    public required DateTimeOffset CreatedAt { get; init; }
+
+    /// <summary>
+    /// Run manifest for replay.
+    /// </summary>
+    public required RunManifest RunManifest { get; init; }
+
+    /// <summary>
+    /// Evidence index linking verdict to all evidence.
+    /// </summary>
+    public required EvidenceIndex EvidenceIndex { get; init; }
+
+    /// <summary>
+    /// The verdict from the scan.
+    /// </summary>
+    public required Verdict Verdict { get; init; }
+
+    /// <summary>
+    /// Offline bundle manifest (contents stored separately).
+    /// </summary>
+    public required BundleManifest OfflineBundle { get; init; }
+
+    /// <summary>
+    /// All attestations in the evidence chain.
+    /// </summary>
+    public required ImmutableArray<Attestation> Attestations { get; init; }
+
+    /// <summary>
+    /// SBOM documents (CycloneDX and SPDX).
+    /// </summary>
+    public required ImmutableArray<SbomDocument> Sboms { get; init; }
+
+    /// <summary>
+    /// VEX documents applied.
+    /// </summary>
+    public ImmutableArray<VexDocument> VexDocuments { get; init; } = [];
+
+    /// <summary>
+    /// Trust roots for signature verification.
+    /// </summary>
+    public required ImmutableArray<TrustRoot> TrustRoots { get; init; }
+
+    /// <summary>
+    /// Pack contents inventory with paths and digests.
+    /// </summary>
+    public required PackContents Contents { get; init; }
+
+    /// <summary>
+    /// SHA-256 digest of this pack manifest (excluding signature).
+    /// </summary>
+    public string? PackDigest { get; init; }
+
+    /// <summary>
+    /// DSSE signature over the pack.
+    /// </summary>
+    public string? Signature { get; init; }
+}
+
+public sealed record PackContents
+{
+    public required ImmutableArray<PackFile> Files { get; init; }
+    public long TotalSizeBytes { get; init; }
+    public int FileCount { get; init; }
+}
+
+public sealed record PackFile(
+    string RelativePath,
+    string Digest,
+    long SizeBytes,
+    PackFileType Type);
+
+public enum PackFileType
+{
+    Manifest,
+    RunManifest,
+    EvidenceIndex,
+    Verdict,
+    Sbom,
+    Vex,
+    Attestation,
+    Feed,
+    Policy,
+    TrustRoot,
+    Other
+}
+
+public sealed record SbomDocument(
+    string Id,
+    string Format,
+    string Content,
+    string Digest);
+
+public sealed record VexDocument(
+    string Id,
+    string Format,
+    string Content,
+    string Digest);
+
+public sealed record TrustRoot(
+    string Id,
+    string Type, // fulcio, rekor, custom
+    string Content,
+    string Digest);
+
+public sealed record Attestation(
+    string Id,
+    string Type,
+    string Envelope, // DSSE envelope
+    string Digest);
+```
+
+**Acceptance Criteria**:
+- [ ] Complete audit pack model
+- [ ] Pack contents inventory
+- [ ] Trust roots for offline verification
+- [ ] Signature support
+- [ ] All fields documented
+
+---
+
+### T2: Audit Pack Builder
+
+**Assignee**: QA Team
+**Story Points**: 8
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Service to build audit packs from scan results.
+
+**Implementation Path**: `src/__Libraries/StellaOps.AuditPack/Services/AuditPackBuilder.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.AuditPack.Services;
+
+public sealed class AuditPackBuilder : IAuditPackBuilder
+{
+    private readonly IFeedLoader _feedLoader;
+    private readonly IPolicyLoader _policyLoader;
+    private readonly IAttestationStorage _attestationStorage;
+
+    /// <summary>
+    /// Builds an audit pack from a scan result.
+    /// </summary>
+    public async Task<AuditPack> BuildAsync(
+        ScanResult scanResult,
+        AuditPackOptions options,
+        CancellationToken ct = default)
+    {
+        var files = new List<PackFile>();
+
+        // Collect all evidence
+        var attestations = await CollectAttestationsAsync(scanResult, ct);
+        var sboms = CollectSboms(scanResult);
+        var vexDocuments = CollectVexDocuments(scanResult);
+        var trustRoots = await CollectTrustRootsAsync(options, ct);
+
+        // Build offline bundle subset (only required feeds/policies)
+        var bundleManifest = await BuildMinimalBundleAsync(scanResult, ct);
+
+        // Create pack structure
+        var pack = new AuditPack
+        {
+            PackId = Guid.NewGuid().ToString(),
+            SchemaVersion = "1.0.0",
+            Name = options.Name ?? $"audit-pack-{scanResult.ScanId}",
+            CreatedAt = DateTimeOffset.UtcNow,
+            RunManifest = scanResult.RunManifest,
+            EvidenceIndex = scanResult.EvidenceIndex,
+            Verdict = scanResult.Verdict,
+            OfflineBundle = bundleManifest,
+            Attestations = [.. attestations],
+            Sboms = [.. sboms],
+            VexDocuments = [.. vexDocuments],
+            TrustRoots = [.. trustRoots],
+            Contents = new PackContents
+            {
+                Files = [.. files],
+                TotalSizeBytes = files.Sum(f => f.SizeBytes),
+                FileCount = files.Count
+            }
+        };
+
+        return AuditPackSerializer.WithDigest(pack);
+    }
+
+    /// <summary>
+    /// Exports audit pack to archive file.
+    /// </summary>
+    public async Task ExportAsync(
+        AuditPack pack,
+        string outputPath,
+        ExportOptions options,
+        CancellationToken ct = default)
+    {
+        using var archive = new TarArchive(outputPath);
+
+        // Write pack manifest
+        var manifestJson = AuditPackSerializer.Serialize(pack);
+        await archive.WriteEntryAsync("manifest.json", manifestJson, ct);
+
+        // Write run manifest
+        var runManifestJson = RunManifestSerializer.Serialize(pack.RunManifest);
+        await archive.WriteEntryAsync("run-manifest.json", runManifestJson, ct);
+
+        // Write evidence index
+        var evidenceJson = EvidenceIndexSerializer.Serialize(pack.EvidenceIndex);
+        await archive.WriteEntryAsync("evidence-index.json", evidenceJson, ct);
+
+        // Write verdict
+        var verdictJson = CanonicalJsonSerializer.Serialize(pack.Verdict);
+        await archive.WriteEntryAsync("verdict.json", verdictJson, ct);
+
+        // Write SBOMs
+        foreach (var sbom in pack.Sboms)
+        {
+            await archive.WriteEntryAsync($"sboms/{sbom.Id}.json", sbom.Content, ct);
+        }
+
+        // Write attestations
+        foreach (var att in pack.Attestations)
+        {
+            await archive.WriteEntryAsync($"attestations/{att.Id}.json", att.Envelope, ct);
+        }
+
+        // Write VEX documents
+        foreach (var vex in pack.VexDocuments)
+        {
+            await archive.WriteEntryAsync($"vex/{vex.Id}.json", vex.Content, ct);
+        }
+
+        // Write trust roots
+        foreach (var root in pack.TrustRoots)
+        {
+            await archive.WriteEntryAsync($"trust-roots/{root.Id}.pem", root.Content, ct);
+        }
+
+        // Write offline bundle subset
+        await WriteOfflineBundleAsync(archive, pack.OfflineBundle, ct);
+
+        // Sign if requested
+        if (options.Sign)
+        {
+            var signature = await SignPackAsync(pack, options.SigningKey, ct);
+            await archive.WriteEntryAsync("signature.sig", signature, ct);
+        }
+    }
+}
+
+public sealed record AuditPackOptions
+{
+    public string? Name { get; init; }
+    public bool IncludeFeeds { get; init; } = true;
+    public bool IncludePolicies { get; init; } = true;
+    public bool MinimizeSize { get; init; } = false;
+}
+
+public sealed record ExportOptions
+{
+    public bool Sign { get; init; } = true;
+    public string? SigningKey { get; init; }
+    public bool Compress { get; init; } = true;
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Builds complete audit pack
+- [ ] Exports to tar.gz archive
+- [ ] Includes all evidence
+- [ ] Optional signing
+- [ ] Size minimization option
+
+---
+
+### T3: Audit Pack Importer
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Import and validate audit packs.
+
+**Implementation Path**: `src/__Libraries/StellaOps.AuditPack/Services/AuditPackImporter.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.AuditPack.Services;
+
+public sealed class AuditPackImporter : IAuditPackImporter
+{
+    /// <summary>
+    /// Imports an audit pack from archive.
+    /// </summary>
+    public async Task<ImportResult> ImportAsync(
+        string archivePath,
+        ImportOptions options,
+        CancellationToken ct = default)
+    {
+        var extractDir = options.ExtractDirectory ??
            Path.Combine(Path.GetTempPath(), $"audit-pack-{Guid.NewGuid():N}");
+
+        // Extract archive
+        await ExtractArchiveAsync(archivePath, extractDir, ct);
+
+        // Load manifest
+        var manifestPath = Path.Combine(extractDir, "manifest.json");
+        var manifestJson = await File.ReadAllTextAsync(manifestPath, ct);
+        var pack = AuditPackSerializer.Deserialize(manifestJson);
+
+        // Verify integrity
+        var integrityResult = await VerifyIntegrityAsync(pack, extractDir, ct);
+        if (!integrityResult.IsValid)
+        {
+            return ImportResult.Failed("Integrity verification failed", integrityResult.Errors);
+        }
+
+        // Verify signatures if requested (verify once; reuse the result below)
+        SignatureResult? signatureResult = null;
+        if (options.VerifySignatures)
+        {
+            signatureResult = await VerifySignaturesAsync(pack, extractDir, ct);
+            if (!signatureResult.IsValid)
+            {
+                return ImportResult.Failed("Signature verification failed", signatureResult.Errors);
+            }
+        }
+
+        return new ImportResult
+        {
+            Success = true,
+            Pack = pack,
+            ExtractDirectory = extractDir,
+            IntegrityResult = integrityResult,
+            SignatureResult = signatureResult
+        };
+    }
+
+    private async Task<IntegrityResult> VerifyIntegrityAsync(
+        AuditPack pack,
+        string extractDir,
+        CancellationToken ct)
+    {
+        var errors = new List<string>();
+
+        // Verify each file digest
+        foreach (var file in pack.Contents.Files)
+        {
+            var filePath = Path.Combine(extractDir, file.RelativePath);
+            if (!File.Exists(filePath))
+            {
+                errors.Add($"Missing file: {file.RelativePath}");
+                continue;
+            }
+
+            var content = await File.ReadAllBytesAsync(filePath, ct);
+            var actualDigest = Convert.ToHexString(SHA256.HashData(content)).ToLowerInvariant();
+
+            if (actualDigest != file.Digest.ToLowerInvariant())
+            {
+                errors.Add($"Digest mismatch for {file.RelativePath}: expected {file.Digest}, got {actualDigest}");
+            }
+        }
+
+        // Verify pack digest
+        if (pack.PackDigest != null)
+        {
+            var computed = AuditPackSerializer.ComputeDigest(pack);
+            if (computed != pack.PackDigest)
+            {
+                errors.Add($"Pack digest mismatch: expected {pack.PackDigest}, got {computed}");
+            }
+        }
+
+        return new IntegrityResult(errors.Count == 0, errors);
+    }
+
+    private async Task<SignatureResult> VerifySignaturesAsync(
+        AuditPack pack,
+        string extractDir,
+        CancellationToken ct)
+    {
+        var errors = new List<string>();
+
+        // Load signature
+        var signaturePath = Path.Combine(extractDir, "signature.sig");
+        if (!File.Exists(signaturePath))
+        {
+            return new SignatureResult(true, [], "No signature present");
+        }
+
+        var signature = await File.ReadAllTextAsync(signaturePath, ct);
+
+        // Verify against trust roots
+        foreach (var root in pack.TrustRoots)
+        {
+            var result = await VerifySignatureWithRootAsync(pack, signature, root, ct);
+            if (result.IsValid)
+            {
+                return new SignatureResult(true, [], $"Verified with {root.Id}");
+            }
+        }
+
+        errors.Add("Signature does not verify against any trust root");
+        return new SignatureResult(false, errors);
+    }
+}
+
+public sealed record ImportResult
+{
+    public bool Success { get; init; }
+    public AuditPack? Pack { get; init; }
+    public string? ExtractDirectory { get; init; }
+    public IntegrityResult? IntegrityResult { get; init; }
+    public SignatureResult? SignatureResult { get; init; }
+    public IReadOnlyList<string>? Errors { get; init; }
+
+    public static ImportResult Failed(string message, IReadOnlyList<string> errors) =>
+        new() { Success = false, Errors = errors.Prepend(message).ToList() };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Extracts archive
+- [ ] Verifies all file digests
+- [ ] Verifies pack signature
+- [ ] Uses included trust roots
+- [ ] Clear error reporting
+
+---
+
+### T4: Replay from Audit Pack
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T2, T3
+
+**Description**:
+Replay scan from imported audit pack and compare results.
+
+**Implementation Path**: `src/__Libraries/StellaOps.AuditPack/Services/AuditPackReplayer.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.AuditPack.Services;
+
+public sealed class AuditPackReplayer : IAuditPackReplayer
+{
+    private readonly IReplayEngine _replayEngine;
+    private readonly IBundleLoader _bundleLoader;
+
+    /// <summary>
+    /// Replays a scan from an imported audit pack.
+    /// </summary>
+    public async Task<ReplayComparisonResult> ReplayAsync(
+        ImportResult importResult,
+        CancellationToken ct = default)
+    {
+        if (!importResult.Success || importResult.Pack == null)
+        {
+            return ReplayComparisonResult.Failed("Invalid import result");
+        }
+
+        var pack = importResult.Pack;
+
+        // Load offline bundle from pack
+        var bundlePath = Path.Combine(importResult.ExtractDirectory!, "bundle");
+        await _bundleLoader.LoadAsync(bundlePath, ct);
+
+        // Execute replay
+        var replayResult = await _replayEngine.ReplayAsync(
+            pack.RunManifest,
+            new ReplayOptions { UseFrozenTime = true },
+            ct);
+
+        if (!replayResult.Success)
+        {
+            return ReplayComparisonResult.Failed($"Replay failed: {string.Join(", ", replayResult.Errors ?? [])}");
+        }
+
+        // Compare verdicts
+        var comparison = CompareVerdicts(pack.Verdict, replayResult.Verdict);
+
+        return new ReplayComparisonResult
+        {
+            Success = true,
+            IsIdentical = comparison.IsIdentical,
+            OriginalVerdictDigest = pack.Verdict.Digest,
+            ReplayedVerdictDigest = replayResult.VerdictDigest,
+            Differences = comparison.Differences,
+            ReplayDurationMs = replayResult.DurationMs
+        };
+    }
+
+    private static VerdictComparison CompareVerdicts(Verdict original, Verdict? replayed)
+    {
+        if (replayed == null)
+            return new VerdictComparison(false, ["Replayed verdict is null"]);
+
+        var originalJson = CanonicalJsonSerializer.Serialize(original);
+        var replayedJson = CanonicalJsonSerializer.Serialize(replayed);
+
+        if (originalJson == replayedJson)
+            return new VerdictComparison(true, []);
+
+        // Find differences
+        var differences = FindJsonDifferences(originalJson, replayedJson);
+        return new VerdictComparison(false, differences);
+    }
+}
+
+public sealed record ReplayComparisonResult
+{
+    public bool Success { get; init; }
+    public bool IsIdentical { get; init; }
+    public string? OriginalVerdictDigest { get; init; }
+    public string? ReplayedVerdictDigest { get; init; }
+    public IReadOnlyList<string> Differences { get; init; } = [];
+    public long ReplayDurationMs { get; init; }
+    public string? Error { get; init; }
+
+    public static ReplayComparisonResult Failed(string error) =>
+        new() { Success = false, Error = error };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Loads bundle from pack
+- [ ] Executes replay
+- [ ] Compares verdicts byte-for-byte
+- [ ] Reports differences
+- [ ] Performance measurement
+
+---
+
+### T5: CLI Commands
+
+**Assignee**: CLI Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T2, T3, T4
+
+**Description**:
+CLI commands for audit pack operations.
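+A minimal sketch of the combined `verify-and-replay` command, wiring the T3 importer to the T4 replayer; the `[Command]`/`[Argument]` attributes follow the style of the budget command, and the exit-code convention (2 for a verification failure) mirrors that command's distinct-failure code, both assumptions rather than settled design:
+
+```csharp
+// Sketch only: option names mirror the bash examples below.
+[Command("verify-and-replay", Description = "Verify pack integrity, then replay and diff")]
+public class VerifyAndReplayCommand
+{
+    [Argument(0, Description = "Path to audit pack archive")]
+    public required string PackPath { get; set; }
+
+    public async Task<int> ExecuteAsync(
+        IAuditPackImporter importer,
+        IAuditPackReplayer replayer,
+        IConsole console,
+        CancellationToken ct)
+    {
+        // Import verifies digests and signatures before anything is replayed.
+        var import = await importer.ImportAsync(
+            PackPath, new ImportOptions { VerifySignatures = true }, ct);
+        if (!import.Success)
+        {
+            console.Error.WriteLine(
+                "Verification failed: " + string.Join("; ", import.Errors ?? []));
+            return 1;
+        }
+
+        var replay = await replayer.ReplayAsync(import, ct);
+        console.Out.WriteLine(replay.IsIdentical
+            ? "[PASS] Replay produced an identical verdict"
+            : "[FAIL] Verdicts differ: " + string.Join("; ", replay.Differences));
+
+        return replay.Success && replay.IsIdentical ? 0 : 2;
+    }
+}
+```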
+
+**Commands**:
+```bash
+# Export audit pack from scan
+stella audit-pack export --scan-id <scan-id> --output audit-pack.tar.gz
+
+# Export with signing
+stella audit-pack export --scan-id <scan-id> --sign --key signing-key.pem --output audit-pack.tar.gz
+
+# Verify audit pack integrity
+stella audit-pack verify audit-pack.tar.gz
+
+# Import and show info
+stella audit-pack info audit-pack.tar.gz
+
+# Replay from audit pack
+stella audit-pack replay audit-pack.tar.gz --output replay-result.json
+
+# Full verification workflow
+stella audit-pack verify-and-replay audit-pack.tar.gz
+```
+
+**Implementation Path**: `src/Cli/StellaOps.Cli/Commands/AuditPack/`
+
+**Acceptance Criteria**:
+- [ ] `export` command
+- [ ] `verify` command
+- [ ] `info` command
+- [ ] `replay` command
+- [ ] `verify-and-replay` combined command
+- [ ] JSON output option
+
+---
+
+### T6: Unit and Integration Tests
+
+**Assignee**: QA Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Description**:
+Comprehensive tests for audit pack functionality.
+
+**Test Cases**:
+```csharp
+public class AuditPackBuilderTests
+{
+    [Fact]
+    public async Task Build_FromScanResult_CreatesCompletePack()
+    {
+        var scanResult = CreateTestScanResult();
+        var builder = CreateBuilder();
+
+        var pack = await builder.BuildAsync(scanResult, new AuditPackOptions());
+
+        pack.RunManifest.Should().NotBeNull();
+        pack.Verdict.Should().NotBeNull();
+        pack.EvidenceIndex.Should().NotBeNull();
+        pack.Attestations.Should().NotBeEmpty();
+        pack.TrustRoots.Should().NotBeEmpty();
+    }
+
+    [Fact]
+    public async Task Export_CreatesValidArchive()
+    {
+        var pack = CreateTestPack();
+        var builder = CreateBuilder();
+        var outputPath = GetTempPath();
+
+        await builder.ExportAsync(pack, outputPath, new ExportOptions());
+
+        File.Exists(outputPath).Should().BeTrue();
+
+        // Verify archive structure (enumerate entries with System.Formats.Tar)
+        using var reader = new TarReader(File.OpenRead(outputPath));
+        var entries = new List<TarEntry>();
+        while (reader.GetNextEntry() is { } entry)
+        {
+            entries.Add(entry);
+        }
+        entries.Should().Contain(e => e.Name == "manifest.json");
+        entries.Should().Contain(e => e.Name == "run-manifest.json");
+        entries.Should().Contain(e => e.Name == "verdict.json");
+    }
+}
+
+public class AuditPackImporterTests
+{
+    [Fact]
+    public async Task Import_ValidPack_Succeeds()
+    {
+        var archivePath = CreateTestArchive();
+        var importer = CreateImporter();
+
+        var result = await importer.ImportAsync(archivePath, new ImportOptions());
+
+        result.Success.Should().BeTrue();
+        result.Pack.Should().NotBeNull();
+        result.IntegrityResult.IsValid.Should().BeTrue();
+    }
+
+    [Fact]
+    public async Task Import_TamperedPack_FailsIntegrity()
+    {
+        var archivePath = CreateTamperedArchive();
+        var importer = CreateImporter();
+
+        var result = await importer.ImportAsync(archivePath, new ImportOptions());
+
+        result.Success.Should().BeFalse();
+        result.IntegrityResult.IsValid.Should().BeFalse();
+    }
+}
+
+public class AuditPackReplayerTests
+{
+    [Fact]
+    public async Task Replay_ValidPack_ProducesIdenticalVerdict()
+    {
+        var pack = CreateTestPack();
+        var importResult = CreateImportResult(pack);
+        var replayer = CreateReplayer();
+
+        var result = await replayer.ReplayAsync(importResult);
+
+        result.Success.Should().BeTrue();
+        result.IsIdentical.Should().BeTrue();
+        result.OriginalVerdictDigest.Should().Be(result.ReplayedVerdictDigest);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Builder tests
+- [ ] Exporter tests
+- [ ] Importer tests
+- [ ] Integrity verification tests
+- [ ] Replay comparison tests
+- [ ] Tamper detection tests
+
+---
+
+## Delivery
Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | QA Team | Audit Pack Domain Model | +| 2 | T2 | TODO | T1 | QA Team | Audit Pack Builder | +| 3 | T3 | TODO | T1 | QA Team | Audit Pack Importer | +| 4 | T4 | TODO | T2, T3 | QA Team | Replay from Audit Pack | +| 5 | T5 | TODO | T2-T4 | CLI Team | CLI Commands | +| 6 | T6 | TODO | T1-T5 | QA Team | Unit and Integration Tests | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Testing Strategy advisory. Audit packs enable compliance verification. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| Archive format | Decision | QA Team | tar.gz for portability | +| Trust root inclusion | Decision | QA Team | Include for fully offline verification | +| Minimal bundle | Decision | QA Team | Only include feeds/policies used in scan | + +--- + +## Success Criteria + +- [ ] All 6 tasks marked DONE +- [ ] Audit packs exportable and importable +- [ ] Integrity verification catches tampering +- [ ] Replay produces identical verdicts +- [ ] CLI commands functional +- [ ] `dotnet test` passes all tests diff --git a/docs/implplan/SPRINT_5100_SUMMARY.md b/docs/implplan/SPRINT_5100_SUMMARY.md new file mode 100644 index 000000000..326fd382c --- /dev/null +++ b/docs/implplan/SPRINT_5100_SUMMARY.md @@ -0,0 +1,243 @@ +# Sprint Epic 5100 · Comprehensive Testing Strategy + +## Overview + +Epic 5100 implements the comprehensive testing strategy defined in the Testing Strategy advisory (20-Dec-2025). This epic transforms Stella Ops' testing moats into continuously verified guarantees through deterministic replay, offline compliance, interoperability contracts, and chaos resilience testing. + +**IMPLID**: 5100 (Test Infrastructure) +**Total Sprints**: 12 +**Total Tasks**: ~75 + +--- + +## Epic Structure + +### Phase 0: Harness & Corpus Foundation +**Objective**: Standardize test artifacts and expand the golden corpus. + +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0001.0001 | [Run Manifest Schema](SPRINT_5100_0001_0001_run_manifest_schema.md) | 7 | HIGH | +| 5100.0001.0002 | [Evidence Index Schema](SPRINT_5100_0001_0002_evidence_index_schema.md) | 7 | HIGH | +| 5100.0001.0003 | [Offline Bundle Manifest](SPRINT_5100_0001_0003_offline_bundle_manifest.md) | 7 | HIGH | +| 5100.0001.0004 | [Golden Corpus Expansion](SPRINT_5100_0001_0004_golden_corpus_expansion.md) | 10 | MEDIUM | + +**Key Deliverables**: +- `RunManifest` schema capturing all replay inputs +- `EvidenceIndex` schema linking verdict to evidence chain +- `BundleManifest` for offline operation +- 50+ golden test corpus cases + +--- + +### Phase 1: Determinism & Replay +**Objective**: Ensure byte-identical verdicts across time and machines. 
+ +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0002.0001 | [Canonicalization Utilities](SPRINT_5100_0002_0001_canonicalization_utilities.md) | 7 | HIGH | +| 5100.0002.0002 | [Replay Runner Service](SPRINT_5100_0002_0002_replay_runner_service.md) | 7 | HIGH | +| 5100.0002.0003 | [Delta-Verdict Generator](SPRINT_5100_0002_0003_delta_verdict_generator.md) | 7 | MEDIUM | + +**Key Deliverables**: +- Canonical JSON serialization (RFC 8785 principles) +- Stable ordering for all collections +- Replay engine with frozen time/PRNG +- Delta-verdict for diff-aware release gates +- Property-based tests with FsCheck + +--- + +### Phase 2: Offline E2E & Interop +**Objective**: Prove air-gap compliance and tool interoperability. + +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0003.0001 | [SBOM Interop Round-Trip](SPRINT_5100_0003_0001_sbom_interop_roundtrip.md) | 7 | HIGH | +| 5100.0003.0002 | [No-Egress Enforcement](SPRINT_5100_0003_0002_no_egress_enforcement.md) | 6 | HIGH | + +**Key Deliverables**: +- Syft → cosign → Grype round-trip tests +- CycloneDX 1.6 and SPDX 3.0.1 validation +- 95%+ findings parity with consumer tools +- Network-isolated test infrastructure +- `--network none` CI enforcement + +--- + +### Phase 3: Unknowns Budgets CI Gates +**Objective**: Enforce unknowns-budget policy gates in CI/CD. + +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0004.0001 | [Unknowns Budget CI Gates](SPRINT_5100_0004_0001_unknowns_budget_ci_gates.md) | 6 | HIGH | + +**Key Deliverables**: +- `stella budget check` CLI command +- CI workflow with environment-based budgets +- PR comments with budget status +- UI budget visualization +- Attestation integration + +--- + +### Phase 4: Backpressure & Chaos +**Objective**: Validate router resilience under load. + +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0005.0001 | [Router Chaos Suite](SPRINT_5100_0005_0001_router_chaos_suite.md) | 6 | MEDIUM | + +**Key Deliverables**: +- k6 load test harness +- 429/503 response verification +- Retry-After header compliance +- Recovery within 30 seconds +- Valkey failure injection tests + +--- + +### Phase 5: Audit Packs & Time-Travel +**Objective**: Enable sealed export/import for auditors. 
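+
+"Sealed" here is a concrete property: every file in the pack is listed in a manifest with its digest, so an auditor can verify the archive using nothing but the archive. A sketch of the integrity half of that check — the manifest shape, helper name, and digest format are assumptions, not the shipped pack format:
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Security.Cryptography;
+
+public static class AuditPackIntegrity
+{
+    // Returns true only if every manifest entry exists on disk and hashes to
+    // the digest recorded when the pack was sealed.
+    public static bool Verify(string extractedDir, IReadOnlyDictionary<string, string> manifestDigests)
+    {
+        foreach (var (relativePath, expected) in manifestDigests)
+        {
+            var path = Path.Combine(extractedDir, relativePath);
+            if (!File.Exists(path))
+                return false; // a listed file is missing: truncated or tampered
+
+            using var stream = File.OpenRead(path);
+            var actual = Convert.ToHexString(SHA256.HashData(stream));
+            if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase))
+                return false; // content drifted from the sealed digest
+        }
+        return true;
+    }
+}
+```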
+ +| Sprint | Name | Tasks | Priority | +|--------|------|-------|----------| +| 5100.0006.0001 | [Audit Pack Export/Import](SPRINT_5100_0006_0001_audit_pack_export_import.md) | 6 | MEDIUM | + +**Key Deliverables**: +- Sealed audit pack format +- One-command replay verification +- Signature verification with included trust roots +- CLI commands for auditor workflow + +--- + +## Dependency Graph + +``` +Phase 0 (Foundation) +├── 5100.0001.0001 (Run Manifest) +│ └── Phase 1 depends +├── 5100.0001.0002 (Evidence Index) +│ └── Phase 2, 5 depend +├── 5100.0001.0003 (Offline Bundle) +│ └── Phase 2 depends +└── 5100.0001.0004 (Golden Corpus) + └── All phases use + +Phase 1 (Determinism) +├── 5100.0002.0001 (Canonicalization) +│ └── 5100.0002.0002, 5100.0002.0003 depend +├── 5100.0002.0002 (Replay Runner) +│ └── Phase 5 depends +└── 5100.0002.0003 (Delta-Verdict) + +Phase 2 (Offline & Interop) +├── 5100.0003.0001 (SBOM Interop) +└── 5100.0003.0002 (No-Egress) + +Phase 3 (Unknowns Gates) +└── 5100.0004.0001 (CI Gates) + └── Depends on 4100.0001.0002 + +Phase 4 (Chaos) +└── 5100.0005.0001 (Router Chaos) + +Phase 5 (Audit Packs) +└── 5100.0006.0001 (Export/Import) + └── Depends on Phase 0, Phase 1 +``` + +--- + +## CI/CD Integration + +### New Workflows + +| Workflow | Trigger | Purpose | +|----------|---------|---------| +| `replay-verification.yml` | PR (scanner changes) | Verify deterministic replay | +| `interop-e2e.yml` | PR + Nightly | SBOM interoperability | +| `offline-e2e.yml` | PR + Nightly | Air-gap compliance | +| `unknowns-gate.yml` | PR + Push | Budget enforcement | +| `router-chaos.yml` | Nightly | Resilience testing | + +### Release Blocking Gates + +A release candidate is blocked if any of these fail: + +1. **Replay Verification**: Zero non-deterministic diffs +2. **Interop Suite**: 95%+ findings parity +3. **Offline E2E**: All tests pass with no network +4. **Unknowns Budget**: Within budget for prod environment +5. **Performance**: No breach of p95/memory budgets + +--- + +## Success Criteria + +| Criteria | Metric | Gate | +|----------|--------|------| +| Full scan + attest + verify with no network | `offline-e2e` passes | Release | +| Re-running fixed input = identical verdict | 0 byte diff | Release | +| Grype from SBOM matches image scan | 95%+ parity | Release | +| Builds fail when unknowns > budget | Exit code 2 | PR | +| Router under burst emits correct Retry-After | 100% compliance | Nightly | +| Evidence index links complete | Validation passes | Release | + +--- + +## Artifacts Standardized + +| Artifact | Schema Location | Purpose | +|----------|-----------------|---------| +| Run Manifest | `StellaOps.Testing.Manifests` | Replay key | +| Evidence Index | `StellaOps.Evidence` | Verdict → evidence chain | +| Offline Bundle | `StellaOps.AirGap.Bundle` | Air-gap operation | +| Delta Verdict | `StellaOps.DeltaVerdict` | Diff-aware gates | +| Audit Pack | `StellaOps.AuditPack` | Compliance verification | + +--- + +## Implementation Order + +### Immediate (This Week) +1. **5100.0001.0001** - Run Manifest Schema +2. **5100.0002.0001** - Canonicalization Utilities +3. **5100.0004.0001** - Unknowns Budget CI Gates + +### Short Term (Next 2 Sprints) +4. **5100.0001.0002** - Evidence Index Schema +5. **5100.0002.0002** - Replay Runner Service +6. **5100.0003.0001** - SBOM Interop Round-Trip + +### Medium Term (Following Sprints) +7. **5100.0001.0003** - Offline Bundle Manifest +8. **5100.0003.0002** - No-Egress Enforcement +9. 
**5100.0002.0003** - Delta-Verdict Generator + +### Later +10. **5100.0001.0004** - Golden Corpus Expansion +11. **5100.0005.0001** - Router Chaos Suite +12. **5100.0006.0001** - Audit Pack Export/Import + +--- + +## Related Documentation + +- [Test Suite Overview](../19_TEST_SUITE_OVERVIEW.md) +- [Testing Strategy Advisory](../product-advisories/20-Dec-2025%20-%20Testing%20strategy.md) +- [Offline Operation Guide](../24_OFFLINE_KIT.md) +- [tests/AGENTS.md](../../tests/AGENTS.md) + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Epic created from Testing Strategy advisory analysis. 12 sprints defined across 6 phases. | Agent | + +--- + +**Epic Status**: PLANNING (0/12 sprints complete) diff --git a/docs/implplan/SPRINT_5200_0001_0001_starter_policy_template.md b/docs/implplan/SPRINT_5200_0001_0001_starter_policy_template.md new file mode 100644 index 000000000..2c6e9010b --- /dev/null +++ b/docs/implplan/SPRINT_5200_0001_0001_starter_policy_template.md @@ -0,0 +1,387 @@ +# Sprint 5200.0001.0001 · Starter Policy Template — Day-1 Policy Pack + +## Topic & Scope +- Create a production-ready "starter" policy pack that customers can adopt immediately. +- Implements the minimal policy from the Reference Architecture advisory. +- Provides sensible defaults for vulnerability gating, unknowns thresholds, and signing requirements. +- **Working directory:** `src/Policy/`, `policies/`, `docs/` + +## Dependencies & Concurrency +- **Upstream**: Policy Engine (implemented), Exception Objects (implemented) +- **Downstream**: New customer onboarding, documentation +- **Safe to parallelize with**: All other sprints + +## Documentation Prerequisites +- `docs/modules/policy/architecture.md` +- `docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md` +- `docs/policy/dsl-reference.md` (if exists) + +--- + +## Tasks + +### T1: Starter Policy YAML Definition + +**Assignee**: Policy Team +**Story Points**: 5 +**Status**: TODO + +**Description**: +Create the main starter policy YAML file with recommended defaults. + +**Implementation Path**: `policies/starter-day1.yaml` + +**Acceptance Criteria**: +- [ ] Gate on CVE with `reachability=reachable` AND `severity >= High` +- [ ] Allow bypass if VEX source says `not_affected` with evidence +- [ ] Fail on unknowns above threshold (default: 5% of packages) +- [ ] Require signed SBOM for production environments +- [ ] Require signed verdict for production deployments +- [ ] Clear comments explaining each rule +- [ ] Versioned policy pack format + +**Policy File**: +```yaml +# Stella Ops Starter Policy Pack - Day 1 +# Version: 1.0.0 +# Last Updated: 2025-12-21 +# +# This policy provides sensible defaults for organizations beginning +# their software supply chain security journey. Customize as needed. 
+ +apiVersion: policy.stellaops.io/v1 +kind: PolicyPack +metadata: + name: starter-day1 + version: "1.0.0" + description: "Production-ready starter policy for Day 1 adoption" + labels: + tier: starter + environment: all + +spec: + # Global settings + settings: + defaultAction: warn # warn | block | allow + unknownsThreshold: 0.05 # 5% of packages with missing metadata + requireSignedSbom: true + requireSignedVerdict: true + + # Rule evaluation order: first match wins + rules: + # Rule 1: Block reachable HIGH/CRITICAL vulnerabilities + - name: block-reachable-high-critical + description: "Block deployments with reachable HIGH or CRITICAL vulnerabilities" + match: + severity: + - CRITICAL + - HIGH + reachability: reachable + unless: + # Allow if VEX says not_affected with evidence + vexStatus: not_affected + vexJustification: + - vulnerable_code_not_present + - vulnerable_code_cannot_be_controlled_by_adversary + - inline_mitigations_already_exist + action: block + message: "Reachable {severity} vulnerability {cve} must be remediated or have VEX justification" + + # Rule 2: Warn on reachable MEDIUM vulnerabilities + - name: warn-reachable-medium + description: "Warn on reachable MEDIUM severity vulnerabilities" + match: + severity: MEDIUM + reachability: reachable + unless: + vexStatus: not_affected + action: warn + message: "Reachable MEDIUM vulnerability {cve} should be reviewed" + + # Rule 3: Ignore unreachable vulnerabilities (with logging) + - name: ignore-unreachable + description: "Allow unreachable vulnerabilities but log for awareness" + match: + reachability: unreachable + action: allow + log: true + message: "Vulnerability {cve} is unreachable - allowing" + + # Rule 4: Fail on excessive unknowns + - name: fail-on-unknowns + description: "Block if too many packages have unknown metadata" + type: aggregate # Applies to entire scan, not individual findings + match: + unknownsRatio: + gt: ${settings.unknownsThreshold} + action: block + message: "Unknown packages exceed threshold ({unknownsRatio}% > {threshold}%)" + + # Rule 5: Require signed SBOM for production + - name: require-signed-sbom-prod + description: "Production deployments must have signed SBOM" + match: + environment: production + require: + signedSbom: true + action: block + message: "Production deployment requires signed SBOM" + + # Rule 6: Require signed verdict for production + - name: require-signed-verdict-prod + description: "Production deployments must have signed policy verdict" + match: + environment: production + require: + signedVerdict: true + action: block + message: "Production deployment requires signed verdict" + + # Rule 7: Default allow for everything else + - name: default-allow + description: "Allow everything not matched by above rules" + match: + always: true + action: allow +``` + +--- + +### T2: Policy Pack Metadata & Schema + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO + +**Description**: +Define the policy pack schema and metadata format. + +**Acceptance Criteria**: +- [ ] JSON Schema for policy pack validation +- [ ] Version field with semver +- [ ] Dependencies field for pack composition +- [ ] Labels for categorization +- [ ] Annotations for custom metadata + +--- + +### T3: Environment-Specific Overrides + +**Assignee**: Policy Team +**Story Points**: 3 +**Status**: TODO + +**Description**: +Create environment-specific override files. 
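+
+Before the file layout, a note on the precedence model these overrides assume: the environment file is applied on top of `base.yaml`, a rule override replaces only the fields it names, and `enabled: false` removes the rule for that environment. A sketch of that merge under those assumptions (the record shapes are illustrative, not the policy engine's real types):
+
+```csharp
+using System.Collections.Generic;
+using System.Linq;
+
+public sealed record Rule(string Name, string Action);
+public sealed record RuleOverride(string Name, string? Action = null, bool Enabled = true);
+
+public static class PolicyOverrides
+{
+    // Disabled rules are dropped, overridden fields win, everything else
+    // passes through from the base pack unchanged.
+    public static IReadOnlyList<Rule> Apply(IEnumerable<Rule> baseRules, IEnumerable<RuleOverride> overrides)
+    {
+        var byName = overrides.ToDictionary(o => o.Name);
+        var merged = new List<Rule>();
+        foreach (var rule in baseRules)
+        {
+            if (!byName.TryGetValue(rule.Name, out var o))
+                merged.Add(rule);                                     // no override for this environment
+            else if (o.Enabled)
+                merged.Add(rule with { Action = o.Action ?? rule.Action });
+            // o.Enabled == false: the rule is omitted entirely
+        }
+        return merged;
+    }
+}
+```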
+
+**Implementation Path**: `policies/starter-day1/`
+
+**Acceptance Criteria**:
+- [ ] `base.yaml` - Core rules
+- [ ] `overrides/production.yaml` - Stricter for prod
+- [ ] `overrides/staging.yaml` - Moderate strictness
+- [ ] `overrides/development.yaml` - Lenient for dev
+- [ ] Clear documentation on override precedence
+
+**Override Example**:
+```yaml
+# policies/starter-day1/overrides/development.yaml
+apiVersion: policy.stellaops.io/v1
+kind: PolicyOverride
+metadata:
+  name: starter-day1-dev
+  parent: starter-day1
+  environment: development
+
+spec:
+  settings:
+    defaultAction: warn       # Never block in dev
+    unknownsThreshold: 0.20   # Allow more unknowns
+
+  ruleOverrides:
+    - name: block-reachable-high-critical
+      action: warn            # Downgrade to warn in dev
+
+    - name: require-signed-sbom-prod
+      enabled: false          # Disable in dev
+
+    - name: require-signed-verdict-prod
+      enabled: false          # Disable in dev
+```
+
+---
+
+### T4: Policy Validation CLI Command
+
+**Assignee**: CLI Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Add CLI command to validate policy packs before deployment.
+
+**Acceptance Criteria**:
+- [ ] `stellaops policy validate <policy-file>`
+- [ ] Schema validation
+- [ ] Rule conflict detection
+- [ ] Circular dependency detection
+- [ ] Warning for missing common rules
+- [ ] Exit codes: 0=valid, 1=errors, 2=warnings
+
+---
+
+### T5: Policy Simulation Mode
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Add simulation mode to test policy against historical data.
+
+**Acceptance Criteria**:
+- [ ] `stellaops policy simulate --policy <policy-file> --scan <scan-id>`
+- [ ] Shows what would have happened
+- [ ] Diff against current policy
+- [ ] Summary statistics
+- [ ] No state mutation
+
+---
+
+### T6: Starter Policy Tests
+
+**Assignee**: Policy Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Comprehensive tests for starter policy behavior.
+
+**Acceptance Criteria**:
+- [ ] Test: Reachable HIGH blocked without VEX
+- [ ] Test: Reachable HIGH allowed with VEX not_affected
+- [ ] Test: Unreachable HIGH allowed
+- [ ] Test: Unknowns threshold enforced
+- [ ] Test: Signed SBOM required for prod
+- [ ] Test: Dev overrides work correctly
+
+---
+
+### T7: Policy Pack Distribution
+
+**Assignee**: Policy Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Package and distribute starter policy pack.
+
+**Acceptance Criteria**:
+- [ ] OCI artifact packaging for policy pack
+- [ ] Version tagging
+- [ ] Signature on policy pack artifact
+- [ ] Registry push (configurable)
+- [ ] Offline bundle support
+
+---
+
+### T8: User Documentation
+
+**Assignee**: Docs Team
+**Story Points**: 3
+**Status**: TODO
+
+**Description**:
+Comprehensive user documentation for starter policy.
+
+**Implementation Path**: `docs/policy/starter-guide.md`
+
+**Acceptance Criteria**:
+- [ ] "Getting Started with Policies" guide
+- [ ] Rule-by-rule explanation
+- [ ] Customization guide
+- [ ] Environment override examples
+- [ ] Troubleshooting common issues
+- [ ] Migration path to custom policies
+
+---
+
+### T9: Quick Start Integration
+
+**Assignee**: Docs Team
+**Story Points**: 2
+**Status**: TODO
+
+**Description**:
+Integrate starter policy into quick start documentation.
+ +**Acceptance Criteria**: +- [ ] Update `docs/10_CONCELIER_CLI_QUICKSTART.md` +- [ ] One-liner to install starter policy +- [ ] Example scan with policy evaluation +- [ ] Link to customization docs + +--- + +### T10: UI Policy Selector + +**Assignee**: UI Team +**Story Points**: 2 +**Status**: TODO + +**Description**: +Add starter policy as default option in UI policy selector. + +**Acceptance Criteria**: +- [ ] "Starter (Recommended)" option in dropdown +- [ ] Tooltip explaining starter policy +- [ ] One-click activation +- [ ] Preview of rules before activation + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | Policy Team | Starter Policy YAML | +| 2 | T2 | TODO | T1 | Policy Team | Pack Metadata & Schema | +| 3 | T3 | TODO | T1 | Policy Team | Environment Overrides | +| 4 | T4 | TODO | T1 | CLI Team | Validation CLI Command | +| 5 | T5 | TODO | T1 | Policy Team | Simulation Mode | +| 6 | T6 | TODO | T1-T3 | Policy Team | Starter Policy Tests | +| 7 | T7 | TODO | T1-T3 | Policy Team | Pack Distribution | +| 8 | T8 | TODO | T1-T3 | Docs Team | User Documentation | +| 9 | T9 | TODO | T8 | Docs Team | Quick Start Integration | +| 10 | T10 | TODO | T1 | UI Team | UI Policy Selector | + +--- + +## Execution Log + +| Date (UTC) | Update | Owner | +|------------|--------|-------| +| 2025-12-21 | Sprint created from Reference Architecture advisory - starter policy gap. | Agent | + +--- + +## Decisions & Risks + +| Item | Type | Owner | Notes | +|------|------|-------|-------| +| 5% unknowns threshold | Decision | Policy Team | Conservative default; can be adjusted | +| First-match semantics | Decision | Policy Team | Consistent with existing policy engine | +| VEX required for bypass | Decision | Policy Team | Evidence-based exceptions only | +| Prod-only signing req | Decision | Policy Team | Don't burden dev/staging environments | + +--- + +## Success Criteria + +- [ ] New customers can deploy starter policy in <5 minutes +- [ ] Starter policy blocks reachable HIGH/CRITICAL without VEX +- [ ] Clear upgrade path to custom policies +- [ ] Documentation enables self-service adoption +- [ ] Policy pack signed and published to registry + +**Sprint Status**: TODO (0/10 tasks complete) diff --git a/docs/implplan/SPRINT_6000_0001_0001_binaries_schema.md b/docs/implplan/SPRINT_6000_0001_0001_binaries_schema.md new file mode 100644 index 000000000..21e82b894 --- /dev/null +++ b/docs/implplan/SPRINT_6000_0001_0001_binaries_schema.md @@ -0,0 +1,589 @@ +# Sprint 6000.0001.0001 · Binaries Schema + +## Topic & Scope + +- Create the `binaries` PostgreSQL schema for the BinaryIndex module. +- Implement all core tables: `binary_identity`, `binary_package_map`, `vulnerable_buildids`, `binary_vuln_assertion`, `corpus_snapshots`. +- Set up RLS policies and indexes for multi-tenant isolation. 
+- **Working directory:** `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: None (foundational sprint)
+- **Downstream**: All 6000.0001.x sprints depend on this
+- **Safe to parallelize with**: None within MVP 1
+
+## Documentation Prerequisites
+
+- `docs/db/SPECIFICATION.md`
+- `docs/db/schemas/binaries_schema_specification.md`
+- `docs/modules/binaryindex/architecture.md`
+
+---
+
+## Tasks
+
+### T1: Create Project Structure
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create the BinaryIndex persistence library project structure.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/`
+
+**Project Structure**:
+```
+StellaOps.BinaryIndex.Persistence/
+├── StellaOps.BinaryIndex.Persistence.csproj
+├── BinaryIndexDbContext.cs
+├── Migrations/
+│   └── 001_create_binaries_schema.sql
+├── Repositories/
+│   ├── IBinaryIdentityRepository.cs
+│   ├── BinaryIdentityRepository.cs
+│   ├── IBinaryPackageMapRepository.cs
+│   └── BinaryPackageMapRepository.cs
+└── Extensions/
+    └── ServiceCollectionExtensions.cs
+```
+
+**Project File**:
+```xml
+<Project Sdk="Microsoft.NET.Sdk">
+
+  <PropertyGroup>
+    <TargetFramework>net10.0</TargetFramework>
+    <ImplicitUsings>enable</ImplicitUsings>
+    <Nullable>enable</Nullable>
+    <LangVersion>preview</LangVersion>
+  </PropertyGroup>
+
+  <ItemGroup>
+    <!-- Package references (e.g. Npgsql, Dapper) and the project reference to
+         the shared Infrastructure.Postgres library go here; see the
+         acceptance criteria below. -->
+  </ItemGroup>
+
+  <ItemGroup>
+    <EmbeddedResource Include="Migrations\**\*.sql" />
+  </ItemGroup>
+
+</Project>
+```
+
+**Acceptance Criteria**:
+- [ ] Project compiles
+- [ ] References Infrastructure.Postgres for shared patterns
+- [ ] Migrations embedded as resources
+
+---
+
+### T2: Create Initial Migration
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Create the SQL migration that establishes the binaries schema.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/Migrations/001_create_binaries_schema.sql`
+
+**Migration Content**:
+```sql
+-- 001_create_binaries_schema.sql
+-- Creates the binaries schema for BinaryIndex module
+-- Author: BinaryIndex Team
+-- Date: 2025-12-21
+
+BEGIN;
+
+-- ============================================================================
+-- SCHEMA CREATION
+-- ============================================================================
+
+CREATE SCHEMA IF NOT EXISTS binaries;
+CREATE SCHEMA IF NOT EXISTS binaries_app;
+
+-- RLS helper function
+CREATE OR REPLACE FUNCTION binaries_app.require_current_tenant()
+RETURNS TEXT
+LANGUAGE plpgsql STABLE SECURITY DEFINER
+AS $$
+DECLARE
+    v_tenant TEXT;
+BEGIN
+    v_tenant := current_setting('app.tenant_id', true);
+    IF v_tenant IS NULL OR v_tenant = '' THEN
+        RAISE EXCEPTION 'app.tenant_id session variable not set';
+    END IF;
+    RETURN v_tenant;
+END;
+$$;
+
+-- ============================================================================
+-- CORE TABLES (see binaries_schema_specification.md for full DDL)
+-- ============================================================================
+
+-- binary_identity table
+CREATE TABLE IF NOT EXISTS binaries.binary_identity (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    tenant_id UUID NOT NULL,
+    binary_key TEXT NOT NULL,
+    build_id TEXT,
+    build_id_type TEXT CHECK (build_id_type IN ('gnu-build-id', 'pe-cv', 'macho-uuid')),
+    file_sha256 TEXT NOT NULL,
+    text_sha256 TEXT,
+    blake3_hash TEXT,
+    format TEXT NOT NULL CHECK (format IN ('elf', 'pe', 'macho')),
+    architecture TEXT NOT NULL,
+    osabi TEXT,
+    binary_type TEXT CHECK (binary_type IN ('executable', 'shared_library', 'static_library', 'object')),
+    is_stripped BOOLEAN DEFAULT FALSE,
+    first_seen_snapshot_id UUID,
+    last_seen_snapshot_id UUID,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    CONSTRAINT binary_identity_key_unique UNIQUE (tenant_id, binary_key)
+);
+
+-- corpus_snapshots table
+CREATE TABLE IF NOT EXISTS binaries.corpus_snapshots (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    tenant_id UUID NOT NULL,
+    distro TEXT NOT NULL,
+    release TEXT NOT NULL,
+    architecture TEXT NOT NULL,
+    snapshot_id TEXT NOT NULL,
+    packages_processed INT NOT NULL DEFAULT 0,
+    binaries_indexed INT NOT NULL DEFAULT 0,
+    repo_metadata_digest TEXT,
+    signing_key_id TEXT,
+    dsse_envelope_ref TEXT,
+    status TEXT NOT NULL DEFAULT 'pending' CHECK (status IN ('pending', 'processing', 'completed', 'failed')),
+    error TEXT,
+    started_at TIMESTAMPTZ,
+    completed_at TIMESTAMPTZ,
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    CONSTRAINT corpus_snapshots_unique UNIQUE (tenant_id, distro, release, architecture, snapshot_id)
+);
+
+-- binary_package_map table
+CREATE TABLE IF NOT EXISTS binaries.binary_package_map (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    tenant_id UUID NOT NULL,
+    binary_identity_id UUID NOT NULL REFERENCES binaries.binary_identity(id) ON DELETE CASCADE,
+    binary_key TEXT NOT NULL,
+    distro TEXT NOT NULL,
+    release TEXT NOT NULL,
+    source_pkg TEXT NOT NULL,
+    binary_pkg TEXT NOT NULL,
+    pkg_version TEXT NOT NULL,
+    pkg_purl TEXT,
+    architecture TEXT NOT NULL,
+    file_path_in_pkg TEXT NOT NULL,
+    snapshot_id UUID NOT NULL REFERENCES binaries.corpus_snapshots(id),
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    CONSTRAINT binary_package_map_unique UNIQUE (binary_identity_id, snapshot_id, file_path_in_pkg)
+);
+
+-- vulnerable_buildids table
+CREATE TABLE IF NOT EXISTS binaries.vulnerable_buildids (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    tenant_id UUID NOT NULL,
+    buildid_type TEXT NOT NULL CHECK (buildid_type IN ('gnu-build-id', 'pe-cv', 'macho-uuid')),
+    buildid_value TEXT NOT NULL,
+    purl TEXT NOT NULL,
+    pkg_version TEXT NOT NULL,
+    distro TEXT,
+    release TEXT,
+    confidence TEXT NOT NULL DEFAULT 'exact' CHECK (confidence IN ('exact', 'inferred', 'heuristic')),
+    provenance JSONB DEFAULT '{}',
+    snapshot_id UUID REFERENCES binaries.corpus_snapshots(id),
+    indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    CONSTRAINT vulnerable_buildids_unique UNIQUE (tenant_id, buildid_value, buildid_type, purl, pkg_version)
+);
+
+-- binary_vuln_assertion table
+CREATE TABLE IF NOT EXISTS binaries.binary_vuln_assertion (
+    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
+    tenant_id UUID NOT NULL,
+    binary_key TEXT NOT NULL,
+    binary_identity_id UUID REFERENCES binaries.binary_identity(id),
+    cve_id TEXT NOT NULL,
+    advisory_id UUID,
+    status TEXT NOT NULL CHECK (status IN ('affected', 'not_affected', 'fixed', 'unknown')),
+    method TEXT NOT NULL CHECK (method IN ('range_match', 'buildid_catalog', 'fingerprint_match', 'fix_index')),
+    confidence NUMERIC(3,2) CHECK (confidence >= 0 AND confidence <= 1),
+    evidence_ref TEXT,
+    evidence_digest TEXT,
+    evaluated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
+    CONSTRAINT binary_vuln_assertion_unique UNIQUE (tenant_id, binary_key, cve_id)
+);
+
+-- ============================================================================
+-- INDEXES
+-- ============================================================================
+
+CREATE INDEX IF NOT EXISTS idx_binary_identity_tenant ON binaries.binary_identity(tenant_id);
+CREATE INDEX IF NOT EXISTS idx_binary_identity_buildid ON binaries.binary_identity(build_id) WHERE build_id IS NOT NULL;
+CREATE INDEX IF NOT EXISTS idx_binary_identity_sha256 ON binaries.binary_identity(file_sha256);
+CREATE INDEX IF NOT EXISTS idx_binary_identity_key ON binaries.binary_identity(binary_key);
+
+CREATE INDEX IF NOT EXISTS idx_binary_package_map_tenant ON binaries.binary_package_map(tenant_id);
+CREATE INDEX IF NOT EXISTS idx_binary_package_map_binary ON binaries.binary_package_map(binary_identity_id);
+CREATE INDEX IF NOT EXISTS idx_binary_package_map_distro ON binaries.binary_package_map(distro, release, source_pkg);
+CREATE INDEX IF NOT EXISTS idx_binary_package_map_snapshot ON binaries.binary_package_map(snapshot_id);
+
+CREATE INDEX IF NOT EXISTS idx_corpus_snapshots_tenant ON binaries.corpus_snapshots(tenant_id);
+CREATE INDEX IF NOT EXISTS idx_corpus_snapshots_distro ON binaries.corpus_snapshots(distro, release, architecture);
+CREATE INDEX IF NOT EXISTS idx_corpus_snapshots_status ON binaries.corpus_snapshots(status) WHERE status IN ('pending', 'processing');
+
+CREATE INDEX IF NOT EXISTS idx_vulnerable_buildids_tenant ON binaries.vulnerable_buildids(tenant_id);
+CREATE INDEX IF NOT EXISTS idx_vulnerable_buildids_value ON binaries.vulnerable_buildids(buildid_type, buildid_value);
+CREATE INDEX IF NOT EXISTS idx_vulnerable_buildids_purl ON binaries.vulnerable_buildids(purl);
+
+CREATE INDEX IF NOT EXISTS idx_binary_vuln_assertion_tenant ON binaries.binary_vuln_assertion(tenant_id);
+CREATE INDEX IF NOT EXISTS idx_binary_vuln_assertion_binary ON binaries.binary_vuln_assertion(binary_key);
+CREATE INDEX IF NOT EXISTS idx_binary_vuln_assertion_cve ON binaries.binary_vuln_assertion(cve_id);
+
+-- ============================================================================
+-- ROW-LEVEL SECURITY
+-- ============================================================================
+
+ALTER TABLE binaries.binary_identity ENABLE ROW LEVEL SECURITY;
+ALTER TABLE binaries.binary_identity FORCE ROW LEVEL SECURITY;
+CREATE POLICY binary_identity_tenant_isolation ON binaries.binary_identity
+    FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant())
+    WITH CHECK (tenant_id::text = binaries_app.require_current_tenant());
+
+ALTER TABLE binaries.corpus_snapshots ENABLE ROW LEVEL SECURITY;
+ALTER TABLE binaries.corpus_snapshots FORCE ROW LEVEL SECURITY;
+CREATE POLICY corpus_snapshots_tenant_isolation ON binaries.corpus_snapshots
+    FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant())
+    WITH CHECK (tenant_id::text = binaries_app.require_current_tenant());
+
+ALTER TABLE binaries.binary_package_map ENABLE ROW LEVEL SECURITY;
+ALTER TABLE binaries.binary_package_map FORCE ROW LEVEL SECURITY;
+CREATE POLICY binary_package_map_tenant_isolation ON binaries.binary_package_map
+    FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant())
+    WITH CHECK (tenant_id::text = binaries_app.require_current_tenant());
+
+ALTER TABLE binaries.vulnerable_buildids ENABLE ROW LEVEL SECURITY;
+ALTER TABLE binaries.vulnerable_buildids FORCE ROW LEVEL SECURITY;
+CREATE POLICY vulnerable_buildids_tenant_isolation ON binaries.vulnerable_buildids
+    FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant())
+    WITH CHECK (tenant_id::text = binaries_app.require_current_tenant());
+
+ALTER TABLE binaries.binary_vuln_assertion ENABLE ROW LEVEL SECURITY;
+ALTER TABLE binaries.binary_vuln_assertion FORCE ROW LEVEL SECURITY;
+CREATE POLICY binary_vuln_assertion_tenant_isolation ON binaries.binary_vuln_assertion
+    FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant())
+    WITH CHECK (tenant_id::text = binaries_app.require_current_tenant());
+
+COMMIT;
+```
+
+**Acceptance Criteria**:
+- [ ] Migration applies cleanly on fresh database
+- [ ] Migration is idempotent (IF NOT EXISTS)
+- [ ] RLS policies enforce tenant isolation
+- [ ] All indexes created
+
+---
+
+### T3: Implement Migration Runner
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Implement the migration runner that applies embedded SQL migrations.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/BinaryIndexMigrationRunner.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.BinaryIndex.Persistence;
+
+public sealed class BinaryIndexMigrationRunner : IMigrationRunner
+{
+    private readonly NpgsqlDataSource _dataSource;
+    private readonly ILogger<BinaryIndexMigrationRunner> _logger;
+
+    public BinaryIndexMigrationRunner(
+        NpgsqlDataSource dataSource,
+        ILogger<BinaryIndexMigrationRunner> logger)
+    {
+        _dataSource = dataSource;
+        _logger = logger;
+    }
+
+    public async Task MigrateAsync(CancellationToken ct = default)
+    {
+        const string lockKey = "binaries_schema_migration";
+        // string.GetHashCode() is randomized per process, so it cannot key a
+        // cross-process advisory lock; derive a stable lock id instead.
+        var lockHash = BitConverter.ToInt32(
+            SHA256.HashData(Encoding.UTF8.GetBytes(lockKey)), 0);
+
+        await using var connection = await _dataSource.OpenConnectionAsync(ct);
+
+        // Acquire advisory lock
+        await using var lockCmd = connection.CreateCommand();
+        lockCmd.CommandText = $"SELECT pg_try_advisory_lock({lockHash})";
+        var acquired = (bool)(await lockCmd.ExecuteScalarAsync(ct))!;
+
+        if (!acquired)
+        {
+            _logger.LogInformation("Migration already in progress, skipping");
+            return;
+        }
+
+        try
+        {
+            var migrations = GetEmbeddedMigrations();
+            foreach (var (name, sql) in migrations.OrderBy(m => m.name))
+            {
+                _logger.LogInformation("Applying migration: {Name}", name);
+                await using var cmd = connection.CreateCommand();
+                cmd.CommandText = sql;
+                await cmd.ExecuteNonQueryAsync(ct);
+            }
+        }
+        finally
+        {
+            await using var unlockCmd = connection.CreateCommand();
+            unlockCmd.CommandText = $"SELECT pg_advisory_unlock({lockHash})";
+            await unlockCmd.ExecuteScalarAsync(ct);
+        }
+    }
+
+    private static IEnumerable<(string name, string sql)> GetEmbeddedMigrations()
+    {
+        var assembly = typeof(BinaryIndexMigrationRunner).Assembly;
+        var prefix = "StellaOps.BinaryIndex.Persistence.Migrations.";
+
+        foreach (var resourceName in assembly.GetManifestResourceNames()
+            .Where(n => n.StartsWith(prefix) && n.EndsWith(".sql")))
+        {
+            using var stream = assembly.GetManifestResourceStream(resourceName)!;
+            using var reader = new StreamReader(stream);
+            var sql = reader.ReadToEnd();
+            var name = resourceName[prefix.Length..];
+            yield return (name, sql);
+        }
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Migrations applied on startup
+- [ ] Advisory lock prevents concurrent migrations
+- [ ] Embedded resources correctly loaded
+
+---
+
+### T4: Implement DbContext and Repositories
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T2, T3
+
+**Description**:
+Implement the database context and repository interfaces for core tables.
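+
+Ahead of the concrete paths, a minimal Dapper-flavoured sketch of the upsert, with the column list condensed from the T2 migration; the tenant id comes from the RLS session variable set by the DbContext below, so it is never passed explicitly:
+
+```csharp
+public sealed class BinaryIdentityRepository // other IBinaryIdentityRepository members elided
+{
+    private readonly IBinaryIndexDbContext _db;
+
+    public BinaryIdentityRepository(IBinaryIndexDbContext db) => _db = db;
+
+    public async Task<BinaryIdentity> UpsertAsync(BinaryIdentity identity, CancellationToken ct)
+    {
+        const string sql = """
+            INSERT INTO binaries.binary_identity
+                (tenant_id, binary_key, build_id, build_id_type, file_sha256, format, architecture)
+            VALUES
+                (current_setting('app.tenant_id')::uuid, @BinaryKey, @BuildId, @BuildIdType,
+                 @FileSha256, @Format, @Architecture)
+            ON CONFLICT (tenant_id, binary_key) DO UPDATE SET updated_at = NOW();
+            """;
+
+        await using var connection = await _db.OpenConnectionAsync(ct);
+        await connection.ExecuteAsync(new CommandDefinition(sql, new
+        {
+            identity.BinaryKey,
+            identity.BuildId,
+            identity.BuildIdType,
+            identity.FileSha256,
+            identity.Format,
+            identity.Architecture,
+        }, cancellationToken: ct));
+        return identity; // sketch: real code would re-read the row (RETURNING *)
+    }
+}
+```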
+
+**Implementation Paths**:
+- `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/BinaryIndexDbContext.cs`
+- `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/Repositories/`
+
+**DbContext**:
+```csharp
+namespace StellaOps.BinaryIndex.Persistence;
+
+public sealed class BinaryIndexDbContext : IBinaryIndexDbContext
+{
+    private readonly NpgsqlDataSource _dataSource;
+    private readonly ITenantContext _tenantContext;
+
+    public BinaryIndexDbContext(
+        NpgsqlDataSource dataSource,
+        ITenantContext tenantContext)
+    {
+        _dataSource = dataSource;
+        _tenantContext = tenantContext;
+    }
+
+    public async Task<NpgsqlConnection> OpenConnectionAsync(CancellationToken ct = default)
+    {
+        var connection = await _dataSource.OpenConnectionAsync(ct);
+
+        // Set tenant context for RLS; set_config keeps the value parameterized
+        // instead of interpolating it into the SQL text.
+        await using var cmd = connection.CreateCommand();
+        cmd.CommandText = "SELECT set_config('app.tenant_id', @tenant, false)";
+        cmd.Parameters.AddWithValue("tenant", _tenantContext.TenantId);
+        await cmd.ExecuteNonQueryAsync(ct);
+
+        return connection;
+    }
+}
+```
+
+**Repository Interface**:
+```csharp
+public interface IBinaryIdentityRepository
+{
+    Task<BinaryIdentity?> GetByBuildIdAsync(string buildId, string buildIdType, CancellationToken ct);
+    Task<BinaryIdentity?> GetByKeyAsync(string binaryKey, CancellationToken ct);
+    Task<BinaryIdentity> UpsertAsync(BinaryIdentity identity, CancellationToken ct);
+    Task<IReadOnlyList<BinaryIdentity>> GetBatchAsync(IEnumerable<string> binaryKeys, CancellationToken ct);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] DbContext sets tenant context on connection
+- [ ] Repositories implement CRUD operations
+- [ ] Dapper used for data access
+- [ ] Unit tests pass
+
+---
+
+### T5: Integration Tests with Testcontainers
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Description**:
+Create integration tests using Testcontainers for PostgreSQL.
+
+**Implementation Path**: `src/BinaryIndex/__Tests/StellaOps.BinaryIndex.Persistence.Tests/`
+
+**Test Class**:
+```csharp
+namespace StellaOps.BinaryIndex.Persistence.Tests;
+
+public class BinaryIdentityRepositoryTests : IAsyncLifetime
+{
+    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
+        .WithImage("postgres:16-alpine")
+        .Build();
+
+    private NpgsqlDataSource _dataSource = null!;
+    private BinaryIdentityRepository _repository = null!;
+
+    public async Task InitializeAsync()
+    {
+        await _postgres.StartAsync();
+        _dataSource = NpgsqlDataSource.Create(_postgres.GetConnectionString());
+
+        var migrationRunner = new BinaryIndexMigrationRunner(
+            _dataSource,
+            NullLogger<BinaryIndexMigrationRunner>.Instance);
+        await migrationRunner.MigrateAsync();
+
+        var dbContext = new BinaryIndexDbContext(
+            _dataSource,
+            new TestTenantContext("11111111-1111-1111-1111-111111111111")); // RLS expects a UUID tenant id
+        _repository = new BinaryIdentityRepository(dbContext);
+    }
+
+    public async Task DisposeAsync()
+    {
+        await _dataSource.DisposeAsync();
+        await _postgres.DisposeAsync();
+    }
+
+    [Fact]
+    public async Task UpsertAsync_NewIdentity_CreatesRecord()
+    {
+        var identity = new BinaryIdentity
+        {
+            BinaryKey = "test-build-id-123",
+            BuildId = "abc123def456",
+            BuildIdType = "gnu-build-id",
+            FileSha256 = "sha256:...",
+            Format = "elf",
+            Architecture = "x86-64"
+        };
+
+        var result = await _repository.UpsertAsync(identity, CancellationToken.None);
+
+        result.Id.Should().NotBeEmpty();
+        result.BinaryKey.Should().Be(identity.BinaryKey);
+    }
+
+    [Fact]
+    public async Task GetByBuildIdAsync_ExistingIdentity_ReturnsRecord()
+    {
+        // Arrange
+        var identity = new BinaryIdentity { /* ... */ };
+        await _repository.UpsertAsync(identity, CancellationToken.None);
+
+        // Act
+        var result = await _repository.GetByBuildIdAsync(
+            identity.BuildId!, identity.BuildIdType!, CancellationToken.None);
+
+        // Assert
+        result.Should().NotBeNull();
+        result!.BuildId.Should().Be(identity.BuildId);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Testcontainers PostgreSQL spins up
+- [ ] Migrations apply in tests
+- [ ] Repository CRUD operations tested
+- [ ] RLS isolation verified
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | BinaryIndex Team | Create Project Structure |
+| 2 | T2 | TODO | T1 | BinaryIndex Team | Create Initial Migration |
+| 3 | T3 | TODO | T1, T2 | BinaryIndex Team | Implement Migration Runner |
+| 4 | T4 | TODO | T2, T3 | BinaryIndex Team | Implement DbContext and Repositories |
+| 5 | T5 | TODO | T1-T4 | BinaryIndex Team | Integration Tests with Testcontainers |
+
+---
+
+## Execution Log
+
+| Date (UTC) | Update | Owner |
+|------------|--------|-------|
+| 2025-12-21 | Sprint created from BinaryIndex architecture. Schema foundational for all BinaryIndex functionality. | Agent |
+
+---
+
+## Decisions & Risks
+
+| Item | Type | Owner | Notes |
+|------|------|-------|-------|
+| RLS for tenant isolation | Decision | BinaryIndex Team | Consistent with other StellaOps schemas |
+| Dapper over EF Core | Decision | BinaryIndex Team | Performance-critical lookups |
+| Build-ID as primary identity | Decision | BinaryIndex Team | ELF Build-ID preferred, fallback to SHA-256 |
+
+---
+
+## Success Criteria
+
+- [ ] All 5 tasks marked DONE
+- [ ] `binaries` schema deployed and migrated
+- [ ] RLS enforces tenant isolation
+- [ ] Repository pattern implemented
+- [ ] Integration tests pass with Testcontainers
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds with 100% pass rate
diff --git a/docs/implplan/SPRINT_6000_0001_0002_binary_identity_service.md b/docs/implplan/SPRINT_6000_0001_0002_binary_identity_service.md
new file mode 100644
index 000000000..fe45315c7
--- /dev/null
+++ b/docs/implplan/SPRINT_6000_0001_0002_binary_identity_service.md
@@ -0,0 +1,390 @@
+# Sprint 6000.0001.0002 · Binary Identity Service
+
+## Topic & Scope
+
+- Implement the core Binary Identity extraction and storage service.
+- Create domain models for BinaryIdentity, BinaryFeatures, and related types.
+- Integrate with existing Scanner.Analyzers.Native for ELF/PE/Mach-O parsing.
+- **Working directory:** `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 6000.0001.0001 (Binaries Schema)
+- **Downstream**: Sprints 6000.0001.0003, 6000.0001.0004
+- **Safe to parallelize with**: None
+
+## Documentation Prerequisites
+
+- `docs/modules/binaryindex/architecture.md`
+- `src/Scanner/StellaOps.Scanner.Analyzers.Native/` (existing ELF parser)
+
+---
+
+## Tasks
+
+### T1: Create Core Domain Models
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: —
+
+**Description**:
+Create domain models for binary identity and features.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Models/`
+
+**Models**:
+```csharp
+namespace StellaOps.BinaryIndex.Core.Models;
+
+/// <summary>
+/// Unique identity of a binary derived from Build-ID or hashes.
+/// </summary>
+public sealed record BinaryIdentity
+{
+    public Guid Id { get; init; }
+    public required string BinaryKey { get; init; }  // Primary key: build_id || file_sha256
+    public string? BuildId { get; init; }            // ELF GNU Build-ID
+    public string? BuildIdType { get; init; }        // gnu-build-id, pe-cv, macho-uuid
+    public required string FileSha256 { get; init; }
+    public string? TextSha256 { get; init; }         // SHA-256 of .text section
+    public required BinaryFormat Format { get; init; }
+    public required string Architecture { get; init; }
+    public string? OsAbi { get; init; }
+    public BinaryType? Type { get; init; }
+    public bool IsStripped { get; init; }
+    public DateTimeOffset CreatedAt { get; init; }
+}
+
+public enum BinaryFormat { Elf, Pe, Macho }
+public enum BinaryType { Executable, SharedLibrary, StaticLibrary, Object }
+
+/// <summary>
+/// Extended features extracted from a binary.
+/// </summary>
+public sealed record BinaryFeatures
+{
+    public required BinaryIdentity Identity { get; init; }
+    public ImmutableArray<string> DynamicDeps { get; init; } = [];   // DT_NEEDED
+    public ImmutableArray<string> ExportedSymbols { get; init; } = [];
+    public ImmutableArray<string> ImportedSymbols { get; init; } = [];
+    public BinaryHardening? Hardening { get; init; }
+    public string? Interpreter { get; init; }                        // ELF interpreter path
+}
+
+public sealed record BinaryHardening(
+    bool HasStackCanary,
+    bool HasNx,
+    bool HasPie,
+    bool HasRelro,
+    bool HasBindNow);
+```
+
+**Acceptance Criteria**:
+- [ ] All domain models created with immutable records
+- [ ] XML documentation on all types
+- [ ] Models align with database schema
+
+---
+
+### T2: Create IBinaryFeatureExtractor Interface
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1
+
+**Description**:
+Define the interface for binary feature extraction.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Services/IBinaryFeatureExtractor.cs`
+
+**Interface**:
+```csharp
+namespace StellaOps.BinaryIndex.Core.Services;
+
+public interface IBinaryFeatureExtractor
+{
+    /// <summary>
+    /// Extract identity from a binary stream.
+    /// </summary>
+    Task<BinaryIdentity> ExtractIdentityAsync(
+        Stream binaryStream,
+        CancellationToken ct = default);
+
+    /// <summary>
+    /// Extract full features from a binary stream.
+    /// </summary>
+    Task<BinaryFeatures> ExtractFeaturesAsync(
+        Stream binaryStream,
+        FeatureExtractorOptions? options = null,
+        CancellationToken ct = default);
+}
+
+public sealed record FeatureExtractorOptions
+{
+    public bool ExtractSymbols { get; init; } = true;
+    public bool ExtractHardening { get; init; } = true;
+    public int MaxSymbols { get; init; } = 10000;
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Interface defined with async methods
+- [ ] Options record for configuration
+- [ ] Documentation complete
+
+---
+
+### T3: Implement ElfFeatureExtractor
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Description**:
+Implement feature extraction for ELF binaries using existing Scanner.Analyzers.Native code.
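+
+For orientation before the implementation: a GNU Build-ID travels in an ELF note record — 32-bit name and descriptor lengths, a 32-bit type (`3`, NT_GNU_BUILD_ID), the owner string `GNU\0`, then the ID bytes, with name and descriptor each padded to 4 bytes. A standalone decoding sketch for an already-extracted little-endian note buffer (the helper is ours for illustration, not part of Scanner.Analyzers.Native):
+
+```csharp
+using System;
+using System.Buffers.Binary;
+
+public static class GnuBuildIdNote
+{
+    // Walks note records in a PT_NOTE buffer; returns the Build-ID descriptor
+    // as lowercase hex, or null when no NT_GNU_BUILD_ID record is present.
+    public static string? TryExtract(ReadOnlySpan<byte> notes)
+    {
+        static int Pad4(int n) => (n + 3) & ~3;
+
+        var offset = 0;
+        while (offset + 12 <= notes.Length)
+        {
+            var nameSize = BinaryPrimitives.ReadInt32LittleEndian(notes[offset..]);
+            var descSize = BinaryPrimitives.ReadInt32LittleEndian(notes[(offset + 4)..]);
+            var type = BinaryPrimitives.ReadInt32LittleEndian(notes[(offset + 8)..]);
+            offset += 12;
+
+            if (nameSize < 0 || descSize < 0 || offset + Pad4(nameSize) + Pad4(descSize) > notes.Length)
+                return null; // malformed record
+
+            var name = notes.Slice(offset, Math.Max(nameSize - 1, 0)); // drop trailing NUL
+            var desc = notes.Slice(offset + Pad4(nameSize), descSize);
+
+            if (type == 3 && name.SequenceEqual("GNU"u8))
+                return Convert.ToHexString(desc).ToLowerInvariant();
+
+            offset += Pad4(nameSize) + Pad4(descSize);
+        }
+        return null;
+    }
+}
+```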
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Services/ElfFeatureExtractor.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.BinaryIndex.Core.Services;
+
+public sealed class ElfFeatureExtractor : IBinaryFeatureExtractor
+{
+    private readonly ILogger<ElfFeatureExtractor> _logger;
+
+    public async Task<BinaryIdentity> ExtractIdentityAsync(
+        Stream binaryStream,
+        CancellationToken ct = default)
+    {
+        // Compute file hash
+        var fileHash = await ComputeSha256Async(binaryStream, ct);
+        binaryStream.Position = 0;
+
+        // Parse ELF header and notes
+        var elfReader = new ElfReader();
+        var elfInfo = await elfReader.ParseAsync(binaryStream, ct);
+
+        // Extract Build-ID from PT_NOTE sections
+        var buildId = elfInfo.Notes
+            .FirstOrDefault(n => n.Name == "GNU" && n.Type == 3)?
+            .DescriptorHex;
+
+        // Compute .text section hash if available
+        var textHash = elfInfo.Sections
+            .FirstOrDefault(s => s.Name == ".text")
+            ?.ContentHash;
+
+        var binaryKey = buildId ?? fileHash;
+
+        return new BinaryIdentity
+        {
+            BinaryKey = binaryKey,
+            BuildId = buildId,
+            BuildIdType = buildId != null ? "gnu-build-id" : null,
+            FileSha256 = fileHash,
+            TextSha256 = textHash,
+            Format = BinaryFormat.Elf,
+            Architecture = MapArchitecture(elfInfo.Machine),
+            OsAbi = elfInfo.OsAbi,
+            Type = MapBinaryType(elfInfo.Type),
+            IsStripped = !elfInfo.HasDebugInfo
+        };
+    }
+
+    public async Task<BinaryFeatures> ExtractFeaturesAsync(
+        Stream binaryStream,
+        FeatureExtractorOptions? options = null,
+        CancellationToken ct = default)
+    {
+        options ??= new FeatureExtractorOptions();
+
+        var identity = await ExtractIdentityAsync(binaryStream, ct);
+        binaryStream.Position = 0;
+
+        var elfReader = new ElfReader();
+        var elfInfo = await elfReader.ParseAsync(binaryStream, ct);
+
+        var dynamicParser = new ElfDynamicSectionParser();
+        var dynamicInfo = dynamicParser.Parse(elfInfo);
+
+        ImmutableArray<string> exportedSymbols = [];
+        ImmutableArray<string> importedSymbols = [];
+
+        if (options.ExtractSymbols)
+        {
+            exportedSymbols = elfInfo.DynamicSymbols
+                .Where(s => s.Binding == SymbolBinding.Global && s.SectionIndex != 0)
+                .Take(options.MaxSymbols)
+                .Select(s => s.Name)
+                .ToImmutableArray();
+
+            importedSymbols = elfInfo.DynamicSymbols
+                .Where(s => s.SectionIndex == 0)
+                .Take(options.MaxSymbols)
+                .Select(s => s.Name)
+                .ToImmutableArray();
+        }
+
+        BinaryHardening? hardening = null;
+        if (options.ExtractHardening)
+        {
+            hardening = new BinaryHardening(
+                HasStackCanary: dynamicInfo.HasStackCanary,
+                HasNx: elfInfo.HasNxBit,
+                HasPie: elfInfo.Type == ElfType.Dyn,
+                HasRelro: dynamicInfo.HasRelro,
+                HasBindNow: dynamicInfo.HasBindNow);
+        }
+
+        return new BinaryFeatures
+        {
+            Identity = identity,
+            DynamicDeps = dynamicInfo.Needed.ToImmutableArray(),
+            ExportedSymbols = exportedSymbols,
+            ImportedSymbols = importedSymbols,
+            Hardening = hardening,
+            Interpreter = dynamicInfo.Interpreter
+        };
+    }
+
+    private static async Task<string> ComputeSha256Async(Stream stream, CancellationToken ct)
+    {
+        using var sha256 = SHA256.Create();
+        var hash = await sha256.ComputeHashAsync(stream, ct);
+        return Convert.ToHexString(hash).ToLowerInvariant();
+    }
+
+    private static string MapArchitecture(ushort machine) => machine switch
+    {
+        0x3E => "x86-64",
+        0xB7 => "aarch64",
+        0x03 => "x86",
+        0x28 => "arm",
+        _ => $"unknown-{machine:X}"
+    };
+
+    private static BinaryType MapBinaryType(ElfType type) => type switch
+    {
+        ElfType.Exec => BinaryType.Executable,
+        ElfType.Dyn => BinaryType.SharedLibrary,
+        ElfType.Rel => BinaryType.Object,
+        _ => BinaryType.Executable
+    };
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Build-ID extraction from ELF notes
+- [ ] File and .text section hashing
+- [ ] Symbol extraction with limits
+- [ ] Hardening flag detection
+- [ ] Reuses Scanner.Analyzers.Native code
+
+---
+
+### T4: Implement IBinaryIdentityService
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T3
+
+**Description**:
+Implement the service that coordinates extraction and storage.
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Services/BinaryIdentityService.cs`
+
+**Interface**:
+```csharp
+public interface IBinaryIdentityService
+{
+    Task<BinaryIdentity> GetOrCreateAsync(Stream binaryStream, CancellationToken ct);
+    Task<BinaryIdentity?> GetByBuildIdAsync(string buildId, CancellationToken ct);
+    Task<BinaryIdentity?> GetByKeyAsync(string binaryKey, CancellationToken ct);
+}
+```
+
+**Implementation**:
+```csharp
+public sealed class BinaryIdentityService : IBinaryIdentityService
+{
+    private readonly IBinaryFeatureExtractor _extractor;
+    private readonly IBinaryIdentityRepository _repository;
+
+    public async Task<BinaryIdentity> GetOrCreateAsync(
+        Stream binaryStream,
+        CancellationToken ct = default)
+    {
+        var identity = await _extractor.ExtractIdentityAsync(binaryStream, ct);
+
+        // Check if already exists
+        var existing = await _repository.GetByKeyAsync(identity.BinaryKey, ct);
+        if (existing != null)
+            return existing;
+
+        // Create new
+        return await _repository.UpsertAsync(identity, ct);
+    }
+
+    public Task<BinaryIdentity?> GetByBuildIdAsync(string buildId, CancellationToken ct) =>
+        _repository.GetByBuildIdAsync(buildId, "gnu-build-id", ct);
+
+    public Task<BinaryIdentity?> GetByKeyAsync(string binaryKey, CancellationToken ct) =>
+        _repository.GetByKeyAsync(binaryKey, ct);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Service coordinates extraction and storage
+- [ ] Deduplication by binary key
+- [ ] Integration with repository
+
+---
+
+### T5: Unit Tests
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Description**:
+Unit tests for domain models and feature extraction.
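+
+One case worth pinning first, since it encodes the identity contract end to end: the same bytes must always yield the same `BinaryKey`, and when no Build-ID note is present the key must fall back to the file hash. A sketch — the fixture path and the logger-taking constructor are placeholders:
+
+```csharp
+public class ElfFeatureExtractorTests
+{
+    [Fact]
+    public async Task ExtractIdentityAsync_SameBytes_YieldsStableIdentity()
+    {
+        // Hypothetical fixture: any small ELF binary checked into the test tree.
+        var bytes = await File.ReadAllBytesAsync("Fixtures/hello-x86_64");
+        var extractor = new ElfFeatureExtractor(NullLogger<ElfFeatureExtractor>.Instance);
+
+        var first = await extractor.ExtractIdentityAsync(new MemoryStream(bytes));
+        var second = await extractor.ExtractIdentityAsync(new MemoryStream(bytes));
+
+        second.BinaryKey.Should().Be(first.BinaryKey);
+        second.FileSha256.Should().Be(first.FileSha256);
+
+        // Stripped binaries carry no Build-ID note: the key is the file hash.
+        if (first.BuildId is null)
+            first.BinaryKey.Should().Be(first.FileSha256);
+    }
+}
+```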
+
+**Test Cases**:
+- ELF Build-ID extraction from real binaries
+- SHA-256 computation determinism
+- Symbol extraction limits
+- Hardening flag detection
+
+**Acceptance Criteria**:
+- [ ] 90%+ code coverage on core models
+- [ ] Real ELF binary test fixtures
+- [ ] All tests pass
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | BinaryIndex Team | Create Core Domain Models |
+| 2 | T2 | TODO | T1 | BinaryIndex Team | Create IBinaryFeatureExtractor Interface |
+| 3 | T3 | TODO | T1, T2 | BinaryIndex Team | Implement ElfFeatureExtractor |
+| 4 | T4 | TODO | T1-T3 | BinaryIndex Team | Implement IBinaryIdentityService |
+| 5 | T5 | TODO | T1-T4 | BinaryIndex Team | Unit Tests |
+
+---
+
+## Success Criteria
+
+- [ ] All 5 tasks marked DONE
+- [ ] ELF Build-ID extraction working
+- [ ] Binary identity deduplication
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds with 90%+ coverage
diff --git a/docs/implplan/SPRINT_6000_0001_0003_debian_corpus_connector.md b/docs/implplan/SPRINT_6000_0001_0003_debian_corpus_connector.md
new file mode 100644
index 000000000..7087a972b
--- /dev/null
+++ b/docs/implplan/SPRINT_6000_0001_0003_debian_corpus_connector.md
@@ -0,0 +1,355 @@
+# Sprint 6000.0001.0003 · Debian Corpus Connector
+
+## Topic & Scope
+
+- Implement the Debian/Ubuntu binary corpus connector.
+- Fetch packages from Debian/Ubuntu repositories.
+- Extract binaries and index them with their identities.
+- Support snapshot-based ingestion for determinism.
+- **Working directory:** `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Corpus.Debian/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprint 6000.0001.0001, 6000.0001.0002
+- **Downstream**: Sprint 6000.0001.0004
+- **Safe to parallelize with**: None
+
+## Documentation Prerequisites
+
+- `docs/modules/binaryindex/architecture.md`
+- Debian repository structure documentation
+
+---
+
+## Tasks
+
+### T1: Create Corpus Connector Framework
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Corpus/`
+
+**Interfaces**:
+```csharp
+namespace StellaOps.BinaryIndex.Corpus;
+
+public interface IBinaryCorpusConnector
+{
+    string ConnectorId { get; }
+    string[] SupportedDistros { get; }
+
+    Task<CorpusSnapshot> FetchSnapshotAsync(CorpusQuery query, CancellationToken ct);
+    IAsyncEnumerable<PackageInfo> ListPackagesAsync(CorpusSnapshot snapshot, CancellationToken ct);
+    IAsyncEnumerable<ExtractedBinary> ExtractBinariesAsync(PackageInfo pkg, CancellationToken ct);
+}
+
+public sealed record CorpusQuery(
+    string Distro,
+    string Release,
+    string Architecture,
+    string[]? ComponentFilter = null);
+
+public sealed record CorpusSnapshot(
+    Guid Id,
+    string Distro,
+    string Release,
+    string Architecture,
+    string MetadataDigest,
+    DateTimeOffset CapturedAt);
+
+public sealed record PackageInfo(
+    string Name,
+    string Version,
+    string SourcePackage,
+    string Architecture,
+    string Filename,
+    long Size,
+    string Sha256);
+
+public sealed record ExtractedBinary(
+    BinaryIdentity Identity,
+    string PathInPackage,
+    PackageInfo Package);
+```
+
+**Acceptance Criteria**:
+- [ ] Generic connector interface defined
+- [ ] Snapshot-based ingestion model
+- [ ] Async enumerable for streaming
+
+---
+
+### T2: Implement Debian Repository Client
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Corpus.Debian/DebianRepoClient.cs`
+
+**Implementation**:
+```csharp
+// NB: the DebianRelease and DebianPackageIndex record shapes are implied by
+// their usage in the connector below.
+public sealed class DebianRepoClient : IDebianRepoClient
+{
+    private readonly HttpClient _httpClient;
+    private readonly ILogger<DebianRepoClient> _logger;
+
+    public async Task<DebianRelease> FetchReleaseAsync(
+        string mirror,
+        string release,
+        CancellationToken ct)
+    {
+        // Fetch and parse Release file
+        var releaseUrl = $"{mirror}/dists/{release}/Release";
+        var content = await _httpClient.GetStringAsync(releaseUrl, ct);
+
+        // Parse Release file format
+        return ParseReleaseFile(content);
+    }
+
+    public async Task<DebianPackageIndex> FetchPackagesAsync(
+        string mirror,
+        string release,
+        string component,
+        string architecture,
+        CancellationToken ct)
+    {
+        // Fetch and decompress Packages.gz
+        var packagesUrl = $"{mirror}/dists/{release}/{component}/binary-{architecture}/Packages.gz";
+        using var response = await _httpClient.GetStreamAsync(packagesUrl, ct);
+        using var gzip = new GZipStream(response, CompressionMode.Decompress);
+        using var reader = new StreamReader(gzip);
+
+        var content = await reader.ReadToEndAsync(ct);
+        return ParsePackagesFile(content);
+    }
+
+    public async Task<Stream> DownloadPackageAsync(
+        string mirror,
+        string filename,
+        CancellationToken ct)
+    {
+        var url = $"{mirror}/{filename}";
+        return await _httpClient.GetStreamAsync(url, ct);
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Release file parsing
+- [ ] Packages file parsing
+- [ ] Package download with verification
+- [ ] GPG signature verification (optional)
+
+---
+
+### T3: Implement Package Extractor
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Corpus.Debian/DebPackageExtractor.cs`
+
+**Implementation**:
+```csharp
+public sealed class DebPackageExtractor : IPackageExtractor
+{
+    public async IAsyncEnumerable<ExtractedFile> ExtractAsync(
+        Stream debStream,
+        [EnumeratorCancellation] CancellationToken ct)
+    {
+        // .deb is an ar archive containing:
+        // - debian-binary
+        // - control.tar.gz
+        // - data.tar.xz (or .gz, .zst)
+
+        using var arReader = new ArReader(debStream);
+
+        // Find and extract data archive
+        var dataEntry = arReader.Entries
+            .FirstOrDefault(e => e.Name.StartsWith("data.tar"));
+
+        if (dataEntry == null)
+            yield break;
+
+        using var dataStream = await DecompressAsync(dataEntry, ct);
+        using var tarReader = new TarReader(dataStream);
+
+        await foreach (var entry in tarReader.ReadEntriesAsync(ct))
+        {
+            if (!IsElfFile(entry))
+                continue;
+
+            yield return new ExtractedFile(
+                Path: entry.Name,
+                Stream: entry.DataStream,
+                Mode: entry.Mode);
+        }
+    }
+
+    private static bool IsElfFile(TarEntry entry)
+    {
+        // Check if file path suggests a binary
+        var path = entry.Name;
+        if (path.StartsWith("./usr/lib/") ||
+            path.StartsWith("./usr/bin/") ||
+            path.StartsWith("./lib/"))
+        {
+            // Check ELF magic
+            if (entry.DataStream.Length >= 4)
+            {
+                Span<byte> magic = stackalloc byte[4];
+                entry.DataStream.ReadExactly(magic);
+                entry.DataStream.Position = 0;
+                return magic.SequenceEqual("\x7FELF"u8);
+            }
+        }
+        return false;
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] .deb archive extraction
+- [ ] ELF file detection
+- [ ] Memory-efficient streaming
+
+---
+
+### T4: Implement DebianCorpusConnector
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T3
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Corpus.Debian/DebianCorpusConnector.cs`
+
+**Implementation**:
+```csharp
+public sealed class DebianCorpusConnector : IBinaryCorpusConnector
+{
+    public string ConnectorId => "debian";
+    public string[] SupportedDistros => ["debian", "ubuntu"];
+
+    private readonly IDebianRepoClient _repoClient;
+    private readonly IPackageExtractor _extractor;
+    private readonly IBinaryFeatureExtractor _featureExtractor;
+    private readonly ICorpusSnapshotRepository _snapshotRepo;
+
+    public async Task<CorpusSnapshot> FetchSnapshotAsync(
+        CorpusQuery query,
+        CancellationToken ct)
+    {
+        var mirror = GetMirrorUrl(query.Distro);
+        var release = await _repoClient.FetchReleaseAsync(mirror, query.Release, ct);
+
+        var snapshot = new CorpusSnapshot(
+            Id: Guid.NewGuid(),
+            Distro: query.Distro,
+            Release: query.Release,
+            Architecture: query.Architecture,
+            MetadataDigest: release.Sha256,
+            CapturedAt: DateTimeOffset.UtcNow);
+
+        await _snapshotRepo.CreateAsync(snapshot, ct);
+        return snapshot;
+    }
+
+    public async IAsyncEnumerable<PackageInfo> ListPackagesAsync(
+        CorpusSnapshot snapshot,
+        [EnumeratorCancellation] CancellationToken ct)
+    {
+        var mirror = GetMirrorUrl(snapshot.Distro);
+
+        foreach (var component in new[] { "main", "contrib" })
+        {
+            var packages = await _repoClient.FetchPackagesAsync(
+                mirror, snapshot.Release, component, snapshot.Architecture, ct);
+
+            foreach (var pkg in packages.Packages)
+            {
+                yield return new PackageInfo(
+                    Name: pkg.Package,
+                    Version: pkg.Version,
+                    SourcePackage: pkg.Source ?? pkg.Package,
+                    Architecture: pkg.Architecture,
+                    Filename: pkg.Filename,
+                    Size: pkg.Size,
+                    Sha256: pkg.Sha256);
+            }
+        }
+    }
+
+    public async IAsyncEnumerable<ExtractedBinary> ExtractBinariesAsync(
+        PackageInfo pkg,
+        [EnumeratorCancellation] CancellationToken ct)
+    {
+        var mirror = GetMirrorUrl("debian"); // Simplified
+        using var debStream = await _repoClient.DownloadPackageAsync(mirror, pkg.Filename, ct);
+
+        await foreach (var file in _extractor.ExtractAsync(debStream, ct))
+        {
+            var identity = await _featureExtractor.ExtractIdentityAsync(file.Stream, ct);
+
+            yield return new ExtractedBinary(
+                Identity: identity,
+                PathInPackage: file.Path,
+                Package: pkg);
+        }
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Snapshot capture from Release file
+- [ ] Package listing from Packages file
+- [ ] Binary extraction and identity creation
+- [ ] Integration with identity service
+
+---
+
+### T5: Integration Tests
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Test Cases**:
+- Fetch real Debian Release file
+- Parse real Packages file
+- Extract binaries from sample .deb
+- End-to-end snapshot and extraction
+
+**Acceptance Criteria**:
+- [ ] Real Debian repository integration test
+- [ ] Sample .deb extraction test
+- [ ] Build-ID extraction from real binaries
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | BinaryIndex Team | Create Corpus Connector Framework |
+| 2 | T2 | TODO | T1 | BinaryIndex Team | Implement Debian Repository Client |
+| 3 | T3 | TODO | T1 | BinaryIndex Team | Implement Package Extractor |
+| 4 | T4 | TODO | T1-T3 | BinaryIndex Team | Implement DebianCorpusConnector |
+| 5 | T5 | TODO | T1-T4 | BinaryIndex Team | Integration Tests |
+
+---
+
+## Success Criteria
+
+- [ ] All 5 tasks marked DONE
+- [ ] Debian package fetching operational
+- [ ] Binary extraction and indexing working
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_6000_0002_0001_fix_evidence_parser.md b/docs/implplan/SPRINT_6000_0002_0001_fix_evidence_parser.md
new file mode 100644
index 000000000..b03eff564
--- /dev/null
+++ b/docs/implplan/SPRINT_6000_0002_0001_fix_evidence_parser.md
@@ -0,0 +1,372 @@
+# Sprint 6000.0002.0001 · Fix Evidence Parser
+
+## Topic & Scope
+
+- Implement parsers for distro-specific CVE fix evidence.
+- Parse Debian/Ubuntu changelogs for CVE mentions.
+- Parse patch headers (DEP-3) for CVE references.
+- Parse Alpine APKBUILD secfixes for CVE mappings.
+- **Working directory:** `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 6000.0001.x (MVP 1 complete) +- **Downstream**: Sprint 6000.0002.0002 (Fix Index Builder) +- **Safe to parallelize with**: Sprint 6000.0002.0003 (Version Comparators) + +## Documentation Prerequisites + +- `docs/modules/binaryindex/architecture.md` +- Advisory: MVP 2 section on patch-aware backport handling +- Debian Policy on changelog format +- DEP-3 patch header specification + +--- + +## Tasks + +### T1: Create Fix Evidence Domain Models + +**Assignee**: BinaryIndex Team +**Story Points**: 2 +**Status**: TODO + +**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/Models/` + +**Models**: +```csharp +namespace StellaOps.BinaryIndex.FixIndex.Models; + +public sealed record FixEvidence +{ + public required string Distro { get; init; } + public required string Release { get; init; } + public required string SourcePkg { get; init; } + public required string CveId { get; init; } + public required FixState State { get; init; } + public string? FixedVersion { get; init; } + public required FixMethod Method { get; init; } + public required decimal Confidence { get; init; } + public required FixEvidencePayload Evidence { get; init; } + public Guid? SnapshotId { get; init; } + public DateTimeOffset CreatedAt { get; init; } +} + +public enum FixState { Fixed, Vulnerable, NotAffected, Wontfix, Unknown } +public enum FixMethod { SecurityFeed, Changelog, PatchHeader, UpstreamPatchMatch } + +public abstract record FixEvidencePayload; + +public sealed record ChangelogEvidence : FixEvidencePayload +{ + public required string File { get; init; } + public required string Version { get; init; } + public required string Excerpt { get; init; } + public int? 
LineNumber { get; init; }
+}
+
+public sealed record PatchHeaderEvidence : FixEvidencePayload
+{
+    public required string PatchPath { get; init; }
+    public required string PatchSha256 { get; init; }
+    public required string HeaderExcerpt { get; init; }
+}
+
+public sealed record SecurityFeedEvidence : FixEvidencePayload
+{
+    public required string FeedId { get; init; }
+    public required string EntryId { get; init; }
+    public required DateTimeOffset PublishedAt { get; init; }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] All evidence types modeled
+- [ ] Confidence levels defined
+- [ ] Evidence payloads for auditability
+
+---
+
+### T2: Implement Debian Changelog Parser
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/Parsers/DebianChangelogParser.cs`

+**Implementation**:
+```csharp
+namespace StellaOps.BinaryIndex.FixIndex.Parsers;
+
+public sealed class DebianChangelogParser : IChangelogParser
+{
+    private static readonly Regex CvePattern = new(@"\bCVE-\d{4}-\d{4,7}\b", RegexOptions.Compiled);
+    private static readonly Regex EntryHeaderPattern = new(@"^(\S+)\s+\(([^)]+)\)\s+", RegexOptions.Compiled);
+    private static readonly Regex TrailerPattern = new(@"^\s+--\s+", RegexOptions.Compiled);
+
+    public IEnumerable<FixEvidence> ParseTopEntry(
+        string changelog,
+        string distro,
+        string release,
+        string sourcePkg)
+    {
+        var lines = changelog.Split('\n');
+        if (lines.Length == 0)
+            yield break;
+
+        // Parse first entry header
+        var headerMatch = EntryHeaderPattern.Match(lines[0]);
+        if (!headerMatch.Success)
+            yield break;
+
+        var version = headerMatch.Groups[2].Value;
+
+        // Collect entry lines until trailer
+        var entryLines = new List<string> { lines[0] };
+        foreach (var line in lines.Skip(1))
+        {
+            entryLines.Add(line);
+            if (TrailerPattern.IsMatch(line))
+                break;
+        }
+
+        var entryText = string.Join('\n', entryLines);
+        var cves = CvePattern.Matches(entryText)
+            .Select(m => m.Value)
+            .Distinct();
+
+        foreach (var cve in cves)
+        {
+            yield return new FixEvidence
+            {
+                Distro = distro,
+                Release = release,
+                SourcePkg = sourcePkg,
+                CveId = cve,
+                State = FixState.Fixed,
+                FixedVersion = version,
+                Method = FixMethod.Changelog,
+                Confidence = 0.80m,
+                Evidence = new ChangelogEvidence
+                {
+                    File = "debian/changelog",
+                    Version = version,
+                    Excerpt = entryText.Length > 2000 ? entryText[..2000] : entryText
+                }
+            };
+        }
+    }
+}
+```
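+
+To make the contract concrete, a minimal sketch of driving the parser against an inline changelog entry (the sample text follows standard Debian changelog conventions; the surrounding setup is illustrative only):
+
+```csharp
+var changelog =
+    "openssl (1.1.1n-0+deb11u5) bullseye-security; urgency=high\n" +
+    "\n" +
+    "  * Fix CVE-2023-0286: X.400 address type confusion.\n" +
+    "\n" +
+    " -- Security Team <team@security.debian.org>  Tue, 07 Feb 2023 00:00:00 +0000\n";
+
+var parser = new DebianChangelogParser();
+foreach (var evidence in parser.ParseTopEntry(changelog, "debian", "bullseye", "openssl"))
+{
+    // Expected: CVE-2023-0286 marked Fixed at 1.1.1n-0+deb11u5 with confidence 0.80
+    Console.WriteLine($"{evidence.CveId} -> {evidence.FixedVersion} ({evidence.Confidence:0.00})");
+}
+```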
+
+**Acceptance Criteria**:
+- [ ] Parse top changelog entry
+- [ ] Extract CVE mentions
+- [ ] Store evidence excerpt
+- [ ] Handle malformed changelogs gracefully
+
+---
+
+### T3: Implement Patch Header Parser
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/Parsers/PatchHeaderParser.cs`
+
+**Implementation**:
+```csharp
+public sealed class PatchHeaderParser : IPatchParser
+{
+    private static readonly Regex CvePattern = new(@"\bCVE-\d{4}-\d{4,7}\b", RegexOptions.Compiled);
+
+    public IEnumerable<FixEvidence> ParsePatches(
+        string patchesDir,
+        IEnumerable<(string path, string content, string sha256)> patches,
+        string distro,
+        string release,
+        string sourcePkg,
+        string version)
+    {
+        foreach (var (path, content, sha256) in patches)
+        {
+            // Read first 80 lines as header
+            var headerLines = content.Split('\n').Take(80);
+            var header = string.Join('\n', headerLines);
+
+            // Also check filename for CVE
+            var searchText = header + "\n" + Path.GetFileName(path);
+            var cves = CvePattern.Matches(searchText)
+                .Select(m => m.Value)
+                .Distinct();
+
+            foreach (var cve in cves)
+            {
+                yield return new FixEvidence
+                {
+                    Distro = distro,
+                    Release = release,
+                    SourcePkg = sourcePkg,
+                    CveId = cve,
+                    State = FixState.Fixed,
+                    FixedVersion = version,
+                    Method = FixMethod.PatchHeader,
+                    Confidence = 0.87m,
+                    Evidence = new PatchHeaderEvidence
+                    {
+                        PatchPath = path,
+                        PatchSha256 = sha256,
+                        HeaderExcerpt = header.Length > 1200 ? header[..1200] : header
+                    }
+                };
+            }
+        }
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Parse patch headers for CVE mentions
+- [ ] Check patch filenames
+- [ ] Store patch digests for verification
+- [ ] Support DEP-3 format
+
+---
+
+### T4: Implement Alpine Secfixes Parser
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.FixIndex/Parsers/AlpineSecfixesParser.cs`
+
+**Implementation**:
+```csharp
+public sealed class AlpineSecfixesParser : ISecfixesParser
+{
+    // APKBUILD secfixes format:
+    //   # secfixes:
+    //   #   1.2.3-r0:
+    //   #     - CVE-2024-1234
+    //   #     - CVE-2024-1235
+    private static readonly Regex SecfixesPattern = new(
+        @"^#\s*secfixes:\s*$", RegexOptions.Compiled | RegexOptions.Multiline);
+    private static readonly Regex VersionPattern = new(
+        @"^#\s+(\d+\.\d+[^:]*):$", RegexOptions.Compiled);
+    private static readonly Regex CvePattern = new(
+        @"^#\s+-\s+(CVE-\d{4}-\d{4,7})$", RegexOptions.Compiled);
+
+    public IEnumerable<FixEvidence> Parse(
+        string apkbuild,
+        string distro,
+        string release,
+        string sourcePkg)
+    {
+        var lines = apkbuild.Split('\n');
+        var inSecfixes = false;
+        string? 
currentVersion = null; + + foreach (var line in lines) + { + if (SecfixesPattern.IsMatch(line)) + { + inSecfixes = true; + continue; + } + + if (!inSecfixes) + continue; + + // Exit secfixes block on non-comment line + if (!line.TrimStart().StartsWith('#')) + { + inSecfixes = false; + continue; + } + + var versionMatch = VersionPattern.Match(line); + if (versionMatch.Success) + { + currentVersion = versionMatch.Groups[1].Value; + continue; + } + + var cveMatch = CvePattern.Match(line); + if (cveMatch.Success && currentVersion != null) + { + yield return new FixEvidence + { + Distro = distro, + Release = release, + SourcePkg = sourcePkg, + CveId = cveMatch.Groups[1].Value, + State = FixState.Fixed, + FixedVersion = currentVersion, + Method = FixMethod.SecurityFeed, // APKBUILD is authoritative + Confidence = 0.95m, + Evidence = new SecurityFeedEvidence + { + FeedId = "alpine-secfixes", + EntryId = $"{sourcePkg}/{currentVersion}", + PublishedAt = DateTimeOffset.UtcNow + } + }; + } + } + } +} +``` + +**Acceptance Criteria**: +- [ ] Parse APKBUILD secfixes section +- [ ] Extract version-to-CVE mappings +- [ ] High confidence for authoritative source + +--- + +### T5: Unit Tests with Real Changelogs + +**Assignee**: BinaryIndex Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1-T4 + +**Test Fixtures**: +- Real Debian openssl changelog +- Real Ubuntu libssl changelog +- Sample patches with CVE headers +- Real Alpine openssl APKBUILD + +**Acceptance Criteria**: +- [ ] Test fixtures from real packages +- [ ] CVE extraction accuracy tests +- [ ] Confidence scoring validation + +--- + +## Delivery Tracker + +| # | Task ID | Status | Dependency | Owners | Task Definition | +|---|---------|--------|------------|--------|-----------------| +| 1 | T1 | TODO | — | BinaryIndex Team | Create Fix Evidence Domain Models | +| 2 | T2 | TODO | T1 | BinaryIndex Team | Implement Debian Changelog Parser | +| 3 | T3 | TODO | T1 | BinaryIndex Team | Implement Patch Header Parser | +| 4 | T4 | TODO | T1 | BinaryIndex Team | Implement Alpine Secfixes Parser | +| 5 | T5 | TODO | T1-T4 | BinaryIndex Team | Unit Tests with Real Changelogs | + +--- + +## Success Criteria + +- [ ] All 5 tasks marked DONE +- [ ] Changelog CVE extraction working +- [ ] Patch header parsing working +- [ ] 95%+ accuracy on test fixtures +- [ ] `dotnet build` succeeds +- [ ] `dotnet test` succeeds diff --git a/docs/implplan/SPRINT_6000_0003_0001_fingerprint_storage.md b/docs/implplan/SPRINT_6000_0003_0001_fingerprint_storage.md new file mode 100644 index 000000000..239b9a250 --- /dev/null +++ b/docs/implplan/SPRINT_6000_0003_0001_fingerprint_storage.md @@ -0,0 +1,395 @@ +# Sprint 6000.0003.0001 · Fingerprint Storage + +## Topic & Scope + +- Implement database and blob storage for vulnerable function fingerprints. +- Create tables for fingerprint storage, corpus metadata, and validation results. +- Implement RustFS storage for fingerprint blobs and reference builds. 
+- **Working directory:** `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Fingerprints/` + +## Dependencies & Concurrency + +- **Upstream**: Sprint 6000.0001.x (MVP 1), 6000.0002.x (MVP 2) +- **Downstream**: Sprint 6000.0003.0002-0005 +- **Safe to parallelize with**: Sprint 6000.0003.0002 (Reference Build Pipeline) + +## Documentation Prerequisites + +- `docs/modules/binaryindex/architecture.md` +- `docs/db/schemas/binaries_schema_specification.md` (fingerprint tables) +- Existing fingerprinting: `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Binary/` + +--- + +## Tasks + +### T1: Create Fingerprint Schema Migration + +**Assignee**: BinaryIndex Team +**Story Points**: 3 +**Status**: TODO + +**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/Migrations/002_create_fingerprint_tables.sql` + +**Migration**: +```sql +-- 002_create_fingerprint_tables.sql +-- Adds fingerprint-related tables for MVP 3 + +BEGIN; + +-- Fix index tables (from MVP 2, if not already created) +CREATE TABLE IF NOT EXISTS binaries.cve_fix_evidence ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + distro TEXT NOT NULL, + release TEXT NOT NULL, + source_pkg TEXT NOT NULL, + cve_id TEXT NOT NULL, + state TEXT NOT NULL CHECK (state IN ('fixed', 'vulnerable', 'not_affected', 'wontfix', 'unknown')), + fixed_version TEXT, + method TEXT NOT NULL CHECK (method IN ('security_feed', 'changelog', 'patch_header', 'upstream_patch_match')), + confidence NUMERIC(3,2) NOT NULL CHECK (confidence >= 0 AND confidence <= 1), + evidence JSONB NOT NULL, + snapshot_id UUID REFERENCES binaries.corpus_snapshots(id), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +CREATE TABLE IF NOT EXISTS binaries.cve_fix_index ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + distro TEXT NOT NULL, + release TEXT NOT NULL, + source_pkg TEXT NOT NULL, + cve_id TEXT NOT NULL, + architecture TEXT, + state TEXT NOT NULL CHECK (state IN ('fixed', 'vulnerable', 'not_affected', 'wontfix', 'unknown')), + fixed_version TEXT, + primary_method TEXT NOT NULL, + confidence NUMERIC(3,2) NOT NULL, + evidence_ids UUID[], + computed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + CONSTRAINT cve_fix_index_unique UNIQUE (tenant_id, distro, release, source_pkg, cve_id, architecture) +); + +-- Fingerprint tables +CREATE TABLE IF NOT EXISTS binaries.vulnerable_fingerprints ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + cve_id TEXT NOT NULL, + component TEXT NOT NULL, + purl TEXT, + algorithm TEXT NOT NULL CHECK (algorithm IN ('basic_block', 'control_flow_graph', 'string_refs', 'combined')), + fingerprint_id TEXT NOT NULL, + fingerprint_hash BYTEA NOT NULL, + architecture TEXT NOT NULL, + function_name TEXT, + source_file TEXT, + source_line INT, + similarity_threshold NUMERIC(3,2) DEFAULT 0.95, + confidence NUMERIC(3,2) CHECK (confidence >= 0 AND confidence <= 1), + validated BOOLEAN DEFAULT FALSE, + validation_stats JSONB DEFAULT '{}', + vuln_build_ref TEXT, + fixed_build_ref TEXT, + notes TEXT, + evidence_ref TEXT, + indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + CONSTRAINT vulnerable_fingerprints_unique UNIQUE (tenant_id, cve_id, algorithm, fingerprint_id, architecture) +); + +CREATE TABLE IF NOT EXISTS binaries.fingerprint_corpus_metadata ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + 
tenant_id UUID NOT NULL, + purl TEXT NOT NULL, + version TEXT NOT NULL, + algorithm TEXT NOT NULL, + binary_digest TEXT, + function_count INT NOT NULL DEFAULT 0, + fingerprints_indexed INT NOT NULL DEFAULT 0, + indexed_by TEXT, + indexed_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + CONSTRAINT fingerprint_corpus_metadata_unique UNIQUE (tenant_id, purl, version, algorithm) +); + +CREATE TABLE IF NOT EXISTS binaries.fingerprint_matches ( + id UUID PRIMARY KEY DEFAULT gen_random_uuid(), + tenant_id UUID NOT NULL, + scan_id UUID NOT NULL, + match_type TEXT NOT NULL CHECK (match_type IN ('fingerprint', 'buildid', 'hash_exact')), + binary_key TEXT NOT NULL, + binary_identity_id UUID REFERENCES binaries.binary_identity(id), + vulnerable_purl TEXT NOT NULL, + vulnerable_version TEXT NOT NULL, + matched_fingerprint_id UUID REFERENCES binaries.vulnerable_fingerprints(id), + matched_function TEXT, + similarity NUMERIC(3,2), + advisory_ids TEXT[], + reachability_status TEXT CHECK (reachability_status IN ('reachable', 'unreachable', 'unknown', 'partial')), + evidence JSONB DEFAULT '{}', + matched_at TIMESTAMPTZ NOT NULL DEFAULT NOW(), + created_at TIMESTAMPTZ NOT NULL DEFAULT NOW() +); + +-- Indexes +CREATE INDEX IF NOT EXISTS idx_cve_fix_evidence_tenant ON binaries.cve_fix_evidence(tenant_id); +CREATE INDEX IF NOT EXISTS idx_cve_fix_evidence_key ON binaries.cve_fix_evidence(distro, release, source_pkg, cve_id); + +CREATE INDEX IF NOT EXISTS idx_cve_fix_index_tenant ON binaries.cve_fix_index(tenant_id); +CREATE INDEX IF NOT EXISTS idx_cve_fix_index_lookup ON binaries.cve_fix_index(distro, release, source_pkg, cve_id); + +CREATE INDEX IF NOT EXISTS idx_vulnerable_fingerprints_tenant ON binaries.vulnerable_fingerprints(tenant_id); +CREATE INDEX IF NOT EXISTS idx_vulnerable_fingerprints_cve ON binaries.vulnerable_fingerprints(cve_id); +CREATE INDEX IF NOT EXISTS idx_vulnerable_fingerprints_component ON binaries.vulnerable_fingerprints(component, architecture); +CREATE INDEX IF NOT EXISTS idx_vulnerable_fingerprints_hash ON binaries.vulnerable_fingerprints USING hash (fingerprint_hash); + +CREATE INDEX IF NOT EXISTS idx_fingerprint_corpus_tenant ON binaries.fingerprint_corpus_metadata(tenant_id); +CREATE INDEX IF NOT EXISTS idx_fingerprint_corpus_purl ON binaries.fingerprint_corpus_metadata(purl, version); + +CREATE INDEX IF NOT EXISTS idx_fingerprint_matches_tenant ON binaries.fingerprint_matches(tenant_id); +CREATE INDEX IF NOT EXISTS idx_fingerprint_matches_scan ON binaries.fingerprint_matches(scan_id); + +-- RLS +ALTER TABLE binaries.cve_fix_evidence ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.cve_fix_evidence FORCE ROW LEVEL SECURITY; +CREATE POLICY cve_fix_evidence_tenant_isolation ON binaries.cve_fix_evidence + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.cve_fix_index ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.cve_fix_index FORCE ROW LEVEL SECURITY; +CREATE POLICY cve_fix_index_tenant_isolation ON binaries.cve_fix_index + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.vulnerable_fingerprints ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.vulnerable_fingerprints FORCE ROW LEVEL SECURITY; +CREATE POLICY vulnerable_fingerprints_tenant_isolation ON binaries.vulnerable_fingerprints + FOR ALL USING 
(tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.fingerprint_corpus_metadata ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.fingerprint_corpus_metadata FORCE ROW LEVEL SECURITY; +CREATE POLICY fingerprint_corpus_metadata_tenant_isolation ON binaries.fingerprint_corpus_metadata + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +ALTER TABLE binaries.fingerprint_matches ENABLE ROW LEVEL SECURITY; +ALTER TABLE binaries.fingerprint_matches FORCE ROW LEVEL SECURITY; +CREATE POLICY fingerprint_matches_tenant_isolation ON binaries.fingerprint_matches + FOR ALL USING (tenant_id::text = binaries_app.require_current_tenant()) + WITH CHECK (tenant_id::text = binaries_app.require_current_tenant()); + +COMMIT; +``` + +**Acceptance Criteria**: +- [ ] All fingerprint tables created +- [ ] Hash index on fingerprint_hash +- [ ] RLS policies enforced + +--- + +### T2: Create Fingerprint Domain Models + +**Assignee**: BinaryIndex Team +**Story Points**: 3 +**Status**: TODO +**Dependencies**: T1 + +**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Fingerprints/Models/` + +**Models**: +```csharp +namespace StellaOps.BinaryIndex.Fingerprints.Models; + +public sealed record VulnFingerprint +{ + public Guid Id { get; init; } + public required string CveId { get; init; } + public required string Component { get; init; } + public string? Purl { get; init; } + public required FingerprintAlgorithm Algorithm { get; init; } + public required string FingerprintId { get; init; } + public required byte[] FingerprintHash { get; init; } + public required string Architecture { get; init; } + public string? FunctionName { get; init; } + public string? SourceFile { get; init; } + public int? SourceLine { get; init; } + public decimal SimilarityThreshold { get; init; } = 0.95m; + public decimal? Confidence { get; init; } + public bool Validated { get; init; } + public FingerprintValidationStats? ValidationStats { get; init; } + public string? VulnBuildRef { get; init; } + public string? FixedBuildRef { get; init; } + public DateTimeOffset IndexedAt { get; init; } +} + +public enum FingerprintAlgorithm +{ + BasicBlock, + ControlFlowGraph, + StringRefs, + Combined +} + +public sealed record FingerprintValidationStats +{ + public int TruePositives { get; init; } + public int FalsePositives { get; init; } + public int TrueNegatives { get; init; } + public int FalseNegatives { get; init; } + public decimal Precision => TruePositives + FalsePositives == 0 ? 0 : + (decimal)TruePositives / (TruePositives + FalsePositives); + public decimal Recall => TruePositives + FalseNegatives == 0 ? 0 : + (decimal)TruePositives / (TruePositives + FalseNegatives); +} + +public sealed record FingerprintMatch +{ + public Guid Id { get; init; } + public Guid ScanId { get; init; } + public required MatchType Type { get; init; } + public required string BinaryKey { get; init; } + public required string VulnerablePurl { get; init; } + public required string VulnerableVersion { get; init; } + public Guid? MatchedFingerprintId { get; init; } + public string? MatchedFunction { get; init; } + public decimal? Similarity { get; init; } + public string[]? AdvisoryIds { get; init; } + public ReachabilityStatus? 
ReachabilityStatus { get; init; }
+    public DateTimeOffset MatchedAt { get; init; }
+}
+
+public enum MatchType { Fingerprint, BuildId, HashExact }
+public enum ReachabilityStatus { Reachable, Unreachable, Unknown, Partial }
+```
+
+**Acceptance Criteria**:
+- [ ] All fingerprint models defined
+- [ ] Validation stats with precision/recall
+- [ ] Match types enumerated
+
+---
+
+### T3: Implement Fingerprint Repository
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Persistence/Repositories/FingerprintRepository.cs`
+
+**Interface**:
+```csharp
+public interface IFingerprintRepository
+{
+    Task CreateAsync(VulnFingerprint fingerprint, CancellationToken ct);
+    Task<VulnFingerprint?> GetByIdAsync(Guid id, CancellationToken ct);
+    Task<IReadOnlyList<VulnFingerprint>> GetByCveAsync(string cveId, CancellationToken ct);
+    Task<IReadOnlyList<VulnFingerprint>> SearchByHashAsync(
+        byte[] hash, FingerprintAlgorithm algorithm, string architecture, CancellationToken ct);
+    Task UpdateValidationStatsAsync(Guid id, FingerprintValidationStats stats, CancellationToken ct);
+}
+
+public interface IFingerprintMatchRepository
+{
+    Task CreateAsync(FingerprintMatch match, CancellationToken ct);
+    Task<IReadOnlyList<FingerprintMatch>> GetByScanAsync(Guid scanId, CancellationToken ct);
+    Task UpdateReachabilityAsync(Guid id, ReachabilityStatus status, CancellationToken ct);
+}
+```
+
+**Acceptance Criteria**:
+- [ ] CRUD operations for fingerprints
+- [ ] Hash-based search
+- [ ] Match recording
+
+---
+
+### T4: Implement RustFS Fingerprint Storage
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T2
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Fingerprints/Storage/FingerprintBlobStorage.cs`
+
+**Implementation**:
+```csharp
+public sealed class FingerprintBlobStorage : IFingerprintBlobStorage
+{
+    private readonly IRustFsClient _rustFs;
+    private const string BasePath = "binaryindex/fingerprints";
+
+    public async Task<string> StoreFingerprintAsync(
+        VulnFingerprint fingerprint,
+        byte[] fullData,
+        CancellationToken ct)
+    {
+        var prefix = fingerprint.FingerprintId[..2];
+        var path = $"{BasePath}/{fingerprint.Algorithm}/{prefix}/{fingerprint.FingerprintId}.bin";
+
+        await _rustFs.PutAsync(path, fullData, ct);
+        return path;
+    }
+
+    public async Task<string> StoreReferenceBuildAsync(
+        string cveId,
+        string buildType, // "vulnerable" or "fixed"
+        byte[] buildArtifact,
+        CancellationToken ct)
+    {
+        var path = $"{BasePath}/refbuilds/{cveId}/{buildType}.tar.zst";
+        await _rustFs.PutAsync(path, buildArtifact, ct);
+        return path;
+    }
+}
+```
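+
+As a usage sketch, storing one fingerprint and noting the sharded path it lands on; the fingerprint values are invented for illustration, and `storage`, `fullData`, and `ct` are assumed to be in scope:
+
+```csharp
+var fingerprint = new VulnFingerprint
+{
+    CveId = "CVE-2024-1234",
+    Component = "openssl",
+    Algorithm = FingerprintAlgorithm.BasicBlock,
+    FingerprintId = "ab12cd34ef56",
+    FingerprintHash = Convert.FromHexString("ab12cd34ef56ab12cd34ef56ab12cd34"),
+    Architecture = "x86-64"
+};
+
+// Lands on binaryindex/fingerprints/BasicBlock/ab/ab12cd34ef56.bin
+var blobPath = await storage.StoreFingerprintAsync(fingerprint, fullData, ct);
+```
+
+Sharding on the first two characters of the fingerprint ID keeps any single directory prefix from growing unboundedly as the corpus scales.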
+
+**Acceptance Criteria**:
+- [ ] Fingerprint blob storage
+- [ ] Reference build storage
+- [ ] Shard-by-prefix layout
+
+---
+
+### T5: Integration Tests
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Acceptance Criteria**:
+- [ ] Fingerprint CRUD tests
+- [ ] Hash search tests
+- [ ] Blob storage integration tests
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | BinaryIndex Team | Create Fingerprint Schema Migration |
+| 2 | T2 | TODO | T1 | BinaryIndex Team | Create Fingerprint Domain Models |
+| 3 | T3 | TODO | T1, T2 | BinaryIndex Team | Implement Fingerprint Repository |
+| 4 | T4 | TODO | T2 | BinaryIndex Team | Implement RustFS Fingerprint Storage |
+| 5 | T5 | TODO | T1-T4 | BinaryIndex Team | Integration Tests |
+
+---
+
+## Success Criteria
+
+- [ ] All 5 tasks marked DONE
+- [ ] Fingerprint tables deployed
+- [ ] RustFS storage operational
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_6000_0004_0001_scanner_integration.md b/docs/implplan/SPRINT_6000_0004_0001_scanner_integration.md
new file mode 100644
index 000000000..1d92ee638
--- /dev/null
+++ b/docs/implplan/SPRINT_6000_0004_0001_scanner_integration.md
@@ -0,0 +1,530 @@
+# Sprint 6000.0004.0001 · Scanner Worker Integration
+
+## Topic & Scope
+
+- Integrate BinaryIndex into Scanner.Worker for binary vulnerability lookup during scans.
+- Query binaries during layer extraction for Build-ID and fingerprint matches.
+- Wire results into the existing findings pipeline.
+- **Working directory:** `src/Scanner/StellaOps.Scanner.Worker/`
+
+## Dependencies & Concurrency
+
+- **Upstream**: Sprints 6000.0001.x, 6000.0002.x, 6000.0003.x (MVPs 1-3)
+- **Downstream**: Sprint 6000.0004.0002-0004
+- **Safe to parallelize with**: None
+
+## Documentation Prerequisites
+
+- `docs/modules/binaryindex/architecture.md`
+- `docs/modules/scanner/architecture.md`
+- Existing Scanner.Worker pipeline
+
+---
+
+## Tasks
+
+### T1: Create IBinaryVulnerabilityService Interface
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Services/IBinaryVulnerabilityService.cs`
+
+**Interface**:
+```csharp
+namespace StellaOps.BinaryIndex.Core.Services;
+
+/// <summary>
+/// Main query interface for binary vulnerability lookup.
+/// Consumed by Scanner.Worker during container scanning.
+/// </summary>
+public interface IBinaryVulnerabilityService
+{
+    /// <summary>
+    /// Look up vulnerabilities by binary identity (Build-ID, hashes).
+    /// </summary>
+    Task<IReadOnlyList<BinaryVulnMatch>> LookupByIdentityAsync(
+        BinaryIdentity identity,
+        LookupOptions? options = null,
+        CancellationToken ct = default);
+
+    /// <summary>
+    /// Look up vulnerabilities by function fingerprint.
+    /// </summary>
+    Task<IReadOnlyList<BinaryVulnMatch>> LookupByFingerprintAsync(
+        CodeFingerprint fingerprint,
+        decimal minSimilarity = 0.95m,
+        CancellationToken ct = default);
+
+    /// <summary>
+    /// Batch lookup for scan performance.
+    /// </summary>
+    Task<IReadOnlyDictionary<string, IReadOnlyList<BinaryVulnMatch>>> LookupBatchAsync(
+        IEnumerable<BinaryIdentity> identities,
+        LookupOptions? options = null,
+        CancellationToken ct = default);
+
+    /// <summary>
+    /// Get distro-specific fix status (patch-aware).
+    /// </summary>
+    Task<FixRecord?> GetFixStatusAsync(
+        string distro,
+        string release,
+        string sourcePkg,
+        string cveId,
+        CancellationToken ct = default);
+}
+
+public sealed record LookupOptions
+{
+    public bool IncludeFingerprints { get; init; } = true;
+    public bool CheckFixIndex { get; init; } = true;
+    public string? DistroHint { get; init; }
+    public string? ReleaseHint { get; init; }
+}
+
+public sealed record BinaryVulnMatch
+{
+    public required string CveId { get; init; }
+    public required string VulnerablePurl { get; init; }
+    public required MatchMethod Method { get; init; }
+    public required decimal Confidence { get; init; }
+    public MatchEvidence? Evidence { get; init; }
+    public FixRecord? FixStatus { get; init; }
+}
+
+public enum MatchMethod { BuildIdCatalog, FingerprintMatch, RangeMatch }
+
+public sealed record MatchEvidence
+{
+    public string? BuildId { get; init; }
+    public string? FingerprintId { get; init; }
+    public decimal? Similarity { get; init; }
+    public string? MatchedFunction { get; init; }
+}
+
+public sealed record FixRecord
+{
+    public required string Distro { get; init; }
+    public required string Release { get; init; }
+    public required string SourcePkg { get; init; }
+    public required string CveId { get; init; }
+    public required FixState State { get; init; }
+    public string? FixedVersion { get; init; }
+    public required FixMethod Method { get; init; }
+    public required decimal Confidence { get; init; }
+}
+```
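+
+A minimal caller-side sketch, assuming a resolved service instance and a `BinaryIdentity` extracted earlier in the scan (`binaryVulnService`, `identity`, and `ct` are assumed to be in scope):
+
+```csharp
+var options = new LookupOptions
+{
+    DistroHint = "debian",
+    ReleaseHint = "bookworm"
+};
+
+var matches = await binaryVulnService.LookupByIdentityAsync(identity, options, ct);
+foreach (var match in matches)
+{
+    // Matches flagged Fixed by the fix index stay in the result; the caller decides how to rank them.
+    var fixState = match.FixStatus?.State.ToString() ?? "unresolved";
+    Console.WriteLine($"{match.CveId} via {match.Method} (confidence {match.Confidence:0.00}, fix: {fixState})");
+}
+```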
+
+**Acceptance Criteria**:
+- [ ] Interface defined with all lookup methods
+- [ ] Options for controlling lookup scope
+- [ ] Match evidence structure
+
+---
+
+### T2: Implement BinaryVulnerabilityService
+
+**Assignee**: BinaryIndex Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1
+
+**Implementation Path**: `src/BinaryIndex/__Libraries/StellaOps.BinaryIndex.Core/Services/BinaryVulnerabilityService.cs`
+
+**Implementation**:
+```csharp
+public sealed class BinaryVulnerabilityService : IBinaryVulnerabilityService
+{
+    private readonly IBinaryVulnAssertionRepository _assertionRepo;
+    private readonly IVulnerableBuildIdRepository _buildIdRepo;
+    private readonly IFingerprintRepository _fingerprintRepo;
+    private readonly ICveFixIndexRepository _fixIndexRepo;
+    private readonly ILogger<BinaryVulnerabilityService> _logger;
+
+    public async Task<IReadOnlyList<BinaryVulnMatch>> LookupByIdentityAsync(
+        BinaryIdentity identity,
+        LookupOptions? options = null,
+        CancellationToken ct = default)
+    {
+        options ??= new LookupOptions();
+        var matches = new List<BinaryVulnMatch>();
+
+        // Tier 1: Check explicit assertions
+        var assertions = await _assertionRepo.GetByBinaryKeyAsync(identity.BinaryKey, ct);
+        foreach (var assertion in assertions.Where(a => a.Status == "affected"))
+        {
+            matches.Add(new BinaryVulnMatch
+            {
+                CveId = assertion.CveId,
+                VulnerablePurl = "unknown", // Resolve from advisory
+                Method = MapMethod(assertion.Method),
+                Confidence = assertion.Confidence ?? 0.9m,
+                Evidence = new MatchEvidence { BuildId = identity.BuildId }
+            });
+        }
+
+        // Tier 2: Check Build-ID catalog
+        if (identity.BuildId != null)
+        {
+            var buildIdMatches = await _buildIdRepo.GetByBuildIdAsync(
+                identity.BuildId, identity.BuildIdType ?? "gnu-build-id", ct);
+
+            foreach (var bid in buildIdMatches)
+            {
+                // Check if we already have this CVE from assertions
+                // Look up advisories for this PURL
+                // Add matches...
+            }
+        }
+
+        // Tier 3: Apply fix index adjustments
+        if (options.CheckFixIndex && options.DistroHint != null)
+        {
+            foreach (var match in matches.ToList())
+            {
+                var fixRecord = await GetFixStatusFromMatchAsync(match, options, ct);
+                if (fixRecord?.State == FixState.Fixed)
+                {
+                    // Mark as fixed, don't remove from matches
+                    // Let caller decide based on fix status
+                }
+            }
+        }
+
+        return matches.ToImmutableArray();
+    }
+
+    public async Task<IReadOnlyDictionary<string, IReadOnlyList<BinaryVulnMatch>>> LookupBatchAsync(
+        IEnumerable<BinaryIdentity> identities,
+        LookupOptions? options = null,
+        CancellationToken ct = default)
+    {
+        var results = new Dictionary<string, IReadOnlyList<BinaryVulnMatch>>();
+
+        // Batch fetch for performance
+        var keys = identities.Select(i => i.BinaryKey).ToArray();
+        var allAssertions = await _assertionRepo.GetBatchByKeysAsync(keys, ct);
+
+        foreach (var identity in identities)
+        {
+            var matches = await LookupByIdentityAsync(identity, options, ct);
+            results[identity.BinaryKey] = matches;
+        }
+
+        return results.ToImmutableDictionary();
+    }
+
+    public async Task<FixRecord?> GetFixStatusAsync(
+        string distro,
+        string release,
+        string sourcePkg,
+        string cveId,
+        CancellationToken ct = default)
+    {
+        var fixIndex = await _fixIndexRepo.GetAsync(distro, release, sourcePkg, cveId, ct);
+        if (fixIndex == null)
+            return null;
+
+        return new FixRecord
+        {
+            Distro = fixIndex.Distro,
+            Release = fixIndex.Release,
+            SourcePkg = fixIndex.SourcePkg,
+            CveId = fixIndex.CveId,
+            State = Enum.Parse<FixState>(fixIndex.State, true),
+            FixedVersion = fixIndex.FixedVersion,
+            Method = Enum.Parse<FixMethod>(fixIndex.PrimaryMethod, true),
+            Confidence = fixIndex.Confidence
+        };
+    }
+
+    // ... additional helper methods
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Build-ID lookup working
+- [ ] Fix index integration
+- [ ] Batch lookup for performance
+- [ ] Proper tiering (assertions → Build-ID → fingerprints)
+
+---
+
+### T3: Create Scanner.Worker Integration Point
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1, T2
+
+**Implementation Path**: `src/Scanner/StellaOps.Scanner.Worker/Analyzers/BinaryVulnerabilityAnalyzer.cs`
+
+**Implementation**:
+```csharp
+namespace StellaOps.Scanner.Worker.Analyzers;
+
+/// <summary>
+/// Analyzer that queries BinaryIndex for vulnerable binaries during scan.
+/// </summary>
+public sealed class BinaryVulnerabilityAnalyzer : ILayerAnalyzer
+{
+    private readonly IBinaryVulnerabilityService _binaryVulnService;
+    private readonly IBinaryFeatureExtractor _featureExtractor;
+    private readonly ILogger<BinaryVulnerabilityAnalyzer> _logger;
+
+    public string AnalyzerId => "binary-vulnerability";
+    public int Priority => 100; // Run after package analyzers
+
+    public async Task<LayerAnalysisResult> AnalyzeAsync(
+        LayerContext context,
+        CancellationToken ct)
+    {
+        var findings = new List<BinaryVulnerabilityFinding>();
+        var identities = new List<BinaryIdentity>();
+
+        // Extract identities from all binaries in layer
+        await foreach (var file in context.EnumerateFilesAsync(ct))
+        {
+            if (!IsBinaryFile(file))
+                continue;
+
+            try
+            {
+                using var stream = await file.OpenReadAsync(ct);
+                var identity = await _featureExtractor.ExtractIdentityAsync(stream, ct);
+                identities.Add(identity);
+            }
+            catch (Exception ex)
+            {
+                _logger.LogDebug(ex, "Failed to extract identity from {Path}", file.Path);
+            }
+        }
+
+        if (identities.Count == 0)
+            return LayerAnalysisResult.Empty;
+
+        // Batch lookup
+        var options = new LookupOptions
+        {
+            DistroHint = context.DetectedDistro,
+            ReleaseHint = context.DetectedRelease,
+            CheckFixIndex = true
+        };
+
+        var matches = await _binaryVulnService.LookupBatchAsync(identities, options, ct);
+
+        foreach (var (binaryKey, vulnMatches) in matches)
+        {
+            foreach (var match in vulnMatches)
+            {
+                findings.Add(new BinaryVulnerabilityFinding
+                {
+                    ScanId = context.ScanId,
+                    LayerDigest = context.LayerDigest,
+                    BinaryKey = binaryKey,
+                    CveId = match.CveId,
+                    VulnerablePurl = match.VulnerablePurl,
+                    MatchMethod = match.Method.ToString(),
+                    Confidence = match.Confidence,
+                    FixStatus = match.FixStatus,
+                    Evidence = match.Evidence
+                });
+            }
+        }
+
+        return new LayerAnalysisResult
+        {
+            AnalyzerId = AnalyzerId,
+            BinaryFindings = findings.ToImmutableArray()
+        };
+    }
+
+    private static bool IsBinaryFile(LayerFile file)
+    {
+        // Check path patterns
+        var path = file.Path;
+        return path.StartsWith("/usr/lib/") ||
+               path.StartsWith("/lib/") ||
+               path.StartsWith("/usr/bin/") ||
+               path.StartsWith("/bin/") ||
+               path.EndsWith(".so") ||
+               path.Contains(".so.");
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Analyzer integrates with layer analysis pipeline
+- [ ] Binary detection heuristics
+- [ ] Batch lookup for performance
+- [ ] Distro detection passed to lookup
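+
+The path heuristic above is cheap but misses binaries shipped outside the standard directories (for example under `/opt`); a content-based fallback is the natural complement. A sketch of such a check, mirroring the ELF magic test used by the corpus package extractor (the helper name is hypothetical):
+
+```csharp
+private static bool HasElfMagic(Stream stream)
+{
+    if (stream.Length < 4)
+        return false;
+    // First four bytes of every ELF file are \x7F 'E' 'L' 'F'.
+    Span<byte> magic = stackalloc byte[4];
+    stream.ReadExactly(magic);
+    stream.Position = 0;
+    return magic.SequenceEqual("\x7FELF"u8);
+}
+```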
+
+---
+
+### T4: Wire Findings to Existing Pipeline
+
+**Assignee**: Scanner Team
+**Story Points**: 3
+**Status**: TODO
+**Dependencies**: T3
+
+**Implementation Path**: `src/Scanner/StellaOps.Scanner.Worker/Findings/BinaryVulnerabilityFinding.cs`
+
+**Finding Model**:
+```csharp
+namespace StellaOps.Scanner.Worker.Findings;
+
+public sealed record BinaryVulnerabilityFinding : IFinding
+{
+    public Guid ScanId { get; init; }
+    public required string LayerDigest { get; init; }
+    public required string BinaryKey { get; init; }
+    public required string CveId { get; init; }
+    public required string VulnerablePurl { get; init; }
+    public required string MatchMethod { get; init; }
+    public required decimal Confidence { get; init; }
+    public FixRecord? FixStatus { get; init; }
+    public MatchEvidence? Evidence { get; init; }
+
+    public string FindingType => "binary-vulnerability";
+
+    public string GetSummary() =>
+        $"{CveId} in {VulnerablePurl} (via {MatchMethod}, confidence {Confidence:P0})";
+}
+```
+
+**Integration with Findings Ledger**:
+```csharp
+// In ScanResultAggregator
+public async Task AggregateFindingsAsync(ScanContext context, CancellationToken ct)
+{
+    foreach (var layer in context.Layers)
+    {
+        var result = layer.AnalysisResult;
+
+        // Process binary findings
+        foreach (var binaryFinding in result.BinaryFindings)
+        {
+            await _findingsLedger.RecordAsync(new FindingEntry
+            {
+                ScanId = context.ScanId,
+                FindingType = binaryFinding.FindingType,
+                CveId = binaryFinding.CveId,
+                Purl = binaryFinding.VulnerablePurl,
+                Severity = await _advisoryService.GetSeverityAsync(binaryFinding.CveId, ct),
+                Evidence = new FindingEvidence
+                {
+                    Type = "binary_match",
+                    Method = binaryFinding.MatchMethod,
+                    Confidence = binaryFinding.Confidence,
+                    BinaryKey = binaryFinding.BinaryKey
+                }
+            }, ct);
+        }
+    }
+}
+```
+
+**Acceptance Criteria**:
+- [ ] Binary findings recorded in ledger
+- [ ] Evidence properly structured
+- [ ] Integration with existing severity lookup
+
+---
+
+### T5: Add Configuration and DI Registration
+
+**Assignee**: Scanner Team
+**Story Points**: 2
+**Status**: TODO
+**Dependencies**: T1-T4
+
+**Implementation Path**: `src/Scanner/StellaOps.Scanner.Worker/Extensions/BinaryIndexServiceExtensions.cs`
+
+**DI Registration**:
+```csharp
+public static class BinaryIndexServiceExtensions
+{
+    public static IServiceCollection AddBinaryIndexIntegration(
+        this IServiceCollection services,
+        IConfiguration configuration)
+    {
+        var options = configuration
+            .GetSection("BinaryIndex")
+            .Get<BinaryIndexOptions>() ?? new BinaryIndexOptions();
+
+        if (!options.Enabled)
+            return services;
+
+        services.AddSingleton(options);
+        services.AddScoped<IBinaryVulnerabilityService, BinaryVulnerabilityService>();
+        services.AddScoped<IBinaryFeatureExtractor, BinaryFeatureExtractor>();
+        services.AddScoped<BinaryVulnerabilityAnalyzer>();
+
+        // Register analyzer in pipeline (scoped to match the analyzer's own lifetime)
+        services.AddScoped<ILayerAnalyzer>(sp =>
+            sp.GetRequiredService<BinaryVulnerabilityAnalyzer>());
+
+        return services;
+    }
+}
+
+public sealed class BinaryIndexOptions
+{
+    public bool Enabled { get; init; } = true;
+    public int BatchSize { get; init; } = 100;
+    public int TimeoutMs { get; init; } = 5000;
+}
+```
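+
+Host wiring then reduces to a single call from the worker's composition root; a minimal sketch, assuming the standard generic-host `Program.cs` pattern (the `builder` name is illustrative):
+
+```csharp
+// Program.cs (Scanner.Worker): no-ops when BinaryIndex:Enabled is false
+builder.Services.AddBinaryIndexIntegration(builder.Configuration);
+```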
+
+**Acceptance Criteria**:
+- [ ] Configuration-driven enablement
+- [ ] Proper DI registration
+- [ ] Timeout configuration
+
+---
+
+### T6: Integration Tests
+
+**Assignee**: Scanner Team
+**Story Points**: 5
+**Status**: TODO
+**Dependencies**: T1-T5
+
+**Test Cases**:
+- End-to-end scan with binary lookup
+- Layer with known vulnerable Build-ID
+- Fix index correctly overrides upstream range
+- Batch performance test
+
+**Acceptance Criteria**:
+- [ ] Integration test with real container image
+- [ ] Binary match correctly recorded
+- [ ] Fix status applied
+
+---
+
+## Delivery Tracker
+
+| # | Task ID | Status | Dependency | Owners | Task Definition |
+|---|---------|--------|------------|--------|-----------------|
+| 1 | T1 | TODO | — | Scanner Team | Create IBinaryVulnerabilityService Interface |
+| 2 | T2 | TODO | T1 | BinaryIndex Team | Implement BinaryVulnerabilityService |
+| 3 | T3 | TODO | T1, T2 | Scanner Team | Create Scanner.Worker Integration Point |
+| 4 | T4 | TODO | T3 | Scanner Team | Wire Findings to Existing Pipeline |
+| 5 | T5 | TODO | T1-T4 | Scanner Team | Add Configuration and DI Registration |
+| 6 | T6 | TODO | T1-T5 | Scanner Team | Integration Tests |
+
+---
+
+## Success Criteria
+
+- [ ] All 6 tasks marked DONE
+- [ ] Binary vulnerability analyzer integrated
+- [ ] Findings recorded in ledger
+- [ ] Configuration-driven enablement
+- [ ] < 100ms p95 lookup latency
+- [ ] `dotnet build` succeeds
+- [ ] `dotnet test` succeeds
diff --git a/docs/implplan/SPRINT_6000_SUMMARY.md b/docs/implplan/SPRINT_6000_SUMMARY.md
new file mode 100644
index 000000000..5e7768cf2
--- /dev/null
+++ b/docs/implplan/SPRINT_6000_SUMMARY.md
@@ -0,0 +1,290 @@
+# Sprint 6000 Series Summary: BinaryIndex Module
+
+## Overview
+
+The 6000 series implements the **BinaryIndex** module - a vulnerable binaries database that enables detection of vulnerable code at the binary level, independent of package metadata.
+
+**Advisory Source:** `docs/product-advisories/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md`
+
+---
+
+## MVP Roadmap
+
+### MVP 1: Known-Build Binary Catalog (Sprint 6000.0001)
+
+**Goal:** Query "is this Build-ID vulnerable?" with distro-level precision.
+
+| Sprint | Topic | Description |
+|--------|-------|-------------|
+| 6000.0001.0001 | Binaries Schema | PostgreSQL schema creation |
+| 6000.0001.0002 | Binary Identity Service | Core identity extraction and storage |
+| 6000.0001.0003 | Debian Corpus Connector | Debian/Ubuntu package ingestion |
+| 6000.0001.0004 | Build-ID Lookup Service | Query API for Build-ID matching |
+
+**Acceptance:** Given a Build-ID, return associated CVEs from known distro builds.
+
+---
+
+### MVP 2: Patch-Aware Backport Handling (Sprint 6000.0002)
+
+**Goal:** Handle "version says vulnerable but distro backported the fix."
+
+| Sprint | Topic | Description |
+|--------|-------|-------------|
+| 6000.0002.0001 | Fix Evidence Parser | Changelog and patch header parsing |
+| 6000.0002.0002 | Fix Index Builder | Merge evidence into fix index |
+| 6000.0002.0003 | Version Comparators | Distro-specific version comparison |
+| 6000.0002.0004 | RPM Corpus Connector | RHEL/Fedora package ingestion |
+
+**Acceptance:** For a CVE that upstream marks vulnerable, correctly identify distro backport as fixed.
+
+---
+
+### MVP 3: Binary Fingerprint Factory (Sprint 6000.0003)
+
+**Goal:** Detect vulnerable code independent of package metadata.
+
+| Sprint | Topic | Description |
+|--------|-------|-------------|
+| 6000.0003.0001 | Fingerprint Storage | Database and blob storage for fingerprints |
+| 6000.0003.0002 | Reference Build Pipeline | Generate vulnerable/fixed reference builds |
+| 6000.0003.0003 | Fingerprint Generator | Extract function fingerprints from binaries |
+| 6000.0003.0004 | Fingerprint Matching Engine | Similarity search and matching |
+| 6000.0003.0005 | Validation Corpus | Golden corpus for fingerprint validation |
+
+**Acceptance:** Detect CVE in stripped binary with no package metadata, confidence > 0.95.
+
+---
+
+### MVP 4: Scanner Integration (Sprint 6000.0004)
+
+**Goal:** Binary evidence in production scans.
+
+| Sprint | Topic | Description |
+|--------|-------|-------------|
+| 6000.0004.0001 | Scanner Worker Integration | Wire BinaryIndex into scan pipeline |
+| 6000.0004.0002 | Findings Ledger Integration | Record binary matches as findings |
+| 6000.0004.0003 | Proof Segment Attestation | DSSE attestations for binary evidence |
+| 6000.0004.0004 | CLI Binary Match Inspection | CLI commands for match inspection |
+
+**Acceptance:** Container scan produces binary match findings with evidence chain.
+
+---
+
+## Dependencies
+
+```mermaid
+graph TD
+    subgraph MVP1["MVP 1: Known-Build Catalog"]
+        S6001[6000.0001.0001<br/>Schema]
+        S6002[6000.0001.0002<br/>Identity Service]
+        S6003[6000.0001.0003<br/>Debian Connector]
+        S6004[6000.0001.0004<br/>Build-ID Lookup]
+
+        S6001 --> S6002
+        S6002 --> S6003
+        S6002 --> S6004
+        S6003 --> S6004
+    end
+
+    subgraph MVP2["MVP 2: Patch-Aware"]
+        S6011[6000.0002.0001<br/>Fix Parser]
+        S6012[6000.0002.0002<br/>Fix Index Builder]
+        S6013[6000.0002.0003<br/>Version Comparators]
+        S6014[6000.0002.0004<br/>RPM Connector]
+
+        S6011 --> S6012
+        S6013 --> S6012
+        S6012 --> S6014
+    end
+
+    subgraph MVP3["MVP 3: Fingerprints"]
+        S6021[6000.0003.0001<br/>FP Storage]
+        S6022[6000.0003.0002<br/>Ref Build Pipeline]
+        S6023[6000.0003.0003<br/>FP Generator]
+        S6024[6000.0003.0004<br/>Matching Engine]
+        S6025[6000.0003.0005<br/>Validation Corpus]
+
+        S6021 --> S6023
+        S6022 --> S6023
+        S6023 --> S6024
+        S6024 --> S6025
+    end
+
+    subgraph MVP4["MVP 4: Integration"]
+        S6031[6000.0004.0001<br/>Scanner Integration]
+        S6032[6000.0004.0002<br/>Findings Ledger]
+        S6033[6000.0004.0003<br/>Attestations]
+        S6034[6000.0004.0004<br/>CLI]
+
+        S6031 --> S6032
+        S6032 --> S6033
+        S6031 --> S6034
+    end
+
+    MVP1 --> MVP2
+    MVP1 --> MVP3
+    MVP2 --> MVP4
+    MVP3 --> MVP4
+```
+
+---
+
+## Module Structure
+
+```
+src/BinaryIndex/
+├── StellaOps.BinaryIndex.WebService/          # API service
+├── StellaOps.BinaryIndex.Worker/              # Corpus ingestion worker
+├── __Libraries/
+│   ├── StellaOps.BinaryIndex.Core/            # Domain models, interfaces
+│   ├── StellaOps.BinaryIndex.Persistence/     # PostgreSQL + RustFS
+│   ├── StellaOps.BinaryIndex.Corpus/          # Corpus connector framework
+│   ├── StellaOps.BinaryIndex.Corpus.Debian/   # Debian connector
+│   ├── StellaOps.BinaryIndex.Corpus.Rpm/      # RPM connector
+│   ├── StellaOps.BinaryIndex.FixIndex/        # Patch-aware fix index
+│   └── StellaOps.BinaryIndex.Fingerprints/    # Fingerprint generation
+└── __Tests/
+    ├── StellaOps.BinaryIndex.Core.Tests/
+    ├── StellaOps.BinaryIndex.Persistence.Tests/
+    ├── StellaOps.BinaryIndex.Corpus.Tests/
+    └── StellaOps.BinaryIndex.Integration.Tests/
+```
+
+---
+
+## Key Interfaces
+
+```csharp
+// Query interface (consumed by Scanner.Worker)
+public interface IBinaryVulnerabilityService
+{
+    Task<IReadOnlyList<BinaryVulnMatch>> LookupByIdentityAsync(BinaryIdentity identity, CancellationToken ct);
+    Task<IReadOnlyList<BinaryVulnMatch>> LookupByFingerprintAsync(CodeFingerprint fp, CancellationToken ct);
+    Task<FixRecord?> GetFixStatusAsync(string distro, string release, string sourcePkg, string cveId, CancellationToken ct);
+}
+
+// Corpus connector interface
+public interface IBinaryCorpusConnector
+{
+    string ConnectorId { get; }
+    Task<CorpusSnapshot> FetchSnapshotAsync(CorpusQuery query, CancellationToken ct);
+    IAsyncEnumerable<ExtractedBinary> ExtractBinariesAsync(PackageReference pkg, CancellationToken ct);
+}
+
+// Fix index interface
+public interface IFixIndexBuilder
+{
+    Task BuildIndexAsync(DistroRelease distro, CancellationToken ct);
+    Task<FixRecord?> GetFixRecordAsync(string distro, string release, string sourcePkg, string cveId, CancellationToken ct);
+}
+```
+
+---
+
+## Database Schema
+
+Schema: `binaries`
+Owner: BinaryIndex module
+
+**Key Tables:**
+
+| Table | Purpose |
+|-------|---------|
+| `binary_identity` | Known binary identities (Build-ID, hashes) |
+| `binary_package_map` | Binary → package mapping per snapshot |
+| `vulnerable_buildids` | Build-IDs known to be vulnerable |
+| `cve_fix_index` | Patch-aware fix status per distro |
+| `vulnerable_fingerprints` | Function fingerprints for CVEs |
+| `fingerprint_matches` | Match results (findings evidence) |
+
+See: `docs/db/schemas/binaries_schema_specification.md`
+
+---
+
+## Integration Points
+
+### Scanner.Worker
+
+```csharp
+// During binary extraction
+var identity = await _featureExtractor.ExtractIdentityAsync(binaryStream, ct);
+var matches = await _binaryVulnService.LookupByIdentityAsync(identity, ct);
+
+// If distro known, check fix status
+var fixStatus = await _binaryVulnService.GetFixStatusAsync(
+    distro, release, sourcePkg, cveId, ct);
+```
+
+### Findings Ledger
+
+```csharp
+public record BinaryVulnerabilityFinding : IFinding
+{
+    public string MatchType { get; init; }      // "fingerprint", "buildid"
+    public string VulnerablePurl { get; init; }
+    public string MatchedSymbol { get; init; }
+    public float Similarity { get; init; }
+    public string[] LinkedCves { get; init; }
+}
+```
+
+### Policy Engine
+
+New proof segment type: `binary_fingerprint_evidence`
+
+---
+
+## Configuration
+
+```yaml
+binaryindex:
+  enabled: true
+  corpus:
+    connectors:
+      - type: debian
+        enabled: true
+        releases: [bookworm, bullseye, jammy, noble]
+  fingerprinting:
+    enabled: true
+    target_components: [openssl, glibc, zlib, curl]
+  lookup:
+ cache_ttl: 3600 +``` + +--- + +## Success Criteria + +### MVP 1 +- [ ] `binaries` schema deployed and migrated +- [ ] Debian/Ubuntu corpus ingestion operational +- [ ] Build-ID lookup returns CVEs with < 100ms p95 latency + +### MVP 2 +- [ ] Fix index correctly handles Debian/RHEL backports +- [ ] 95%+ accuracy on backport test corpus + +### MVP 3 +- [ ] Fingerprints generated for OpenSSL, glibc, zlib, curl +- [ ] < 5% false positive rate on validation corpus + +### MVP 4 +- [ ] Scanner produces binary match findings +- [ ] DSSE attestations include binary evidence +- [ ] CLI `stella binary-matches` command operational + +--- + +## References + +- Architecture: `docs/modules/binaryindex/architecture.md` +- Schema: `docs/db/schemas/binaries_schema_specification.md` +- Advisory: `docs/product-advisories/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md` +- Existing fingerprinting: `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Binary/` +- Build-ID indexing: `src/Scanner/StellaOps.Scanner.Analyzers.Native/Index/` + +--- + +*Document Version: 1.0.0* +*Created: 2025-12-21* diff --git a/docs/modules/binaryindex/architecture.md b/docs/modules/binaryindex/architecture.md new file mode 100644 index 000000000..4ed7849aa --- /dev/null +++ b/docs/modules/binaryindex/architecture.md @@ -0,0 +1,558 @@ +# BinaryIndex Module Architecture + +> **Ownership:** Scanner Guild + Concelier Guild +> **Status:** DRAFT +> **Version:** 1.0.0 +> **Related:** [High-Level Architecture](../../07_HIGH_LEVEL_ARCHITECTURE.md), [Scanner Architecture](../scanner/architecture.md), [Concelier Architecture](../concelier/architecture.md) + +--- + +## 1. Overview + +The **BinaryIndex** module provides a vulnerable binaries database that enables detection of vulnerable code at the binary level, independent of package metadata. This addresses a critical gap in vulnerability scanning: package version strings can lie (backports, custom builds, stripped metadata), but **binary identity doesn't lie**. + +### 1.1 Problem Statement + +Traditional vulnerability scanners rely on package version matching, which fails in several scenarios: + +1. **Backported patches** - Distros backport security fixes without changing upstream version +2. **Custom/vendored builds** - Binaries compiled from source without package metadata +3. **Stripped binaries** - Debug info and version strings removed +4. **Static linking** - Vulnerable library code embedded in final binary +5. **Container base images** - Distroless or scratch images with no package DB + +### 1.2 Solution: Binary-First Vulnerability Detection + +BinaryIndex provides three tiers of binary identification: + +| Tier | Method | Precision | Coverage | +|------|--------|-----------|----------| +| A | Package/version range matching | Medium | High | +| B | Build-ID/hash catalog (exact binary identity) | High | Medium | +| C | Function fingerprints (CFG/basic-block hashes) | Very High | Targeted | + +### 1.3 Module Scope + +**In Scope:** +- Binary identity extraction (Build-ID, PE CodeView GUID, Mach-O UUID) +- Binary-to-advisory mapping database +- Fingerprint storage and matching engine +- Fix index for patch-aware backport handling +- Integration with Scanner.Worker for binary lookup + +**Out of Scope:** +- Binary disassembly/analysis (provided by Scanner.Analyzers.Native) +- Runtime binary tracing (provided by Zastava) +- SBOM generation (provided by Scanner) + +--- + +## 2. 
Architecture + +### 2.1 System Context + +``` +┌──────────────────────────────────────────────────────────────────────────┐ +│ External Systems │ +│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ +│ │ Distro Repos │ │ Debug Symbol │ │ Upstream Source │ │ +│ │ (Debian, RPM, │ │ Servers │ │ (GitHub, etc.) │ │ +│ │ Alpine) │ │ (debuginfod) │ │ │ │ +│ └────────┬────────┘ └────────┬────────┘ └────────┬────────┘ │ +└───────────│─────────────────────│─────────────────────│──────────────────┘ + │ │ │ + v v v +┌──────────────────────────────────────────────────────────────────────────┐ +│ BinaryIndex Module │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Corpus Ingestion Layer │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ DebianCorpus │ │ RpmCorpus │ │ AlpineCorpus │ │ │ +│ │ │ Connector │ │ Connector │ │ Connector │ │ │ +│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ v │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Processing Layer │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ BinaryFeature│ │ FixIndex │ │ Fingerprint │ │ │ +│ │ │ Extractor │ │ Builder │ │ Generator │ │ │ +│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ v │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Storage Layer │ │ +│ │ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ │ │ +│ │ │ PostgreSQL │ │ RustFS │ │ Valkey │ │ │ +│ │ │ (binaries │ │ (fingerprint │ │ (lookup │ │ │ +│ │ │ schema) │ │ blobs) │ │ cache) │ │ │ +│ │ └──────────────┘ └──────────────┘ └──────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ v │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ Query Layer │ │ +│ │ ┌──────────────────────────────────────────────────────────────┐ │ │ +│ │ │ IBinaryVulnerabilityService │ │ │ +│ │ │ - LookupByBuildIdAsync(buildId) │ │ │ +│ │ │ - LookupByFingerprintAsync(fingerprint) │ │ │ +│ │ │ - LookupBatchAsync(identities) │ │ │ +│ │ │ - GetFixStatusAsync(distro, release, sourcePkg, cve) │ │ │ +│ │ └──────────────────────────────────────────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +└──────────────────────────────────────────────────────────────────────────┘ + │ + v +┌──────────────────────────────────────────────────────────────────────────┐ +│ Consuming Modules │ +│ ┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │ +│ │ Scanner.Worker │ │ Policy Engine │ │ Findings Ledger │ │ +│ │ (binary lookup │ │ (evidence in │ │ (match records) │ │ +│ │ during scan) │ │ proof chain) │ │ │ │ +│ └─────────────────┘ └─────────────────┘ └─────────────────┘ │ +└──────────────────────────────────────────────────────────────────────────┘ +``` + +### 2.2 Component Breakdown + +#### 2.2.1 Corpus Connectors + +Plugin-based connectors that ingest binaries from distribution repositories. 
+
+```csharp
+public interface IBinaryCorpusConnector
+{
+    string ConnectorId { get; }
+    string[] SupportedDistros { get; }
+
+    Task<CorpusSnapshot> FetchSnapshotAsync(CorpusQuery query, CancellationToken ct);
+    Task<IReadOnlyList<ExtractedBinary>> ExtractBinariesAsync(PackageReference pkg, CancellationToken ct);
+}
+```
+
+**Implementations:**
+- `DebianBinaryCorpusConnector` - Debian/Ubuntu packages + debuginfo
+- `RpmBinaryCorpusConnector` - RHEL/Fedora/CentOS + SRPM
+- `AlpineBinaryCorpusConnector` - Alpine APK + APKBUILD
+
+#### 2.2.2 Binary Feature Extractor
+
+Extracts identity and features from binaries. Reuses existing Scanner.Analyzers.Native capabilities.
+
+```csharp
+public interface IBinaryFeatureExtractor
+{
+    Task<BinaryIdentity> ExtractIdentityAsync(Stream binaryStream, CancellationToken ct);
+    Task<BinaryFeatures> ExtractFeaturesAsync(Stream binaryStream, ExtractorOptions opts, CancellationToken ct);
+}
+
+public sealed record BinaryIdentity(
+    string Format,              // elf, pe, macho
+    string? BuildId,            // ELF GNU Build-ID
+    string? PeCodeViewGuid,     // PE CodeView GUID + Age
+    string? MachoUuid,          // Mach-O LC_UUID
+    string FileSha256,
+    string TextSectionSha256);
+
+public sealed record BinaryFeatures(
+    BinaryIdentity Identity,
+    string[] DynamicDeps,       // DT_NEEDED
+    string[] ExportedSymbols,
+    string[] ImportedSymbols,
+    BinaryHardening Hardening);
+```
+
+#### 2.2.3 Fix Index Builder
+
+Builds the patch-aware CVE fix index from distro sources.
+
+```csharp
+public interface IFixIndexBuilder
+{
+    Task BuildIndexAsync(DistroRelease distro, CancellationToken ct);
+    Task<FixRecord?> GetFixRecordAsync(string distro, string release, string sourcePkg, string cveId, CancellationToken ct);
+}
+
+public sealed record FixRecord(
+    string Distro,
+    string Release,
+    string SourcePkg,
+    string CveId,
+    FixState State,             // fixed, vulnerable, not_affected, wontfix, unknown
+    string? FixedVersion,       // Distro version string
+    FixMethod Method,           // security_feed, changelog, patch_header
+    decimal Confidence,         // 0.00-1.00
+    FixEvidence Evidence);
+
+public enum FixState { Fixed, Vulnerable, NotAffected, Wontfix, Unknown }
+public enum FixMethod { SecurityFeed, Changelog, PatchHeader, UpstreamPatchMatch }
+```
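+
+For example, a Debian backport that upstream version ranges would still flag as vulnerable resolves to a `Fixed` record. An illustrative instance (CVE ID and version string invented for the example; `evidence` is assumed to be a `FixEvidence` payload captured by a parser):
+
+```csharp
+var record = new FixRecord(
+    Distro: "debian",
+    Release: "bookworm",
+    SourcePkg: "openssl",
+    CveId: "CVE-2024-1234",
+    State: FixState.Fixed,
+    FixedVersion: "3.0.11-1~deb12u2",  // distro revision carries the backport
+    Method: FixMethod.Changelog,
+    Confidence: 0.80m,
+    Evidence: evidence);
+```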
+ /// + Task>> LookupBatchAsync( + IEnumerable identities, + LookupOptions? opts = null, + CancellationToken ct = default); + + /// + /// Get distro-specific fix status (patch-aware). + /// + Task GetFixStatusAsync( + string distro, + string release, + string sourcePkg, + string cveId, + CancellationToken ct = default); +} + +public sealed record BinaryVulnMatch( + string CveId, + string VulnerablePurl, + MatchMethod Method, // buildid_catalog, fingerprint_match, range_match + decimal Confidence, + MatchEvidence Evidence); + +public enum MatchMethod { BuildIdCatalog, FingerprintMatch, RangeMatch } +``` + +--- + +## 3. Data Model + +### 3.1 PostgreSQL Schema (`binaries`) + +The `binaries` schema stores binary identity, fingerprint, and match data. + +```sql +CREATE SCHEMA IF NOT EXISTS binaries; +CREATE SCHEMA IF NOT EXISTS binaries_app; + +-- RLS helper +CREATE OR REPLACE FUNCTION binaries_app.require_current_tenant() +RETURNS TEXT LANGUAGE plpgsql STABLE SECURITY DEFINER AS $$ +DECLARE v_tenant TEXT; +BEGIN + v_tenant := current_setting('app.tenant_id', true); + IF v_tenant IS NULL OR v_tenant = '' THEN + RAISE EXCEPTION 'app.tenant_id session variable not set'; + END IF; + RETURN v_tenant; +END; +$$; +``` + +#### 3.1.1 Core Tables + +See `docs/db/schemas/binaries_schema_specification.md` for complete DDL. + +**Key Tables:** + +| Table | Purpose | +|-------|---------| +| `binaries.binary_identity` | Known binary identities (Build-ID, hashes) | +| `binaries.binary_package_map` | Binary → package mapping per snapshot | +| `binaries.vulnerable_buildids` | Build-IDs known to be vulnerable | +| `binaries.vulnerable_fingerprints` | Function fingerprints for CVEs | +| `binaries.cve_fix_index` | Patch-aware fix status per distro | +| `binaries.fingerprint_matches` | Match results (findings evidence) | +| `binaries.corpus_snapshots` | Corpus ingestion tracking | + +### 3.2 RustFS Layout + +``` +rustfs://stellaops/binaryindex/ + fingerprints///.bin + corpus////manifest.json + corpus////packages/.metadata.json + evidence/.dsse.json +``` + +--- + +## 4. Integration Points + +### 4.1 Scanner.Worker Integration + +During container scanning, Scanner.Worker queries BinaryIndex for each extracted binary: + +```mermaid +sequenceDiagram + participant SW as Scanner.Worker + participant BI as BinaryIndex + participant PG as PostgreSQL + participant FL as Findings Ledger + + SW->>SW: Extract binary from layer + SW->>SW: Compute BinaryIdentity + SW->>BI: LookupByIdentityAsync(identity) + BI->>PG: Query binaries.vulnerable_buildids + PG-->>BI: Matches + BI->>PG: Query binaries.cve_fix_index (if distro known) + PG-->>BI: Fix status + BI-->>SW: BinaryVulnMatch[] + SW->>FL: RecordFinding(match, evidence) +``` + +### 4.2 Concelier Integration + +BinaryIndex subscribes to Concelier's advisory updates: + +```mermaid +sequenceDiagram + participant CO as Concelier + participant BI as BinaryIndex + participant PG as PostgreSQL + + CO->>CO: Ingest new advisory + CO->>BI: advisory.created event + BI->>BI: Check if affected packages in corpus + BI->>PG: Update binaries.binary_vuln_assertion + BI->>BI: Queue fingerprint generation (if high-impact) +``` + +### 4.3 Policy Integration + +Binary matches are recorded as proof segments: + +```json +{ + "segment_type": "binary_fingerprint_evidence", + "payload": { + "binary_identity": { + "format": "elf", + "build_id": "abc123...", + "file_sha256": "def456..." 
+ }, + "matches": [ + { + "cve_id": "CVE-2024-1234", + "method": "buildid_catalog", + "confidence": 0.98, + "vulnerable_purl": "pkg:deb/debian/libssl3@1.1.1n-0+deb11u3" + } + ] + } +} +``` + +--- + +## 5. MVP Roadmap + +### MVP 1: Known-Build Binary Catalog (Sprint 6000.0001) + +**Goal:** Query "is this Build-ID vulnerable?" with distro-level precision. + +**Deliverables:** +- `binaries` PostgreSQL schema +- Build-ID to package mapping tables +- Basic CVE lookup by binary identity +- Debian/Ubuntu corpus connector + +### MVP 2: Patch-Aware Backport Handling (Sprint 6000.0002) + +**Goal:** Handle "version says vulnerable but distro backported the fix." + +**Deliverables:** +- Fix index builder (changelog + patch header parsing) +- Distro-specific version comparison +- RPM corpus connector +- Scanner.Worker integration + +### MVP 3: Binary Fingerprint Factory (Sprint 6000.0003) + +**Goal:** Detect vulnerable code independent of package metadata. + +**Deliverables:** +- Fingerprint storage and matching +- Reference build generation pipeline +- Fingerprint validation corpus +- High-impact CVE coverage (OpenSSL, glibc, zlib, curl) + +### MVP 4: Full Scanner Integration (Sprint 6000.0004) + +**Goal:** Binary evidence in production scans. + +**Deliverables:** +- Scanner.Worker binary lookup integration +- Findings Ledger binary match records +- Proof segment attestations +- CLI binary match inspection + +--- + +## 6. Security Considerations + +### 6.1 Trust Boundaries + +1. **Corpus Ingestion** - Packages are untrusted; extraction runs in sandboxed workers +2. **Fingerprint Generation** - Reference builds compiled in isolated environments +3. **Query API** - Tenant-isolated via RLS; no cross-tenant data leakage + +### 6.2 Signing & Provenance + +- All corpus snapshots are signed (DSSE) +- Fingerprint sets are versioned and signed +- Every match result references evidence digests + +### 6.3 Sandbox Requirements + +Binary extraction and fingerprint generation MUST run with: +- Seccomp profile restricting syscalls +- Read-only root filesystem +- No network access during analysis +- Memory/CPU limits + +--- + +## 7. Observability + +### 7.1 Metrics + +| Metric | Type | Labels | +|--------|------|--------| +| `binaryindex_lookup_total` | Counter | method, result | +| `binaryindex_lookup_latency_ms` | Histogram | method | +| `binaryindex_corpus_packages_total` | Gauge | distro, release | +| `binaryindex_fingerprints_indexed` | Gauge | algorithm, component | +| `binaryindex_match_confidence` | Histogram | method | + +### 7.2 Traces + +- `binaryindex.lookup` - Full lookup span +- `binaryindex.corpus.ingest` - Corpus ingestion +- `binaryindex.fingerprint.generate` - Fingerprint generation + +--- + +## 8. Configuration + +```yaml +# binaryindex.yaml +binaryindex: + enabled: true + + corpus: + connectors: + - type: debian + enabled: true + mirror: http://deb.debian.org/debian + releases: [bookworm, bullseye] + architectures: [amd64, arm64] + - type: ubuntu + enabled: true + mirror: http://archive.ubuntu.com/ubuntu + releases: [jammy, noble] + + fingerprinting: + enabled: true + algorithms: [basic_block, cfg] + target_components: + - openssl + - glibc + - zlib + - curl + - sqlite + min_function_size: 16 # bytes + max_functions_per_binary: 10000 + + lookup: + cache_ttl: 3600 + batch_size: 100 + timeout_ms: 5000 + + storage: + postgres_schema: binaries + rustfs_bucket: stellaops/binaryindex +``` + +--- + +## 9. 
Testing Strategy + +### 9.1 Unit Tests + +- Identity extraction (Build-ID, hashes) +- Fingerprint generation determinism +- Fix index parsing (changelog, patch headers) + +### 9.2 Integration Tests + +- PostgreSQL schema validation +- Full corpus ingestion flow +- Scanner.Worker lookup integration + +### 9.3 Regression Tests + +- Known CVE detection (golden corpus) +- Backport handling (Debian libssl example) +- False positive rate validation + +--- + +## 10. References + +- Advisory: `docs/product-advisories/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md` +- Scanner Native Analysis: `src/Scanner/StellaOps.Scanner.Analyzers.Native/` +- Existing Fingerprinting: `src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace/Binary/` +- Build-ID Index: `src/Scanner/StellaOps.Scanner.Analyzers.Native/Index/` + +--- + +*Document Version: 1.0.0* +*Last Updated: 2025-12-21* diff --git a/docs/modules/gateway/architecture.md b/docs/modules/gateway/architecture.md new file mode 100644 index 000000000..ed7f6a99b --- /dev/null +++ b/docs/modules/gateway/architecture.md @@ -0,0 +1,461 @@ +# component_architecture_gateway.md — **Stella Ops Gateway** (Sprint 3600) + +> Derived from Reference Architecture Advisory and Router Architecture Specification + +> **Scope.** The Gateway WebService is the single HTTP ingress point for all external traffic. It authenticates requests via Authority (DPoP/mTLS), routes to microservices via the Router binary protocol, aggregates OpenAPI specifications, and enforces tenant isolation. +> **Ownership:** Platform Guild + +--- + +## 0) Mission & Boundaries + +### What Gateway Does + +- **HTTP Ingress**: Single entry point for all external HTTP/HTTPS traffic +- **Authentication**: DPoP and mTLS token validation via Authority integration +- **Routing**: Routes HTTP requests to microservices via binary protocol (TCP/TLS) +- **OpenAPI Aggregation**: Combines endpoint specs from all registered microservices +- **Health Aggregation**: Provides unified health status from downstream services +- **Rate Limiting**: Per-tenant and per-identity request throttling +- **Tenant Propagation**: Extracts tenant context and propagates to microservices + +### What Gateway Does NOT Do + +- **Business Logic**: No domain logic; pure routing and auth +- **Data Storage**: Stateless; no persistent state beyond connection cache +- **Direct Database Access**: Never connects to PostgreSQL directly +- **SBOM/VEX Processing**: Delegates to Scanner, Excititor, etc. 
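+
+To make the tenant-propagation duty concrete, here is a minimal sketch of the
+middleware shape, assuming the `tid` claim described in §9 and a hypothetical
+`TenantContext` ambient accessor (illustrative, not the shipped implementation):
+
+```csharp
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+
+// Illustrative: runs after AuthenticationMiddleware has validated the token.
+public sealed class TenantMiddleware
+{
+    private readonly RequestDelegate _next;
+
+    public TenantMiddleware(RequestDelegate next) => _next = next;
+
+    public async Task InvokeAsync(HttpContext context)
+    {
+        var tenantId = context.User.FindFirst("tid")?.Value;
+        if (string.IsNullOrEmpty(tenantId))
+        {
+            context.Response.StatusCode = StatusCodes.Status403Forbidden;
+            return;
+        }
+
+        // Hypothetical ambient context read by RequestRoutingMiddleware when
+        // it stamps the tenant onto the outgoing binary frame.
+        TenantContext.Current = tenantId;
+        await _next(context);
+    }
+}
+```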
+
+---
+
+## 1) Solution & Project Layout
+
+```
+src/Gateway/
+├── StellaOps.Gateway.WebService/
+│   ├── StellaOps.Gateway.WebService.csproj
+│   ├── Program.cs                            # DI bootstrap, transport init
+│   ├── Dockerfile
+│   ├── appsettings.json
+│   ├── appsettings.Development.json
+│   ├── Configuration/
+│   │   ├── GatewayOptions.cs                 # All configuration options
+│   │   └── TransportOptions.cs               # TCP/TLS transport config
+│   ├── Middleware/
+│   │   ├── TenantMiddleware.cs               # Tenant context extraction
+│   │   ├── RequestRoutingMiddleware.cs       # HTTP → binary routing
+│   │   ├── AuthenticationMiddleware.cs       # DPoP/mTLS validation
+│   │   └── RateLimitingMiddleware.cs         # Per-tenant throttling
+│   ├── Services/
+│   │   ├── GatewayHostedService.cs           # Transport lifecycle
+│   │   ├── OpenApiAggregationService.cs      # Spec aggregation
+│   │   └── HealthAggregationService.cs       # Downstream health
+│   └── Endpoints/
+│       ├── HealthEndpoints.cs                # /health/*, /metrics
+│       └── OpenApiEndpoints.cs               # /openapi.json, /openapi.yaml
+```
+
+### Dependencies
+
+```xml
+<ItemGroup>
+  <!-- Indicative references; §2 lists the authoritative dependency set. -->
+  <ProjectReference Include="../../Router/StellaOps.Router.Gateway/StellaOps.Router.Gateway.csproj" />
+  <ProjectReference Include="../../Router/StellaOps.Router.Transport.Tcp/StellaOps.Router.Transport.Tcp.csproj" />
+  <ProjectReference Include="../../Router/StellaOps.Router.Transport.Tls/StellaOps.Router.Transport.Tls.csproj" />
+</ItemGroup>
+```
+
+---
+
+## 2) External Dependencies
+
+| Dependency | Purpose | Required |
+|------------|---------|----------|
+| **Authority** | OpTok validation, DPoP/mTLS | Yes |
+| **Router.Gateway** | Routing state, endpoint discovery | Yes |
+| **Router.Transport.Tcp** | Binary transport (dev) | Yes |
+| **Router.Transport.Tls** | Binary transport (prod) | Yes |
+| **Valkey/Redis** | Rate limiting state | Optional |
+
+---
+
+## 3) Contracts & Data Model
+
+### Request Flow
+
+```
+┌──────────────┐  HTTPS   ┌─────────────────┐  Binary   ┌─────────────────┐
+│   Client     │ ───────► │    Gateway      │ ────────► │  Microservice   │
+│  (CLI/UI)    │          │   WebService    │   Frame   │  (Scanner,      │
+│              │ ◄─────── │                 │ ◄──────── │   Policy, etc)  │
+└──────────────┘  HTTPS   └─────────────────┘  Binary   └─────────────────┘
+```
+
+### Binary Frame Protocol
+
+Gateway uses the Router binary protocol for internal communication:
+
+| Frame Type | Purpose |
+|------------|---------|
+| HELLO | Microservice registration with endpoints |
+| HEARTBEAT | Health check and latency measurement |
+| REQUEST | HTTP request serialized to binary |
+| RESPONSE | HTTP response serialized from binary |
+| STREAM_DATA | Streaming response chunks |
+| CANCEL | Request cancellation propagation |
+
+### Endpoint Descriptor
+
+```csharp
+public sealed class EndpointDescriptor
+{
+    public required string Method { get; init; }         // GET, POST, etc.
+    public required string Path { get; init; }           // /api/v1/scans/{id}
+    public required string ServiceName { get; init; }    // scanner
+    public required string Version { get; init; }        // 1.0.0
+    public TimeSpan DefaultTimeout { get; init; }        // 30s
+    public bool SupportsStreaming { get; init; }         // true for large responses
+    public IReadOnlyList<string> RequiringClaims { get; init; }
+}
+```
+
+### Routing State
+
+```csharp
+public interface IRoutingStateManager
+{
+    ValueTask RegisterEndpointsAsync(ConnectionState conn, HelloPayload hello);
+    ValueTask<ConnectionState?> SelectInstanceAsync(string method, string path);
+    ValueTask UpdateHealthAsync(ConnectionState conn, HeartbeatPayload heartbeat);
+    ValueTask DrainConnectionAsync(string connectionId);
+}
+```
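+
+As a usage sketch, the contracts above compose like this during registration
+and routing; the claim value and variable wiring are illustrative:
+
+```csharp
+using System;
+using System.Threading.Tasks;
+
+// Illustrative: the descriptor a scanner HELLO frame produces, and the
+// lookup the request path performs against the routing state.
+static async Task RegisterAndRouteAsync(IRoutingStateManager routingState)
+{
+    var descriptor = new EndpointDescriptor
+    {
+        Method = "GET",
+        Path = "/api/v1/scans/{id}",
+        ServiceName = "scanner",
+        Version = "1.0.0",
+        DefaultTimeout = TimeSpan.FromSeconds(30),
+        SupportsStreaming = true,
+        RequiringClaims = new[] { "scanner.read" },   // hypothetical claim
+    };
+
+    // HELLO handling registers descriptors such as the one above; a request
+    // then selects a healthy connection for the matching (method, path).
+    ConnectionState? target = await routingState.SelectInstanceAsync("GET", "/api/v1/scans/abc");
+}
+```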
+
+---
+
+## 4) REST API
+
+Gateway exposes minimal management endpoints; all business APIs are routed to microservices.
+
+### Health Endpoints
+
+| Endpoint | Auth | Description |
+|----------|------|-------------|
+| `GET /health/live` | None | Liveness probe |
+| `GET /health/ready` | None | Readiness probe |
+| `GET /health/startup` | None | Startup probe |
+| `GET /metrics` | None | Prometheus metrics |
+
+### OpenAPI Endpoints
+
+| Endpoint | Auth | Description |
+|----------|------|-------------|
+| `GET /openapi.json` | None | Aggregated OpenAPI 3.1.0 spec |
+| `GET /openapi.yaml` | None | YAML format spec |
+
+---
+
+## 5) Execution Flow
+
+### Request Routing
+
+```mermaid
+sequenceDiagram
+    participant C as Client
+    participant G as Gateway
+    participant A as Authority
+    participant M as Microservice
+
+    C->>G: HTTPS Request + DPoP Token
+    G->>A: Validate Token
+    A-->>G: Claims (sub, tid, scope)
+    G->>G: Select Instance (Method, Path)
+    G->>M: Binary REQUEST Frame
+    M-->>G: Binary RESPONSE Frame
+    G-->>C: HTTPS Response
+```
+
+### Microservice Registration
+
+```mermaid
+sequenceDiagram
+    participant M as Microservice
+    participant G as Gateway
+
+    M->>G: TCP/TLS Connect
+    M->>G: HELLO (ServiceName, Version, Endpoints)
+    G->>G: Register Endpoints
+    G-->>M: HELLO ACK
+
+    loop Every 10s
+        G->>M: HEARTBEAT
+        M-->>G: HEARTBEAT (latency, health)
+        G->>G: Update Health State
+    end
+```
+
+---
+
+## 6) Instance Selection Algorithm
+
+```csharp
+public async ValueTask<ConnectionState?> SelectInstanceAsync(string method, string path)
+{
+    // 1. Find all endpoints matching (method, path)
+    var candidates = _endpoints
+        .Where(e => e.Method == method && MatchPath(e.Path, path))
+        .ToList();
+
+    // 2. Filter by health
+    candidates = candidates
+        .Where(c => c.Health is InstanceHealthStatus.Healthy or InstanceHealthStatus.Degraded)
+        .ToList();
+
+    // 3. Region preference
+    var localRegion = candidates.Where(c => c.Region == _config.Region).ToList();
+    var neighborRegions = candidates.Where(c => _config.NeighborRegions.Contains(c.Region)).ToList();
+    var otherRegions = candidates.Except(localRegion).Except(neighborRegions).ToList();
+
+    var preferred = localRegion.Any() ? localRegion
+                  : neighborRegions.Any() ? neighborRegions
+                  : otherRegions;
+
+    // 4.
Within tier: prefer lower latency, then most recent heartbeat + return preferred + .OrderBy(c => c.AveragePingMs) + .ThenByDescending(c => c.LastHeartbeatUtc) + .FirstOrDefault(); +} +``` + +--- + +## 7) Configuration + +```yaml +gateway: + node: + region: "eu1" + nodeId: "gw-eu1-01" + environment: "prod" + + transports: + tcp: + enabled: true + port: 9100 + maxConnections: 1000 + receiveBufferSize: 65536 + sendBufferSize: 65536 + tls: + enabled: true + port: 9443 + certificatePath: "/certs/gateway.pfx" + certificatePassword: "${GATEWAY_CERT_PASSWORD}" + clientCertificateMode: "RequireCertificate" + allowedClientCertificateThumbprints: [] + + routing: + defaultTimeout: "30s" + maxRequestBodySize: "100MB" + streamingEnabled: true + streamingBufferSize: 16384 + neighborRegions: ["eu2", "us1"] + + auth: + dpopEnabled: true + dpopMaxClockSkew: "60s" + mtlsEnabled: true + rateLimiting: + enabled: true + requestsPerMinute: 1000 + burstSize: 100 + redisConnectionString: "${REDIS_URL}" + + openapi: + enabled: true + cacheTtlSeconds: 300 + title: "Stella Ops API" + version: "1.0.0" + + health: + heartbeatIntervalSeconds: 10 + heartbeatTimeoutSeconds: 30 + unhealthyThreshold: 3 +``` + +--- + +## 8) Scale & Performance + +| Metric | Target | Notes | +|--------|--------|-------| +| Routing latency (P50) | <2ms | Overhead only; excludes downstream | +| Routing latency (P99) | <5ms | Under normal load | +| Concurrent connections | 10,000 | Per gateway instance | +| Requests/second | 50,000 | Per gateway instance | +| Memory footprint | <512MB | Base; scales with connections | + +### Scaling Strategy + +- Horizontal scaling behind load balancer +- Sticky sessions NOT required (stateless) +- Regional deployment for latency optimization +- Rate limiting via distributed Valkey/Redis + +--- + +## 9) Security Posture + +### Authentication + +| Method | Description | +|--------|-------------| +| DPoP | Proof-of-possession tokens from Authority | +| mTLS | Certificate-bound tokens for machine clients | + +### Authorization + +- Claims-based authorization per endpoint +- Required claims defined in endpoint descriptors +- Tenant isolation via `tid` claim + +### Transport Security + +| Component | Encryption | +|-----------|------------| +| Client → Gateway | TLS 1.3 (HTTPS) | +| Gateway → Microservices | TLS (prod), TCP (dev only) | + +### Rate Limiting + +- Per-tenant: Configurable requests/minute +- Per-identity: Burst protection +- Global: Circuit breaker for overload + +--- + +## 10) Observability & Audit + +### Metrics (Prometheus) + +``` +gateway_requests_total{service,method,path,status} +gateway_request_duration_seconds{service,method,path,quantile} +gateway_active_connections{service} +gateway_transport_frames_total{type} +gateway_auth_failures_total{reason} +gateway_rate_limit_exceeded_total{tenant} +``` + +### Traces (OpenTelemetry) + +- Span per request: `gateway.route` +- Child span: `gateway.auth.validate` +- Child span: `gateway.transport.send` + +### Logs (Structured) + +```json +{ + "timestamp": "2025-12-21T10:00:00Z", + "level": "info", + "message": "Request routed", + "correlationId": "abc123", + "tenantId": "tenant-1", + "method": "GET", + "path": "/api/v1/scans/xyz", + "service": "scanner", + "durationMs": 45, + "status": 200 +} +``` + +--- + +## 11) Testing Matrix + +| Test Type | Scope | Coverage Target | +|-----------|-------|-----------------| +| Unit | Routing algorithm, auth validation | 90% | +| Integration | Transport + routing flow | 80% | +| E2E | Full request path with mock 
services | Key flows | +| Performance | Latency, throughput, connection limits | SLO targets | +| Chaos | Connection failures, microservice crashes | Resilience | + +### Test Fixtures + +- `StellaOps.Router.Transport.InMemory` for transport mocking +- Mock Authority for auth testing +- `WebApplicationFactory` for integration tests + +--- + +## 12) DevOps & Operations + +### Deployment + +```yaml +# Kubernetes deployment excerpt +apiVersion: apps/v1 +kind: Deployment +metadata: + name: gateway +spec: + replicas: 3 + template: + spec: + containers: + - name: gateway + image: stellaops/gateway:1.0.0 + ports: + - containerPort: 8080 # HTTPS + - containerPort: 9443 # TLS (microservices) + resources: + requests: + memory: "256Mi" + cpu: "250m" + limits: + memory: "512Mi" + cpu: "1000m" + livenessProbe: + httpGet: + path: /health/live + port: 8080 + readinessProbe: + httpGet: + path: /health/ready + port: 8080 +``` + +### SLOs + +| SLO | Target | Measurement | +|-----|--------|-------------| +| Availability | 99.9% | Uptime over 30 days | +| Latency P99 | <50ms | Includes downstream | +| Error rate | <0.1% | 5xx responses | + +--- + +## 13) Roadmap + +| Feature | Sprint | Status | +|---------|--------|--------| +| Core implementation | 3600.0001.0001 | TODO | +| WebSocket support | Future | Planned | +| gRPC passthrough | Future | Planned | +| GraphQL aggregation | Future | Exploration | + +--- + +## 14) References + +- Router Architecture: `docs/modules/router/architecture.md` +- OpenAPI Aggregation: `docs/modules/gateway/openapi.md` +- Authority Integration: `docs/modules/authority/architecture.md` +- Reference Architecture: `docs/product-advisories/archived/2025-12-21-reference-architecture/` + +--- + +**Last Updated**: 2025-12-21 (Sprint 3600) diff --git a/docs/modules/platform/reference-architecture-card.md b/docs/modules/platform/reference-architecture-card.md new file mode 100644 index 000000000..6d4af1064 --- /dev/null +++ b/docs/modules/platform/reference-architecture-card.md @@ -0,0 +1,223 @@ +# Stella Ops Reference Architecture Card (Dec 2025) + +> **One-Pager** for product managers, architects, and auditors. 
+> Full specification: `docs/07_HIGH_LEVEL_ARCHITECTURE.md` + +--- + +## Topology & Trust Boundaries + +``` +┌─────────────────────────────────────────────────────────────────────────────┐ +│ TRUST BOUNDARY 1 │ +│ ┌─────────────────┐ │ +│ │ EDGE LAYER │ StellaRouter (Gateway) / UI │ +│ │ │ OAuth2/OIDC Authentication │ +│ └────────┬────────┘ │ +│ │ Signed credentials/attestations required │ +├───────────┼─────────────────────────────────────────────────────────────────┤ +│ ▼ TRUST BOUNDARY 2 │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ CONTROL PLANE │ │ +│ │ │ │ +│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ +│ │ │Scheduler │ │ Policy │ │Authority │ │ Attestor │ │ │ +│ │ │ │ │ Engine │ │ │ │ │ │ │ +│ │ │ Routes │ │ Signed │ │ Keys & │ │ DSSE + │ │ │ +│ │ │ work │ │ verdicts │ │ identity │ │ Rekor │ │ │ +│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │ +│ │ │ │ +│ │ ┌──────────────────────────────────────┐ │ │ +│ │ │ Timeline / Notify │ │ │ +│ │ │ Immutable audit + notifications │ │ │ +│ │ └──────────────────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ Only blessed evidence/identities influence decisions │ +├───────────┼─────────────────────────────────────────────────────────────────┤ +│ ▼ TRUST BOUNDARY 3 │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ EVIDENCE PLANE │ │ +│ │ │ │ +│ │ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐ │ │ +│ │ │ Sbomer │ │Excititor │ │Concelier │ │Reachabil-│ │ │ +│ │ │ │ │ │ │ │ │ity/Sigs │ │ │ +│ │ │CDX 1.7 / │ │ VEX │ │Advisory │ │ Is vuln │ │ │ +│ │ │SPDX 3.0.1│ │ claims │ │ feeds │ │reachable?│ │ │ +│ │ └──────────┘ └──────────┘ └──────────┘ └──────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +│ │ Tamper-evident, separately signed; opinions in Policy only │ +├───────────┼─────────────────────────────────────────────────────────────────┤ +│ ▼ TRUST BOUNDARY 4 │ +│ ┌─────────────────────────────────────────────────────────────────────┐ │ +│ │ DATA PLANE │ │ +│ │ │ │ +│ │ ┌──────────────────────────────────────────────────────────────┐ │ │ +│ │ │ Workers / Scanners │ │ │ +│ │ │ Pull tasks → compute → emit artifacts + attestations │ │ │ +│ │ │ Isolated per tenant; outputs tied to inputs cryptographically│ │ │ +│ │ └──────────────────────────────────────────────────────────────┘ │ │ +│ └─────────────────────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────────────────────────┘ +``` + +--- + +## Artifact Association (OCI Referrers) + +``` + Image Digest (Subject) + │ + ├──► SBOM (CycloneDX 1.7 / SPDX 3.0.1) + │ └──► DSSE Attestation + │ └──► Rekor Log Entry + │ + ├──► VEX Claims + │ └──► DSSE Attestation + │ + ├──► Reachability Subgraph + │ └──► DSSE Attestation + │ + └──► Policy Verdict + └──► DSSE Attestation + └──► Rekor Log Entry +``` + +- Every artifact is a **subject** in the registry +- SBOMs, VEX, verdicts attached as **OCI referrers** +- Multiple versioned, signed facts per image without altering the image + +--- + +## Data Flows + +### Evidence Flow + +``` +Workers ──► SBOM (CDX 1.7) ──► DSSE Sign ──► OCI Referrer ──► Registry + ├─► VEX Claims ──► DSSE Sign ──► OCI Referrer ──► + ├─► Reachability ──► DSSE Sign ──► OCI Referrer ──► + └─► All wrapped as in-toto attestations +``` + +### Verdict Flow + +``` +Policy Engine ──► Ingests SBOM/VEX/Reachability/Signals + ──► Applies rules 
(deterministic IR) + ──► Emits signed verdict + ──► Verdict attached via OCI referrer + ──► Replayable: same inputs → same output +``` + +### Audit Flow + +``` +Timeline ──► Captures all events (immutable) + ──► Links to attestation digests + ──► Enables replay and forensics +``` + +--- + +## Tenant Isolation + +| Layer | Mechanism | +|-------|-----------| +| Database | PostgreSQL RLS (Row-Level Security) | +| Application | AsyncLocal tenant context | +| Storage | Tenant-scoped paths | +| Crypto | Per-tenant keys & trust roots | +| Network | Tenant header propagation | + +--- + +## Minimal Day-1 Policy + +```yaml +rules: + # Block reachable HIGH/CRITICAL unless VEX says not_affected + - match: { severity: [CRITICAL, HIGH], reachability: reachable } + unless: { vexStatus: not_affected } + action: block + + # Fail on >5% unknowns + - match: { unknownsRatio: { gt: 0.05 } } + action: block + + # Require signed SBOM + verdict for production + - match: { environment: production } + require: { signedSbom: true, signedVerdict: true } +``` + +--- + +## SBOM Format Support + +| Format | Generation | Parsing | Notes | +|--------|------------|---------|-------| +| CycloneDX 1.7 | Yes | Yes | Primary format | +| CycloneDX 1.6 | - | Yes | Backward compat | +| SPDX 3.0.1 | Yes | Yes | Alternative format | +| SPDX 2.x | - | Yes | Import only | + +--- + +## Key Capabilities + +| Capability | Status | Notes | +|------------|--------|-------| +| Deterministic SBOMs | Complete | Same input → same output | +| Signed Verdicts | Complete | DSSE + in-toto | +| Replayable Verdicts | Complete | Content-addressed proofs | +| OCI Referrers | Complete | Subject digest model | +| Rekor Transparency | Complete | v2 tile-backed | +| Tenant Isolation | Complete | RLS + crypto separation | +| Air-Gap Operation | Complete | Offline bundles | +| CycloneDX 1.7 | Planned | Sprint 3600.0002 | +| SPDX 3.0.1 Generation | Planned | Sprint 3600.0003 | +| Gateway WebService | Planned | Sprint 3600.0001 | +| Proof Chain UI | Planned | Sprint 4200.0001 | + +--- + +## Quick Glossary + +| Term | Definition | +|------|------------| +| **SBOM** | Software Bill of Materials (what's inside) | +| **VEX** | Vulnerability Exploitability eXchange (is CVE relevant?) 
| +| **Reachability** | Graph proof that vulnerable code is (not) callable | +| **DSSE** | Dead Simple Signing Envelope | +| **in-toto** | Supply chain attestation framework | +| **OCI Referrers** | Registry mechanism to link artifacts to image digest | +| **OpTok** | Short-lived operation token from Authority | +| **DPoP** | Demonstrating Proof of Possession (RFC 9449) | + +--- + +## Implementation Sprints + +| Sprint | Title | Priority | +|--------|-------|----------| +| 3600.0001.0001 | Gateway WebService | HIGH | +| 3600.0002.0001 | CycloneDX 1.7 Upgrade | HIGH | +| 3600.0003.0001 | SPDX 3.0.1 Generation | MEDIUM | +| 4200.0001.0001 | Proof Chain Verification UI | HIGH | +| 5200.0001.0001 | Starter Policy Template | HIGH | + +--- + +## Audit Checklist + +- [ ] All SBOMs have DSSE signatures +- [ ] All verdicts have DSSE signatures +- [ ] Rekor log entries exist for production artifacts +- [ ] Tenant isolation verified (RLS + crypto) +- [ ] Replay tokens verify (same inputs → same verdict) +- [ ] Air-gap bundles include all evidence +- [ ] OCI referrers discoverable for all images + +--- + +**Source**: Reference Architecture Advisory (Dec 2025) +**Last Updated**: 2025-12-21 diff --git a/docs/product-advisories/unprocessed/17-Dec-2025 - Reachability Drift Detection.md b/docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md similarity index 100% rename from docs/product-advisories/unprocessed/17-Dec-2025 - Reachability Drift Detection.md rename to docs/product-advisories/17-Dec-2025 - Reachability Drift Detection.md diff --git a/docs/product-advisories/unprocessed/18-Dec-2025 - Designing Explainable Triage and Proof‑Linked Evidence.md b/docs/product-advisories/18-Dec-2025 - Designing Explainable Triage and Proof‑Linked Evidence.md similarity index 100% rename from docs/product-advisories/unprocessed/18-Dec-2025 - Designing Explainable Triage and Proof‑Linked Evidence.md rename to docs/product-advisories/18-Dec-2025 - Designing Explainable Triage and Proof‑Linked Evidence.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md b/docs/product-advisories/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md rename to docs/product-advisories/19-Dec-2025 - Benchmarking Container Scanners Against Stella Ops.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Stella Ops candidate features mapped to moat strength.md b/docs/product-advisories/19-Dec-2025 - Stella Ops candidate features mapped to moat strength.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Stella Ops candidate features mapped to moat strength.md rename to docs/product-advisories/19-Dec-2025 - Stella Ops candidate features mapped to moat strength.md diff --git a/docs/product-advisories/unprocessed/20-Dec-2025 - Layered binary + call‑stack reachability.md b/docs/product-advisories/20-Dec-2025 - Layered binary + call‑stack reachability.md similarity index 100% rename from docs/product-advisories/unprocessed/20-Dec-2025 - Layered binary + call‑stack reachability.md rename to docs/product-advisories/20-Dec-2025 - Layered binary + call‑stack reachability.md diff --git a/docs/product-advisories/unprocessed/21-Dec-2025 - Designing Explainable Triage Workflows.md b/docs/product-advisories/21-Dec-2025 - Designing Explainable Triage 
Workflows.md similarity index 100% rename from docs/product-advisories/unprocessed/21-Dec-2025 - Designing Explainable Triage Workflows.md rename to docs/product-advisories/21-Dec-2025 - Designing Explainable Triage Workflows.md diff --git a/docs/product-advisories/unprocessed/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md b/docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md similarity index 100% rename from docs/product-advisories/unprocessed/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md rename to docs/product-advisories/21-Dec-2025 - Smart Diff - Reproducibility as a Feature.md diff --git a/docs/product-advisories/archived/2025-12-21-binaryindex/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md b/docs/product-advisories/archived/2025-12-21-binaryindex/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md new file mode 100644 index 000000000..d9dca9566 Binary files /dev/null and b/docs/product-advisories/archived/2025-12-21-binaryindex/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md differ diff --git a/docs/product-advisories/archived/2025-12-21-binaryindex/README.md b/docs/product-advisories/archived/2025-12-21-binaryindex/README.md new file mode 100644 index 000000000..314065ecc --- /dev/null +++ b/docs/product-advisories/archived/2025-12-21-binaryindex/README.md @@ -0,0 +1,81 @@ +# Archived Advisory: Mapping Evidence Within Compiled Binaries + +**Original Advisory:** `21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md` +**Archived:** 2025-12-21 +**Status:** Converted to Implementation Plan + +--- + +## Summary + +This advisory proposed building a **Vulnerable Binaries Database** that enables detection of vulnerable code at the binary level, independent of package metadata. 
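+
+As a minimal illustration of the resulting query model (record and service
+shapes as defined in `docs/modules/binaryindex/architecture.md`; the wiring is
+hypothetical):
+
+```csharp
+// Identity-first lookup; fingerprint matching is the fallback tier.
+static async Task<IReadOnlyList<BinaryVulnMatch>> CheckBinaryAsync(
+    IBinaryVulnerabilityService service, CancellationToken ct)
+{
+    var identity = new BinaryIdentity(
+        Format: "elf",
+        BuildId: "abc123...",          // ELF GNU Build-ID
+        PeCodeViewGuid: null,
+        MachoUuid: null,
+        FileSha256: "def456...",
+        TextSectionSha256: "0a1b2c...");
+
+    return await service.LookupByIdentityAsync(identity, ct: ct);
+}
+```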
+ +## Implementation Artifacts Created + +### Architecture Documentation + +- `docs/modules/binaryindex/architecture.md` - Full module architecture +- `docs/db/schemas/binaries_schema_specification.md` - Database schema + +### Sprint Files + +**Summary:** +- `docs/implplan/SPRINT_6000_SUMMARY.md` - MVP roadmap overview + +**MVP 1: Known-Build Binary Catalog (Sprint 6000.0001)** +- `SPRINT_6000_0001_0001_binaries_schema.md` - PostgreSQL schema +- `SPRINT_6000_0001_0002_binary_identity_service.md` - Identity extraction +- `SPRINT_6000_0001_0003_debian_corpus_connector.md` - Debian/Ubuntu ingestion + +**MVP 2: Patch-Aware Backport Handling (Sprint 6000.0002)** +- `SPRINT_6000_0002_0001_fix_evidence_parser.md` - Changelog/patch parsing + +**MVP 3: Binary Fingerprint Factory (Sprint 6000.0003)** +- `SPRINT_6000_0003_0001_fingerprint_storage.md` - Fingerprint storage + +**MVP 4: Scanner Integration (Sprint 6000.0004)** +- `SPRINT_6000_0004_0001_scanner_integration.md` - Scanner.Worker integration + +## Key Decisions + +| Decision | Rationale | +|----------|-----------| +| New `BinaryIndex` module | Binary vulnerability DB is distinct concern from Scanner | +| Build-ID as primary key | Most deterministic identifier for ELF binaries | +| `binaries` PostgreSQL schema | Aligns with existing per-module schema pattern | +| Three-tier lookup | Assertions → Build-ID → Fingerprints for precision | +| Patch-aware fix index | Handles distro backports correctly | + +## Module Structure + +``` +src/BinaryIndex/ +├── StellaOps.BinaryIndex.WebService/ +├── StellaOps.BinaryIndex.Worker/ +├── __Libraries/ +│ ├── StellaOps.BinaryIndex.Core/ +│ ├── StellaOps.BinaryIndex.Persistence/ +│ ├── StellaOps.BinaryIndex.Corpus/ +│ ├── StellaOps.BinaryIndex.Corpus.Debian/ +│ ├── StellaOps.BinaryIndex.FixIndex/ +│ └── StellaOps.BinaryIndex.Fingerprints/ +└── __Tests/ +``` + +## Database Tables + +| Table | Purpose | +|-------|---------| +| `binaries.binary_identity` | Known binary identities | +| `binaries.binary_package_map` | Binary → package mapping | +| `binaries.vulnerable_buildids` | Vulnerable Build-IDs | +| `binaries.cve_fix_index` | Patch-aware fix status | +| `binaries.vulnerable_fingerprints` | Function fingerprints | +| `binaries.fingerprint_matches` | Scan match results | + +## References + +- Original advisory: This folder +- Architecture: `docs/modules/binaryindex/architecture.md` +- Schema: `docs/db/schemas/binaries_schema_specification.md` +- Sprints: `docs/implplan/SPRINT_6000_*.md` diff --git a/docs/product-advisories/14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Determinism and Reproducibility Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Determinism and Reproducibility Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Determinism and Reproducibility Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Determinism and Reproducibility Technical Reference.md diff --git 
a/docs/product-advisories/14-Dec-2025 - Developer Onboarding Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Developer Onboarding Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Developer Onboarding Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Developer Onboarding Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Offline and Air-Gap Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Offline and Air-Gap Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Offline and Air-Gap Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Offline and Air-Gap Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - PostgreSQL Patterns Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - PostgreSQL Patterns Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - PostgreSQL Patterns Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - PostgreSQL Patterns Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Proof and Evidence Chain Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Proof and Evidence Chain Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Proof and Evidence Chain Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Proof and Evidence Chain Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Reachability Analysis Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Reachability Analysis Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Reachability Analysis Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Reachability Analysis Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Rekor Integration Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Rekor Integration Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Rekor Integration Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Rekor Integration Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Smart-Diff Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Smart-Diff Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Smart-Diff Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Smart-Diff Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Testing and Quality Guardrails Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Testing and Quality Guardrails Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Testing and Quality Guardrails Technical Reference.md rename to 
docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Testing and Quality Guardrails Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Triage and Unknowns Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - Triage and Unknowns Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - Triage and Unknowns Technical Reference.md diff --git a/docs/product-advisories/14-Dec-2025 - UX and Time-to-Evidence Technical Reference.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - UX and Time-to-Evidence Technical Reference.md similarity index 100% rename from docs/product-advisories/14-Dec-2025 - UX and Time-to-Evidence Technical Reference.md rename to docs/product-advisories/archived/2025-12-21-moat-gap-closure/14-Dec-2025 - UX and Time-to-Evidence Technical Reference.md diff --git a/docs/product-advisories/archived/2025-12-21-moat-gap-closure/ARCHIVE_MANIFEST.md b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/ARCHIVE_MANIFEST.md new file mode 100644 index 000000000..a76f08925 --- /dev/null +++ b/docs/product-advisories/archived/2025-12-21-moat-gap-closure/ARCHIVE_MANIFEST.md @@ -0,0 +1,97 @@ +# MOAT Gap Closure Archive Manifest + +**Archive Date**: 2025-12-21 +**Archive Reason**: Product advisories processed and implementation gaps identified + +--- + +## Summary + +This archive contains 12 MOAT (Market-Oriented Architecture Transformation) product advisories that were analyzed against the StellaOps codebase. After thorough source code exploration, the implementation coverage was assessed at **~92%**. 
+ +--- + +## Implementation Coverage + +| Advisory Topic | Coverage | Notes | +|---------------|----------|-------| +| CVSS and Competitive Analysis | 100% | Full CVSS v4 engine, all attack complexity metrics | +| Determinism and Reproducibility | 100% | Stable ordering, hash chains, replayTokens, NDJSON | +| Developer Onboarding | 100% | AGENTS.md files, CLAUDE.md, module dossiers | +| Offline and Air-Gap | 100% | Bundle system, egress allowlists, offline sources | +| PostgreSQL Patterns | 100% | RLS, tenant isolation, schema per module | +| Proof and Evidence Chain | 100% | ProofSpine, DSSE envelopes, hash chaining | +| Reachability Analysis | 100% | CallGraphAnalyzer, AttackPathScorer, CodePathResult | +| Rekor Integration | 100% | RekorClient, transparency log publishing | +| Smart-Diff | 100% | MaterialRiskChangeDetector, hash-based diffing | +| Testing and Quality Guardrails | 100% | Testcontainers, benchmarks, truth schemas | +| UX and Time-to-Evidence | 100% | EvidencePanel, keyboard shortcuts, motion tokens | +| Triage and Unknowns | 75% | UnknownRanker exists, missing decay/containment | + +**Overall**: ~92% implementation coverage + +--- + +## Identified Gaps & Sprint References + +Three implementation gaps were identified and documented in sprints: + +### Gap 1: Decay Algorithm (Sprint 4000.0001.0001) +- **File**: `docs/implplan/SPRINT_4000_0001_0001_unknowns_decay_algorithm.md` +- **Scope**: Add time-based decay factor to UnknownRanker +- **Story Points**: 15 +- **Working Directory**: `src/Policy/__Libraries/StellaOps.Policy.Unknowns/` + +### Gap 2: BlastRadius & Containment (Sprint 4000.0001.0002) +- **File**: `docs/implplan/SPRINT_4000_0001_0002_unknowns_blast_radius_containment.md` +- **Scope**: Add BlastRadius and ContainmentSignals to ranking +- **Story Points**: 19 +- **Working Directory**: `src/Policy/__Libraries/StellaOps.Policy.Unknowns/` + +### Gap 3: EPSS Feed Connector (Sprint 4000.0002.0001) +- **File**: `docs/implplan/SPRINT_4000_0002_0001_epss_feed_connector.md` +- **Scope**: Create Concelier connector for orchestrated EPSS ingestion +- **Story Points**: 22 +- **Working Directory**: `src/Concelier/__Libraries/StellaOps.Concelier.Connector.Epss/` + +**Total Gap Closure Effort**: 56 story points + +--- + +## Archived Files (12) + +1. `14-Dec-2025 - CVSS and Competitive Analysis Technical Reference.md` +2. `14-Dec-2025 - Determinism and Reproducibility Technical Reference.md` +3. `14-Dec-2025 - Developer Onboarding Technical Reference.md` +4. `14-Dec-2025 - Offline and Air-Gap Technical Reference.md` +5. `14-Dec-2025 - PostgreSQL Patterns Technical Reference.md` +6. `14-Dec-2025 - Proof and Evidence Chain Technical Reference.md` +7. `14-Dec-2025 - Reachability Analysis Technical Reference.md` +8. `14-Dec-2025 - Rekor Integration Technical Reference.md` +9. `14-Dec-2025 - Smart-Diff Technical Reference.md` +10. `14-Dec-2025 - Testing and Quality Guardrails Technical Reference.md` +11. `14-Dec-2025 - Triage and Unknowns Technical Reference.md` +12. 
`14-Dec-2025 - UX and Time-to-Evidence Technical Reference.md` + +--- + +## Key Discoveries + +Features that were discovered to exist with different naming than expected: + +| Expected | Actual Implementation | +|----------|----------------------| +| FipsProfile, GostProfile, SmProfile | ComplianceProfiles (unified) | +| FindingsLedger.HashChain | Exists in FindingsSnapshot with replayTokens | +| Benchmark suite | Exists in `__Benchmarks/` directories | +| EvidencePanel | Exists in Web UI with motion tokens | + +--- + +## Post-Closure Target + +After completing the three gap-closure sprints: +- Implementation coverage: **95%+** +- All advisory requirements addressed +- Triage/Unknowns module fully featured + diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #1.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #1.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #1.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #1.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #2.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #2.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #2.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #2.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #3.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #3.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #3.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #3.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #4.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #4.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #4.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #4.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #5.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #5.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #5.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #5.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #6.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #6.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #6.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #6.md diff --git a/docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #7.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #7.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/19-Dec-2025 - Moat #7.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/19-Dec-2025 - Moat #7.md diff --git a/docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md similarity index 100% rename from 
docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md diff --git a/docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md diff --git a/docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time‑Travel Replay.md diff --git a/docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md similarity index 100% rename from docs/product-advisories/unprocessed/moats/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md rename to docs/product-advisories/archived/2025-12-21-moat-phase2/20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md diff --git a/docs/product-advisories/archived/2025-12-21-moat-phase2/ARCHIVE_MANIFEST.md b/docs/product-advisories/archived/2025-12-21-moat-phase2/ARCHIVE_MANIFEST.md new file mode 100644 index 000000000..bb485e79a --- /dev/null +++ b/docs/product-advisories/archived/2025-12-21-moat-phase2/ARCHIVE_MANIFEST.md @@ -0,0 +1,146 @@ +# MOAT Phase 2 Archive Manifest + +**Archive Date**: 2025-12-21 +**Archive Reason**: Product advisories processed and implementation gaps identified +**Epoch**: 4100 (MOAT Phase 2 - Governance & Replay) + +--- + +## Summary + +This archive contains 11 MOAT (Market-Oriented Architecture Transformation) product advisories from 19-Dec and 20-Dec 2025 that were analyzed against the StellaOps codebase. After thorough source code exploration, the implementation coverage was assessed at **~65% baseline** with sprints planned to reach **~90% target**. 
+ +--- + +## Gap Analysis (from 65% baseline) + +| Area | Current | Target | Gap | +|------|---------|--------|-----| +| Security Snapshots & Deltas | 55% | 90% | Unified snapshot, DeltaVerdict | +| Risk Verdict Attestations | 50% | 90% | RVA contract, OCI push | +| VEX Claims Resolution | 80% | 95% | JSON parsing, evidence providers | +| Unknowns First-Class | 60% | 95% | Reason codes, budgets, attestations | +| Knowledge Snapshots | 60% | 90% | Manifest, ReplayEngine | +| Risk Budgets & Gates | 20% | 80% | RP scoring, gate levels | + +--- + +## Sprint Structure (10 Sprints, 169 Story Points) + +### Batch 4100.0001: Unknowns Enhancement (40 pts) + +| Sprint | Topic | Points | Status | +|--------|-------|--------|--------| +| 4100.0001.0001 | Reason-Coded Unknowns | 15 | Planned | +| 4100.0001.0002 | Unknown Budgets & Env Thresholds | 13 | Planned | +| 4100.0001.0003 | Unknowns in Attestations | 12 | Planned | + +### Batch 4100.0002: Knowledge Snapshots & Replay (55 pts) + +| Sprint | Topic | Points | Status | +|--------|-------|--------|--------| +| 4100.0002.0001 | Knowledge Snapshot Manifest | 18 | Planned | +| 4100.0002.0002 | Replay Engine | 22 | Planned | +| 4100.0002.0003 | Snapshot Export/Import | 15 | Planned | + +### Batch 4100.0003: Risk Verdict & OCI (34 pts) + +| Sprint | Topic | Points | Status | +|--------|-------|--------|--------| +| 4100.0003.0001 | Risk Verdict Attestation Contract | 16 | Planned | +| 4100.0003.0002 | OCI Referrer Push & Discovery | 18 | Planned | + +### Batch 4100.0004: Deltas & Gates (38 pts) + +| Sprint | Topic | Points | Status | +|--------|-------|--------|--------| +| 4100.0004.0001 | Security State Delta & Verdict | 20 | Planned | +| 4100.0004.0002 | Risk Budgets & Gate Levels | 18 | Planned | + +--- + +## Sprint File References + +| Sprint | File | +|--------|------| +| 4100.0001.0001 | `docs/implplan/SPRINT_4100_0001_0001_reason_coded_unknowns.md` | +| 4100.0001.0002 | `docs/implplan/SPRINT_4100_0001_0002_unknown_budgets.md` | +| 4100.0001.0003 | `docs/implplan/SPRINT_4100_0001_0003_unknowns_attestations.md` | +| 4100.0002.0001 | `docs/implplan/SPRINT_4100_0002_0001_knowledge_snapshot_manifest.md` | +| 4100.0002.0002 | `docs/implplan/SPRINT_4100_0002_0002_replay_engine.md` | +| 4100.0002.0003 | `docs/implplan/SPRINT_4100_0002_0003_snapshot_export_import.md` | +| 4100.0003.0001 | `docs/implplan/SPRINT_4100_0003_0001_risk_verdict_attestation.md` | +| 4100.0003.0002 | `docs/implplan/SPRINT_4100_0003_0002_oci_referrer_push.md` | +| 4100.0004.0001 | `docs/implplan/SPRINT_4100_0004_0001_security_state_delta.md` | +| 4100.0004.0002 | `docs/implplan/SPRINT_4100_0004_0002_risk_budgets_gates.md` | + +--- + +## Archived Files (11) + +### 19-Dec-2025 Moat Advisories (7) + +1. `19-Dec-2025 - Moat #1.md` +2. `19-Dec-2025 - Moat #2.md` +3. `19-Dec-2025 - Moat #3.md` +4. `19-Dec-2025 - Moat #4.md` +5. `19-Dec-2025 - Moat #5.md` +6. `19-Dec-2025 - Moat #6.md` +7. `19-Dec-2025 - Moat #7.md` + +### 20-Dec-2025 Moat Explanation Advisories (4) + +8. `20-Dec-2025 - Moat Explanation - Exception management as auditable objects.md` +9. `20-Dec-2025 - Moat Explanation - Guidelines for Product and Development Managers - Signed, Replayable Risk Verdicts.md` +10. `20-Dec-2025 - Moat Explanation - Knowledge Snapshots and Time-Travel Replay.md` +11. 
`20-Dec-2025 - Moat Explanation - Risk Budgets and Diff-Aware Release Gates.md` + +--- + +## Key New Concepts + +| Concept | Description | Sprint | +|---------|-------------|--------| +| UnknownReasonCode | 7 reason codes: U-RCH, U-ID, U-PROV, U-VEX, U-FEED, U-CONFIG, U-ANALYZER | 4100.0001.0001 | +| UnknownBudget | Environment-aware thresholds (prod: block, stage: warn, dev: warn_only) | 4100.0001.0002 | +| KnowledgeSnapshotManifest | Content-addressed bundle (ksm:sha256:{hash}) | 4100.0002.0001 | +| ReplayEngine | Time-travel replay with frozen inputs for determinism verification | 4100.0002.0002 | +| RiskVerdictAttestation | PASS/FAIL/PASS_WITH_EXCEPTIONS/INDETERMINATE verdicts | 4100.0003.0001 | +| OCI Referrer Push | OCI 1.1 referrers API with fallback to tagged indexes | 4100.0003.0002 | +| SecurityStateDelta | Baseline vs target comparison with DeltaVerdict | 4100.0004.0001 | +| GateLevel | G0-G4 diff-aware release gates with RP scoring | 4100.0004.0002 | + +--- + +## Recommended Parallel Execution + +``` +Phase 1: 4100.0001.0001 + 4100.0002.0001 + 4100.0003.0001 + 4100.0004.0002 +Phase 2: 4100.0001.0002 + 4100.0002.0002 + 4100.0003.0002 +Phase 3: 4100.0001.0003 + 4100.0002.0003 + 4100.0004.0001 +``` + +--- + +## Success Criteria + +| Metric | Target | +|--------|--------| +| Reason-coded unknowns | 7 codes implemented | +| Unknown budget tests | 5+ passing | +| Knowledge snapshot tests | 8+ passing | +| Replay engine golden tests | 10+ passing | +| RVA verification tests | 6+ passing | +| OCI push integration tests | 4+ passing | +| Delta computation tests | 6+ passing | +| Overall MOAT coverage | 85%+ | + +--- + +## Post-Closure Target + +After completing all 10 sprints: +- Implementation coverage: **90%+** +- All Phase 2 advisory requirements addressed +- Full governance and replay capabilities +- Risk budgets and gate levels operational diff --git a/docs/product-advisories/unprocessed/20-Dec-2025 - Stella Ops Reference Architecture, Dec 2025.md b/docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md similarity index 100% rename from docs/product-advisories/unprocessed/20-Dec-2025 - Stella Ops Reference Architecture, Dec 2025.md rename to docs/product-advisories/archived/2025-12-21-reference-architecture/20-Dec-2025 - Stella Ops Reference Architecture.md diff --git a/docs/product-advisories/unprocessed/20-Dec-2025 - Testing strategy.md b/docs/product-advisories/archived/2025-12-21-testing-strategy/20-Dec-2025 - Testing strategy.md similarity index 86% rename from docs/product-advisories/unprocessed/20-Dec-2025 - Testing strategy.md rename to docs/product-advisories/archived/2025-12-21-testing-strategy/20-Dec-2025 - Testing strategy.md index 64a0d2aab..9480bfbc5 100644 --- a/docs/product-advisories/unprocessed/20-Dec-2025 - Testing strategy.md +++ b/docs/product-advisories/archived/2025-12-21-testing-strategy/20-Dec-2025 - Testing strategy.md @@ -1,12 +1,12 @@ -Here’s a compact, practical plan to harden Stella Ops around **offline‑ready security evidence and deterministic verdicts**, with just enough background so it all clicks. +Here's a compact, practical plan to harden Stella Ops around **offline‑ready security evidence and deterministic verdicts**, with just enough background so it all clicks. --- # Why this matters (quick primer) -* **Air‑gapped/offline**: Many customers can’t reach public feeds or registries. 
Your scanners, SBOM tooling, and attestations must work with **pre‑synced bundles** and prove what data they used. +* **Air‑gapped/offline**: Many customers can't reach public feeds or registries. Your scanners, SBOM tooling, and attestations must work with **pre‑synced bundles** and prove what data they used. * **Interoperability**: Teams mix tools (Syft/Grype/Trivy, cosign, CycloneDX/SPDX). Your CI should **round‑trip** SBOMs and attestations end‑to‑end and prove that downstream consumers (e.g., Grype) can load them. -* **Determinism**: Auditors expect **“same inputs → same verdict.”** Capture inputs, policies, and feed hashes so a verdict is exactly reproducible later. +* **Determinism**: Auditors expect **"same inputs → same verdict."** Capture inputs, policies, and feed hashes so a verdict is exactly reproducible later. * **Operational guardrails**: Shipping gates should fail early on **unknowns** and apply **backpressure** gracefully when load spikes. --- @@ -15,14 +15,14 @@ Here’s a compact, practical plan to harden Stella Ops around **offline‑rea 1. **Air‑gapped operation e2e** -* Package “offline bundle” (vuln feeds, package catalogs, policy/lattice rules, certs, keys). +* Package "offline bundle" (vuln feeds, package catalogs, policy/lattice rules, certs, keys). * Run scans (containers, OS, language deps, binaries) **without network**. * Assert: SBOMs generated, attestations signed/verified, verdicts emitted. * Evidence: manifest of bundle contents + hashes in the run log. 2. **Interop round‑trips (SBOM ⇄ attestation ⇄ scanner)** -* Produce SBOM (CycloneDX 1.6 and SPDX 3.0.1) with Syft. +* Produce SBOM (CycloneDX 1.6 and SPDX 3.0.1) with Syft. * Create **DSSE/cosign** attestation for that SBOM. * Verify consumer tools: @@ -33,11 +33,11 @@ Here’s a compact, practical plan to harden Stella Ops around **offline‑rea 3. **Replayability (delta‑verdicts + strict replay)** * Store input set: artifact digest(s), SBOM digests, policy version, feed digests, lattice rules, tool versions. -* Re‑run later; assert **byte‑identical verdict** and same “delta‑verdict” when inputs unchanged. +* Re‑run later; assert **byte‑identical verdict** and same "delta‑verdict" when inputs unchanged. 4. **Unknowns‑budget policy gates** -* Inject controlled “unknown” conditions (missing CPE mapping, unresolved package source, unparsed distro). +* Inject controlled "unknown" conditions (missing CPE mapping, unresolved package source, unparsed distro). * Gate: **fail build if unknowns > budget** (e.g., prod=0, staging≤N). * Assert: UI, CLI, and attestation all record unknown counts and gate decision. @@ -45,7 +45,7 @@ Here’s a compact, practical plan to harden Stella Ops around **offline‑rea * Produce: build‑provenance (in‑toto/DSSE), SBOM attest, VEX attest, final **verdict attest**. * Verify: signature (cosign), certificate chain, time‑stamping, Rekor‑style (or mirror) inclusion when online; cached proofs when offline. -* Assert: each attestation is linked in the verdict’s evidence index. +* Assert: each attestation is linked in the verdict's evidence index. 6. **Router backpressure chaos (HTTP 429/503 + Retry‑After)** @@ -55,7 +55,7 @@ Here’s a compact, practical plan to harden Stella Ops around **offline‑rea 7. **UI reducer tests for reachability & VEX chips** * Component tests: large SBOM graphs, focused **reachability subgraphs**, and VEX status chips (affected/not‑affected/under‑investigation). -* Assert: stable rendering under 50k+ nodes; interactions remain <200 ms. 

 ---

@@ -95,7 +95,7 @@ Here’s a compact, practical plan to harden Stella Ops around **offline‑rea
 * Router under burst emits **correct Retry‑After** and recovers cleanly.
 * UI handles huge graphs; VEX chips never desync from evidence.

-If you want, I’ll turn this into GitLab/Gitea pipeline YAML + a tiny sample repo (image, SBOM, policies, and goldens) so your team can plug‑and‑play.
+If you want, I'll turn this into GitLab/Gitea pipeline YAML + a tiny sample repo (image, SBOM, policies, and goldens) so your team can plug‑and‑play.

 Below is a complete, end-to-end testing strategy for Stella Ops that turns your moats (offline readiness, deterministic replayable verdicts, lattice/policy decisioning, attestation provenance, unknowns budgets, router backpressure, UI reachability evidence) into continuously verified guarantees.

 ---

@@ -124,21 +124,21 @@ A scan/verdict is *deterministic* iff **same inputs → byte-identical outputs**

 ### 1.2 Offline by default

-Every CI job (except explicitly tagged “online”) runs with **no egress**.
+Every CI job (except explicitly tagged "online") runs with **no egress**.

 * Offline bundle is mandatory input for scanning.
 * Any attempted network call fails the test (proves air-gap compliance).

 ### 1.3 Evidence-first validation

-No assertion is “verdict == pass” without verifying the chain of evidence:
+No assertion is "verdict == pass" without verifying the chain of evidence:

 * verdict references SBOM digest(s)
 * SBOM references artifact digest(s)
 * VEX claims reference vulnerabilities + components + reachability evidence
 * attestations verify cryptographically and chain to configured roots.

-### 1.4 Interop is required, not “nice to have”
+### 1.4 Interop is required, not "nice to have"

 Stella Ops must round-trip with:

@@ -146,19 +146,19 @@ Stella Ops must round-trip with:
 * Attestation: DSSE / in-toto style envelopes, cosign-compatible flows
 * Consumer scanners: at least Grype from SBOM; ideally Trivy as cross-check

-Interop tests are treated as “compatibility contracts” and block releases.
+Interop tests are treated as "compatibility contracts" and block releases.

 ### 1.5 Architectural boundary enforcement (your standing rule)

 * Lattice/policy merge algorithms run **in `scanner.webservice`**.
-* `Concelier` and `Excitors` must “preserve prune source”.
+* `Concelier` and `Excititor` must "preserve prune source".

 This is enforced with tests that detect forbidden behavior (see §6.2).

 ---

 ## 2) The test portfolio (what kinds of tests exist)

-Think “coverage by risk”, not “coverage by lines”.
+Think "coverage by risk", not "coverage by lines".

 ### 2.1 Test layers and what they prove

@@ -172,9 +172,9 @@ Think “coverage by risk”, not “coverage by lines”.

 2. **Property-based tests** (FsCheck)

-* “Reordering inputs does not change verdict hash”
-* “Graph merge is associative/commutative where policy declares it”
-* “Unknowns budgets always monotonic with missing evidence”
+* "Reordering inputs does not change verdict hash" (see the FsCheck sketch below)
+* "Graph merge is associative/commutative where policy declares it"
+* "Unknowns budgets always monotonic with missing evidence"
 * Parser robustness: arbitrary JSON for SBOM/VEX envelopes never crashes

 3. **Component tests** (service + Postgres; optional Valkey)

@@ -194,7 +194,7 @@ Think "coverage by risk", not "coverage by lines".
 * Router → scanner.webservice → attestor → storage
 * Offline bundle import/export
-* Knowledge snapshot “time travel” replay pipeline
+* Knowledge snapshot "time travel" replay pipeline

 6. **End-to-end tests** (realistic flows)

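+The reordering property referenced above, sketched with FsCheck; the order-insensitive `VerdictHash` canonicalization is a stand-in for the real one:
+
+```csharp
+using System;
+using System.Linq;
+using System.Security.Cryptography;
+using System.Text;
+using FsCheck.Xunit;
+
+public class VerdictHashProperties
+{
+    // Stand-in canonicalization: sort inputs before hashing so order is irrelevant.
+    private static string VerdictHash(int[] componentIds)
+    {
+        var canonical = string.Join(",", componentIds.OrderBy(id => id));
+        return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(canonical)));
+    }
+
+    [Property]
+    public bool Reordering_inputs_does_not_change_verdict_hash(int[] componentIds)
+    {
+        componentIds ??= Array.Empty<int>(); // guard against generated nulls
+        var reordered = componentIds.Reverse().ToArray();
+        return VerdictHash(componentIds) == VerdictHash(reordered);
+    }
+}
+```
+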
@@ -224,10 +224,10 @@ Both must pass.

 ### 3.2 Environment isolation

-* Containers started with **no network** unless a test explicitly declares “online”.
+* Containers started with **no network** unless a test explicitly declares "online".
 * For Kubernetes e2e: apply a default-deny egress NetworkPolicy.

-### 3.3 Golden corpora repository (your “truth set”)
+### 3.3 Golden corpora repository (your "truth set")

 Create a versioned `stellaops-test-corpus/` containing:

@@ -285,7 +285,7 @@ Bundle includes:
 * crypto provider modules (for sovereign readiness)
 * optional: Rekor mirror snapshot / inclusion proofs cache

-**Test invariant:** offline scan is blocked if bundle is missing required parts; error is explicit and counts as “unknown” only where policy says so.
+**Test invariant:** offline scan is blocked if bundle is missing required parts; error is explicit and counts as "unknown" only where policy says so.

 ### 4.3 Evidence Index

@@ -295,7 +295,7 @@ The verdict is not the product; the product is verdict + evidence graph:
 * their digests and verification status
 * unknowns list with codes + remediation hints

-**Test invariant:** every “not affected” claim has required evidence hooks per policy (“because feature flag off” etc.), otherwise becomes unknown/fail.
+**Test invariant:** every "not affected" claim has required evidence hooks per policy ("because feature flag off" etc.), otherwise becomes unknown/fail.

 ---

@@ -333,8 +333,8 @@ These are your release blockers.
 * Assertions:

   * verdict bytes identical
-  * evidence index identical (except allowed “execution metadata” section)
-  * delta verdict is “empty delta”
+  * evidence index identical (except allowed "execution metadata" section)
+  * delta verdict is "empty delta"

 ### Flow D: Diff-aware delta verdict (smart-diff)

@@ -366,7 +366,7 @@ These are your release blockers.
 * clients backoff; no request loss
 * metrics expose throttling reasons

-### Flow G: Evidence export (“audit pack”)
+### Flow G: Evidence export ("audit pack")

 * Run scan
 * Export a sealed audit pack (bundle + run manifest + evidence + verdict)

@@ -390,16 +390,16 @@ Must have:

 **Critical invariant tests:**

-* “Vendor > distro > internal” must be demonstrably *configurable*, and wrong merges must fail deterministically.
+* "Vendor > distro > internal" must be demonstrably *configurable*, and wrong merges must fail deterministically.

 ### 6.2 Boundary enforcement: Concelier & Excititor preserve prune source

-Add a “behavioral boundary suite”:
+Add a "behavioral boundary suite":

 * instrument events/telemetry that records where merges happened
 * feed in conflicting VEX claims and assert:

-  * Concelier/Excitors do not resolve conflicts; they retain provenance and “prune source”
+  * Concelier/Excititor do not resolve conflicts; they retain provenance and "prune source"
 * only `scanner.webservice` produces the final merged semantics

 If Concelier/Excititor output a resolved claim, the test fails.
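+
+In sketch form (the `VexClaim` record and `ConcelierIngest` hook are hypothetical stand-ins, not the real ingestion API):
+
+```csharp
+using System.Collections.Generic;
+using System.Linq;
+using Xunit;
+
+public class BoundaryEnforcementTests
+{
+    private sealed record VexClaim(string VulnId, string Status, string Source);
+
+    // Expected behavior under test: pass claims through, never merge them.
+    private static IReadOnlyList<VexClaim> ConcelierIngest(IEnumerable<VexClaim> claims) =>
+        claims.ToList();
+
+    [Fact]
+    public void Concelier_preserves_conflicting_claims_and_provenance()
+    {
+        var conflicting = new[]
+        {
+            new VexClaim("CVE-2025-0001", "affected", "vendor"),
+            new VexClaim("CVE-2025-0001", "not_affected", "distro"),
+        };
+
+        var stored = ConcelierIngest(conflicting);
+
+        // Both claims survive with sources intact; resolution belongs to scanner.webservice.
+        Assert.Equal(2, stored.Count);
+        Assert.Contains(stored, c => c.Source == "vendor");
+        Assert.Contains(stored, c => c.Source == "distro");
+    }
+}
+```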
@@ -439,7 +439,7 @@ Define standard workloads:
 * small image (200 packages)
 * medium (2k packages)
 * large (20k+ packages)
-* “monorepo container” worst case (50k+ nodes graph)
+* "monorepo container" worst case (50k+ nodes graph)

 Metrics collected:

@@ -529,7 +529,7 @@ Release candidate is blocked if any of these fail:

 ### Phase 2: Offline e2e + interop

-* offline bundle builder + strict “no egress” enforcement
+* offline bundle builder + strict "no egress" enforcement
 * SBOM attestation round-trip + consumer parsing suite

 ### Phase 3: Unknowns budgets + delta verdict

@@ -556,7 +556,7 @@ If you do only three things, do these:

 1. **Run Manifest** as first-class test artifact
 2. **Golden corpus** that pins all digests (feeds, policies, images, expected outputs)
-3. **“No egress” default** in CI with explicit opt-in for online tests
+3. **"No egress" default** in CI with explicit opt-in for online tests

 Everything else becomes far easier once these are in place.
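+
+Item 3 becomes verifiable with a canary test that must fail to connect; a minimal sketch (the target is the reserved TEST-NET-3 address, any routable endpoint would serve):
+
+```csharp
+using System;
+using System.Net.Sockets;
+using System.Threading;
+using System.Threading.Tasks;
+using Xunit;
+
+public class NoEgressTests
+{
+    // In a correctly sandboxed CI job this outbound attempt must fail fast.
+    [Fact]
+    public async Task Outbound_connections_are_blocked()
+    {
+        using var client = new TcpClient();
+        await Assert.ThrowsAnyAsync<Exception>(async () =>
+        {
+            using var cts = new CancellationTokenSource(TimeSpan.FromSeconds(2));
+            await client.ConnectAsync("203.0.113.1", 443, cts.Token);
+        });
+    }
+}
+```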
diff --git a/docs/product-advisories/archived/2025-12-21-testing-strategy/README.md b/docs/product-advisories/archived/2025-12-21-testing-strategy/README.md
new file mode 100644
index 000000000..9ff70962a
--- /dev/null
+++ b/docs/product-advisories/archived/2025-12-21-testing-strategy/README.md
@@ -0,0 +1,56 @@
+# Archived Advisory: Testing Strategy
+
+**Archived**: 2025-12-21
+**Original**: `docs/product-advisories/20-Dec-2025 - Testing strategy.md`
+
+## Processing Summary
+
+This advisory was processed into Sprint Epic 5100 - Comprehensive Testing Strategy.
+
+### Artifacts Created
+
+**Sprint Files** (12 sprints, ~75 tasks):
+
+| Sprint | Name | Phase |
+|--------|------|-------|
+| 5100.0001.0001 | Run Manifest Schema | Phase 0 |
+| 5100.0001.0002 | Evidence Index Schema | Phase 0 |
+| 5100.0001.0003 | Offline Bundle Manifest | Phase 0 |
+| 5100.0001.0004 | Golden Corpus Expansion | Phase 0 |
+| 5100.0002.0001 | Canonicalization Utilities | Phase 1 |
+| 5100.0002.0002 | Replay Runner Service | Phase 1 |
+| 5100.0002.0003 | Delta-Verdict Generator | Phase 1 |
+| 5100.0003.0001 | SBOM Interop Round-Trip | Phase 2 |
+| 5100.0003.0002 | No-Egress Enforcement | Phase 2 |
+| 5100.0004.0001 | Unknowns Budget CI Gates | Phase 3 |
+| 5100.0005.0001 | Router Chaos Suite | Phase 4 |
+| 5100.0006.0001 | Audit Pack Export/Import | Phase 5 |
+
+**Documentation Updated**:
+- `docs/implplan/SPRINT_5100_SUMMARY.md` - Master epic summary
+- `docs/19_TEST_SUITE_OVERVIEW.md` - Test suite documentation
+- `tests/AGENTS.md` - AI agent guidance for tests directory
+
+### Key Concepts Implemented
+
+1. **Deterministic Replay**: Run Manifests capture all inputs for byte-identical verdict reproduction
+2. **Canonical JSON**: RFC 8785 principles for stable serialization
+3. **Evidence Index**: Linking verdicts to complete evidence chain
+4. **Air-Gap Compliance**: Network-isolated testing with `--network none`
+5. **SBOM Interoperability**: Round-trip testing with Syft, Grype, cosign
+6. **Unknowns Budget Gates**: Environment-based budget enforcement
+7. **Router Backpressure**: HTTP 429/503 with Retry-After validation
+8. **Audit Packs**: Sealed export/import for compliance verification
+
+### Release Blocking Gates
+
+- Replay Verification: 0 byte diff
+- Interop Suite: 95%+ findings parity
+- Offline E2E: All pass with no network
+- Unknowns Budget: Within configured limits
+- Router Retry-After: 100% compliance
+
+---
+
+*Processed by: Claude Code*
+*Date: 2025-12-21*
diff --git a/docs/product-advisories/unprocessed/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md b/docs/product-advisories/archived/2025-12-22-binaryindex/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md
similarity index 100%
rename from docs/product-advisories/unprocessed/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md
rename to docs/product-advisories/archived/2025-12-22-binaryindex/21-Dec-2025 - Mapping Evidence Within Compiled Binaries.md
diff --git a/docs/product-advisories/unprocessed/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md b/docs/product-advisories/archived/2025-12-22-ux-sprints/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md
similarity index 100%
rename from docs/product-advisories/unprocessed/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md
rename to docs/product-advisories/archived/2025-12-22-ux-sprints/16-Dec-2025 - Reimagining Proof‑Linked UX in Security Workflows.md
diff --git a/docs/product-advisories/unprocessed/20-Dec-2025 - Branch · UX patterns worth borrowing from top scanners.md b/docs/product-advisories/archived/2025-12-22-ux-sprints/20-Dec-2025 - Branch · UX patterns worth borrowing from top scanners.md
similarity index 100%
rename from docs/product-advisories/unprocessed/20-Dec-2025 - Branch · UX patterns worth borrowing from top scanners.md
rename to docs/product-advisories/archived/2025-12-22-ux-sprints/20-Dec-2025 - Branch · UX patterns worth borrowing from top scanners.md
diff --git a/docs/product-advisories/unprocessed/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md b/docs/product-advisories/archived/2025-12-22-ux-sprints/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md
similarity index 100%
rename from docs/product-advisories/unprocessed/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md
rename to docs/product-advisories/archived/2025-12-22-ux-sprints/21-Dec-2025 - How Top Scanners Shape Evidence‑First UX.md
diff --git a/tests/AGENTS.md b/tests/AGENTS.md
new file mode 100644
index 000000000..32a474e9e
--- /dev/null
+++ b/tests/AGENTS.md
@@ -0,0 +1,189 @@
+# tests/AGENTS.md
+
+## Overview
+
+This document provides guidance for AI agents and developers working in the `tests/` directory of the StellaOps codebase.
+
+## Directory Structure
+
+```
+tests/
+├── acceptance/          # Acceptance test suites
+├── AirGap/              # Air-gap specific tests
+├── authority/           # Authority module tests
+├── chaos/               # Chaos engineering tests
+├── e2e/                 # End-to-end test suites
+├── EvidenceLocker/      # Evidence storage tests
+├── fixtures/            # Shared test fixtures
+│   ├── offline-bundle/  # Offline bundle for air-gap tests
+│   ├── images/          # Container image tarballs
+│   └── sboms/           # Sample SBOM documents
+├── Graph/               # Graph module tests
+├── integration/         # Integration test suites
+├── interop/             # Interoperability tests
+├── load/                # Load testing scripts
+├── native/              # Native code tests
+├── offline/             # Offline operation tests
+├── plugins/             # Plugin tests
+├── Policy/              # Policy module tests
+├── Provenance/          # Provenance/attestation tests
+├── reachability/        # Reachability analysis tests
+├── Replay/              # Replay functionality tests
+├── security/            # Security tests (OWASP)
+├── shared/              # Shared test utilities
+└── Vex/                 # VEX processing tests
+```
+
+## Test Categories
+
+When writing tests, use the appropriate category traits:
+
+```csharp
+[Trait("Category", "Unit")]          // Fast, isolated unit tests
+[Trait("Category", "Integration")]   // Tests requiring infrastructure
+[Trait("Category", "E2E")]           // Full end-to-end workflows
+[Trait("Category", "AirGap")]        // Must work without network
+[Trait("Category", "Interop")]       // Third-party tool compatibility
+[Trait("Category", "Performance")]   // Performance benchmarks
+[Trait("Category", "Chaos")]         // Failure injection tests
+[Trait("Category", "Security")]      // Security-focused tests
+```
+
+## Key Patterns
+
+### 1. PostgreSQL Integration Tests
+
+Use the shared fixture from `StellaOps.Infrastructure.Postgres.Testing`:
+
+```csharp
+public class MyIntegrationTests : IClassFixture<MyPostgresFixture>
+{
+    private readonly MyPostgresFixture _fixture;
+
+    public MyIntegrationTests(MyPostgresFixture fixture)
+    {
+        _fixture = fixture;
+    }
+
+    [Fact]
+    public async Task MyTest()
+    {
+        // _fixture.ConnectionString is available
+        // _fixture.TruncateAllTablesAsync() for cleanup
+    }
+}
+```
+
+### 2. Air-Gap Tests
+
+Inherit from `NetworkIsolatedTestBase` for network-free tests:
+
+```csharp
+[Trait("Category", "AirGap")]
+public class OfflineTests : NetworkIsolatedTestBase
+{
+    [Fact]
+    public async Task Test_WorksOffline()
+    {
+        // Test implementation
+        AssertNoNetworkCalls(); // Fails if network accessed
+    }
+
+    protected string GetOfflineBundlePath() =>
+        Path.Combine(AppContext.BaseDirectory, "fixtures", "offline-bundle");
+}
+```
+
+### 3. Determinism Tests
+
+Use `DeterminismVerifier` to ensure reproducibility:
+
+```csharp
+[Fact]
+public void Output_IsDeterministic()
+{
+    var verifier = new DeterminismVerifier();
+    var result = verifier.Verify(myObject, iterations: 10);
+
+    result.IsDeterministic.Should().BeTrue();
+}
+```
+
+### 4. Golden Corpus Tests
+
+Reference cases from `bench/golden-corpus/`:
+
+```csharp
+[Theory]
+[MemberData(nameof(GetCorpusCases))]
+public async Task Corpus_Case_Passes(string caseId)
+{
+    var testCase = CorpusLoader.Load(caseId);
+    var result = await ProcessAsync(testCase.Input);
+    result.Should().BeEquivalentTo(testCase.Expected);
+}
+```
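+
+### 5. Digest-Pinned Fixtures (illustrative)
+
+A minimal sketch for verifying that a fixture still matches its pinned digest; the fixture path and the pinned value below are placeholders, not real corpus entries:
+
+```csharp
+using System;
+using System.IO;
+using System.Security.Cryptography;
+using Xunit;
+
+public class CorpusPinningTests
+{
+    [Fact]
+    public void Fixture_digest_matches_pinned_value()
+    {
+        // Path and pinned hash are illustrative; real values come from the corpus manifest.
+        var path = Path.Combine(AppContext.BaseDirectory, "fixtures", "sboms", "sample.cdx.json");
+        var actual = Convert.ToHexString(SHA256.HashData(File.ReadAllBytes(path)));
+
+        const string pinned = "0000000000000000000000000000000000000000000000000000000000000000";
+        Assert.Equal(pinned, actual, ignoreCase: true);
+    }
+}
+```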
+
+## Rules for Test Development
+
+### DO:
+
+1. **Tag tests with appropriate categories** for filtering
+2. **Use Testcontainers** for infrastructure dependencies
+3. **Inherit from shared fixtures** to avoid duplication
+4. **Assert no network calls** in air-gap tests
+5. **Verify determinism** for any serialization output
+6. **Use property-based tests** (FsCheck) for invariants
+7. **Document test purpose** in method names
+
+### DON'T:
+
+1. **Don't skip tests** without documenting why
+2. **Don't use Thread.Sleep** - use proper async waits
+3. **Don't hardcode paths** - use `AppContext.BaseDirectory`
+4. **Don't make network calls** in non-interop tests
+5. **Don't depend on test execution order**
+6. **Don't leave test data in shared databases**
+
+## Test Infrastructure
+
+### Required Services (CI)
+
+```yaml
+services:
+  postgres:
+    image: postgres:16-alpine
+    env:
+      POSTGRES_PASSWORD: test
+  valkey:
+    image: valkey/valkey:7-alpine
+```
+
+### Environment Variables
+
+| Variable | Purpose | Default |
+|----------|---------|---------|
+| `STELLAOPS_OFFLINE_MODE` | Enable offline mode | `false` |
+| `STELLAOPS_OFFLINE_BUNDLE` | Path to offline bundle | - |
+| `STELLAOPS_TEST_POSTGRES` | PostgreSQL connection | Testcontainers |
+| `STELLAOPS_TEST_VALKEY` | Valkey connection | Testcontainers |
+
+## Related Sprints
+
+| Sprint | Topic |
+|--------|-------|
+| 5100.0001.0001 | Run Manifest Schema |
+| 5100.0001.0002 | Evidence Index Schema |
+| 5100.0001.0004 | Golden Corpus Expansion |
+| 5100.0002.0001 | Canonicalization Utilities |
+| 5100.0002.0002 | Replay Runner Service |
+| 5100.0003.0001 | SBOM Interop Round-Trip |
+| 5100.0003.0002 | No-Egress Enforcement |
+| 5100.0005.0001 | Router Chaos Suite |
+
+## Contact
+
+For test infrastructure questions, see:
+- `docs/19_TEST_SUITE_OVERVIEW.md`
+- `docs/implplan/SPRINT_5100_SUMMARY.md`
+- Sprint files in `docs/implplan/SPRINT_5100_*.md`