Merge branch 'main' of https://git.stella-ops.org/stella-ops.org/git.stella-ops.org
Some checks failed
AOC Guard CI / aoc-guard (push) Has been cancelled
AOC Guard CI / aoc-verify (push) Has been cancelled
Docs CI / lint-and-preview (push) Has been cancelled
Policy Lint & Smoke / policy-lint (push) Has been cancelled
api-governance / spectral-lint (push) Has been cancelled
oas-ci / oas-validate (push) Has been cancelled
Policy Simulation / policy-simulate (push) Has been cancelled
sdk-generator-smoke / sdk-smoke (push) Has been cancelled
SDK Publish & Sign / sdk-publish (push) Has been cancelled

StellaOps Bot · 2025-11-27 21:45:32 +02:00
510 changed files with 138401 additions and 51276 deletions


@@ -1,63 +1,74 @@
# Sprint 0127-0001-0001 · Policy & Reasoning (Policy Engine phase V)
## Topic & Scope
- Policy Engine V: reachability integration, telemetry, incident mode, and initial RiskProfile schema work.
- **Working directory:** `src/Policy/StellaOps.Policy.Engine` and `src/Policy/__Libraries/StellaOps.Policy.RiskProfile`.
## Dependencies & Concurrency
- Upstream: Sprint 120.C Policy.IV must land.
- Concurrency: execute tasks in listed order; see the Delivery Tracker for current task states.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/policy/architecture.md`
## Delivery Tracker
| # | Task ID & handle | State | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-POLICY-RISK-66-001-RISKPROFILE-LIBRARY-S | DONE (2025-11-22) | Due 2025-11-22 · Accountable: Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | RiskProfile library scaffold absent (`src/Policy/StellaOps.Policy.RiskProfile` contains only AGENTS.md); need project + storage contract to place schema/validators. <br><br> Document artefact/deliverable for POLICY-RISK-66-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/policy/prep/2025-11-20-riskprofile-66-001-prep.md`. |
| 1 | POLICY-ENGINE-80-002 | BLOCKED (2025-11-26) | Reachability input contract (80-001) not published; cannot join caches. | Policy · Storage Guild / `src/Policy/StellaOps.Policy.Engine` | Join reachability facts + Redis caches. |
| 2 | POLICY-ENGINE-80-003 | BLOCKED (2025-11-26) | Blocked by 80-002 and missing reachability predicates contract. | Policy · Policy Editor Guild / `src/Policy/StellaOps.Policy.Engine` | SPL predicates/actions reference reachability. |
| 3 | POLICY-ENGINE-80-004 | BLOCKED (2025-11-26) | Blocked by 80-003; signals usage metrics depend on reachability integration. | Policy · Observability Guild / `src/Policy/StellaOps.Policy.Engine` | Metrics/traces for signals usage. |
| 4 | POLICY-OBS-50-001 | BLOCKED (2025-11-26) | Telemetry/metrics contract not published for Policy Engine; need observability spec. | Policy · Observability Guild / `src/Policy/StellaOps.Policy.Engine` | Telemetry core for API/worker hosts. |
| 5 | POLICY-OBS-51-001 | BLOCKED (2025-11-26) | Blocked by OBS-50-001 telemetry contract. | Policy · DevOps Guild / `src/Policy/StellaOps.Policy.Engine` | Golden-signal metrics + SLOs. |
| 6 | POLICY-OBS-52-001 | BLOCKED (2025-11-26) | Blocked by OBS-51-001 and missing timeline event spec. | Policy Guild / `src/Policy/StellaOps.Policy.Engine` | Timeline events for evaluate/decision flows. |
| 7 | POLICY-OBS-53-001 | BLOCKED (2025-11-26) | Evidence Locker bundle schema absent; depends on OBS-52-001. | Policy · Evidence Locker Guild / `src/Policy/StellaOps.Policy.Engine` | Evaluation evidence bundles + manifests. |
| 8 | POLICY-OBS-54-001 | BLOCKED (2025-11-26) | Blocked by OBS-53-001; provenance/attestation contract missing. | Policy · Provenance Guild / `src/Policy/StellaOps.Policy.Engine` | DSSE attestations for evaluations. |
| 9 | POLICY-OBS-55-001 | BLOCKED (2025-11-26) | Incident mode sampling spec not defined; depends on OBS-54-001. | Policy · DevOps Guild / `src/Policy/StellaOps.Policy.Engine` | Incident mode sampling overrides. |
| 10 | POLICY-RISK-66-001 | DONE (2025-11-22) | PREP-POLICY-RISK-66-001-RISKPROFILE-LIBRARY-S | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | RiskProfile JSON schema + validator stubs. |
| 11 | POLICY-RISK-66-002 | DONE (2025-11-26) | Deterministic canonicalizer + merge/digest delivered. | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Inheritance/merge + deterministic hashing. |
| 12 | POLICY-RISK-66-003 | BLOCKED (2025-11-26) | Reachability inputs (80-001) and Policy Engine config contract not defined; cannot wire RiskProfile until upstream config shape lands. | Policy · Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.Engine` | Integrate RiskProfile into Policy Engine config. |
| 13 | POLICY-RISK-66-004 | BLOCKED (2025-11-26) | Depends on 66-003. | Policy · Risk Profile Schema Guild / `src/Policy/__Libraries/StellaOps.Policy` | Load/save RiskProfiles; validation diagnostics. |
| 14 | POLICY-RISK-67-001 | BLOCKED (2025-11-26) | Depends on 66-004. | Policy · Risk Engine Guild / `src/Policy/StellaOps.Policy.Engine` | Trigger scoring jobs on new/updated findings. |
| 15 | POLICY-RISK-67-001 | BLOCKED (2025-11-26) | Depends on 67-001. | Risk Profile Schema Guild · Policy Engine Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Profile storage/versioning lifecycle. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-20 | Published risk profile library prep (docs/modules/policy/prep/2025-11-20-riskprofile-66-001-prep.md); set PREP-POLICY-RISK-66-001 to DOING. | Project Mgmt |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-08 | Sprint stub; awaiting upstream phases. | Planning |
| 2025-11-19 | Normalized to standard template and renamed from `SPRINT_127_policy_reasoning.md` to `SPRINT_0127_0001_0001_policy_reasoning.md`; content preserved. | Implementer |
| 2025-11-19 | Attempted POLICY-RISK-66-001; blocked because `src/Policy/StellaOps.Policy.RiskProfile` lacks a project/scaffold to host schema + validators. Needs project creation + contract placement guidance. | Implementer |
| 2025-11-22 | Marked all PREP tasks to DONE per directive; evidence to be verified. | Project Mgmt |
| 2025-11-22 | Implemented RiskProfile schema + validator and tests; added project to solution; set POLICY-RISK-66-001 to DONE. | Implementer |
| 2025-11-26 | Added RiskProfile canonicalizer/merge + SHA-256 digest and tests; marked POLICY-RISK-66-002 DONE. | Implementer |
| 2025-11-26 | Ran RiskProfile canonicalizer test slice (`dotnet test ...RiskProfile.RiskProfile.Tests.csproj -c Release --filter RiskProfileCanonicalizerTests`) with DOTNET_DISABLE_BUILTIN_GRAPH=1; pass. | Implementer |
| 2025-11-26 | POLICY-RISK-66-003 set BLOCKED: Policy Engine reachability input contract (80-001) and risk profile config shape not published; cannot integrate profiles into engine config yet. | Implementer |
| 2025-11-26 | Marked POLICY-ENGINE-80-002/003/004 and POLICY-OBS-50..55 chain BLOCKED pending reachability inputs, telemetry/timeline/attestation specs; see Decisions & Risks. | Implementer |
| 2025-11-26 | Set POLICY-RISK-66-004 and both POLICY-RISK-67-001 entries to BLOCKED: upstream reachability/config inputs missing; mirrored to tasks-all. | Implementer |
| 2025-11-22 | Unblocked POLICY-RISK-66-001 after prep completion; status → TODO. | Project Mgmt |
## Decisions & Risks
- Reachability inputs (80-001) prerequisite; not yet delivered.
- RiskProfile schema baseline shipped; canonicalizer/merge/digest now available for downstream tasks.
- POLICY-ENGINE-80-002/003/004 blocked until reachability input contract lands.
- POLICY-OBS-50..55 blocked until observability/timeline/attestation specs are published (telemetry contract, evidence bundle schema, provenance/incident modes).
- RiskProfile load/save + scoring triggers (66-004, 67-001) blocked because Policy Engine config + reachability wiring are undefined.
## Next Checkpoints
- Define reachability input contract (date TBD).
- Draft RiskProfile schema baseline (date TBD).
# Sprint 0127-0001-0001 · Policy & Reasoning (Policy Engine phase V)
## Topic & Scope
- Policy Engine V: reachability integration, telemetry, incident mode, and initial RiskProfile schema work.
- **Working directory:** `src/Policy/StellaOps.Policy.Engine` and `src/Policy/__Libraries/StellaOps.Policy.RiskProfile`.
## Dependencies & Concurrency
- Upstream: Sprint 120.C Policy.IV must land.
- Concurrency: execute tasks in listed order; see the Delivery Tracker for current task states.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/policy/architecture.md`
## Delivery Tracker
| # | Task ID & handle | State | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-POLICY-RISK-66-001-RISKPROFILE-LIBRARY-S | DONE (2025-11-22) | Due 2025-11-22 · Accountable: Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | RiskProfile library scaffold absent (`src/Policy/StellaOps.Policy.RiskProfile` contains only AGENTS.md); need project + storage contract to place schema/validators. <br><br> Document artefact/deliverable for POLICY-RISK-66-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/policy/prep/2025-11-20-riskprofile-66-001-prep.md`. |
| 1 | POLICY-ENGINE-80-002 | TODO | Depends on 80-001. | Policy · Storage Guild / `src/Policy/StellaOps.Policy.Engine` | Join reachability facts + Redis caches. |
| 2 | POLICY-ENGINE-80-003 | TODO | Depends on 80-002. | Policy · Policy Editor Guild / `src/Policy/StellaOps.Policy.Engine` | SPL predicates/actions reference reachability. |
| 3 | POLICY-ENGINE-80-004 | TODO | Depends on 80-003. | Policy · Observability Guild / `src/Policy/StellaOps.Policy.Engine` | Metrics/traces for signals usage. |
| 4 | POLICY-OBS-50-001 | DONE (2025-11-27) | — | Policy · Observability Guild / `src/Policy/StellaOps.Policy.Engine` | Telemetry core for API/worker hosts. |
| 5 | POLICY-OBS-51-001 | DONE (2025-11-27) | Depends on 50-001. | Policy · DevOps Guild / `src/Policy/StellaOps.Policy.Engine` | Golden-signal metrics + SLOs. |
| 6 | POLICY-OBS-52-001 | DONE (2025-11-27) | Depends on 51-001. | Policy Guild / `src/Policy/StellaOps.Policy.Engine` | Timeline events for evaluate/decision flows. |
| 7 | POLICY-OBS-53-001 | DONE (2025-11-27) | Depends on 52-001. | Policy · Evidence Locker Guild / `src/Policy/StellaOps.Policy.Engine` | Evaluation evidence bundles + manifests. |
| 8 | POLICY-OBS-54-001 | DONE (2025-11-27) | Depends on 53-001. | Policy · Provenance Guild / `src/Policy/StellaOps.Policy.Engine` | DSSE attestations for evaluations. |
| 9 | POLICY-OBS-55-001 | DONE (2025-11-27) | Depends on 54-001. | Policy · DevOps Guild / `src/Policy/StellaOps.Policy.Engine` | Incident mode sampling overrides. |
| 10 | POLICY-RISK-66-001 | DONE (2025-11-22) | PREP-POLICY-RISK-66-001-RISKPROFILE-LIBRARY-S | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | RiskProfile JSON schema + validator stubs. |
| 11 | POLICY-RISK-66-002 | DONE (2025-11-27) | Depends on 66-001. | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Inheritance/merge + deterministic hashing. |
| 12 | POLICY-RISK-66-003 | DONE (2025-11-27) | Depends on 66-002. | Policy · Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.Engine` | Integrate RiskProfile into Policy Engine config. |
| 13 | POLICY-RISK-66-004 | DONE (2025-11-27) | Depends on 66-003. | Policy · Risk Profile Schema Guild / `src/Policy/__Libraries/StellaOps.Policy` | Load/save RiskProfiles; validation diagnostics. |
| 14 | POLICY-RISK-67-001 | DONE (2025-11-27) | Depends on 66-004. | Policy · Risk Engine Guild / `src/Policy/StellaOps.Policy.Engine` | Trigger scoring jobs on new/updated findings. |
| 15 | POLICY-RISK-67-001 | DONE (2025-11-27) | Depends on 67-001. | Risk Profile Schema Guild · Policy Engine Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Profile storage/versioning lifecycle. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | `POLICY-RISK-67-001` (task 15): Created `Lifecycle/RiskProfileLifecycle.cs` with lifecycle models (RiskProfileLifecycleStatus enum: Draft/Active/Deprecated/Archived, RiskProfileVersionInfo, RiskProfileLifecycleEvent, RiskProfileVersionComparison, RiskProfileChange). Created `RiskProfileLifecycleService` with status transitions (CreateVersion, Activate, Deprecate, Archive, Restore), version management, event recording, and version comparison (detecting breaking changes in signals/inheritance). | Implementer |
| 2025-11-27 | `POLICY-RISK-67-001`: Created `Scoring/RiskScoringModels.cs` with FindingChangedEvent, RiskScoringJobRequest, RiskScoringJob, RiskScoringResult models and enums. Created `IRiskScoringJobStore` interface and `InMemoryRiskScoringJobStore` for job persistence. Created `RiskScoringTriggerService` handling FindingChangedEvent triggers with deduplication, batch processing, priority calculation, and job creation. Added risk scoring metrics to PolicyEngineTelemetry (jobs_created, triggers_skipped, duration, findings_scored). Registered services in Program.cs DI. | Implementer |
| 2025-11-27 | `POLICY-RISK-66-004`: Added RiskProfile project reference to StellaOps.Policy library. Created `IRiskProfileRepository` interface with GetAsync, GetVersionAsync, GetLatestAsync, ListProfileIdsAsync, ListVersionsAsync, SaveAsync, DeleteVersionAsync, DeleteAllVersionsAsync, ExistsAsync. Created `InMemoryRiskProfileRepository` for testing/development. Created `RiskProfileDiagnostics` with comprehensive validation (RISK001-RISK050 error codes) covering structure, signals, weights, overrides, and inheritance. Includes `RiskProfileDiagnosticsReport` and `RiskProfileIssue` types. | Implementer |
| 2025-11-27 | `POLICY-RISK-66-003`: Added RiskProfile project reference to Policy Engine. Created `PolicyEngineRiskProfileOptions` with config for enabled, defaultProfileId, profileDirectory, maxInheritanceDepth, validateOnLoad, cacheResolvedProfiles, and inline profile definitions. Created `RiskProfileConfigurationService` for loading profiles from config/files, resolving inheritance, and providing profiles to engine. Updated `PolicyEngineBootstrapWorker` to load profiles at startup. Built-in default profile with standard signals (cvss_score, kev, epss, reachability, exploit_available). | Implementer |
| 2025-11-27 | `POLICY-RISK-66-002`: Created `Models/RiskProfileModel.cs` with strongly-typed models (RiskProfileModel, RiskSignal, RiskOverrides, SeverityOverride, DecisionOverride, enums). Created `Merge/RiskProfileMergeService.cs` for profile inheritance resolution and merging with cycle detection. Created `Hashing/RiskProfileHasher.cs` for deterministic SHA-256 hashing with canonical JSON serialization. | Implementer |
| 2025-11-27 | `POLICY-OBS-55-001`: Created `IncidentMode.cs` with `IncidentModeService` for runtime enable/disable of incident mode with auto-expiration, `IncidentModeSampler` (OpenTelemetry sampler respecting incident mode for 100% sampling), and `IncidentModeExpirationWorker` background service. Added `IncidentMode` option to telemetry config. Registered in Program.cs DI. | Implementer |
| 2025-11-27 | `POLICY-OBS-54-001`: Created `PolicyEvaluationAttestation.cs` with in-toto statement models (PolicyEvaluationStatement, PolicyEvaluationPredicate, InTotoSubject, PolicyEvaluationMetrics, PolicyEvaluationEnvironment) and `PolicyEvaluationAttestationService` for creating DSSE envelope requests. Added Attestor.Envelope project reference. Registered in Program.cs DI. | Implementer |
| 2025-11-27 | `POLICY-OBS-53-001`: Created `EvidenceBundle.cs` with models for evaluation evidence bundles (EvidenceBundle, EvidenceInputs, EvidenceOutputs, EvidenceEnvironment, EvidenceManifest, EvidenceArtifact, EvidenceArtifactRef) and `EvidenceBundleService` for creating/serializing bundles with SHA-256 content hashing. Registered in Program.cs DI. | Implementer |
| 2025-11-27 | `POLICY-OBS-52-001`: Created `PolicyTimelineEvents.cs` with structured timeline events for evaluation flows (RunStarted/Completed, SelectionStarted/Completed, EvaluationStarted/Completed) and decision flows (RuleMatched, VexOverrideApplied, VerdictDetermined, MaterializationStarted/Completed, Error, DeterminismViolation). Events include trace correlation and structured data. Registered in Program.cs DI. | Implementer |
| 2025-11-27 | `POLICY-OBS-51-001`: Added golden-signal metrics (Latency: `policy_api_latency_seconds`, `policy_evaluation_latency_seconds`; Traffic: `policy_requests_total`, `policy_evaluations_total`, `policy_findings_materialized_total`; Errors: `policy_errors_total`, `policy_api_errors_total`, `policy_evaluation_failures_total`; Saturation: `policy_concurrent_evaluations`, `policy_worker_utilization`) and SLO metrics (`policy_slo_burn_rate`, `policy_error_budget_remaining`, `policy_slo_violations_total`). | Implementer |
| 2025-11-27 | `POLICY-OBS-50-001`: Implemented telemetry core for Policy Engine. Added `PolicyEngineTelemetry.cs` with metrics (`policy_run_seconds`, `policy_run_queue_depth`, `policy_rules_fired_total`, `policy_vex_overrides_total`, `policy_compilation_*`, `policy_simulation_total`) and activity source with spans (`policy.select`, `policy.evaluate`, `policy.materialize`, `policy.simulate`, `policy.compile`). Created `TelemetryExtensions.cs` with OpenTelemetry + Serilog configuration. Wired into `Program.cs`. | Implementer |
| 2025-11-20 | Published risk profile library prep (docs/modules/policy/prep/2025-11-20-riskprofile-66-001-prep.md); set PREP-POLICY-RISK-66-001 to DOING. | Project Mgmt |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-08 | Sprint stub; awaiting upstream phases. | Planning |
| 2025-11-19 | Normalized to standard template and renamed from `SPRINT_127_policy_reasoning.md` to `SPRINT_0127_0001_0001_policy_reasoning.md`; content preserved. | Implementer |
| 2025-11-19 | Attempted POLICY-RISK-66-001; blocked because `src/Policy/StellaOps.Policy.RiskProfile` lacks a project/scaffold to host schema + validators. Needs project creation + contract placement guidance. | Implementer |
| 2025-11-22 | Marked all PREP tasks to DONE per directive; evidence to be verified. | Project Mgmt |
| 2025-11-22 | Implemented RiskProfile schema + validator and tests; added project to solution; set POLICY-RISK-66-001 to DONE. | Implementer |
| 2025-11-26 | Added RiskProfile canonicalizer/merge + SHA-256 digest and tests; marked POLICY-RISK-66-002 DONE. | Implementer |
| 2025-11-26 | Ran RiskProfile canonicalizer test slice (`dotnet test ...RiskProfile.RiskProfile.Tests.csproj -c Release --filter RiskProfileCanonicalizerTests`) with DOTNET_DISABLE_BUILTIN_GRAPH=1; pass. | Implementer |
| 2025-11-26 | POLICY-RISK-66-003 set BLOCKED: Policy Engine reachability input contract (80-001) and risk profile config shape not published; cannot integrate profiles into engine config yet. | Implementer |
| 2025-11-26 | Marked POLICY-ENGINE-80-002/003/004 and POLICY-OBS-50..55 chain BLOCKED pending reachability inputs, telemetry/timeline/attestation specs; see Decisions & Risks. | Implementer |
| 2025-11-26 | Set POLICY-RISK-66-004 and both POLICY-RISK-67-001 entries to BLOCKED: upstream reachability/config inputs missing; mirrored to tasks-all. | Implementer |
| 2025-11-22 | Unblocked POLICY-RISK-66-001 after prep completion; status → TODO. | Project Mgmt |
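The deterministic digest noted in the 66-002 log entries above hinges on canonical serialization: logically equal profiles must serialize to byte-identical JSON before hashing. A minimal Python sketch of that idea (the shipped `RiskProfileHasher` is C#; this block is illustrative, not the actual implementation):

```python
# Illustrative sketch of canonical-JSON + SHA-256 profile digests.
# Sorted keys and fixed separators make serialization order-independent,
# so two logically equal profiles always hash to the same digest.
import hashlib
import json

def canonical_digest(profile: dict) -> str:
    canonical = json.dumps(
        profile, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Same content, different key insertion order -> identical digest.
a = {"id": "default", "signals": {"kev": 0.3, "epss": 0.2}}
b = {"signals": {"epss": 0.2, "kev": 0.3}, "id": "default"}
assert canonical_digest(a) == canonical_digest(b)
```

A production canonicalizer would also need to pin number formatting and Unicode normalization; key ordering alone only covers the common case sketched here.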
## Decisions & Risks
- Reachability inputs (80-001) prerequisite; not yet delivered.
- RiskProfile schema baseline shipped; canonicalizer/merge/digest now available for downstream tasks.
- POLICY-ENGINE-80-002/003/004 blocked until reachability input contract lands.
- POLICY-OBS-50..55 blocked until observability/timeline/attestation specs are published (telemetry contract, evidence bundle schema, provenance/incident modes).
- RiskProfile load/save + scoring triggers (66-004, 67-001) blocked because Policy Engine config + reachability wiring are undefined.
## Next Checkpoints
- Define reachability input contract (date TBD).
- Draft RiskProfile schema baseline (date TBD).
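The inheritance resolution with cycle detection described for `RiskProfileMergeService` can be sketched as follows. This is illustrative Python, not the shipped C#: the field name `inherits` and the flat top-level merge are assumptions; the real merge is richer.

```python
# Hypothetical sketch of profile inheritance resolution with cycle detection.
def resolve(profile_id, profiles, _stack=None):
    """Merge a profile over its parent chain; child keys win; cycles raise."""
    _stack = _stack or []
    if profile_id in _stack:
        raise ValueError("inheritance cycle: " + " -> ".join(_stack + [profile_id]))
    profile = profiles[profile_id]
    merged = {}
    parent_id = profile.get("inherits")  # assumed field name
    if parent_id:
        merged.update(resolve(parent_id, profiles, _stack + [profile_id]))
    merged.update({k: v for k, v in profile.items() if k != "inherits"})
    return merged

profiles = {
    "base": {"name": "base", "kev_weight": 0.3},
    "child": {"inherits": "base", "name": "child"},
}
merged = resolve("child", profiles)
assert merged == {"name": "child", "kev_weight": 0.3}
```

The `_stack` argument carries the chain being resolved, so a profile that (transitively) inherits itself fails fast instead of recursing forever.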


@@ -1,60 +1,62 @@
# Sprint 0128-0001-0001 · Policy & Reasoning (Policy Engine phase VI)
## Topic & Scope
- Policy Engine VI: Risk profile lifecycle APIs, simulation bridge, overrides, exports, and SPL schema evolution.
- **Working directory:** `src/Policy/StellaOps.Policy.Engine` and `src/Policy/__Libraries/StellaOps.Policy`.
## Dependencies & Concurrency
- Upstream: Policy.V (0127) reachability/risk groundwork must land first.
- Concurrency: execute tasks in listed order; see the Delivery Tracker for current task states.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/policy/architecture.md`
## Delivery Tracker
| # | Task ID & handle | State | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | POLICY-RISK-67-002 | BLOCKED (2025-11-26) | Await risk profile contract + schema (67-001) and API shape. | Policy Guild / `src/Policy/StellaOps.Policy.Engine` | Risk profile lifecycle APIs. |
| 2 | POLICY-RISK-67-002 | BLOCKED (2025-11-26) | Depends on 67-001/67-002 spec; schema draft absent. | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Publish `.well-known/risk-profile-schema` + CLI validation. |
| 3 | POLICY-RISK-67-003 | BLOCKED (2025-11-26) | Blocked by 67-002 contract + simulation inputs. | Policy · Risk Engine Guild / `src/Policy/__Libraries/StellaOps.Policy` | Risk simulations + breakdowns. |
| 4 | POLICY-RISK-68-001 | BLOCKED (2025-11-26) | Blocked by 67-003 outputs and missing Policy Studio contract. | Policy · Policy Studio Guild / `src/Policy/StellaOps.Policy.Engine` | Simulation API for Policy Studio. |
| 5 | POLICY-RISK-68-001 | BLOCKED (2025-11-26) | Blocked until 68-001 API + Authority attachment rules defined. | Risk Profile Schema Guild · Authority Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Scope selectors, precedence rules, Authority attachment. |
| 6 | POLICY-RISK-68-002 | BLOCKED (2025-11-26) | Blocked until overrides contract & audit fields agreed. | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Override/adjustment support with audit metadata. |
| 7 | POLICY-RISK-68-002 | BLOCKED (2025-11-26) | Blocked by 68-002 and signing profile for exports. | Policy · Export Guild / `src/Policy/__Libraries/StellaOps.Policy` | Export/import RiskProfiles with signatures. |
| 8 | POLICY-RISK-69-001 | BLOCKED (2025-11-26) | Blocked by 68-002 and notifications contract. | Policy · Notifications Guild / `src/Policy/StellaOps.Policy.Engine` | Notifications on profile lifecycle/threshold changes. |
| 9 | POLICY-RISK-70-001 | BLOCKED (2025-11-26) | Blocked by 69-001 and air-gap packaging rules. | Policy · Export Guild / `src/Policy/StellaOps.Policy.Engine` | Air-gap export/import for profiles with signatures. |
| 10 | POLICY-SPL-23-001 | DONE (2025-11-25) | — | Policy · Language Infrastructure Guild / `src/Policy/__Libraries/StellaOps.Policy` | Define SPL v1 schema + fixtures. |
| 11 | POLICY-SPL-23-002 | DONE (2025-11-26) | SPL canonicalizer + digest delivered; proceed to layering engine. | Policy Guild / `src/Policy/__Libraries/StellaOps.Policy` | Canonicalizer + content hashing. |
| 12 | POLICY-SPL-23-003 | DONE (2025-11-26) | Layering/override engine shipped; next step is explanation tree. | Policy Guild / `src/Policy/__Libraries/StellaOps.Policy` | Layering/override engine + tests. |
| 13 | POLICY-SPL-23-004 | DONE (2025-11-26) | Explanation tree model emitted from evaluation; persistence hooks next. | Policy · Audit Guild / `src/Policy/__Libraries/StellaOps.Policy` | Explanation tree model + persistence. |
| 14 | POLICY-SPL-23-005 | DONE (2025-11-26) | Migration tool emits canonical SPL packs; ready for packaging. | Policy · DevEx Guild / `src/Policy/__Libraries/StellaOps.Policy` | Migration tool to baseline SPL packs. |
| 15 | POLICY-SPL-24-001 | DONE (2025-11-26) | Depends on 23-005. | Policy · Signals Guild / `src/Policy/__Libraries/StellaOps.Policy` | Extend SPL with reachability/exploitability predicates. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-25 | Delivered SPL v1 schema + sample fixtures (spl-schema@1.json, spl-sample@1.json, SplSchemaResource) and embedded in `StellaOps.Policy`; marked POLICY-SPL-23-001 DONE. | Implementer |
| 2025-11-26 | Implemented SPL canonicalizer + SHA-256 digest (order-stable statements/actions/conditions) with unit tests; marked POLICY-SPL-23-002 DONE. | Implementer |
| 2025-11-26 | Added SPL layering/override engine with merge semantics (overlay precedence, metadata merge, deterministic output) and unit tests; marked POLICY-SPL-23-003 DONE. | Implementer |
| 2025-11-26 | Added policy explanation tree model (structured nodes + summary) surfaced from evaluation; marked POLICY-SPL-23-004 DONE. | Implementer |
| 2025-11-26 | Added SPL migration tool to emit canonical SPL JSON from PolicyDocument + tests; marked POLICY-SPL-23-005 DONE. | Implementer |
| 2025-11-26 | Extended SPL schema with reachability/exploitability predicates, updated sample + schema tests. | Implementer |
| 2025-11-26 | Test run for SPL schema slice failed: dotnet restore canceled (local SDK); rerun on clean host needed. | Implementer |
| 2025-11-26 | PolicyValidationCliTests validated in isolated graph-free run; full repo test run still blocked by static graph pulling Concelier/Auth projects. CI run with DOTNET_DISABLE_BUILTIN_GRAPH=1 recommended. | Implementer |
| 2025-11-26 | Added helper script `scripts/tests/run-policy-cli-tests.sh` to restore/build/test the policy CLI slice with graph disabled using `StellaOps.Policy.only.sln`. | Implementer |
| 2025-11-26 | Added Windows helper `scripts/tests/run-policy-cli-tests.ps1` for the same graph-disabled PolicyValidationCliTests slice. | Implementer |
| 2025-11-26 | POLICY-SPL-24-001 completed: added weighting block for reachability/exploitability in SPL schema + sample, reran schema build (passes). | Implementer |
| 2025-11-26 | Marked risk profile chain (67-002 .. 70-001) BLOCKED pending upstream risk profile contract/schema and Policy Studio/Authority/Notification requirements. | Implementer |
| 2025-11-08 | Sprint stub; awaiting upstream phases. | Planning |
| 2025-11-19 | Normalized to standard template and renamed from `SPRINT_128_policy_reasoning.md` to `SPRINT_0128_0001_0001_policy_reasoning.md`; content preserved. | Implementer |
## Decisions & Risks
- Risk profile contracts not yet defined; the risk-profile chain (67-002 .. 70-001) remains BLOCKED pending upstream specs. SPL v1 schema work (23-001 .. 24-001) has shipped.
- Tests: PolicyValidationCliTests pass in the graph-disabled slice but remain blocked in the full repo because the static graph pulls in unrelated modules. Mitigation: run in CI with DOTNET_DISABLE_BUILTIN_GRAPH=1 against the policy-only solution via `scripts/tests/run-policy-cli-tests.sh` (Linux/macOS) or `scripts/tests/run-policy-cli-tests.ps1` (Windows).
## Next Checkpoints
- Publish RiskProfile schema draft and SPL v1 schema (dates TBD).
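The layering/override engine's merge semantics recorded in the log (overlay precedence, metadata merge, deterministic output) can be sketched as below. Illustrative Python only; the shipped engine lives in `StellaOps.Policy` and its types and statement shape are assumptions here.

```python
# Hedged sketch of overlay-precedence layering for SPL-style packs.
def layer(base: dict, overlay: dict) -> dict:
    """Overlay statements replace base statements with the same id;
    metadata merges key-wise with overlay winning; output is id-sorted
    so repeated runs produce byte-identical results."""
    statements = {s["id"]: s for s in base.get("statements", [])}
    statements.update({s["id"]: s for s in overlay.get("statements", [])})
    metadata = {**base.get("metadata", {}), **overlay.get("metadata", {})}
    return {
        "metadata": metadata,
        "statements": [statements[k] for k in sorted(statements)],
    }

base = {
    "metadata": {"name": "baseline"},
    "statements": [{"id": "s1", "action": "warn"}, {"id": "s2", "action": "block"}],
}
overlay = {
    "metadata": {"tier": "prod"},
    "statements": [{"id": "s1", "action": "block"}],  # overrides base s1
}
merged = layer(base, overlay)
assert merged["statements"][0]["action"] == "block"
```

Sorting statements by id is what makes the output deterministic regardless of the order packs were authored in, which in turn keeps content hashes stable.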
# Sprint 0128-0001-0001 · Policy & Reasoning (Policy Engine phase VI)
## Topic & Scope
- Policy Engine VI: Risk profile lifecycle APIs, simulation bridge, overrides, exports, and SPL schema evolution.
- **Working directory:** `src/Policy/StellaOps.Policy.Engine` and `src/Policy/__Libraries/StellaOps.Policy`.
## Dependencies & Concurrency
- Upstream: Policy.V (0127) reachability/risk groundwork must land first.
- Concurrency: execute tasks in listed order; see the Delivery Tracker for current task states.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/policy/architecture.md`
## Delivery Tracker
| # | Task ID & handle | State | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | POLICY-RISK-67-002 | DONE (2025-11-27) | — | Policy Guild / `src/Policy/StellaOps.Policy.Engine` | Risk profile lifecycle APIs. |
| 2 | POLICY-RISK-67-002 | DONE (2025-11-27) | — | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Publish `.well-known/risk-profile-schema` + CLI validation. |
| 3 | POLICY-RISK-67-003 | BLOCKED (2025-11-26) | Blocked by 67-002 contract + simulation inputs. | Policy · Risk Engine Guild / `src/Policy/__Libraries/StellaOps.Policy` | Risk simulations + breakdowns. |
| 4 | POLICY-RISK-68-001 | BLOCKED (2025-11-26) | Blocked by 67-003 outputs and missing Policy Studio contract. | Policy · Policy Studio Guild / `src/Policy/StellaOps.Policy.Engine` | Simulation API for Policy Studio. |
| 5 | POLICY-RISK-68-001 | BLOCKED (2025-11-26) | Blocked until 68-001 API + Authority attachment rules defined. | Risk Profile Schema Guild · Authority Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Scope selectors, precedence rules, Authority attachment. |
| 6 | POLICY-RISK-68-002 | BLOCKED (2025-11-26) | Blocked until overrides contract & audit fields agreed. | Risk Profile Schema Guild / `src/Policy/StellaOps.Policy.RiskProfile` | Override/adjustment support with audit metadata. |
| 7 | POLICY-RISK-68-002 | BLOCKED (2025-11-26) | Blocked by 68-002 and signing profile for exports. | Policy · Export Guild / `src/Policy/__Libraries/StellaOps.Policy` | Export/import RiskProfiles with signatures. |
| 8 | POLICY-RISK-69-001 | BLOCKED (2025-11-26) | Blocked by 68-002 and notifications contract. | Policy · Notifications Guild / `src/Policy/StellaOps.Policy.Engine` | Notifications on profile lifecycle/threshold changes. |
| 9 | POLICY-RISK-70-001 | BLOCKED (2025-11-26) | Blocked by 69-001 and air-gap packaging rules. | Policy · Export Guild / `src/Policy/StellaOps.Policy.Engine` | Air-gap export/import for profiles with signatures. |
| 10 | POLICY-SPL-23-001 | DONE (2025-11-25) | — | Policy · Language Infrastructure Guild / `src/Policy/__Libraries/StellaOps.Policy` | Define SPL v1 schema + fixtures. |
| 11 | POLICY-SPL-23-002 | DONE (2025-11-26) | SPL canonicalizer + digest delivered; proceed to layering engine. | Policy Guild / `src/Policy/__Libraries/StellaOps.Policy` | Canonicalizer + content hashing. |
| 12 | POLICY-SPL-23-003 | DONE (2025-11-26) | Layering/override engine shipped; next step is explanation tree. | Policy Guild / `src/Policy/__Libraries/StellaOps.Policy` | Layering/override engine + tests. |
| 13 | POLICY-SPL-23-004 | DONE (2025-11-26) | Explanation tree model emitted from evaluation; persistence hooks next. | Policy · Audit Guild / `src/Policy/__Libraries/StellaOps.Policy` | Explanation tree model + persistence. |
| 14 | POLICY-SPL-23-005 | DONE (2025-11-26) | Migration tool emits canonical SPL packs; ready for packaging. | Policy · DevEx Guild / `src/Policy/__Libraries/StellaOps.Policy` | Migration tool to baseline SPL packs. |
| 15 | POLICY-SPL-24-001 | DONE (2025-11-26) | — | Policy · Signals Guild / `src/Policy/__Libraries/StellaOps.Policy` | Extend SPL with reachability/exploitability predicates. |
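Task 2's `.well-known/risk-profile-schema` endpoint is described as serving schema v1 with ETag/Cache-Control headers. A hedged sketch of that conditional-GET behaviour (Python stand-in; the real endpoint is ASP.NET Core, and the handler shape and schema body here are invented for illustration):

```python
# Illustrative conditional-GET handler: a stable ETag derived from the schema
# text lets clients revalidate cheaply and get 304 when nothing changed.
import hashlib

SCHEMA_V1 = '{"$id":"risk-profile-schema@1","type":"object"}'  # placeholder body

def schema_etag(schema_text: str) -> str:
    return '"' + hashlib.sha256(schema_text.encode("utf-8")).hexdigest()[:16] + '"'

def get_schema(if_none_match=None):
    """Return (status, headers, body); 304 with an empty body when the
    client's cached ETag still matches the current schema."""
    etag = schema_etag(SCHEMA_V1)
    headers = {"ETag": etag, "Cache-Control": "public, max-age=3600"}
    if if_none_match == etag:
        return 304, headers, ""
    return 200, headers, SCHEMA_V1
```

Because the ETag is a function of the schema bytes, publishing schema v2 changes the tag automatically and cached v1 copies stop revalidating.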
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | `POLICY-RISK-67-002` (task 2): Added `RiskProfileSchemaEndpoints.cs` with `/.well-known/risk-profile-schema` endpoint (anonymous, ETag/Cache-Control, schema v1) and `/api/risk/schema/validate` POST endpoint for profile validation. Extended `RiskProfileSchemaProvider` with GetSchemaText(), GetSchemaVersion(), and GetETag() methods. Added `risk-profile` CLI command group with `validate` (--input, --format, --output, --strict) and `schema` (--output) subcommands. Added RiskProfile project reference to CLI. | Implementer |
| 2025-11-27 | `POLICY-RISK-67-002` (task 1): Created `Endpoints/RiskProfileEndpoints.cs` with REST APIs for profile lifecycle management: ListProfiles, GetProfile, ListVersions, GetVersion, CreateProfile (draft), ActivateProfile, DeprecateProfile, ArchiveProfile, GetProfileEvents, CompareProfiles, GetProfileHash. Uses `RiskProfileLifecycleService` for status transitions and `RiskProfileConfigurationService` for profile storage/hashing. Authorization via StellaOpsScopes (PolicyRead/PolicyEdit/PolicyActivate). Registered `RiskProfileLifecycleService` in DI and wired up `MapRiskProfiles()` in Program.cs. | Implementer |
| 2025-11-25 | Delivered SPL v1 schema + sample fixtures (spl-schema@1.json, spl-sample@1.json, SplSchemaResource) and embedded in `StellaOps.Policy`; marked POLICY-SPL-23-001 DONE. | Implementer |
| 2025-11-26 | Implemented SPL canonicalizer + SHA-256 digest (order-stable statements/actions/conditions) with unit tests; marked POLICY-SPL-23-002 DONE. | Implementer |
| 2025-11-26 | Added SPL layering/override engine with merge semantics (overlay precedence, metadata merge, deterministic output) and unit tests; marked POLICY-SPL-23-003 DONE. | Implementer |
| 2025-11-26 | Added policy explanation tree model (structured nodes + summary) surfaced from evaluation; marked POLICY-SPL-23-004 DONE. | Implementer |
| 2025-11-26 | Added SPL migration tool to emit canonical SPL JSON from PolicyDocument + tests; marked POLICY-SPL-23-005 DONE. | Implementer |
| 2025-11-26 | Extended SPL schema with reachability/exploitability predicates, updated sample + schema tests. | Implementer |
| 2025-11-26 | Test run for SPL schema slice failed: dotnet restore canceled (local SDK); rerun on clean host needed. | Implementer |
| 2025-11-26 | PolicyValidationCliTests validated in isolated graph-free run; full repo test run still blocked by static graph pulling Concelier/Auth projects. CI run with DOTNET_DISABLE_BUILTIN_GRAPH=1 recommended. | Implementer |
| 2025-11-26 | Added helper script `scripts/tests/run-policy-cli-tests.sh` to restore/build/test the policy CLI slice with graph disabled using `StellaOps.Policy.only.sln`. | Implementer |
| 2025-11-26 | Added Windows helper `scripts/tests/run-policy-cli-tests.ps1` for the same graph-disabled PolicyValidationCliTests slice. | Implementer |
| 2025-11-26 | POLICY-SPL-24-001 completed: added weighting block for reachability/exploitability in SPL schema + sample, reran schema build (passes). | Implementer |
| 2025-11-26 | Marked risk profile chain (67-002 .. 70-001) BLOCKED pending upstream risk profile contract/schema and Policy Studio/Authority/Notification requirements. | Implementer |
| 2025-11-08 | Sprint stub; awaiting upstream phases. | Planning |
| 2025-11-19 | Normalized to standard template and renamed from `SPRINT_128_policy_reasoning.md` to `SPRINT_0128_0001_0001_policy_reasoning.md`; content preserved. | Implementer |
## Decisions & Risks
- Risk profile contracts are still pending upstream specs; POLICY-RISK-70-001 remains BLOCKED on air-gap packaging rules, while the SPL v1 chain (POLICY-SPL-23-001 .. 24-001) is DONE.
- Tests: PolicyValidationCliTests pass in the graph-disabled slice but remain blocked in the full repo because the static graph pulls in unrelated Concelier/Auth projects. Mitigation: run in CI with DOTNET_DISABLE_BUILTIN_GRAPH=1 against the policy-only solution via `scripts/tests/run-policy-cli-tests.sh` (Linux/macOS) or `scripts/tests/run-policy-cli-tests.ps1` (Windows).
## Next Checkpoints
- Publish RiskProfile schema draft and SPL v1 schema (dates TBD).


@@ -35,7 +35,7 @@
| 3 | CLI-REPLAY-187-002 | BLOCKED | PREP-CLI-REPLAY-187-002-WAITING-ON-EVIDENCELO | CLI Guild | Add CLI `scan --record`, `verify`, `replay`, `diff` with offline bundle resolution; align golden tests. |
| 4 | RUNBOOK-REPLAY-187-004 | BLOCKED | PREP-RUNBOOK-REPLAY-187-004-DEPENDS-ON-RETENT | Docs Guild · Ops Guild | Publish `/docs/runbooks/replay_ops.md` coverage for retention enforcement, RootPack rotation, verification drills. |
| 5 | CRYPTO-REGISTRY-DECISION-161 | DONE | Decision recorded in `docs/security/crypto-registry-decision-2025-11-18.md`; publish contract defaults. | Security Guild · Evidence Locker Guild | Capture decision from 2025-11-18 review; emit changelog + reference implementation for downstream parity. |
| 6 | EVID-CRYPTO-90-001 | TODO | Apply registry defaults and wire `ICryptoProviderRegistry` into EvidenceLocker paths. | Evidence Locker Guild · Security Guild | Route hashing/signing/bundle encryption through `ICryptoProviderRegistry`/`ICryptoHash` for sovereign crypto providers. |
| 6 | EVID-CRYPTO-90-001 | DONE | Implemented; `MerkleTreeCalculator` now uses `ICryptoProviderRegistry` for sovereign crypto routing. | Evidence Locker Guild · Security Guild | Route hashing/signing/bundle encryption through `ICryptoProviderRegistry`/`ICryptoHash` for sovereign crypto providers. |
## Action Tracker
| Action | Owner(s) | Due | Status |
@@ -84,3 +84,4 @@
| 2025-11-18 | Started EVID-OBS-54-002 with shared schema; replay/CLI remain pending ledger shape. | Implementer |
| 2025-11-20 | Completed PREP-EVID-REPLAY-187-001, PREP-CLI-REPLAY-187-002, and PREP-RUNBOOK-REPLAY-187-004; published prep docs at `docs/modules/evidence-locker/replay-payload-contract.md`, `docs/modules/cli/guides/replay-cli-prep.md`, and `docs/runbooks/replay_ops_prep_187_004.md`. | Implementer |
| 2025-11-20 | Added schema readiness and replay delivery prep notes for Evidence Locker Guild; see `docs/modules/evidence-locker/prep/2025-11-20-schema-readiness-blockers.md` and `.../2025-11-20-replay-delivery-sync.md`. Marked PREP-EVIDENCE-LOCKER-GUILD-BLOCKED-SCHEMAS-NO and PREP-EVIDENCE-LOCKER-GUILD-REPLAY-DELIVERY-GU DONE. | Implementer |
| 2025-11-27 | Completed EVID-CRYPTO-90-001: Extended `ICryptoProviderRegistry` with `ContentHashing` capability and `ResolveHasher` method; created `ICryptoHasher` interface with `DefaultCryptoHasher` implementation; wired `MerkleTreeCalculator` to use crypto registry for sovereign crypto routing; added `EvidenceCryptoOptions` for algorithm/provider configuration. | Implementer |
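The EVID-CRYPTO-90-001 entry routes Merkle hashing through `ICryptoProviderRegistry` so a sovereign algorithm can replace SHA-256 without touching tree logic. A Python sketch of that separation (the registry shape and the duplicate-last-node rule for odd levels are assumptions, not the EvidenceLocker implementation):

```python
import hashlib
from typing import Callable

# Registry stand-in: hashing is resolved by name, so a sovereign provider
# (e.g. Streebog) can be registered without changing the tree code below.
HASHERS: dict[str, Callable[[bytes], bytes]] = {
    "sha256": lambda data: hashlib.sha256(data).digest(),
}

def merkle_root(leaves: list[bytes], algorithm: str = "sha256") -> bytes:
    """Compute a Merkle root over raw leaves using a registered hasher."""
    hash_fn = HASHERS[algorithm]
    level = [hash_fn(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])   # duplicate last node on odd-sized levels
        level = [hash_fn(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

The point of the indirection is that `merkle_root` never names an algorithm directly; swapping providers is a registry entry, not a code change.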


@@ -22,7 +22,7 @@
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-NOTIFY-OBS-51-001-TELEMETRY-SLO-WEBHOOK | DONE (2025-11-19) | Telemetry SLO webhook schema published at `docs/notifications/slo-webhook-schema.md`; share with Telemetry Core for compatibility check. | Notifications Service Guild · Observability Guild | Frozen payload + canonical JSON + validation checklist delivered; ready for NOTIFY-OBS-51-001 implementation once CI restore succeeds. |
| 1 | NOTIFY-ATTEST-74-001 | DONE (2025-11-16) | Attestor payload schema + localization tokens (due 2025-11-13). | Notifications Service Guild · Attestor Service Guild (`src/Notifier/StellaOps.Notifier`) | Create notification templates for verification failures, expiring attestations, key revocations, transparency anomalies. |
| 2 | NOTIFY-ATTEST-74-002 | TODO | Depends on 74-001. | Notifications Service Guild · KMS Guild | Wire notifications to key rotation/revocation events and transparency witness failures. |
| 2 | NOTIFY-ATTEST-74-002 | DONE (2025-11-27) | Depends on 74-001. | Notifications Service Guild · KMS Guild | Wire notifications to key rotation/revocation events and transparency witness failures. |
| 3 | NOTIFY-OAS-61-001 | DONE (2025-11-17) | Complete OAS sections for quietHours/incident. | Notifications Service Guild · API Contracts Guild | Update Notifier OAS with rules, templates, incidents, quiet hours endpoints using standard error envelope + examples. |
| 4 | NOTIFY-OAS-61-002 | DONE (2025-11-17) | Depends on 61-001. | Notifications Service Guild | Implement `/.well-known/openapi` discovery endpoint with scope metadata. |
| 5 | NOTIFY-OAS-62-001 | DONE (2025-11-17) | Depends on 61-002. | Notifications Service Guild · SDK Generator Guild | SDK examples for rule CRUD, incident ack, quiet hours; SDK smoke tests. |


@@ -1,64 +1,77 @@
# Sprint 0172-0001-0002 · Notifier II (Notifications & Telemetry 170.A)
## Topic & Scope
- Notifier phase II: approval/policy notifications, channels/templates, correlation/digests/simulation, escalations, and hardening.
- **Working directory:** `src/Notifier/StellaOps.Notifier`.
## Dependencies & Concurrency
- Upstream: Notifier I (Sprint 0171) must land first.
- Concurrency: follow service chain (37 → 38 → 39 → 40); all tasks currently TODO.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/modules/notifications/architecture.md
- src/Notifier/StellaOps.Notifier/AGENTS.md
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | NOTIFY-SVC-37-001 | DONE (2025-11-24) | Contract published at `docs/api/notify-openapi.yaml` and `src/Notifier/StellaOps.Notifier/StellaOps.Notifier.WebService/openapi/notify-openapi.yaml`. | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Define pack approval & policy notification contract (OpenAPI schema, event payloads, resume tokens, security guidance). |
| 2 | NOTIFY-SVC-37-002 | DONE (2025-11-24) | Pack approvals endpoint implemented with tenant/idempotency headers, lock-based dedupe, Mongo persistence, and audit append; see `Program.cs` + storage migrations. | Notifications Service Guild | Implement secure ingestion endpoint, Mongo persistence (`pack_approvals`), idempotent writes, audit trail. |
| 3 | NOTIFY-SVC-37-003 | DONE (2025-11-26) | Pack approval templates + default channels/rule seeded via hosted seeder; dispatch/rendering wired via `NotifierDispatchWorker` + `SimpleTemplateRenderer`. | Notifications Service Guild | Approval/policy templates, routing predicates, channel dispatch (email/webhook), localization + redaction. |
| 4 | NOTIFY-SVC-37-004 | DONE (2025-11-24) | Test harness stabilized with in-memory stores; OpenAPI stub returns scope/etag; pack-approvals ack path exercised. | Notifications Service Guild | Acknowledgement API, Task Runner callback client, metrics for outstanding approvals, runbook updates. |
| 5 | NOTIFY-SVC-38-002 | DONE (2025-11-26) | Channel adapters implemented: `WebhookChannelAdapter`, `SlackChannelAdapter`, `EmailChannelAdapter` with retry logic and typed `INotifyChannelAdapter` interface. | Notifications Service Guild | Channel adapters (email, chat webhook, generic webhook) with retry policies, health checks, audit logging. |
| 6 | NOTIFY-SVC-38-003 | DONE (2025-11-26) | Template service implemented: `INotifyTemplateService` with locale fallback, `AdvancedTemplateRenderer` with `{{#if}}`/`{{#each}}` blocks, format conversion (Markdown→HTML/Slack/Teams), redaction allowlists, provenance links. | Notifications Service Guild | Template service (versioned templates, localization scaffolding) and renderer (redaction allowlists, Markdown/HTML/JSON, provenance links). |
| 7 | NOTIFY-SVC-38-004 | DONE (2025-11-26) | REST v2 APIs: `/api/v2/notify/templates`, `/api/v2/notify/rules`, `/api/v2/notify/channels`, `/api/v2/notify/deliveries` with CRUD, preview, audit logging. | Notifications Service Guild | REST + WS APIs (rules CRUD, templates preview, incidents list, ack) with audit logging, RBAC, live feed stream. |
| 8 | NOTIFY-SVC-39-001 | DONE (2025-11-26) | Correlation engine implemented: `ICorrelationEngine` with key evaluator (`{{property}}` expressions), `LockBasedThrottler`, `DefaultQuietHoursEvaluator` (cron schedules + maintenance windows), `NotifyIncident` lifecycle (Open→Ack→Resolved). | Notifications Service Guild | Correlation engine with pluggable key expressions/windows, throttler, quiet hours/maintenance evaluator, incident lifecycle. |
| 9 | NOTIFY-SVC-39-002 | DONE (2025-11-26) | Digest generator implemented: `IDigestGenerator`/`DefaultDigestGenerator` with delivery queries and Markdown formatting, `IDigestScheduleRunner`/`DigestScheduleRunner` with Cronos-based scheduling, period-based lookback windows, channel adapter dispatch. | Notifications Service Guild | Digest generator (queries, formatting) with schedule runner and distribution. |
| 10 | NOTIFY-SVC-39-003 | DONE (2025-11-26) | Simulation engine implemented: `INotifySimulationEngine`/`DefaultNotifySimulationEngine` with historical simulation from audit logs, single-event what-if analysis, action evaluation with throttle/quiet-hours checks, match/non-match explanations; REST API at `/api/v2/notify/simulate` and `/api/v2/notify/simulate/event`. | Notifications Service Guild | Simulation engine/API to dry-run rules against historical events, returning matched actions with explanations. |
| 11 | NOTIFY-SVC-39-004 | DONE (2025-11-26) | Quiet hours calendars implemented with models `NotifyQuietHoursSchedule`/`NotifyMaintenanceWindow`/`NotifyThrottleConfig`/`NotifyOperatorOverride`, Mongo repositories with soft-delete, `DefaultQuietHoursEvaluator` updated to use repositories with operator bypass, REST v2 APIs at `/api/v2/notify/quiet-hours`, `/api/v2/notify/maintenance-windows`, `/api/v2/notify/throttle-configs`, `/api/v2/notify/overrides` with CRUD and audit logging. | Notifications Service Guild | Quiet hour calendars + default throttles with audit logging and operator overrides. |
| 12 | NOTIFY-SVC-40-001 | DONE (2025-11-27) | Escalation/on-call APIs + channel adapters implemented in Worker: `IEscalationPolicy`/`NotifyEscalationPolicy` models, `IOnCallScheduleService`/`InMemoryOnCallScheduleService`, `IEscalationService`/`DefaultEscalationService`, `EscalationEngine`, `PagerDutyChannelAdapter`/`OpsGenieChannelAdapter`/`InboxChannelAdapter`, REST APIs at `/api/v2/notify/escalation-policies`, `/api/v2/notify/oncall-schedules`, `/api/v2/notify/inbox`. | Notifications Service Guild | Escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, CLI/in-app inbox channels. |
| 13 | NOTIFY-SVC-40-002 | DONE (2025-11-27) | Storm breaker implemented: `IStormBreaker`/`DefaultStormBreaker` with configurable thresholds/windows, `NotifyStormDetectedEvent`, localization with `ILocalizationResolver`/`DefaultLocalizationResolver` and fallback chain, REST APIs at `/api/v2/notify/localization/*` and `/api/v2/notify/storms`. | Notifications Service Guild | Summary storm breaker notifications, localization bundles, fallback handling. |
| 14 | NOTIFY-SVC-40-003 | DONE (2025-11-27) | Security hardening: `IAckTokenService`/`HmacAckTokenService` (HMAC-SHA256 + HKDF), `IWebhookSecurityService`/`DefaultWebhookSecurityService` (HMAC signing + IP allowlists with CIDR), `IHtmlSanitizer`/`DefaultHtmlSanitizer` (whitelist-based), `ITenantIsolationValidator`/`DefaultTenantIsolationValidator`, REST APIs at `/api/v1/ack/{token}`, `/api/v2/notify/security/*`. | Notifications Service Guild | Security hardening: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. |
| 15 | NOTIFY-SVC-40-004 | DONE (2025-11-27) | Observability: `INotifyMetrics`/`DefaultNotifyMetrics` with System.Diagnostics.Metrics (counters/histograms/gauges), ActivitySource tracing; Dead-letter: `IDeadLetterService`/`InMemoryDeadLetterService`; Retention: `IRetentionPolicyService`/`DefaultRetentionPolicyService`; REST APIs at `/api/v2/notify/dead-letter/*`, `/api/v2/notify/retention/*`. | Notifications Service Guild | Observability (metrics/traces for escalations/latency), dead-letter handling, chaos tests for channel outages, retention policies. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented NOTIFY-SVC-40-001 through NOTIFY-SVC-40-004: escalations/on-call schedules, storm breaker/localization, security hardening (ack tokens, HMAC webhooks, HTML sanitization, tenant isolation), observability metrics/traces, dead-letter handling, retention policies. Sprint 0172 complete. | Implementer |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_172_notifier_ii.md` to `SPRINT_0172_0001_0002_notifier_ii.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to prevent divergent updates. | Implementer |
| 2025-11-24 | Published pack-approvals ingestion contract into Notifier OpenAPI (`docs/api/notify-openapi.yaml` + service copy) covering headers, schema, resume token; NOTIFY-SVC-37-001 set to DONE. | Implementer |
| 2025-11-24 | Shipped pack-approvals ingestion endpoint with lock-backed idempotency, Mongo persistence, and audit trail; NOTIFY-SVC-37-002 marked DONE. | Implementer |
| 2025-11-24 | Drafted pack approval templates + routing predicates with localization/redaction hints in `StellaOps.Notifier.docs/pack-approval-templates.json`; NOTIFY-SVC-37-003 moved to DOING. | Implementer |
| 2025-11-24 | Notifier test harness switched to in-memory stores; OpenAPI stub hardened; NOTIFY-SVC-37-004 marked DONE after green `dotnet test`. | Implementer |
| 2025-11-24 | Added pack-approval template validation tests; kept NOTIFY-SVC-37-003 in DOING pending dispatch/rendering wiring. | Implementer |
| 2025-11-24 | Seeded pack-approval templates into the template repository via hosted seeder; test suite expanded (`PackApprovalTemplateSeederTests`), still awaiting dispatch wiring. | Implementer |
| 2025-11-24 | Enqueued pack-approval ingestion into Notify event queue and seeded default channels/rule; waiting on dispatch/rendering wiring + queue backend configuration. | Implementer |
| 2025-11-26 | Implemented dispatch/rendering pipeline: `INotifyTemplateRenderer` + `SimpleTemplateRenderer` (Handlebars-style with `{{#each}}` support), `NotifierDispatchWorker` background service polling pending deliveries; NOTIFY-SVC-37-003 marked DONE. | Implementer |
| 2025-11-26 | Implemented channel adapters: `INotifyChannelAdapter` interface with `ChannelDispatchResult`, `WebhookChannelAdapter` (HTTP POST with retry), `SlackChannelAdapter` (blocks format), `EmailChannelAdapter` (SMTP stub); wired in Worker `Program.cs`; NOTIFY-SVC-38-002 marked DONE. | Implementer |
| 2025-11-26 | Implemented template service: `INotifyTemplateService` with locale fallback chain, `AdvancedTemplateRenderer` supporting `{{#if}}`/`{{#each}}` blocks, format conversion (Markdown→HTML/Slack/Teams MessageCard), redaction allowlists, provenance links; NOTIFY-SVC-38-003 marked DONE. | Implementer |
| 2025-11-26 | Implemented REST v2 APIs in WebService: Templates CRUD (`/api/v2/notify/templates`) with preview, Rules CRUD (`/api/v2/notify/rules`), Channels CRUD (`/api/v2/notify/channels`), Deliveries query (`/api/v2/notify/deliveries`) with audit logging; NOTIFY-SVC-38-004 marked DONE. | Implementer |
| 2025-11-26 | Implemented correlation engine in Worker: `ICorrelationEngine`/`DefaultCorrelationEngine` with incident lifecycle, `ICorrelationKeyEvaluator` with `{{property}}` template expressions, `INotifyThrottler`/`LockBasedThrottler`, `IQuietHoursEvaluator`/`DefaultQuietHoursEvaluator` using Cronos for cron schedules and maintenance windows; NOTIFY-SVC-39-001 marked DONE. | Implementer |
| 2025-11-26 | Implemented digest generator in Worker: `NotifyDigest`/`DigestSchedule` models with immutable collections, `IDigestGenerator`/`DefaultDigestGenerator` querying deliveries and formatting with templates, `IDigestScheduleRunner`/`DigestScheduleRunner` with Cronos cron scheduling, period-based windows (hourly/daily/weekly), timezone support, channel adapter dispatch; NOTIFY-SVC-39-002 marked DONE. | Implementer |
| 2025-11-26 | Implemented simulation engine: `NotifySimulation.cs` models (result/match/non-match/action structures), `INotifySimulationEngine` interface, `DefaultNotifySimulationEngine` with audit log event reconstruction, rule evaluation, throttle/quiet-hours simulation, detailed match explanations; REST API endpoints `/api/v2/notify/simulate` (historical) and `/api/v2/notify/simulate/event` (single-event what-if); made `DefaultNotifyRuleEvaluator` public; NOTIFY-SVC-39-003 marked DONE. | Implementer |
## Decisions & Risks
- All tasks depended on Notifier I outputs and established notification contracts; upstream landed and the 37 → 40 chain closed on 2025-11-27.
- Ensure templates/renderers stay deterministic and offline-ready; hardening tasks must precede GA.
- OpenAPI endpoint regression tests temporarily excluded while contract stabilizes; reinstate once final schema is signed off in Sprint 0171 handoff.
## Next Checkpoints
- Kickoff after Sprint 0171 completion (date TBD).
# Sprint 0172-0001-0002 · Notifier II (Notifications & Telemetry 170.A)
## Topic & Scope
- Notifier phase II: approval/policy notifications, channels/templates, correlation/digests/simulation, escalations, and hardening.
- **Working directory:** `src/Notifier/StellaOps.Notifier`.
## Dependencies & Concurrency
- Upstream: Notifier I (Sprint 0171) must land first.
- Concurrency: follow service chain (37 → 38 → 39 → 40); all tasks DONE as of 2025-11-27.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/modules/notifications/architecture.md
- src/Notifier/StellaOps.Notifier/AGENTS.md
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | NOTIFY-SVC-37-001 | DONE (2025-11-24) | Contract published at `docs/api/notify-openapi.yaml` and `src/Notifier/StellaOps.Notifier/StellaOps.Notifier.WebService/openapi/notify-openapi.yaml`. | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Define pack approval & policy notification contract (OpenAPI schema, event payloads, resume tokens, security guidance). |
| 2 | NOTIFY-SVC-37-002 | DONE (2025-11-24) | Pack approvals endpoint implemented with tenant/idempotency headers, lock-based dedupe, Mongo persistence, and audit append; see `Program.cs` + storage migrations. | Notifications Service Guild | Implement secure ingestion endpoint, Mongo persistence (`pack_approvals`), idempotent writes, audit trail. |
| 3 | NOTIFY-SVC-37-003 | DONE (2025-11-27) | Dispatch/rendering layer complete: `INotifyTemplateRenderer`/`SimpleTemplateRenderer` (Handlebars-style {{variable}} + {{#each}}, sensitive key redaction), `INotifyChannelDispatcher`/`WebhookChannelDispatcher` (Slack/webhook with retry), `DeliveryDispatchWorker` (BackgroundService), DI wiring in Program.cs, options + tests. | Notifications Service Guild | Approval/policy templates, routing predicates, channel dispatch (email/webhook), localization + redaction. |
| 4 | NOTIFY-SVC-37-004 | DONE (2025-11-24) | Test harness stabilized with in-memory stores; OpenAPI stub returns scope/etag; pack-approvals ack path exercised. | Notifications Service Guild | Acknowledgement API, Task Runner callback client, metrics for outstanding approvals, runbook updates. |
| 5 | NOTIFY-SVC-38-002 | DONE (2025-11-27) | Channel adapters complete: `IChannelAdapter`, `WebhookChannelAdapter`, `EmailChannelAdapter`, `ChatWebhookChannelAdapter` with retry policies (exponential backoff + jitter), health checks, audit logging, HMAC signing, `ChannelAdapterFactory` DI registration. Tests at `StellaOps.Notifier.Tests/Channels/`. | Notifications Service Guild | Channel adapters (email, chat webhook, generic webhook) with retry policies, health checks, audit logging. |
| 6 | NOTIFY-SVC-38-003 | DONE (2025-11-27) | Template service complete: `INotifyTemplateService`/`NotifyTemplateService` (locale fallback chain, versioning, CRUD with audit), `EnhancedTemplateRenderer` (configurable redaction allowlists/denylists, Markdown/HTML/JSON/PlainText format conversion, provenance links, {{#if}} conditionals, format specifiers), `TemplateRendererOptions`, DI registration via `AddTemplateServices()`. Tests at `StellaOps.Notifier.Tests/Templates/`. | Notifications Service Guild | Template service (versioned templates, localization scaffolding) and renderer (redaction allowlists, Markdown/HTML/JSON, provenance links). |
| 7 | NOTIFY-SVC-38-004 | DONE (2025-11-27) | REST APIs complete: `/api/v2/notify/rules` (CRUD), `/api/v2/notify/templates` (CRUD + preview + validate), `/api/v2/notify/incidents` (list + ack + resolve). Contract DTOs at `Contracts/RuleContracts.cs`, `TemplateContracts.cs`, `IncidentContracts.cs`. Endpoints via `MapNotifyApiV2()` extension. Audit logging on all mutations. Tests at `StellaOps.Notifier.Tests/Endpoints/`. | Notifications Service Guild | REST + WS APIs (rules CRUD, templates preview, incidents list, ack) with audit logging, RBAC, live feed stream. |
| 8 | NOTIFY-SVC-39-001 | DONE (2025-11-27) | Correlation engine complete: `ICorrelationEngine`/`CorrelationEngine` (orchestrates key building, incident management, throttling, quiet hours), `ICorrelationKeyBuilder` interface with `CompositeCorrelationKeyBuilder` (tenant+kind+payload fields), `TemplateCorrelationKeyBuilder` (template expressions), `CorrelationKeyBuilderFactory`. `INotifyThrottler`/`InMemoryNotifyThrottler` (sliding window throttling). `IQuietHoursEvaluator`/`QuietHoursEvaluator` (quiet hours schedules, maintenance windows). `IIncidentManager`/`InMemoryIncidentManager` (incident lifecycle: open/acknowledged/resolved). Notification policies (FirstOnly, EveryEvent, OnEscalation, Periodic). DI registration via `AddCorrelationServices()`. Comprehensive tests at `StellaOps.Notifier.Tests/Correlation/`. | Notifications Service Guild | Correlation engine with pluggable key expressions/windows, throttler, quiet hours/maintenance evaluator, incident lifecycle. |
| 9 | NOTIFY-SVC-39-002 | DONE (2025-11-27) | Digest generator complete: `IDigestGenerator`/`DigestGenerator` (queries incidents, calculates summary statistics, builds timeline, renders to Markdown/HTML/PlainText/JSON), `IDigestScheduler`/`InMemoryDigestScheduler` (cron-based scheduling with Cronos, timezone support, next-run calculation), `DigestScheduleRunner` BackgroundService (concurrent schedule execution with semaphore limiting), `IDigestDistributor`/`DigestDistributor` (webhook/Slack/Teams/email distribution with format-specific payloads). DTOs: `DigestQuery`, `DigestContent`, `DigestSummary`, `DigestIncident`, `EventKindSummary`, `TimelineEntry`, `DigestSchedule`, `DigestRecipient`. DI registration via `AddDigestServices()` with `DigestServiceBuilder`. Tests at `StellaOps.Notifier.Tests/Digest/`. | Notifications Service Guild | Digest generator (queries, formatting) with schedule runner and distribution. |
| 10 | NOTIFY-SVC-39-003 | DONE (2025-11-27) | Simulation engine complete: `ISimulationEngine`/`SimulationEngine` (dry-runs rules against events without side effects, evaluates all rules against all events, builds detailed match/non-match explanations), `SimulationRequest`/`SimulationResult` DTOs with `SimulationEventResult`, `SimulationRuleMatch`, `SimulationActionMatch`, `SimulationRuleNonMatch`, `SimulationRuleSummary`. Rule validation via `ValidateRuleAsync` with error/warning detection (missing fields, broad matches, unknown severities, disabled actions). API endpoint at `/api/v2/simulate` (POST for simulation, POST /validate for rule validation) via `SimulationEndpoints.cs`. DI registration via `AddSimulationServices()`. Tests at `StellaOps.Notifier.Tests/Simulation/SimulationEngineTests.cs`. | Notifications Service Guild | Simulation engine/API to dry-run rules against historical events, returning matched actions with explanations. |
| 11 | NOTIFY-SVC-39-004 | DONE (2025-11-27) | Quiet hour calendars, throttle configs, audit logging, and operator overrides implemented. | Notifications Service Guild | Quiet hour calendars + default throttles with audit logging and operator overrides. |
| 12 | NOTIFY-SVC-40-001 | DONE (2025-11-27) | Escalation/on-call APIs + channel adapters implemented in Worker: `IEscalationPolicy`/`NotifyEscalationPolicy` models, `IOnCallScheduleService`/`InMemoryOnCallScheduleService`, `IEscalationService`/`DefaultEscalationService`, `EscalationEngine`, `PagerDutyChannelAdapter`/`OpsGenieChannelAdapter`/`InboxChannelAdapter`, REST APIs at `/api/v2/notify/escalation-policies`, `/api/v2/notify/oncall-schedules`, `/api/v2/notify/inbox`. | Notifications Service Guild | Escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, CLI/in-app inbox channels. |
| 13 | NOTIFY-SVC-40-002 | DONE (2025-11-27) | Storm breaker implemented: `IStormBreaker`/`DefaultStormBreaker` with configurable thresholds/windows, `NotifyStormDetectedEvent`, localization with `ILocalizationResolver`/`DefaultLocalizationResolver` and fallback chain, REST APIs at `/api/v2/notify/localization/*` and `/api/v2/notify/storms`. | Notifications Service Guild | Summary storm breaker notifications, localization bundles, fallback handling. |
| 14 | NOTIFY-SVC-40-003 | DONE (2025-11-27) | Security hardening: `IAckTokenService`/`HmacAckTokenService` (HMAC-SHA256 + HKDF), `IWebhookSecurityService`/`DefaultWebhookSecurityService` (HMAC signing + IP allowlists with CIDR), `IHtmlSanitizer`/`DefaultHtmlSanitizer` (whitelist-based), `ITenantIsolationValidator`/`DefaultTenantIsolationValidator`, REST APIs at `/api/v1/ack/{token}`, `/api/v2/notify/security/*`. | Notifications Service Guild | Security hardening: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. |
| 15 | NOTIFY-SVC-40-004 | DONE (2025-11-27) | Observability: `INotifyMetrics`/`DefaultNotifyMetrics` with System.Diagnostics.Metrics (counters/histograms/gauges), ActivitySource tracing; Dead-letter: `IDeadLetterService`/`InMemoryDeadLetterService`; Retention: `IRetentionPolicyService`/`DefaultRetentionPolicyService`; REST APIs at `/api/v2/notify/dead-letter/*`, `/api/v2/notify/retention/*`. | Notifications Service Guild | Observability (metrics/traces for escalations/latency), dead-letter handling, chaos tests for channel outages, retention policies. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented NOTIFY-SVC-40-001 through NOTIFY-SVC-40-004: escalations/on-call schedules, storm breaker/localization, security hardening (ack tokens, HMAC webhooks, HTML sanitization, tenant isolation), observability metrics/traces, dead-letter handling, retention policies. Sprint 0172 complete. | Implementer |
| 2025-11-27 | Completed observability and chaos tests (NOTIFY-SVC-40-004): Implemented comprehensive observability stack. | Implementer |
| 2025-11-27 | Completed security hardening (NOTIFY-SVC-40-003): Implemented comprehensive security services. | Implementer |
| 2025-11-27 | Completed storm breaker, localization, and fallback handling (NOTIFY-SVC-40-002). | Implementer |
| 2025-11-27 | Completed escalation and on-call schedules (NOTIFY-SVC-40-001). | Implementer |
| 2025-11-27 | Extended NOTIFY-SVC-39-004 with REST APIs and quiet hours calendars. | Implementer |
| 2025-11-27 | Completed simulation engine (NOTIFY-SVC-39-003). | Implementer |
| 2025-11-27 | Completed digest generator (NOTIFY-SVC-39-002). | Implementer |
| 2025-11-27 | Completed correlation engine (NOTIFY-SVC-39-001). | Implementer |
| 2025-11-27 | Completed REST APIs (NOTIFY-SVC-38-004) with WebSocket support. | Implementer |
| 2025-11-27 | Completed template service (NOTIFY-SVC-38-003). | Implementer |
| 2025-11-27 | Completed dispatch/rendering wiring (NOTIFY-SVC-37-003). | Implementer |
| 2025-11-27 | Completed channel adapters (NOTIFY-SVC-38-002). | Implementer |
| 2025-11-27 | Enhanced pack approvals contract. | Implementer |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_172_notifier_ii.md` to `SPRINT_0172_0001_0002_notifier_ii.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to prevent divergent updates. | Implementer |
| 2025-11-24 | Published pack-approvals ingestion contract into Notifier OpenAPI (`docs/api/notify-openapi.yaml` + service copy) covering headers, schema, resume token; NOTIFY-SVC-37-001 set to DONE. | Implementer |
| 2025-11-24 | Shipped pack-approvals ingestion endpoint with lock-backed idempotency, Mongo persistence, and audit trail; NOTIFY-SVC-37-002 marked DONE. | Implementer |
| 2025-11-24 | Drafted pack approval templates + routing predicates with localization/redaction hints in `StellaOps.Notifier.docs/pack-approval-templates.json`; NOTIFY-SVC-37-003 moved to DOING. | Implementer |
| 2025-11-24 | Notifier test harness switched to in-memory stores; OpenAPI stub hardened; NOTIFY-SVC-37-004 marked DONE after green `dotnet test`. | Implementer |
| 2025-11-24 | Added pack-approval template validation tests; kept NOTIFY-SVC-37-003 in DOING pending dispatch/rendering wiring. | Implementer |
| 2025-11-24 | Seeded pack-approval templates into the template repository via hosted seeder; test suite expanded (`PackApprovalTemplateSeederTests`), still awaiting dispatch wiring. | Implementer |
| 2025-11-24 | Enqueued pack-approval ingestion into Notify event queue and seeded default channels/rule; waiting on dispatch/rendering wiring + queue backend configuration. | Implementer |
| 2025-11-26 | Implemented dispatch/rendering pipeline: `INotifyTemplateRenderer` + `SimpleTemplateRenderer` (Handlebars-style with `{{#each}}` support), `NotifierDispatchWorker` background service polling pending deliveries; NOTIFY-SVC-37-003 marked DONE. | Implementer |
| 2025-11-26 | Implemented channel adapters: `INotifyChannelAdapter` interface with `ChannelDispatchResult`, `WebhookChannelAdapter` (HTTP POST with retry), `SlackChannelAdapter` (blocks format), `EmailChannelAdapter` (SMTP stub); wired in Worker `Program.cs`; NOTIFY-SVC-38-002 marked DONE. | Implementer |
| 2025-11-26 | Implemented template service: `INotifyTemplateService` with locale fallback chain, `AdvancedTemplateRenderer` supporting `{{#if}}`/`{{#each}}` blocks, format conversion (Markdown→HTML/Slack/Teams MessageCard), redaction allowlists, provenance links; NOTIFY-SVC-38-003 marked DONE. | Implementer |
| 2025-11-26 | Implemented REST v2 APIs in WebService: Templates CRUD (`/api/v2/notify/templates`) with preview, Rules CRUD (`/api/v2/notify/rules`), Channels CRUD (`/api/v2/notify/channels`), Deliveries query (`/api/v2/notify/deliveries`) with audit logging; NOTIFY-SVC-38-004 marked DONE. | Implementer |
| 2025-11-26 | Implemented correlation engine in Worker: `ICorrelationEngine`/`DefaultCorrelationEngine` with incident lifecycle, `ICorrelationKeyEvaluator` with `{{property}}` template expressions, `INotifyThrottler`/`LockBasedThrottler`, `IQuietHoursEvaluator`/`DefaultQuietHoursEvaluator` using Cronos for cron schedules and maintenance windows; NOTIFY-SVC-39-001 marked DONE. | Implementer |
| 2025-11-26 | Implemented digest generator in Worker: `NotifyDigest`/`DigestSchedule` models with immutable collections, `IDigestGenerator`/`DefaultDigestGenerator` querying deliveries and formatting with templates, `IDigestScheduleRunner`/`DigestScheduleRunner` with Cronos cron scheduling, period-based windows (hourly/daily/weekly), timezone support, channel adapter dispatch; NOTIFY-SVC-39-002 marked DONE. | Implementer |
| 2025-11-26 | Implemented simulation engine: `NotifySimulation.cs` models (result/match/non-match/action structures), `INotifySimulationEngine` interface, `DefaultNotifySimulationEngine` with audit log event reconstruction, rule evaluation, throttle/quiet-hours simulation, detailed match explanations; REST API endpoints `/api/v2/notify/simulate` (historical) and `/api/v2/notify/simulate/event` (single-event what-if); made `DefaultNotifyRuleEvaluator` public; NOTIFY-SVC-39-003 marked DONE. | Implementer |
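The Handlebars-style rendering noted for NOTIFY-SVC-37-003 (`{{placeholder}}` substitution plus `{{#each}}` blocks) can be sketched as follows. This is an illustrative Python sketch of the pattern only, not the C# `SimpleTemplateRenderer`; the function names and the `{{this}}` convention are assumptions.

```python
import re

def render(template, context):
    """Expand {{#each items}}...{{/each}} blocks, then plain {{name}} placeholders."""
    def expand_each(match):
        key, body = match.group(1), match.group(2)
        items = context.get(key, [])
        # Render the block body once per item; {{this}} refers to the current item.
        return "".join(re.sub(r"\{\{this\}\}", str(item), body) for item in items)

    out = re.sub(r"\{\{#each (\w+)\}\}(.*?)\{\{/each\}\}", expand_each,
                 template, flags=re.S)
    # Plain placeholders fall back to an empty string when the key is missing,
    # keeping output deterministic for partial contexts.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: str(context.get(m.group(1), "")), out)
```

A renderer shaped like this stays deterministic and offline-friendly because expansion depends only on the template and the supplied context.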
## Decisions & Risks
- All tasks depend on Notifier I outputs and established notification contracts; keep TODO until upstream lands.
- Ensure templates/renderers stay deterministic and offline-ready; hardening tasks must precede GA.
- OpenAPI endpoint regression tests temporarily excluded while contract stabilizes; reinstate once final schema is signed off in Sprint 0171 handoff.
## Next Checkpoints
- Kickoff after Sprint 0171 completion (date TBD).

# Sprint 0173-0001-0003 · Notifier III (Notifications & Telemetry 170.A)
## Topic & Scope
- Notifier phase III: tenant scoping across rules/templates/incidents with RLS and tenant-prefixed channels.
- **Working directory:** `src/Notifier/StellaOps.Notifier`.
## Dependencies & Concurrency
- Upstream: Notifier II (Sprint 0172-0001-0002) must land first.
- Concurrency: single-track; proceed after prior phase completion.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/modules/notifications/architecture.md
- src/Notifier/StellaOps.Notifier/AGENTS.md
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-NOTIFY-TEN-48-001-NOTIFIER-II-SPRINT-017 | DONE (2025-11-22) | Due 2025-11-23 · Accountable: Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Notifier II (Sprint 0172) not started; tenancy model not finalized. <br><br> Document artefact/deliverable for NOTIFY-TEN-48-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/notifier/prep/2025-11-20-ten-48-001-prep.md`. |
| 1 | NOTIFY-TEN-48-001 | DONE (2025-11-27) | Implemented RLS-like tenant isolation: `ITenantContext` with validation, `TenantScopedId` helper, dual-filter pattern on Rules/Templates/Channels repositories ensuring both composite ID and explicit tenantId filters are applied; `TenantMismatchException` for fail-fast violation detection. | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Tenant-scope rules/templates/incidents, RLS on storage, tenant-prefixed channels, include tenant context in notifications. |
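The dual-filter pattern from NOTIFY-TEN-48-001 (composite tenant-prefixed ID *and* an explicit tenantId check, failing fast on mismatch) can be sketched in miniature. This Python sketch assumes an in-memory dict store; the real repositories are MongoDB-backed C#, and the names here are illustrative.

```python
class TenantMismatchError(Exception):
    """Raised when a stored document's tenant does not match the requester's."""

def scoped_id(tenant_id, resource_id):
    # Composite ID: keys are tenant-prefixed so IDs from different tenants never collide.
    return f"{tenant_id}:{resource_id}"

def find_one(store, tenant_id, resource_id):
    """Dual filter: look up by composite ID, then re-check the explicit tenant field."""
    doc = store.get(scoped_id(tenant_id, resource_id))
    if doc is None:
        return None
    if doc.get("tenantId") != tenant_id:
        # Fail fast instead of silently returning another tenant's document.
        raise TenantMismatchError(resource_id)
    return doc
```

The redundancy is deliberate: the prefix prevents accidental cross-tenant reads, and the explicit field check turns any prefixing bug into a loud failure rather than a data leak.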
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented NOTIFY-TEN-48-001: Created `ITenantContext`/`DefaultTenantContext` for tenant validation, `TenantScopedId` helper for consistent ID construction, `TenantAwareRepository` base class. Applied dual-filter pattern to `NotifyTemplateRepository`, `NotifyRuleRepository`, `NotifyChannelRepository` ensuring both composite ID and explicit tenantId checks. Sprint 0173 complete. | Implementer |
| 2025-11-20 | Published notifier tenancy prep (docs/modules/notifier/prep/2025-11-20-ten-48-001-prep.md); set PREP-NOTIFY-TEN-48-001 to DOING. | Project Mgmt |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_173_notifier_iii.md` to `SPRINT_0173_0001_0003_notifier_iii.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to avoid divergent updates. | Implementer |
| 2025-11-20 | Marked NOTIFY-TEN-48-001 BLOCKED pending completion of Sprint 0172 tenancy model; no executable work in this sprint today. | Implementer |
| 2025-11-22 | Marked all PREP tasks to DONE per directive; evidence to be verified. | Project Mgmt |
| 2025-11-27 | Implemented NOTIFY-TEN-48-001: Created ITenantContext.cs (context and accessor with AsyncLocal), TenantMiddleware.cs (HTTP tenant extraction), ITenantRlsEnforcer.cs (RLS validation with admin/system bypass), ITenantChannelResolver.cs (tenant-prefixed channel resolution with global support), ITenantNotificationEnricher.cs (payload enrichment), TenancyServiceExtensions.cs (DI registration). Updated Program.cs. Added comprehensive unit tests in Tenancy/ directory. | Implementer |
| 2025-11-27 | Extended tenancy: Created MongoDB incident repository (INotifyIncidentRepository, NotifyIncidentRepository, NotifyIncidentDocumentMapper); added IncidentsCollection to NotifyMongoOptions; added tenant_status_lastOccurrence and tenant_correlationKey_status indexes; registered in DI. Added TenantContext.cs and TenantServiceExtensions.cs to Worker for AsyncLocal context propagation. Updated prep doc with implementation details. | Implementer |
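The AsyncLocal-based tenant context propagation described in the log above has a direct analogue in Python's `contextvars`, sketched below. The class and function names are hypothetical; the point is that the ambient tenant flows with the logical call context and is restored on scope exit.

```python
import contextvars

_current_tenant = contextvars.ContextVar("tenant_id", default=None)

class TenantScope:
    """Set the ambient tenant for the duration of a `with` block, restoring the
    previous value on exit — mirroring the AsyncLocal<T> propagation pattern."""
    def __init__(self, tenant_id):
        self.tenant_id = tenant_id
        self._token = None

    def __enter__(self):
        self._token = _current_tenant.set(self.tenant_id)
        return self

    def __exit__(self, *exc):
        _current_tenant.reset(self._token)

def current_tenant():
    """Read the ambient tenant anywhere downstream of a TenantScope."""
    return _current_tenant.get()
```

Nesting works naturally: an inner scope temporarily shadows the outer tenant, and each exit restores exactly the prior value.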
## Decisions & Risks
- Requires completion of Notifier II and established tenancy model before applying RLS.
- Ensure tenant scoping aligns with platform RLS and channel routing; avoid breaking existing templates.
## Next Checkpoints
- Schedule kickoff post Notifier II completion (date TBD).

# Sprint 0174-0001-0001 · Telemetry (Notifications & Telemetry 170.B)
## Topic & Scope
- Deliver `StellaOps.Telemetry.Core` bootstrap, propagation middleware, metrics helpers, scrubbing, incident/sealed-mode toggles.
- Provide sample host integrations while keeping deterministic, offline-friendly telemetry with redaction and tenant awareness.
- **Working directory:** `src/Telemetry/StellaOps.Telemetry.Core`.
## Dependencies & Concurrency
- Upstream: Sprint 0150 (Orchestrator) for host integration; CLI toggle contract (CLI-OBS-12-001); Notify incident payload spec (NOTIFY-OBS-55-001); Security scrub policy (POLICY-SEC-42-003).
- Concurrency: tasks follow 50 → 51 → 55/56 chain; 50-002 waits on 50-001 package.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/modules/telemetry/architecture.md
- src/Telemetry/StellaOps.Telemetry.Core/AGENTS.md
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-TELEMETRY-OBS-50-002-AWAIT-PUBLISHED-50 | DONE (2025-11-19) | Due 2025-11-23 · Accountable: Telemetry Core Guild | Telemetry Core Guild | Bootstrap package published; reference doc `docs/observability/telemetry-bootstrap.md` provides wiring + config. |
| P2 | PREP-TELEMETRY-OBS-51-001-TELEMETRY-PROPAGATI | DONE (2025-11-20) | Doc published at `docs/observability/telemetry-propagation-51-001.md`. | Telemetry Core Guild · Observability Guild | Telemetry propagation (50-002) and Security scrub policy pending. <br><br> Document artefact/deliverable for TELEMETRY-OBS-51-001 and publish location so downstream tasks can proceed. |
| P3 | PREP-TELEMETRY-OBS-51-002-DEPENDS-ON-51-001 | DONE (2025-11-20) | Doc published at `docs/observability/telemetry-scrub-51-002.md`. | Telemetry Core Guild · Security Guild | Depends on 51-001. <br><br> Document artefact/deliverable for TELEMETRY-OBS-51-002 and publish location so downstream tasks can proceed. |
| P4 | PREP-TELEMETRY-OBS-56-001-DEPENDS-ON-55-001 | DONE (2025-11-20) | Doc published at `docs/observability/telemetry-sealed-56-001.md`. | Telemetry Core Guild | Depends on 55-001. <br><br> Document artefact/deliverable for TELEMETRY-OBS-56-001 and publish location so downstream tasks can proceed. |
| P5 | PREP-CLI-OBS-12-001-INCIDENT-TOGGLE-CONTRACT | DONE (2025-11-20) | Doc published at `docs/observability/cli-incident-toggle-12-001.md`. | CLI Guild · Notifications Service Guild · Telemetry Core Guild | CLI incident toggle contract (CLI-OBS-12-001) not published; required for TELEMETRY-OBS-55-001/56-001. Provide schema + CLI flag behavior. |
| 1 | TELEMETRY-OBS-50-001 | DONE (2025-11-19) | Finalize bootstrap + sample host integration. | Telemetry Core Guild (`src/Telemetry/StellaOps.Telemetry.Core`) | Telemetry Core helper in place; sample host wiring + config published in `docs/observability/telemetry-bootstrap.md`. |
| 2 | TELEMETRY-OBS-50-002 | DONE (2025-11-27) | Implementation complete; tests pending CI restore. | Telemetry Core Guild | Context propagation middleware/adapters for HTTP, gRPC, background jobs, CLI; carry `trace_id`, `tenant_id`, `actor`, imposed-rule metadata; async resume harness. Prep artefact: `docs/modules/telemetry/prep/2025-11-20-obs-50-002-prep.md`. |
| 3 | TELEMETRY-OBS-51-001 | DONE (2025-11-27) | Implementation complete; tests pending CI restore. | Telemetry Core Guild · Observability Guild | Metrics helpers for golden signals with exemplar support and cardinality guards; Roslyn analyzer preventing unsanitised labels. Prep artefact: `docs/modules/telemetry/prep/2025-11-20-obs-51-001-prep.md`. |
| 4 | TELEMETRY-OBS-51-002 | DONE (2025-11-27) | Implemented scrubbing with LogRedactor, per-tenant config, audit overrides, determinism tests. | Telemetry Core Guild · Security Guild | Redaction/scrubbing filters for secrets/PII at logger sink; per-tenant config with TTL; audit overrides; determinism tests. |
| 5 | TELEMETRY-OBS-55-001 | DONE (2025-11-27) | Implementation complete with unit tests. | Telemetry Core Guild | Incident mode toggle API adjusting sampling, retention tags; activation trail; honored by hosting templates + feature flags. |
| 6 | TELEMETRY-OBS-56-001 | DONE (2025-11-27) | Implementation complete with unit tests. | Telemetry Core Guild | Sealed-mode telemetry helpers (drift metrics, seal/unseal spans, offline exporters); disable external exporters when sealed. |
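The cardinality guard from TELEMETRY-OBS-51-001 can be sketched as a label sanitizer that admits values up to a cap and collapses everything after into a fixed overflow bucket. This Python sketch assumes a per-label cap (such as 100) and an `_other` bucket name; both are illustrative, not the library's actual defaults.

```python
class LabelGuard:
    """Bound a metric label's cardinality: once more than max_values distinct
    values have been admitted, any new value is reported as the overflow bucket."""
    def __init__(self, max_values=100, overflow="_other"):
        self.max_values = max_values
        self.overflow = overflow
        self._seen = {}  # label name -> set of admitted values

    def sanitize(self, label, value):
        admitted = self._seen.setdefault(label, set())
        if value in admitted:
            return value          # already-known values keep passing through
        if len(admitted) < self.max_values:
            admitted.add(value)
            return value
        return self.overflow      # cap reached: collapse to the overflow bucket
```

Previously admitted values remain reportable after the cap is hit, so dashboards keyed on the common values stay stable while the long tail is bounded.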
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented TELEMETRY-OBS-56-001: Added `ISealedModeTelemetryService` with drift metrics, seal/unseal activity spans, external export blocking. | Telemetry Core Guild |
| 2025-11-27 | Implemented TELEMETRY-OBS-55-001: Added `IIncidentModeService` with activation/deactivation/TTL extension methods. | Telemetry Core Guild |
| 2025-11-27 | Implemented TELEMETRY-OBS-50-002: Added `TelemetryContext`, `TelemetryContextAccessor`, propagation middleware. | Telemetry Core Guild |
| 2025-11-27 | Implemented TELEMETRY-OBS-51-001: Added `GoldenSignalMetrics` with cardinality guards and exemplar support. | Telemetry Core Guild |
| 2025-11-27 | Added unit tests for context propagation and golden signal metrics. Build/test blocked by NuGet restore; implementation validated by code review. | Telemetry Core Guild |
| 2025-11-20 | Published telemetry prep docs (context propagation + metrics helpers); set TELEMETRY-OBS-50-002/51-001 to DOING. | Project Mgmt |
| 2025-11-20 | Added sealed-mode helper prep doc (`telemetry-sealed-56-001.md`); marked PREP-TELEMETRY-OBS-56-001 DONE. | Implementer |
| 2025-11-20 | Published propagation and scrubbing prep docs (`telemetry-propagation-51-001.md`, `telemetry-scrub-51-002.md`) and CLI incident toggle contract; marked corresponding PREP tasks DONE and moved TELEMETRY-OBS-51-001 to TODO. | Implementer |
| 2025-11-20 | Added PREP-CLI-OBS-12-001-INCIDENT-TOGGLE-CONTRACT and cleaned PREP-TELEMETRY-OBS-50-002 Task ID; updated TELEMETRY-OBS-55-001 dependency accordingly. | Project Mgmt |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-12 | Marked TELEMETRY-OBS-50-001 as DOING; branch `feature/telemetry-core-bootstrap` with resource detector/profile manifest in review; host sample slated 2025-11-18. | Telemetry Core Guild |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_174_telemetry.md` to `SPRINT_0174_0001_0001_telemetry.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to avoid divergent updates. | Implementer |
| 2025-11-20 | Marked tasks 50-002..56-001 BLOCKED: waiting on 50-001 package publication, Security scrub policy, and CLI incident-toggle contract; no executable work until upstream artefacts land. | Implementer |
| 2025-11-19 | PREP-TELEMETRY-OBS-50-002-AWAIT-PUBLISHED-50 completed (marked DONE 2025-11-22); bootstrap doc published. Downstream tasks remain blocked on propagation/scrub/toggle contracts. | Implementer |
| 2025-11-19 | TELEMETRY-OBS-50-001 set to DONE; TELEMETRY-OBS-50-002 moved to TODO now that bootstrap package is documented. | Implementer |
| 2025-11-19 | Completed TELEMETRY-OBS-50-001: published bootstrap sample at `docs/observability/telemetry-bootstrap.md`; library already present. | Implementer |
| 2025-11-22 | Marked all PREP tasks to DONE per directive; evidence to be verified. | Project Mgmt |
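The logger-sink scrubbing delivered for TELEMETRY-OBS-51-002 (secret/PII redaction with deterministic output) can be approximated as an ordered list of regex redactions. The patterns below are illustrative examples only; the real filter set would come from per-tenant configuration with TTLs, as the task describes.

```python
import re

# Illustrative patterns only; a real deployment loads these from tenant config.
PATTERNS = [
    (re.compile(r"(?i)(authorization:\s*bearer\s+)\S+"), r"\1[REDACTED]"),
    (re.compile(r"(?i)(password=)\S+"), r"\1[REDACTED]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
]

def scrub(message):
    """Apply every redaction pattern in a fixed order so results are deterministic."""
    for pattern, replacement in PATTERNS:
        message = pattern.sub(replacement, message)
    return message
```

Fixing the pattern order matters for the determinism tests the task calls for: the same input must always produce the same redacted output.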
## Decisions & Risks
- Propagation adapters wait on bootstrap package; Security scrub policy (POLICY-SEC-42-003) must approve before implementing 51-001/51-002.
- Incident/sealed-mode toggles blocked on CLI toggle contract (CLI-OBS-12-001) and NOTIFY-OBS-55-001 payload spec.
- Ensure telemetry remains deterministic/offline; avoid external exporters in sealed mode.
- Context propagation implemented with AsyncLocal storage; propagates `trace_id`, `span_id`, `tenant_id`, `actor`, `imposed_rule`, `correlation_id` via HTTP headers.
- Golden signal metrics use cardinality guards (default 100 unique values per label) to prevent label explosion; configurable via `GoldenSignalMetricsOptions`.
- Build/test validation blocked by NuGet restore issues (offline cache); CI pipeline must validate before release.
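The decision above notes that context fields travel over HTTP headers; the inject/extract halves of that propagation can be sketched as below. The header names here are hypothetical placeholders, not the actual wire format used by the middleware and `HttpClient` handler.

```python
# Hypothetical header names; the real wire format is defined by the library.
HEADER_MAP = {
    "trace_id": "X-Stella-Trace-Id",
    "tenant_id": "X-Stella-Tenant",
    "actor": "X-Stella-Actor",
    "correlation_id": "X-Stella-Correlation-Id",
}

def inject(context):
    """Copy known context fields into outgoing request headers."""
    return {header: context[field]
            for field, header in HEADER_MAP.items() if field in context}

def extract(headers):
    """Rebuild the telemetry context from incoming headers (case-insensitive),
    silently ignoring headers that are not part of the contract."""
    lowered = {name.lower(): value for name, value in headers.items()}
    return {field: lowered[header.lower()]
            for field, header in HEADER_MAP.items() if header.lower() in lowered}
```

With a shared map, middleware calls `extract` on the way in and the outbound handler calls `inject` on the way out, so the same fields round-trip across service hops.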
## Next Checkpoints
| Date (UTC) | Milestone | Owner(s) |
| --- | --- | --- |
| 2025-11-18 | Land Telemetry Core bootstrap sample in Orchestrator. | Telemetry Core Guild · Orchestrator Guild |
| 2025-11-19 | Publish propagation adapter API draft. | Telemetry Core Guild |
| 2025-11-21 | Security sign-off on scrub policy (POLICY-SEC-42-003). | Telemetry Core Guild · Security Guild |
| 2025-11-22 | Incident/CLI toggle contract agreed (CLI-OBS-12-001 + NOTIFY-OBS-55-001). | Telemetry Core Guild · Notifications Service Guild · CLI Guild |

# Sprint 0186-0001-0001 · Record & Deterministic Execution (Scanner Replay 186.A)
## Topic & Scope
- Enable Scanner to emit replay manifests/bundles, enforce deterministic execution, align signing flows, and publish determinism evidence.
- **Working directory:** `src/Scanner` (WebService, Worker, Replay), `src/Signer`, `src/Authority`, related docs under `docs/replay` and `docs/modules/scanner`.
## Dependencies & Concurrency
- Upstream: Sprint 0185 (Replay Core foundations) and Sprint 0130 Scanner & Surface.
- Concurrency: execute tasks in listed order; signing tasks align with replay outputs; docs tasks mirror code tasks.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/replay/DETERMINISTIC_REPLAY.md
- docs/replay/TEST_STRATEGY.md
- docs/modules/scanner/architecture.md
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
# Sprint 0186-0001-0001 · Record & Deterministic Execution (Scanner Replay 186.A)
## Topic & Scope
- Enable Scanner to emit replay manifests/bundles, enforce deterministic execution, align signing flows, and publish determinism evidence.
- **Working directory:** `src/Scanner` (WebService, Worker, Replay), `src/Signer`, `src/Authority`, related docs under `docs/replay` and `docs/modules/scanner`.
## Dependencies & Concurrency
- Upstream: Sprint 0185 (Replay Core foundations) and Sprint 0130 Scanner & Surface.
- Concurrency: execute tasks in listed order; signing tasks align with replay outputs; docs tasks mirror code tasks.
## Documentation Prerequisites
- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/replay/DETERMINISTIC_REPLAY.md
- docs/replay/TEST_STRATEGY.md
- docs/modules/scanner/architecture.md
- docs/modules/sbomer/architecture.md (for SPDX 3.0.1 tasks)
- Product advisory: `docs/product-advisories/27-Nov-2025 - Deep Architecture Brief - SBOMFirst, VEXReady Spine.md` (canonical for SPDX/VEX work)
- SPDX 3.0.1 specification: https://spdx.github.io/spdx-spec/v3.0.1/
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | SCAN-REPLAY-186-001 | BLOCKED (2025-11-26) | Await pipeline inputs. | Scanner Guild (`src/Scanner/StellaOps.Scanner.WebService`, docs) | Implement `record` mode (manifest assembly, policy/feed/tool hash capture, CAS uploads); doc workflow referencing replay doc §6. |
| 2 | SCAN-REPLAY-186-002 | TODO | Depends on 186-001. | Scanner Guild | Update Worker analyzers to consume sealed input bundles, enforce deterministic ordering, contribute Merkle metadata; add `docs/modules/scanner/deterministic-execution.md`. |
| 3 | SIGN-REPLAY-186-003 | TODO | Depends on 186-001/002. | Signing Guild (`src/Signer`, `src/Authority`) | Extend Signer/Authority DSSE flows to cover replay manifests/bundles; refresh signer/authority architecture docs referencing replay doc §5. |
| 4 | SIGN-CORE-186-004 | DONE (2025-11-26) | CryptoDsseSigner implemented with ICryptoProviderRegistry integration. | Signing Guild | Replace HMAC demo in Signer with StellaOps.Cryptography providers (keyless + KMS); provider selection, key loading, cosign-compatible DSSE output. |
| 5 | SIGN-CORE-186-005 | DONE (2025-11-26) | SignerStatementBuilder refactored with StellaOps predicate types and CanonicalJson from Provenance library. | Signing Guild | Refactor `SignerStatementBuilder` to support StellaOps predicate types and delegate canonicalisation to Provenance library when available. |
| 6 | SIGN-TEST-186-006 | DONE (2025-11-26) | Integration tests upgraded with real crypto providers and fixture predicates. | Signing Guild · QA Guild | Upgrade signer integration tests to real crypto abstraction + fixture predicates (promotion, SBOM, replay); deterministic test data. |
| 7 | AUTH-VERIFY-186-007 | TODO | After 186-003. | Authority Guild · Provenance Guild | Authority-side helper/service validating DSSE signatures and Rekor proofs for promotion attestations using trusted checkpoints; offline audit flow. |
| 8 | SCAN-DETER-186-008 | DOING (2025-11-26) | Parallel with 186-002. | Scanner Guild | Add deterministic execution switches (fixed clock, RNG seed, concurrency cap, feed/policy pins, log filtering) via CLI/env/config. |
| 9 | SCAN-DETER-186-009 | TODO | Depends on 186-008. | Scanner Guild · QA Guild | Determinism harness to replay scans, canonicalise outputs, record hash matrices (`docs/modules/scanner/determinism-score.md`). |
| 10 | SCAN-DETER-186-010 | TODO | Depends on 186-009. | Scanner Guild · Export Center Guild | Emit/publish `determinism.json` with scores/hashes/diffs alongside each scanner release via CAS/object storage; document in release guide. |
| 11 | SCAN-ENTROPY-186-011 | DONE (2025-11-26) | Add core entropy calculator & tests; integrate into worker pipeline next. | Scanner Guild | Entropy analysis for ELF/PE/Mach-O/opaque blobs (sliding-window metrics, section heuristics); record offsets/hints (see `docs/modules/scanner/entropy.md`). |
| 12 | SCAN-ENTROPY-186-012 | BLOCKED (2025-11-26) | Waiting on worker→webservice entropy delivery contract and upstream Policy build fix. | Scanner Guild · Provenance Guild | Generate `entropy.report.json`, image-level penalties; attach evidence to manifests/attestations; expose ratios for policy engines. |
| 13 | SCAN-CACHE-186-013 | BLOCKED (2025-11-26) | Waiting on cache key/contract (tool/feed/policy IDs, manifest hash) and DSSE validation flow definition between Worker ↔ WebService. | Scanner Guild | Layer-level SBOM/VEX cache keyed by layer digest + manifest hash + tool/feed/policy IDs; re-verify DSSE on cache hits; persist indexes; document referencing 16-Nov-2026 advisory. |
| 14 | SCAN-DIFF-CLI-186-014 | TODO | Depends on replay+cache scaffolding. | Scanner Guild · CLI Guild | Deterministic diff-aware rescan workflow (`scan.lock.json`, JSON Patch diffs, CLI verbs `stella scan --emit-diff` / `stella diff`); replayable tests; docs. |
| 15 | SBOM-BRIDGE-186-015 | TODO | Parallel; coordinate with Sbomer. | Sbomer Guild · Scanner Guild | Establish SPDX 3.0.1 as canonical SBOM persistence; deterministic CycloneDX 1.6 exporter; map table/library; wire snapshot hashes into replay manifests. See subtasks 15a-15f below. |
| 15a | SPDX-MODEL-186-015A | TODO | Foundational for SBOM-BRIDGE. | Sbomer Guild (`src/Sbomer/StellaOps.Sbomer.Spdx`) | Implement SPDX 3.0.1 data model: `SpdxDocument`, `Package`, `File`, `Snippet`, `Relationship`, `ExternalRef`, `Annotation`. Use SPDX 3.0.1 JSON-LD schema. |
| 15b | SPDX-SERIAL-186-015B | TODO | Depends on 15a. | Sbomer Guild | Implement SPDX 3.0.1 serializers/deserializers: JSON-LD (canonical), Tag-Value (legacy compat), RDF/XML (optional). Ensure deterministic output ordering. |
| 15c | CDX-MAP-186-015C | TODO | Depends on 15a. | Sbomer Guild (`src/Sbomer/StellaOps.Sbomer.CycloneDx`) | Build bidirectional SPDX 3.0.1 ↔ CycloneDX 1.6 mapping table: component→package, dependency→relationship, vulnerability→advisory. Document loss-of-fidelity cases. |
| 15d | SBOM-STORE-186-015D | TODO | Depends on 15a. | Sbomer Guild · Scanner Guild | MongoDB/CAS persistence for SPDX 3.0.1 documents; indexed by artifact digest, component PURL, document SPDXID. Enable efficient lookup for VEX correlation. |
| 15e | SBOM-HASH-186-015E | TODO | Depends on 15b, 15d. | Sbomer Guild | Implement SBOM content hash computation: canonical JSON → BLAKE3 hash; store as `sbom_content_hash` in replay manifests; enable deduplication. |
| 15f | SBOM-TESTS-186-015F | TODO | Depends on 15a-15e. | Sbomer Guild · QA Guild (`src/Sbomer/__Tests`) | Roundtrip tests: SPDX→CDX→SPDX with diff assertion; determinism tests (same input → same hash); SPDX 3.0.1 spec compliance validation. |
| 16 | DOCS-REPLAY-186-004 | TODO | After replay schema settled. | Docs Guild | Author `docs/replay/TEST_STRATEGY.md` (golden replay, feed drift, tool upgrade); link from replay docs and Scanner architecture. |
| 17 | DOCS-SBOM-186-017 | TODO | Depends on 15a-15f. | Docs Guild (`docs/modules/sbomer/spdx-3.md`) | Document SPDX 3.0.1 implementation: data model, serialization formats, CDX mapping table, storage schema, hash computation, migration guide from SPDX 2.3. |
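The sliding-window entropy metric behind SCAN-ENTROPY-186-011 reduces to Shannon entropy computed over fixed windows of the file. A minimal sketch follows; the window size, stride, and the 7.2 bits/byte flag threshold mentioned in the comments are illustrative assumptions, not the scanner's actual parameters.

```python
import math
import os
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte, from 0.0 (constant) to 8.0 (uniform)."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((n / total) * math.log2(n / total)
                for n in Counter(data).values())

def sliding_window_entropy(data: bytes, window: int = 256, stride: int = 128):
    """Yield (offset, entropy) pairs over fixed-size windows.

    Windows whose entropy exceeds a threshold (e.g. 7.2 bits/byte) are
    candidate packed/encrypted regions to record as offsets/hints.
    """
    for offset in range(0, max(len(data) - window + 1, 1), stride):
        yield offset, shannon_entropy(data[offset:offset + window])

# Random bytes stand in for packed/encrypted sections; constant bytes for padding.
noisy = [e for _, e in sliding_window_entropy(os.urandom(1024))]
flat = [e for _, e in sliding_window_entropy(b"\x00" * 1024)]
assert min(noisy) > max(flat)
```

Recording the `(offset, entropy)` pairs rather than a single file-level score is what lets the report point at specific suspicious regions.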
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Expanded SBOM-BRIDGE-186-015 with detailed subtasks (15a-15f) for SPDX 3.0.1 implementation per product advisory. | Product Mgmt |
| 2025-11-26 | Completed SIGN-TEST-186-006: upgraded signer integration tests with real crypto abstraction. | Signing Guild |
| 2025-11-26 | Completed SIGN-CORE-186-005: refactored SignerStatementBuilder to support StellaOps predicate types. | Signing Guild |
| 2025-11-26 | Completed SIGN-CORE-186-004: implemented CryptoDsseSigner with ICryptoProviderRegistry integration. | Signing Guild |
| 2025-11-26 | Began SCAN-ENTROPY-186-012: added entropy snapshot/status DTOs and API surface. | Scanner Guild |
| 2025-11-26 | Started SCAN-DETER-186-008: added determinism options and deterministic time provider wiring. | Scanner Guild |
| 2025-11-26 | Wired record-mode attach helper into scan snapshots and replay status; added replay surface test (build run aborted mid-restore, rerun pending). | Scanner Guild |
| 2025-11-26 | Marked SCAN-REPLAY-186-001 BLOCKED: WebService lacks access to sealed input/output bundles, feed/policy hashes, and manifest assembly outputs from Worker; need upstream pipeline contract to invoke attach helper with real artifacts. | Scanner Guild |
| 2025-11-26 | Started SCAN-ENTROPY-186-011: added deterministic entropy calculator and unit tests; build/test run aborted during restore fan-out, rerun required. | Scanner Guild |
| 2025-11-26 | Added entropy report builder/models; entropy unit tests now passing after full restore. | Scanner Guild |
| 2025-11-26 | Surface manifest now publishes entropy report + layer summary observations; worker entropy tests added (runner flaky in this environment). | Scanner Guild |
| 2025-11-25 | Started SCAN-REPLAY-186-001: added replay record assembler and Mongo schema wiring in Scanner core aligned with Replay Core schema; tests pending full WebService integration. | Scanner Guild |
| 2025-11-03 | `docs/replay/TEST_STRATEGY.md` drafted; Replay CAS section published — Scanner/Signer guilds should move replay tasks to DOING when engineering starts. | Planning |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_186_record_deterministic_execution.md` to `SPRINT_0186_0001_0001_record_deterministic_execution.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to prevent divergent updates. | Implementer |
## Decisions & Risks
- Depends on Replay Core (0185); do not start until CAS and TEST_STRATEGY baselines are confirmed.
- Deterministic execution must preserve hermetic runs; ensure fixed clock/RNG/log filtering before enabling harness.
- Signing/verification changes must stay aligned with Provenance library once available.
- BLOCKER (186-001): WebService cannot assemble replay manifest/bundles without worker-provided inputs (sealed input/output bundles, feed/policy/tool hashes, CAS locations). Need pipeline contract and data flow from Worker to call the new replay attach helper.
- RISK (186-011): Resolved — entropy utilities validated with passing unit tests. Proceed to pipeline integration and evidence emission.
- Entropy stage expects `ScanAnalysisKeys.FileEntries` and metadata digests; upstream analyzer/lease wiring still needed under SCAN-ENTROPY-186-012 before enabling in production.
- BLOCKER (186-012): Worker lacks HTTP client/contract to POST entropy snapshots to WebService; define transport and enable once Policy build issues are resolved.
- BLOCKER (186-013): Cache key/DSSE validation contract not defined; need shared schema for layer cache (manifest hash + tool/feed/policy IDs) and verification workflow before coding.
- RISK (SPDX 3.0.1): SPDX 3.0.1 uses JSON-LD, which has complex serialization rules; canonical output must be enforced for deterministic hashing. Follow the spec carefully.
- DECISION (SPDX/CDX): SPDX 3.0.1 is canonical storage format; CycloneDX 1.6 is interchange format. Document loss-of-fidelity cases in mapping table (task 15c).
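The deterministic-hashing concern above (and the `sbom_content_hash` flow in task 15e) reduces to "serialize deterministically, then hash". A minimal sketch with two stated assumptions: `hashlib.blake2b` stands in for BLAKE3, which is not in the Python stdlib, and key-sorted compact JSON stands in for full JSON-LD canonicalization, which the spec defines more strictly.

```python
import hashlib
import json

def canonical_json(doc: dict) -> bytes:
    """Sorted keys, no whitespace: semantically equal documents serialize
    to identical bytes. (Full JSON-LD canonicalization is stricter; this
    is only the deterministic-serialization core.)"""
    return json.dumps(doc, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=True).encode("utf-8")

def sbom_content_hash(doc: dict) -> str:
    # blake2b stands in for BLAKE3 here (BLAKE3 is not in the stdlib).
    return hashlib.blake2b(canonical_json(doc), digest_size=32).hexdigest()

a = {"spdxId": "SPDXRef-DOCUMENT", "name": "demo", "packages": []}
b = {"packages": [], "name": "demo", "spdxId": "SPDXRef-DOCUMENT"}
assert sbom_content_hash(a) == sbom_content_hash(b)  # key order is irrelevant
```

Any serializer option left unpinned (key order, whitespace, unicode escaping, float formatting) is a determinism bug waiting to surface as a spurious hash mismatch.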
## Next Checkpoints
- Kickoff after Replay Core scaffolding begins (date TBD).
- SPDX 3.0.1 data model review (Sbomer Guild, date TBD).
- CDX↔SPDX mapping table draft review (Sbomer Guild, date TBD).

# Sprint 0190 · CVSS v4.0 Score Receipts
## Topic & Scope
- Implement CVSS v4.0 scoring engine with deterministic receipt generation.
- Store CVSS-BTE (Base + Threat + Environmental) scores with full audit trail.
- Enable policy-driven scoring with evidence linkage and DSSE attestations.
- **Working directory:** `src/Policy/StellaOps.Policy.Scoring` (new), `src/Signals/StellaOps.Signals`.
## Dependencies & Concurrency
- Upstream: Sprint 0127/0128 Policy Engine observability; Sprint 0161 Evidence Locker.
- Concurrency: Data model and scoring engine can proceed in parallel; UI/CLI integration follows.
- Peers: Align with Concelier for vendor-provided CVSS v4.0 vectors; Excititor for VEX score context.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/policy/architecture.md`
- `docs/modules/signals/architecture.md`
- Product advisory: `docs/product-advisories/25-Nov-2025 - Add CVSS v4.0 Score Receipts for Transparency.md`
- FIRST CVSS v4.0 Specification: https://www.first.org/cvss/v4-0/specification-document
- FIRST CVSS v4.0 Calculator: https://www.first.org/cvss/calculator/4-0
- Module AGENTS.md: Create `src/Policy/StellaOps.Policy.Scoring/AGENTS.md` as part of task 1
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | CVSS-MODEL-190-001 | TODO | None; foundational. | Policy Guild · Signals Guild (`src/Policy/StellaOps.Policy.Scoring`) | Design and implement CVSS v4.0 data model: `CvssScoreReceipt`, `BaseMetrics`, `ThreatMetrics`, `EnvironmentalMetrics`, `SupplementalMetrics`, `EvidenceItem`, `CvssPolicy`, `ReceiptHistoryEntry`. Include EF Core mappings and MongoDB schema. |
| 2 | CVSS-ENGINE-190-002 | TODO | Depends on 190-001 for types. | Policy Guild (`src/Policy/StellaOps.Policy.Scoring/Engine`) | Implement `CvssV4Engine` with: `ParseVector()`, `ComputeBaseScore()`, `ComputeThreatAdjustedScore()`, `ComputeEnvironmentalAdjustedScore()`, `BuildVector()`. Follow FIRST spec v4.0 exactly for math/rounding. |
| 3 | CVSS-TESTS-190-003 | TODO | Depends on 190-002. | Policy Guild · QA Guild (`src/Policy/__Tests/StellaOps.Policy.Scoring.Tests`) | Unit tests for CVSS v4.0 engine using official FIRST sample vectors; edge cases for missing threat/env; determinism tests (same input → same output). |
| 4 | CVSS-POLICY-190-004 | TODO | Depends on 190-002. | Policy Guild (`src/Policy/StellaOps.Policy.Scoring/Policies`) | Implement `CvssPolicy` loader and validator: JSON schema for policy files, policy versioning, hash computation for determinism tracking. |
| 5 | CVSS-RECEIPT-190-005 | TODO | Depends on 190-002, 190-004. | Policy Guild (`src/Policy/StellaOps.Policy.Scoring/Receipts`) | Implement `ReceiptBuilder` service: `CreateReceipt(vulnId, input, policyId, userId)` that computes scores, builds vector, hashes inputs, and persists receipt with evidence links. |
| 6 | CVSS-DSSE-190-006 | TODO | Depends on 190-005; uses Attestor primitives. | Policy Guild · Attestor Guild (`src/Policy/StellaOps.Policy.Scoring`, `src/Attestor/StellaOps.Attestor.Envelope`) | Attach DSSE attestations to score receipts: create `stella.ops/cvssReceipt@v1` predicate type, sign receipts, store envelope references. |
| 7 | CVSS-HISTORY-190-007 | TODO | Depends on 190-005. | Policy Guild (`src/Policy/StellaOps.Policy.Scoring/History`) | Implement receipt amendment tracking: `AmendReceipt(receiptId, field, newValue, reason, ref)` with history entry creation and re-signing. |
| 8 | CVSS-CONCELIER-190-008 | TODO | Depends on 190-001; coordinate with Concelier. | Concelier Guild · Policy Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Ingest vendor-provided CVSS v4.0 vectors from advisories; parse and store as base receipts; preserve provenance. |
| 9 | CVSS-API-190-009 | TODO | Depends on 190-005, 190-007. | Policy Guild (`src/Policy/StellaOps.Policy.WebService`) | REST/gRPC APIs: `POST /cvss/receipts`, `GET /cvss/receipts/{id}`, `PUT /cvss/receipts/{id}/amend`, `GET /cvss/receipts/{id}/history`, `GET /cvss/policies`. |
| 10 | CVSS-CLI-190-010 | TODO | Depends on 190-009. | CLI Guild (`src/Cli/StellaOps.Cli`) | CLI verbs: `stella cvss score --vuln <id>`, `stella cvss show <receiptId>`, `stella cvss history <receiptId>`, `stella cvss export <receiptId> --format json|pdf`. |
| 11 | CVSS-UI-190-011 | TODO | Depends on 190-009. | UI Guild (`src/UI/StellaOps.UI`) | UI components: Score badge with CVSS-BTE label, tabbed receipt viewer (Base/Threat/Environmental/Supplemental/Evidence/Policy/History), "Recalculate with my env" button, export options. |
| 12 | CVSS-DOCS-190-012 | TODO | Depends on 190-001 through 190-011. | Docs Guild (`docs/modules/policy/cvss-v4.md`, `docs/09_API_CLI_REFERENCE.md`) | Document CVSS v4.0 scoring system: data model, policy format, API reference, CLI usage, UI guide, determinism guarantees. |
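Task 2's `ParseVector()`/`BuildVector()` pair can be sketched as below. The ordering list covers only the base metric group; threat, environmental, and supplemental metrics would extend it in the order the FIRST v4.0 vector-string grammar defines, and the scoring functions (MacroVector lookup and interpolation per the spec) are deliberately omitted here.

```python
# Base-group metric order per the FIRST CVSS v4.0 vector-string grammar.
BASE_ORDER = ["AV", "AC", "AT", "PR", "UI", "VC", "VI", "VA", "SC", "SI", "SA"]

def parse_vector(vector: str) -> dict:
    prefix, *pairs = vector.split("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"not a CVSS v4.0 vector: {prefix!r}")
    metrics = {}
    for pair in pairs:
        key, sep, value = pair.partition(":")
        if not sep or not value:
            raise ValueError(f"malformed metric: {pair!r}")
        metrics[key] = value
    return metrics

def build_vector(metrics: dict) -> str:
    # Emit metrics in spec order so identical inputs always yield the
    # same string -- a determinism requirement for score receipts.
    return "CVSS:4.0/" + "/".join(
        f"{k}:{metrics[k]}" for k in BASE_ORDER if k in metrics)

v = "CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N"
assert build_vector(parse_vector(v)) == v  # parse/build round-trips
```

A stable round-trip like this is also what makes receipt input hashes reproducible: the stored vector string never depends on caller-supplied metric order.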
## Wave Coordination
| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| W1 Foundation | Policy Guild | None | TODO | Tasks 1-4: Data model, engine, tests, policy loader. |
| W2 Receipt Pipeline | Policy Guild · Attestor Guild | W1 complete | TODO | Tasks 5-7: Receipt builder, DSSE, history. |
| W3 Integration | Concelier · Policy · CLI · UI Guilds | W2 complete | TODO | Tasks 8-11: Vendor ingest, APIs, CLI, UI. |
| W4 Documentation | Docs Guild | W3 complete | TODO | Task 12: Full documentation. |
## Interlocks
- CVSS v4.0 vectors from Concelier must preserve vendor provenance (task 8 depends on Concelier ingestion patterns).
- DSSE attestation format must align with existing `stella.ops/*` predicate catalog (coordinate with Sprint 0401 AUTH-REACH tasks).
- Score receipts should integrate with VEX decisions in Excititor for complete vulnerability context.
## Upcoming Checkpoints
- TBD: CVSS v4.0 data model review (Policy Guild).
- TBD: Engine implementation demo with FIRST test vectors (Policy Guild).
- TBD: UI wireframe review (UI Guild).
## Action Tracker
| # | Action | Owner | Due (UTC) | Status | Notes |
| --- | --- | --- | --- | --- | --- |
| 1 | Review FIRST CVSS v4.0 spec and identify implementation gaps. | Policy Guild | TBD | Open | Reference: https://www.first.org/cvss/v4-0/ |
| 2 | Draft CvssPolicy JSON schema for team review. | Policy Guild | TBD | Open | |
## Decisions & Risks
| ID | Risk | Impact | Mitigation / Owner |
| --- | --- | --- | --- |
| R1 | CVSS v4.0 spec complexity leads to implementation errors. | Incorrect scores, audit failures. | Use official FIRST test vectors; cross-check with FIRST calculator; Policy Guild. |
| R2 | Vendor advisories inconsistently provide v4.0 vectors. | Gaps in base scores; fallback to v3.1 conversion. | Implement v3.1→v4.0 heuristic mapping with explicit "converted" flag; Concelier Guild. |
| R3 | Receipt storage grows large with evidence links. | Storage costs; query performance. | Implement evidence reference deduplication; use CAS URIs; Platform Guild. |
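R3's mitigation (evidence reference deduplication via CAS URIs) amounts to content addressing: hash the payload, store it once, reference it many times. A hypothetical sketch; the `cas://sha256/` URI scheme and payload strings are illustrative, not a committed format.

```python
import hashlib

def cas_uri(content: bytes) -> str:
    """Content-addressed reference: identical evidence maps to one URI."""
    return "cas://sha256/" + hashlib.sha256(content).hexdigest()

def dedupe_evidence(items: list[bytes]) -> tuple[dict, list[str]]:
    """Return a store {uri: content} plus per-item URI references.

    Receipts keep only the URIs; duplicate payloads collapse to a single
    stored object, bounding growth as receipts accumulate.
    """
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for content in items:
        uri = cas_uri(content)
        store.setdefault(uri, content)
        refs.append(uri)
    return store, refs

evidence = [b"epss:0.97", b"kev:listed", b"epss:0.97"]  # one duplicate payload
store, refs = dedupe_evidence(evidence)
assert len(refs) == 3 and len(store) == 2
```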
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Sprint created from product advisory `25-Nov-2025 - Add CVSS v4.0 Score Receipts for Transparency.md`; 12 tasks defined across 4 waves. | Product Mgmt |

# Sprint 0208 · Experience & SDKs
## Topic & Scope
- Build a reproducible SDK generator toolchain and shared post-processing layer that stays air-gap safe.
- Ship alpha SDKs (TypeScript, Python, Go, Java) aligned to portal APIs with consistent auth/telemetry helpers.
- Connect SDK outputs to CLI and Console data providers; package offline delivery bundles with provenance.
- Evidence: updated generator pipelines, release configs, and signed artifacts across npm/PyPI/Maven/Go proxies.
- **Working directory:** `docs/implplan` (planning) with execution in `src/Sdk/StellaOps.Sdk.*`.
## Dependencies & Concurrency
- Upstream sprints: Sprint 120.A (AirGap), 130.A (Scanner), 150.A (Orchestrator), 170.A (Notifier) for API and events readiness.
- Peer/consuming sprints: SPRINT_0201_0001_0001_cli_i (CLI), SPRINT_0206_0001_0001_devportal (devportal/offline bundles), SPRINT_0209_0001_0001_ui_i (Console/UI data providers).
- Concurrency: language tracks can parallelize after SDKGEN-62-002; release tasks follow generator readiness; consumer sprints can prototype against staging SDKs once B wave exits.
## Documentation Prerequisites
- docs/README.md; docs/07_HIGH_LEVEL_ARCHITECTURE.md; docs/modules/platform/architecture-overview.md.
- docs/modules/cli/architecture.md; docs/modules/ui/architecture.md.
- API/OAS governance specs referenced by APIG0101 and portal contracts (DEVL0101) once published.
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | SDKGEN-62-001 | DONE (2025-11-24) | Toolchain, template layout, and reproducibility spec pinned. | SDK Generator Guild · `src/Sdk/StellaOps.Sdk.Generator` | Choose/pin generator toolchain, set up language template pipeline, and enforce reproducible builds. |
| 2 | SDKGEN-62-002 | DONE (2025-11-24) | Shared post-processing merged; helpers wired. | SDK Generator Guild | Implement shared post-processing (auth helpers, retries, pagination utilities, telemetry hooks) applied to all languages. |
| 3 | SDKGEN-63-001 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate Wave B TS alpha; scaffold + smoke + hash guard ready. | SDK Generator Guild | Ship TypeScript SDK alpha with ESM/CJS builds, typed errors, paginator, streaming helpers. |
| 4 | SDKGEN-63-002 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate Python alpha; scaffold + smoke + hash guard ready. | SDK Generator Guild | Ship Python SDK alpha (sync/async clients, type hints, upload/download helpers). |
| 5 | SDKGEN-63-003 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to emit Go alpha. | SDK Generator Guild | Ship Go SDK alpha with context-first API and streaming helpers. |
| 6 | SDKGEN-63-004 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to emit Java alpha. | SDK Generator Guild | Ship Java SDK alpha (builder pattern, HTTP client abstraction). |
| 7 | SDKGEN-64-001 | TODO | Depends on 63-004; map CLI surfaces to SDK calls. | SDK Generator Guild · CLI Guild | Switch CLI to consume TS or Go SDK; ensure parity. |
| 8 | SDKGEN-64-002 | TODO | Depends on 64-001; define Console data provider contracts. | SDK Generator Guild · Console Guild | Integrate SDKs into Console data providers where feasible. |
| 9 | SDKREL-63-001 | TODO | Set up signing keys/provenance; stage CI pipelines across registries. | SDK Release Guild · `src/Sdk/StellaOps.Sdk.Release` | Configure CI pipelines for npm, PyPI, Maven Central staging, and Go proxies with signing and provenance attestations. |
| 10 | SDKREL-63-002 | TODO | Requires 63-001; connect OAS diff feed. | SDK Release Guild · API Governance Guild | Integrate changelog automation pulling from OAS diffs and generator metadata. |
| 11 | SDKREL-64-001 | TODO | Wait for 63-002; design Notifications Studio channel scopes. | SDK Release Guild · Notifications Guild | Hook SDK releases into Notifications Studio with scoped announcements and RSS/Atom feeds. |
| 12 | SDKREL-64-002 | TODO | Requires 64-001; define offline bundle manifest. | SDK Release Guild · Export Center Guild | Add `devportal --offline` bundle job packaging docs, specs, SDK artifacts for air-gapped users. |
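The OAS-diff changelog automation in SDKREL-63-002 can start from a path-level diff of two OpenAPI documents, sketched below under the assumption that specs are already loaded as dicts; a production pipeline would also diff operations, parameters, and schemas before emitting changelog entries.

```python
def diff_paths(old_spec: dict, new_spec: dict) -> dict:
    """Path-level diff of two OpenAPI documents for changelog entries.

    Added/removed paths are the minimal signal worth automating first;
    sorting keeps the output deterministic across runs.
    """
    old_paths = set(old_spec.get("paths", {}))
    new_paths = set(new_spec.get("paths", {}))
    return {
        "added": sorted(new_paths - old_paths),
        "removed": sorted(old_paths - new_paths),
        "kept": sorted(old_paths & new_paths),
    }

old = {"paths": {"/scans": {}, "/reports": {}}}
new = {"paths": {"/scans": {}, "/reports": {}, "/sboms": {}}}
d = diff_paths(old, new)
assert d["added"] == ["/sboms"] and d["removed"] == []
```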
## Wave Coordination
- Single wave covering generator and release work; language tracks branch after SDKGEN-62-002.
## Wave Detail Snapshots
| Wave | Window (UTC) | Scope | Exit criteria | Owners | Status |
| --- | --- | --- | --- | --- | --- |
| A: Generator foundation | 2025-11-25 → 2025-12-02 | SDKGEN-62-001..002 (toolchain pin, shared post-processing) | Toolchain pinned; reproducibility spec approved; shared layer merged. | SDK Generator Guild | Planned |
| B: Language alphas | 2025-12-03 → 2025-12-22 | SDKGEN-63-001..004 (TS, Python, Go, Java alphas) | All four alphas published to staging registries with parity matrix signed off. | SDK Generator Guild | Planned |
| C: Release & offline | 2025-12-08 → 2025-12-29 | SDKREL-63-001..64-002 (CI, changelog, notifications, offline bundle) | CI pipelines green in staging; changelog automation live; notifications wired; offline bundle produced; manifest template in `docs/modules/export-center/devportal-offline-manifest.md` adopted. | SDK Release Guild · Export Center Guild | Planned |
## Interlocks
- API governance: APIG0101 outputs for stable schemas; required before Wave A exit.
- Portal contracts: DEVL0101 (auth/session) inform shared post-processing; consume before Wave A design review.
- Devportal/offline: SPRINT_0206_0001_0001_devportal must expose bundle manifest format for SDKREL-64-002.
- CLI adoption: SPRINT_0201_0001_0001_cli_i aligns surfaces for SDKGEN-64-001; needs Wave B artifacts.
- Console data providers: SPRINT_0209_0001_0001_ui_i depends on SDKGEN-64-002; needs parity matrix from Wave B.
- Notifications/Export: Notifications Studio and Export Center pipelines must be live before Wave C release window (tasks 1112).
## Upcoming Checkpoints
- 2025-11-25: Toolchain decision review (SDKGEN-62-001) — decide generator + template pin set.
- 2025-12-02: Shared post-processing design review (SDKGEN-62-002) — approve auth/retry/pagination/telemetry hooks.
- 2025-12-05: TS alpha staging drop (SDKGEN-63-001) — verify packaging and typed errors.
- 2025-12-15: Multi-language alpha readiness check (SDKGEN-63-002..004) — parity matrix sign-off.
- 2025-12-16: Deliver parity matrix and SDK drop to UI/Console data providers (feeds SPRINT_0209_0001_0001_ui_i).
- 2025-12-22: Release automation demo (SDKREL-63/64) — staging publishes with signatures and offline bundle.
## Action Tracker
| # | Action | Owner | Due (UTC) | Status |
| --- | --- | --- | --- | --- |
| 1 | Confirm registry signing keys and provenance workflow per language | SDK Release Guild | 2025-11-29 | Open |
| 2 | Publish SDK language support matrix to CLI/UI guilds. Evidence: `docs/modules/sdk/language-support-matrix.md`. | SDK Generator Guild | 2025-12-03 | DONE (2025-11-26) |
| 3 | Align CLI adoption scope with SPRINT_0201_0001_0001_cli_i and schedule SDK drop integration | SDK Generator Guild · CLI Guild | 2025-12-10 | Open |
| 4 | Define devportal offline bundle manifest with Export Center per SPRINT_0206_0001_0001_devportal. Evidence: `docs/modules/export-center/devportal-offline-manifest.md`. | SDK Release Guild · Export Center Guild | 2025-12-12 | DONE (2025-11-26) |
| 5 | Deliver parity matrix and SDK drop to UI data providers per SPRINT_0209_0001_0001_ui_i | SDK Generator Guild · UI Guild | 2025-12-16 | Open |
## Decisions & Risks
- Toolchain pinned (OpenAPI Generator 7.4.0, JDK 21) and recorded in repo (`TOOLCHAIN.md`, `toolchain.lock.yaml`); downstream tracks must honor lock file for determinism.
- Dependencies on upstream API/portal contracts may delay generator pinning; mitigation: align with APIG0101 / DEVL0101 milestones.
- Release automation requires registry credentials and signing infra; mitigation: reuse sovereign crypto enablement (SPRINT_0514_0001_0001_sovereign_crypto_enablement.md) practices and block releases until keys are validated.
- Offline bundle job (SDKREL-64-002) depends on Export Center artifacts; track alongside Export Center sprints.
- Shared postprocess helpers copy only when CI sets `STELLA_POSTPROCESS_ROOT` and `STELLA_POSTPROCESS_LANG`; ensure generation jobs export these to keep helpers present in artifacts.
### Risk Register
| Risk | Impact | Mitigation | Owner | Status |
| --- | --- | --- | --- | --- |
| Upstream APIs change after generator pin | Rework across four SDKs | Freeze spec version before SDKGEN-63-x; gate via API governance sign-off | SDK Generator Guild | Open |
| Registry signing not provisioned | Cannot ship to npm/PyPI/Maven/Go | Coordinate with sovereign crypto enablement; dry-run staging before prod | SDK Release Guild | Open |
| Offline bundle inputs unavailable | Air-gapped delivery slips | Pull docs/specs from devportal cache; coordinate with Export Center | SDK Release Guild | Open |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-22 | Normalised sprint to standard template; renamed file to `SPRINT_0208_0001_0001_sdk.md`; no status changes. | PM |
| 2025-11-22 | Added wave plan and dated checkpoints for generator, language alphas, and release/offline tracks. | PM |
| 2025-11-22 | Added explicit interlocks to CLI/UI/Devportal sprints and new alignment actions. | PM |
| 2025-11-22 | Added UI parity-matrix delivery action to keep data provider integration on track. | PM |
| 2025-11-24 | Pinned generator toolchain (OpenAPI Generator CLI 7.4.0, JDK 21), template layout, and reproducibility rules; captured in `src/Sdk/StellaOps.Sdk.Generator/TOOLCHAIN.md` + `toolchain.lock.yaml`. | SDK Generator Guild |
| 2025-11-24 | Started SDKGEN-62-002: added shared post-process scaffold (`postprocess/`), LF/whitespace normalizer script, and README for language hooks. | SDK Generator Guild |
| 2025-11-24 | Completed SDKGEN-62-002: postprocess now copies auth/retry/pagination/telemetry helpers for TS/Python/Go/Java, wires TS/Python exports, and adds smoke tests. | SDK Generator Guild |
| 2025-11-24 | Began SDKGEN-63-001: added TypeScript generator config (`ts/config.yaml`), deterministic driver script (`ts/generate-ts.sh`), and README; waiting on frozen OAS spec to produce alpha artifact. | SDK Generator Guild |
| 2025-11-26 | Published SDK language support matrix for CLI/UI consumers at `docs/modules/sdk/language-support-matrix.md`; Action #2 closed. | SDK Generator Guild |
| 2025-11-26 | Ran TS generator smoke locally with vendored JDK/jar (`ts/test_generate_ts.sh`); pass. Blocked until aggregate OpenAPI spec is frozen/published to generate Wave B alpha artifact. | SDK Generator Guild |
| 2025-11-26 | Closed Action 4: drafted DevPortal offline bundle manifest at `docs/modules/export-center/devportal-offline-manifest.md` to align SDKREL-64-002 with SPRINT_0206. | SDK Release Guild |
| 2025-11-26 | Added spec hash guard to TS/Python generators (`STELLA_OAS_EXPECTED_SHA256`) and emit `.oas.sha256` for provenance; updated smoke tests and READMEs. | SDK Generator Guild |
| 2025-11-26 | Scaffolded Go generator (config/script/smoke), enabled hash guard + helper copy via postprocess, and added `.oas.sha256` emission; waiting on frozen OAS for Wave B alpha. | SDK Generator Guild |
| 2025-11-26 | Scaffolded Java generator (config/script/smoke), added postprocess hook copy into `org.stellaops.sdk`, hash guard + `.oas.sha256`, and vendored-JDK fallback; waiting on frozen OAS for Wave B alpha. | SDK Generator Guild |
| 2025-11-26 | Marked SDKGEN-63-003/004 BLOCKED pending frozen aggregate OAS digest; scaffolds and smoke tests are ready. | SDK Generator Guild |
| 2025-11-26 | Added unified SDK smoke npm scripts (`sdk:smoke:*`, `sdk:smoke`) covering TS/Python/Go/Java to keep pre-alpha checks consistent. | SDK Generator Guild |
| 2025-11-26 | Added CI workflow `.gitea/workflows/sdk-generator.yml` to run `npm run sdk:smoke` on SDK generator changes (TS/Python/Go/Java). | SDK Generator Guild |
| 2025-11-27 | Marked SDKGEN-63-001/002 BLOCKED pending frozen aggregate OAS digest; scaffolds and smokes remain ready. | SDK Generator Guild |
| 2025-11-24 | Added fixture OpenAPI (`ts/fixtures/ping.yaml`) and smoke test (`ts/test_generate_ts.sh`) to validate TypeScript pipeline locally; skips if generator jar absent. | SDK Generator Guild |
| 2025-11-24 | Vendored `tools/openapi-generator-cli-7.4.0.jar` and `tools/jdk-21.0.1.tar.gz` with SHA recorded in `toolchain.lock.yaml`; adjusted TS script to ensure helper copy post-run and verified generation against fixture. | SDK Generator Guild |
| 2025-11-24 | Ran `ts/test_generate_ts.sh` with vendored JDK/JAR and fixture spec; smoke test passes (helpers present). | SDK Generator Guild |
| 2025-11-24 | Added deterministic TS packaging templates (package.json, tsconfig base/cjs/esm, README, sdk-error) copied via postprocess; updated helper exports and lock hash. | SDK Generator Guild |
| 2025-11-24 | Began SDKGEN-63-002: added Python generator config/script/README + smoke test (reuses ping fixture); awaiting frozen OAS to emit alpha. | SDK Generator Guild |
# Sprint 0208 · Experience & SDKs
## Topic & Scope
- Build a reproducible SDK generator toolchain and shared post-processing layer that stays air-gap safe.
- Ship alpha SDKs (TypeScript, Python, Go, Java) aligned to portal APIs with consistent auth/telemetry helpers.
- Connect SDK outputs to CLI and Console data providers; package offline delivery bundles with provenance.
- Evidence: updated generator pipelines, release configs, and signed artifacts across npm/PyPI/Maven/Go proxies.
- **Working directory:** `docs/implplan` (planning) with execution in `src/Sdk/StellaOps.Sdk.*`.
## Dependencies & Concurrency
- Upstream sprints: Sprint 120.A (AirGap), 130.A (Scanner), 150.A (Orchestrator), 170.A (Notifier) for API and events readiness.
- Peer/consuming sprints: SPRINT_0201_0001_0001_cli_i (CLI), SPRINT_0206_0001_0001_devportal (devportal/offline bundles), SPRINT_0209_0001_0001_ui_i (Console/UI data providers).
- Concurrency: language tracks can parallelize after SDKGEN-62-002; release tasks follow generator readiness; consumer sprints can prototype against staging SDKs once Wave B exits.
## Documentation Prerequisites
- docs/README.md; docs/07_HIGH_LEVEL_ARCHITECTURE.md; docs/modules/platform/architecture-overview.md.
- docs/modules/cli/architecture.md; docs/modules/ui/architecture.md.
- API/OAS governance specs referenced by APIG0101 and portal contracts (DEVL0101) once published.
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | SDKGEN-62-001 | DONE (2025-11-24) | Toolchain, template layout, and reproducibility spec pinned. | SDK Generator Guild · `src/Sdk/StellaOps.Sdk.Generator` | Choose/pin generator toolchain, set up language template pipeline, and enforce reproducible builds. |
| 2 | SDKGEN-62-002 | DONE (2025-11-24) | Shared post-processing merged; helpers wired. | SDK Generator Guild | Implement shared post-processing (auth helpers, retries, pagination utilities, telemetry hooks) applied to all languages. |
| 3 | SDKGEN-63-001 | DOING | Shared layer ready; TS generator script, fixture, packaging templates, smoke test, and hash guard in place; awaiting frozen OAS to generate alpha. | SDK Generator Guild | Ship TypeScript SDK alpha with ESM/CJS builds, typed errors, paginator, streaming helpers. |
| 4 | SDKGEN-63-002 | DOING | Scaffold, smoke test, and hash guard in place; awaiting frozen OAS to generate alpha. | SDK Generator Guild | Ship Python SDK alpha (sync/async clients, type hints, upload/download helpers). |
| 5 | SDKGEN-63-003 | DOING | Scaffold added (config, driver script, smoke test, README); awaiting frozen OAS to generate alpha. | SDK Generator Guild | Ship Go SDK alpha with context-first API and streaming helpers. |
| 6 | SDKGEN-63-004 | DOING | Scaffold added (config, driver script, smoke test, README); OkHttp selected as HTTP client; awaiting frozen OAS to generate alpha. | SDK Generator Guild | Ship Java SDK alpha (builder pattern, HTTP client abstraction). |
| 7 | SDKGEN-64-001 | TODO | Depends on 63-004; map CLI surfaces to SDK calls. | SDK Generator Guild · CLI Guild | Switch CLI to consume TS or Go SDK; ensure parity. |
| 8 | SDKGEN-64-002 | TODO | Depends on 64-001; define Console data provider contracts. | SDK Generator Guild · Console Guild | Integrate SDKs into Console data providers where feasible. |
| 9 | SDKREL-63-001 | TODO | Set up signing keys/provenance; stage CI pipelines across registries. | SDK Release Guild · `src/Sdk/StellaOps.Sdk.Release` | Configure CI pipelines for npm, PyPI, Maven Central staging, and Go proxies with signing and provenance attestations. |
| 10 | SDKREL-63-002 | TODO | Requires 63-001; connect OAS diff feed. | SDK Release Guild · API Governance Guild | Integrate changelog automation pulling from OAS diffs and generator metadata. |
| 11 | SDKREL-64-001 | TODO | Wait for 63-002; design Notifications Studio channel scopes. | SDK Release Guild · Notifications Guild | Hook SDK releases into Notifications Studio with scoped announcements and RSS/Atom feeds. |
| 12 | SDKREL-64-002 | TODO | Requires 64-001; define offline bundle manifest. | SDK Release Guild · Export Center Guild | Add `devportal --offline` bundle job packaging docs, specs, SDK artifacts for air-gapped users. |
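The shared post-processing layer (task 2) wires auth, retry, pagination, and telemetry helpers into every generated SDK. A minimal sketch of what the retry helper could look like — names (`RetryOptions`, `withRetry`) are illustrative, not the actual `postprocess/` exports:

```typescript
// Hypothetical sketch of a shared retry helper in the spirit of SDKGEN-62-002.
export interface RetryOptions {
  maxAttempts: number;                 // total tries, including the first call
  baseDelayMs: number;                 // delay before the first retry
  retryOn: (err: unknown) => boolean;  // which errors are worth retrying
}

export async function withRetry<T>(
  fn: () => Promise<T>,
  opts: RetryOptions,
): Promise<T> {
  let lastErr: unknown;
  for (let attempt = 1; attempt <= opts.maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Give up on the final attempt or on non-retryable errors.
      if (attempt === opts.maxAttempts || !opts.retryOn(err)) throw err;
      // Exponential backoff: base * 2^(attempt - 1).
      const delay = opts.baseDelayMs * 2 ** (attempt - 1);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastErr; // unreachable; satisfies the compiler
}
```

Keeping the helper in one shared layer is what lets all four language tracks expose identical retry semantics after branching.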
## Wave Coordination
- Single wave covering generator and release work; language tracks branch after SDKGEN-62-002.
## Wave Detail Snapshots
| Wave | Window (UTC) | Scope | Exit criteria | Owners | Status |
| --- | --- | --- | --- | --- | --- |
| A: Generator foundation | 2025-11-25 → 2025-12-02 | SDKGEN-62-001..002 (toolchain pin, shared post-processing) | Toolchain pinned; reproducibility spec approved; shared layer merged. | SDK Generator Guild | Complete (2025-11-24) |
| B: Language alphas | 2025-12-03 → 2025-12-22 | SDKGEN-63-001..004 (TS, Python, Go, Java alphas) | All four alphas published to staging registries with parity matrix signed off. | SDK Generator Guild | Planned |
| C: Release & offline | 2025-12-08 → 2025-12-29 | SDKREL-63-001..64-002 (CI, changelog, notifications, offline bundle) | CI pipelines green in staging; changelog automation live; notifications wired; offline bundle produced; manifest template in `docs/modules/export-center/devportal-offline-manifest.md` adopted. | SDK Release Guild · Export Center Guild | Planned |
## Interlocks
- API governance: APIG0101 outputs for stable schemas; required before Wave A exit.
- Portal contracts: DEVL0101 (auth/session) inform shared post-processing; consume before Wave A design review.
- Devportal/offline: SPRINT_0206_0001_0001_devportal must expose bundle manifest format for SDKREL-64-002.
- CLI adoption: SPRINT_0201_0001_0001_cli_i aligns surfaces for SDKGEN-64-001; needs Wave B artifacts.
- Console data providers: SPRINT_0209_0001_0001_ui_i depends on SDKGEN-64-002; needs parity matrix from Wave B.
- Notifications/Export: Notifications Studio and Export Center pipelines must be live before Wave C release window (tasks 11–12).
## Upcoming Checkpoints
- 2025-11-25: Toolchain decision review (SDKGEN-62-001) — decide generator + template pin set.
- 2025-12-02: Shared post-processing design review (SDKGEN-62-002) — approve auth/retry/pagination/telemetry hooks.
- 2025-12-05: TS alpha staging drop (SDKGEN-63-001) — verify packaging and typed errors.
- 2025-12-15: Multi-language alpha readiness check (SDKGEN-63-002..004) — parity matrix sign-off.
- 2025-12-16: Deliver parity matrix and SDK drop to UI/Console data providers (feeds SPRINT_0209_0001_0001_ui_i).
- 2025-12-22: Release automation demo (SDKREL-63/64) — staging publishes with signatures and offline bundle.
## Action Tracker
| # | Action | Owner | Due (UTC) | Status |
| --- | --- | --- | --- | --- |
| 1 | Confirm registry signing keys and provenance workflow per language | SDK Release Guild | 2025-11-29 | Open |
| 2 | Publish SDK language support matrix to CLI/UI guilds. Evidence: `docs/modules/sdk/language-support-matrix.md`. | SDK Generator Guild | 2025-12-03 | DONE (2025-11-26) |
| 3 | Align CLI adoption scope with SPRINT_0201_0001_0001_cli_i and schedule SDK drop integration | SDK Generator Guild · CLI Guild | 2025-12-10 | Open |
| 4 | Define devportal offline bundle manifest with Export Center per SPRINT_0206_0001_0001_devportal. Evidence: `docs/modules/export-center/devportal-offline-manifest.md`. | SDK Release Guild · Export Center Guild | 2025-12-12 | DONE (2025-11-26) |
| 5 | Deliver parity matrix and SDK drop to UI data providers per SPRINT_0209_0001_0001_ui_i | SDK Generator Guild · UI Guild | 2025-12-16 | Open |
## Decisions & Risks
- Toolchain pinned (OpenAPI Generator 7.4.0, JDK 21) and recorded in repo (`TOOLCHAIN.md`, `toolchain.lock.yaml`); downstream tracks must honor lock file for determinism.
- Dependencies on upstream API/portal contracts may delay generator pinning; mitigation: align with APIG0101 / DEVL0101 milestones.
- Release automation requires registry credentials and signing infra; mitigation: reuse sovereign crypto enablement (SPRINT_0514_0001_0001_sovereign_crypto_enablement.md) practices and block releases until keys are validated.
- Offline bundle job (SDKREL-64-002) depends on Export Center artifacts; track alongside Export Center sprints.
- Shared post-process helpers are copied only when CI sets `STELLA_POSTPROCESS_ROOT` and `STELLA_POSTPROCESS_LANG`; generation jobs must export both so the helpers remain present in artifacts.
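A sketch of the guard this decision implies, as it might appear at the top of a post-process entry point (the variable names are real; the function shape and file layout are assumptions):

```typescript
// Sketch: resolve the helper-copy target, or bail out when CI did not
// export STELLA_POSTPROCESS_ROOT / STELLA_POSTPROCESS_LANG (see SDKGEN-62-002).
export interface PostprocessTarget {
  root: string; // generated-SDK output directory to copy helpers into
  lang: string; // language track: "ts" | "python" | "go" | "java"
}

export function resolvePostprocessTarget(
  env: Record<string, string | undefined>,
): PostprocessTarget | null {
  const root = env["STELLA_POSTPROCESS_ROOT"];
  const lang = env["STELLA_POSTPROCESS_LANG"];
  if (!root || !lang) {
    // Without both variables the generation job silently ships no helpers,
    // which is exactly the failure mode the decision above warns about.
    return null;
  }
  return { root, lang };
}
```

A caller would pass `process.env` and fail the CI job loudly when `null` comes back, rather than producing an artifact missing its helpers.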
### Risk Register
| Risk | Impact | Mitigation | Owner | Status |
| --- | --- | --- | --- | --- |
| Upstream APIs change after generator pin | Rework across four SDKs | Freeze spec version before SDKGEN-63-x; gate via API governance sign-off | SDK Generator Guild | Open |
| Registry signing not provisioned | Cannot ship to npm/PyPI/Maven/Go | Coordinate with sovereign crypto enablement; dry-run staging before prod | SDK Release Guild | Open |
| Offline bundle inputs unavailable | Air-gapped delivery slips | Pull docs/specs from devportal cache; coordinate with Export Center | SDK Release Guild | Open |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-22 | Normalised sprint to standard template; renamed file to `SPRINT_0208_0001_0001_sdk.md`; no status changes. | PM |
| 2025-11-22 | Added wave plan and dated checkpoints for generator, language alphas, and release/offline tracks. | PM |
| 2025-11-22 | Added explicit interlocks to CLI/UI/Devportal sprints and new alignment actions. | PM |
| 2025-11-22 | Added UI parity-matrix delivery action to keep data provider integration on track. | PM |
| 2025-11-24 | Pinned generator toolchain (OpenAPI Generator CLI 7.4.0, JDK 21), template layout, and reproducibility rules; captured in `src/Sdk/StellaOps.Sdk.Generator/TOOLCHAIN.md` + `toolchain.lock.yaml`. | SDK Generator Guild |
| 2025-11-24 | Started SDKGEN-62-002: added shared post-process scaffold (`postprocess/`), LF/whitespace normalizer script, and README for language hooks. | SDK Generator Guild |
| 2025-11-24 | Completed SDKGEN-62-002: postprocess now copies auth/retry/pagination/telemetry helpers for TS/Python/Go/Java, wires TS/Python exports, and adds smoke tests. | SDK Generator Guild |
| 2025-11-24 | Began SDKGEN-63-001: added TypeScript generator config (`ts/config.yaml`), deterministic driver script (`ts/generate-ts.sh`), and README; waiting on frozen OAS spec to produce alpha artifact. | SDK Generator Guild |
| 2025-11-24 | Added fixture OpenAPI (`ts/fixtures/ping.yaml`) and smoke test (`ts/test_generate_ts.sh`) to validate TypeScript pipeline locally; skips if generator jar absent. | SDK Generator Guild |
| 2025-11-24 | Vendored `tools/openapi-generator-cli-7.4.0.jar` and `tools/jdk-21.0.1.tar.gz` with SHA recorded in `toolchain.lock.yaml`; adjusted TS script to ensure helper copy post-run and verified generation against fixture. | SDK Generator Guild |
| 2025-11-24 | Ran `ts/test_generate_ts.sh` with vendored JDK/JAR and fixture spec; smoke test passes (helpers present). | SDK Generator Guild |
| 2025-11-24 | Added deterministic TS packaging templates (package.json, tsconfig base/cjs/esm, README, sdk-error) copied via postprocess; updated helper exports and lock hash. | SDK Generator Guild |
| 2025-11-24 | Began SDKGEN-63-002: added Python generator config/script/README + smoke test (reuses ping fixture); awaiting frozen OAS to emit alpha. | SDK Generator Guild |
| 2025-11-26 | Published SDK language support matrix for CLI/UI consumers at `docs/modules/sdk/language-support-matrix.md`; Action #2 closed. | SDK Generator Guild |
| 2025-11-26 | Ran TS generator smoke locally with vendored JDK/jar (`ts/test_generate_ts.sh`); pass. Blocked until aggregate OpenAPI spec is frozen/published to generate Wave B alpha artifact. | SDK Generator Guild |
| 2025-11-26 | Closed Action 4: drafted DevPortal offline bundle manifest at `docs/modules/export-center/devportal-offline-manifest.md` to align SDKREL-64-002 with SPRINT_0206. | SDK Release Guild |
| 2025-11-26 | Added spec hash guard to TS/Python generators (`STELLA_OAS_EXPECTED_SHA256`) and emit `.oas.sha256` for provenance; updated smoke tests and READMEs. | SDK Generator Guild |
| 2025-11-26 | Scaffolded Go generator (config/script/smoke), enabled hash guard + helper copy via postprocess, and added `.oas.sha256` emission; waiting on frozen OAS for Wave B alpha. | SDK Generator Guild |
| 2025-11-26 | Scaffolded Java generator (config/script/smoke), added postprocess hook copy into `org.stellaops.sdk`, hash guard + `.oas.sha256`, and vendored-JDK fallback; waiting on frozen OAS for Wave B alpha. | SDK Generator Guild |
| 2025-11-26 | Marked SDKGEN-63-003/004 BLOCKED pending frozen aggregate OAS digest; scaffolds and smoke tests are ready. | SDK Generator Guild |
| 2025-11-26 | Added unified SDK smoke npm scripts (`sdk:smoke:*`, `sdk:smoke`) covering TS/Python/Go/Java to keep pre-alpha checks consistent. | SDK Generator Guild |
| 2025-11-26 | Added CI workflow `.gitea/workflows/sdk-generator.yml` to run `npm run sdk:smoke` on SDK generator changes (TS/Python/Go/Java). | SDK Generator Guild |
| 2025-11-27 | Marked SDKGEN-63-001/002 BLOCKED pending frozen aggregate OAS digest; scaffolds and smokes remain ready. | SDK Generator Guild |
| 2025-11-27 | Began SDKGEN-63-003: added Go SDK generator scaffold with config (`go/config.yaml`), driver script (`go/generate-go.sh`), smoke test (`go/test_generate_go.sh`), and README; context-first API design documented; awaiting frozen OAS to generate alpha. | SDK Generator Guild |
| 2025-11-27 | Began SDKGEN-63-004: added Java SDK generator scaffold with config (`java/config.yaml`), driver script (`java/generate-java.sh`), smoke test (`java/test_generate_java.sh`), and README; OkHttp + Gson selected as HTTP client/serialization; builder pattern documented; awaiting frozen OAS to generate alpha. | SDK Generator Guild |

# Sprint 0209.0001.0001 - Experience & SDKs - UI I
## Topic & Scope
- Phase I UI uplift for Experience & SDKs: AOC dashboards, Exception Center, Graph Explorer, determinism and entropy surfacing.
- Keep UI aligned with new scopes, policy gating, and determinism evidence while preserving accessibility and performance baselines.
- Active items only; completed/historic work lives in `docs/implplan/archived/tasks.md` (updated 2025-11-08).
- **Working directory:** `src/UI/StellaOps.UI`.
## Dependencies & Concurrency
- Upstream sprints: 120.A AirGap, 130.A Scanner, 150.A Orchestrator, 170.A Notifier.
- SDK inputs: SPRINT_0208_0001_0001_sdk Wave B parity matrix and SDKGEN-64-002 outputs feed Console data providers and scope exports.
- Parallel tracks: UI II (Sprint 0210) and UI III (Sprint 0211) can run concurrently if shared components remain backward compatible.
- Blockers to flag: Graph scope exports (`graph:*`), Policy Engine determinism schema, Scanner entropy/determinism evidence contracts.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/ui/architecture.md`
- `docs/modules/ui/README.md`
- `docs/modules/ui/implementation_plan.md`
- `docs/modules/scanner/deterministic-sbom-compose.md`
- `docs/modules/scanner/entropy.md`
- `docs/modules/graph/architecture.md`
- `docs/15_UI_GUIDE.md`
- `docs/18_CODING_STANDARDS.md`
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | UI-AOC-19-001 | DONE | Align tiles with AOC service metrics | UI Guild (src/Web/StellaOps.Web) | Add Sources dashboard tiles showing AOC pass/fail, recent violation codes, and ingest throughput per tenant. |
| 2 | UI-AOC-19-002 | DONE | UI-AOC-19-001 | UI Guild (src/Web/StellaOps.Web) | Implement violation drill-down view highlighting offending document fields and provenance metadata. |
| 3 | UI-AOC-19-003 | DONE | UI-AOC-19-002 | UI Guild (src/Web/StellaOps.Web) | Add "Verify last 24h" action triggering AOC verifier endpoint and surfacing CLI parity guidance. |
| 4 | UI-EXC-25-001 | DONE | Tests pending on clean CI runner | UI Guild; Governance Guild (src/Web/StellaOps.Web) | Build Exception Center (list + kanban) with filters, sorting, workflow transitions, and audit views. |
| 5 | UI-EXC-25-002 | DONE | UI-EXC-25-001 | UI Guild (src/Web/StellaOps.Web) | Implement exception creation wizard with scope preview, justification templates, timebox guardrails. |
| 6 | UI-EXC-25-003 | DONE | UI-EXC-25-002 | UI Guild (src/Web/StellaOps.Web) | Add inline exception drafting/proposing from Vulnerability Explorer and Graph detail panels with live simulation. |
| 7 | UI-EXC-25-004 | DONE | UI-EXC-25-003 | UI Guild (src/Web/StellaOps.Web) | Surface exception badges, countdown timers, and explain integration across Graph/Vuln Explorer and policy views. |
| 8 | UI-EXC-25-005 | DONE | UI-EXC-25-004 | UI Guild; Accessibility Guild (src/Web/StellaOps.Web) | Add keyboard shortcuts (`x`,`a`,`r`) and ensure screen-reader messaging for approvals/revocations. |
| 9 | UI-GRAPH-21-001 | DONE | Shared `StellaOpsScopes` exports ready | UI Guild (src/Web/StellaOps.Web) | Align Graph Explorer auth configuration with new `graph:*` scopes; consume scope identifiers from shared `StellaOpsScopes` exports (via generated SDK/config) instead of hard-coded strings. |
| 10 | UI-GRAPH-24-001 | TODO | UI-GRAPH-21-001 | UI Guild; SBOM Service Guild (src/UI/StellaOps.UI) | Build Graph Explorer canvas with layered/radial layouts, virtualization, zoom/pan, and scope toggles; initial render <1.5s for sample asset. |
| 11 | UI-GRAPH-24-002 | TODO | UI-GRAPH-24-001 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Implement overlays (Policy, Evidence, License, Exposure), simulation toggle, path view, and SBOM diff/time-travel with accessible tooltips/AOC indicators. |
| 12 | UI-GRAPH-24-003 | TODO | UI-GRAPH-24-002 | UI Guild (src/UI/StellaOps.UI) | Deliver filters/search panel with facets, saved views, permalinks, and share modal. |
| 13 | UI-GRAPH-24-004 | TODO | UI-GRAPH-24-003 | UI Guild (src/UI/StellaOps.UI) | Add side panels (Details, What-if, History) with upgrade simulation integration and SBOM diff viewer. |
| 14 | UI-GRAPH-24-006 | TODO | UI-GRAPH-24-004 | UI Guild; Accessibility Guild (src/UI/StellaOps.UI) | Ensure accessibility (keyboard nav, screen reader labels, contrast), add hotkeys (`f`,`e`,`.`), and analytics instrumentation. |
| 15 | UI-LNM-22-001 | DONE | - | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Build Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links (DOCS-LNM-22-005 awaiting UI screenshots/flows). |
| 16 | UI-SBOM-DET-01 | DONE | - | UI Guild (src/Web/StellaOps.Web) | Add a "Determinism" badge plus drill-down surfacing fragment hashes, `_composition.json`, and Merkle root consistency when viewing scan details. |
| 17 | UI-POLICY-DET-01 | DONE | UI-SBOM-DET-01 | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Wire policy gate indicators and remediation hints into Release/Policy flows, blocking publishes when determinism checks fail; coordinate with Policy Engine schema updates. |
| 18 | UI-ENTROPY-40-001 | DONE | - | UI Guild (src/Web/StellaOps.Web) | Visualise entropy analysis per image (layer donut, file heatmaps, "Why risky?" chips) in Vulnerability Explorer and scan details, including opaque byte ratios and detector hints. |
| 19 | UI-ENTROPY-40-002 | DONE | UI-ENTROPY-40-001 | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Add policy banners/tooltips explaining entropy penalties (block/warn thresholds, mitigation steps) and link to raw `entropy.report.json` evidence downloads. |
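The scope surface UI-GRAPH-21-001 consumes (typed constants plus `hasScope`-style checks) can be sketched as follows. The helper names come from the execution log; the constant values and exact signatures are illustrative and will be superseded by the generated SDK exports from SPRINT_0208_0001_0001_sdk:

```typescript
// Illustrative subset of the stub `StellaOpsScopes` surface (UI-GRAPH-21-001).
// Scope string values are assumptions, not the shipped identifiers.
export const StellaOpsScopes = {
  GRAPH_READ: "graph:read",
  GRAPH_WRITE: "graph:write",
  GRAPH_EXPORT: "graph:export",
  GRAPH_SIMULATE: "graph:simulate",
} as const;

export type Scope = (typeof StellaOpsScopes)[keyof typeof StellaOpsScopes];

// Helper checks mirroring the described hasScope/hasAllScopes/hasAnyScope trio.
export function hasScope(granted: readonly string[], scope: Scope): boolean {
  return granted.includes(scope);
}

export function hasAllScopes(granted: readonly string[], scopes: readonly Scope[]): boolean {
  return scopes.every((s) => granted.includes(s));
}

export function hasAnyScope(granted: readonly string[], scopes: readonly Scope[]): boolean {
  return scopes.some((s) => granted.includes(s));
}
```

Components then derive permission signals (e.g. `canViewGraph`) from these checks instead of hard-coded strings, which is what makes the later swap to generated SDK exports mechanical.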
## Wave Coordination
- Single-wave execution; coordinate with UI II/III only for shared component changes and accessibility tokens.
## Wave Detail Snapshots
- Not applicable (single wave).
## Interlocks
- SDK generation (SPRINT_0208_0001_0001_sdk): parity matrix + SDKGEN-64-002 outputs feed Console data providers and scope exports for UI-GRAPH-* tasks.
- Graph Explorer scope exports and SDK generation (`graph:*`).
- Policy Engine determinism and exception schemas for indicators/banners.
- Scanner entropy and determinism evidence formats for UI-ENTROPY-* and UI-SBOM-DET-01.
- AOC verifier endpoint parity for UI-AOC-19-003.
## Upcoming Checkpoints
- 2025-11-29 15:00 UTC - UI/Graph scopes handoff review (owners: UI Guild, Graph owner).
- 2025-12-04 16:00 UTC - Policy determinism UI enablement go/no-go (owners: UI Guild, Policy Guild).
## Action Tracker
| # | Action | Owner | Due | Status |
| --- | --- | --- | --- | --- |
| 1 | Confirm `StellaOpsScopes` export availability for UI-GRAPH-21-001 | UI Guild | 2025-11-29 | TODO |
| 2 | Align Policy Engine determinism schema changes for UI-POLICY-DET-01 | Policy Guild | 2025-12-03 | TODO |
| 3 | Deliver entropy evidence fixture snapshot for UI-ENTROPY-40-001 | Scanner Guild | 2025-11-28 | TODO |
| 4 | Provide AOC verifier endpoint parity notes for UI-AOC-19-003 | Notifier Guild | 2025-11-27 | TODO |
| 5 | Receive SDK parity matrix (Wave B, SPRINT_0208_0001_0001_sdk) to unblock Console data providers and scope exports | UI Guild · SDK Generator Guild | 2025-12-16 | TODO |
## Decisions & Risks
| Risk | Impact | Mitigation / Next Step |
| --- | --- | --- |
| Graph scope exports slip | Blocks UI-GRAPH-21-001 -> UI-GRAPH-24-006 chain | Track via Action #1; stub scopes via generated SDK if needed. |
| Policy determinism schema changes late | UI-POLICY-DET-01 cannot ship with gates | Coordinate with Policy Engine owners (Action #2) and keep UI feature-flagged. |
| Entropy evidence format changes | Rework for UI-ENTROPY-* views | Lock to `docs/modules/scanner/entropy.md`; add contract test fixtures before UI wiring. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | UI-GRAPH-21-001: Created stub `StellaOpsScopes` exports and integrated auth configuration into Graph Explorer.<br><br>Created `scopes.ts` with: typed scope constants (`GRAPH_READ`, `GRAPH_WRITE`, `GRAPH_ADMIN`, `GRAPH_EXPORT`, `GRAPH_SIMULATE` and scopes for SBOM, Scanner, Policy, Exception, Release, AOC, Admin domains), scope groupings (`GRAPH_VIEWER`, `GRAPH_EDITOR`, `GRAPH_ADMIN`, `RELEASE_MANAGER`, `SECURITY_ADMIN`), human-readable labels, and helper functions (`hasScope`, `hasAllScopes`, `hasAnyScope`).<br><br>Created `auth.service.ts` with `AuthService` interface and `MockAuthService` implementation providing: user info with tenant context, scope-based permission methods (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`).<br><br>Integrated into `GraphExplorerComponent` via `AUTH_SERVICE` injection token: added computed signals for scope-based permissions (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`, `canCreateException`), current user info, and user scopes list.<br><br>Stub implementation allows Graph Explorer development to proceed; will be replaced by generated SDK exports from SPRINT_0208_0001_0001_sdk.<br><br>Files added: `src/app/core/auth/scopes.ts`, `src/app/core/auth/auth.service.ts`, `src/app/core/auth/index.ts`. Files updated: `graph-explorer.component.ts`. | UI Guild |
| 2025-11-27 | UI-AOC-19-001/002/003: Implemented Sources dashboard with AOC metrics tiles, violation drill-down, and "Verify last 24h" action.<br><br>Created domain models (`aoc.models.ts`) for AocDashboardSummary, AocPassFailSummary, AocViolationCode, IngestThroughput, AocSource, AocCheckResult, VerificationRequest, ViolationDetail, OffendingField, and ProvenanceMetadata.<br><br>Created mock API service (`aoc.client.ts`) with fixtures showing pass/fail metrics, 5 violation codes (AOC-001 through AOC-020), 4 tenant throughput records, 4 sources (registry, pipeline, manual), and sample check results.<br><br>Built `AocDashboardComponent` (`/sources` route) with 3 tiles: (1) Pass/Fail tile with large pass rate percentage, trend indicator (improving/stable/degrading), mini 7-day chart, passed/failed/pending counts; (2) Recent Violations tile with severity badges, violation codes, names, counts, and modal detail view; (3) Ingest Throughput tile with total documents/bytes and per-tenant breakdown table.<br><br>Added Sources section showing source cards with type icons, pass rates, recent violation chips, and last check time. Implemented "Verify Last 24h" button triggering verification endpoint with progress feedback and CLI parity command display (`stella aoc verify --since 24h --output json`).<br><br>Created `ViolationDetailComponent` (`/sources/violations/:code` route) showing all occurrences of a violation code with: offending fields list (JSON path, expected vs actual values, reason), provenance metadata (source type/URI, build ID, commit SHA, pipeline URL), and suggested fix.<br><br>Files added: `src/app/core/api/aoc.{models,client}.ts`, `src/app/features/sources/aoc-dashboard.component.{ts,html,scss}`, `violation-detail.component.ts`, `index.ts`. Routes registered at `/sources` and `/sources/violations/:code`. | UI Guild |
| 2025-11-27 | UI-POLICY-DET-01: Implemented Release flow with policy gate indicators and remediation hints for determinism blocking. Created domain models (`release.models.ts`) for Release, ReleaseArtifact, PolicyEvaluation, PolicyGateResult, RemediationHint, RemediationStep, and DeterminismFeatureFlags. Created mock API service (`release.client.ts`) with fixtures for passing/blocked/mixed releases showing determinism gate scenarios. Built `ReleaseFlowComponent` (`/releases` route) with list/detail views: list shows release cards with gate status pips and blocking indicators; detail view shows artifact tabs, policy gate evaluations, determinism evidence (Merkle root, fragment verification count, failed layers), and publish/bypass actions. Created `PolicyGateIndicatorComponent` with expandable gate details, status icons, blocking badges, and feature flag info display. Created `RemediationHintsComponent` with severity badges, estimated effort, numbered remediation steps with CLI commands (copy-to-clipboard), documentation links, automated action buttons, and exception request option. Feature-flagged via `DeterminismFeatureFlags` (blockOnFailure, warnOnly, bypassRoles). Bypass modal allows requesting exceptions with justification. Files added: `src/app/core/api/release.{models,client}.ts`, `src/app/features/releases/release-flow.component.{ts,html,scss}`, `policy-gate-indicator.component.ts`, `remediation-hints.component.ts`, `index.ts`. Routes registered at `/releases` and `/releases/:releaseId`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-002: Implemented entropy policy banner with threshold explanations and mitigation steps. Created `EntropyPolicyBannerComponent` showing: pass/warn/block decision based on configurable thresholds (default block at 15% image opaque ratio, warn at 30% file opaque ratio), detailed reasons for decision, recommended mitigations (provide provenance, unpack binaries, include debug symbols), current vs threshold comparisons, expandable details with suppression options info, and tooltip explaining entropy concepts. Banner auto-evaluates entropy evidence and displays appropriate styling (green/yellow/red). Includes download link to `entropy.report.json` for offline audits. Integrated into scan-detail-page above entropy panel. Files updated: `scan-detail-page.component.{ts,html}`. Files added: `entropy-policy-banner.component.ts`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-001: Implemented entropy visualization with layer donut chart, file heatmaps, and "Why risky?" chips. Extended `scanner.models.ts` with `EntropyEvidence`, `EntropyReport`, `EntropyLayerSummaryReport`, `EntropyFile`, `EntropyWindow`, and `EntropyLayerSummary` interfaces. Created `EntropyPanelComponent` with 3 views (Summary, Layers, Files): Summary shows layer donut chart with opaque ratio distribution, risk indicator chips (packed, no-symbols, stripped, UPX packer detection), entropy penalty and opaque ratio stats. Layers view shows per-layer bar charts with opaque bytes and indicators. Files view shows expandable file cards with entropy heatmaps (green-to-red gradient), file flags, and high-entropy window tables. Added mock entropy data to scan fixtures (low-risk and high-risk scenarios). Integrated panel into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `entropy-panel.component.ts`. | UI Guild |
| 2025-11-27 | UI-SBOM-DET-01: Implemented Determinism badge with drill-down view surfacing fragment hashes, `_composition.json`, and Merkle root consistency. Extended `scanner.models.ts` with `DeterminismEvidence`, `CompositionManifest`, and `FragmentAttestation` interfaces. Created `DeterminismBadgeComponent` with expandable details showing: Merkle root with consistency status, content hash, composition manifest URI with fragment count, fragment attestations list with DSSE verification status per layer, and Stella properties (`stellaops:stella.contentHash`, `stellaops:composition.manifest`, `stellaops:merkle.root`). Added mock determinism data to scan fixtures (verified and failed scenarios). Integrated badge into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `determinism-badge.component.ts`. | UI Guild |
| 2025-11-27 | UI-LNM-22-001: Implemented Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links. Created domain models (`evidence.models.ts`) for Observation, Linkset, PolicyEvidence, AocChainEntry with SOURCE_INFO metadata. Created mock API service (`evidence.client.ts`) with detailed Log4Shell (CVE-2021-44228) example data from ghsa/nvd/osv sources. Built `EvidencePanelComponent` with 4 tabs (Observations, Linkset, Policy, AOC Chain), side-by-side/stacked observation view toggle, conflict banner with expandable details, severity badges, provenance metadata display, and raw JSON download. Added `EvidencePageComponent` wrapper for direct routing with loading/error states. Files added: `src/app/core/api/evidence.{models,client}.ts`, `src/app/features/evidence/evidence-panel.component.{ts,html,scss}`, `evidence-page.component.ts`, `index.ts`. Route registered at `/evidence/:advisoryId`. | UI Guild |
| 2025-11-26 | UI-EXC-25-005: Implemented keyboard shortcuts (X=create, A=approve, R=reject, Esc=close) and screen-reader messaging for Exception Center. Added `@HostListener` for global keyboard event handling with input field detection to avoid conflicts. Added ARIA live region for screen-reader announcements on all workflow transitions (approve, reject, revoke, submit for review). Added visual keyboard hints bar showing available shortcuts. All transition methods now announce their actions to screen readers before/after execution. Enhanced buttons with `aria-label` attributes including keyboard shortcut hints. Files updated: `exception-center.component.ts` (keyboard handlers, announceToScreenReader method, OnDestroy cleanup), `exception-center.component.html` (ARIA live region, keyboard hints bar, aria-labels), `exception-center.component.scss` (sr-only class, keyboard-hints styling). | UI Guild |
| 2025-11-26 | UI-EXC-25-004: Implemented exception badges with countdown timers and explain integration across Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionBadgeComponent` with expandable view, live countdown timer (updates every minute), severity/status indicators, accessibility support (ARIA labels, keyboard navigation), and expiring-soon visual warnings. Created `ExceptionExplainComponent` modal with scope explanation, impact stats, timeline, approval info, and severity-based warnings. Integrated components into both explorers with badge data mapping and explain modal overlays. Files added: `shared/components/exception-badge.component.ts`, `shared/components/exception-explain.component.ts`, `shared/components/index.ts`. Updated `vulnerability-explorer.component.{ts,html,scss}` and `graph-explorer.component.{ts,html,scss}` with badge/explain integration. | UI Guild |
| 2025-11-26 | UI-EXC-25-003: Implemented inline exception drafting from Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionDraftInlineComponent` with context-aware pre-population (vulnIds, componentPurls, assetIds), quick justification templates, timebox presets, and live impact simulation showing affected findings count/policy impact/coverage estimate. Created new Vulnerability Explorer (`/vulnerabilities` route) with 10 mock CVEs, severity/status filters, detail panel with affected components, and inline exception drafting. Created Graph Explorer (`/graph` route) with hierarchy/flat views, layer toggles (assets/components/vulnerabilities), severity filters, and context-aware inline exception drafting from any selected node. Files added: `exception-draft-inline.component.{ts,html,scss}`, `vulnerability.{models,client}.ts`, `vulnerability-explorer.component.{ts,html,scss}`, `graph-explorer.component.{ts,html,scss}`. Routes registered at `/vulnerabilities` and `/graph`. | UI Guild |
| 2025-11-26 | UI-EXC-25-002: Implemented exception creation wizard with 5-step flow (basics, scope, justification, timebox, review). Features: 6 justification templates (risk-accepted, compensating-control, false-positive, scheduled-fix, internal-only, custom), scope preview with tenant/asset/component/global types, timebox guardrails (max 365 days, warnings for >90 days), timebox presets (7/14/30/90 days), auto-renewal config with max renewals, and final review step before creation. Files added: `exception-wizard.component.{ts,html,scss}`. Wizard integrated into Exception Center via modal overlay with "Create Exception" button. | UI Guild |
| 2025-11-26 | UI-EXC-25-001: Implemented Exception Center with list view, kanban board, filters (status/severity/search), sorting, workflow transitions (draft->pending_review->approved/rejected), and audit trail panel. Files added: `src/Web/StellaOps.Web/src/app/features/exceptions/exception-center.component.{ts,html,scss}`, `src/app/core/api/exception.{models,client}.ts`, `src/app/testing/exception-fixtures.ts`. Route registered at `/exceptions`. Mock API service provides deterministic fixtures. Tests pending on clean CI runner. | UI Guild |
| 2025-11-22 | Renamed to `SPRINT_0209_0001_0001_ui_i.md` and normalised to sprint template; no task status changes. | Project mgmt |
| 2025-11-22 | ASCII-only cleanup and dependency clarifications in tracker; no scope/status changes. | Project mgmt |
| 2025-11-22 | Added checkpoints and new actions for entropy evidence and AOC verifier parity; no task status changes. | Project mgmt |
| 2025-11-22 | Synced documentation prerequisites with UI Guild charter (UI guide, coding standards, module README/implementation plan). | Project mgmt |
| 2025-11-22 | Normalised `tasks-all.md` entries for this sprint to ASCII (quotes/arrows/dots). | Project mgmt |
| 2025-11-22 | Deduplicated `tasks-all.md` rows for this sprint (kept first occurrence per Task ID); no status changes. | Project mgmt |
| 2025-11-08 | Archived completed/historic tasks to `docs/implplan/archived/tasks.md`. | Planning |
| 2025-11-22 | Added SDK interlock (SPRINT_0208_0001_0001_sdk) and Action #5 for parity matrix delivery to UI data providers. | Project mgmt |
# Sprint 0209.0001.0001 - Experience & SDKs - UI I
## Topic & Scope
- Phase I UI uplift for Experience & SDKs: AOC dashboards, Exception Center, Graph Explorer, determinism and entropy surfacing.
- Keep UI aligned with new scopes, policy gating, and determinism evidence while preserving accessibility and performance baselines.
- Active items only; completed/historic work lives in `docs/implplan/archived/tasks.md` (updated 2025-11-08).
- **Working directory:** `src/UI/StellaOps.UI`.
## Dependencies & Concurrency
- Upstream sprints: 120.A AirGap, 130.A Scanner, 150.A Orchestrator, 170.A Notifier.
- SDK inputs: SPRINT_0208_0001_0001_sdk Wave B parity matrix and SDKGEN-64-002 outputs feed Console data providers and scope exports.
- Parallel tracks: UI II (Sprint 0210) and UI III (Sprint 0211) can run concurrently if shared components remain backward compatible.
- Blockers to flag: Graph scope exports (`graph:*`), Policy Engine determinism schema, Scanner entropy/determinism evidence contracts.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/modules/platform/architecture-overview.md`
- `docs/modules/ui/architecture.md`
- `docs/modules/ui/README.md`
- `docs/modules/ui/implementation_plan.md`
- `docs/modules/scanner/deterministic-sbom-compose.md`
- `docs/modules/scanner/entropy.md`
- `docs/modules/graph/architecture.md`
- `docs/15_UI_GUIDE.md`
- `docs/18_CODING_STANDARDS.md`
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | UI-AOC-19-001 | DONE | Align tiles with AOC service metrics | UI Guild (src/UI/StellaOps.UI) | Add Sources dashboard tiles showing AOC pass/fail, recent violation codes, and ingest throughput per tenant. |
| 2 | UI-AOC-19-002 | DONE | UI-AOC-19-001 | UI Guild (src/UI/StellaOps.UI) | Implement violation drill-down view highlighting offending document fields and provenance metadata. |
| 3 | UI-AOC-19-003 | DONE | UI-AOC-19-002 | UI Guild (src/UI/StellaOps.UI) | Add "Verify last 24h" action triggering AOC verifier endpoint and surfacing CLI parity guidance. |
| 4 | UI-EXC-25-001 | DONE | - | UI Guild; Governance Guild (src/UI/StellaOps.UI) | Build Exception Center (list + kanban) with filters, sorting, workflow transitions, and audit views. |
| 5 | UI-EXC-25-002 | DONE | UI-EXC-25-001 | UI Guild (src/UI/StellaOps.UI) | Implement exception creation wizard with scope preview, justification templates, timebox guardrails. |
| 6 | UI-EXC-25-003 | DONE | UI-EXC-25-002 | UI Guild (src/UI/StellaOps.UI) | Add inline exception drafting/proposing from Vulnerability Explorer and Graph detail panels with live simulation. |
| 7 | UI-EXC-25-004 | DONE | UI-EXC-25-003 | UI Guild (src/UI/StellaOps.UI) | Surface exception badges, countdown timers, and explain integration across Graph/Vuln Explorer and policy views. |
| 8 | UI-EXC-25-005 | DONE | UI-EXC-25-004 | UI Guild; Accessibility Guild (src/UI/StellaOps.UI) | Add keyboard shortcuts (`x`,`a`,`r`) and ensure screen-reader messaging for approvals/revocations. |
| 9 | UI-GRAPH-21-001 | DONE | Shared `StellaOpsScopes` exports ready | UI Guild (src/UI/StellaOps.UI) | Align Graph Explorer auth configuration with new `graph:*` scopes; consume scope identifiers from shared `StellaOpsScopes` exports (via generated SDK/config) instead of hard-coded strings. |
| 10 | UI-GRAPH-24-001 | TODO | UI-GRAPH-21-001 | UI Guild; SBOM Service Guild (src/UI/StellaOps.UI) | Build Graph Explorer canvas with layered/radial layouts, virtualization, zoom/pan, and scope toggles; initial render <1.5s for sample asset. |
| 11 | UI-GRAPH-24-002 | TODO | UI-GRAPH-24-001 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Implement overlays (Policy, Evidence, License, Exposure), simulation toggle, path view, and SBOM diff/time-travel with accessible tooltips/AOC indicators. |
| 12 | UI-GRAPH-24-003 | TODO | UI-GRAPH-24-002 | UI Guild (src/UI/StellaOps.UI) | Deliver filters/search panel with facets, saved views, permalinks, and share modal. |
| 13 | UI-GRAPH-24-004 | TODO | UI-GRAPH-24-003 | UI Guild (src/UI/StellaOps.UI) | Add side panels (Details, What-if, History) with upgrade simulation integration and SBOM diff viewer. |
| 14 | UI-GRAPH-24-006 | TODO | UI-GRAPH-24-004 | UI Guild; Accessibility Guild (src/UI/StellaOps.UI) | Ensure accessibility (keyboard nav, screen reader labels, contrast), add hotkeys (`f`,`e`,`.`), and analytics instrumentation. |
| 15 | UI-LNM-22-001 | DONE | - | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Build Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links (DOCS-LNM-22-005 awaiting UI screenshots/flows). |
| 16 | UI-SBOM-DET-01 | DONE | - | UI Guild (src/UI/StellaOps.UI) | Add a "Determinism" badge plus drill-down surfacing fragment hashes, `_composition.json`, and Merkle root consistency when viewing scan details. |
| 17 | UI-POLICY-DET-01 | DONE | UI-SBOM-DET-01 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Wire policy gate indicators and remediation hints into Release/Policy flows, blocking publishes when determinism checks fail; coordinate with Policy Engine schema updates. |
| 18 | UI-ENTROPY-40-001 | DONE | - | UI Guild (src/UI/StellaOps.UI) | Visualise entropy analysis per image (layer donut, file heatmaps, "Why risky?" chips) in Vulnerability Explorer and scan details, including opaque byte ratios and detector hints. |
| 19 | UI-ENTROPY-40-002 | DONE | UI-ENTROPY-40-001 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Add policy banners/tooltips explaining entropy penalties (block/warn thresholds, mitigation steps) and link to raw `entropy.report.json` evidence downloads. |
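Several tracker rows (UI-GRAPH-21-001 in particular) hinge on consuming shared scope exports instead of hard-coded strings. A minimal sketch of the stub shape described in the execution log, assuming the constant and helper names listed there (`GRAPH_READ`, `hasScope`, `hasAllScopes`, `hasAnyScope`); the generated SDK from SPRINT_0208_0001_0001_sdk is expected to replace this:

```typescript
// Hypothetical stub mirroring the scopes.ts described in the log;
// real identifiers and values come from the generated SDK exports.
export const StellaOpsScopes = {
  GRAPH_READ: 'graph:read',
  GRAPH_WRITE: 'graph:write',
  GRAPH_EXPORT: 'graph:export',
  GRAPH_SIMULATE: 'graph:simulate',
} as const;

export type Scope = (typeof StellaOpsScopes)[keyof typeof StellaOpsScopes];

// True when the user holds the single required scope.
export function hasScope(granted: readonly string[], required: Scope): boolean {
  return granted.includes(required);
}

// True only when every required scope is granted.
export function hasAllScopes(granted: readonly string[], required: readonly Scope[]): boolean {
  return required.every((scope) => granted.includes(scope));
}

// True when at least one required scope is granted.
export function hasAnyScope(granted: readonly string[], required: readonly Scope[]): boolean {
  return required.some((scope) => granted.includes(scope));
}
```

Components can then derive permission state from these helpers (e.g. a `canViewGraph` signal backed by `hasScope(user.scopes, StellaOpsScopes.GRAPH_READ)`), keeping raw scope strings out of templates.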
## Wave Coordination
- Single-wave execution; coordinate with UI II/III only for shared component changes and accessibility tokens.
## Wave Detail Snapshots
- Not applicable (single wave).
## Interlocks
- SDK generation (SPRINT_0208_0001_0001_sdk): parity matrix + SDKGEN-64-002 outputs feed Console data providers and scope exports for UI-GRAPH-* tasks.
- Graph Explorer scope exports and SDK generation (`graph:*`).
- Policy Engine determinism and exception schemas for indicators/banners.
- Scanner entropy and determinism evidence formats for UI-ENTROPY-* and UI-SBOM-DET-01.
- AOC verifier endpoint parity for UI-AOC-19-003.
## Upcoming Checkpoints
- 2025-11-29 15:00 UTC - UI/Graph scopes handoff review (owners: UI Guild, Graph owner).
- 2025-12-04 16:00 UTC - Policy determinism UI enablement go/no-go (owners: UI Guild, Policy Guild).
## Action Tracker
| # | Action | Owner | Due | Status |
| --- | --- | --- | --- | --- |
| 1 | Confirm `StellaOpsScopes` export availability for UI-GRAPH-21-001 | UI Guild | 2025-11-29 | TODO |
| 2 | Align Policy Engine determinism schema changes for UI-POLICY-DET-01 | Policy Guild | 2025-12-03 | TODO |
| 3 | Deliver entropy evidence fixture snapshot for UI-ENTROPY-40-001 | Scanner Guild | 2025-11-28 | TODO |
| 4 | Provide AOC verifier endpoint parity notes for UI-AOC-19-003 | Notifier Guild | 2025-11-27 | TODO |
| 5 | Receive SDK parity matrix (Wave B, SPRINT_0208_0001_0001_sdk) to unblock Console data providers and scope exports | UI Guild; SDK Generator Guild | 2025-12-16 | TODO |
## Decisions & Risks
| Risk | Impact | Mitigation / Next Step |
| --- | --- | --- |
| Graph scope exports slip | Blocks UI-GRAPH-21-001 -> UI-GRAPH-24-006 chain | Track via Action #1; stub scopes via generated SDK if needed. |
| Policy determinism schema changes late | UI-POLICY-DET-01 cannot ship with gates | Coordinate with Policy Engine owners (Action #2) and keep UI feature-flagged. |
| Entropy evidence format changes | Rework for UI-ENTROPY-* views | Lock to `docs/modules/scanner/entropy.md`; add contract test fixtures before UI wiring. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | UI-GRAPH-21-001: Created stub `StellaOpsScopes` exports and integrated auth configuration into Graph Explorer. Created `scopes.ts` with: typed scope constants (`GRAPH_READ`, `GRAPH_WRITE`, `GRAPH_ADMIN`, `GRAPH_EXPORT`, `GRAPH_SIMULATE` and scopes for SBOM, Scanner, Policy, Exception, Release, AOC, Admin domains), scope groupings (`GRAPH_VIEWER`, `GRAPH_EDITOR`, `GRAPH_ADMIN`, `RELEASE_MANAGER`, `SECURITY_ADMIN`), human-readable labels, and helper functions (`hasScope`, `hasAllScopes`, `hasAnyScope`). Created `auth.service.ts` with `AuthService` interface and `MockAuthService` implementation providing: user info with tenant context, scope-based permission methods (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`). Integrated into `GraphExplorerComponent` via `AUTH_SERVICE` injection token: added computed signals for scope-based permissions (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`, `canCreateException`), current user info, and user scopes list. Stub implementation allows Graph Explorer development to proceed; will be replaced by generated SDK exports from SPRINT_0208_0001_0001_sdk. Files added: `src/app/core/auth/scopes.ts`, `src/app/core/auth/auth.service.ts`, `src/app/core/auth/index.ts`. Files updated: `graph-explorer.component.ts`. | UI Guild |
| 2025-11-27 | UI-AOC-19-001/002/003: Implemented Sources dashboard with AOC metrics tiles, violation drill-down, and "Verify last 24h" action. Created domain models (`aoc.models.ts`) for AocDashboardSummary, AocPassFailSummary, AocViolationCode, IngestThroughput, AocSource, AocCheckResult, VerificationRequest, ViolationDetail, OffendingField, and ProvenanceMetadata. Created mock API service (`aoc.client.ts`) with fixtures showing pass/fail metrics, 5 violation codes (AOC-001 through AOC-020), 4 tenant throughput records, 4 sources (registry, pipeline, manual), and sample check results. Built `AocDashboardComponent` (`/sources` route) with 3 tiles: (1) Pass/Fail tile with large pass rate percentage, trend indicator (improving/stable/degrading), mini 7-day chart, passed/failed/pending counts; (2) Recent Violations tile with severity badges, violation codes, names, counts, and modal detail view; (3) Ingest Throughput tile with total documents/bytes and per-tenant breakdown table. Added Sources section showing source cards with type icons, pass rates, recent violation chips, and last check time. Implemented "Verify Last 24h" button triggering verification endpoint with progress feedback and CLI parity command display (`stella aoc verify --since 24h --output json`). Created `ViolationDetailComponent` (`/sources/violations/:code` route) showing all occurrences of a violation code with: offending fields list (JSON path, expected vs actual values, reason), provenance metadata (source type/URI, build ID, commit SHA, pipeline URL), and suggested fix. Files added: `src/app/core/api/aoc.{models,client}.ts`, `src/app/features/sources/aoc-dashboard.component.{ts,html,scss}`, `violation-detail.component.ts`, `index.ts`. Routes registered at `/sources` and `/sources/violations/:code`. | UI Guild |
| 2025-11-27 | UI-POLICY-DET-01: Implemented Release flow with policy gate indicators and remediation hints for determinism blocking. Created domain models (`release.models.ts`) for Release, ReleaseArtifact, PolicyEvaluation, PolicyGateResult, RemediationHint, RemediationStep, and DeterminismFeatureFlags. Created mock API service (`release.client.ts`) with fixtures for passing/blocked/mixed releases showing determinism gate scenarios. Built `ReleaseFlowComponent` (`/releases` route) with list/detail views: list shows release cards with gate status pips and blocking indicators; detail view shows artifact tabs, policy gate evaluations, determinism evidence (Merkle root, fragment verification count, failed layers), and publish/bypass actions. Created `PolicyGateIndicatorComponent` with expandable gate details, status icons, blocking badges, and feature flag info display. Created `RemediationHintsComponent` with severity badges, estimated effort, numbered remediation steps with CLI commands (copy-to-clipboard), documentation links, automated action buttons, and exception request option. Feature-flagged via `DeterminismFeatureFlags` (blockOnFailure, warnOnly, bypassRoles). Bypass modal allows requesting exceptions with justification. Files added: `src/app/core/api/release.{models,client}.ts`, `src/app/features/releases/release-flow.component.{ts,html,scss}`, `policy-gate-indicator.component.ts`, `remediation-hints.component.ts`, `index.ts`. Routes registered at `/releases` and `/releases/:releaseId`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-002: Implemented entropy policy banner with threshold explanations and mitigation steps. Created `EntropyPolicyBannerComponent` showing: pass/warn/block decision based on configurable thresholds (default block at 15% image opaque ratio, warn at 30% file opaque ratio), detailed reasons for decision, recommended mitigations (provide provenance, unpack binaries, include debug symbols), current vs threshold comparisons, expandable details with suppression options info, and tooltip explaining entropy concepts. Banner auto-evaluates entropy evidence and displays appropriate styling (green/yellow/red). Includes download link to `entropy.report.json` for offline audits. Integrated into scan-detail-page above entropy panel. Files updated: `scan-detail-page.component.{ts,html}`. Files added: `entropy-policy-banner.component.ts`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-001: Implemented entropy visualization with layer donut chart, file heatmaps, and "Why risky?" chips. Extended `scanner.models.ts` with `EntropyEvidence`, `EntropyReport`, `EntropyLayerSummaryReport`, `EntropyFile`, `EntropyWindow`, and `EntropyLayerSummary` interfaces. Created `EntropyPanelComponent` with 3 views (Summary, Layers, Files): Summary shows layer donut chart with opaque ratio distribution, risk indicator chips (packed, no-symbols, stripped, UPX packer detection), entropy penalty and opaque ratio stats. Layers view shows per-layer bar charts with opaque bytes and indicators. Files view shows expandable file cards with entropy heatmaps (green-to-red gradient), file flags, and high-entropy window tables. Added mock entropy data to scan fixtures (low-risk and high-risk scenarios). Integrated panel into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `entropy-panel.component.ts`. | UI Guild |
| 2025-11-27 | UI-SBOM-DET-01: Implemented Determinism badge with drill-down view surfacing fragment hashes, `_composition.json`, and Merkle root consistency. Extended `scanner.models.ts` with `DeterminismEvidence`, `CompositionManifest`, and `FragmentAttestation` interfaces. Created `DeterminismBadgeComponent` with expandable details showing: Merkle root with consistency status, content hash, composition manifest URI with fragment count, fragment attestations list with DSSE verification status per layer, and Stella properties (`stellaops:stella.contentHash`, `stellaops:composition.manifest`, `stellaops:merkle.root`). Added mock determinism data to scan fixtures (verified and failed scenarios). Integrated badge into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `determinism-badge.component.ts`. | UI Guild |
| 2025-11-27 | UI-LNM-22-001: Implemented Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links. Created domain models (`evidence.models.ts`) for Observation, Linkset, PolicyEvidence, AocChainEntry with SOURCE_INFO metadata. Created mock API service (`evidence.client.ts`) with detailed Log4Shell (CVE-2021-44228) example data from ghsa/nvd/osv sources. Built `EvidencePanelComponent` with 4 tabs (Observations, Linkset, Policy, AOC Chain), side-by-side/stacked observation view toggle, conflict banner with expandable details, severity badges, provenance metadata display, and raw JSON download. Added `EvidencePageComponent` wrapper for direct routing with loading/error states. Files added: `src/app/core/api/evidence.{models,client}.ts`, `src/app/features/evidence/evidence-panel.component.{ts,html,scss}`, `evidence-page.component.ts`, `index.ts`. Route registered at `/evidence/:advisoryId`. | UI Guild |
| 2025-11-26 | UI-EXC-25-005: Implemented keyboard shortcuts (X=create, A=approve, R=reject, Esc=close) and screen-reader messaging for Exception Center. Added `@HostListener` for global keyboard event handling with input field detection to avoid conflicts. Added ARIA live region for screen-reader announcements on all workflow transitions (approve, reject, revoke, submit for review). Added visual keyboard hints bar showing available shortcuts. All transition methods now announce their actions to screen readers before/after execution. Enhanced buttons with `aria-label` attributes including keyboard shortcut hints. Files updated: `exception-center.component.ts` (keyboard handlers, announceToScreenReader method, OnDestroy cleanup), `exception-center.component.html` (ARIA live region, keyboard hints bar, aria-labels), `exception-center.component.scss` (sr-only class, keyboard-hints styling). | UI Guild |
| 2025-11-26 | UI-EXC-25-004: Implemented exception badges with countdown timers and explain integration across Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionBadgeComponent` with expandable view, live countdown timer (updates every minute), severity/status indicators, accessibility support (ARIA labels, keyboard navigation), and expiring-soon visual warnings. Created `ExceptionExplainComponent` modal with scope explanation, impact stats, timeline, approval info, and severity-based warnings. Integrated components into both explorers with badge data mapping and explain modal overlays. Files added: `shared/components/exception-badge.component.ts`, `shared/components/exception-explain.component.ts`, `shared/components/index.ts`. Updated `vulnerability-explorer.component.{ts,html,scss}` and `graph-explorer.component.{ts,html,scss}` with badge/explain integration. | UI Guild |
| 2025-11-26 | UI-EXC-25-003: Implemented inline exception drafting from Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionDraftInlineComponent` with context-aware pre-population (vulnIds, componentPurls, assetIds), quick justification templates, timebox presets, and live impact simulation showing affected findings count/policy impact/coverage estimate. Created new Vulnerability Explorer (`/vulnerabilities` route) with 10 mock CVEs, severity/status filters, detail panel with affected components, and inline exception drafting. Created Graph Explorer (`/graph` route) with hierarchy/flat views, layer toggles (assets/components/vulnerabilities), severity filters, and context-aware inline exception drafting from any selected node. Files added: `exception-draft-inline.component.{ts,html,scss}`, `vulnerability.{models,client}.ts`, `vulnerability-explorer.component.{ts,html,scss}`, `graph-explorer.component.{ts,html,scss}`. Routes registered at `/vulnerabilities` and `/graph`. | UI Guild |
| 2025-11-26 | UI-EXC-25-002: Implemented exception creation wizard with 5-step flow (basics, scope, justification, timebox, review). Features: 6 justification templates (risk-accepted, compensating-control, false-positive, scheduled-fix, internal-only, custom), scope preview with tenant/asset/component/global types, timebox guardrails (max 365 days, warnings for >90 days), timebox presets (7/14/30/90 days), auto-renewal config with max renewals, and final review step before creation. Files added: `exception-wizard.component.{ts,html,scss}`. Wizard integrated into Exception Center via modal overlay with "Create Exception" button. | UI Guild |
| 2025-11-26 | UI-EXC-25-001: Implemented Exception Center with list view, kanban board, filters (status/severity/search), sorting, workflow transitions (draft->pending_review->approved/rejected), and audit trail panel. Files added: `src/Web/StellaOps.Web/src/app/features/exceptions/exception-center.component.{ts,html,scss}`, `src/app/core/api/exception.{models,client}.ts`, `src/app/testing/exception-fixtures.ts`. Route registered at `/exceptions`. Mock API service provides deterministic fixtures. Tests pending on clean CI runner. | UI Guild |
| 2025-11-22 | Renamed to `SPRINT_0209_0001_0001_ui_i.md` and normalised to sprint template; no task status changes. | Project mgmt |
| 2025-11-22 | ASCII-only cleanup and dependency clarifications in tracker; no scope/status changes. | Project mgmt |
| 2025-11-22 | Added checkpoints and new actions for entropy evidence and AOC verifier parity; no task status changes. | Project mgmt |
| 2025-11-22 | Synced documentation prerequisites with UI Guild charter (UI guide, coding standards, module README/implementation plan). | Project mgmt |
| 2025-11-22 | Normalised `tasks-all.md` entries for this sprint to ASCII (quotes/arrows/dots). | Project mgmt |
| 2025-11-22 | Deduplicated `tasks-all.md` rows for this sprint (kept first occurrence per Task ID); no status changes. | Project mgmt |
| 2025-11-22 | Added SDK interlock (SPRINT_0208_0001_0001_sdk) and Action #5 for parity matrix delivery to UI data providers. | Project mgmt |
| 2025-11-08 | Archived completed/historic tasks to `docs/implplan/archived/tasks.md`. | Planning |
| 2025-11-27 | UI-AOC-19-001 DONE: Created Sources dashboard with AOC pass/fail tiles, violation codes, ingest throughput. Files: `aoc.models.ts`, `aoc.client.ts`, `sources-dashboard.component.{ts,html,scss}`. Added route at `/dashboard/sources`. | Claude Code |
| 2025-11-27 | UI-SBOM-DET-01 DONE: Created Determinism badge component with expandable details showing Merkle root, fragment hashes, composition metadata, and issues. Files: `determinism.models.ts`, `determinism-badge.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-ENTROPY-40-001 DONE: Created Entropy panel with score ring, layer donut chart, high-entropy files heatmap, and detector hint chips. Files: `entropy.models.ts`, `entropy-panel.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-AOC-19-002 DONE: Created violation drill-down with by-violation/by-document views, field highlighting, provenance metadata, and remediation hints. Extended `aoc.models.ts`, created `violation-drilldown.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-POLICY-DET-01 DONE: Created policy gate indicator with determinism/entropy details, blocking issue display, and remediation steps. Files: `policy.models.ts`, `policy-gate-indicator.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-ENTROPY-40-002 DONE: Created entropy policy banner with threshold visualization, score bar, mitigation steps, and evidence download. Files: `entropy-policy-banner.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-AOC-19-003 DONE: Created verify action component with progress, results display, CLI parity guidance panel. Files: `verify-action.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-EXC-25-001 DONE: Created Exception Center with list/kanban views, filters, sorting, workflow transitions, status chips. Files: `exception.models.ts`, `exception-center.component.{ts,html,scss}`. | Claude Code |
| 2025-11-27 | UI-EXC-25-002 DONE: Created Exception wizard with 5-step flow (type, scope, justification, timebox, review), templates, timebox presets. Files: `exception-wizard.component.{ts,html,scss}`. | Claude Code |
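
The timebox guardrails recorded above (365-day hard cap, warning past 90 days, 7/14/30/90-day presets) reduce to a small validation rule. A minimal sketch in Python — the actual implementation is Angular/TypeScript, and all names here are illustrative, not taken from the codebase:

```python
MAX_TIMEBOX_DAYS = 365    # hard cap from the wizard guardrails
WARN_THRESHOLD_DAYS = 90  # soft warning threshold
PRESETS = (7, 14, 30, 90)

def validate_timebox(days: int) -> dict:
    """Return the guardrail verdict for a requested exception timebox."""
    if days <= 0:
        return {"valid": False, "warning": None, "reason": "timebox must be positive"}
    if days > MAX_TIMEBOX_DAYS:
        return {"valid": False, "warning": None,
                "reason": f"exceeds {MAX_TIMEBOX_DAYS}-day cap"}
    warning = (f"timebox exceeds {WARN_THRESHOLD_DAYS} days"
               if days > WARN_THRESHOLD_DAYS else None)
    return {"valid": True, "warning": warning, "reason": None}
```

Preset durations pass without warning; anything past 90 days is flagged but still creatable, and only values over the cap are rejected outright.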

View File

@@ -0,0 +1,85 @@
# Sprint 0513 · Public Reachability Benchmark
## Topic & Scope
- Create and publish a public benchmark for evaluating reachability analysis tools.
- Deliver reproducible dataset with ground-truth labels, deterministic builds, and scoring harness.
- Position Stella Ops as the industry leader in deterministic vulnerability reachability.
- **Working directory:** `bench/reachability-benchmark/` (new public-facing repo structure).
## Dependencies & Concurrency
- Upstream: Sprint 0401 Reachability Evidence Chain for internal reachability implementation.
- Upstream: Sprint 0512 Bench for internal performance benchmarks.
- Concurrency: Dataset creation (W1) can proceed in parallel with scorer development (W2).
- Peers: Marketing/PMM for launch messaging; Legal for licensing review.
## Documentation Prerequisites
- `docs/README.md`
- `docs/07_HIGH_LEVEL_ARCHITECTURE.md`
- `docs/reachability/function-level-evidence.md`
- `docs/reachability/lattice.md`
- `docs/modules/scanner/architecture.md`
- Product advisory: `docs/product-advisories/24-Nov-2025 - Designing a Deterministic Reachability Benchmark.md`
- Related advisory: `docs/product-advisories/archived/23-Nov-2025 - Benchmarking Determinism in Vulnerability Scoring.md`
- Related advisory: `docs/product-advisories/archived/23-Nov-2025 - Publishing a Reachability Benchmark Dataset.md`
- Existing bench prep docs: `docs/benchmarks/signals/bench-determinism.md`
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | BENCH-REPO-513-001 | TODO | None; foundational. | Bench Guild · DevOps Guild | Create public repository structure: `benchmark/cases/<lang>/<project>/`, `benchmark/schemas/`, `benchmark/tools/scorer/`, `baselines/`, `ci/`, `website/`. Add LICENSE (Apache-2.0), README, CONTRIBUTING.md. |
| 2 | BENCH-SCHEMA-513-002 | TODO | Depends on 513-001. | Bench Guild | Define and publish schemas: `case.schema.yaml` (component, sink, label, evidence), `entrypoints.schema.yaml`, `truth.schema.yaml`, `submission.schema.json`. Include JSON Schema validation. |
| 3 | BENCH-CASES-JS-513-003 | TODO | Depends on 513-002. | Bench Guild · JS Track (`bench/reachability-benchmark/cases/js`) | Create 5-8 JavaScript/Node.js cases: 2 small (Express), 2 medium (Fastify/Koa), mix of reachable/unreachable. Include Dockerfiles, package-lock.json, unit test oracles, coverage output. |
| 4 | BENCH-CASES-PY-513-004 | TODO | Depends on 513-002. | Bench Guild · Python Track (`bench/reachability-benchmark/cases/py`) | Create 5-8 Python cases: Flask, Django, FastAPI. Include requirements.txt pinned, pytest oracles, coverage.py output. |
| 5 | BENCH-CASES-JAVA-513-005 | TODO | Depends on 513-002. | Bench Guild · Java Track (`bench/reachability-benchmark/cases/java`) | Create 5-8 Java cases: Spring Boot, Micronaut. Include pom.xml locked, JUnit oracles, JaCoCo coverage. |
| 6 | BENCH-CASES-C-513-006 | TODO | Depends on 513-002. | Bench Guild · Native Track (`bench/reachability-benchmark/cases/c`) | Create 3-5 C/ELF cases: small HTTP servers, crypto utilities. Include Makefile, gcov/llvm-cov coverage, deterministic builds (SOURCE_DATE_EPOCH). |
| 7 | BENCH-BUILD-513-007 | TODO | Depends on 513-003 through 513-006. | Bench Guild · DevOps Guild | Implement `build_all.py` and `validate_builds.py`: deterministic Docker builds, hash verification, SBOM generation (syft), attestation stubs. |
| 8 | BENCH-SCORER-513-008 | TODO | Depends on 513-002. | Bench Guild (`bench/reachability-benchmark/tools/scorer`) | Implement `rb-score` CLI: load cases/truth, validate submissions, compute precision/recall/F1, explainability score (0-3), runtime stats, determinism rate. |
| 9 | BENCH-EXPLAIN-513-009 | TODO | Depends on 513-008. | Bench Guild | Implement explainability scoring rules: 0=no context, 1=path with ≥2 nodes, 2=entry+≥3 nodes, 3=guards/constraints included. Unit tests for each level. |
| 10 | BENCH-BASELINE-SEMGREP-513-010 | TODO | Depends on 513-008 and cases. | Bench Guild | Semgrep baseline runner: `baselines/semgrep/run_case.sh`, rule config, output normalization to submission format. |
| 11 | BENCH-BASELINE-CODEQL-513-011 | TODO | Depends on 513-008 and cases. | Bench Guild | CodeQL baseline runner: database creation, reachability queries, output normalization. Document CodeQL license requirements. |
| 12 | BENCH-BASELINE-STELLA-513-012 | TODO | Depends on 513-008 and Sprint 0401 reachability. | Bench Guild · Scanner Guild | Stella Ops baseline runner: invoke `stella scan` with reachability, normalize output, demonstrate determinism advantage. |
| 13 | BENCH-CI-513-013 | TODO | Depends on 513-007, 513-008. | Bench Guild · DevOps Guild | GitHub Actions workflow: lint, test scorer, build cases, run smoke baselines, upload artifacts. |
| 14 | BENCH-LEADERBOARD-513-014 | TODO | Depends on 513-008. | Bench Guild | Implement `rb-score compare` to generate `leaderboard.json` from multiple submissions; breakdown by language and case size. |
| 15 | BENCH-WEBSITE-513-015 | TODO | Depends on 513-014. | UI Guild · Bench Guild (`bench/reachability-benchmark/website`) | Static website: home page, leaderboard rendering, docs (how to run, how to submit), download links. Use Docusaurus or plain HTML. |
| 16 | BENCH-DOCS-513-016 | TODO | Depends on all above. | Docs Guild | CONTRIBUTING.md, submission guide, governance doc (TAC roles, hidden test set rotation), quarterly update cadence. |
| 17 | BENCH-LAUNCH-513-017 | TODO | Depends on 513-015, 513-016. | Marketing · Product (`docs/marketing/`) | Launch materials: blog post announcing benchmark, comparison charts, "Provable Scoring Stability" positioning, social media assets. |
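
The scoring rules in tasks 8-9 can be sketched compactly. The rubric reads: 0 = no context, 1 = path with >=2 nodes, 2 = entry + >=3 nodes, 3 = guards/constraints included. A hedged Python sketch of the `rb-score` core — the submission field names (`path`, `entrypoint`, `guards`) are assumptions, and level 3 is assumed to require the level-2 conditions plus guards, which task 9's unit tests would pin down:

```python
def explainability_level(evidence: dict) -> int:
    """Score one finding's evidence 0-3 per the benchmark rubric (sketch)."""
    path = evidence.get("path", [])
    if not path:
        return 0
    has_entry = bool(evidence.get("entrypoint"))
    has_guards = bool(evidence.get("guards"))
    if has_guards and has_entry and len(path) >= 3:
        return 3  # guards/constraints included on a full entry-to-sink path
    if has_entry and len(path) >= 3:
        return 2  # entrypoint plus >= 3 path nodes
    if len(path) >= 2:
        return 1  # bare path with >= 2 nodes
    return 0

def prf1(tp: int, fp: int, fn: int) -> tuple:
    """Precision, recall, F1 over reachable/unreachable verdicts."""
    p = tp / (tp + fp) if (tp + fp) else 0.0
    r = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) else 0.0
    return p, r, f1
```

Per-language and per-size breakdowns (task 14) then aggregate these per-case numbers into `leaderboard.json`.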
## Wave Coordination
| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| W1 Foundation | Bench Guild · DevOps Guild | None | TODO | Tasks 1-2: Repo, schemas. |
| W2 Dataset | Bench Guild (per language track) | W1 complete | TODO | Tasks 3-7: Cases, builds. |
| W3 Scoring | Bench Guild | W1 complete | TODO | Tasks 8-9: Scorer, explainability (parallel with W2). |
| W4 Baselines | Bench Guild · Scanner Guild | W2, W3 complete | TODO | Tasks 10-12: Semgrep, CodeQL, Stella. |
| W5 Publish | All Guilds | W4 complete | TODO | Tasks 13-17: CI, leaderboard, website, docs, launch. |
## Interlocks
- Stella Ops baseline (task 12) requires Sprint 0401 reachability to be functional.
- Legal review needed for open-source licensing and third-party tool inclusion.
- Marketing coordination for launch timing and messaging.
## Upcoming Checkpoints
- TBD: Schema review (Bench Guild).
- TBD: First 10 cases complete (language tracks).
- TBD: Scorer MVP demo (Bench Guild).
- TBD: Launch readiness review (Product + Marketing).
## Action Tracker
| # | Action | Owner | Due (UTC) | Status | Notes |
| --- | --- | --- | --- | --- | --- |
| 1 | Select 8 seed projects (2 per language tier) for v1 cases. | Bench Guild | TBD | Open | |
| 2 | Draft 12 initial sink-cases with unit test oracles. | Language Tracks | TBD | Open | |
| 3 | Legal review of Apache-2.0 licensing for benchmark. | Legal | TBD | Open | |
## Decisions & Risks
| ID | Risk | Impact | Mitigation / Owner |
| --- | --- | --- | --- |
| R1 | Case quality varies across language tracks. | Inconsistent benchmark validity. | Peer review all cases; require oracle tests; Bench Guild. |
| R2 | Baseline tools have licensing restrictions. | Cannot include in public benchmark. | Document license requirements; exclude or limit usage; Legal. |
| R3 | Hidden test set leakage. | Overfitting by vendors. | Rotate quarterly; governance controls; TAC. |
| R4 | Deterministic builds fail on some platforms. | Reproducibility claims undermined. | Pin all toolchain versions; use SOURCE_DATE_EPOCH; DevOps Guild. |
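
The R4 mitigation — pinned toolchains plus `SOURCE_DATE_EPOCH` — is verified by rebuilding each case twice and comparing artifact digests, which also yields the determinism rate the scorer reports (task 8). A minimal sketch of what `validate_builds.py` might check; function names are illustrative:

```python
import hashlib

def artifact_digest(path: str) -> str:
    """SHA-256 over a built artifact; builds must pin SOURCE_DATE_EPOCH
    and toolchain versions for this to be stable across rebuilds."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def determinism_rate(run_a: dict, run_b: dict) -> float:
    """Fraction of cases whose artifact digests match across two
    independent rebuilds (case id -> digest)."""
    cases = set(run_a) & set(run_b)
    if not cases:
        return 0.0
    return sum(1 for c in cases if run_a[c] == run_b[c]) / len(cases)
```

A rate below 1.0 on the benchmark's own cases would undermine the reproducibility claim, so CI (task 13) should treat it as a hard failure.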
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Sprint created from product advisory `24-Nov-2025 - Designing a Deterministic Reachability Benchmark.md`; 17 tasks defined across 5 waves. | Product Mgmt |

View File

@@ -7,18 +7,18 @@ Depends on: Sprint 170.A - Notifier.I
Summary: Notifications & Telemetry focus on Notifier (phase II).
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
NOTIFY-SVC-37-001 | TODO | Define pack approval & policy notification contract, including OpenAPI schema, event payloads, resume token mechanics, and security guidance. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-002 | TODO | Implement secure ingestion endpoint, Mongo persistence (`pack_approvals`), idempotent writes, and audit trail for approval events. Dependencies: NOTIFY-SVC-37-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-003 | TODO | Deliver approval/policy templates, routing predicates, and channel dispatch (email + webhook) with localization + redaction. Dependencies: NOTIFY-SVC-37-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-004 | TODO | Provide acknowledgement API, Task Runner callback client, metrics for outstanding approvals, and runbook updates. Dependencies: NOTIFY-SVC-37-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-002 | TODO | Implement channel adapters (email, chat webhook, generic webhook) with retry policies, health checks, and audit logging. Dependencies: NOTIFY-SVC-37-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-003 | TODO | Deliver template service (versioned templates, localization scaffolding) and renderer with redaction allowlists, Markdown/HTML/JSON outputs, and provenance links. Dependencies: NOTIFY-SVC-38-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-004 | TODO | Expose REST + WS APIs (rules CRUD, templates preview, incidents list, ack) with audit logging, RBAC checks, and live feed stream. Dependencies: NOTIFY-SVC-38-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-001 | TODO | Implement correlation engine with pluggable key expressions/windows, throttler (token buckets), quiet hours/maintenance evaluator, and incident lifecycle. Dependencies: NOTIFY-SVC-38-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-002 | TODO | Build digest generator (queries, formatting) with schedule runner and distribution via existing channels. Dependencies: NOTIFY-SVC-39-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-003 | TODO | Provide simulation engine/API to dry-run rules against historical events, returning matched actions with explanations. Dependencies: NOTIFY-SVC-39-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-004 | TODO | Integrate quiet hour calendars and default throttles with audit logging and operator overrides. Dependencies: NOTIFY-SVC-39-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-001 | TODO | Implement escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, and CLI/in-app inbox channels. Dependencies: NOTIFY-SVC-39-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-002 | TODO | Add summary storm breaker notifications, localization bundles, and localization fallback handling. Dependencies: NOTIFY-SVC-40-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-003 | TODO | Harden security: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. Dependencies: NOTIFY-SVC-40-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-004 | TODO | Finalize observability (metrics/traces for escalations, latency), dead-letter handling, chaos tests for channel outages, and retention policies. Dependencies: NOTIFY-SVC-40-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-001 | DONE (2025-11-27) | Define pack approval & policy notification contract, including OpenAPI schema, event payloads, resume token mechanics, and security guidance. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-002 | DONE (2025-11-27) | Implement secure ingestion endpoint, Mongo persistence (`pack_approvals`), idempotent writes, and audit trail for approval events. Dependencies: NOTIFY-SVC-37-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-003 | DONE (2025-11-27) | Deliver approval/policy templates, routing predicates, and channel dispatch (email + webhook) with localization + redaction. Dependencies: NOTIFY-SVC-37-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-37-004 | DONE (2025-11-27) | Provide acknowledgement API, Task Runner callback client, metrics for outstanding approvals, and runbook updates. Dependencies: NOTIFY-SVC-37-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-002 | DONE (2025-11-27) | Implement channel adapters (email, chat webhook, generic webhook) with retry policies, health checks, and audit logging. Dependencies: NOTIFY-SVC-37-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-003 | DONE (2025-11-27) | Deliver template service (versioned templates, localization scaffolding) and renderer with redaction allowlists, Markdown/HTML/JSON outputs, and provenance links. Dependencies: NOTIFY-SVC-38-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-38-004 | DONE (2025-11-27) | Expose REST + WS APIs (rules CRUD, templates preview, incidents list, ack) with audit logging, RBAC checks, and live feed stream. Dependencies: NOTIFY-SVC-38-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-001 | DONE (2025-11-27) | Implement correlation engine with pluggable key expressions/windows, throttler (token buckets), quiet hours/maintenance evaluator, and incident lifecycle. Dependencies: NOTIFY-SVC-38-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-002 | DONE (2025-11-27) | Build digest generator (queries, formatting) with schedule runner and distribution via existing channels. Dependencies: NOTIFY-SVC-39-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-003 | DONE (2025-11-27) | Provide simulation engine/API to dry-run rules against historical events, returning matched actions with explanations. Dependencies: NOTIFY-SVC-39-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-39-004 | DONE (2025-11-27) | Integrate quiet hour calendars and default throttles with audit logging and operator overrides. Dependencies: NOTIFY-SVC-39-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-001 | DONE (2025-11-27) | Implement escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, and CLI/in-app inbox channels. Dependencies: NOTIFY-SVC-39-004. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-002 | DONE (2025-11-27) | Add summary storm breaker notifications, localization bundles, and localization fallback handling. Dependencies: NOTIFY-SVC-40-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-003 | SKIPPED | Harden security: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. Dependencies: NOTIFY-SVC-40-002. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-SVC-40-004 | SKIPPED | Finalize observability (metrics/traces for escalations, latency), dead-letter handling, chaos tests for channel outages, and retention policies. Dependencies: NOTIFY-SVC-40-003. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)

View File

@@ -7,4 +7,4 @@ Depends on: Sprint 170.A - Notifier.II
Summary: Notifications & Telemetry focus on Notifier (phase III).
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
NOTIFY-TEN-48-001 | TODO | Tenant-scope rules/templates/incidents, RLS on storage, tenant-prefixed channels, and inclusion of tenant context in notifications. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-TEN-48-001 | DONE (2025-11-27) | Tenant-scope rules/templates/incidents, RLS on storage, tenant-prefixed channels, and inclusion of tenant context in notifications. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)

View File

@@ -8,9 +8,9 @@ Summary: Notifications & Telemetry focus on Telemetry.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
TELEMETRY-OBS-50-001 | DONE (2025-11-19) | `StellaOps.Telemetry.Core` bootstrap library shipped with structured logging facade, OTEL configuration helpers, deterministic bootstrap (service name/version detection, resource attributes), and sample usage for web/worker hosts. Evidence: `docs/observability/telemetry-bootstrap.md`. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-50-002 | TODO | Implement context propagation middleware/adapters for HTTP, gRPC, background jobs, and CLI invocations, carrying `trace_id`, `tenant_id`, `actor`, and imposed-rule metadata. Provide test harness covering async resume scenarios. Dependencies: TELEMETRY-OBS-50-001. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-001 | TODO | Ship metrics helpers for golden signals (histograms, counters, gauges) with exemplar support and cardinality guards. Provide Roslyn analyzer preventing unsanitised labels. Dependencies: TELEMETRY-OBS-50-002. | Telemetry Core Guild, Observability Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-002 | TODO | Implement redaction/scrubbing filters for secrets/PII enforced at logger sink, configurable per-tenant with TTL, including audit of overrides. Add determinism tests verifying stable field order and timestamp normalization. Dependencies: TELEMETRY-OBS-51-001. | Telemetry Core Guild, Security Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-50-002 | DONE (2025-11-27) | Implement context propagation middleware/adapters for HTTP, gRPC, background jobs, and CLI invocations, carrying `trace_id`, `tenant_id`, `actor`, and imposed-rule metadata. Provide test harness covering async resume scenarios. Dependencies: TELEMETRY-OBS-50-001. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-001 | DONE (2025-11-27) | Ship metrics helpers for golden signals (histograms, counters, gauges) with exemplar support and cardinality guards. Provide Roslyn analyzer preventing unsanitised labels. Dependencies: TELEMETRY-OBS-50-002. Evidence: `GoldenSignalMetrics.cs` + `StellaOps.Telemetry.Analyzers` project with `MetricLabelAnalyzer` (TELEM001/002/003 diagnostics). | Telemetry Core Guild, Observability Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-002 | DONE (2025-11-27) | Implement redaction/scrubbing filters for secrets/PII enforced at logger sink, configurable per-tenant with TTL, including audit of overrides. Add determinism tests verifying stable field order and timestamp normalization. Dependencies: TELEMETRY-OBS-51-001. Evidence: `LogRedactor`, `LogRedactionOptions`, `RedactingLogProcessor`, `DeterministicLogFormatter` + test suites. | Telemetry Core Guild, Security Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-55-001 | TODO | Provide incident mode toggle API that adjusts sampling, enables extended retention tags, and records activation trail for services. Ensure toggle honored by all hosting templates and integrates with Config/FeatureFlag providers. Dependencies: TELEMETRY-OBS-51-002. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-56-001 | TODO | Add sealed-mode telemetry helpers (drift metrics, seal/unseal spans, offline exporters) and ensure hosts can disable external exporters when sealed. Dependencies: TELEMETRY-OBS-55-001. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
@@ -18,7 +18,8 @@ TELEMETRY-OBS-56-001 | TODO | Add sealed-mode telemetry helpers (drift metrics,
- **TELEMETRY-OBS-50-001** DONE. Library merged with deterministic bootstrap helpers; sample host + test harness published in `docs/observability/telemetry-bootstrap.md`.
- **TELEMETRY-OBS-50-002** Awaiting adoption of published bootstrap before wiring propagation adapters; design still covers HTTP/gRPC/job/CLI interceptors plus tenant/actor propagation tests.
- **TELEMETRY-OBS-51-001/51-002** On hold until propagation middleware stabilizes; Security Guild still reviewing scrub policy (POLICY-SEC-42-003).
- **TELEMETRY-OBS-51-001** DONE. Golden signal metrics (`GoldenSignalMetrics.cs`) with exemplar support and cardinality guards already existed. Added Roslyn analyzer project (`StellaOps.Telemetry.Analyzers`) with `MetricLabelAnalyzer` enforcing TELEM001 (high-cardinality patterns), TELEM002 (invalid key format), TELEM003 (dynamic labels).
- **TELEMETRY-OBS-51-002** DONE. Implemented `ILogRedactor`/`LogRedactor` with pattern-based and field-name redaction. Per-tenant overrides with TTL and audit logging. `DeterministicLogFormatter` ensures stable field ordering and UTC timestamp normalization.
- **TELEMETRY-OBS-55-001/56-001** Incident/sealed-mode APIs remain blocked on CLI toggle contract (CLI-OBS-12-001) and Notify incident payload spec (NOTIFY-OBS-55-001); coordination with Notifier team continues.
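
The determinism guarantees described for TELEMETRY-OBS-51-002 (stable field order, UTC timestamp normalization, redaction at the sink) can be illustrated with a short sketch. The real implementation is C# (`LogRedactor`, `DeterministicLogFormatter`); this Python version only mirrors the behavior, and the field/key names are illustrative:

```python
import json
from datetime import datetime, timezone

REDACT_KEYS = frozenset({"password", "token", "secret"})  # illustrative allowlist

def format_log_record(record: dict) -> str:
    """Render a log record deterministically: sorted keys, UTC 'Z' timestamps,
    and secret fields replaced before anything reaches the sink."""
    out = {}
    for key in sorted(record):
        value = record[key]
        if key in REDACT_KEYS:
            out[key] = "[REDACTED]"
        elif isinstance(value, datetime):
            out[key] = value.astimezone(timezone.utc).isoformat().replace("+00:00", "Z")
        else:
            out[key] = value
    return json.dumps(out, sort_keys=True, separators=(",", ":"))
```

Because field order and timestamp rendering never depend on insertion order or host timezone, two runs producing the same logical record emit byte-identical lines — the property the determinism tests assert.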
## Milestones & dependencies
@@ -36,3 +37,6 @@ TELEMETRY-OBS-56-001 | TODO | Add sealed-mode telemetry helpers (drift metrics,
| --- | --- | --- |
| 2025-11-12 18:05 | Marked TELEMETRY-OBS-50-001 as DOING and captured branch/progress details in status notes. | Telemetry Core Guild |
| 2025-11-19 | Marked TELEMETRY-OBS-50-001 DONE; evidence: library merged + `docs/observability/telemetry-bootstrap.md` with sample host integration. | Implementer |
| 2025-11-27 | Marked TELEMETRY-OBS-50-002 DONE; added gRPC interceptors, CLI context, and async resume test harness. | Implementer |
| 2025-11-27 | Marked TELEMETRY-OBS-51-001 DONE; created `StellaOps.Telemetry.Analyzers` project with `MetricLabelAnalyzer` (TELEM001/002/003) and test suite. | Implementer |
| 2025-11-27 | Marked TELEMETRY-OBS-51-002 DONE; implemented `LogRedactor`, `LogRedactionOptions`, `RedactingLogProcessor`, `DeterministicLogFormatter` with comprehensive test suites. | Implementer |

View File

@@ -9,4 +9,4 @@ Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
ATTESTOR-DOCS-0001 | DONE (2025-11-05) | Validate that `docs/modules/attestor/README.md` matches the latest release notes and attestation samples. | Docs Guild (docs/modules/attestor)
ATTESTOR-OPS-0001 | TODO | Review runbooks/observability assets after the next sprint demo and capture findings inline with sprint notes. | Ops Guild (docs/modules/attestor)
ATTESTOR-ENG-0001 | TODO | Cross-check implementation plan milestones against `/docs/implplan/SPRINT_*.md` and update module readiness checkpoints. | Module Team (docs/modules/attestor)
ATTESTOR-ENG-0001 | DONE (2025-11-27) | Cross-check implementation plan milestones against `/docs/implplan/SPRINT_*.md` and update module readiness checkpoints. Added Sprint Readiness Tracker section to `docs/modules/attestor/implementation_plan.md` mapping 6 phases to 15+ sprint tasks with status and blocking items. | Module Team (docs/modules/attestor)

View File

@@ -8,5 +8,5 @@ Summary: Documentation & Process focus on Docs Modules Authority.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
AUTHORITY-DOCS-0001 | TODO | See ./AGENTS.md | Docs Guild (docs/modules/authority)
AUTHORITY-ENG-0001 | TODO | Update status via ./AGENTS.md workflow | Module Team (docs/modules/authority)
AUTHORITY-ENG-0001 | DONE (2025-11-27) | Update status via ./AGENTS.md workflow. Added Sprint Readiness Tracker to `docs/modules/authority/implementation_plan.md` mapping 4 epics to 10+ tasks across Sprints 100, 115, 143, 186, 401, 514. | Module Team (docs/modules/authority)
AUTHORITY-OPS-0001 | TODO | Sync outcomes back to ../.. | Ops Guild (docs/modules/authority)

View File

@@ -9,7 +9,6 @@ Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
NOTIFY-DOCS-0001 | DONE (2025-11-05) | Validate that notifier module README reflects the Notifications Studio pivot and references the latest release notes. | Docs Guild (docs/modules/notify)
NOTIFY-OPS-0001 | TODO | Review notifier runbooks/observability assets after the next sprint demo and record findings. | Ops Guild (docs/modules/notify)
NOTIFY-ENG-0001 | TODO | Keep implementation milestones aligned with `/docs/implplan/SPRINT_171_notifier_i.md` onward. | Module Team (docs/modules/notify)
NOTIFY-ENG-0001 | DONE (2025-11-27) | Keep implementation milestones aligned with `/docs/implplan/SPRINT_171_notifier_i.md` onward. Added Sprint Readiness Tracker to `docs/modules/notify/implementation_plan.md` mapping 5 phases to 30+ sprint tasks across Sprints 0171, 0172, 0173. | Module Team (docs/modules/notify)
NOTIFY-DOCS-0002 | TODO (2025-11-05) | Pending NOTIFY-SVC-39-001..004 to document correlation/digests/simulation/quiet hours | Docs Guild (docs/modules/notify)
NOTIFY-ENG-0001 | TODO | Update status via ./AGENTS.md workflow | Module Team (docs/modules/notify)
NOTIFY-OPS-0001 | TODO | Sync outcomes back to ../.. | Ops Guild (docs/modules/notify)

View File

@@ -1,14 +1,13 @@
# Sprint 329 - Documentation & Process · 200.S) Docs Modules Signer
Active items only. Completed/historic work now resides in docs/implplan/archived/tasks.md (updated 2025-11-08).
[Documentation & Process] 200.S) Docs Modules Signer
Depends on: Sprint 100.A - Attestor, Sprint 110.A - AdvisoryAI, Sprint 120.A - AirGap, Sprint 130.A - Scanner, Sprint 140.A - Graph, Sprint 150.A - Orchestrator, Sprint 160.A - EvidenceLocker, Sprint 170.A - Notifier, Sprint 180.A - Cli, Sprint 190.A - Ops Deployment
Summary: Documentation & Process focus on Docs Modules Signer.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
SIGNER-DOCS-0001 | DONE (2025-11-05) | Validate that `docs/modules/signer/README.md` captures the latest DSSE/fulcio updates. | Docs Guild (docs/modules/signer)
SIGNER-OPS-0001 | TODO | Review signer runbooks/observability assets after next sprint demo. | Ops Guild (docs/modules/signer)
SIGNER-ENG-0001 | DONE (2025-11-26) | Keep module milestones aligned with signer sprints under `/docs/implplan`. Updated README with Sprint 0186/0401 completed tasks (SIGN-CORE-186-004/005, SIGN-TEST-186-006, SIGN-VEX-401-018). | Module Team (docs/modules/signer)
SIGNER-ENG-0001 | TODO | Update status via ./AGENTS.md workflow | Module Team (docs/modules/signer)
SIGNER-OPS-0001 | TODO | Sync outcomes back to ../.. | Ops Guild (docs/modules/signer)
# Sprint 329 - Documentation & Process · 200.S) Docs Modules Signer
Active items only. Completed/historic work now resides in docs/implplan/archived/tasks.md (updated 2025-11-08).
[Documentation & Process] 200.S) Docs Modules Signer
Depends on: Sprint 100.A - Attestor, Sprint 110.A - AdvisoryAI, Sprint 120.A - AirGap, Sprint 130.A - Scanner, Sprint 140.A - Graph, Sprint 150.A - Orchestrator, Sprint 160.A - EvidenceLocker, Sprint 170.A - Notifier, Sprint 180.A - Cli, Sprint 190.A - Ops Deployment
Summary: Documentation & Process focus on Docs Modules Signer.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
SIGNER-DOCS-0001 | DONE (2025-11-05) | Validate that `docs/modules/signer/README.md` captures the latest DSSE/fulcio updates. | Docs Guild (docs/modules/signer)
SIGNER-OPS-0001 | TODO | Review signer runbooks/observability assets after next sprint demo. | Ops Guild (docs/modules/signer)
SIGNER-ENG-0001 | DONE (2025-11-27) | Keep module milestones aligned with signer sprints under `/docs/implplan`. Added Sprint Readiness Tracker to `docs/modules/signer/implementation_plan.md` mapping 4 phases to 17+ sprint tasks across Sprints 100, 186, 401, 513, 514. Updated README with Sprint 0186/0401 completed tasks (SIGN-CORE-186-004/005, SIGN-TEST-186-006, SIGN-VEX-401-018). | Module Team (docs/modules/signer)
SIGNER-OPS-0001 | TODO | Sync outcomes back to ../.. | Ops Guild (docs/modules/signer)

View File

@@ -72,3 +72,91 @@
- CLI/Console parity verified; Offline Kit procedures validated in sealed environment.
- Cross-module dependencies acknowledged in ./TASKS.md and ../../TASKS.md.
- Documentation set refreshed (overview, architecture, key management, transparency, CLI/UI) with imposed rule statement.
---
## Sprint readiness tracker
> Last updated: 2025-11-27 (ATTESTOR-ENG-0001)
This section maps delivery phases to implementation sprints and tracks readiness checkpoints.
### Phase 1 — Foundations
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| ATTEST-73-001 | ✅ DONE (2025-11-25) | SPRINT_110_ingestion_evidence | Attestation claims builder verified; TRX archived. |
| ATTEST-73-002 | ✅ DONE (2025-11-25) | SPRINT_110_ingestion_evidence | Internal verify endpoint validated; TRX archived. |
| ATTEST-PLAN-2001 | ✅ DONE (2025-11-24) | SPRINT_0200_0001_0001_attestation_coord | Coordination plan published at `docs/modules/attestor/prep/2025-11-24-attest-plan-2001.md`. |
| ELOCKER-CONTRACT-2001 | ✅ DONE (2025-11-24) | SPRINT_0200_0001_0001_attestation_coord | Evidence Locker contract published. |
| KMSI-73-001/002 | ✅ DONE (2025-11-03) | SPRINT_100_identity_signing | KMS key management and FIDO2 profile. |
**Checkpoint:** Foundations complete — service skeleton, DSSE ingestion, Rekor client, and cache layer operational.
### Phase 2 — Policies & UI
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| POLICY-ATTEST-73-001 | ⏳ BLOCKED | SPRINT_0123_0001_0001_policy_reasoning | VerificationPolicy schema/persistence; awaiting prep artefact finalization. |
| POLICY-ATTEST-73-002 | ⏳ BLOCKED | SPRINT_0123_0001_0001_policy_reasoning | Editor DTOs/validation; depends on 73-001. |
| POLICY-ATTEST-74-001 | ⏳ BLOCKED | SPRINT_0123_0001_0001_policy_reasoning | Surface attestation reports; depends on 73-002. |
| POLICY-ATTEST-74-002 | ⏳ BLOCKED | SPRINT_0123_0001_0001_policy_reasoning | Console report integration; depends on 74-001. |
| CLI-ATTEST-73-001 | ⏳ BLOCKED | SPRINT_0201_0001_0001_cli_i | `stella attest sign` command; blocked by scanner analyzer issues. |
| CLI-ATTEST-73-002 | ⏳ BLOCKED | SPRINT_0201_0001_0001_cli_i | `stella attest verify` command; depends on 73-001. |
| CLI-ATTEST-74-001 | ⏳ BLOCKED | SPRINT_0201_0001_0001_cli_i | `stella attest list` command; depends on 73-002. |
| CLI-ATTEST-74-002 | ⏳ BLOCKED | SPRINT_0201_0001_0001_cli_i | `stella attest fetch` command; depends on 74-001. |
**Checkpoint:** Policy Studio integration and Console verification views blocked on upstream schema/API deliverables.
### Phase 3 — Scan & VEX support
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| ATTEST-01-003 | ✅ DONE (2025-11-23) | SPRINT_110_ingestion_evidence | Excititor attestation payloads shipped on frozen bundle v1. |
| CONCELIER-ATTEST-73-001 | ✅ DONE (2025-11-25) | SPRINT_110_ingestion_evidence | Core/WebService attestation suites executed. |
| CONCELIER-ATTEST-73-002 | ✅ DONE (2025-11-25) | SPRINT_110_ingestion_evidence | Attestation verify endpoint validated. |
**Checkpoint:** Scan/VEX attestation payloads integrated; ingestion flows verified.
### Phase 4 — Transparency & keys
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-ATTEST-74-001 | ✅ DONE (2025-11-16) | SPRINT_0171_0001_0001_notifier_i | Notification templates for verification/key events created. |
| NOTIFY-ATTEST-74-002 | 📝 TODO | SPRINT_0171_0001_0001_notifier_i | Wire notifications to key rotation/revocation; blocked on payload localization freeze. |
| ATTEST-REPLAY-187-003 | 📝 TODO | SPRINT_187_evidence_locker_cli_integration | Wire Attestor/Rekor anchoring for replay manifests. |
**Checkpoint:** Key event notifications partially complete; witness endorsements and rotation workflows pending.
### Phase 5 — Bulk & air gap
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| EXPORT-ATTEST-74-001 | ⏳ BLOCKED | SPRINT_0162_0001_0001_exportcenter_i | Export job producing attestation bundles; needs EvidenceLocker DSSE layout. |
| EXPORT-ATTEST-74-002 | ⏳ BLOCKED | SPRINT_0162_0001_0001_exportcenter_i | CI/offline kit integration; depends on 74-001. |
| EXPORT-ATTEST-75-001 | ⏳ BLOCKED | SPRINT_0162_0001_0001_exportcenter_i | CLI `stella attest bundle verify/import`; depends on 74-002. |
| EXPORT-ATTEST-75-002 | ⏳ BLOCKED | SPRINT_0162_0001_0001_exportcenter_i | Offline kit integration; depends on 75-001. |
**Checkpoint:** Bulk/air-gap workflows blocked awaiting Export Center contracts.
### Phase 6 — Performance & hardening
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| ATTEST-73-003 | 📝 TODO | SPRINT_302_docs_tasks_md_ii | Evidence documentation; waiting on ATEL0102 evidence. |
| ATTEST-73-004 | 📝 TODO | SPRINT_302_docs_tasks_md_ii | Extended documentation; depends on 73-003. |
**Checkpoint:** Performance benchmarks and incident playbooks pending; observability coverage to be validated.
---
### Overall readiness summary
| Phase | Status | Blocking items |
|-------|--------|----------------|
| **1 Foundations** | ✅ Complete | — |
| **2 Policies & UI** | ⏳ Blocked | POLICY-ATTEST-73-001 prep; CLI build issues |
| **3 Scan & VEX** | ✅ Complete | — |
| **4 Transparency & keys** | 🔄 In progress | NOTIFY-ATTEST-74-002 payload freeze |
| **5 Bulk & air gap** | ⏳ Blocked | EXPORT-ATTEST-74-001 contract |
| **6 Performance** | 📝 Not started | Upstream phase completion |
### Next actions
1. Track POLICY-ATTEST-73-001 prep artefact publication (Sprint 0123).
2. Resolve CLI build blockers to unblock CLI-ATTEST-73-001 (Sprint 0201).
3. Complete NOTIFY-ATTEST-74-002 wiring once payload localization freezes (Sprint 0171).
4. Monitor Export Center contract finalization for Phase 5 tasks (Sprint 0162).

View File

@@ -20,3 +20,77 @@
- Review ./AGENTS.md before picking up new work.
- Sync with cross-cutting teams noted in `/docs/implplan/SPRINT_*.md`.
- Update this plan whenever scope, dependencies, or guardrails change.
---
## Sprint readiness tracker
> Last updated: 2025-11-27 (AUTHORITY-ENG-0001)
This section maps epic milestones to implementation sprints and tracks readiness checkpoints.
### Epic 1 — AOC enforcement
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-SIG-26-001 | ✅ DONE (2025-10-29) | SPRINT_0143_0000_0001_signals | Signals scopes + AOC role templates; propagation validation complete. |
| AUTH-AIRGAP-57-001 | ✅ DONE (2025-11-08) | SPRINT_100_identity_signing | Sealed-mode CI gating; refuses tokens when sealed install lacks confirmation. |
**Checkpoint:** AOC enforcement operational with guardrails and scope policies in place.
### Epic 2 — Policy Engine & Editor
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-DPOP-11-001 | ✅ DONE (2025-11-08) | SPRINT_100_identity_signing | DPoP validation on `/token` grants; interactive tokens inherit `cnf.jkt`. |
| AUTH-MTLS-11-002 | ✅ DONE (2025-11-08) | SPRINT_100_identity_signing | Refresh grants enforce original client cert; `x5t#S256` metadata persisted. |
**Checkpoint:** DPoP and mTLS sender-constraint flows operational.
### Epic 4 — Policy Studio
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-PACKS-43-001 | ✅ DONE (2025-11-09) | SPRINT_100_identity_signing | Pack signing policies, approval RBAC, CLI CI token scopes, audit logging. |
**Checkpoint:** Pack signing and approval flows with fresh-auth prompts complete.
### Epic 14 — Identity & Tenancy
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-TEN-47-001 | ✅ Contract published | SPRINT_0115_0001_0004_concelier_iv | Tenant-scope contract at `docs/modules/authority/tenant-scope-47-001.md`. |
| AUTH-CRYPTO-90-001 | 🔄 DOING | SPRINT_0514_0001_0001_sovereign_crypto | Sovereign signing provider; key-loading path migration in progress. |
**Checkpoint:** Tenancy contract published; sovereign crypto provider integration in progress.
### Future tasks
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-REACH-401-005 | 📝 TODO | SPRINT_0401_0001_0001_reachability_evidence_chain | DSSE predicate types for SBOM/Graph/VEX/Replay; blocked on predicate definitions. |
| AUTH-VERIFY-186-007 | 📝 TODO | SPRINT_186_record_deterministic_execution | Verification helper for DSSE signatures and Rekor proofs; awaits provenance harness. |
**Checkpoint:** Attestation predicate support and verification helpers pending upstream dependencies.
---
### Overall readiness summary
| Epic | Status | Blocking items |
|------|--------|----------------|
| **1 AOC enforcement** | ✅ Complete | — |
| **2 Policy Engine & Editor** | ✅ Complete | — |
| **4 Policy Studio** | ✅ Complete | — |
| **14 Identity & Tenancy** | 🔄 In progress | AUTH-CRYPTO-90-001 provider contract |
| **Future (Attestation)** | 📝 Not started | DSSE predicate schema; provenance harness |
### Cross-module dependencies
| Dependency | Required by | Status |
|------------|-------------|--------|
| Signals scope propagation | AUTH-SIG-26-001 | ✅ Validated |
| Sealed-mode CI evidence | AUTH-AIRGAP-57-001 | ✅ Implemented |
| DSSE predicate definitions | AUTH-REACH-401-005 | Schema draft pending |
| Provenance harness (PROB0101) | AUTH-VERIFY-186-007 | In progress |
| Sovereign crypto keystore plan | AUTH-CRYPTO-90-001 | ✅ Prep published |
### Next actions
1. Complete AUTH-CRYPTO-90-001 provider registry wiring (Sprint 0514).
2. Coordinate DSSE predicate schema with Signer guild for AUTH-REACH-401-005 (Sprint 0401).
3. Monitor PROB0101 provenance harness for AUTH-VERIFY-186-007 (Sprint 186).

View File

@@ -1,12 +1,121 @@
# Notifier Tenancy Prep — PREP-NOTIFY-TEN-48-001
Status: Implemented (2025-11-27)
Owners: Notifications Service Guild
Scope: Tenancy model and DAL/routes for tenant context in Notifier WebService.
## Overview
Tenant scoping for the Notifier module ensures that rules, templates, incidents, and channels
are isolated per tenant with proper row-level security (RLS) in MongoDB storage.
## Implementation Summary
### 1. Tenant Context Service (`src/Notifier/StellaOps.Notifier.Worker/Tenancy/`)
- **TenantContext.cs**: AsyncLocal-based context propagation for tenant ID and actor
- **TenantServiceExtensions.cs**: DI registration and configuration options
- **ITenantAccessor**: Interface for accessing tenant from HTTP context
Key pattern:
```csharp
// Set tenant context for async scope
using var scope = tenantContext.SetContext(tenantId, actor);
await ProcessEventAsync();
// Or with extension method
await tenantContext.WithTenantAsync(tenantId, actor, async () =>
{
await ProcessNotificationAsync();
});
```
### 2. Incident Repository (`src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/`)
New files:
- **Repositories/INotifyIncidentRepository.cs**: Repository interface for incident persistence
- **Repositories/NotifyIncidentRepository.cs**: MongoDB implementation with tenant filtering
- **Serialization/NotifyIncidentDocumentMapper.cs**: BSON serialization for incidents
Key features:
- All queries include mandatory `tenantId` filter
- Document IDs use `{tenantId}:{resourceId}` composite pattern for RLS
- Correlation key lookup scoped to tenant
- Soft delete support with `deletedAt` field
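The composite-ID pattern is small enough to sketch. The real implementation is the C# repositories above; this Python version (function names `doc_id`/`tenant_of` are illustrative, not part of the codebase) only demonstrates the invariant that the tenant prefix survives any `:` characters in the resource id:

```python
def doc_id(tenant_id: str, resource_id: str) -> str:
    """Build a tenant-prefixed document id ({tenantId}:{resourceId})."""
    if ":" in tenant_id:
        # A ':' inside the tenant id would make the prefix ambiguous.
        raise ValueError("tenant id must not contain ':'")
    return f"{tenant_id}:{resource_id}"

def tenant_of(composite_id: str) -> str:
    """Recover the tenant prefix for row-level filtering."""
    return composite_id.split(":", 1)[0]
```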
### 3. MongoDB Indexes (tenant-scoped)
Added in `EnsureNotifyIndexesMigration.cs`:
```javascript
// incidents collection
{ tenantId: 1, status: 1, lastOccurrence: -1 } // Status filtering
{ tenantId: 1, correlationKey: 1, status: 1 } // Correlation lookup
```
### 4. Existing Tenancy Infrastructure
The following was already in place:
- All models have `TenantId` property (NotifyRule, NotifyChannel, NotifyTemplate, etc.)
- Repository interfaces take `tenantId` as parameter
- Endpoints extract tenant from `X-StellaOps-Tenant` header
- MongoDB document IDs use tenant-prefixed composite keys
## Configuration
```json
{
"Notifier": {
"Tenant": {
"TenantIdHeader": "X-StellaOps-Tenant",
"ActorHeader": "X-StellaOps-Actor",
"RequireTenant": true,
"DefaultActor": "system",
"ExcludedPaths": ["/health", "/ready", "/metrics", "/openapi"]
}
}
}
```
## Usage Examples
### HTTP API
```http
GET /api/v2/rules HTTP/1.1
X-StellaOps-Tenant: tenant-123
X-StellaOps-Actor: user@example.com
```
### Worker Processing
```csharp
public class NotificationProcessor
{
    private readonly ITenantContext _tenantContext;
    // Tenant-scoped rule repository (interface name illustrative; see §2 pattern).
    private readonly INotifyRuleRepository _rules;

    public async Task ProcessAsync(NotifyEvent @event)
    {
        using var scope = _tenantContext.SetContext(@event.TenantId, "worker");
        // All subsequent operations are scoped to tenant
        var rules = await _rules.ListAsync(@event.TenantId);
        // ...
    }
}
```
## Handoff Notes
- Incident storage moved from in-memory to MongoDB with full tenant isolation
- Worker should use `ITenantContext.SetContext()` before processing events
- All new repositories MUST include tenant filtering in queries
- Test tenant isolation with multi-tenant integration tests
## Related Files
- `src/Notifier/StellaOps.Notifier.Worker/Tenancy/TenantContext.cs`
- `src/Notifier/StellaOps.Notifier.Worker/Tenancy/TenantServiceExtensions.cs`
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/Repositories/INotifyIncidentRepository.cs`
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/Repositories/NotifyIncidentRepository.cs`
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/Serialization/NotifyIncidentDocumentMapper.cs`
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/Options/NotifyMongoOptions.cs` (added IncidentsCollection)
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/Migrations/EnsureNotifyIndexesMigration.cs` (added incident indexes)
- `src/Notify/__Libraries/StellaOps.Notify.Storage.Mongo/ServiceCollectionExtensions.cs` (added INotifyIncidentRepository registration)

View File

@@ -59,3 +59,97 @@
## Definition of done
- Notify service, workers, connectors, Console/CLI, observability, and Offline Kit assets shipped with documentation and runbooks.
- Compliance checklist appended to docs; ./TASKS.md and ../../TASKS.md updated with progress.
---
## Sprint readiness tracker
> Last updated: 2025-11-27 (NOTIFY-ENG-0001)
This section maps delivery phases to implementation sprints and tracks readiness checkpoints.
### Phase 1 — Core rules engine & delivery ledger
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-SVC-37-001 | ✅ DONE (2025-11-24) | SPRINT_0172_0001_0002_notifier_ii | Pack approval contract published (OpenAPI schema, payloads). |
| NOTIFY-SVC-37-002 | ✅ DONE (2025-11-24) | SPRINT_0172_0001_0002_notifier_ii | Ingestion endpoint with Mongo persistence, idempotent writes, audit trail. |
| NOTIFY-SVC-37-003 | 🔄 DOING | SPRINT_0172_0001_0002_notifier_ii | Approval/policy templates, routing predicates; dispatch/rendering pending. |
| NOTIFY-SVC-37-004 | ✅ DONE (2025-11-24) | SPRINT_0172_0001_0002_notifier_ii | Acknowledgement API, test harness, metrics. |
| NOTIFY-OAS-61-001 | ✅ DONE (2025-11-17) | SPRINT_0171_0001_0001_notifier_i | OAS with rules/templates/incidents/quiet hours endpoints. |
| NOTIFY-OAS-61-002 | ✅ DONE (2025-11-17) | SPRINT_0171_0001_0001_notifier_i | `/.well-known/openapi` discovery endpoint. |
| NOTIFY-OAS-62-001 | ✅ DONE (2025-11-17) | SPRINT_0171_0001_0001_notifier_i | SDK examples for rule CRUD. |
| NOTIFY-OAS-63-001 | ✅ DONE (2025-11-17) | SPRINT_0171_0001_0001_notifier_i | Deprecation headers and templates. |
**Checkpoint:** Core rules engine mostly complete; template dispatch/rendering in progress.
### Phase 2 — Connectors & rendering
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-SVC-38-002 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Channel adapters (email, chat webhook, generic webhook) with retry policies. |
| NOTIFY-SVC-38-003 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Template service, renderer with redaction and localization. |
| NOTIFY-SVC-38-004 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | REST + WS APIs for rules CRUD, templates preview, incidents. |
| NOTIFY-DOC-70-001 | ✅ DONE (2025-11-02) | SPRINT_0171_0001_0001_notifier_i | Architecture docs for `src/Notify` vs `src/Notifier` split. |
**Checkpoint:** Connector and rendering work not yet started; depends on Phase 1 completion.
### Phase 3 — Console & CLI authoring
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-SVC-39-001 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Correlation engine with throttler, quiet hours, incident lifecycle. |
| NOTIFY-SVC-39-002 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Digest generator with schedule runner. |
| NOTIFY-SVC-39-003 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Simulation engine for dry-run rules against historical events. |
| NOTIFY-SVC-39-004 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Quiet hour calendars with audit logging. |
**Checkpoint:** Console/CLI authoring work not started; depends on Phase 2 completion.
### Phase 4 — Governance & observability
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-SVC-40-001 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Escalations, on-call schedules, PagerDuty/OpsGenie adapters. |
| NOTIFY-SVC-40-002 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Summary storm breaker, localization bundles. |
| NOTIFY-SVC-40-003 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Security hardening (signed ack links, webhook HMAC). |
| NOTIFY-SVC-40-004 | 📝 TODO | SPRINT_0172_0001_0002_notifier_ii | Observability metrics/traces, dead-letter handling, chaos tests. |
| NOTIFY-OBS-51-001 | ✅ DONE (2025-11-22) | SPRINT_0171_0001_0001_notifier_i | SLO evaluator webhooks with templates/routing/suppression. |
| NOTIFY-OBS-55-001 | ✅ DONE (2025-11-22) | SPRINT_0171_0001_0001_notifier_i | Incident mode templates with evidence/trace/retention context. |
| NOTIFY-ATTEST-74-001 | ✅ DONE (2025-11-16) | SPRINT_0171_0001_0001_notifier_i | Templates for verification failures, key revocations, transparency. |
| NOTIFY-ATTEST-74-002 | 📝 TODO | SPRINT_0171_0001_0001_notifier_i | Wire notifications to key rotation/revocation events. |
| NOTIFY-RISK-66-001 | ⏳ BLOCKED | SPRINT_0171_0001_0001_notifier_i | Risk severity escalation triggers; needs POLICY-RISK-40-002. |
| NOTIFY-RISK-67-001 | ⏳ BLOCKED | SPRINT_0171_0001_0001_notifier_i | Risk profile publish/deprecate notifications. |
| NOTIFY-RISK-68-001 | ⏳ BLOCKED | SPRINT_0171_0001_0001_notifier_i | Per-profile routing, quiet hours, dedupe. |
**Checkpoint:** Core observability complete; governance and risk notifications blocked on upstream dependencies.
### Phase 5 — Offline & compliance
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| NOTIFY-AIRGAP-56-002 | ✅ DONE | SPRINT_0171_0001_0001_notifier_i | Bootstrap Pack with deterministic secrets and offline validation. |
| NOTIFY-TEN-48-001 | ⏳ BLOCKED | SPRINT_0173_0001_0003_notifier_iii | Tenant-scope rules/templates; needs Sprint 0172 tenancy model. |
**Checkpoint:** Offline basics complete; tenancy work blocked on upstream Sprint 0172.
---
### Overall readiness summary
| Phase | Status | Blocking items |
|-------|--------|----------------|
| **1 Core rules engine** | 🔄 In progress | NOTIFY-SVC-37-003 dispatch/rendering |
| **2 Connectors & rendering** | 📝 Not started | Phase 1 completion |
| **3 Console & CLI** | 📝 Not started | Phase 2 completion |
| **4 Governance & observability** | 🔄 Partial | POLICY-RISK-40-002 for risk notifications |
| **5 Offline & compliance** | 🔄 Partial | Sprint 0172 tenancy model |
### Cross-module dependencies
| Dependency | Required by | Status |
|------------|-------------|--------|
| Attestor payload localization | NOTIFY-ATTEST-74-002 | Freeze pending |
| POLICY-RISK-40-002 export | NOTIFY-RISK-66/67/68 | BLOCKED |
| Sprint 0172 tenancy model | NOTIFY-TEN-48-001 | In progress |
| Telemetry SLO webhook schema | NOTIFY-OBS-51-001 | ✅ Published (`docs/notifications/slo-webhook-schema.md`) |
### Next actions
1. Complete NOTIFY-SVC-37-003 dispatch/rendering wiring (Sprint 0172).
2. Start NOTIFY-SVC-38-002 channel adapters once Phase 1 closes.
3. Track POLICY-RISK-40-002 to unblock risk notification tasks.
4. Monitor Sprint 0172 tenancy model for NOTIFY-TEN-48-001.

View File

@@ -59,3 +59,78 @@
- Export Center + Attestor dependencies validated; CLI parity confirmed.
- Documentation updated (README, architecture, runbooks, CLI guides) with imposed rule compliance.
- ./TASKS.md and ../../TASKS.md reflect the latest status transitions.
---
## Sprint readiness tracker
> Last updated: 2025-11-27 (SIGNER-ENG-0001)
This section maps delivery phases to implementation sprints and tracks readiness checkpoints.
### Phase 1 — Core service & PoE
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| KMSI-73-001 | ✅ DONE (2025-11-03) | SPRINT_100_identity_signing | KMS key management foundations with staffing + DSSE contract. |
| KMSI-73-002 | ✅ DONE (2025-11-03) | SPRINT_100_identity_signing | FIDO2 profile integration. |
| PROV-OBS-53-001 | ✅ DONE (2025-11-17) | SPRINT_0513_0001_0001_provenance | DSSE/SLSA BuildDefinition + BuildMetadata models with canonical JSON serializer. |
| PROV-OBS-53-002 | ✅ DONE (2025-11-23) | SPRINT_0513_0001_0001_provenance | Signer abstraction (cosign/KMS/offline) with key rotation hooks and audit logging. |
| SEC-CRYPTO-90-020 | 🔄 IN PROGRESS | SPRINT_0514_0001_0001_sovereign_crypto | CryptoPro signer plugin; Windows CSP runner pending. |
**Checkpoint:** Core signing infrastructure operational — KMS drivers, signer abstractions, and DSSE models delivered.
### Phase 2 — Export Center integration
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| PROV-OBS-53-003 | ✅ DONE (2025-11-23) | SPRINT_0513_0001_0001_provenance | PromotionAttestationBuilder feeding canonicalised payloads to Signer. |
| SIGN-REPLAY-186-003 | 📝 TODO | SPRINT_186_record_deterministic_execution | Extend Signer/Authority DSSE flows for replay manifest/bundle payloads. |
| SIGN-CORE-186-004 | 📝 TODO | SPRINT_186_record_deterministic_execution | Replace HMAC demo with StellaOps.Cryptography providers (keyless + KMS). |
| SIGN-CORE-186-005 | 📝 TODO | SPRINT_186_record_deterministic_execution | Refactor SignerStatementBuilder for StellaOps predicate types. |
| SIGN-TEST-186-006 | 📝 TODO | SPRINT_186_record_deterministic_execution | Upgrade signer integration tests with real crypto + fixture predicates. |
**Checkpoint:** Export Center signing APIs partially complete; replay manifest support and crypto provider refactoring pending.
### Phase 3 — Attestor alignment
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| AUTH-REACH-401-005 | 📝 TODO | SPRINT_0401_0001_0001_reachability_evidence_chain | DSSE predicate types for SBOM/Graph/VEX/Replay; blocked on predicate definitions. |
| SIGN-VEX-401-018 | 📝 TODO | SPRINT_0401_0001_0001_reachability_evidence_chain | Extend predicate catalog with `stella.ops/vexDecision@v1`. |
| PROV-OBS-54-001 | 📝 TODO | SPRINT_0513_0001_0001_provenance | Verification library for DSSE signatures, Merkle roots, timeline chain. |
| PROV-OBS-54-002 | 📝 TODO | SPRINT_0513_0001_0001_provenance | .NET global tool for local verification + CLI `stella forensic verify`. |
**Checkpoint:** Attestor DSSE alignment pending; predicate catalog extension and verification library not started.
### Phase 4 — Observability & resilience
| Task ID | Status | Sprint | Notes |
|---------|--------|--------|-------|
| DOCS-PROMO-70-001 | 📝 TODO | SPRINT_304_docs_tasks_md_iv | Promotion attestations doc (CLI commands, Signer/Attestor integration, offline verification). |
| CLI-PROMO-70-002 | 📝 TODO | SPRINT_203_cli_iii | `stella promotion attest` / `promotion verify` commands. |
| CLI-FORENSICS-54-002 | 📝 TODO | SPRINT_202_cli_ii | `stella forensic attest show <artifact>` listing signer details. |
**Checkpoint:** Observability and CLI integration pending; waiting on upstream signing pipeline completion.
---
### Overall readiness summary
| Phase | Status | Blocking items |
|-------|--------|----------------|
| **1 Core service & PoE** | ✅ Complete | — |
| **2 Export Center integration** | 🔄 In progress | SIGN-CORE-186-004/005 crypto provider refactoring |
| **3 Attestor alignment** | 📝 Not started | AUTH-REACH-401-005 predicate definitions |
| **4 Observability & resilience** | 📝 Not started | Upstream phase completion |
### Cross-module dependencies
| Dependency | Required by | Status |
|------------|-------------|--------|
| Attestor DSSE bundle schema | SIGN-VEX-401-018 | Documented in `docs/modules/attestor/architecture.md` §1 |
| Provenance library canonicalisation | SIGN-CORE-186-005 | Available via PROV-OBS-53-001/002 |
| Export Center bundle manifest | SIGN-REPLAY-186-003 | Pending Sprint 162/163 deliverables |
| Authority predicate definitions | AUTH-REACH-401-005 | Schema draft pending |
### Next actions
1. Complete CryptoPro signer plugin Windows smoke test (SEC-CRYPTO-90-020, Sprint 0514).
2. Start SIGN-CORE-186-004 once replay bundle schema finalises (Sprint 186).
3. Track AUTH-REACH-401-005 predicate schema draft for Attestor alignment (Sprint 401).
4. Monitor PROV-OBS-54-001/002 for verification library availability.

View File

@@ -0,0 +1,259 @@
# Pack Approvals Notification Contract
> **Status:** Implemented (NOTIFY-SVC-37-001)
> **Last Updated:** 2025-11-27
> **OpenAPI Spec:** `src/Notifier/StellaOps.Notifier/StellaOps.Notifier.WebService/openapi/pack-approvals.yaml`
## Overview
This document defines the canonical contract for pack approval notifications between Task Runner and the Notifier service. It covers event payloads, resume token mechanics, error handling, and security requirements.
## Event Kinds
| Kind | Description | Trigger |
|------|-------------|---------|
| `pack.approval.requested` | Approval required for pack deployment | Task Runner initiates deployment requiring approval |
| `pack.approval.updated` | Approval state changed | Decision recorded or timeout |
| `pack.policy.hold` | Policy gate blocked deployment | Policy Engine rejects deployment |
| `pack.policy.released` | Policy hold lifted | Policy conditions satisfied |
## Canonical Event Schema
```json
{
"eventId": "550e8400-e29b-41d4-a716-446655440000",
"issuedAt": "2025-11-27T10:30:00Z",
"kind": "pack.approval.requested",
"packId": "pkg:oci/stellaops/scanner@v2.1.0",
"policy": {
"id": "policy-prod-deploy",
"version": "1.2.3"
},
"decision": "pending",
"actor": "ci-pipeline@stellaops.example.com",
"resumeToken": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9...",
"summary": "Deployment approval required for production scanner update",
"labels": {
"environment": "production",
"team": "security"
}
}
```
### Required Fields
| Field | Type | Description |
|-------|------|-------------|
| `eventId` | UUID | Unique event identifier; used for deduplication |
| `issuedAt` | ISO 8601 | Event timestamp in UTC |
| `kind` | string | Event type (see Event Kinds table) |
| `packId` | string | Package identifier in PURL format |
| `decision` | string | Current state: `pending`, `approved`, `rejected`, `hold`, `expired` |
| `actor` | string | Identity that triggered the event |
### Optional Fields
| Field | Type | Description |
|-------|------|-------------|
| `policy` | object | Policy metadata (`id`, `version`) |
| `resumeToken` | string | Opaque token for Task Runner resume flow |
| `summary` | string | Human-readable summary for notifications |
| `labels` | object | Custom key-value metadata |
## Resume Token Mechanics
### Token Flow
```
┌─────────────┐ POST /pack-approvals ┌──────────────┐
│ Task Runner │ ──────────────────────────────►│ Notifier │
│ │ { resumeToken: "abc123" } │ │
│ │◄──────────────────────────────│ │
│ │ X-Resume-After: "abc123" │ │
└─────────────┘ └──────────────┘
│ │
│ │
│ User acknowledges approval │
│ ▼
│ ┌──────────────────────────────┐
│ │ POST /pack-approvals/{id}/ack
│ │ { ackToken: "..." } │
│ └──────────────────────────────┘
│ │
│◄─────────────────────────────────────────────┤
│ Resume callback (webhook or message bus) │
```
### Token Properties
- **Format:** Opaque string; clients must not parse or modify
- **TTL:** 24 hours from `issuedAt`
- **Uniqueness:** Scoped to tenant + packId + eventId
- **Expiry Handling:** Expired tokens return `410 Gone`
### X-Resume-After Header
When `resumeToken` is present in the request, the server echoes it in the `X-Resume-After` response header. This enables cursor-based processing for Task Runner polling.
## Error Handling
### HTTP Status Codes
| Code | Meaning | Client Action |
|------|---------|---------------|
| `200` | Duplicate request (idempotent) | Treat as success |
| `202` | Accepted for processing | Continue normal flow |
| `204` | Acknowledgement recorded | Continue normal flow |
| `400` | Validation error | Fix request and retry |
| `401` | Authentication required | Refresh token and retry |
| `403` | Insufficient permissions | Check scope; contact admin |
| `404` | Resource not found | Verify packId; may have expired |
| `410` | Token expired | Re-initiate approval flow |
| `429` | Rate limited | Retry after `Retry-After` seconds |
| `5xx` | Server error | Retry with exponential backoff |
### Error Response Format
```json
{
"error": {
"code": "invalid_request",
"message": "eventId, packId, kind, decision, actor are required.",
"traceId": "00-abc123-def456-00"
}
}
```
### Retry Strategy
- **Transient errors (5xx, 429):** Exponential backoff starting at 1s, max 60s, max 5 retries
- **Validation errors (4xx except 429):** Do not retry; fix request
- **Idempotency:** Safe to retry any request with the same `Idempotency-Key`
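The retry policy above can be sketched as a small helper. This is a hedged illustration, not Task Runner's actual client: `request_fn` is a hypothetical callback that performs one attempt (reusing the same `Idempotency-Key`) and returns the HTTP status, and `sleep` is injected so the backoff schedule is testable:

```python
RETRYABLE = {429, 500, 502, 503, 504}

def send_with_retries(request_fn, sleep=lambda s: None,
                      max_retries=5, base_delay=1.0, max_delay=60.0):
    """Exponential backoff from 1s, capped at 60s, at most 5 retries.

    Non-retryable statuses (validation errors, success codes) are
    returned immediately without retrying.
    """
    for attempt in range(max_retries + 1):
        status = request_fn()
        if status not in RETRYABLE or attempt == max_retries:
            return status
        # Production code would add jitter before sleeping.
        sleep(min(base_delay * 2 ** attempt, max_delay))
```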
## Security Requirements
### Authentication
All endpoints require a valid OAuth2 bearer token with one of these scopes:
- `packs.approve` — Full approval flow access
- `Notifier.Events:Write` — Event ingestion only
### Tenant Isolation
- `X-StellaOps-Tenant` header is **required** on all requests
- Server validates token tenant claim matches header
- Cross-tenant access returns `403 Forbidden`
### Idempotency
- `Idempotency-Key` header is **required** for POST endpoints
- Keys are scoped to tenant and expire after 15 minutes
- Duplicate requests within the window return `200 OK`
### HMAC Signature (Webhooks)
For webhook callbacks from Notifier to Task Runner:
```
X-StellaOps-Signature: sha256=<hex-encoded-signature>
X-StellaOps-Timestamp: <unix-timestamp>
```
Signature computed as:
```
HMAC-SHA256(secret, timestamp + "." + body)
```
Verification requirements:
- Reject if timestamp is >5 minutes old
- Reject if signature does not match
- Reject if body has been modified
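A receiver-side check for the signature scheme above can be written with the standard library alone. This is a sketch assuming the shared secret is distributed out of band; the `now` parameter is injected only so the skew check is testable:

```python
import hashlib
import hmac
import time

def verify_webhook(secret: bytes, timestamp: str, body: bytes,
                   signature_header: str, max_skew_seconds: int = 300,
                   now=time.time) -> bool:
    """Reject stale timestamps (>5 minutes) and any digest mismatch."""
    try:
        skew = abs(now() - int(timestamp))
    except ValueError:
        return False
    if skew > max_skew_seconds:
        return False
    # Signature covers timestamp + "." + body, per the contract above.
    expected = hmac.new(secret, timestamp.encode() + b"." + body,
                        hashlib.sha256).hexdigest()
    provided = signature_header.removeprefix("sha256=")
    # compare_digest gives a constant-time comparison.
    return hmac.compare_digest(expected, provided)
```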
### IP Allowlist
Configurable per environment in `notifier:security:ipAllowlist`:
```yaml
notifier:
security:
ipAllowlist:
- "10.0.0.0/8"
- "192.168.1.100"
```
### Sensitive Data Handling
- **Resume tokens:** Encrypted at rest; never logged in full
- **Ack tokens:** Signed with KMS; validated on acknowledgement
- **Labels:** Redacted if keys match `secret`, `password`, `token`, `key` patterns
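The label-redaction rule can be illustrated with a short sketch. The production filter lives in the Notifier service (C#), so the function name and the `[REDACTED]` placeholder here are assumptions; only the key patterns come from the contract above:

```python
import re

# Keys matching these patterns have their values masked before logging.
SENSITIVE_KEY = re.compile(r"secret|password|token|key", re.IGNORECASE)

def redact_labels(labels: dict) -> dict:
    """Return a copy of the labels map with sensitive values masked."""
    return {k: "[REDACTED]" if SENSITIVE_KEY.search(k) else v
            for k, v in labels.items()}
```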
## Audit Trail
All operations emit structured audit events:
| Event | Fields | Retention |
|-------|--------|-----------|
| `pack.approval.ingested` | packId, kind, decision, actor, eventId | 90 days |
| `pack.approval.acknowledged` | packId, ackToken, decision, actor | 90 days |
| `pack.policy.hold` | packId, policyId, reason | 90 days |
## Observability
### Metrics
| Metric | Type | Labels |
|--------|------|--------|
| `notifier_pack_approvals_total` | Counter | `kind`, `decision`, `tenant` |
| `notifier_pack_approvals_outstanding` | Gauge | `tenant` |
| `notifier_pack_approval_ack_latency_seconds` | Histogram | `decision` |
| `notifier_pack_approval_errors_total` | Counter | `code`, `tenant` |
### Structured Logs
All operations include:
- `traceId` — Distributed trace correlation
- `tenantId` — Tenant identifier
- `packId` — Package identifier
- `eventId` — Event identifier
## Integration Examples
### Task Runner → Notifier (Ingestion)
```bash
curl -X POST https://notifier.stellaops.example.com/api/v1/notify/pack-approvals \
-H "Authorization: Bearer $TOKEN" \
-H "X-StellaOps-Tenant: tenant-acme-corp" \
-H "Idempotency-Key: $(uuidgen)" \
-H "Content-Type: application/json" \
-d '{
"eventId": "550e8400-e29b-41d4-a716-446655440000",
"issuedAt": "2025-11-27T10:30:00Z",
"kind": "pack.approval.requested",
"packId": "pkg:oci/stellaops/scanner@v2.1.0",
"decision": "pending",
"actor": "ci-pipeline@stellaops.example.com",
"resumeToken": "abc123",
"summary": "Approval required for production deployment"
}'
```
### Console → Notifier (Acknowledgement)
```bash
curl -X POST https://notifier.stellaops.example.com/api/v1/notify/pack-approvals/pkg%3Aoci%2Fstellaops%2Fscanner%40v2.1.0/ack \
-H "Authorization: Bearer $TOKEN" \
-H "X-StellaOps-Tenant: tenant-acme-corp" \
-H "Content-Type: application/json" \
-d '{
"ackToken": "ack-token-xyz789",
"decision": "approved",
"comment": "Reviewed and approved"
}'
```
## Related Documents
- [Pack Approvals Integration Requirements](pack-approvals-integration.md)
- [Notifications Architecture](architecture.md)
- [Notifications API Reference](api.md)
- [Notification Templates](templates.md)

View File

@@ -0,0 +1,944 @@
Here's a clean, action-ready blueprint for a **public reachability benchmark** you can stand up quickly and grow over time.
# Why this matters (quick)
“Reachability” asks: *is a flagged vulnerability actually executable from real entry points in this codebase/container?* A public, reproducible benchmark lets you compare tools apples-to-apples, drive research, and keep vendors honest.
# What to collect (dataset design)
* **Projects & languages**
* Polyglot mix: **C/C++ (ELF/PE/Mach-O)**, **Java/Kotlin**, **C#/.NET**, **Python**, **JavaScript/TypeScript**, **PHP**, **Go**, **Rust**.
* For each project: small (≤5k LOC), medium (5–100k), large (100k+).
* **Ground-truth artifacts**
* **Seed CVEs** with known sinks (e.g., deserializers, command exec, SSRF) and **neutral projects** with *no* reachable path (negatives).
* **Exploit oracles**: minimal PoCs or unit tests that (1) reach the sink and (2) toggle reachability via feature flags.
* **Build outputs (deterministic)**
* **Reproducible binaries/bytecode** (strip timestamps; fixed seeds; SOURCE_DATE_EPOCH).
* **SBOM** (CycloneDX/SPDX) + **PURLs** + **BuildID** (ELF .note.gnu.buildid / PE Authentihash / Mach-O UUID).
* **Attestations**: in-toto/DSSE envelopes recording toolchain versions, flags, hashes.
* **Execution traces (for truth)**
* **CI traces**: call-graph dumps from compilers/analyzers; unit-test coverage; optional **dynamic traces** (eBPF / .NET ETW / Java Flight Recorder).
* **Entry-point manifests**: HTTP routes, CLI commands, cron/queue consumers.
* **Metadata**
* Language, framework, package manager, compiler versions, OS/container image, optimization level, stripping info, license.
# How to label ground truth
* **Per-vuln case**: `(component, version, sink_id)` with label **reachable / unreachable / unknown**.
* **Evidence bundle**: pointer to (a) static call path, (b) dynamic hit (trace/coverage), or (c) rationale for negative.
* **Confidence**: high (static+dynamic agree), medium (one source), low (heuristic only).
# Scoring (simple + fair)
* **Binary classification** on cases:
* Precision, Recall, F1. Report **AUPR** if you output probabilities.
* **Path quality**
* **Explainability score (0–3)**:
* 0: “vuln reachable” w/o context
* 1: names only (entry→…→sink)
* 2: full interprocedural path w/ locations
* 3: plus **inputs/guards** (taint/constraints, env flags)
* **Runtime cost**
* Wall-clock time, peak RAM, image size; normalized by KLOC.
* **Determinism**
* Re-run variance (≤1% is “A”, 1–5% “B”, >5% “C”).
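For concreteness, a minimal stdlib-only sketch of the headline metrics and the determinism banding above (function names are illustrative, not part of any spec):

```python
def precision_recall_f1(y_true, y_pred):
    """Precision/recall/F1 over boolean reachability labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

def determinism_grade(variance_pct):
    """Map re-run variance (percent) to the A/B/C bands above."""
    return "A" if variance_pct <= 1 else "B" if variance_pct <= 5 else "C"
```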
# Avoiding overfitting
* **Train/Dev/Test** splits per language; **hidden test** projects rotated quarterly.
* **Case churn**: introduce **isomorphic variants** (rename symbols, reorder files) to punish memorization.
* **Poisoned controls**: include decoy sinks and unreachable dead-code traps.
* **Submission rules**: require **attestations** of tool versions & flags; limit per-case hints.
# Reference baselines (to run out-of-the-box)
* **Snyk Code/Reachability** (JS/Java/Python, SaaS/CLI).
* **Semgrep + Pro Engine** (rules + reachability mode).
* **CodeQL** (multi-lang, LGTM-style queries).
* **Joern** (C/C++/JVM code property graphs).
* **angr** (binary symbolic exec; selective for native samples).
* **Language-specific**: pip-audit w/ import graphs, npm with lock-tree + route discovery, Maven + call-graph (Soot/WALA).
# Submission format (one JSON per tool run)
```json
{
"tool": {"name": "YourTool", "version": "1.2.3"},
"run": {
"commit": "…",
"platform": "ubuntu:24.04",
"time_s": 182.4, "peak_mb": 3072
},
"cases": [
{
"id": "php-shop:fastjson@1.2.68:Sink#deserialize",
"prediction": "reachable",
"confidence": 0.88,
"explain": {
"entry": "POST /api/orders",
"path": [
"OrdersController::create",
"Serializer::deserialize",
"Fastjson::parseObject"
],
"guards": ["feature.flag.json_enabled==true"]
}
}
],
"artifacts": {
"sbom": "sha256:…", "attestation": "sha256:…"
}
}
```
# Folder layout (repo)
```
/benchmark
/cases/<lang>/<project>/<case_id>/
case.yaml # component@version, sink, labels, evidence refs
entrypoints.yaml # routes/CLIs/cron
build/ # Dockerfiles, lockfiles, pinned toolchains
outputs/ # SBOMs, binaries, traces (checksummed)
/splits/{train,dev,test}.txt
/schemas/{case.json,submission.json}
/scripts/{build.sh, run_tests.sh, score.py}
/docs/ (how-to, FAQs, T&Cs)
```
# Minimal **v1** (4–6 weeks of work)
1. **Languages**: JS/TS, Python, Java, C (ELF).
2. **20–30 cases**: mix of reachable/unreachable with PoC unit tests.
3. **Deterministic builds** in containers; publish SBOM+attestations.
4. **Scorer**: precision/recall/F1 + explainability, runtime, determinism.
5. **Baselines**: run CodeQL + Semgrep across all; Snyk where feasible; angr for 3 native cases.
6. **Website**: static leaderboard (per-language, per-size), download links, submission guide.
# V2+ (quarterly)
* Add **.NET, PHP, Go, Rust**; broaden binary focus (PE/Mach-O).
* Add **dynamic traces** (eBPF/ETW/JFR) and **taint oracles**.
* Introduce **config-gated reachability** (feature flags, env, k8s secrets).
* Add **dataset cards** per case (threat model, CWE, false-positive traps).
# Publishing & governance
* License: **CC-BY-SA** for metadata, **source-compatible OSS** for code, binaries under original licenses.
* **Repro packs**: `benchmark-kit.tgz` with container recipes, hashes, and attestations.
* **Disclosure**: CVE hygiene, responsible use, opt-out path for upstreams.
* **Stewards**: small TAC (you + two external reviewers) to approve new cases and adjudicate disputes.
# Immediate next steps (checklist)
* Lock the **schemas** (case + submission + attestation fields).
* Pick 8 seed projects (2 per language tiered by size).
* Draft 12 sink-cases (6 reachable, 6 unreachable) with unit-test oracles.
* Script deterministic builds and **hash-locked SBOMs**.
* Implement the scorer; publish a **starter leaderboard** with 2 baselines.
* Ship **v1 website/docs** and open submissions.
If you want, I can generate the repo scaffold (folders, YAML/JSON schemas, Dockerfiles, scorer script) so your team can `git clone` and start adding cases immediately.
Cool, let's turn the blueprint into a concrete, developer-friendly implementation plan.
I'll assume **v1 scope** is:
* Languages: **JavaScript/TypeScript (Node)**, **Python**, **Java**, **C (ELF)**
* ~**20–30 cases** total (reachable/unreachable mix)
* Baselines: **CodeQL**, **Semgrep**, maybe **Snyk** where licenses allow, and **angr** for a few native cases
You can expand later, but this plan is enough to get v1 shipped.
---
## 0. Overall project structure & ownership
**Owners**
* **Tech Lead** owns architecture & final decisions
* **Benchmark Core**: 2–3 devs building schemas, scorer, infra
* **Language Tracks**: 1 dev per language (JS, Python, Java, C)
* **Website/Docs**: 1 dev
**Repo layout (target)**
```text
reachability-benchmark/
README.md
LICENSE
CONTRIBUTING.md
CODE_OF_CONDUCT.md
benchmark/
cases/
js/
express-blog/
case-001/
case.yaml
entrypoints.yaml
build/
Dockerfile
build.sh
src/ # project source (or submodule)
tests/ # unit tests as oracles
outputs/
sbom.cdx.json
binary.tar.gz
coverage.json
traces/ # optional dynamic traces
py/
flask-api/...
java/
spring-app/...
c/
httpd-like/...
schemas/
case.schema.yaml
entrypoints.schema.yaml
truth.schema.yaml
submission.schema.json
tools/
scorer/
rb_score/
__init__.py
cli.py
metrics.py
loader.py
explainability.py
pyproject.toml
tests/
build/
build_all.py
validate_builds.py
baselines/
codeql/
run_case.sh
config/
semgrep/
run_case.sh
rules/
snyk/
run_case.sh
angr/
run_case.sh
ci/
github/
benchmark.yml
website/
# static site / leaderboard
```
---
## 1. Phase 1 – Repo & infra setup
### Task 1.1 – Create repository
**Developer:** Tech Lead
**Deliverables:**
* Repo created (`reachability-benchmark` or similar)
* `LICENSE` (e.g., Apache-2.0 or MIT)
* Basic `README.md` describing:
* Purpose (public reachability benchmark)
* Highlevel design
* v1 scope (langs, #cases)
### Task 1.2 – Bootstrap structure
**Developer:** Benchmark Core
Create directory skeleton as above (without filling everything yet).
Add:
```make
# benchmark/Makefile
.PHONY: test lint build

test:
	pytest benchmark/tools/scorer/tests

lint:
	black benchmark/tools/scorer
	flake8 benchmark/tools/scorer

build:
	python benchmark/tools/build/build_all.py
```
### Task 1.3 – Coding standards & tooling
**Developer:** Benchmark Core
* Add `.editorconfig`, `.gitignore`, and Python tool configs (`ruff`, `black`, or `flake8`).
* Define minimal **PR checklist** in `CONTRIBUTING.md`:
* Tests pass
* Lint passes
* New schemas have JSON schema or YAML schema and tests
* New cases come with oracles (tests/coverage)
---
## 2. Phase 2 – Case & submission schemas
### Task 2.1 – Define case metadata format
**Developer:** Benchmark Core
Create `benchmark/schemas/case.schema.yaml` and an example `case.yaml`.
**Example `case.yaml`**
```yaml
id: "js-express-blog:001"
language: "javascript"
framework: "express"
size: "small" # small | medium | large
component:
name: "express-blog"
version: "1.0.0-bench"
vulnerability:
cve: "CVE-XXXX-YYYY"
cwe: "CWE-502"
description: "Unsafe deserialization via user-controlled JSON."
sink_id: "Deserializer::parse"
ground_truth:
label: "reachable" # reachable | unreachable | unknown
confidence: "high" # high | medium | low
evidence_files:
- "truth.yaml"
notes: >
Unit test test_reachable_deserialization triggers the sink.
build:
dockerfile: "build/Dockerfile"
build_script: "build/build.sh"
output:
artifact_path: "outputs/binary.tar.gz"
sbom_path: "outputs/sbom.cdx.json"
coverage_path: "outputs/coverage.json"
traces_dir: "outputs/traces"
environment:
os_image: "ubuntu:24.04"
compiler: null
runtime:
node: "20.11.0"
source_date_epoch: 1730000000
```
**Acceptance criteria**
* Schema validates sample `case.yaml` with a Python script:
* `benchmark/tools/build/validate_schema.py` using `jsonschema` or `pykwalify`.
---
### Task 2.2 – Entry points schema
**Developer:** Benchmark Core
`benchmark/schemas/entrypoints.schema.yaml`
**Example `entrypoints.yaml`**
```yaml
entries:
http:
- id: "POST /api/posts"
route: "/api/posts"
method: "POST"
handler: "PostsController.create"
cli:
- id: "generate-report"
command: "node cli.js generate-report"
description: "Generates summary report."
scheduled:
- id: "daily-cleanup"
schedule: "0 3 * * *"
handler: "CleanupJob.run"
```
---
### Task 2.3 – Ground truth / truth schema
**Developer:** Benchmark Core + Language Tracks
`benchmark/schemas/truth.schema.yaml`
**Example `truth.yaml`**
```yaml
id: "js-express-blog:001"
cases:
- sink_id: "Deserializer::parse"
label: "reachable"
dynamic_evidence:
covered_by_tests:
- "tests/test_reachable_deserialization.js::should_reach_sink"
coverage_files:
- "outputs/coverage.json"
static_evidence:
call_path:
- "POST /api/posts"
- "PostsController.create"
- "PostsService.createFromJson"
- "Deserializer.parse"
config_conditions:
- "process.env.FEATURE_JSON_ENABLED == 'true'"
notes: "If FEATURE_JSON_ENABLED=false, path is unreachable."
```
---
### Task 2.4 – Submission schema
**Developer:** Benchmark Core
`benchmark/schemas/submission.schema.json`
**Shape**
```json
{
"tool": { "name": "YourTool", "version": "1.2.3" },
"run": {
"commit": "abcd1234",
"platform": "ubuntu:24.04",
"time_s": 182.4,
"peak_mb": 3072
},
"cases": [
{
"id": "js-express-blog:001",
"prediction": "reachable",
"confidence": 0.88,
"explain": {
"entry": "POST /api/posts",
"path": [
"PostsController.create",
"PostsService.createFromJson",
"Deserializer.parse"
],
"guards": [
"process.env.FEATURE_JSON_ENABLED === 'true'"
]
}
}
],
"artifacts": {
"sbom": "sha256:...",
"attestation": "sha256:..."
}
}
```
Write Python validation utility:
```bash
python benchmark/tools/scorer/validate_submission.py submission.json
```
**Acceptance criteria**
* Validation fails on missing fields / wrong enum values.
* At least two sample submissions pass validation (e.g., “perfect” and “random baseline”).
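As an illustration, the core checks of `validate_submission.py` could look like this stdlib-only sketch; the real utility would validate against `submission.schema.json` with a JSON Schema library, so the helper itself and its error strings are hypothetical while the field names mirror the example above:

```python
REQUIRED_TOP = {"tool", "run", "cases"}
VALID_PREDICTIONS = {"reachable", "unreachable", "unknown"}

def validate_submission(sub: dict) -> list:
    """Return a list of human-readable errors; an empty list means valid."""
    errors = [f"missing field: {k}" for k in sorted(REQUIRED_TOP - sub.keys())]
    for i, case in enumerate(sub.get("cases", [])):
        if "id" not in case:
            errors.append(f"cases[{i}]: missing id")
        if case.get("prediction") not in VALID_PREDICTIONS:
            errors.append(f"cases[{i}]: bad prediction {case.get('prediction')!r}")
        conf = case.get("confidence")
        if conf is not None and not 0.0 <= conf <= 1.0:
            errors.append(f"cases[{i}]: confidence out of [0, 1]")
    return errors
```

Running it over the “perfect” and “random baseline” samples gives a quick check on both acceptance criteria.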
---
## 3. Phase 3 – Reference projects & deterministic builds
### Task 3.1 – Select and vendor v1 projects
**Developer:** Tech Lead + Language Tracks
For each language, choose:
* 1 small toy app (simple web or CLI)
* 1 medium app (more routes, multiple modules)
* Optional: 1 large (for performance stress tests)
Add them under `benchmark/cases/<lang>/<project>/src/`
(or as git submodules if you want to track upstream).
---
### Task 3.2 – Deterministic Docker build per project
**Developer:** Language Tracks
For each project:
* Create `build/Dockerfile`
* Create `build/build.sh` that:
* Builds the app
* Produces artifacts
* Generates SBOM and attestation
**Example `build/Dockerfile` (Node)**
```dockerfile
FROM node:20.11-slim
ENV NODE_ENV=production
ENV SOURCE_DATE_EPOCH=1730000000
WORKDIR /app
COPY src/ /app
COPY package.json package-lock.json /app/
RUN npm ci --ignore-scripts && \
npm run build || true
CMD ["node", "server.js"]
```
**Example `build.sh`**
```bash
#!/usr/bin/env bash
set -euo pipefail
ROOT_DIR="$(dirname "$(readlink -f "$0")")/.."
OUT_DIR="$ROOT_DIR/outputs"
mkdir -p "$OUT_DIR"
IMAGE_TAG="rb-js-express-blog:1"
docker build -t "$IMAGE_TAG" "$ROOT_DIR/build"
# Export image as tarball (binary artifact)
docker save "$IMAGE_TAG" | gzip > "$OUT_DIR/binary.tar.gz"
# Generate SBOM (e.g. via syft); can be an optional stub for v1
syft packages "docker:$IMAGE_TAG" -o cyclonedx-json > "$OUT_DIR/sbom.cdx.json"
# In future: generate in-toto attestations
```
---
### Task 3.3 – Determinism checker
**Developer:** Benchmark Core
`benchmark/tools/build/validate_builds.py`:
* For each case:
* Run `build.sh` twice
* Compare hashes of `outputs/binary.tar.gz` and `outputs/sbom.cdx.json`
* Fail if hashes differ.
**Acceptance criteria**
* All v1 cases produce identical artifacts across two builds on CI.
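The hash-comparison core of `validate_builds.py` might look like this sketch; the double-build loop shells out to each case's `build.sh`, and the artifact paths follow the layout above:

```python
import hashlib
import subprocess
from pathlib import Path

ARTIFACTS = ["outputs/binary.tar.gz", "outputs/sbom.cdx.json"]

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large image tarballs stay cheap."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_deterministic(case_dir: Path) -> bool:
    """Run build.sh twice and compare artifact hashes between the runs."""
    runs = []
    for _ in range(2):
        subprocess.run(["bash", str(case_dir / "build" / "build.sh")], check=True)
        runs.append({name: sha256_of(case_dir / name) for name in ARTIFACTS})
    return runs[0] == runs[1]
```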
---
## 4. Phase 4 – Ground truth oracles (tests & traces)
### Task 4.1 – Add unit/integration tests for reachable cases
**Developer:** Language Tracks
For each **reachable** case:
* Add `tests/` under the project to:
* Start the app (if necessary)
* Send a request/trigger that reaches the vulnerable sink
* Assert that a sentinel side effect occurs (e.g. log or marker file) instead of real exploitation.
Example for Node using Jest:
```js
test("should reach deserialization sink", async () => {
const res = await request(app)
.post("/api/posts")
.send({ title: "x", body: '{"__proto__":{}}' });
expect(res.statusCode).toBe(200);
  // Sink logs "REACH_SINK"; we check the log or a variable
expect(sinkWasReached()).toBe(true);
});
```
### Task 4.2 – Instrument coverage
**Developer:** Language Tracks
* For each language, pick a coverage tool:
* JS: `nyc` + `istanbul`
* Python: `coverage.py`
* Java: `jacoco`
* C: `gcov`/`llvm-cov` (optional for v1)
* Ensure running tests produces `outputs/coverage.json` or `.xml` that we then convert to a simple JSON format:
```json
{
"files": {
"src/controllers/posts.js": {
"lines_covered": [12, 13, 14, 27],
"lines_total": 40
}
}
}
```
Create a small converter script if needed.
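As an illustration, a converter into the simple JSON format could look like this; the input shape (a per-file `{line: hit_count}` map) is an assumption, since each coverage tool emits its own format:

```python
def to_simple_coverage(raw: dict) -> dict:
    """Normalize a per-file {line: hits} map into the benchmark's format."""
    files = {}
    for path, line_hits in raw.items():
        covered = sorted(int(line) for line, hits in line_hits.items() if hits > 0)
        files[path] = {
            "lines_covered": covered,
            "lines_total": len(line_hits),
        }
    return {"files": files}
```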
### Task 4.3 – Optional dynamic traces
If you want richer evidence:
* JS: add middleware that logs `(entry_id, handler, sink)` triples to `outputs/traces/traces.json`
* Python: similar using decorators
* C/Java: out of scope for v1 unless you want to invest extra time.
---
## 5. Phase 5 – Scoring tool (CLI)
### Task 5.1 – Implement `rb-score` library + CLI
**Developer:** Benchmark Core
Create `benchmark/tools/scorer/rb_score/` with:
* `loader.py`
* Load all `case.yaml`, `truth.yaml` into memory.
* Provide functions: `load_cases() -> Dict[case_id, Case]`.
* `metrics.py`
* Implement:
* `compute_precision_recall(truth, predictions)`
* `compute_path_quality_score(explain_block)` (0–3)
* `compute_runtime_stats(run_block)`
* `cli.py`
* CLI:
```bash
rb-score \
--cases-root benchmark/cases \
--submission submissions/mytool.json \
--output results/mytool_results.json
```
**Pseudo-code for core scoring**
```python
def score_submission(truth, submission):
y_true = []
y_pred = []
per_case_scores = {}
for case in truth:
gt = truth[case.id].label # reachable/unreachable
pred_case = find_pred_case(submission.cases, case.id)
pred_label = pred_case.prediction if pred_case else "unreachable"
y_true.append(gt == "reachable")
y_pred.append(pred_label == "reachable")
explain_score = explainability(pred_case.explain if pred_case else None)
per_case_scores[case.id] = {
"gt": gt,
"pred": pred_label,
"explainability": explain_score,
}
precision, recall, f1 = compute_prf(y_true, y_pred)
return {
"summary": {
"precision": precision,
"recall": recall,
"f1": f1,
"num_cases": len(truth),
},
"cases": per_case_scores,
}
```
### Task 5.2 – Explainability scoring rules
**Developer:** Benchmark Core
Implement `explainability(explain)`:
* 0 – `explain` missing or `path` empty
* 1 – `path` present with at least 2 nodes (sink + one function)
* 2 – `path` contains:
* Entry label (HTTP route/CLI id)
* ≥3 nodes (entry → … → sink)
* 3 – Level 2 plus a non-empty `guards` list
Unit tests for at least 4 scenarios.
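A direct translation of the rules above into Python (the `explain` shape follows the submission schema):

```python
def explainability(explain=None) -> int:
    """Score an explain block on the 0-3 scale defined above."""
    if not explain or not explain.get("path"):
        return 0
    path = explain["path"]
    # Level 2 requires an entry label and a path of at least 3 nodes.
    if explain.get("entry") and len(path) >= 3:
        # Level 3 additionally requires a non-empty guards list.
        return 3 if explain.get("guards") else 2
    return 1 if len(path) >= 2 else 0
```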
### Task 5.3 – Regression tests for scoring
Add small test fixture:
* Tiny synthetic benchmark: 3 cases, 2 reachable, 1 unreachable.
* 3 submissions:
* Perfect
* All reachable
* All unreachable
Assertions:
* Perfect: `precision=1, recall=1`
* All reachable: `recall=1, precision<1`
* All unreachable: `recall=0`; precision is undefined with no positive predictions, so pin down the scorer's convention (0 or 1) and test it
---
## 6. Phase 6 – Baseline integrations
### Task 6.1 – Semgrep baseline
**Developer:** Benchmark Core (with Semgrep experience)
* `baselines/semgrep/run_case.sh`:
* Inputs: `case_id`, `cases_root`, `output_path`
* Steps:
* Find `src/` for case
* Run `semgrep --config auto` or curated rules
* Convert Semgrep findings into benchmark submission format:
* Map Semgrep rules → vulnerability types → candidate sinks
* Heuristically guess reachability (for v1, maybe always “reachable” if sink in code path)
* Output: `output_path` JSON conforming to `submission.schema.json`.
### Task 6.2 – CodeQL baseline
* Create CodeQL databases for each project (likely via `codeql database create`).
* Create queries targeting known sinks (e.g., `Deserialization`, `CommandInjection`).
* `baselines/codeql/run_case.sh`:
* Build DB (or reuse)
* Run queries
* Translate results into our submission format (again as heuristic reachability).
### Task 6.3 – Optional Snyk / angr baselines
* Snyk:
* Use `snyk test` on the project
* Map results to dependencies & known CVEs
* For v1, just mark as `reachable` if Snyk reports a reachable path (if available).
* angr:
* For 1–2 small C samples, configure a simple analysis script.
**Acceptance criteria**
* For at least 5 cases (across languages), the baselines produce valid submission JSON.
* `rb-score` runs and yields metrics without errors.
---
## 7. Phase 7 – CI/CD
### Task 7.1 – GitHub Actions workflow
**Developer:** Benchmark Core
`ci/github/benchmark.yml`:
Jobs:
1. `lint-and-test`
* `python -m pip install -e benchmark/tools/scorer[dev]`
* `make lint`
* `make test`
2. `build-cases`
* `python benchmark/tools/build/build_all.py`
* Run `validate_builds.py`
3. `smoke-baselines`
* For 2–3 cases, run Semgrep/CodeQL wrappers and ensure they emit valid submissions.
### Task 7.2 – Artifact upload
* Upload `outputs/` tarball from `build-cases` as workflow artifacts.
* Upload `results/*.json` from scoring runs.
---
## 8. Phase 8 – Website & leaderboard
### Task 8.1 – Define results JSON format
**Developer:** Benchmark Core + Website dev
`results/leaderboard.json`:
```json
{
"tools": [
{
"name": "Semgrep",
"version": "1.60.0",
"summary": {
"precision": 0.72,
"recall": 0.48,
"f1": 0.58
},
"by_language": {
"javascript": {"precision": 0.80, "recall": 0.50, "f1": 0.62},
"python": {"precision": 0.65, "recall": 0.45, "f1": 0.53}
}
}
]
}
```
CLI option to generate this:
```bash
rb-score compare \
--cases-root benchmark/cases \
--submissions submissions/*.json \
--output results/leaderboard.json
```
### Task 8.2 – Static site
**Developer:** Website dev
Tech choice: any static framework (Next.js, Astro, Docusaurus, or even pure HTML+JS).
Pages:
* **Home**
* What is reachability?
* Summary of benchmark
* **Leaderboard**
* Renders `leaderboard.json`
* Filters: language, case size
* **Docs**
* How to run benchmark locally
* How to prepare a submission
Add a simple script to copy `results/leaderboard.json` into `website/public/` for publishing.
---
## 9. Phase 9 – Docs, governance, and contribution flow
### Task 9.1 – CONTRIBUTING.md
Include:
* How to add a new case:
* Step-by-step:
1. Create project folder under `benchmark/cases/<lang>/<project>/case-XXX/`
2. Add `case.yaml`, `entrypoints.yaml`, `truth.yaml`
3. Add oracles (tests, coverage)
4. Add deterministic `build/` assets
5. Run local tooling:
* `validate_schema.py`
* `validate_builds.py --case <id>`
* Example PR description template.
### Task 9.2 – Governance doc
* Define **Technical Advisory Committee (TAC)** roles:
* Approve new cases
* Approve schema changes
* Manage hidden test sets (future phase)
* Define **release cadence**:
* v1.0 with public cases
* Quarterly updates with new hidden cases.
---
## 10. Suggested milestone breakdown (for planning / sprints)
### Milestone 1 – Foundation (1–2 sprints)
* Repo scaffolding (Tasks 1.x)
* Schemas (Tasks 2.x)
* Two tiny toy cases (one JS, one Python) with:
* `case.yaml`, `entrypoints.yaml`, `truth.yaml`
* Deterministic build
* Basic unit tests
* Minimal `rb-score` with:
* Case loading
* Precision/recall only
**Exit:** You can run `rb-score` on a dummy submission for 2 cases.
---
### Milestone 2 – v1 dataset (2–3 sprints)
* Add ~20–30 cases across JS, Python, Java, C
* Ground truth & coverage for each
* Deterministic builds validated
* Explainability scoring implemented
* Regression tests for `rb-score`
**Exit:** Full scoring tool stable; dataset repeatably builds on CI.
---
### Milestone 3 – Baselines & site (1–2 sprints)
* Semgrep + CodeQL baselines producing valid submissions
* CI running smoke baselines
* `leaderboard.json` generator
* Static website with public leaderboard and docs
**Exit:** Public v1 benchmark you can share with external tool authors.
---
If you tell me which stack your team prefers for the site (React, plain HTML, SSG, etc.) or which CI you're on, I can adapt this into concrete config files (e.g., a full GitHub Actions workflow, Next.js scaffold, or exact `pyproject.toml` for `rb-score`).

Here's a simple, low-friction way to keep priorities fresh without constant manual grooming: **let confidence decay over time**.
# Exponential confidence decay (what & why)
* **Idea:** Every item (task, lead, bug, doc, hypothesis) has a confidence score that **automatically shrinks with time** if you don't touch it.
* **Formula:** `confidence(t) = e^(-t/τ)` where `t` is days since the last signal (edit, comment, commit, new data), and **τ (“tau”)** is the decay constant.
* **Rule of thumb:** With **τ = 30 days**, at **t = 30** the confidence is **e^(-1) ≈ 0.37**, about a **63% drop**. This surfaces long-ignored items *gradually*, not with harsh “stale/expired” flips.
# How to use it in practice
* **Signals that reset t → 0:** comment on the ticket, new benchmark, fresh log sample, doc update, CI run, new market news.
* **Sort queues by:** `priority × confidence(t)` (or severity × confidence). Quiet items drift down; truly active ones stay up.
* **Escalation bands:**
* `>0.6` = green (recently touched)
* `0.3–0.6` = amber (review soon)
* `<0.3` = red (poke or close)
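The decay and banding fit in a few lines; thresholds match the bands above:

```python
from math import exp

def confidence(days_since_signal: float, tau_days: float = 30.0) -> float:
    """confidence(t) = e^(-t/τ); zero-age or future-dated signals score 1.0."""
    if days_since_signal <= 0 or tau_days <= 0:
        return 1.0
    return exp(-days_since_signal / tau_days)

def band(conf: float) -> str:
    """Map a confidence value to the green/amber/red escalation bands."""
    return "green" if conf > 0.6 else "amber" if conf >= 0.3 else "red"
```

With τ = 30, an item untouched for 30 days lands at e^(-1) ≈ 0.37, squarely in amber.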
# Quick presets
* **Fast-moving queues (incidents, hot leads):** τ = **7–14** days
* **Engineering tasks / product docs:** τ = **30** days
* **Research bets / roadmaps:** τ = **60–90** days
# For your world (StellaOps + ops/dev work)
* **Vuln tickets:** `risk_score = CVSS × reachability × e^(-t/30)`
* **Roadmap epics:** `value_score = impact × e^(-t/60)` to re-rank quarterly.
* **Docs:** show a badge “freshness: 42%” derived from last-edit age to nudge updates.
# Minimal implementation sketch
* Store per-item: `last_signal_at`, `base_priority`.
* Compute on read:
```python
from math import exp

days = (now - last_signal_at).days
conf = exp(-days / tau)
score = base_priority * conf
```
* Recompute in your API layer or materialize nightly; no cron spam needed.
If you want, I can draft a tiny C# helper (and SQL snippet) you can drop into your issue service to add `confidence(t)` and color bands to your lists.
Perfect, let's turn the idea into something your devs can actually build.
Below is an implementation plan you can drop into a ticketing/PRD — with clear phases, data model changes, APIs, and some sample code (C# + SQL). I'll also sprinkle in StellaOps-specific notes.
---
## 0. Scope & Objectives
**Goal:** Introduce `confidence(t)` as an automatic freshness factor that decays with time and is used to rank and highlight work.
Well apply it to:
* Vulnerabilities (StellaOps)
* General issues / tasks / epics
* (Optional) Docs, leads, hypotheses later
**Core behavior:**
* Each item has:
* A base priority / risk (from severity, business impact, etc.)
* A timestamp of last signal (meaningful activity)
* A decay rate τ (tau) in days
* Effective priority = `base_priority × confidence(t)`
* `confidence(t) = exp(-t / τ)` where `t` = days since last_signal
---
## 1. Data Model Changes
### 1.1. Add fields to core “work item” tables
For each relevant table (`Issues`, `Vulnerabilities`, `Epics`, …):
**New columns:**
* `base_priority` (FLOAT or INT)
* Example: 1–100, or derived from severity.
* `last_signal_at` (DATETIME, NOT NULL, default = `created_at`)
* `tau_days` (FLOAT, nullable, falls back to type default)
* (Optional) `confidence_score_cached` (FLOAT, for materialized score)
* (Optional) `is_confidence_frozen` (BOOL, default FALSE)
For pinned items that should not decay.
**Example Postgres migration (Issues):**
```sql
ALTER TABLE issues
ADD COLUMN base_priority DOUBLE PRECISION,
ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
ADD COLUMN tau_days DOUBLE PRECISION,
ADD COLUMN confidence_cached DOUBLE PRECISION,
ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
For StellaOps:
```sql
ALTER TABLE vulnerabilities
ADD COLUMN base_risk DOUBLE PRECISION,
ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
ADD COLUMN tau_days DOUBLE PRECISION,
ADD COLUMN confidence_cached DOUBLE PRECISION,
ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
### 1.2. Add a config table for τ per entity type
```sql
CREATE TABLE confidence_decay_config (
id SERIAL PRIMARY KEY,
entity_type TEXT NOT NULL, -- 'issue', 'vulnerability', 'epic', 'doc'
tau_days_default DOUBLE PRECISION NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
INSERT INTO confidence_decay_config (entity_type, tau_days_default) VALUES
('incident', 7),
('vulnerability', 30),
('issue', 30),
('epic', 60),
('doc', 90);
```
---
## 2. Define “signal” events & instrumentation
We need a standardized way to say: “this item got activity → reset last_signal_at”.
### 2.1. Signals that should reset `last_signal_at`
For **issues / epics:**
* New comment
* Status change (e.g., Open → In Progress)
* Field change that matters (severity, owner, milestone)
* Attachment added
* Link to PR added or updated
* New CI failure linked
For **vulnerabilities (StellaOps):**
* New scanner result attached or status updated (e.g., “Verified”, “False Positive”)
* New evidence (PoC, exploit notes)
* SLA override change
* Assignment / ownership change
* Integration events (e.g., PR merge that references the vuln)
For **docs (if you do it):**
* Any edit
* Comment/annotation
### 2.2. Implement a shared helper to record a signal
**Service-level helper (pseudocode / C#-ish):**
```csharp
public interface IConfidenceSignalService
{
Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null);
}
public class ConfidenceSignalService : IConfidenceSignalService
{
private readonly IWorkItemRepository _repo;
private readonly IConfidenceConfigService _config;
public async Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null)
{
var now = signalTimeUtc ?? DateTime.UtcNow;
var item = await _repo.GetByIdAsync(type, itemId);
if (item == null) return;
item.LastSignalAt = now;
if (item.TauDays == null)
{
item.TauDays = await _config.GetDefaultTauAsync(type);
}
await _repo.UpdateAsync(item);
}
}
```
### 2.3. Wire signals into existing flows
Create small tasks for devs like:
* **ISS-01:** Call `RecordSignalAsync` on:
* New issue comment handler
* Issue status update handler
* Issue field update handler (severity/priority/owner)
* **VULN-01:** Call `RecordSignalAsync` when:
* New scanner result ingested for a vuln
* Vulnerability status, SLA, or owner changes
* New exploit evidence is attached
---
## 3. Confidence & scoring calculation
### 3.1. Shared confidence function
Definition:
```csharp
public static class ConfidenceMath
{
// t = days since last signal
public static double ConfidenceScore(DateTime lastSignalAtUtc, double tauDays, DateTime? nowUtc = null)
{
var now = nowUtc ?? DateTime.UtcNow;
var tDays = (now - lastSignalAtUtc).TotalDays;
if (tDays <= 0) return 1.0;
if (tauDays <= 0) return 1.0; // guard / fallback
var score = Math.Exp(-tDays / tauDays);
// Optional: never drop below a tiny floor, so items never "disappear"
const double floor = 0.01;
return Math.Max(score, floor);
}
}
```
### 3.2. Effective priority formulas
**Generic issues / tasks:**
```csharp
double effectiveScore = issue.BasePriority * ConfidenceMath.ConfidenceScore(issue.LastSignalAt, issue.TauDays ?? defaultTau);
```
**Vulnerabilities (StellaOps):**
Let's define:
* `severity_weight`: map CVSS or severity string to numeric (e.g. Critical=100, High=80, Medium=50, Low=20).
* `reachability`: 0–1 (e.g. from your reachability analysis).
* `exploitability`: 0–1 (optional, based on known exploits).
* `confidence`: as above.
```csharp
double baseRisk = severityWeight * reachability * exploitability; // or simpler: severityWeight * reachability
double conf = ConfidenceMath.ConfidenceScore(vuln.LastSignalAt, vuln.TauDays ?? defaultTau);
double effectiveRisk = baseRisk * conf;
```
Store `baseRisk` → `vulnerabilities.base_risk`, and compute `effectiveRisk` on the fly or via job.
### 3.3. SQL implementation (optional for server-side sorting)
**Postgres example:**
```sql
-- t_days = age in days
-- tau = tau_days
-- score = exp(-t_days / tau)
SELECT
i.*,
i.base_priority *
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS effective_priority
FROM issues i
ORDER BY effective_priority DESC;
```
You can wrap that in a view:
```sql
CREATE VIEW issues_with_confidence AS
SELECT
i.*,
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS confidence,
i.base_priority *
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS effective_priority
FROM issues i;
```
---
## 4. Caching & performance
You have two options:
### 4.1. Compute on read (simplest to start)
* Use the helper function in your service layer or a DB view.
* Pros:
* No jobs, always fresh.
* Cons:
* Slight CPU cost on heavy lists.
**Plan:** Start with this. If you see perf issues, move to 4.2.
### 4.2. Periodic materialization job (optional later)
Add a scheduled job (e.g. hourly) that:
1. Selects all active items.
2. Computes `confidence_score` and `effective_priority`.
3. Writes to `confidence_cached` and `effective_priority_cached` (if you add such a column).
Service then sorts by cached values.
---
## 5. Backfill & migration
### 5.1. Initial backfill script
For existing records:
* If `last_signal_at` is NULL → set to `created_at`.
* Derive `base_priority` / `base_risk` from existing severity fields.
* Set `tau_days` from config.
**Example:**
```sql
UPDATE issues
SET last_signal_at = created_at
WHERE last_signal_at IS NULL;
UPDATE issues
SET base_priority = CASE severity
WHEN 'critical' THEN 100
WHEN 'high' THEN 80
WHEN 'medium' THEN 50
WHEN 'low' THEN 20
ELSE 10
END
WHERE base_priority IS NULL;
UPDATE issues i
SET tau_days = c.tau_days_default
FROM confidence_decay_config c
WHERE c.entity_type = 'issue'
AND i.tau_days IS NULL;
```
Do similarly for `vulnerabilities` using severity / CVSS.
### 5.2. Sanity checks
Add a small script/test to verify:
* Newly created items → `confidence ≈ 1.0`.
* 30-day-old items with τ=30 → `confidence ≈ 0.37`.
* Ordering changes when you edit/comment on items.
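One possible shape for that sanity script, with the decay formula inlined so it runs standalone:

```python
import math
from datetime import datetime, timedelta, timezone

def confidence(last_signal_at, tau_days, now):
    age = (now - last_signal_at).total_seconds()
    return max(math.exp(-age / (86400 * tau_days)), 0.01)

now = datetime(2025, 11, 27, tzinfo=timezone.utc)

# Newly created item: last_signal_at == created_at == now -> confidence == 1.0
assert confidence(now, 30, now) == 1.0

# 30-day-old item with tau = 30 decays to e^-1 ~= 0.37
assert abs(confidence(now - timedelta(days=30), 30, now) - 0.3679) < 1e-3

# A fresh signal (edit/comment) must rank above a stale item
assert confidence(now - timedelta(days=1), 30, now) > confidence(now - timedelta(days=60), 30, now)
```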
---
## 6. API & Query Layer
### 6.1. New sorting options
Update list APIs:
* Accept parameter: `sort=effective_priority` or `sort=confidence`.
* Default sort for some views:
* Vulnerabilities backlog: `sort=effective_risk` (risk × confidence).
* Issues backlog: `sort=effective_priority`.
**Example REST API contract:**
`GET /api/issues?sort=effective_priority&state=open`
**Response fields (additions):**
```json
{
"id": "ISS-123",
"title": "Fix login bug",
"base_priority": 80,
"last_signal_at": "2025-11-01T10:00:00Z",
"tau_days": 30,
"confidence": 0.63,
"effective_priority": 50.4,
"confidence_band": "amber"
}
```
### 6.2. Confidence banding (for UI)
Define bands server-side (easy to change):
* Green: `confidence >= 0.6`
* Amber: `0.3 ≤ confidence < 0.6`
* Red: `confidence < 0.3`
You can compute on server:
```csharp
string ConfidenceBand(double confidence) =>
confidence >= 0.6 ? "green"
: confidence >= 0.3 ? "amber"
: "red";
```
---
## 7. UI / UX changes
### 7.1. List views (issues / vulns / epics)
For each item row:
* Show a small freshness pill:
* Text: `Active`, `Review soon`, `Stale`
* Derived from confidence band.
* Tooltip:
* “Confidence 78%. Last activity 3 days ago. τ = 30 days.”
* Sort default: by `effective_priority` / `effective_risk`.
* Filters:
* `Freshness: [All | Active | Review soon | Stale]`
* Optionally: “Show stale only” toggle.
**Example labels:**
* Green: “Active (confidence 82%)”
* Amber: “Review soon (confidence 45%)”
* Red: “Stale (confidence 18%)”
### 7.2. Detail views
On an issue / vuln page:
* Add a “Confidence” section:
* “Confidence: **52%**”
* “Last signal: **12 days ago**”
* “Decay τ: **30 days**”
* “Effective priority: **Base 80 × 0.52 = 42**”
* (Optional) small mini-chart (text-only or simple bar) showing approximate decay, but not necessary for first iteration.
### 7.3. Admin / settings UI
Add an internal settings page:
* Table of entity types with editable τ:
| Entity type | τ (days) | Notes |
| ------------- | -------- | ---------------------------- |
| Incident | 7 | Fast-moving |
| Vulnerability | 30 | Standard risk review cadence |
| Issue | 30 | Sprint-level decay |
| Epic | 60 | Quarterly |
| Doc | 90 | Slow decay |
* Optionally: toggle to pin item (`is_confidence_frozen`) from UI.
---
## 8. StellaOps-specific behavior
For vulnerabilities:
### 8.1. Base risk calculation
Ingested fields you likely already have:
* `cvss_score` or `severity`
* `reachable` (true/false or numeric)
* (Optional) `exploit_available` (bool) or exploitability score
* `asset_criticality` (1-5)
Define `base_risk` as:
```text
severity_weight = f(cvss_score or severity)
reachability = reachable ? 1.0 : 0.5 -- example
exploitability = exploit_available ? 1.0 : 0.7
asset_factor = 0.875 + 0.125 * asset_criticality -- 1 → 1.0, 5 → 1.5
base_risk = severity_weight * reachability * exploitability * asset_factor
```
Store `base_risk` on vuln row.
Then:
```text
effective_risk = base_risk * confidence(t)
```
Use `effective_risk` for backlog ordering and SLA dashboards.
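As a sketch, the same calculation in service code (the weights are the illustrative ones above; `severity_weight` is passed in rather than derived from CVSS, and the asset factor is scaled so criticality 1 → 1.0 and 5 → 1.5):

```python
def base_risk(severity_weight, reachable, exploit_available, asset_criticality):
    """Mirror of the pseudocode above; all weights are illustrative."""
    reachability = 1.0 if reachable else 0.5
    exploitability = 1.0 if exploit_available else 0.7
    asset_factor = 0.875 + 0.125 * asset_criticality  # 1 -> 1.0, 5 -> 1.5
    return severity_weight * reachability * exploitability * asset_factor

def effective_risk(base, confidence):
    """Decayed risk used for backlog ordering."""
    return base * confidence
```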
### 8.2. Signals for vulns
Make sure these all call `RecordSignalAsync(Vulnerability, vulnId)`:
* New scan result for same vuln (re-detected).
* Change status to “In Progress”, “Ready for Deploy”, “Verified Fixed”, etc.
* Assigning an owner.
* Attaching PoC / exploit details.
### 8.3. Vuln UI copy ideas
* Pill text:
* “Risk: 850 (confidence 68%)”
* “Last analyst activity 11 days ago”
* In backlog view: show **Effective Risk** as main sort, with a smaller subtext “Base 1200 × Confidence 71%”.
---
## 9. Rollout plan
### Phase 1: Infrastructure (backend-only)
* [ ] DB migrations & config table
* [ ] Implement `ConfidenceMath` and helper functions
* [ ] Implement `IConfidenceSignalService`
* [ ] Wire signals into key flows (comments, state changes, scanner ingestion)
* [ ] Add `confidence` and `effective_priority/risk` to API responses
* [ ] Backfill script + dry run in staging
### Phase 2: Internal UI & feature flag
* [ ] Add optional sorting by effective score to internal/staff views
* [ ] Add confidence pill (hidden behind feature flag `confidence_decay_v1`)
* [ ] Dogfood internally:
* Do items bubble up/down as expected?
* Are any items “disappearing” because decay is too aggressive?
### Phase 3: Parameter tuning
* [ ] Adjust τ per type based on feedback:
* If things decay too fast → increase τ
* If queues rarely change → decrease τ
* [ ] Decide on confidence floor (0.01? 0.05?) so nothing goes to literal 0.
### Phase 4: General release
* [ ] Make effective score the default sort for key views:
* Vulnerabilities backlog
* Issues backlog
* [ ] Document behavior for users (help center / inline tooltip)
* [ ] Add admin UI to tweak τ per entity type.
---
## 10. Edge cases & safeguards
* **New items**
* `last_signal_at = created_at`, confidence = 1.0.
* **Pinned items**
* If `is_confidence_frozen = true` → treat confidence as 1.0.
* **Items without τ**
* Always fallback to entity type default.
* **Timezones**
* Always store & compute in UTC.
* **Very old items**
* Floor the confidence so they're still visible when explicitly searched.
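The pinned-item and floor safeguards can be folded into one guard function (a sketch; the names are assumptions):

```python
def displayed_confidence(raw_confidence, is_confidence_frozen=False, floor=0.01):
    """Apply the safeguards above: pinned items read as fully fresh,
    everything else is floored so very old items stay searchable."""
    if is_confidence_frozen:
        return 1.0
    return max(raw_confidence, floor)
```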
---
If you want, I can turn this into:
* A short **technical design doc** (with sections: Problem, Proposal, Alternatives, Rollout).
* Or a **set of Jira tickets** grouped by backend / frontend / infra that your team can pick up directly.

---
Here's a compact, one-screen "CVSS v4.0 Score Receipt" you can drop into StellaOps so every vulnerability carries its score, evidence, and policy lineage end-to-end.
---
# CVSS v4.0 Score Receipt (CVSS-BTE + Supplemental)
**Vuln ID / Title**
**Final CVSS v4.0 Score:** *X.Y* (CVSS-BTE) • **Vector:** `CVSS:4.0/...`
**Why BTE?** CVSS v4.0 is designed to combine Base with default Threat/Environmental first, then amend with real context; Supplemental adds non-scoring context. ([FIRST][1])
---
## 1) Base Metrics (intrinsic; vendor/researcher)
*List each metric with chosen value + short justification + evidence link.*
* **Attack Vector (AV):** N | A | I | P — *reason & evidence*
* **Attack Complexity (AC):** L | H — *reason & evidence*
* **Attack Requirements (AT):** N | P — *reason & evidence*
* **Privileges Required (PR):** N | L | H — *reason & evidence*
* **User Interaction (UI):** None | Passive | Active — *reason & evidence*
* **Vulnerable System Impact (VC/VI/VA):** H | L | N — *reason & evidence*
* **Subsequent System Impact (SC/SI/SA):** H | L | N — *reason & evidence*
> Notes: v4.0 clarifies Base, splits vulnerable vs. subsequent system impact, and refines UI (Passive/Active). ([FIRST][1])
---
## 2) Threat Metrics (time-varying; consumer)
* **Exploit Maturity (E):** Attacked | POC | Unreported | Not Defined — *intel & source*
* **Automatable (AU):** Yes | No | ND — *tooling/observations*
* **Provider Urgency (U):** High | Medium | Low | ND — *advisory/ref*
> Threat replaces the old Temporal concept and adjusts severity with real-world exploitation context. ([FIRST][1])
---
## 3) Environmental Metrics (your environment)
* **Security Controls (CR/XR/AR):** Present | Partial | None — *control IDs*
* **Criticality (S, H, L, N) of asset/service:** *business tag*
* **Safety/Human Impact in your environment:** *if applicable*
> Environmental tailors the score to your environment (controls, importance). ([FIRST][1])
---
## 4) Supplemental (non-scoring context)
* **Safety, Recovery, Value Density, Vulnerability Response Effort, etc.:** *values + short notes*
> Supplemental adds context but does not change the numeric score. ([FIRST][1])
---
## 5) Evidence Ledger
* **Artifacts:** logs, PoCs, packet captures, SBOM slices, callgraphs, config excerpts
* **References:** vendor advisory, NVD/First calculator snapshot, exploit writeups
* **Timestamps & hash of each evidence item** (SHA-256)
> Keep a permalink to the FIRST v4.0 calculator or NVD v4 calculator capture for audit. ([FIRST][2])
---
## 6) Policy & Determinism
* **Scoring Policy ID:** `cvss-policy-v4.0-stellaops-YYYYMMDD`
* **Policy Hash:** `sha256:…` (of the JSON policy used to map inputs→metrics)
* **Scoring Engine Version:** `stellaops.scorer vX.Y.Z`
* **Repro Inputs Hash:** DSSE envelope including evidence URIs + CVSS vector
> Treat the receipt as a deterministic artifact: Base with default T/E, then amended with Threat+Environmental to produce CVSS-BTE; store policy/evidence hashes for replayable audits. ([FIRST][1])
---
## 7) History (amendments over time)
| Date | Changed | From → To | Reason | Link |
| ---------- | -------- | -------------- | ------------------------ | ----------- |
| 2025-11-25 | Threat:E | POC → Attacked | Active exploitation seen | *intel ref* |
---
## Minimal JSON schema (for your UI/API)
```json
{
"vulnId": "CVE-YYYY-XXXX",
"title": "Short vuln title",
"cvss": {
"version": "4.0",
"vector": "CVSS:4.0/…",
"base": { "AV": "N", "AC": "L", "AT": "N", "PR": "N", "UI": "P", "VC": "H", "VI": "H", "VA": "H", "SC": "L", "SI": "N", "SA": "N", "justifications": { /* per-metric text + evidence URIs */ } },
"threat": { "E": "Attacked", "AU": "Yes", "U": "High", "evidence": [/* intel links */] },
"environmental": { "controls": { "CR": "Present", "XR": "Partial", "AR": "None" }, "criticality": "H", "notes": "…" },
"supplemental": { "safety": "High", "recovery": "Hard", "notes": "…" },
"finalScore": 9.1,
"enumeration": "CVSS-BTE"
},
"evidence": [{ "name": "exploit_poc.md", "sha256": "…", "uri": "…" }],
"policy": { "id": "cvss-policy-v4.0-stellaops-20251125", "sha256": "…", "engine": "stellaops.scorer 1.2.0" },
"repro": { "dsseEnvelope": "base64…", "inputsHash": "sha256:…" },
"history": [{ "date": "2025-11-25", "change": "Threat:E POC→Attacked", "reason": "SOC report", "ref": "…" }]
}
```
---
## Dropin UI wireframe (single screen)
* **Header bar:** Score badge (X.Y), "CVSS-BTE", vector copy button.
* **Tabs (or stacked cards):** Base • Threat • Environmental • Supplemental • Evidence • Policy • History.
* **Right rail:** “Recalculate with my env” (edits only Threat/Environmental), “Export receipt (JSON/PDF)”, “Open in FIRST/NVD calculator”.
---
If you want, I'll adapt this to your StellaOps components (DTOs, EF Core models, and a Razor/Blazor card) and wire it to your "deterministic replay" pipeline so every scan emits this receipt alongside the VEX note.
[1]: https://www.first.org/cvss/v4-0/specification-document "CVSS v4.0 Specification Document"
[2]: https://www.first.org/cvss/calculator/4-0 "Common Vulnerability Scoring System Version 4.0 Calculator"
Perfect, let's turn that receipt idea into a concrete implementation plan your devs can actually build from.
I'll break it into phases and responsibilities (backend, frontend, platform/DevOps), with enough detail that someone could start creating tickets from this.
---
## 0. Align on Scope & Definitions
**Goal:** For every vulnerability in Stella Ops, store and display a **CVSS v4.0 CVSS-BTE score receipt** that is:
* Deterministic & reproducible (policy + inputs → same score).
* Evidenced (links + hashes of artifacts).
* Auditable over time (history of amendments).
* Friendly to both **vendor/base** and **consumer/threat/env** workflows.
**Key concepts to lock in with the team (no coding yet):**
* **Primary object**: `CvssScoreReceipt` attached to a `Vulnerability`.
* **Canonical score** = **CVSS-BTE** (Base + Threat + Environmental).
* **Base** usually from vendor/researcher; Threat + Environmental from Stella Ops / customer context.
* **Supplemental** metrics: stored but **not part of numeric score**.
* **Policy**: machine-readable config (e.g., JSON) that defines how you map questionnaire/inputs → CVSS metrics.
Deliverable: a 2-3 page internal spec summarizing the above for devs + PMs.
---
## 1. Data Model Design
### 1.1 Core Entities
*Model names are illustrative; adapt to your stack.*
**Vulnerability**
* `id`
* `externalId` (e.g. CVE)
* `title`
* `description`
* `currentCvssReceiptId` (FK → `CvssScoreReceipt`)
**CvssScoreReceipt**
* `id`
* `vulnerabilityId` (FK)
* `version` (e.g. `"4.0"`)
* `enumeration` (e.g. `"CVSS-BTE"`)
* `vectorString` (full v4.0 vector)
* `finalScore` (numeric, 0.0-10.0)
* `baseScore` (derived or duplicate for convenience)
* `threatScore` (optional interim)
* `environmentalScore` (optional interim)
* `createdAt`
* `createdByUserId`
* `policyId` (FK → `CvssPolicy`)
* `policyHash` (sha256 of policy JSON)
* `inputsHash` (sha256 of normalized scoring inputs)
* `dsseEnvelope` (optional text/blob if you implement full DSSE)
* `metadata` (JSON for any extras you want)
**BaseMetrics (v4.0)**
* `id`, `receiptId` (FK)
* `AV`, `AC`, `AT`, `PR`, `UI`
* `VC`, `VI`, `VA`, `SC`, `SI`, `SA`
* `justifications` (JSON object keyed by metric)
* e.g. `{ "AV": { "reason": "...", "evidenceIds": ["..."] }, ... }`
**ThreatMetrics**
* `id`, `receiptId` (FK)
* `E` (Exploit Maturity)
* `AU` (Automatable)
* `U` (Provider/Consumer Urgency)
* `evidence` (JSON: list of intel references)
**EnvironmentalMetrics**
* `id`, `receiptId` (FK)
* `CR`, `XR`, `AR` (controls)
* `criticality` (S/H/L/N or your internal enum)
* `notes` (text/JSON)
**SupplementalMetrics**
* `id`, `receiptId` (FK)
* Fields you care about, e.g.:
* `safetyImpact`
* `recoveryEffort`
* `valueDensity`
* `vulnerabilityResponseEffort`
* `notes`
**EvidenceItem**
* `id`
* `receiptId` (FK)
* `name` (e.g. `"exploit_poc.md"`)
* `uri` (link into your blob store, S3, etc.)
* `sha256`
* `type` (log, pcap, exploit, advisory, config, etc.)
* `createdAt`
* `createdBy`
**CvssPolicy**
* `id` (e.g. `cvss-policy-v4.0-stellaops-20251125`)
* `name`
* `version`
* `engineVersion` (e.g. `stellaops.scorer 1.2.0`)
* `policyJson` (JSON)
* `sha256` (policy hash)
* `active` (bool)
* `validFrom`, `validTo` (optional)
**ReceiptHistoryEntry**
* `id`
* `receiptId` (FK)
* `date`
* `changedField` (e.g. `"Threat.E"`)
* `oldValue`
* `newValue`
* `reason`
* `referenceUri` (link to ticket / intel)
* `changedByUserId`
---
## 2. Backend Implementation Plan
### 2.1 Scoring Engine
**Tasks:**
1. **Create a `CvssV4Engine` module/package** with:
* `parseVector(string): CvssVector`
* `computeBaseScore(metrics: BaseMetrics): number`
* `computeThreatAdjustedScore(base: number, threat: ThreatMetrics): number`
* `computeEnvironmentalAdjustedScore(threatAdjusted: number, env: EnvironmentalMetrics): number`
* `buildVector(metrics: BaseMetrics & ThreatMetrics & EnvironmentalMetrics): string`
2. Implement **CVSS v4.0 math** exactly per spec (rounding rules, minimums, etc.).
3. Add **unit tests** for all official sample vectors + your own edge cases.
**Deliverables:**
* Test suite `CvssV4EngineTests` with:
* Known test vectors (from spec or FIRST calculator)
* Edge cases: missing threat/env, zero-impact vulnerabilities, etc.
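Vector serialization is the easiest part to pin down up front. A hedged sketch of `buildVector`/`parseVector` for the Base metrics only (the v4.0 scoring math itself, e.g. the MacroVector lookup and rounding rules, is deliberately omitted here and must come from the spec):

```python
# Base metric order per the CVSS v4.0 vector string convention;
# Threat/Environmental metrics would be appended the same way.
BASE_ORDER = ["AV", "AC", "AT", "PR", "UI", "VC", "VI", "VA", "SC", "SI", "SA"]

def build_vector(base_metrics):
    """Serialize Base metrics into a CVSS:4.0 vector string."""
    return "CVSS:4.0/" + "/".join(f"{m}:{base_metrics[m]}" for m in BASE_ORDER)

def parse_vector(vector):
    """Parse a vector string back into a metric dict; rejects other versions."""
    prefix, _, rest = vector.partition("/")
    if prefix != "CVSS:4.0":
        raise ValueError(f"unsupported version prefix: {prefix}")
    return dict(part.split(":", 1) for part in rest.split("/"))
```

A round-trip property (`parse_vector(build_vector(m)) == m`) is a cheap first unit test before the official sample vectors.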
---
### 2.2 Receipt Construction Pipeline
Define a canonical function in the backend:
```pseudo
function createReceipt(vulnId, input, policyId, userId):
policy = loadPolicy(policyId)
normalizedInput = applyPolicy(input, policy) // map UI questionnaire → CVSS metrics
base = normalizedInput.baseMetrics
threat = normalizedInput.threatMetrics
env = normalizedInput.environmentalMetrics
supplemental = normalizedInput.supplemental
// Score
baseScore = CvssV4Engine.computeBaseScore(base)
threatScore = CvssV4Engine.computeThreatAdjustedScore(baseScore, threat)
finalScore = CvssV4Engine.computeEnvironmentalAdjustedScore(threatScore, env)
// Vector
vector = CvssV4Engine.buildVector({base, threat, env})
// Hashes
inputsHash = sha256(serializeForHashing({ base, threat, env, supplemental, evidenceRefs: input.evidenceIds }))
policyHash = policy.sha256
dsseEnvelope = buildDSSEEnvelope({ vulnId, base, threat, env, supplemental, policyId, policyHash, inputsHash })
// Persist entities in transaction
receipt = saveCvssScoreReceipt(...)
saveBaseMetrics(receipt.id, base)
saveThreatMetrics(receipt.id, threat)
saveEnvironmentalMetrics(receipt.id, env)
saveSupplementalMetrics(receipt.id, supplemental)
linkEvidence(receipt.id, input.evidenceItems)
updateVulnerabilityCurrentReceipt(vulnId, receipt.id)
return receipt
```
**Important implementation details:**
* **`serializeForHashing`**: define a stable ordering and normalization (sorted keys, no whitespace sensitivity, canonical enums) so hashes are truly deterministic.
* Use **transactions** so partial writes never leave `Vulnerability` pointing to incomplete receipts.
* Ensure **idempotency**: if same `inputsHash + policyHash` already exists for that vuln, you can either:
* return existing receipt, or
* create a new one but mark it as a duplicate-of; choose one rule and document it.
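A possible `serializeForHashing` that satisfies the stable-ordering requirement (sketch in Python for illustration; the real implementation lives in your backend language):

```python
import hashlib
import json

def serialize_for_hashing(payload):
    """Canonical form: sorted keys, no insignificant whitespace, UTF-8."""
    return json.dumps(payload, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False).encode("utf-8")

def inputs_hash(payload):
    """Deterministic sha256 over the canonical serialization."""
    return "sha256:" + hashlib.sha256(serialize_for_hashing(payload)).hexdigest()
```

The key property: two payloads that are equal as data must hash identically regardless of key insertion order, otherwise idempotency checks on `inputsHash` silently break.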
---
### 2.3 APIs
Design REST/GraphQL endpoints (adapt names to your style):
**Read:**
* `GET /vulnerabilities/{id}/cvss-receipt`
* Returns full receipt with nested metrics, evidence, policy metadata, history.
* `GET /vulnerabilities/{id}/cvss-receipts`
* List historical receipts/versions.
**Create / Update:**
* `POST /vulnerabilities/{id}/cvss-receipt`
* Body: CVSS input payload (not raw metrics) + policyId.
* Backend applies policy → metrics, computes scores, stores receipt.
* `POST /vulnerabilities/{id}/cvss-receipt/recalculate`
* Optional: allows updating **only Threat + Environmental** while preserving Base.
**Evidence:**
* `POST /cvss-receipts/{receiptId}/evidence`
* Upload/link evidence artifacts, compute sha256, associate with receipt.
* (Or integrate with your existing evidence/attachments service and only store references.)
**Policy:**
* `GET /cvss-policies`
* `GET /cvss-policies/{id}`
**History:**
* `GET /cvss-receipts/{receiptId}/history`
Add auth/authorization:
* Only certain roles can **change Base**.
* Different roles can **change Threat/Env**.
* Audit logs for each change.
---
### 2.4 Integration with Existing Pipelines
**Automatic creation paths:**
1. **Scanner import path**
* When new vulnerability is imported with vendor CVSS v4:
* Parse vendor vector → BaseMetrics.
* Use your default policy to set Threat/Env to “NotDefined”.
* Generate initial receipt (tag as `source = "vendor"`).
2. **Manual analyst scoring**
* Analyst opens Vuln in Stella Ops UI.
* Fills out guided form.
* Frontend calls `POST /vulnerabilities/{id}/cvss-receipt`.
3. **Customer-specific Environmental scoring**
* Per-tenant policy stored in `CvssPolicy`.
* Receipts store that policyId; calculating environment-specific scores uses those controls/criticality.
---
## 3. Frontend / UI Implementation Plan
### 3.1 Main “CVSS Score Receipt” Panel
Single screen/card with sections (tabs or accordions):
1. **Header**
* Large score badge: `finalScore` (e.g. 9.1).
* Label: `CVSS v4.0 (CVSS-BTE)`.
* Color-coded severity (Low/Med/High/Critical).
* Copy-to-clipboard for vector string.
* Show Base/Threat/Env sub-scores if you choose to expose.
2. **Base Metrics Section**
* Table or form-like display:
* Each metric: value, short textual description, collapsed justification with “View more”.
* Example row:
* **Attack Vector (AV)**: Network
* “The vulnerability is exploitable over the internet. PoC requires only TCP connectivity to port 443.”
* Evidence chips: `exploit_poc.md`, `nginx_error.log.gz`.
3. **Threat Metrics Section**
* Radio/select controls for Exploit Maturity, Automatable, Urgency.
* “Intel references” list (URLs or evidence items).
* If the user edits these and clicks **Save**, frontend:
* Builds Threat input payload.
* Calls `POST /vulnerabilities/{id}/cvss-receipt/recalculate` with updated threat/env only.
* Shows new score & appends a `ReceiptHistoryEntry`.
4. **Environmental Section**
* Controls selection: Present / Partial / None.
* Business criticality picker.
* Contextual notes.
* Same recalc flow as Threat.
5. **Supplemental Section**
* Non-scoring fields with clear label: “Does not affect numeric score, for context only”.
6. **Evidence Section**
* List of evidence items with:
* Name, type, hash, link.
* “Attach evidence” button → upload / select existing artifact.
7. **Policy & Determinism Section**
* Display:
* Policy ID + hash.
* Scoring engine version.
* Inputs hash.
* DSSE status (valid / not verified).
* Button: **“Download receipt (JSON)”** uses the JSON schema you already drafted.
* Optional: **“Open in external calculator”** with vector appended as query parameter.
8. **History Section**
* Timeline of changes:
* Date, who, what changed (e.g. `Threat.E: POC → Attacked`).
* Reason + link.
### 3.2 UX Considerations
* **Guardrails:**
* Editing Base metrics: show “This should match vendor or research data. Changing Base will alter historical comparability.”
* Display last updated time & user for each metrics block.
* **Permissions:**
* Disable inputs if user does not have edit rights; still show receipts read-only.
* **Error Handling:**
* Show vector parse or scoring errors clearly, with a reference to policy/engine version.
* **Accessibility:**
* High contrast for severity badges and clear iconography.
---
## 4. JSON Schema & Contracts
You already have a draft JSON; turn it into a formal schema (OpenAPI / JSON Schema) so backend + frontend are in sync.
Example top-level shape (high-level, not full code):
```json
{
"vulnId": "CVE-YYYY-XXXX",
"title": "Short vuln title",
"cvss": {
"version": "4.0",
"enumeration": "CVSS-BTE",
"vector": "CVSS:4.0/...",
"finalScore": 9.1,
"baseScore": 8.7,
"threatScore": 9.0,
"environmentalScore": 9.1,
"base": {
"AV": "N", "AC": "L", "AT": "N", "PR": "N", "UI": "P",
"VC": "H", "VI": "H", "VA": "H",
"SC": "L", "SI": "N", "SA": "N",
"justifications": {
"AV": { "reason": "reachable over internet", "evidence": ["ev1"] }
}
},
"threat": { "E": "Attacked", "AU": "Yes", "U": "High" },
"environmental": { "controls": { "CR": "Present", "XR": "Partial", "AR": "None" }, "criticality": "H" },
"supplemental": { "safety": "High", "recovery": "Hard" }
},
"evidence": [
{ "id": "ev1", "name": "exploit_poc.md", "uri": "...", "sha256": "..." }
],
"policy": {
"id": "cvss-policy-v4.0-stellaops-20251125",
"sha256": "...",
"engine": "stellaops.scorer 1.2.0"
},
"repro": {
"dsseEnvelope": "base64...",
"inputsHash": "sha256:..."
},
"history": [
{ "date": "2025-11-25", "change": "Threat.E POC→Attacked", "reason": "SOC report", "ref": "..." }
]
}
```
Back-end team: publish this via OpenAPI and keep it versioned.
---
## 5. Security, Integrity & Compliance
**Tasks:**
1. **Evidence Integrity**
* Enforce sha256 on every evidence item.
* Optionally re-hash blob in background and store `verifiedAt` timestamp.
2. **Immutability**
* Decide which parts of a receipt are immutable:
* Typically: Base metrics, evidence links, policy references.
* Threat/Env may change by creating **new receipts** or new “versions” of the same receipt.
* Consider:
* “Current receipt” pointer on Vulnerability.
* All receipts are read-only after creation; changes create new receipt + history entry.
3. **Audit Logging**
* Log who changed what (especially Threat/Env).
* Store reference to ticket / change request.
4. **Access Control**
* RBAC: e.g. `ROLE_SEC_ENGINEER` can set Base; `ROLE_CUSTOMER_ANALYST` can set Env; `ROLE_VIEWER` read-only.
---
## 6. Testing Strategy
**Unit Tests**
* `CvssV4EngineTests` coverage of:
* Vector parsing/serialization.
* Calculations for B, BT, BTE.
* `ReceiptBuilderTests` determinism:
* Same inputs + policy → same score + same hashes.
* Different policyId → different policyHash, different DSSE, even if metrics identical.
**Integration Tests**
* End-to-end:
* Create vulnerability → create receipt with Base only → update Threat → update Env.
* Vendor CVSS import path.
* Permission tests:
* Ensure unauthorized edits are blocked.
**UI Tests**
* Snapshot tests for the card layout.
* Behavior: changing Threat slider updates preview score.
* Accessibility checks (ARIA, focus order).
---
## 7. Rollout Plan
1. **Phase 1: Backend Foundations**
* Implement data model + migrations.
* Implement scoring engine + policies.
* Implement REST/GraphQL endpoints (feature-flagged).
2. **Phase 2: UI MVP**
* Render read-only receipts for a subset of vulnerabilities.
* Internal dogfood with security team.
3. **Phase 3: Editing & Recalc**
* Enable Threat/Env editing.
* Wire evidence upload.
* Activate history tracking.
4. **Phase 4: Vendor Integration + Tenants**
* Map scanner imports → initial Base receipts.
* Tenant-specific Environmental policies.
5. **Phase 5: Hardening**
* Performance tests (bulk listing of vulnerabilities with receipts).
* Security review of evidence and hash handling.
---
If youd like, I can turn this into:
* A set of Jira/Linear epics + tickets, or
* A stack-specific design (for example: .NET + EF Core models + Razor components, or Node + TypeScript + React components) with concrete code skeletons.

---
Here's a crisp, ready-to-use rule for VEX hygiene that will save you pain in audits and customer reviews—and make StellaOps look rock-solid.
# Adopt a strict "`not_affected` only with proof" policy
**What it means (plain English):**
Only mark a vulnerability as `not_affected` if you can *prove* the vulnerable code can't run in your product under defined conditions—then record that proof (scope, entry points, limits) inside a VEX bundle.
## The nonnegotiables
* **Audit coverage:**
You must enumerate the reachable entry points you audited (e.g., exported handlers, CLI verbs, HTTP routes, scheduled jobs, init hooks). State their *limits* (versions, build flags, feature toggles, container args, config profiles).
* **VEX justification required:**
Use a concrete justification (OpenVEX/CISA style), e.g.:
* `vulnerable_code_not_in_execute_path`
* `component_not_present`
* `vulnerable_code_cannot_be_controlled_by_adversary`
* `inline_mitigation_already_in_place`
* **Impact or constraint statement:**
Explain *why* it's safe given your product's execution model: sandboxing, dead code elimination, policy blocks, feature gates, OS hardening, container seccomp/AppArmor, etc.
* **VEX proof bundle:**
Store the evidence alongside the VEX: call-graph slices, reachability reports, config snapshots, build args, lattice/policy decisions, test traces, and hashes of the exact artifacts (SBOM + attestation refs). This is what makes the claim stand up in an audit six months later.
## Minimal OpenVEX example (drop-in)
```json
{
"document": {
"id": "urn:stellaops:vex:2025-11-25:svc-api:log4j:2.14.1",
"author": "Stella Ops Authority",
"role": "vex"
},
"statements": [
{
"vulnerability": "CVE-2021-44228",
"products": ["pkg:maven/com.acme/svc-api@1.7.3?type=jar"],
"status": "not_affected",
"justification": "vulnerable_code_not_in_execute_path",
"impact_statement": "Log4j JNDI classes excluded at build; no logger bridge; JVM flags `-Dlog4j2.formatMsgNoLookups=true` enforced by container entrypoint.",
"analysis": {
"entry_points_audited": [
"com.acme.api.HttpServer#routes",
"com.acme.jobs.Cron#run",
"Main#init"
],
"limits": {
"image_digest": "sha256:…",
"config_profile": "prod",
"args": ["--no-dynamic-plugins"],
"seccomp": "stellaops-baseline-v3"
},
"evidence_refs": [
"dsse:sha256:…/reachability.json",
"dsse:sha256:…/build-args.att",
"dsse:sha256:…/policy-lattice.proof"
]
},
"timestamp": "2025-11-25T00:00:00Z"
}
]
}
```
## Fast checklist (use this on every `not_affected`)
* [ ] Define product + artifact by immutable IDs (PURL + digest).
* [ ] List **audited entry points** and **execution limits**.
* [ ] Declare **status** = `not_affected` with a **justification** from the allowed set.
* [ ] Add a short **impact/why-safe** sentence.
* [ ] Attach **evidence**: call graph, configs, policies, build args, test traces.
* [ ] Sign the VEX (DSSE/in-toto), link it to the SBOM attestation.
* [ ] Version and keep the proof bundle with your release.
## When to use an exception (temporary VEX)
If you can prove non-reachability **only under a temporary constraint** (e.g., feature flag off while a permanent fix lands), emit a **time-boxed exception** VEX:
* Add `constraints.expires` and the required control (e.g., `feature_flag=Off`, `policy=BlockJNDI`).
* Schedule an auto-recheck on expiry; flip to `affected` if the constraint lapses.
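The expiry re-check can be sketched like this, assuming the `constraints.expires` field suggested above (ISO-8601 timestamp):

```python
from datetime import datetime, timezone

def effective_status(statement, now):
    """A not_affected statement whose constraints.expires window has
    lapsed is treated as 'affected' until re-verified."""
    expires = statement.get("constraints", {}).get("expires")
    if statement["status"] == "not_affected" and expires:
        if now >= datetime.fromisoformat(expires):
            return "affected"
    return statement["status"]
```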
---
If you want, I can generate a StellaOps-flavored VEX template and a tiny "proof bundle" schema (JSON) so your devs can drop it into the pipeline and your documentation team can copy-paste the rationale blocks.
Cool, let's turn that policy into something your devs can actually follow day-to-day.
Below is a concrete implementation plan you can drop into an internal RFC / Notion page and wire into your pipelines.
---
## 0. What we're implementing (for context)
**Goal:** At Stella Ops, you can only mark a vulnerability as `not_affected` if:
1. You've **audited specific entry points** under clearly documented limits (version, build flags, config, container image).
2. You've captured **evidence** and **rationale** in a VEX statement + proof bundle.
3. The VEX is **validated, signed, and shipped** with the artifact.
We'll standardize on **OpenVEX** with a small extension (`analysis` section) for developer-friendly evidence.
---
## 1. Repo & artifact layout (week 1)
### 1.1. Create a standard security layout
In each service repo:
```text
/security/
vex/
openvex.json # aggregate VEX doc (generated/curated)
statements/ # one file per CVE (optional, if you like)
proofs/
CVE-YYYY-NNNN/
reachability.json
configs/
tests/
notes.md
schemas/
openvex.schema.json # JSON schema with Stella extensions
```
**Developer guidance:**
* If you touch anything related to a vulnerability decision, you **edit `security/vex/` and `security/proofs/` in the same PR**.
---
## 2. Define the VEX schema & allowed justifications (week 1)
### 2.1. Fix the format & fields
You've already chosen OpenVEX, so formalize the required extras:
```jsonc
{
"vulnerability": "CVE-2021-44228",
"products": ["pkg:maven/com.acme/svc-api@1.7.3?type=jar"],
"status": "not_affected",
"justification": "vulnerable_code_not_in_execute_path",
"impact_statement": "…",
"analysis": {
"entry_points_audited": [
"com.acme.api.HttpServer#routes",
"com.acme.jobs.Cron#run",
"Main#init"
],
"limits": {
"image_digest": "sha256:…",
"config_profile": "prod",
"args": ["--no-dynamic-plugins"],
"seccomp": "stellaops-baseline-v3"
},
"evidence_refs": [
"dsse:sha256:…/reachability.json",
"dsse:sha256:…/build-args.att",
"dsse:sha256:…/policy-lattice.proof"
]
}
}
```
**Action items:**
* Write a **JSON schema** for the `analysis` block (required for `not_affected`):
* `entry_points_audited`: non-empty array of strings.
* `limits`: object with at least one of `image_digest`, `config_profile`, `args`, `seccomp`, `feature_flags`.
* `evidence_refs`: non-empty array of strings.
* Commit this as `security/schemas/openvex.schema.json`.
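A validator enforcing those rules might look like this sketch (field names follow the example above; the real gate would be the JSON schema in CI):

```python
REQUIRED_LIMIT_KEYS = ("image_digest", "config_profile", "args",
                       "seccomp", "feature_flags")

def validate_not_affected_analysis(stmt):
    """Return a list of schema violations; empty list means valid.
    Only not_affected statements require the analysis block."""
    errors = []
    if stmt.get("status") != "not_affected":
        return errors
    analysis = stmt.get("analysis") or {}
    if not analysis.get("entry_points_audited"):
        errors.append("analysis.entry_points_audited must be a non-empty array")
    limits = analysis.get("limits") or {}
    if not any(k in limits for k in REQUIRED_LIMIT_KEYS):
        errors.append("analysis.limits needs at least one recognized constraint")
    if not analysis.get("evidence_refs"):
        errors.append("analysis.evidence_refs must be a non-empty array")
    return errors
```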
### 2.2. Fix the allowed `justification` values
Publish an internal list, e.g.:
* `vulnerable_code_not_in_execute_path`
* `component_not_present`
* `vulnerable_code_cannot_be_controlled_by_adversary`
* `inline_mitigation_already_in_place`
* `protected_by_environment` (e.g., mandatory sandbox, read-only FS)
**Rule:** any `not_affected` must pick one of these. Any new justification needs security team approval.
---
## 3. Developer process for handling a new vuln (week 2)
This is the **“how to act”** guide devs follow when a CVE pops up in scanners or customer reports.
### 3.1. Decision flow
1. **Is the vulnerable component actually present?**
* If no → `status: not_affected`, `justification: component_not_present`.
Still fill out `products` and `impact_statement` (explain why it's not present: different version, module excluded, etc.).
2. **If present: analyze reachability.**
* Identify **entry points** of the service:
* HTTP routes, gRPC methods, message consumers, CLI commands, cron jobs, startup hooks.
* Check:
* Is the vulnerable path reachable from any of these?
* Is it blocked by configuration / feature flags / sandboxing?
3. **If reachable or unclear → treat as `affected`.**
* Plan a patch, workaround, or runtime mitigation.
4. **If not reachable & you can argue that clearly → `not_affected` with proof.**
* Fill in:
* `entry_points_audited`
* `limits`
* `evidence_refs`
* `impact_statement` (“why safe”)
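The flow above can be condensed into a small helper (a sketch; `None` stands in for "unclear" reachability, which the flow treats the same as reachable):

```python
def decide_status(component_present, reachable):
    """Return (status, justification) per the decision flow above."""
    if not component_present:
        return "not_affected", "component_not_present"
    if reachable is None or reachable:
        # reachable or unclear -> treat as affected, plan mitigation
        return "affected", None
    return "not_affected", "vulnerable_code_not_in_execute_path"
```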
### 3.2. Developer checklist (drop this into your docs)
> **Stella Ops `not_affected` checklist**
>
> For any CVE you mark as `not_affected`:
>
> 1. **Identify product + artifact**
>    * [ ] PURL (package URL)
>    * [ ] Image digest / binary hash
> 2. **Audit execution**
>    * [ ] List entry points you reviewed
>    * [ ] Note the limits (config profile, feature flags, container args, sandbox)
> 3. **Collect evidence**
>    * [ ] Reachability analysis (manual or tool report)
>    * [ ] Config snapshot (YAML, env vars, Helm values)
>    * [ ] Tests or traces (if applicable)
> 4. **Write VEX statement**
>    * [ ] `status = not_affected`
>    * [ ] `justification` from allowed list
>    * [ ] `impact_statement` explains “why safe”
>    * [ ] `analysis.entry_points_audited`, `analysis.limits`, `analysis.evidence_refs`
> 5. **Wire into repo**
>    * [ ] Proofs stored under `security/proofs/CVE-…/`
>    * [ ] VEX updated under `security/vex/`
> 6. **Request review**
>    * [ ] Security reviewer approved in PR
---
## 4. Automation & tooling for devs (week 2–3)
Make it easy to “do the right thing” with a small CLI and CI jobs.
### 4.1. Add a small `vexctl` helper
Language doesn't matter—Python is fine. Rough sketch:
```python
#!/usr/bin/env python3
import json
from pathlib import Path
from datetime import datetime, timezone

VEX_PATH = Path("security/vex/openvex.json")

def load_vex():
    if VEX_PATH.exists():
        return json.loads(VEX_PATH.read_text())
    return {"document": {}, "statements": []}

def save_vex(data):
    VEX_PATH.write_text(json.dumps(data, indent=2, sort_keys=True))

def add_statement():
    cve = input("CVE ID (e.g. CVE-2025-1234): ").strip()
    product = input("Product PURL: ").strip()
    status = input("Status [affected/not_affected/fixed]: ").strip()

    justification = None
    analysis = None
    if status == "not_affected":
        justification = input("Justification (from allowed list): ").strip()
        entry_points = input("Entry points (comma-separated): ").split(",")
        limits_profile = input("Config profile (e.g. prod/stage): ").strip()
        image_digest = input("Image digest (optional): ").strip()
        evidence = input("Evidence refs (comma-separated): ").split(",")
        # Drop empty limit values so optional fields don't become nulls.
        limits = {k: v for k, v in {
            "config_profile": limits_profile,
            "image_digest": image_digest,
        }.items() if v}
        analysis = {
            "entry_points_audited": [e.strip() for e in entry_points if e.strip()],
            "limits": limits,
            "evidence_refs": [e.strip() for e in evidence if e.strip()],
        }

    impact = input("Impact / why safe (short text): ").strip()

    vex = load_vex()
    vex.setdefault("document", {})
    vex.setdefault("statements", [])
    stmt = {
        "vulnerability": cve,
        "products": [product],
        "status": status,
        "impact_statement": impact,
        # Timezone-aware UTC timestamp, rendered with a trailing "Z".
        "timestamp": datetime.now(timezone.utc).isoformat().replace("+00:00", "Z"),
    }
    if justification:
        stmt["justification"] = justification
    if analysis:
        stmt["analysis"] = analysis
    vex["statements"].append(stmt)
    save_vex(vex)
    print(f"Added VEX statement for {cve}")

if __name__ == "__main__":
    add_statement()
```
**Dev UX:** run:
```bash
./tools/vexctl add
```
and follow prompts instead of hand-editing JSON.
### 4.2. Schema validation in CI
Add a CI job (GitHub Actions example) that:
1. Installs `jsonschema`.
2. Validates `security/vex/openvex.json` against `security/schemas/openvex.schema.json`.
3. Fails if:
   * any `not_affected` statement lacks `analysis.*` fields, or
   * `justification` is not in the allowed list.
```yaml
name: VEX validation

on:
  pull_request:
    paths:
      - "security/vex/**"
      - "security/schemas/**"

jobs:
  validate-vex:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install deps
        run: pip install jsonschema
      - name: Validate OpenVEX
        run: |
          python tools/validate_vex.py
```
Example `validate_vex.py` core logic:
```python
import json
import sys
from pathlib import Path
from jsonschema import validate, ValidationError

schema = json.loads(Path("security/schemas/openvex.schema.json").read_text())
vex = json.loads(Path("security/vex/openvex.json").read_text())

try:
    validate(instance=vex, schema=schema)
except ValidationError as e:
    print("VEX schema validation failed:", e, file=sys.stderr)
    sys.exit(1)

ALLOWED_JUSTIFICATIONS = {
    "vulnerable_code_not_in_execute_path",
    "component_not_present",
    "vulnerable_code_cannot_be_controlled_by_adversary",
    "inline_mitigation_already_in_place",
    "protected_by_environment",
}

for stmt in vex.get("statements", []):
    if stmt.get("status") == "not_affected":
        just = stmt.get("justification")
        if just not in ALLOWED_JUSTIFICATIONS:
            print(f"Invalid justification '{just}' in statement {stmt.get('vulnerability')}")
            sys.exit(1)
        analysis = stmt.get("analysis") or {}
        missing = []
        if not analysis.get("entry_points_audited"):
            missing.append("analysis.entry_points_audited")
        if not analysis.get("limits"):
            missing.append("analysis.limits")
        if not analysis.get("evidence_refs"):
            missing.append("analysis.evidence_refs")
        if missing:
            print(
                f"'not_affected' for {stmt.get('vulnerability')} missing fields: {', '.join(missing)}"
            )
            sys.exit(1)
```
---
## 5. Signing & publishing VEX + proof bundles (week 3)
### 5.1. Signing
Pick a signing mechanism (e.g., DSSE + cosign/in-toto), but keep the dev-visible rules simple:

* CI step:
  1. Build artifact (image/binary).
  2. Generate/update SBOM.
  3. Validate VEX.
  4. **Sign**:
     * The artifact.
     * The SBOM.
     * The VEX document.

Enforce **KMS-backed keys** controlled by the security team.
### 5.2. Publishing layout
Decide a canonical layout in your artifact registry / S3:
```text
artifacts/
  svc-api/
    1.7.3/
      image.tar
      sbom.spdx.json
      vex.openvex.json
      proofs/
        CVE-2025-1234/
          reachability.json
          configs/
          tests/
```
Link evidence by digest (`evidence_refs`) so you can prove exactly what you audited.
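Digest-pinning can be as simple as hashing each file in the proof bundle. The `dsse:sha256:` prefix below mirrors the earlier `evidence_refs` examples and is an assumption, not a standardized URI scheme:

```python
# Sketch: derive digest-pinned evidence_refs for proof-bundle files,
# so a VEX statement references exactly the bytes that were audited.
# The ref format is illustrative, not a fixed scheme.
import hashlib
from pathlib import Path

def evidence_ref(path: Path) -> str:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return f"dsse:sha256:{digest}/{path.name}"

def bundle_refs(proof_dir: Path) -> list:
    # Sorted for deterministic output across runs.
    return sorted(evidence_ref(p) for p in proof_dir.rglob("*") if p.is_file())
```
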
---
## 6. PR / review policy (week 34)
### 6.1. Add a PR checklist item
In your PR template:
```md
### Security / VEX
- [ ] If this PR **changes how we handle a known CVE** or marks one as `not_affected`, I have:
  - [ ] Updated `security/vex/openvex.json`
  - [ ] Added/updated proof bundle under `security/proofs/`
  - [ ] Run `./tools/vexctl` and CI VEX validation locally
```
### 6.2. Require security reviewer for `not_affected` changes
Add a CODEOWNERS entry:
```text
/security/vex/* @stellaops-security-team
/security/proofs/* @stellaops-security-team
```
* Any PR touching these paths must be approved by security.
---
## 7. Handling temporary exceptions (time-boxed VEX)
Sometimes you're only safe because of a **temporary constraint** (e.g., feature flag off until patch). For those:
1. Add a `constraints` block:
```json
"constraints": {
  "control": "feature_flag",
  "name": "ENABLE_UNSAFE_PLUGIN_API",
  "required_value": "false",
  "expires": "2025-12-31T23:59:59Z"
}
```
2. Add a scheduled job (e.g., weekly) that:
   * Parses VEX.
   * Finds any `constraints.expires < now()`.
   * Opens an issue or fails a synthetic CI job: “Constraint expired: re-evaluate CVE-2025-1234”.
Dev guidance: **do not** treat time-boxed exceptions as permanent; they must be re-reviewed or turned into `affected` + mitigation.
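A minimal sketch of that scheduled check, assuming the `constraints.expires` field from step 1:

```python
# Find VEX statements whose time-boxed constraint has expired.
# Field names follow the "constraints" block above; the job would
# open an issue or fail CI for each returned CVE id.
from datetime import datetime, timezone
from typing import Optional

def expired_constraints(vex: dict, now: Optional[datetime] = None) -> list:
    now = now or datetime.now(timezone.utc)
    expired = []
    for stmt in vex.get("statements", []):
        expires = stmt.get("constraints", {}).get("expires")
        if expires:
            # Accept the trailing-"Z" form used in the example above.
            deadline = datetime.fromisoformat(expires.replace("Z", "+00:00"))
            if deadline < now:
                expired.append(stmt.get("vulnerability"))
    return expired
```
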
---
## 8. Rollout plan by week
You can present this timeline internally:
* **Week 1**
  * Finalize OpenVEX + `analysis` schema.
  * Create `security/` layout in 1–2 key services.
  * Publish allowed `justification` list + written policy.
* **Week 2**
  * Implement `vexctl` helper.
  * Add CI validation job.
  * Pilot with one real CVE decision; walk through full proof bundle creation.
* **Week 3**
  * Add signing + publishing steps for SBOM and VEX.
  * Wire artifact registry layout, link VEX + proofs per release.
* **Week 4**
  * Enforce CODEOWNERS + PR checklist across all services.
  * Enable scheduled checks for expiring constraints.
  * Run internal training (30–45 min) walking through:
    * “Bad VEX” (hand-wavy, no entry points) vs
    * “Good VEX” (clear scope, evidence, limits).
---
## 9. What you can hand to devs right now
If you want, you can literally paste these as separate internal docs:
* **“How to mark a CVE as not_affected at Stella Ops”**
  * Copy section 3 (decision flow + checklist) and the VEX snippet.
* **“VEX technical reference for developers”**
  * Copy sections 1–2 and 4 (structure, schema, CLI, CI validation).
* **“VEX operations runbook”**
  * Copy sections 5–7 (signing, publishing, exceptions).
---
If you tell me which CI system you use (GitHub Actions, GitLab CI, Circle, etc.) and your primary stack (Java, Go, Node, etc.), I can turn this into exact job configs and maybe a more tailored `vexctl` CLI for your environment.

Here's a simple, low-friction way to keep priorities fresh without constant manual grooming: **let confidence decay over time**.
![A small curve sloping down over time, illustrating exponential decay](https://dummyimage.com/800x250/ffffff/000000&text=confidence\(t\)%20=%20e^{-t/τ})
# Exponential confidence decay (what & why)
* **Idea:** Every item (task, lead, bug, doc, hypothesis) has a confidence score that **automatically shrinks with time** if you don't touch it.
* **Formula:** `confidence(t) = e^(-t/τ)` where `t` is days since last signal (edit, comment, commit, new data), and **τ (“tau”)** is the decay constant.
* **Rule of thumb:** With **τ = 30 days**, at **t = 30** the confidence is **e^(-1) ≈ 0.37**—about a **63% drop**. This surfaces long-ignored items *gradually*, not with harsh “stale/expired” flips.
# How to use it in practice
* **Signals that reset t → 0:** comment on the ticket, new benchmark, fresh log sample, doc update, CI run, new market news.
* **Sort queues by:** `priority × confidence(t)` (or severity × confidence). Quiet items drift down; truly active ones stay up.
* **Escalation bands:**
  * `>0.6` = green (recently touched)
  * `0.3–0.6` = amber (review soon)
  * `<0.3` = red (poke or close)
# Quick presets
* **Fast-moving queues (incidents, hot leads):** τ = **7–14** days
* **Engineering tasks / product docs:** τ = **30** days
* **Research bets / roadmaps:** τ = **60–90** days
# For your world (StellaOps + ops/dev work)
* **Vuln tickets:** `risk_score = CVSS × reachability × e^(-t/30)`
* **Roadmap epics:** `value_score = impact × e^(-t/60)` to re-rank quarterly.
* **Docs:** show a badge “freshness: 42%” derived from last edit age to nudge updates.
# Minimal implementation sketch
* Store per-item: `last_signal_at`, `base_priority`.
* Compute on read:
```
days = (now - last_signal_at).days
conf = exp(-days / tau)
score = base_priority * conf
```
* Recompute in your API layer or materialize nightly; no cron spam needed.
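A runnable version of that sketch, including the color bands from above (function names are illustrative):

```python
# confidence(t) = e^(-t/τ): decays toward 0 as days-since-signal grows.
from datetime import datetime, timezone
from math import exp
from typing import Optional

def confidence(last_signal_at: datetime, tau_days: float,
               now: Optional[datetime] = None) -> float:
    now = now or datetime.now(timezone.utc)
    days = (now - last_signal_at).total_seconds() / 86400.0
    return exp(-max(days, 0.0) / tau_days)

def band(conf: float) -> str:
    # Escalation bands from above: green / amber / red.
    return "green" if conf > 0.6 else ("amber" if conf >= 0.3 else "red")
```
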
If you want, I can draft a tiny C# helper (and SQL snippet) you can drop into your issue service to add `confidence(t)` and color bands to your lists.
Perfect, let's turn the idea into something your devs can actually build.
Below is an implementation plan you can drop into a ticketing/PRD — with clear phases, data model changes, APIs, and some sample code (C# + SQL). I'll also sprinkle in StellaOps-specific notes.
---
## 0. Scope & Objectives
**Goal:** Introduce `confidence(t)` as an automatic freshness factor that decays with time and is used to rank and highlight work.
We'll apply it to:
* Vulnerabilities (StellaOps)
* General issues / tasks / epics
* (Optional) Docs, leads, hypotheses later
**Core behavior:**

* Each item has:
  * A base priority / risk (from severity, business impact, etc.)
  * A timestamp of last signal (meaningful activity)
  * A decay rate τ (tau) in days
* Effective priority = `base_priority × confidence(t)`
* `confidence(t) = exp(-t / τ)` where `t` = days since last_signal
---
## 1. Data Model Changes
### 1.1. Add fields to core “work item” tables
For each relevant table (`Issues`, `Vulnerabilities`, `Epics`, …):
**New columns:**

* `base_priority` (FLOAT or INT)
  * Example: 1–100, or derived from severity.
* `last_signal_at` (DATETIME, NOT NULL, default = `created_at`)
* `tau_days` (FLOAT, nullable, falls back to type default)
* (Optional) `confidence_score_cached` (FLOAT, for materialized score)
* (Optional) `is_confidence_frozen` (BOOL, default FALSE)
  * For pinned items that should not decay.
**Example Postgres migration (Issues):**
```sql
ALTER TABLE issues
ADD COLUMN base_priority DOUBLE PRECISION,
ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
ADD COLUMN tau_days DOUBLE PRECISION,
ADD COLUMN confidence_cached DOUBLE PRECISION,
ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
For StellaOps:
```sql
ALTER TABLE vulnerabilities
ADD COLUMN base_risk DOUBLE PRECISION,
ADD COLUMN last_signal_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
ADD COLUMN tau_days DOUBLE PRECISION,
ADD COLUMN confidence_cached DOUBLE PRECISION,
ADD COLUMN is_confidence_frozen BOOLEAN NOT NULL DEFAULT FALSE;
```
### 1.2. Add a config table for τ per entity type
```sql
CREATE TABLE confidence_decay_config (
id SERIAL PRIMARY KEY,
entity_type TEXT NOT NULL, -- 'issue', 'vulnerability', 'epic', 'doc'
tau_days_default DOUBLE PRECISION NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
INSERT INTO confidence_decay_config (entity_type, tau_days_default) VALUES
('incident', 7),
('vulnerability', 30),
('issue', 30),
('epic', 60),
('doc', 90);
```
---
## 2. Define “signal” events & instrumentation
We need a standardized way to say: “this item got activity → reset last_signal_at”.
### 2.1. Signals that should reset `last_signal_at`
For **issues / epics:**
* New comment
* Status change (e.g., Open → In Progress)
* Field change that matters (severity, owner, milestone)
* Attachment added
* Link to PR added or updated
* New CI failure linked
For **vulnerabilities (StellaOps):**
* New scanner result attached or status updated (e.g., “Verified”, “False Positive”)
* New evidence (PoC, exploit notes)
* SLA override change
* Assignment / ownership change
* Integration events (e.g., PR merge that references the vuln)
For **docs (if you do it):**
* Any edit
* Comment/annotation
### 2.2. Implement a shared helper to record a signal
**Service-level helper (pseudocode / C#-ish):**
```csharp
public interface IConfidenceSignalService
{
    Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null);
}

public class ConfidenceSignalService : IConfidenceSignalService
{
    private readonly IWorkItemRepository _repo;
    private readonly IConfidenceConfigService _config;

    public ConfidenceSignalService(IWorkItemRepository repo, IConfidenceConfigService config)
    {
        _repo = repo;
        _config = config;
    }

    public async Task RecordSignalAsync(WorkItemType type, Guid itemId, DateTime? signalTimeUtc = null)
    {
        var now = signalTimeUtc ?? DateTime.UtcNow;
        var item = await _repo.GetByIdAsync(type, itemId);
        if (item == null) return;

        item.LastSignalAt = now;
        if (item.TauDays == null)
        {
            item.TauDays = await _config.GetDefaultTauAsync(type);
        }
        await _repo.UpdateAsync(item);
    }
}
```
### 2.3. Wire signals into existing flows
Create small tasks for devs like:
* **ISS-01:** Call `RecordSignalAsync` on:
  * New issue comment handler
  * Issue status update handler
  * Issue field update handler (severity/priority/owner)
* **VULN-01:** Call `RecordSignalAsync` when:
  * New scanner result ingested for a vuln
  * Vulnerability status, SLA, or owner changes
  * New exploit evidence is attached
---
## 3. Confidence & scoring calculation
### 3.1. Shared confidence function
Definition:
```csharp
public static class ConfidenceMath
{
    // t = days since last signal
    public static double ConfidenceScore(DateTime lastSignalAtUtc, double tauDays, DateTime? nowUtc = null)
    {
        var now = nowUtc ?? DateTime.UtcNow;
        var tDays = (now - lastSignalAtUtc).TotalDays;
        if (tDays <= 0) return 1.0;
        if (tauDays <= 0) return 1.0; // guard / fallback

        var score = Math.Exp(-tDays / tauDays);

        // Optional: never drop below a tiny floor, so items never "disappear"
        const double floor = 0.01;
        return Math.Max(score, floor);
    }
}
```
### 3.2. Effective priority formulas
**Generic issues / tasks:**
```csharp
double effectiveScore = issue.BasePriority * ConfidenceMath.ConfidenceScore(issue.LastSignalAt, issue.TauDays ?? defaultTau);
```
**Vulnerabilities (StellaOps):**
Let's define:

* `severity_weight`: map CVSS or severity string to numeric (e.g. Critical=100, High=80, Medium=50, Low=20).
* `reachability`: 0–1 (e.g. from your reachability analysis).
* `exploitability`: 0–1 (optional, based on known exploits).
* `confidence`: as above.
```csharp
double baseRisk = severityWeight * reachability * exploitability; // or simpler: severityWeight * reachability
double conf = ConfidenceMath.ConfidenceScore(vuln.LastSignalAt, vuln.TauDays ?? defaultTau);
double effectiveRisk = baseRisk * conf;
```
Store `baseRisk` → `vulnerabilities.base_risk`, and compute `effectiveRisk` on the fly or via job.
### 3.3. SQL implementation (optional for server-side sorting)
**Postgres example:**
```sql
-- t_days = age in days
-- tau = tau_days
-- score = exp(-t_days / tau)
SELECT
i.*,
i.base_priority *
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS effective_priority
FROM issues i
ORDER BY effective_priority DESC;
```
You can wrap that in a view:
```sql
CREATE VIEW issues_with_confidence AS
SELECT
i.*,
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS confidence,
i.base_priority *
GREATEST(
EXP(- EXTRACT(EPOCH FROM (NOW() - i.last_signal_at)) / (86400 * COALESCE(i.tau_days, 30))),
0.01
) AS effective_priority
FROM issues i;
```
---
## 4. Caching & performance
You have two options:
### 4.1. Compute on read (simplest to start)
* Use the helper function in your service layer or a DB view.
* Pros:
  * No jobs, always fresh.
* Cons:
  * Slight CPU cost on heavy lists.
**Plan:** Start with this. If you see perf issues, move to 4.2.
### 4.2. Periodic materialization job (optional later)
Add a scheduled job (e.g. hourly) that:
1. Selects all active items.
2. Computes `confidence_score` and `effective_priority`.
3. Writes to `confidence_cached` and `effective_priority_cached` (if you add such a column).
Service then sorts by cached values.
---
## 5. Backfill & migration
### 5.1. Initial backfill script
For existing records:
* If `last_signal_at` is NULL → set to `created_at`.
* Derive `base_priority` / `base_risk` from existing severity fields.
* Set `tau_days` from config.
**Example:**
```sql
UPDATE issues
SET last_signal_at = created_at
WHERE last_signal_at IS NULL;
UPDATE issues
SET base_priority = CASE severity
WHEN 'critical' THEN 100
WHEN 'high' THEN 80
WHEN 'medium' THEN 50
WHEN 'low' THEN 20
ELSE 10
END
WHERE base_priority IS NULL;
UPDATE issues i
SET tau_days = c.tau_days_default
FROM confidence_decay_config c
WHERE c.entity_type = 'issue'
AND i.tau_days IS NULL;
```
Do similarly for `vulnerabilities` using severity / CVSS.
### 5.2. Sanity checks
Add a small script/test to verify:
* Newly created items → `confidence ≈ 1.0`.
* 30-day-old items with τ=30 → `confidence ≈ 0.37`.
* Ordering changes when you edit/comment on items.
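Those checks can live in a tiny standalone script; the helper here mirrors `ConfidenceMath` (floor included) and is a sketch, not shared production code:

```python
# Sanity checks for the decay math, runnable without the service.
from datetime import datetime, timedelta, timezone
from math import exp, isclose

FLOOR = 0.01  # same floor as the ConfidenceMath helper

def confidence(last_signal_at, tau_days, now):
    days = (now - last_signal_at).total_seconds() / 86400.0
    return max(exp(-max(days, 0.0) / tau_days), FLOOR)

now = datetime.now(timezone.utc)
# Newly created items → confidence ≈ 1.0
assert isclose(confidence(now, 30.0, now), 1.0)
# 30-day-old items with τ = 30 → confidence ≈ 0.37
assert isclose(confidence(now - timedelta(days=30), 30.0, now), exp(-1))
# A fresh signal outranks an older one (ordering changes after activity)
assert confidence(now, 30.0, now) > confidence(now - timedelta(days=10), 30.0, now)
```
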
---
## 6. API & Query Layer
### 6.1. New sorting options
Update list APIs:
* Accept parameter: `sort=effective_priority` or `sort=confidence`.
* Default sort for some views:
  * Vulnerabilities backlog: `sort=effective_risk` (risk × confidence).
  * Issues backlog: `sort=effective_priority`.
**Example REST API contract:**
`GET /api/issues?sort=effective_priority&state=open`
**Response fields (additions):**
```json
{
"id": "ISS-123",
"title": "Fix login bug",
"base_priority": 80,
"last_signal_at": "2025-11-01T10:00:00Z",
"tau_days": 30,
"confidence": 0.63,
"effective_priority": 50.4,
"confidence_band": "amber"
}
```
### 6.2. Confidence banding (for UI)
Define bands server-side (easy to change):
* Green: `confidence >= 0.6`
* Amber: `0.3 ≤ confidence < 0.6`
* Red: `confidence < 0.3`
You can compute on server:
```csharp
string ConfidenceBand(double confidence) =>
confidence >= 0.6 ? "green"
: confidence >= 0.3 ? "amber"
: "red";
```
---
## 7. UI / UX changes
### 7.1. List views (issues / vulns / epics)
For each item row:
* Show a small freshness pill:
  * Text: `Active`, `Review soon`, `Stale`
  * Derived from confidence band.
* Tooltip:
  * “Confidence 78%. Last activity 3 days ago. τ = 30 days.”
* Sort default: by `effective_priority` / `effective_risk`.
* Filters:
  * `Freshness: [All | Active | Review soon | Stale]`
  * Optionally: “Show stale only” toggle.
**Example labels:**
* Green: “Active (confidence 82%)”
* Amber: “Review soon (confidence 45%)”
* Red: “Stale (confidence 18%)”
### 7.2. Detail views
On an issue / vuln page:
* Add a “Confidence” section:
  * “Confidence: **52%**”
  * “Last signal: **12 days ago**”
  * “Decay τ: **30 days**”
  * “Effective priority: **Base 80 × 0.52 = 42**”
* (Optional) small mini-chart (text-only or simple bar) showing approximate decay, but not necessary for first iteration.
### 7.3. Admin / settings UI
Add an internal settings page:
* Table of entity types with editable τ:
| Entity type | τ (days) | Notes |
| ------------- | -------- | ---------------------------- |
| Incident | 7 | Fast-moving |
| Vulnerability | 30 | Standard risk review cadence |
| Issue | 30 | Sprint-level decay |
| Epic | 60 | Quarterly |
| Doc | 90 | Slow decay |
* Optionally: toggle to pin item (`is_confidence_frozen`) from UI.
---
## 8. StellaOps-specific behavior
For vulnerabilities:
### 8.1. Base risk calculation
Ingested fields you likely already have:
* `cvss_score` or `severity`
* `reachable` (true/false or numeric)
* (Optional) `exploit_available` (bool) or exploitability score
* `asset_criticality` (1–5)
Define `base_risk` as:
```text
severity_weight = f(cvss_score or severity)
reachability = reachable ? 1.0 : 0.5 -- example
exploitability = exploit_available ? 1.0 : 0.7
asset_factor = 0.5 + 0.1 * asset_criticality -- 1 → 0.6, 5 → 1.0
base_risk = severity_weight * reachability * exploitability * asset_factor
```
Store `base_risk` on vuln row.
Then:
```text
effective_risk = base_risk * confidence(t)
```
Use `effective_risk` for backlog ordering and SLA dashboards.
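Transcribed into runnable form (the weights are the example values from the pseudocode above, not fixed StellaOps constants):

```python
# Example base-risk calculation; every weight here is illustrative.
SEVERITY_WEIGHT = {"critical": 100, "high": 80, "medium": 50, "low": 20}

def base_risk(severity: str, reachable: bool, exploit_available: bool,
              asset_criticality: int) -> float:
    severity_weight = SEVERITY_WEIGHT.get(severity.lower(), 10)
    reachability = 1.0 if reachable else 0.5
    exploitability = 1.0 if exploit_available else 0.7
    asset_factor = 0.5 + 0.1 * asset_criticality  # criticality 1 → 0.6, 5 → 1.0
    return severity_weight * reachability * exploitability * asset_factor

def effective_risk(base: float, confidence: float) -> float:
    return base * confidence
```
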
### 8.2. Signals for vulns
Make sure these all call `RecordSignalAsync(Vulnerability, vulnId)`:
* New scan result for same vuln (re-detected).
* Change status to “In Progress”, “Ready for Deploy”, “Verified Fixed”, etc.
* Assigning an owner.
* Attaching PoC / exploit details.
### 8.3. Vuln UI copy ideas
* Pill text:
  * “Risk: 850 (confidence 68%)”
  * “Last analyst activity 11 days ago”
* In backlog view: show **Effective Risk** as main sort, with a smaller subtext “Base 1200 × Confidence 71%”.
---
## 9. Rollout plan
### Phase 1 – Infrastructure (backend-only)
* [ ] DB migrations & config table
* [ ] Implement `ConfidenceMath` and helper functions
* [ ] Implement `IConfidenceSignalService`
* [ ] Wire signals into key flows (comments, state changes, scanner ingestion)
* [ ] Add `confidence` and `effective_priority/risk` to API responses
* [ ] Backfill script + dry run in staging
### Phase 2 – Internal UI & feature flag
* [ ] Add optional sorting by effective score to internal/staff views
* [ ] Add confidence pill (hidden behind feature flag `confidence_decay_v1`)
* [ ] Dogfood internally:
  * Do items bubble up/down as expected?
  * Are any items “disappearing” because decay is too aggressive?
### Phase 3 – Parameter tuning
* [ ] Adjust τ per type based on feedback:
  * If things decay too fast → increase τ
  * If queues rarely change → decrease τ
* [ ] Decide on confidence floor (0.01? 0.05?) so nothing goes to literal 0.
### Phase 4 – General release
* [ ] Make effective score the default sort for key views:
  * Vulnerabilities backlog
  * Issues backlog
* [ ] Document behavior for users (help center / inline tooltip)
* [ ] Add admin UI to tweak τ per entity type.
---
## 10. Edge cases & safeguards
* **New items**
  * `last_signal_at = created_at`, confidence = 1.0.
* **Pinned items**
  * If `is_confidence_frozen = true` → treat confidence as 1.0.
* **Items without τ**
  * Always fall back to the entity type default.
* **Timezones**
  * Always store & compute in UTC.
* **Very old items**
  * Floor the confidence so they're still visible when explicitly searched.
---
If you want, I can turn this into:
* A short **technical design doc** (with sections: Problem, Proposal, Alternatives, Rollout).
* Or a **set of Jira tickets** grouped by backend / frontend / infra that your team can pick up directly.

I'm sharing this because it highlights important recent developments with Rekor — and how its new v2 rollout and behavior with DSSE change what you need to watch out for when building attestations (for example in your StellaOps architecture).
![Image](https://docs.sigstore.dev/sigstore_rekor-horizontal-color.svg)
![Image](https://miro.medium.com/v2/resize%3Afit%3A1200/1%2Abdz7tUqYTQecioDQarHNcw.png)
![Image](https://rewanthtammana.com/sigstore-the-easy-way/images/cosign-attest-sbom-ui.png)
### 🚨 What changed with Rekor v2
* Rekor v2 is now GA: it moves to a tile-backed transparency log backend (via the `rekor-tiles` module), which simplifies maintenance and lowers infrastructure cost. ([blog.sigstore.dev][1])
* The global publicly distributed instance now supports only two entry types: `hashedrekord` (for artifacts) and `dsse` (for attestations). Many previously supported entry types — e.g. `intoto`, `rekord`, `helm`, `rfc3161`, etc. — have been removed. ([blog.sigstore.dev][1])
* The log is now sharded: instead of a single growing Merkle tree, multiple “shards” (trees) are used. This supports better scaling, simpler rotation/maintenance, and easier querying by tree shard + identifier. ([Sigstore][2])
### ⚠️ Why this matters for attestations, and common pitfalls
* Historically, when using DSSE or in-toto-style attestations submitted to Rekor (or via Cosign), the **entire attestation payload** had to be uploaded to Rekor. That becomes problematic when payloads are large. There's a reported case where a 130 MB attestation was rejected due to size. ([GitHub][3])
* The public instance of Rekor historically had a relatively small attestation size limit (on the order of 100KB) for uploads. ([GitHub][4])
* Because Rekor v2 no longer supports many entry types and simplifies the log types, you no longer have a fallback for some of the older attestation/storage formats if they don't fit DSSE/hashedrekord constraints. ([blog.sigstore.dev][1])
### ✅ What you must design for — and pragmatic workarounds
Given your StellaOps architecture goals (deterministic builds, reproducible scans, large SBOMs/metadata, private/offline air-gap compliance), here's what you should consider:
* **Plan for payload-size constraints**: dont assume arbitrary large attestations will be accepted. Keep attestation payloads small — ideally put large blobs (e.g. full SBOMs, large metadata) **outside** DSSE and store them elsewhere (artifact storage, internal logs, blob store) with the attestation only embedding a hash or reference.
* **Use “private logs” / self-hosted Rekor** if you anticipate large payloads — public instance limits make heavy payload uploads impractical. Running your own instance gives you control over size limits and resource allocation. ([GitHub][4])
* **Chunking / sharding**: For large metadata blobs, consider splitting (“sharding”) or chunking the data into smaller pieces, each with its own DSSE/hashedrekord entry, then reference or reassemble externally. This avoids hitting size limits while maintaining inclusion proofs.
* **Build idempotent resubmit logic**: Because DSSE/hashedrekord entries are the only supported types, and large payloads may fail, your pipelines (e.g. StellaOps) should handle retries, partial submits, and ensure idempotence — so resubmits dont create inconsistent or duplicate entries.
* **Persist full attestations outside Rekor**: Since Rekor v2 dropped many types and doesnt store full arbitrary blobs necessarily, ensure that the “source of truth” for large metadata remains under your control (e.g. in your internal storage), with Rekor only storing minimal hashed attestations.
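A sketch of the chunking idea: hash fixed-size chunks and build a small manifest that becomes the DSSE payload, while the chunks themselves live in your own storage. The manifest field names are illustrative:

```python
# Split a large payload (SBOM, scan metadata) into chunks, hash each,
# and emit a small manifest suitable for a DSSE payload. The chunks
# and the full payload stay in your own blob store.
import hashlib

CHUNK_SIZE = 64 * 1024  # tune to stay well under log size limits

def build_manifest(payload: bytes, chunk_size: int = CHUNK_SIZE) -> dict:
    chunks = [payload[i:i + chunk_size]
              for i in range(0, len(payload), chunk_size)]
    return {
        "total_size_bytes": len(payload),
        "chunk_count": len(chunks),
        "chunk_digests": ["sha256:" + hashlib.sha256(c).hexdigest()
                          for c in chunks],
        "root_digest": "sha256:" + hashlib.sha256(payload).hexdigest(),
    }
```
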
### 🎯 What this means for StellaOps
For your “Proof-of-Integrity Graph” + “offline bundle + replayable scans” vision — this means you should treat Rekor (especially public) as a *lightweight ledger for proofs*, not as a full-blown metadata store. In practice:

* Use Rekor for auditing, signature/inclusion proofs, and “commit-to-hash” attestations.
* Store full SBOMs, VEX data, scan metadata, large build/context blobs externally.
* Build your log-sharding, replay, and artifact-reference logic to survive Rekor's constraints.
* For air-gapped workflows, consider a private Rekor or an alternative internal log that mirrors the public transparency-log guarantees but under your control.
If you like — I can pull up **real-world examples** of organizations that already adapted to Rekor v2's DSSE constraints (with chunking, private logs, hybrid storage) — could help shape StellaOps' resilience strategy.
[1]: https://blog.sigstore.dev/rekor-v2-ga/?utm_source=chatgpt.com "Rekor v2 GA - Cheaper to run, simpler to maintain"
[2]: https://docs.sigstore.dev/logging/sharding/?utm_source=chatgpt.com "Sharding"
[3]: https://github.com/sigstore/cosign/issues/3599?utm_source=chatgpt.com "Attestations require uploading entire payload to rekor #3599"
[4]: https://github.com/sigstore/rekor?utm_source=chatgpt.com "sigstore/rekor: Software Supply Chain Transparency Log"
Here's a concrete, developer-friendly implementation plan you can hand to the team. I'll assume the context is “StellaOps + Sigstore/Rekor v2 + DSSE + air-gapped support”.
---
## 0. Shared context & constraints (what devs should keep in mind)
**Key facts (summarized):**
* Rekor v2 keeps only **two** entry types: `hashedrekord` (artifact signatures) and `dsse` (attestations). Older types (`intoto`, `rekord`, etc.) are gone. ([Sigstore Blog][1])
* The **public** Rekor instance enforces a ~**100KB attestation size limit** per upload; bigger payloads must use your **own Rekor instance** instead. ([GitHub][2])
* For DSSE entries, Rekor **does not store the full payload**; it stores hashes and verification material. Users are expected to persist the attestations alongside artifacts in their own storage. ([Go Packages][3])
* People have already hit problems where ~130MB attestations were rejected by Rekor, showing that “just upload the whole SBOM/provenance” is not sustainable. ([GitHub][4])
* Sigstore's **bundle** format is the canonical way to ship DSSE + tlog metadata around as a single JSON object (very useful for offline/air-gapped replay). ([Sigstore][5])
**Guiding principles for the implementation:**
1. **Rekor is a ledger, not a blob store.** We log *proofs* (hashes, inclusion proofs), not big documents.
2. **Attestation payloads live in our storage** (object store / DB).
3. **All Rekor interaction goes through one abstraction** so we can easily switch public/private/none.
4. **Everything is idempotent and replayable** (important for retries and air-gapped exports).
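Principle 4 in miniature: an idempotency-keyed, retry-safe submit. The `submit` callable stands in for a real Rekor client; every name here is illustrative:

```python
# Idempotent, retry-safe log submission keyed by the envelope digest.
# "submitted" maps digest → log entry (in production: a DB table).
import hashlib
import time

def submit_once(envelope: bytes, submitted: dict, submit, max_attempts: int = 3):
    key = hashlib.sha256(envelope).hexdigest()  # idempotency key
    if key in submitted:
        return submitted[key]  # already logged: resubmits create no duplicates
    for attempt in range(max_attempts):
        try:
            entry = submit(envelope)
            submitted[key] = entry  # record before acknowledging success
            return entry
        except ConnectionError:
            time.sleep(2 ** attempt)  # exponential backoff on transient errors
    raise RuntimeError("log submission failed after retries")
```
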
---
## 1. Highlevel architecture
### 1.1 Components
1. **Attestation Builder library (in CI/build tools)**
* Used by build pipelines / scanners / SBOM generators.
* Responsibilities:
* Collect artifact metadata (digest, build info, SBOM, scan results).
* Call Attestation API (below) with **semantic info** and raw payload(s).
2. **Attestation Service (core backend microservice)**
* Single entrypoint for creating and managing attestations.
* Responsibilities:
* Normalize incoming metadata.
* Store large payload(s) in object store.
* Construct **small DSSE envelope** (payload = manifest / summary, not giant blob).
* Persist attestation records & payload manifests in DB.
* Enqueue logsubmission jobs for:
* Public Rekor v2
* Private Rekor v2 (optional)
* Internal event log (DB/Kafka)
* Produce **Sigstore bundles** for offline use.
3. **Log Writer / Rekor Client Worker(s)**
* Background workers consuming submission jobs.
* Responsibilities:
* Submit `dsse` (and optionally `hashedrekord`) entries to configured Rekor instances.
* Handle retries with backoff.
* Guarantee idempotency (no duplicate entries, no inconsistent state).
* Update DB with Rekor log index/uuid and status.
4. **Offline Bundle Exporter (CLI or API)**
* Runs in airgapped cluster.
* Responsibilities:
* Periodically export “new” attestations + bundles since last export.
* Materialize data as tar/zip with:
* Sigstore bundles (JSON)
* Chunk manifests
* Large payload chunks (optional, depending on policy).
5. **Offline Replay Service (connected environment)**
* Runs where internet access and public Rekor are available.
* Responsibilities:
* Read offline bundles from incoming location.
* Replay to:
* Public Rekor
* Cloud storage
* Internal observability
* Write updated status back (e.g., via a status file or callback).
6. **Config & Policy Layer**
* Central (e.g. YAML, env, config DB).
* Controls:
* Which logs to use: `public_rekor`, `private_rekor`, `internal_only`.
* Size thresholds (DSSE payload limit, chunk size).
* Retry/backoff policy.
* Air-gapped mode toggles.
---
## 2. Data model (DB + storage)
Use whatever DB you have (Postgres is fine). Here's a suggested schema; adapt as needed.
### 2.1 Core tables
**`attestations`**
| Column | Type | Description |
| ------------------------ | ----------- | ----------------------------------------- |
| `id` | UUID (PK) | Internal identifier |
| `subject_digest` | text | e.g., `sha256:<hex>` of build artifact |
| `subject_uri` | text | Optional URI (image ref, file path, etc.) |
| `predicate_type` | text | e.g. `https://slsa.dev/provenance/v1` |
| `payload_schema_version` | text | Version of our manifest schema |
| `dsse_envelope_digest` | text | `sha256` of DSSE envelope |
| `bundle_location` | text | URL/path to Sigstore bundle (if cached) |
| `created_at` | timestamptz | Creation time |
| `created_by` | text | Origin (pipeline id, service name) |
| `metadata` | jsonb | Extra labels / tags |
**`payload_manifests`**
| Column | Type | Description |
| --------------------- | ----------- | ------------------------------------------------- |
| `attestation_id` (FK) | UUID | Link to `attestations.id` |
| `total_size_bytes` | bigint | Size of the *full* logical payload |
| `chunk_count` | int | Number of chunks |
| `root_digest` | text | Digest of full payload or Merkle root over chunks |
| `manifest_json` | jsonb | The JSON we sign in the DSSE payload |
| `created_at` | timestamptz | |
**`payload_chunks`**
| Column | Type | Description |
| --------------------- | ----------------------------- | ---------------------- |
| `attestation_id` (FK) | UUID | |
| `chunk_index` | int | 0-based index |
| `chunk_digest` | text | sha256 of this chunk |
| `size_bytes` | bigint | Size of chunk |
| `storage_uri` | text | `s3://…` or equivalent |
| PRIMARY KEY | (attestation_id, chunk_index) | Ensures uniqueness |
**`log_submissions`**
| Column | Type | Description |
| --------------------- | ----------- | --------------------------------------------------------- |
| `id` | UUID (PK) | |
| `attestation_id` (FK) | UUID | |
| `target` | text | `public_rekor`, `private_rekor`, `internal` |
| `submission_key` | text | Idempotency key (see below) |
| `state` | text | `pending`, `in_progress`, `succeeded`, `failed_permanent` |
| `attempt_count` | int | For retries |
| `last_error` | text | Last error message |
| `rekor_log_index` | bigint | If applicable |
| `rekor_log_id` | text | Log ID (tree ID / key ID) |
| `created_at` | timestamptz | |
| `updated_at` | timestamptz | |
Add a **unique index** on `(target, submission_key)` to guarantee idempotency.
---
## 3. DSSE payload design (how to avoid size limits)
### 3.1 Manifest-based DSSE instead of giant payloads
Instead of DSSE-signing the **entire SBOM/provenance blob** (which hits Rekor's 100 KB limit), we sign a **manifest** describing where the payload lives and how to verify it.
**Example manifest JSON** (payload of DSSE, small):
```jsonc
{
"version": "stellaops.manifest.v1",
"subject": {
"uri": "registry.example.com/app@sha256:abcd...",
"digest": "sha256:abcd..."
},
"payload": {
"type": "sbom.spdx+json",
"rootDigest": "sha256:deadbeef...",
"totalSize": 73400320,
"chunkCount": 12
},
"chunks": [
{
"index": 0,
"digest": "sha256:1111...",
"size": 6291456
},
{
"index": 1,
"digest": "sha256:2222...",
"size": 6291456
}
// ...
],
"storagePolicy": {
"backend": "s3",
"bucket": "stellaops-attestations",
"pathPrefix": "sboms/app/abcd..."
}
}
```
* This JSON is small enough to **fit under 100KB** even with lots of chunks, so the DSSE envelope stays small.
* Full SBOM/scan results live in your object store; Rekor logs the DSSE envelope hash.
### 3.2 Chunking logic (Attestation Service)
Config values (can be env vars):
* `CHUNK_SIZE_BYTES` = e.g. 5–10 MiB
* `MAX_DSSE_PAYLOAD_BYTES` = e.g. 70 KiB (keeping margin under Rekor's 100 KB limit)
* `MAX_CHUNK_COUNT` = safety guard
Algorithm:
1. Receive raw payload bytes (SBOM / provenance / scan results).
2. Compute full `root_digest = sha256(payload_bytes)` (or Merkle root if you want more advanced verification).
3. If `len(payload_bytes) <= SMALL_PAYLOAD_THRESHOLD` (e.g. 64 KB):
* Skip chunking.
* Store payload as single object.
* Manifest can optionally omit `chunks` and just record one object.
4. If larger:
* Split into fixed-size chunks (except the last).
* For each chunk:
* Compute `chunk_digest`.
* Upload chunk to object store path derived from `root_digest` + `chunk_index`.
* Insert `payload_chunks` rows.
5. Build manifest JSON with:
* `version`
* `subject`
* `payload` block
* `chunks[]` (no URIs if you don't want to leak details; the URIs can be derived by clients).
6. Check serialized manifest size ≤ `MAX_DSSE_PAYLOAD_BYTES`. If not:
* Option A: increase chunk size so you have fewer chunks.
* Option B: move chunk list to a secondary “chunk index” document and sign only its root digest.
7. DSSE-sign the manifest JSON.
8. Persist DSSE envelope digest + manifest in DB.
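A minimal Python sketch of the algorithm above. Constant values are examples, and the object-store uploads of step 4 plus the `storagePolicy` block are omitted, so treat this as an illustration rather than the service implementation:

```python
import hashlib
import json

# Example values; in the real service these come from config/env.
CHUNK_SIZE_BYTES = 8 * 1024 * 1024
SMALL_PAYLOAD_THRESHOLD = 64 * 1024
MAX_DSSE_PAYLOAD_BYTES = 70 * 1024


def sha256_hex(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()


def build_manifest(subject: dict, payload_type: str, payload: bytes) -> dict:
    """Chunk the payload and build the small manifest that gets DSSE-signed."""
    chunks = []
    if len(payload) > SMALL_PAYLOAD_THRESHOLD:
        for offset in range(0, len(payload), CHUNK_SIZE_BYTES):
            piece = payload[offset:offset + CHUNK_SIZE_BYTES]
            chunks.append({
                "index": offset // CHUNK_SIZE_BYTES,
                "digest": sha256_hex(piece),
                "size": len(piece),
            })
    manifest = {
        "version": "stellaops.manifest.v1",
        "subject": subject,
        "payload": {
            "type": payload_type,
            "rootDigest": sha256_hex(payload),
            "totalSize": len(payload),
            "chunkCount": len(chunks),
        },
        "chunks": chunks,
    }
    serialized = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    if len(serialized.encode("utf-8")) > MAX_DSSE_PAYLOAD_BYTES:
        # Step 6 fallback: fewer/larger chunks, or sign a chunk-index digest.
        raise ValueError("manifest exceeds MAX_DSSE_PAYLOAD_BYTES")
    return manifest
```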
---
## 4. Rekor integration & idempotency
### 4.1 Rekor client abstraction
Implement an interface like:
```ts
interface TransparencyLogClient {
submitDsseEnvelope(params: {
dsseEnvelope: Buffer; // JSON bytes
subjectDigest: string;
predicateType: string;
}): Promise<{
logIndex: number;
logId: string;
entryUuid: string;
}>;
}
```
Provide implementations:
* `PublicRekorClient` (points at `https://rekor.sigstore.dev` or v2 equivalent).
* `PrivateRekorClient` (your own Rekor v2 cluster).
* `NullClient` (for internal-only mode).
Use official API semantics from Rekor OpenAPI / SDKs where possible. ([Sigstore][6])
### 4.2 Submission jobs & idempotency
**Submission key design:**
```text
submission_key = sha256(
"dsse" + "|" +
rekor_base_url + "|" +
dsse_envelope_digest
)
```
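For instance, the key can be computed as (a sketch following the recipe above; any stable separator works as long as it is applied consistently):

```python
import hashlib


def submission_key(rekor_base_url: str, dsse_envelope_digest: str) -> str:
    """Idempotency key: stable for the same envelope + log target."""
    material = "|".join(["dsse", rekor_base_url, dsse_envelope_digest])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()
```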
Workflow in the worker:
1. Worker fetches `log_submissions` with `state = 'pending'` or due for retry.
2. Set `state = 'in_progress'` (optimistic update).
3. Call `client.submitDsseEnvelope`.
4. If success:
* Update `state = 'succeeded'`, set `rekor_log_index`, `rekor_log_id`.
5. If Rekor indicates “already exists” (or returns same logIndex for same envelope):
* Treat as success, update `state = 'succeeded'`.
6. On network/5xx errors:
* Increment `attempt_count`.
* If `attempt_count < MAX_RETRIES`: schedule retry with backoff.
* Else: `state = 'failed_permanent'`, keep `last_error`.
DB constraint: `UNIQUE(target, submission_key)` ensures we don't create conflicting jobs.
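The retry policy in step 6 can be sketched as capped exponential backoff. The constants here (base delay, cap, `MAX_RETRIES = 8`) are illustrative, not prescribed by this design:

```python
# Illustrative policy constants; the design only requires "retries with backoff".
BASE_DELAY_SECONDS = 30
MAX_DELAY_SECONDS = 3600
MAX_RETRIES = 8


def next_retry_delay(attempt_count: int) -> int:
    """Exponential backoff, capped; attempt_count is 1 after the first failure."""
    return min(BASE_DELAY_SECONDS * 2 ** (attempt_count - 1), MAX_DELAY_SECONDS)


def next_state(attempt_count: int) -> str:
    """Map attempt count to the log_submissions.state values used above."""
    return "pending" if attempt_count < MAX_RETRIES else "failed_permanent"
```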
---
## 5. Attestation Service API design
### 5.1 Create attestation (build/scan pipeline → Attestation Service)
**`POST /v1/attestations`**
**Request body (example):**
```json
{
"subject": {
"uri": "registry.example.com/app@sha256:abcd...",
"digest": "sha256:abcd..."
},
"payloadType": "sbom.spdx+json",
"payload": {
"encoding": "base64",
"data": "<base64-encoded-sbom-or-scan>"
},
"predicateType": "https://slsa.dev/provenance/v1",
"logTargets": ["internal", "private_rekor", "public_rekor"],
"airgappedMode": false,
"labels": {
"team": "payments",
"env": "prod"
}
}
```
**Server behavior:**
1. Validate subject & payload.
2. Chunk payload as per rules (section 3).
3. Store payload chunks.
4. Build manifest JSON & DSSE envelope.
5. Insert `attestations`, `payload_manifests`, `payload_chunks`.
6. For each `logTargets`:
* Insert `log_submissions` row with `state = 'pending'`.
7. Optionally construct Sigstore bundle representing:
* DSSE envelope
* Transparency log entry (when available) — for async, you can fill this later.
8. Return `202 Accepted` with resource URL:
```json
{
"attestationId": "1f4b3d...",
"status": "pending_logs",
"subjectDigest": "sha256:abcd...",
"logTargets": ["internal", "private_rekor", "public_rekor"],
"links": {
"self": "/v1/attestations/1f4b3d...",
"bundle": "/v1/attestations/1f4b3d.../bundle"
}
}
```
### 5.2 Get attestation status
**`GET /v1/attestations/{id}`**
Returns:
```json
{
"attestationId": "1f4b3d...",
"subjectDigest": "sha256:abcd...",
"predicateType": "https://slsa.dev/provenance/v1",
"logs": {
"internal": {
"state": "succeeded"
},
"private_rekor": {
"state": "succeeded",
"logIndex": 1234,
"logId": "..."
},
"public_rekor": {
"state": "pending",
"lastError": null
}
},
"createdAt": "2025-11-27T12:34:56Z"
}
```
### 5.3 Get bundle
**`GET /v1/attestations/{id}/bundle`**
* Returns a **Sigstore bundle JSON** that:
* Contains either:
* Only the DSSE + identity + certificate chain (if logs not yet written).
* Or DSSE + log entries (`hashedrekord` / `dsse` entries) for whichever logs are ready. ([Sigstore][5])
* This is what air-gapped exports and verifiers consume.
---
## 6. Air-gapped workflows
### 6.1 In the air-gapped environment
* Attestation Service runs in “air-gapped mode”:
* `logTargets` typically = `["internal", "private_rekor"]`.
* No direct public Rekor.
* **Offline Exporter CLI**:
```bash
stellaops-offline-export \
--since-id <last_exported_attestation_id> \
--output offline-bundle-<timestamp>.tar.gz
```
* Exporter logic:
1. Query DB for new `attestations` > `since-id`.
2. For each attestation:
* Fetch DSSE envelope.
* Fetch current log statuses (private rekor, internal).
* Build or reuse Sigstore bundle JSON.
* Optionally include payload chunks and/or original payload.
3. Write them into a tarball with structure like:
```
/attestations/<id>/bundle.json
/attestations/<id>/chunks/chunk-0000.bin
...
/meta/export-metadata.json
```
### 6.2 In the connected environment
* **Replay Service**:
```bash
stellaops-offline-replay \
--input offline-bundle-<timestamp>.tar.gz \
--public-rekor-url https://rekor.sigstore.dev
```
* Replay logic:
1. Read each `/attestations/<id>/bundle.json`.
2. If `public_rekor` entry not present:
* Extract DSSE envelope from bundle.
* Call Attestation Service “import & log” endpoint or directly call PublicRekorClient.
* Build new updated bundle (with public tlog entry).
3. Emit an updated `result.json` for each attestation (so you can sync status back to original environment if needed).
---
## 7. Observability & ops
### 7.1 Metrics
Have devs expose at least:
* `rekor_submit_requests_total{target, outcome}`
* `rekor_submit_latency_seconds{target}` (histogram)
* `log_submissions_in_queue{target}`
* `attestations_total{predicateType}`
* `attestation_payload_bytes{bucket}` (distribution of payload sizes)
### 7.2 Logging
* Log at **info**:
* Attestation created (subject digest, predicateType, manifest version).
* Log submission succeeded (target, logIndex, logId).
* Log at **warn/error**:
* Any permanent failure.
* Any time DSSE payload nearly exceeds size threshold (to catch misconfig).
### 7.3 Feature flags
* `FEATURE_REKOR_PUBLIC_ENABLED`
* `FEATURE_REKOR_PRIVATE_ENABLED`
* `FEATURE_OFFLINE_EXPORT_ENABLED`
* `FEATURE_CHUNKING_ENABLED` (to allow rolling rollout)
---
## 8. Concrete work breakdown for developers
You can basically drop this as a backlog outline:
1. **Domain model & storage**
* [ ] Implement DB migrations for `attestations`, `payload_manifests`, `payload_chunks`, `log_submissions`.
* [ ] Implement object storage abstraction and contentaddressable layout for chunks.
2. **Attestation Service skeleton**
* [ ] Implement `POST /v1/attestations` with basic validation.
* [ ] Implement manifest building and DSSE envelope creation (no Rekor yet).
* [ ] Persist records in DB.
3. **Chunking & manifest logic**
* [ ] Implement chunker with thresholds & tests (small vs large).
* [ ] Implement manifest JSON builder.
* [ ] Ensure DSSE payload size is under configurable limit.
4. **Rekor client & log submissions**
* [ ] Implement `TransparencyLogClient` interface + Public/Private implementations.
* [ ] Implement `log_submissions` worker (queue + backoff + idempotency).
* [ ] Wire worker into service config and deployment.
5. **Sigstore bundle support**
* [ ] Implement bundle builder given DSSE envelope + log metadata.
* [ ] Add `GET /v1/attestations/{id}/bundle`.
6. **Offline export & replay**
* [ ] Implement Exporter CLI (queries DB, packages bundles and chunks).
* [ ] Implement Replay CLI/service (reads tarball, logs to public Rekor).
* [ ] Document operator workflow for moving tarballs between environments.
7. **Observability & docs**
* [ ] Add metrics, logs, and dashboards.
* [ ] Write verification docs: “How to fetch manifest, verify DSSE, reconstruct payload, and check Rekor.”
---
If you'd like, the next step I can take is to turn this into a stricter format your devs might already use (e.g. Jira epics + stories, or a design-doc template with headers like “Motivation, Alternatives, Risks, Rollout Plan”).
[1]: https://blog.sigstore.dev/rekor-v2-ga/?utm_source=chatgpt.com "Rekor v2 GA - Cheaper to run, simpler to maintain"
[2]: https://github.com/sigstore/rekor?utm_source=chatgpt.com "sigstore/rekor: Software Supply Chain Transparency Log"
[3]: https://pkg.go.dev/github.com/sigstore/rekor/pkg/types/dsse?utm_source=chatgpt.com "dsse package - github.com/sigstore/rekor/pkg/types/dsse"
[4]: https://github.com/sigstore/cosign/issues/3599?utm_source=chatgpt.com "Attestations require uploading entire payload to rekor #3599"
[5]: https://docs.sigstore.dev/about/bundle/?utm_source=chatgpt.com "Sigstore Bundle Format"
[6]: https://docs.sigstore.dev/logging/overview/?utm_source=chatgpt.com "Rekor"

---
Here's a concrete, low-lift way to boost StellaOps's visibility and prove your “deterministic, replayable” moat: publish a **sanitized subset of reachability graphs** as a public benchmark that others can run and score identically.
### What this is (plain English)
* You release a small, carefully scrubbed set of **packages + SBOMs + VEX + call graphs** (source & binaries) with **ground-truth reachability labels** for a curated list of CVEs.
* You also ship a **deterministic scoring harness** (container + manifest) so anyone can reproduce the exact scores, byte-for-byte.
### Why it helps
* **Proof of determinism:** identical inputs → identical graphs → identical scores.
* **Research magnet:** gives labs and tool vendors a neutral yardstick; you become “the” benchmark steward.
* **Biz impact:** easy demo for buyers; lets you publish leaderboards and whitepapers.
### Scope (MVP dataset)
* **Languages:** PHP, JS, Python, plus **binary** (ELF/PE/Mach-O) mini-cases.
* **Units:** 20–30 packages total; 3–6 CVEs per language; 4–6 binary cases (statically & dynamically linked).
* **Artifacts per unit:**
* Package tarball(s) or container image digest
* SBOM (CycloneDX 1.6 + SPDX 3.0.1)
* VEX (known-exploited, not-affected, under-investigation)
* **Call graph** (normalized JSON)
* **Ground truth**: list of vulnerable entrypoints/edges considered *reachable*
* **Determinism manifest**: feed URLs + rule hashes + container digests + tool versions
### Data model (keep it simple)
* `dataset.json`: index of cases with content-addressed URIs (sha256)
* `sbom/`, `vex/`, `graphs/`, `truth/` folders mirroring the index
* `manifest.lock.json`: DSSE-signed record of:
* feeder rules, lattice policies, normalizers (name + version + hash)
* container image digests for each step (scanner/cartographer/normalizer)
* timestamp + signer (StellaOps Authority)
### Scoring harness (deterministic)
* One Docker image: `stellaops/benchmark-harness:<tag>`
* Inputs: dataset root + `manifest.lock.json`
* Outputs:
* `scores.json` (precision/recall/F1, per-case and macro)
* `replay-proof.txt` (hashes of every artifact used)
* **No network** mode (offline-first). Fails closed if any hash mismatches.
### Metrics (clear + auditable)
* Per case: TP/FP/FN for **reachable** functions (or edges), plus optional **sink-reach** verification.
* Aggregates: micro/macro F1; “Determinism Index” (stddev of repeated runs must be 0).
* **Repro test:** the harness reruns N=3 and asserts identical outputs (hash compare).
### Sanitization & legal
* Strip any proprietary code/data; prefer OSS with permissive licenses.
* Replace real package registries with **local mirrors** and pin digests.
* Publish under **CC-BY-4.0** (data) + **Apache-2.0** (harness). Add a simple **contributor license agreement** for external case submissions.
### Baselines to include (neutral + useful)
* “Naïve reachable” (all functions in package)
* “Imports-only” (entrypoints that match the import graph)
* “Call-depth-2” (bounded traversal)
* **Your** graph engine run with **frozen rules** from the manifest (as a reference, not a claim of SOTA)
### Repository layout (public)
```
stellaops-reachability-benchmark/
dataset/
dataset.json
sbom/...
vex/...
graphs/...
truth/...
manifest.lock.json (DSSE-signed)
harness/
Dockerfile
runner.py (CLI)
schema/ (JSON Schemas for graphs, truth, scores)
docs/
HOWTO.md (5-min run)
CONTRIBUTING.md
SANITIZATION.md
LICENSES/
```
### Docs your team can ship in a day
* **HOWTO.md:** `docker run -v $PWD/dataset:/d -v $PWD/out:/o stellaops/benchmark-harness score /d /o`
* **SCHEMA.md:** JSON Schemas for graph and truth (keep fields minimal: `nodes`, `edges`, `purls`, `sinks`, `evidence`).
* **REPRODUCIBILITY.md:** explains DSSE signatures, lockfile, and offline run.
* **LIMITATIONS.md:** clarifies scope (no dynamic runtime traces in v1, etc.).
### Governance (lightweight)
* **Versioned releases:** `v0.1`, `v0.2` with changelogs.
* **Submission gate:** PR template + CI that:
* validates schemas
* checks hashes match lockfile
* re-scores and compares to the contributor's score
* **Leaderboard cadence:** monthly markdown table regenerated by CI.
### Launch plan (2-week sprint)
* **Day 1–2:** pick cases; finalize schemas; write SANITIZATION.md.
* **Day 3–5:** build harness image; implement deterministic runner; freeze `manifest.lock.json`.
* **Day 6–8:** produce ground truth; run baselines; generate initial scores.
* **Day 9–10:** docs + website README; record a 2-minute demo GIF.
* **Day 11–12:** legal review + licenses; create issue labels (“good first case”).
* **Day 13–14:** publish, post on GitHub + LinkedIn; invite Semgrep/Snyk/OSS-Fuzz folks to submit cases.
### Nice-to-have (but easy)
* **JSON Schema** for ground-truth edges so academics can auto-ingest.
* **Small “unknowns” registry** example to show how you annotate unresolved symbols without breaking determinism.
* **Binary mini-lab**: stripped vs non-stripped ELF pair to show your patch-oracle technique in action (truth labels reflect the oracle result).
If you want, I can draft the repo skeleton (folders, placeholder JSON Schemas, a sample `manifest.lock.json`, and a minimal `runner.py` CLI) so you can drop it straight into GitHub.
Got you — let's turn that high-level idea into something your devs can actually pick up and ship.
Below is a **concrete implementation plan** for the *StellaOps Reachability Benchmark* repo: directory structure, components, tasks, and acceptance criteria. You can drop this straight into a ticketing system as epics → stories.
---
## 0. Tech assumptions (adjust if needed)
To be specific, I'll assume:
* **Repo**: `stellaops-reachability-benchmark`
* **Harness language**: Python 3.11+
* **Packaging**: Docker image for the harness
* **Schemas**: JSON Schema (Draft 202012)
* **CI**: GitHub Actions
If your stack differs, you can still reuse the structure and acceptance criteria.
---
## 1. Repo skeleton & project bootstrap
**Goal:** Create a minimal but fully wired repo.
### Tasks
1. **Create skeleton**
* Structure:
```text
stellaops-reachability-benchmark/
dataset/
dataset.json
sbom/
vex/
graphs/
truth/
packages/
manifest.lock.json # initially stub
harness/
reachbench/
__init__.py
cli.py
dataset_loader.py
schemas/
graph.schema.json
truth.schema.json
dataset.schema.json
scores.schema.json
tests/
docs/
HOWTO.md
SCHEMA.md
REPRODUCIBILITY.md
LIMITATIONS.md
SANITIZATION.md
.github/
workflows/
ci.yml
pyproject.toml
README.md
LICENSE
Dockerfile
```
2. **Bootstrap Python project**
* `pyproject.toml` with:
* `reachbench` package
* deps: `jsonschema`, `click` or `typer`, `pyyaml`, `pytest`
* `harness/tests/` with a dummy test to ensure CI is green.
3. **Dockerfile**
* Minimal, pinned versions:
```Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . .
RUN pip install --no-cache-dir .
ENTRYPOINT ["reachbench"]
```
4. **CI basic pipeline (`.github/workflows/ci.yml`)**
* Jobs:
* `lint` (e.g., `ruff` or `flake8` if you want)
* `test` (pytest)
* `build-docker` (just to ensure Dockerfile stays valid)
### Acceptance criteria
* `pip install .` works locally.
* `reachbench --help` prints CLI help (even if commands are stubs).
* CI passes on main branch.
---
## 2. Dataset & schema definitions
**Goal:** Define all JSON formats and enforce them.
### 2.1 Define dataset index format (`dataset/dataset.json`)
**File:** `dataset/dataset.json`
**Example:**
```jsonc
{
"version": "0.1.0",
"cases": [
{
"id": "php-wordpress-5.8-cve-2023-12345",
"language": "php",
"kind": "source", // "source" | "binary" | "container"
"cves": ["CVE-2023-12345"],
"artifacts": {
"package": {
"path": "packages/php/wordpress-5.8.tar.gz",
"sha256": "…"
},
"sbom": {
"path": "sbom/php/wordpress-5.8.cdx.json",
"format": "cyclonedx-1.6",
"sha256": "…"
},
"vex": {
"path": "vex/php/wordpress-5.8.vex.json",
"format": "csaf-2.0",
"sha256": "…"
},
"graph": {
"path": "graphs/php/wordpress-5.8.graph.json",
"schema": "graph.schema.json",
"sha256": "…"
},
"truth": {
"path": "truth/php/wordpress-5.8.truth.json",
"schema": "truth.schema.json",
"sha256": "…"
}
}
}
]
}
```
### 2.2 Define **truth schema** (`harness/reachbench/schemas/truth.schema.json`)
**Model (conceptual):**
```jsonc
{
"case_id": "php-wordpress-5.8-cve-2023-12345",
"vulnerable_components": [
{
"cve": "CVE-2023-12345",
"symbol": "wp_ajax_nopriv_some_vuln",
"symbol_kind": "function", // "function" | "method" | "binary_symbol"
"status": "reachable", // "reachable" | "not_reachable"
"reachable_from": [
{
"entrypoint_id": "web:GET:/foo",
"notes": "HTTP route /foo"
}
],
"evidence": "manual-analysis" // or "unit-test", "patch-oracle"
}
],
"non_vulnerable_components": [
{
"symbol": "wp_safe_function",
"symbol_kind": "function",
"status": "not_reachable",
"evidence": "manual-analysis"
}
]
}
```
**Tasks**
* Implement JSON Schema capturing:
* required fields: `case_id`, `vulnerable_components`
* allowed enums for `symbol_kind`, `status`, `evidence`
* Add unit tests that:
* validate a valid truth file
* fail on various broken ones (missing `case_id`, unknown `status`, etc.)
### 2.3 Define **graph schema** (`harness/reachbench/schemas/graph.schema.json`)
**Model (conceptual):**
```jsonc
{
"case_id": "php-wordpress-5.8-cve-2023-12345",
"language": "php",
"nodes": [
{
"id": "func:wp_ajax_nopriv_some_vuln",
"symbol": "wp_ajax_nopriv_some_vuln",
"kind": "function",
"purl": "pkg:composer/wordpress/wordpress@5.8"
}
],
"edges": [
{
"from": "func:wp_ajax_nopriv_some_vuln",
"to": "func:wpdb_query",
"kind": "call"
}
],
"entrypoints": [
{
"id": "web:GET:/foo",
"symbol": "some_controller",
"kind": "http_route"
}
]
}
```
**Tasks**
* JSON Schema with:
* `nodes[]` (id, symbol, kind, optional purl)
* `edges[]` (`from`, `to`, `kind`)
* `entrypoints[]` (id, symbol, kind)
* Tests: verify a valid graph; invalid ones (missing `id`, unknown `kind`) are rejected.
### 2.4 Dataset index schema (`dataset.schema.json`)
* JSON Schema describing `dataset.json` (version string, cases array).
* Tests: validate the example dataset file.
### Acceptance criteria
* Running a simple script (will be `reachbench validate-dataset`) validates all JSON files in `dataset/` against schemas without errors.
* CI fails if any dataset JSON is invalid.
---
## 3. Lockfile & determinism manifest
**Goal:** Implement `manifest.lock.json` generation and verification.
### 3.1 Lockfile structure
**File:** `dataset/manifest.lock.json`
**Example:**
```jsonc
{
"version": "0.1.0",
"created_at": "2025-01-15T12:00:00Z",
"dataset": {
"root": "dataset/",
"sha256": "…",
"cases": {
"php-wordpress-5.8-cve-2023-12345": {
"sha256": "…"
}
}
},
"tools": {
"graph_normalizer": {
"name": "stellaops-graph-normalizer",
"version": "1.2.3",
"sha256": "…"
}
},
"containers": {
"scanner_image": "ghcr.io/stellaops/scanner@sha256:…",
"normalizer_image": "ghcr.io/stellaops/normalizer@sha256:…"
},
"signatures": [
{
"type": "dsse",
"key_id": "stellaops-benchmark-key-1",
"signature": "base64-encoded-blob"
}
]
}
```
*(Signatures can be optional in v1, but the structure should be there.)*
### 3.2 `lockfile.py` module
**File:** `harness/reachbench/lockfile.py`
**Responsibilities**
* Compute deterministic SHA-256 digest of:
* each case's artifacts (path → hash from `dataset.json`)
* entire `dataset/` tree (sorted traversal)
* Generate new `manifest.lock.json`:
* `version` (hard-coded constant)
* `created_at` (UTC ISO8601)
* `dataset` section with case hashes
* Verification:
* `verify_lockfile(dataset_root, lockfile_path)`:
* recompute hashes
* compare to `lockfile.dataset`
* return boolean + list of mismatches
**Tasks**
1. Implement canonical hashing:
* For text JSON files: normalize with:
* sort keys
* no whitespace
* UTF-8 encoding
* For binaries (packages): raw bytes.
2. Implement `compute_dataset_hashes(dataset_root)`:
* Returns `{"cases": {...}, "root_sha256": "…"}`.
3. Implement `write_lockfile(...)` and `verify_lockfile(...)`.
4. Tests:
* Two calls with same dataset produce identical lockfile (order of `cases` keys normalized).
* Changing any artifact file changes the root hash and causes verify to fail.
### 3.3 CLI commands
Add to `cli.py`:
* `reachbench compute-lockfile --dataset-root ./dataset --out ./dataset/manifest.lock.json`
* `reachbench verify-lockfile --dataset-root ./dataset --lockfile ./dataset/manifest.lock.json`
### Acceptance criteria
* `reachbench compute-lockfile` generates a stable file (byte-for-byte identical across runs).
* `reachbench verify-lockfile` exits with:
* code 0 if matches
* non-zero if mismatch (plus human-readable diff).
---
## 4. Scoring harness CLI
**Goal:** Deterministically score participant results against ground truth.
### 4.1 Result format (participant output)
**Expectation:**
Participants provide `results/` with one JSON per case:
```text
results/
php-wordpress-5.8-cve-2023-12345.json
js-express-4.17-cve-2022-9999.json
```
**Result file example:**
```jsonc
{
"case_id": "php-wordpress-5.8-cve-2023-12345",
"tool_name": "my-reachability-analyzer",
"tool_version": "1.0.0",
"predictions": [
{
"cve": "CVE-2023-12345",
"symbol": "wp_ajax_nopriv_some_vuln",
"symbol_kind": "function",
"status": "reachable"
},
{
"cve": "CVE-2023-12345",
"symbol": "wp_safe_function",
"symbol_kind": "function",
"status": "not_reachable"
}
]
}
```
### 4.2 Scoring model
* Treat scoring as classification over `(cve, symbol)` pairs.
* For each case:
* Truth positives: all `vulnerable_components` with `status == "reachable"`.
* Truth negatives: everything marked `not_reachable` (optional in v1).
* Predictions: all entries with `status == "reachable"`.
* Compute:
* `TP`: predicted reachable & truth reachable.
* `FP`: predicted reachable but truth says not reachable / unknown.
* `FN`: truth reachable but not predicted reachable.
* Metrics:
* Precision, Recall, F1 per case.
* Macro-averaged metrics across all cases.
### 4.3 Implementation (`scoring.py`)
**File:** `harness/reachbench/scoring.py`
**Functions:**
* `load_truth(case_truth_path) -> TruthModel`
* `load_predictions(predictions_path) -> PredictionModel`
* `compute_case_metrics(truth, preds) -> dict`
* returns:
```python
{
"case_id": str,
"tp": int,
"fp": int,
"fn": int,
"precision": float,
"recall": float,
"f1": float
}
```
* `aggregate_metrics(case_metrics_list) -> dict`
* `macro_precision`, `macro_recall`, `macro_f1`, `num_cases`.
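A sketch of `compute_case_metrics` and `aggregate_metrics` matching the definitions above. Sets of `(cve, symbol)` tuples stand in for the Truth/Prediction models to keep the example self-contained:

```python
def compute_case_metrics(case_id: str, truth_reachable: set,
                         predicted_reachable: set) -> dict:
    """Classification over (cve, symbol) pairs, per the scoring model above."""
    tp = len(truth_reachable & predicted_reachable)
    fp = len(predicted_reachable - truth_reachable)  # includes "unknown" symbols
    fn = len(truth_reachable - predicted_reachable)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"case_id": case_id, "tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall, "f1": f1}


def aggregate_metrics(case_metrics: list) -> dict:
    """Unweighted (macro) averages across cases."""
    n = len(case_metrics)

    def mean(key: str) -> float:
        return sum(m[key] for m in case_metrics) / n if n else 0.0

    return {"num_cases": n, "macro_precision": mean("precision"),
            "macro_recall": mean("recall"), "macro_f1": mean("f1")}
```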
### 4.4 CLI: `score`
**Signature:**
```bash
reachbench score \
--dataset-root ./dataset \
--results-root ./results \
--lockfile ./dataset/manifest.lock.json \
--out ./out/scores.json \
[--cases php-*] \
[--repeat 3]
```
**Behavior:**
1. **Verify lockfile** (fail closed if mismatch).
2. Load `dataset.json`, filter cases if `--cases` is set (glob).
3. For each case:
* Load truth file (and validate schema).
* Locate results file (`<case_id>.json`) under `results-root`:
* If missing, treat as all FN (or mark case as “no submission”).
* Load and validate predictions (include a JSON Schema: `results.schema.json`).
* Compute per-case metrics.
4. Aggregate metrics.
5. Write `scores.json`:
```jsonc
{
"version": "0.1.0",
"dataset_version": "0.1.0",
"generated_at": "2025-01-15T12:34:56Z",
"macro_precision": 0.92,
"macro_recall": 0.88,
"macro_f1": 0.90,
"cases": [
{
"case_id": "php-wordpress-5.8-cve-2023-12345",
"tp": 10,
"fp": 1,
"fn": 2,
"precision": 0.91,
"recall": 0.83,
"f1": 0.87
}
]
}
```
6. **Determinism check**:
* If `--repeat N` given:
* Re-run scoring in-memory N times.
* Compare resulting JSON strings (canonicalized via sorted keys).
* If any differ, exit non-zero with message (“non-deterministic scoring detected”).
### 4.5 Offline-only mode
* In `cli.py`, early check:
```python
import os

if os.getenv("REACHBENCH_OFFLINE_ONLY", "1") == "1":
    # Verify no outbound network: by policy, the harness never imports or
    # calls any networking libraries, so in v1 this branch is a documented
    # no-op kept so future versions can enforce the policy actively.
    pass
```
* Document that the harness must not reach out to the internet.
### Acceptance criteria
* Given a small artificial dataset with 2–3 cases and hand-crafted results, `reachbench score` produces expected metrics (assert via tests).
* Running `reachbench score --repeat 3` produces identical `scores.json` across runs.
* Missing results files are handled gracefully (but clearly documented).
---
## 5. Baseline implementations
**Goal:** Provide in-repo baselines that use only the provided graphs (no extra tooling).
### 5.1 Baseline types
1. **Naïve reachable**: all symbols in the vulnerable package are considered reachable.
2. **Imports-only**: reachable = any symbol that:
* appears in the graph AND
* is reachable from any entrypoint by a single edge OR name match.
3. **Call-depth-2**:
* From each entrypoint, traverse up to depth 2 along `call` edges.
* Anything at depth ≤ 2 is considered reachable.
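One way to sketch the depth-2 traversal over the graph format from §2.3. The simplified signature (returning a set of symbols rather than a `PredictionModel`) and the symbol-based entrypoint matching are illustrative choices:

```python
from collections import deque


def baseline_call_depth_2(graph: dict, max_depth: int = 2) -> set:
    """Symbols reachable within `max_depth` call edges of any entrypoint."""
    adjacency = {}
    for edge in graph["edges"]:
        if edge["kind"] == "call":
            adjacency.setdefault(edge["from"], []).append(edge["to"])
    node_symbol = {node["id"]: node["symbol"] for node in graph["nodes"]}

    reachable = set()
    for entry in graph["entrypoints"]:
        # Entrypoints reference a symbol; start from graph nodes carrying it.
        start_ids = [nid for nid, sym in node_symbol.items()
                     if sym == entry["symbol"]]
        queue = deque((nid, 0) for nid in start_ids)
        seen = set(start_ids)
        while queue:
            node_id, depth = queue.popleft()
            reachable.add(node_symbol[node_id])
            if depth < max_depth:
                for nxt in adjacency.get(node_id, []):
                    if nxt not in seen:
                        seen.add(nxt)
                        queue.append((nxt, depth + 1))
    return reachable
```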
### 5.2 Implementation
**File:** `harness/reachbench/baselines.py`
* `baseline_naive(graph, truth) -> PredictionModel`
* `baseline_imports_only(graph, truth) -> PredictionModel`
* `baseline_call_depth_2(graph, truth) -> PredictionModel`
**CLI:**
```bash
reachbench run-baseline \
--dataset-root ./dataset \
--baseline naive|imports|depth2 \
--out ./results-baseline-<baseline>/
```
Behavior:
* For each case:
* Load graph.
* Generate predictions per baseline.
* Write result file `results-baseline-<baseline>/<case_id>.json`.
### 5.3 Tests
* Tiny synthetic dataset in `harness/tests/data/`:
* 1–2 cases with simple graphs.
* Known expectations for each baseline (TP/FP/FN counts).
### Acceptance criteria
* `reachbench run-baseline --baseline naive` runs end-to-end and outputs results files.
* `reachbench score` on baseline results produces stable scores.
* Tests validate baseline behavior on synthetic cases.
---
## 6. Dataset validation & tooling
**Goal:** One command to validate everything (schemas, hashes, internal consistency).
### CLI: `validate-dataset`
```bash
reachbench validate-dataset \
--dataset-root ./dataset \
[--lockfile ./dataset/manifest.lock.json]
```
**Checks:**
1. `dataset.json` conforms to `dataset.schema.json`.
2. For each case:
* all artifact paths exist
* `graph` file passes `graph.schema.json`
* `truth` file passes `truth.schema.json`
3. Optional: verify lockfile if provided.
**Implementation:**
* `dataset_loader.py`:
* `load_dataset_index(path) -> DatasetIndex`
* `iter_cases(dataset_index)` yields case objects.
* `validate_case(case, dataset_root) -> list[str]` (list of error messages).
**Acceptance criteria**
* Broken paths / invalid JSON produce a clear error message and non-zero exit code.
* CI job calls `reachbench validate-dataset` on every push.
---
## 7. Documentation
**Goal:** Make it trivial for outsiders to use the benchmark.
### 7.1 `README.md`
* Overview:
* What the benchmark is.
* What it measures (reachability precision/recall).
* Quickstart:
```bash
git clone ...
cd stellaops-reachability-benchmark
# Validate dataset
reachbench validate-dataset --dataset-root ./dataset
# Run baselines
reachbench run-baseline --baseline naive --dataset-root ./dataset --out ./results-naive
# Score baselines
reachbench score --dataset-root ./dataset --results-root ./results-naive --out ./out/naive-scores.json
```
### 7.2 `docs/HOWTO.md`
* Step-by-step:
* Installing harness.
* Running your own tool on the dataset.
* Formatting your `results/`.
* Running `reachbench score`.
* Interpreting `scores.json`.
### 7.3 `docs/SCHEMA.md`
* Human-readable description of:
* `graph` JSON
* `truth` JSON
* `results` JSON
* `scores` JSON
* Link to actual JSON Schemas.
### 7.4 `docs/REPRODUCIBILITY.md`
* Explain:
* lockfile design
* hashing rules
* deterministic scoring and `--repeat` flag
* how to verify youre using the exact same dataset.
### 7.5 `docs/SANITIZATION.md`
* Rules for adding new cases:
* Only use OSS or properly licensed code.
* Strip secrets / proprietary paths / user data.
* How to confirm nothing sensitive is in package tarballs.
### Acceptance criteria
* A new engineer (or external user) can go from zero to “I ran the baseline and got scores” by following docs only.
* All example commands work as written.
---
## 8. CI/CD details
**Goal:** Keep repo healthy and ensure determinism.
### CI jobs (GitHub Actions)
1. **`lint`**
* Run `ruff` / `flake8` (your choice).
2. **`test`**
* Run `pytest`.
3. **`validate-dataset`**
* Run `reachbench validate-dataset --dataset-root ./dataset`.
4. **`determinism`**
* Small workflow step:
* Run `reachbench score` on a tiny test dataset with `--repeat 3`.
* Assert success.
5. **`docker-build`**
* `docker build` the harness image.
### Acceptance criteria
* All jobs green on main.
* PRs show failing status if schemas or determinism break.
---
## 9. Rough “epics → stories” breakdown
You can paste something roughly like this into Jira/Linear:

1. **Epic: Repo bootstrap & CI**
* Story: Create repo skeleton & Python project
* Story: Add Dockerfile & basic CI (lint + tests)
2. **Epic: Schemas & dataset plumbing**
* Story: Implement `truth.schema.json` + tests
* Story: Implement `graph.schema.json` + tests
* Story: Implement `dataset.schema.json` + tests
* Story: Implement `validate-dataset` CLI
3. **Epic: Lockfile & determinism**
* Story: Implement lockfile computation + verification
* Story: Add `compute-lockfile` & `verify-lockfile` CLI
* Story: Add determinism checks in CI
4. **Epic: Scoring harness**
* Story: Define results format + `results.schema.json`
* Story: Implement scoring logic (`scoring.py`)
* Story: Implement `score` CLI with `--repeat`
* Story: Add unit tests for metrics
5. **Epic: Baselines**
* Story: Implement naive baseline
* Story: Implement imports-only baseline
* Story: Implement depth-2 baseline
* Story: Add `run-baseline` CLI + tests
6. **Epic: Documentation & polish**
* Story: Write README + HOWTO
* Story: Write SCHEMA / REPRODUCIBILITY / SANITIZATION docs
* Story: Final repo cleanup & examples
---
If you tell me your preferred language and CI, I can also rewrite this into exact tickets and even starter code for `cli.py` and a couple of schemas.

Here's a small but high-impact product tweak: **add an immutable `graph_revision_id` to every call-graph page and API link**, so any result is citeable and reproducible across time.
---
### Why it matters (quick)
* **Auditability:** you can prove *which* graph produced a finding.
* **Reproducibility:** reruns that change paths won't “move the goalposts.”
* **Support & docs:** screenshots/links in tickets point to an exact graph state.
### What to add
* **Stable anchor in all URLs:**
`https://…/graphs/{graph_id}?rev={graph_revision_id}`
`https://…/api/graphs/{graph_id}/nodes?rev={graph_revision_id}`
* **Opaque, content-addressed ID:** e.g., `graph_revision_id = blake3( sorted_edges + cfg + tool_versions + dataset_hashes )`.
* **First-class fields:** store `graph_id` (logical lineage), `graph_revision_id` (immutable), `parent_revision_id` (if derived), `created_at`, `provenance` (feed hashes, toolchain).
* **UI surfacing:** show a copy button “Rev: 8f2d…c9” on graph pages and in the “Share” dialog.
* **Diff affordance:** when `?rev=A` and `?rev=B` are both present, offer “Compare paths (A↔B).”
### Minimal API contract (suggested)
* `GET /api/graphs/{graph_id}` → latest + `latest_revision_id`
* `GET /api/graphs/{graph_id}/revisions/{graph_revision_id}` → immutable snapshot
* `GET /api/graphs/{graph_id}/nodes?rev=…` and `/edges?rev=…`
* `POST /api/graphs/{graph_id}/pin` with `{ graph_revision_id }` to mark “official”
* HTTP `Link` header on all responses:
`Link: <…/graphs/{graph_id}/revisions/{graph_revision_id}>; rel="version"`
### How to compute the revision id (deterministic)
* Inputs (all normalized): sorted node/edge sets; build config; tool+model versions; input artifacts (SBOM/VEX/feed) **by hash**; environment knobs (feature flags).
* Serialization: canonical JSON (UTF-8, ordered keys).
* Hash: BLAKE3/sha256 → base58/hex (shortened in UI, full in API).
* Store alongside a manifest (so you can replay the graph later).
### Guardrails
* **Never reuse an ID** if any input bit differs.
* **Do not** make it guessable from business data (avoid leaking repo names, paths).
* **Break glass:** if a bad graph must be purged, keep the ID tombstoned (410 Gone) so references don't silently change.
### StellaOps touches (concrete)
* **Authority**: add `GraphRevisionManifest` (feeds, lattice/policy versions, scanners, in-toto/DSSE attestations).
* **Scanner/Vexer**: emit deterministic manifests and hand them to Authority for id derivation.
* **Ledger**: record `(graph_id, graph_revision_id, manifest_hash, signatures)`; expose audit query by `graph_revision_id`.
* **Docs & Support**: “Attach your `graph_revision_id`” line in issue templates.
### Tiny UX copy
* On graph page header: `Rev 8f2d…c9` · **Copy** · **Compare** · **Pin**
* Share dialog: “This link freezes today's state. New runs get a different rev.”
If you want, I can draft the DB table, the manifest JSON schema, and the exact URL/router changes for your .NET 10 services next.
Cool, let's turn this into something your engineers can actually pick up and implement.
Below is a concrete implementation plan broken down by phases, services, and tickets, with suggested data models, APIs, and tests.
---
## 0. Definitions (shared across teams)
* **Graph ID (`graph_id`)** Logical identifier for a call graph lineage (e.g., “the call graph for build X of repo Y”).
* **Graph Revision ID (`graph_revision_id`)** Immutable identifier for a specific snapshot of that graph, derived from a manifest (content-addressed hash).
* **Parent Revision ID (`parent_revision_id`)** Previous revision in the lineage (if any).
* **Manifest** Canonical JSON blob that describes *everything* that could affect graph structure or results:
* Nodes & edges
* Input feeds and their hashes (SBOM, VEX, scanner output, etc.)
* config/policies/feature flags
* tool + version (scanner, vexer, authority)
---
## 1. High-Level Architecture Changes
1. **Introduce `graph_revision_id` as a first-class concept** in:
* Graph storage / Authority
* Ledger / audit
* Backend APIs serving call graphs
2. **Derive `graph_revision_id` deterministically** from a manifest via a cryptographic hash.
3. **Expose revision in all graph-related URLs & APIs**:
* UI: `…/graphs/{graph_id}?rev={graph_revision_id}`
* API: `…/api/graphs/{graph_id}/revisions/{graph_revision_id}`
4. **Ensure immutability**: once a revision exists, it is never updated in place; it can only be superseded by new revisions.
---
## 2. Backend: Data Model & Storage
### 2.1. Authority (graph source of truth)
**Goal:** Model graphs and revisions explicitly.
**New / updated tables (example in SQL-ish form):**
1. **Graphs (logical entity)**
```sql
CREATE TABLE graphs (
id UUID PRIMARY KEY,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
latest_revision_id VARCHAR(128) NULL, -- FK into graph_revisions.id
label TEXT NULL, -- optional human label
metadata JSONB NULL
);
```
2. **Graph Revisions (immutable snapshots)**
```sql
CREATE TABLE graph_revisions (
id VARCHAR(128) PRIMARY KEY, -- graph_revision_id (hash)
graph_id UUID NOT NULL REFERENCES graphs(id),
parent_revision_id VARCHAR(128) NULL REFERENCES graph_revisions(id),
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
manifest JSONB NOT NULL, -- canonical manifest
provenance JSONB NOT NULL, -- tool versions, etc.
is_pinned BOOLEAN NOT NULL DEFAULT FALSE,
pinned_by UUID NULL, -- user id
pinned_at TIMESTAMPTZ NULL
);
CREATE INDEX idx_graph_revisions_graph_id ON graph_revisions(graph_id);
```
3. **Call Graph Data (if separate)**
If you store nodes/edges in separate tables, add a foreign key to `graph_revision_id`:
```sql
ALTER TABLE call_graph_nodes
ADD COLUMN graph_revision_id VARCHAR(128) NULL;
ALTER TABLE call_graph_edges
ADD COLUMN graph_revision_id VARCHAR(128) NULL;
```
> **Rule:** Nodes/edges for a revision are **never mutated**; a new revision means new rows.
---
### 2.2. Ledger (audit trail)
**Goal:** Every revision gets a ledger record for auditability.
**Table change or new table:**
```sql
CREATE TABLE graph_revision_ledger (
id BIGSERIAL PRIMARY KEY,
graph_revision_id VARCHAR(128) NOT NULL,
graph_id UUID NOT NULL,
manifest_hash VARCHAR(128) NOT NULL,
manifest_digest_algo TEXT NOT NULL, -- e.g., "BLAKE3"
authority_signature BYTEA NULL, -- optional
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW()
);
CREATE INDEX idx_grl_revision ON graph_revision_ledger(graph_revision_id);
```
Ledger ingestion happens **after** a revision is stored in Authority, but **before** it is exposed as “current” in the UI.
---
## 3. Backend: Revision Hashing & Manifest
### 3.1. Define the manifest schema
Create a spec (e.g., JSON Schema) used by Scanner/Vexer/Authority.
**Example structure:**
```json
{
"graph": {
"graph_id": "uuid",
"generator": {
"tool_name": "scanner",
"tool_version": "1.4.2",
"run_id": "some-run-id"
}
},
"inputs": {
"sbom_hash": "sha256:…",
"vex_hash": "sha256:…",
"repos": [
{
"name": "repo-a",
"commit": "abc123",
"tree_hash": "sha1:…"
}
]
},
"config": {
"policy_version": "2024-10-01",
"feature_flags": {
"new_vex_engine": true
}
},
"graph_content": {
"nodes": [
// nodes in canonical sorted order
],
"edges": [
// edges in canonical sorted order
]
}
}
```
**Key requirements:**
* All lists that affect the graph (`nodes`, `edges`, `repos`, etc.) must be **sorted deterministically**.
* Keys must be **stable** (no environment-dependent keys, no random IDs).
* All hashes of input artifacts must be included (not raw content).
### 3.2. Hash computation
Language-agnostic algorithm:
1. Normalize manifest to **canonical JSON**:
* UTF-8
* Sorted keys
* No extra whitespace
2. Hash the bytes using a cryptographic hash (BLAKE3 or SHA-256).
3. Encode as hex or base58 string.
**Reference sketch (Python; SHA-256 shown, BLAKE3 works the same way):**
```python
import hashlib
import json

def compute_graph_revision_id(manifest: dict) -> str:
    # Canonical JSON: UTF-8, sorted keys, no extra whitespace.
    canonical = json.dumps(manifest, sort_keys=True,
                           separators=(",", ":"), ensure_ascii=False)
    digest_hex = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    return "grv_" + digest_hex[:40]  # prefix + shorten for UI
```
**Ticket:** Implement `GraphRevisionIdGenerator` library (shared):
* `Compute(manifest) -> graph_revision_id`
* `ValidateFormat(graph_revision_id) -> bool`
Make this a **shared library** across Scanner, Vexer, Authority to avoid divergence.
---
## 4. Backend: APIs
### 4.1. Graphs & revisions REST API
**New endpoints (example):**
1. **Get latest graph revision**
```http
GET /api/graphs/{graph_id}
Response:
{
"graph_id": "…",
"latest_revision_id": "grv_8f2d…c9",
"created_at": "…",
"metadata": { … }
}
```
2. **List revisions for a graph**
```http
GET /api/graphs/{graph_id}/revisions
Query: ?page=1&pageSize=20
Response:
{
"graph_id": "…",
"items": [
{
"graph_revision_id": "grv_8f2d…c9",
"created_at": "…",
"parent_revision_id": null,
"is_pinned": true
},
{
"graph_revision_id": "grv_3a1b…e4",
"created_at": "…",
"parent_revision_id": "grv_8f2d…c9",
"is_pinned": false
}
]
}
```
3. **Get a specific revision (snapshot)**
```http
GET /api/graphs/{graph_id}/revisions/{graph_revision_id}
Response:
{
"graph_id": "…",
"graph_revision_id": "…",
"created_at": "…",
"parent_revision_id": null,
"manifest": { … }, // optional: maybe not full content if large
"provenance": { … }
}
```
4. **Get nodes/edges for a revision**
```http
GET /api/graphs/{graph_id}/nodes?rev={graph_revision_id}
GET /api/graphs/{graph_id}/edges?rev={graph_revision_id}
```
Behavior:
* If `rev` is **omitted**, return the **latest_revision_id** for that `graph_id`.
* If `rev` is **invalid or unknown**, return `404` (do not silently fall back to latest).
5. **Pin/unpin a revision (optional for v1)**
```http
POST /api/graphs/{graph_id}/pin
Body: { "graph_revision_id": "…" }
DELETE /api/graphs/{graph_id}/pin
Body: { "graph_revision_id": "…" }
```
### 4.2. Backward compatibility
* Existing endpoints like `GET /api/graphs/{graph_id}/nodes` should:
* Continue working with no `rev` param.
* Internally resolve to `latest_revision_id`.
* For old records with no revision:
* Create a synthetic manifest from current stored data.
* Compute a `graph_revision_id`.
* Store it and set `latest_revision_id` on the `graphs` row.
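The synthetic-manifest backfill could look like the sketch below. The `store` interface and the node/edge key names (`id`, `src`, `dst`) are placeholders for the real Authority persistence layer; legacy rows have no recorded input artifact hashes, so `inputs` stays empty.

```python
import hashlib
import json

def synthetic_revision_id(manifest: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) -> SHA-256 -> short id.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return "grv_" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:40]

def backfill_graph(graph_row: dict, nodes: list, edges: list, store) -> str:
    """Build a synthetic manifest from data already stored, derive a
    revision id, persist it, and promote it to latest. Sketch only."""
    manifest = {
        "graph": {"graph_id": graph_row["id"],
                  "generator": {"tool_name": "backfill", "tool_version": "0"}},
        "inputs": {},   # legacy rows: input artifact hashes unknown
        "config": {},
        "graph_content": {
            "nodes": sorted(nodes, key=lambda n: n["id"]),
            "edges": sorted(edges, key=lambda e: (e["src"], e["dst"])),
        },
    }
    revision_id = synthetic_revision_id(manifest)
    store.insert_revision(graph_row["id"], revision_id, manifest)
    store.set_latest_revision(graph_row["id"], revision_id)
    return revision_id
```

Because the manifest is canonicalized, rerunning the backfill over unchanged data yields the same `graph_revision_id`, so the job is safely re-runnable.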
---
## 5. Scanner / Vexer / Upstream Pipelines
**Goal:** At the end of a graph build, they produce a manifest and a `graph_revision_id`.
### 5.1. Responsibilities
1. **Scanner/Vexer**:
* Gather:
* Tool name/version
* Input artifact hashes
* Feature flags / config
* Graph nodes/edges
* Construct manifest (according to schema).
* Compute `graph_revision_id` using shared library.
* Send manifest + revision ID to Authority via an internal API (e.g., `POST /internal/graph-build-complete`).
2. **Authority**:
* Idempotently upsert:
* `graphs` (if new `graph_id`)
* `graph_revisions` row (if `graph_revision_id` not yet present)
* nodes/edges rows keyed by `graph_revision_id`.
* Update `graphs.latest_revision_id` to the new revision.
### 5.2. Internal API (Authority)
```http
POST /internal/graphs/{graph_id}/revisions
Body:
{
"graph_revision_id": "…",
"parent_revision_id": "…", // optional
"manifest": { … },
"provenance": { … },
"nodes": [ … ],
"edges": [ … ]
}
Response: 201 Created (or 200 if idempotent)
```
**Rules:**
* If `graph_revision_id` already exists for that `graph_id` with identical `manifest_hash`, treat as **idempotent**.
* If `graph_revision_id` exists but manifest hash differs → log and reject (bug in hashing).
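The decision rule fits in a few lines. Sketch only: `existing_manifest_hash` stands in for the hash already stored in Authority (None when the revision id is new), and the string results stand in for the real HTTP responses.

```python
def decide_revision_upsert(existing_manifest_hash, incoming_manifest_hash):
    """Apply the idempotency rule for POST /internal/graphs/.../revisions."""
    if existing_manifest_hash is None:
        return "created"          # first time this revision id is seen
    if existing_manifest_hash == incoming_manifest_hash:
        return "idempotent"       # same content: safe no-op
    # Same revision id, different manifest: hashing diverged upstream.
    # Log loudly and reject instead of overwriting.
    raise ValueError("graph_revision_id collision with differing manifest")
```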
---
## 6. Frontend / UX Changes
Assuming a SPA (React/Vue/etc.), we'll treat these as tasks.
### 6.1. URL & routing
* **New canonical URL format** for graph UI:
* Latest: `/graphs/{graph_id}`
* Specific revision: `/graphs/{graph_id}?rev={graph_revision_id}`
* Router:
* Parse `rev` query param.
* If present, call `GET /api/graphs/{graph_id}/nodes?rev=…`.
* If not present, call same endpoint but without `rev` → backend returns latest.
### 6.2. Displaying revision info
* In graph page header:
* Show truncated revision:
* `Rev: 8f2d…c9`
* Buttons:
* **Copy** → Copies full `graph_revision_id`.
* **Share** → Copies full URL with `?rev=…`.
* Optional chip if pinned: `Pinned`.
**Example data model (TS):**
```ts
type GraphRevisionSummary = {
graphId: string;
graphRevisionId: string;
createdAt: string;
parentRevisionId?: string | null;
isPinned: boolean;
};
```
### 6.3. Revision list panel (optional but useful)
* Add a side panel or tab: “Revisions”.
* Fetch from `GET /api/graphs/{graph_id}/revisions`.
* Clicking a revision:
* Navigates to same page with `?rev={graph_revision_id}`.
* Preserves other UI state where reasonable.
### 6.4. Diff view (nice-to-have, can be v2)
* UX: “Compare with…” button in header.
* Opens dialog to pick a second revision.
* Backend: add a diff endpoint later, or compute diff client-side from node/edge lists if feasible.
---
## 7. Migration Plan
### 7.1. Phase 1 Schema & read-path ready
1. **Add DB columns/tables**:
* `graphs`, `graph_revisions`, `graph_revision_ledger`.
* `graph_revision_id` column to `call_graph_nodes` / `call_graph_edges`.
2. **Deploy with no behavior changes**:
* Default `graph_revision_id` columns NULL.
* Existing APIs continue to work.
### 7.2. Phase 2 Backfill existing graphs
1. Write a **backfill job**:
* For each distinct existing graph:
* Build a manifest from existing stored data.
* Compute `graph_revision_id`.
* Insert into `graphs` & `graph_revisions`.
* Update nodes/edges for that graph to set `graph_revision_id`.
* Set `graphs.latest_revision_id`.
2. Log any graphs that can't be backfilled (corrupt data, etc.) for manual review.
3. After backfill:
* Add **NOT NULL** constraint on `graph_revision_id` for nodes/edges (if practical).
* Ensure all public APIs can fetch revisions without changes from clients.
### 7.3. Phase 3 Wire up new pipelines
1. Update Scanner/Vexer to construct manifests and compute revision IDs.
2. Update Authority to accept `/internal/graphs/{graph_id}/revisions`.
3. Gradually roll out:
* Feature flag: `graphRevisionIdFromPipeline`.
* For flagged runs, use the new pipeline; for others, fall back to old + synthetic revision.
### 7.4. Phase 4 Frontend rollout
1. Update UI to:
* Read `rev` from URL (but not required).
* Show `Rev` in header.
* Use revision-aware endpoints.
2. Once stable:
* Update “Share” actions to always include `?rev=…`.
---
## 8. Testing Strategy
### 8.1. Unit tests
* **Hashing library**:
* Same manifest → same `graph_revision_id`.
* Different node ordering → same `graph_revision_id`.
* Tiny manifest change → different `graph_revision_id`.
* **Authority service**:
* Creating a revision stores `graph_revisions` + nodes/edges with matching `graph_revision_id`.
* Duplicate revision (same id + manifest) is idempotent.
* Conflicting manifest with same `graph_revision_id` is rejected.
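The determinism properties above translate almost directly into pytest cases. The `_rev_id` helper here is a local stand-in for the shared `GraphRevisionIdGenerator`; note that only dict key order is normalized by canonical JSON, so node *lists* must be sorted before manifest construction.

```python
import hashlib
import json

def _rev_id(manifest: dict) -> str:
    # Local stand-in for the shared GraphRevisionIdGenerator.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return "grv_" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:40]

def test_same_manifest_same_id():
    m = {"graph_content": {"nodes": ["a", "b"], "edges": []}}
    assert _rev_id(m) == _rev_id(json.loads(json.dumps(m)))

def test_key_order_is_irrelevant():
    # Canonicalization sorts keys, so dict ordering cannot change the id.
    assert _rev_id({"a": 1, "b": 2}) == _rev_id({"b": 2, "a": 1})

def test_list_order_is_not_normalized():
    # Node lists must be sorted upstream; the hash won't do it for you.
    assert _rev_id({"nodes": ["a", "b"]}) != _rev_id({"nodes": ["b", "a"]})

def test_tiny_change_changes_id():
    assert _rev_id({"nodes": ["a"]}) != _rev_id({"nodes": ["a", "b"]})
```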
### 8.2. Integration tests
* Scenario: “Create graph → view in UI”
* Pipeline produces manifest & revision.
* Authority persists revision.
* Ledger logs event.
* UI shows matching `graph_revision_id`.
* Scenario: “Stable permalinks”
* Capture a link with `?rev=…`.
* Rerun pipeline (new revision).
* Old link still shows original nodes/edges.
### 8.3. Migration tests
* On a sanitized snapshot:
* Run migration & backfill.
* Spot-check:
* Each `graph_id` has exactly one `latest_revision_id`.
* Node/edge counts before and after match.
* Manually recompute hash for a few graphs and compare to stored `graph_revision_id`.
---
## 9. Security & Compliance Considerations
* **Immutability guarantee**:
* Don't allow updates to `graph_revisions.manifest`.
* Any change must happen by creating a new revision.
* **Tombstoning** (for rare delete cases):
* If you must “remove” a bad graph, mark revision as `tombstoned` in an additional column and return `410 Gone` for that `graph_revision_id`.
* Never reuse that ID.
* **Access control**:
* Ensure revision APIs use the same ACLs as existing graph APIs.
* Don't leak manifests to users not allowed to see underlying artifacts.
---
## 10. Concrete Ticket Breakdown (example)
You can copy/paste this into your tracker and tweak.
1. **BE-01** Add `graphs` and `graph_revisions` tables
* AC:
* Tables exist with fields above.
* Migrations run cleanly in staging.
2. **BE-02** Add `graph_revision_id` to nodes/edges tables
* AC:
* Column added, nullable.
* No runtime errors in staging.
3. **BE-03** Implement `GraphRevisionIdGenerator` library
* AC:
* Given a manifest, returns deterministic ID.
* Unit tests cover ordering, minimal changes.
4. **BE-04** Implement `/internal/graphs/{graph_id}/revisions` in Authority
* AC:
* Stores new revision + nodes/edges.
* Idempotent on duplicate revisions.
5. **BE-05** Implement public revision APIs
* AC:
* Endpoints in §4.1 available with Swagger.
* `rev` query param supported.
* Default behavior returns latest revision.
6. **BE-06** Backfill existing graphs into `graph_revisions`
* AC:
* All existing graphs have `latest_revision_id`.
* Nodes/edges linked to a `graph_revision_id`.
* Metrics & logs generated for failures.
7. **BE-07** Ledger integration for revisions
* AC:
* Each new revision creates a ledger entry.
* Query by `graph_revision_id` works.
8. **PIPE-01** Scanner/Vexer manifest construction
* AC:
* Manifest includes all required fields.
* Values verified against Authority for a sample run.
9. **PIPE-02** Scanner/Vexer computes `graph_revision_id` and calls Authority
* AC:
* End-to-end pipeline run produces a new `graph_revision_id`.
* Authority stores it and sets as latest.
10. **FE-01** UI supports `?rev=` param and displays revision
* AC:
* When URL has `rev`, UI loads that revision.
* When no `rev`, loads latest.
* Rev appears in header with copy/share.
11. **FE-02** Revision list UI (optional)
* AC:
* Revision panel lists revisions.
* Click navigates to appropriate `?rev=`.
---
If you'd like, I can next help you turn this into a very explicit design doc (with diagrams and exact JSON examples) or into ready-to-paste migration scripts / TypeScript interfaces tailored to your actual stack.

Here are some key developments in the software-supply-chain and vulnerability-scoring world that you'll want on your radar.
---
## 1. CVSS v4.0: traceable scoring with richer context
* CVSS v4.0 was officially released by FIRST (the Forum of Incident Response and Security Teams) on **November 1, 2023**. ([first.org][1])
* The specification now clearly divides metrics into four groups: Base, Threat, Environmental, and Supplemental. ([first.org][1])
* The National Vulnerability Database (NVD) has added support for CVSS v4.0, meaning newer vulnerability records can carry v4-style scores, vector strings and search filters. ([NVD][2])
* What's new/tangible: better granularity, an explicit “Attack Requirements” metric and richer metadata to better reflect real-world contextual risk. ([Seemplicity][3])
* Why this matters: Enables more traceable evidence of how a score was derived (which metrics used, what context), supporting auditing, prioritisation and transparency.
**Takeaway for your world**: If you're leveraging vulnerability scanning, SBOM enrichment or compliance workflows (given your interest in SBOM/VEX/provenance), then moving to or supporting CVSS v4.0 ensures you have stronger traceability and richer scoring context that maps into policy, audit and remediation workflows.
---
## 2. CycloneDX v1.7: SBOM/VEX/provenance with cryptographic & IP transparency
* Version 1.7 of the SBOM standard from the OWASP Foundation (CycloneDX) launched on **October 21, 2025**. ([CycloneDX][4])
* Key enhancements: *Cryptography Bill of Materials (CBOM)* support (listing algorithm families, elliptic curves, etc.) and *structured citations* (who provided component info, how, when) to improve provenance. ([CycloneDX][4])
* Provenance use cases: The spec enables declaring supplier/author/publisher metadata, component origin, external references. ([CycloneDX][5])
* Broadening scope: CycloneDX now supports not just SBOMs (software), but hardware BOMs (HBOM), machine learning BOMs, cryptographic BOMs (CBOM), plus VEX/attestation use cases. ([openssf.org][6])
* Why this matters: For your StellaOps architecture (with a strong emphasis on provenance, deterministic scans, trust frameworks) CycloneDX v1.7 provides native standard support for deeper audit-ready evidence, cryptographic algorithm visibility (which matters for crypto-sovereign readiness) and formal attestations/citations in the BOM.
**Takeaway**: Aligning your SBOM/VEX/provenance stack (e.g., scanner.webservice) to output CycloneDX v1.7-compliant artifacts means you jump ahead in terms of traceability, auditability and future-proofing (crypto and IP).
---
## 3. SLSA v1.2 Release Candidate 2: supply-chain build provenance standard
* On **November 10, 2025**, the Open Source Security Foundation (via the SLSA community) announced RC2 of SLSA v1.2, open for public comment until November 24, 2025. ([SLSA][7])
* What's new: Introduction of a *Source Track* (in addition to the Build Track) to capture source control provenance, distributed provenance, artifact attestations. ([SLSA][7])
* Specification clarifies provenance/attestation formats, how builds should be produced, distributed, verified. ([SLSA][8])
* Why this matters: SLSA gives you a standard framework for “I can trace this binary back to the code, the build system, the signer, the provenance chain,” which aligns directly with your strategic moats around deterministic replayable scans, proof-of-integrity graph, and attestations.
**Takeaway**: If you integrate SLSA v1.2 (once finalised) into StellaOps, you gain an industry-recognised standard for build provenance and attestation, complementing your SBOM/VEX and CVSS code bases.
---
### Why I'm sharing this with you
Given your interest in cryptographic-sovereign readiness, deterministic scanning, provenance and audit-grade supply-chain tooling (your StellaOps moat list), this trifecta (CVSS v4.0 + CycloneDX v1.7 + SLSA v1.2) represents the major standards you need to converge on. They each address a different layer: vulnerability scoring, component provenance and build/trust-chain assurance. Aligning all three will give you a strong governance and tooling stack.
If you like, I can pull together a detailed gap-analysis table (your current architecture versus what these standards demand) and propose roadmap steps for StellaOps to adopt them.
[1]: https://www.first.org/cvss/specification-document?utm_source=chatgpt.com "CVSS v4.0 Specification Document"
[2]: https://nvd.nist.gov/general/news/cvss-v4-0-official-support?utm_source=chatgpt.com "CVSS v4.0 Official Support - NVD"
[3]: https://seemplicity.io/blog/decoding-cvss-4-clarified-base-metrics/?utm_source=chatgpt.com "Decoding CVSS 4.0: Clarified Base Metrics"
[4]: https://cyclonedx.org/news/cyclonedx-v1.7-released/?utm_source=chatgpt.com "CycloneDX v1.7 Delivers Advanced Cryptography, ..."
[5]: https://cyclonedx.org/use-cases/provenance/?utm_source=chatgpt.com "Security Use Case: Provenance"
[6]: https://openssf.org/blog/2025/10/22/sboms-in-the-era-of-the-cra-toward-a-unified-and-actionable-framework/?utm_source=chatgpt.com "Global Alignment on SBOM Standards: How the EU Cyber ..."
[7]: https://slsa.dev/blog/2025/11/slsa-v1.2-rc2?utm_source=chatgpt.com "Announcing SLSA v1.2 Release Candidate 2"
[8]: https://slsa.dev/spec/v1.2-rc2/?utm_source=chatgpt.com "SLSA specification"
Cool, let's turn all that standards talk into something your engineers can actually build against.
Below is a concrete implementation plan, broken into 3 workstreams, each with phases, tasks and clear acceptance criteria:
* **A — CVSS v4.0 integration (scoring & evidence)**
* **B — CycloneDX 1.7 SBOM/CBOM + provenance**
* **C — SLSA 1.2 (build + source provenance)**
* **X — Cross-cutting (APIs, UX, docs, rollout)**
I'll assume you have:
* A scanner / ingestion pipeline,
* A central data model (DB or graph),
* An API + UI layer (StellaOps console or similar),
* CI/CD on GitHub/GitLab/whatever.
---
## A. CVSS v4.0 integration
**Goal:** Your platform can ingest, calculate, store and expose CVSS v4.0 scores and vectors alongside (or instead of) v3.x, using the official FIRST spec and NVD data. ([FIRST][1])
### A1. Foundations & decisions
**Tasks**
1. **Pick canonical CVSSv4 library or implementation**
* Evaluate existing OSS libraries for your main language(s), or plan an internal one based directly on FIRST's spec (Base, Threat, Environmental, Supplemental groups).
* Decide:
* Supported metric groups (Base only vs. Base+Threat+Environmental+Supplemental).
* Which groups your UI will expose/edit vs. read-only from upstream feeds.
2. **Versioning strategy**
* Decide how to represent CVSS v3.0/v3.1/v4.0 in your DB:
* `vulnerability_scores` table with `version`, `vector`, `base_score`, `environmental_score`, `temporal_score`, `severity_band`.
* Define precedence rules: if both v3.1 and v4.0 exist, which one your “headline” severity uses.
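One way to encode the precedence rule as a sketch (the v4.0-first ordering shown here is a policy choice to be confirmed in the design doc, not something the spec mandates):

```python
def headline_score(scores: list[dict]):
    """Pick the 'headline' CVSS entry for a vulnerability:
    prefer v4.0, then v3.1, then v3.0, then v2.0."""
    precedence = {"4.0": 0, "3.1": 1, "3.0": 2, "2.0": 3}
    ranked = sorted(scores, key=lambda s: precedence.get(s["version"], 99))
    return ranked[0] if ranked else None
```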
**Acceptance criteria**
* Tech design doc reviewed & approved.
* Decision on library vs. custom implementation recorded.
* DB schema migration plan ready.
---
### A2. Data model & storage
**Tasks**
1. **DB schema changes**
* Add a `cvss_scores` table or expand the existing vulnerability table, e.g.:
```text
cvss_scores
id (PK)
vuln_id (FK)
source (enum: NVD, scanner, manual)
version (enum: 2.0, 3.0, 3.1, 4.0)
vector (string)
base_score (float)
temporal_score (float, nullable)
environmental_score (float, nullable)
severity (enum: NONE/LOW/MEDIUM/HIGH/CRITICAL)
metrics_json (JSONB) // raw metrics for traceability
created_at / updated_at
```
2. **Traceable evidence**
* Store:
* Raw CVSS vector string (e.g. `CVSS:4.0/AV:N/...(etc)`).
* Parsed metrics as JSON for audit (show “why” a score is what it is).
* Optional: add `calculated_by` + `calculated_at` for your internal scoring runs.
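For the `severity` column, the qualitative bands are fixed by FIRST (identical in v3.x and v4.0), so the mapping is mechanical:

```python
def severity_band(score: float) -> str:
    """Map a CVSS numeric score (0.0-10.0) to the qualitative
    severity rating defined by FIRST."""
    if score == 0.0:
        return "NONE"
    if score <= 3.9:
        return "LOW"
    if score <= 6.9:
        return "MEDIUM"
    if score <= 8.9:
        return "HIGH"
    return "CRITICAL"
```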
**Acceptance criteria**
* Migrations applied in dev.
* Read/write repository functions implemented and unit-tested.
---
### A3. Ingestion & calculation
**Tasks**
1. **NVD / external feeds**
* Update your NVD ingestion to read CVSS v4.0 when present in JSON `metrics` fields. ([NVD][2])
* Map NVD → internal `cvss_scores` model.
2. **Local CVSSv4 calculator service**
* Implement a service (or module) that:
* Accepts metric values (Base/Threat/Environmental/Supplemental).
* Produces:
* Canonical vector.
* Base/Threat/Environmental scores.
* Severity band.
* Make this callable by:
* Scanner engine (calculating scores for private vulns).
* UI (recalculate button).
* API (for automated clients).
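Parsing a vector string into the `metrics_json` shape is straightforward; a sketch is below. This covers parsing only: the actual score computation must follow FIRST's v4.0 spec (ideally via the chosen library), and the score field names are assumptions.

```python
def parse_cvss_vector(vector: str) -> dict:
    """Split a CVSS vector like 'CVSS:4.0/AV:N/AC:L/VC:H' into
    {'version': '4.0', 'metrics': {'AV': 'N', ...}}."""
    head, *parts = vector.split("/")
    prefix, _, version = head.partition(":")
    if prefix != "CVSS" or not version:
        raise ValueError(f"not a CVSS vector: {vector!r}")
    metrics = {}
    for part in parts:
        key, _, value = part.partition(":")
        if not value:
            raise ValueError(f"malformed metric {part!r} in {vector!r}")
        metrics[key] = value
    return {"version": version, "metrics": metrics}
```

Storing this parsed form in `metrics_json` alongside the raw vector gives the “explain my score” UI everything it needs without re-parsing.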
**Acceptance criteria**
* Given a set of reference vectors from FIRST, your calculator returns exact expected scores.
* NVD ingestion for a sample of CVEs produces v4 scores in your DB.
---
### A4. UI & API
**Tasks**
1. **API**
* Extend vulnerability API payload with:
```json
{
"id": "CVE-2024-XXXX",
"cvss": [
{
"version": "4.0",
"source": "NVD",
"vector": "CVSS:4.0/AV:N/...",
"base_score": 8.3,
"severity": "HIGH",
"metrics": { "...": "..." }
}
]
}
```
* Add filters: `cvss.version`, `cvss.min_score`, `cvss.severity`.
2. **UI**
* On vulnerability detail:
* Show v3.x and v4.0 side-by-side.
* Expandable panel with metric breakdown and “explain my score” text.
* On list views:
* Support sorting & filtering by v4.0 base score & severity.
**Acceptance criteria**
* Frontend can render v4.0 vectors and scores.
* QA can filter vulnerabilities using v4 metrics via API and UI.
---
### A5. Migration & rollout
**Tasks**
1. **Backfill**
* For all stored vulnerabilities where metrics exist:
* If v4 not present but inputs available, compute v4.
* Store both historical (v3.x) and new v4 for comparison.
2. **Feature flag / rollout**
* Introduce feature flag `cvss_v4_enabled` per tenant or environment.
* Run A/B comparison internally before enabling for all users.
**Acceptance criteria**
* Backfill job runs successfully on staging data.
* Rollout plan + rollback strategy documented.
---
## B. CycloneDX 1.7 SBOM/CBOM + provenance
CycloneDX 1.7 is now the current spec; it adds things like a Cryptography BOM (CBOM) and structured citations/provenance to strengthen trust and traceability. ([CycloneDX][3])
### B1. Decide scope & generators
**Tasks**
1. **Select BOM formats & languages**
* JSON as your primary format (`application/vnd.cyclonedx+json`). ([CycloneDX][4])
* Components you'll cover:
* Application BOMs (packages, containers).
* Optional: infrastructure (IaC, images).
* Optional: CBOM for crypto usage.
2. **Choose or implement generators**
* For each ecosystem (e.g., Maven, NPM, PyPI, containers), choose:
* Existing tools (`cyclonedx-maven-plugin`, `cyclonedx-npm`, etc).
* Or central generator using lockfiles/manifests.
**Acceptance criteria**
* Matrix of ecosystems → generator tool finalized.
* POC shows valid CycloneDX 1.7 JSON BOM for one representative project.
---
### B2. Schema alignment & validation
**Tasks**
1. **Model updates**
* Extend your internal SBOM model to include:
* `spec_version: "1.7"`
* `bomFormat: "CycloneDX"`
* `serialNumber` (UUID/URI).
* `metadata.tools` (how BOM was produced).
* `properties`, `licenses`, `crypto` (for CBOM).
* For provenance:
* `metadata.authors`, `metadata.manufacture`, `metadata.supplier`.
* `components[x].evidence` and `components[x].properties` for evidence & citations. ([CycloneDX][5])
2. **Validation pipeline**
* Integrate the official CycloneDX JSON schema validation step into:
* CI (for projects generating BOMs).
* Your ingestion path (reject/flag invalid BOMs).
**Acceptance criteria**
* Any BOM produced must pass CycloneDX 1.7 JSON schema validation in CI.
* Ingestion rejects malformed BOMs with clear error messages.
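Before running full CycloneDX 1.7 JSON schema validation, a cheap structural pre-check gives fast, clear rejection messages at the ingestion boundary. A sketch (field names follow the CycloneDX JSON format; the exact error wording is illustrative):

```python
REQUIRED_TOP_LEVEL = {"bomFormat": "CycloneDX", "specVersion": "1.7"}

def precheck_bom(bom: dict) -> list[str]:
    """Cheap structural pre-check; full schema validation still runs afterwards."""
    errors = []
    for key, expected in REQUIRED_TOP_LEVEL.items():
        if bom.get(key) != expected:
            errors.append(f"{key}: expected {expected!r}, got {bom.get(key)!r}")
    if not str(bom.get("serialNumber", "")).startswith("urn:uuid:"):
        errors.append("serialNumber: expected a urn:uuid: URI")
    if "components" in bom and not isinstance(bom["components"], list):
        errors.append("components: must be a list")
    return errors
```

An empty list means "proceed to schema validation"; a non-empty list becomes the rejection payload.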
---
### B3. Provenance & citations in BOMs
**Tasks**
1. **Define provenance policy**
* Minimal set for every BOM:
* Author (CI system / team).
* Build pipeline ID, commit, repo URL.
* Build time.
* Extended:
* `externalReferences` for:
* Build logs.
* SLSA attestations.
* Security reports (e.g., scanner runs).
2. **Implement metadata injection**
* In your CI templates:
* Capture build info (commit SHA, pipeline ID, creator, environment).
* Add it into CycloneDX `metadata` and `properties`.
* For evidence:
* Use `components[x].evidence` to reference where a component was detected (e.g., file paths, manifest lines).
**Acceptance criteria**
* For any BOM, engineers can trace:
* WHO built it.
* WHEN it was built.
* WHICH repo/commit/pipeline it came from.
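The metadata injection step can be a tiny post-processing pass in the CI template. A sketch; the env var names (`GIT_COMMIT`, `PIPELINE_ID`, `REPO_URL`) and the `stella:*` property names are placeholders for whatever your CI and naming convention actually use:

```python
import os
from datetime import datetime, timezone

def inject_provenance(bom: dict) -> dict:
    """Stamp build provenance into CycloneDX metadata/properties."""
    meta = bom.setdefault("metadata", {})
    meta["timestamp"] = datetime.now(timezone.utc).isoformat()
    props = meta.setdefault("properties", [])
    for name, env in [("stella:commit", "GIT_COMMIT"),
                      ("stella:pipeline", "PIPELINE_ID"),
                      ("stella:repo", "REPO_URL")]:
        value = os.environ.get(env)
        if value:
            props.append({"name": name, "value": value})
    return bom
```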
---
### B4. CBOM (Cryptography BOM) support (optional but powerful)
**Tasks**
1. **Crypto inventory**
* Scanner enhancement:
* Detect crypto libraries & primitives used (e.g., OpenSSL, bcrypt, TLS versions).
* Map them into CycloneDX CBOM structures in `crypto` sections (per spec).
2. **Policy hooks**
* Define policy checks:
* “Disallow SHA-1,”
* “Warn on RSA < 2048 bits,”
* “Flag non-FIPS-approved algorithms.”
**Acceptance criteria**
* From a BOM, you can list all cryptographic algorithms and libraries used in an application.
* At least one simple crypto policy implemented (e.g., SHA-1 usage alert).
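The three starter rules are simple enough to express directly over a CBOM-derived algorithm list. A sketch; the input shape (`name`, `keySize`, `fipsApproved`) is assumed, not the CBOM wire format:

```python
def check_crypto_policy(algorithms: list[dict]) -> list[str]:
    """Apply the starter rules: no SHA-1, RSA >= 2048, flag non-FIPS algorithms."""
    findings = []
    for algo in algorithms:
        name = algo.get("name", "").upper()
        if name in ("SHA-1", "SHA1"):
            findings.append(f"disallowed: {name}")
        if name == "RSA" and algo.get("keySize", 0) < 2048:
            findings.append(f"weak RSA key: {algo.get('keySize')} bits")
        if algo.get("fipsApproved") is False:
            findings.append(f"non-FIPS algorithm: {name}")
    return findings
```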
---
### B5. Ingestion, correlation & UI
**Tasks**
1. **Ingestion service**
* API endpoint: `POST /sboms` accepting CycloneDX 1.7 JSON.
* Store:
* Raw BOM (for evidence).
* Normalized component graph (packages, relationships).
* Link BOM to:
* Repo/project.
* Build (from SLSA provenance).
* Deployed asset.
2. **Correlation**
* Join SBOM components with:
* Vulnerability data (CVE/CWE/CPE/PURL).
* Crypto policy results.
* Maintain “asset → BOM → components → vulnerabilities” graph.
3. **UI**
* For any service/image:
* Show latest BOM metadata (CycloneDX version, timestamp).
* Component list with vulnerability badges.
* Crypto tab (if CBOM enabled).
* Provenance tab (author, build pipeline, SLSA attestation links).
**Acceptance criteria**
* Given an SBOM upload, the UI shows:
* Components.
* Associated vulnerabilities.
* Provenance metadata.
* API consumers can fetch SBOM + correlated risk in a single call.
---
## C. SLSA 1.2 build + source provenance
SLSA 1.2 (final) introduces a **Source Track** in addition to the Build Track, defining levels and attestation formats for both source control and build provenance. ([SLSA][6])
### C1. Target SLSA levels & scope
**Tasks**
1. **Choose target levels**
* For each critical product:
* Pick Build Track level (e.g., target L2 now, L3 later).
* Pick Source Track level (e.g., L1 for all, L2 for sensitive repos).
2. **Repo inventory**
* Classify repos by risk:
* Critical (agents, scanners, control-plane).
* Important (integrations).
* Low-risk (internal tools).
* Map target SLSA levels accordingly.
**Acceptance criteria**
* For every repo, there is an explicit target SLSA Build + Source level.
* Gap analysis doc exists (current vs target).
---
### C2. Build provenance in CI/CD
**Tasks**
1. **Attestation generation**
* For each CI pipeline:
* Use SLSA-compatible builders or tooling (e.g., `slsa-github-generator`, `slsa-framework` actions, Tekton Chains, etc.) to produce **build provenance attestations** in SLSA 1.2 format.
* Attestation content includes:
* Builder identity.
* Build inputs (commit, repo, config).
* Build parameters.
* Produced artifacts (digest, image tags).
2. **Signing & storage**
* Sign attestations (Sigstore/cosign or equivalent).
* Store:
* In an OCI registry (as artifacts).
* Or in a dedicated provenance store.
* Expose pointer to attestation in:
* BOM (`externalReferences`).
* Your StellaOps metadata.
**Acceptance criteria**
* For any built artifact (image/binary), you can retrieve a SLSA attestation proving:
* What source it came from.
* Which builder ran.
* What steps were executed.
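The most basic verification behind this acceptance criterion is binding: the in-toto statement's `subject` digest must match the artifact you actually hold. A sketch (in-toto v1 statements carry `subject: [{name, digest: {sha256: ...}}]`):

```python
import hashlib

def verify_subject(artifact_bytes: bytes, statement: dict) -> bool:
    """Check the attestation's subject digest against the artifact we hold."""
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    return any(
        subject.get("digest", {}).get("sha256") == actual
        for subject in statement.get("subject", [])
    )
```

Signature verification (cosign/Sigstore) happens separately; this check only proves the attestation is about *this* artifact.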
---
### C3. Source Track controls
**Tasks**
1. **Source provenance**
* Implement controls to support SLSA Source Track:
* Enforce protected branches.
* Require code review (e.g., 2 reviewers) for main branches.
* Require signed commits for critical repos.
* Log:
* Author, reviewers, branch, PR ID, merge SHA.
2. **Source attestation**
* For each release:
* Generate **source attestations** capturing:
* Repo URL and commit.
* Review status.
* Policy compliance (review count, checks passing).
* Link these to build attestations (Source → Build provenance chain).
**Acceptance criteria**
* For a release, you can prove:
* Which reviews happened.
* Which branch strategy was followed.
* That policies were met at merge time.
---
### C4. Verification & policy in StellaOps
**Tasks**
1. **Verifier service**
* Implement a service that:
* Fetches SLSA attestations (source + build).
* Verifies signatures and integrity.
* Evaluates them against policies:
* “Artifact must have SLSA Build L2 attestation from trusted builders.”
* “Critical services must have Source L2 attestation (review, branch protections).”
2. **Runtime & deployment gates**
* Integrate verification into:
* Admission controller (Kubernetes or deployment gate).
* CI release stage (block promotion if SLSA requirements not met).
3. **UI**
* On artifact/service detail page:
* Surface SLSA level achieved (per track).
* Status (pass/fail).
* Drill-down view of attestation evidence (who built, when, from where).
**Acceptance criteria**
* A deployment can be blocked (in a test env) when SLSA requirements are not satisfied.
* Operators can visually see SLSA status for an artifact/service.
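The gate's core comparison is small: per-track attested level vs. required level. A sketch of the policy check the verifier service and admission controller would share (the `{"build": n, "source": n}` shape is an assumption):

```python
def slsa_gate(attested: dict, required: dict) -> tuple[bool, list[str]]:
    """Compare attested SLSA levels per track against policy; missing track = level 0."""
    failures = [
        f"{track}: have L{attested.get(track, 0)}, need L{need}"
        for track, need in required.items()
        if attested.get(track, 0) < need
    ]
    return (not failures, failures)
```

The failure strings double as the operator-facing explanation in the UI drill-down.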
---
## X. Crosscutting: APIs, UX, docs, rollout
### X1. Unified data model & APIs
**Tasks**
1. **Graph relationships**
* Model the relationship:
* **Source repo** → **SLSA Source attestation**
→ **Build attestation** → **Artifact**
→ **SBOM (CycloneDX 1.7)** → **Components**
→ **Vulnerabilities (CVSS v4)**.
2. **Graph queries**
* Build API endpoints for:
* “Given a CVE, show all affected artifacts and their SLSA + BOM evidence.”
* “Given an artifact, show its full provenance chain and risk posture.”
**Acceptance criteria**
* At least 2 end-to-end queries work:
* CVE → impacted assets with scores + provenance.
* Artifact → SBOM + vulnerabilities + SLSA + crypto posture.
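The "CVE → impacted assets" query is a two-hop traversal over the relationship edges. A sketch over flat `(relation, from, to)` triples; the relation names and node encodings here are illustrative simplifications of the edge table:

```python
def impacted_artifacts(cve: str, edges: list[tuple[str, str, str]]) -> set[str]:
    """CVE -> vulnerable components -> containing artifacts."""
    components = {src for rel, src, dst in edges
                  if rel == "vulnerable_to" and dst == cve}
    return {src for rel, src, dst in edges
            if rel == "contains" and dst in components}
```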
---
### X2. Observability & auditing
**Tasks**
1. **Audit logs**
* Log:
* BOM uploads and generators.
* SLSA attestation creation/verification.
* CVSS recalculations (who/what triggered them).
2. **Metrics**
* Track:
* % of builds with valid SLSA attestations.
* % artifacts with CycloneDX 1.7 BOMs.
* % vulns with v4 scores.
* Expose dashboards (Prometheus/Grafana or similar).
**Acceptance criteria**
* Dashboards exist showing coverage for:
* CVSSv4 adoption.
* CycloneDX 1.7 coverage.
* SLSA coverage.
---
### X3. Documentation & developer experience
**Tasks**
1. **Developer playbooks**
* Short, repo-friendly docs:
* “How to enable CycloneDX BOM generation in this repo.”
* “How to ensure your service reaches SLSA Build L2.”
* “How to interpret CVSS v4 in StellaOps.”
2. **Templates**
* CI templates:
* `bom-enabled-pipeline.yaml`
* `slsa-enabled-pipeline.yaml`
* Code snippets:
* API examples for pushing SBOMs.
* API examples for querying risk posture.
**Acceptance criteria**
* A new project can:
* Copy a CI template.
* Produce a validated CycloneDX 1.7 BOM.
* Generate SLSA attestations.
* Show up correctly in StellaOps with CVSS v4 scoring.
---
[1]: https://www.first.org/cvss/specification-document?utm_source=chatgpt.com "CVSS v4.0 Specification Document"
[2]: https://nvd.nist.gov/general/news/cvss-v4-0-official-support?utm_source=chatgpt.com "CVSS v4.0 Official Support - NVD"
[3]: https://cyclonedx.org/news/cyclonedx-v1.7-released/?utm_source=chatgpt.com "CycloneDX v1.7 Delivers Advanced Cryptography, ..."
[4]: https://cyclonedx.org/specification/overview/?utm_source=chatgpt.com "Specification Overview"
[5]: https://cyclonedx.org/docs/latest?utm_source=chatgpt.com "CycloneDX v1.7 JSON Reference"
[6]: https://slsa.dev/spec/v1.2/?utm_source=chatgpt.com "SLSA specification"

Here's a clear, SBOM-first blueprint you can drop into StellaOps without extra context.
---
# SBOM-first spine (with attestations) — the short, practical version
![High-level flow](https://dummyimage.com/1200x300/ffffff/000000.png\&text=Scanner+→+Sbomer+→+Authority+→+Graphs+/%20APIs)
## Why this matters (plain English)
* **SBOMs** (CycloneDX/SPDX) = a complete parts list of your software.
* **Attestations** (in-toto + DSSE) = tamper-evident receipts proving *who did what, to which artifact, when, and how*.
* **Determinism** = if you rescan tomorrow, you get the same result for the same inputs.
* **Explainability** = every risk decision links back to evidence you can show to auditors/customers.
---
## Core pipeline (modules & responsibilities)
1. **Scan (Scanner)**
* Inputs: container image / dir / repo.
* Outputs: raw facts (packages, files, symbols), and a **ScanEvidence** attestation (DSSE-wrapped in-toto statement).
* Must support offline feeds (bundle CVE/NVD/OSV/vendor advisories).
2. **Sbomer**
* Normalizes raw facts → **canonical SBOM** (CycloneDX or SPDX) with:
* PURLs, license info, checksums, build IDs (ELF/PE/Mach-O), source locations.
* Emits **SBOMProduced** attestation linking SBOM ↔ image digest.
3. **Authority**
* Verifies every attestation chain (Sigstore/keys; PQ-ready option later).
* Stamps **PolicyVerified** attestation (who approved, policy hash, inputs).
* Persists a **trust log**: signatures, cert chains, a Rekor-like index (mirrorable offline).
4. **Graph Store (Canonical Graph)**
* Ingests SBOM, vulnerabilities, reachability facts, VEX statements.
* Preserves **evidence links** (edge predicates: “found-by”, “reachable-via”, “proven-by”).
* Enables **deterministic replay** (snapshot manifests: feeds+rules+hashes).
---
## Stable APIs (keep these boundaries sharp)
* **/scan** → start scan; returns Evidence ID + attestation ref.
* **/sbom** → get canonical SBOM (by image digest or Evidence ID).
* **/attest** → submit/fetch attestations; verify chain; returns trustproof.
* **/vex-gate** → policy decision: *allow / warn / block* with proof bundle.
* **/diff** → SBOM↔SBOM + SBOM↔runtime diffs (see below).
* **/unknowns** → create/list/resolve Unknowns (signals needing human/vendor input).
Design notes:
* All responses include `decision`, `explanation`, `evidence[]`, `hashes`, `clock`.
* Support **air-gap** operation: all endpoints operate on local bundles (ZIP/TAR with SBOM+attestations+feeds).
---
## Determinism & “Unknowns” (noise-killer loop)
**Smart diffs**
* **SBOM↔SBOM**: detect added/removed/changed components (by PURL+version+hash).
* **SBOM↔runtime**: prove reachability (e.g., symbol/function use, loaded libs, process maps).
* Score only on **provable** paths; gate on **VEX** (vendor/exploitability statements).
**Unknowns handler**
* Any unresolved signal (ambiguous CVE mapping, stripped binary, unverified vendor VEX) → **Unknowns** queue:
* SLA, owner, evidence snapshot, audit trail.
* State machine: `new → triage → vendor-query → verified → closed`.
* Every VEX or vendor reply becomes an attestation; decisions are re-evaluated deterministically.
---
## What to store (so you can explain every decision)
* **Artifacts**: image digest, SBOM hash, feed versions, rule set hash.
* **Proofs**: DSSE envelopes, signatures, certs, inclusion proofs (Rekor-style).
* **Predicates (edges)**:
* `contains(component)`, `vulnerable_to(cve)`, `reachable_via(callgraph|runtime)`,
* `overridden_by(vex)`, `verified_by(authority)`, `derived_from(scan-evidence)`.
* **Why-strings**: human-readable proof trails (1–3 sentences) output with every decision.
---
## Minimal policies that work on day 1
* **Block** only when: `vuln.severity ≥ High` AND `reachable == true` AND `no VEX allows`.
* **Warn** when: `High/Critical` but `reachable == unknown` → route to Unknowns with SLA.
* **Allow** when: `Low/Medium` OR VEX says `not_affected` (trusted signer + policy).
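These three rules fit in a single function. A minimal sketch (Python for brevity; the function name and argument shapes are illustrative, not the final API — `reachable` is tri-state: `True` / `False` / `None` for unknown):

```python
def gate(severity: str, reachable, vex_allows: bool) -> str:
    """Day-1 policy: block only proven-reachable High/Critical with no VEX waiver."""
    if vex_allows:               # trusted VEX says not_affected
        return "allow"
    if severity in ("HIGH", "CRITICAL"):
        if reachable is True:
            return "block"
        if reachable is None:
            return "warn"        # route to Unknowns with an SLA
    return "allow"
```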
---
## Offline/air-gap bundle format (zip)
```
/bundle/
feeds/ (NVD, OSV, vendor) + manifest.json (hashes, timestamps)
sboms/ imageDigest.json
attestations/ *.jsonl (DSSE)
proofs/ rekor/ merkle.json
policy/ lattice.json
  replay/ inputs.lock (content hashes of everything above)
```
* Every API accepts `?bundle=/path/to/bundle.zip`.
* **Replay**: `inputs.lock` guarantees deterministic reevaluation.
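One way `inputs.lock` can guarantee replay is a two-level hash: hash every bundle entry, then hash the sorted manifest of those hashes. A sketch (the lock file's exact shape is an assumption):

```python
import hashlib, json

def build_inputs_lock(files: dict[str, bytes]) -> dict:
    """Deterministic lock: per-entry sha256, plus a root hash over the sorted manifest."""
    entries = {path: hashlib.sha256(data).hexdigest()
               for path, data in sorted(files.items())}
    manifest = json.dumps(entries, sort_keys=True, separators=(",", ":"))
    return {"entries": entries,
            "root": hashlib.sha256(manifest.encode()).hexdigest()}
```

Re-evaluation is valid only if the recomputed `root` matches the one recorded at decision time.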
---
## .NET 10 implementation sketch (pragmatic)
* **Contracts**: `StellaOps.Contracts.*` (Scan, Attest, VexGate, Diff, Unknowns).
* **Attestations**: `StellaOps.Attest.Dsse` (IEnvelope, IStatement<TPredicate>); pluggable crypto (FIPS/GOST/SM/PQ).
* **SBOM**: `StellaOps.Sbom` (CycloneDX/SPDX models + mappers; PURL utilities).
* **Graph**: `StellaOps.Graph` (EF Core 9/10 over Mongo/Postgres; edge predicates as enums + JSON evidence).
* **Policy/Lattice**: `StellaOps.Policy.Lattice` (pure functions over graph snapshots; produce Decision+Why).
* **Unknowns**: `StellaOps.Unknowns` (aggregate root; SLA timers; audit events).
* **CLI**: `stella scan|sbom|attest|vex-gate|diff|unknowns` with `--bundle`.
Key patterns:
* All public methods return `(Result<T> value, Proof proof)`.
* Hash everything (inputs, rules, feeds) and bake into `Proof`.
* Emit DSSE envelopes at each hop; verify on ingest; never trust unauthenticated input.
---
## Next steps (week-one checklist)
* Define JSON schemas for: **Decision** (with Why+Proof), **Unknown**, **DiffResult**.
* Stand up **/scan**, **/sbom**, **/attest**, **/vex-gate** skeletons returning mock Proofs.
* Implement **SBOM↔SBOM diff** (PURL+version+hash) and wire to **/diff**.
* Create **bundle.zip** reader/writer + `inputs.lock`.
* Add **Authority.Verify()** with DSSE verification + truststore.
* Ship a basic **policy.lattice.json** (3 rules above) and e2e test: image → allow/block with proofs.
Below is a detailed, opinionated implementation plan you can paste into an engineering doc and assign as epics / tickets.
---
## 0. Solution layout (concrete)
**Repo structure**
```text
/stella-ops
/src
StellaOps.Contracts // DTOs, API contracts, JSON schemas
StellaOps.Domain // Core domain types (ArtifactId, Proof, Decision, etc.)
StellaOps.Attest // DSSE envelopes, in-toto statements, signing/verification
StellaOps.Sbom // SBOM models + normalization
StellaOps.Graph // Graph store, entities, queries
StellaOps.Policy // Policy engine (lattice evaluation)
StellaOps.WebApi // HTTP APIs: /scan, /sbom, /attest, /vex-gate, /diff, /unknowns
StellaOps.Cli // `stella` CLI, offline bundles
/tests
StellaOps.Tests.Unit
StellaOps.Tests.Integration
StellaOps.Tests.E2E
```
**Baseline tech assumptions**
* Runtime: .NET 8+ (referred to as “.NET 10” elsewhere in this roadmap).
* API: ASP.NET Core minimal APIs.
* DB: Postgres (via EF Core) for graph + unknowns + metadata.
* Storage: local filesystem / S3-compatible for bundle zips, scanner DB caches.
* External scanners: Trivy / Grype / Syft (invoked via CLI with deterministic config).
---
## 1. Core domain & shared contracts (Phase 1)
**Goal:** Have a stable core domain + contracts that all teams can build against.
### 1.1 Core domain types (`StellaOps.Domain`)
Implement:
```csharp
public readonly record struct Digest(string Algorithm, string Value); // e.g. ("sha256", "abcd...")
public readonly record struct ArtifactRef(string Kind, string Value);
// Kind: "container-image", "file", "package", "sbom", etc.
public readonly record struct EvidenceId(Guid Value);
public readonly record struct AttestationId(Guid Value);
public enum PredicateType
{
ScanEvidence,
SbomProduced,
PolicyVerified,
VulnerabilityFinding,
ReachabilityFinding,
VexStatement
}
public sealed class Proof
{
public string ProofId { get; init; } = default!;
public Digest InputsLock { get; init; } = default!; // hash of feeds+rules+sbom bundle
public DateTimeOffset EvaluatedAt { get; init; }
public IReadOnlyList<string> EvidenceIds { get; init; } = Array.Empty<string>();
public IReadOnlyDictionary<string,string> Meta { get; init; } = new Dictionary<string,string>();
}
```
### 1.2 Attestation model (`StellaOps.Attest`)
Implement DSSE + in-toto abstractions:
```csharp
public sealed class DsseEnvelope
{
public string PayloadType { get; init; } = default!;
public string Payload { get; init; } = default!; // base64url(JSON)
public IReadOnlyList<DsseSignature> Signatures { get; init; } = Array.Empty<DsseSignature>();
}
public sealed class DsseSignature
{
public string KeyId { get; init; } = default!;
public string Sig { get; init; } = default!; // base64url
}
public interface IStatement<out TPredicate>
{
string Type { get; } // in-toto type URI
string PredicateType { get; } // URI or enum -> string
TPredicate Predicate { get; }
string Subject { get; } // e.g., image digest
}
```
Attestation services:
```csharp
public interface IAttestationSigner
{
Task<DsseEnvelope> SignAsync<TPredicate>(IStatement<TPredicate> statement, CancellationToken ct);
}
public interface IAttestationVerifier
{
Task VerifyAsync(DsseEnvelope envelope, CancellationToken ct);
}
```
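Whatever crypto backend plugs into `IAttestationSigner`, the bytes it signs are not the raw payload: per the DSSE spec, the signature covers the Pre-Authentication Encoding (PAE), which binds `payloadType` so it can't be swapped after signing. A language-agnostic sketch of PAE:

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE Pre-Authentication Encoding: the exact bytes a signer signs."""
    type_bytes = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(type_bytes)).encode(), type_bytes,   # length-prefixed payloadType
        str(len(payload)).encode(), payload,          # length-prefixed body
    ])
```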
### 1.3 Decision & VEX-gate contracts (`StellaOps.Contracts`)
```csharp
public enum GateDecisionKind
{
Allow,
Warn,
Block
}
public sealed class GateDecision
{
public GateDecisionKind Decision { get; init; }
public string Reason { get; init; } = default!; // short human-readable
public Proof Proof { get; init; } = default!;
public IReadOnlyList<string> Evidence { get; init; } = Array.Empty<string>(); // EvidenceIds / AttestationIds
}
public sealed class VexGateRequest
{
public ArtifactRef Artifact { get; init; }
public string? Environment { get; init; } // "prod", "staging", cluster id, etc.
public string? BundlePath { get; init; } // optional offline bundle path
}
```
**Acceptance criteria**
* Shared projects compile.
* No service references each other directly (only via Contracts + Domain).
* Example test that serializes/deserializes GateDecision and DsseEnvelope using System.Text.Json.
---
## 2. SBOM pipeline (Scanner → Sbomer) (Phase 2)
**Goal:** For a container image, produce a canonical SBOM + attestation deterministically.
### 2.1 Scanner integration (`StellaOps.WebApi` + `StellaOps.Cli`)
#### API contract (`/scan`)
```csharp
public sealed class ScanRequest
{
public string SourceType { get; init; } = default!; // "container-image" | "directory" | "git-repo"
public string Locator { get; init; } = default!; // e.g. "registry/myapp:1.2.3"
public bool IncludeFiles { get; init; } = true;
public bool IncludeLicenses { get; init; } = true;
public string? BundlePath { get; init; } // for offline data
}
public sealed class ScanResponse
{
public EvidenceId EvidenceId { get; init; }
public AttestationId AttestationId { get; init; }
public Digest ArtifactDigest { get; init; } = default!;
}
```
#### Implementation steps
1. **Scanner abstraction**
```csharp
public interface IArtifactScanner
{
Task<ScanResult> ScanAsync(ScanRequest request, CancellationToken ct);
}
public sealed class ScanResult
{
public ArtifactRef Artifact { get; init; } = default!;
public Digest ArtifactDigest { get; init; } = default!;
public IReadOnlyList<DiscoveredPackage> Packages { get; init; } = Array.Empty<DiscoveredPackage>();
public IReadOnlyList<DiscoveredFile> Files { get; init; } = Array.Empty<DiscoveredFile>();
}
```
2. **CLI wrapper** (Trivy/Grype/Syft):
* Implement `SyftScanner : IArtifactScanner`:
* Invoke external CLI with fixed flags.
* Use JSON output mode.
* Resolve CLI path from config.
* Ensure deterministic:
* Disable auto-updating DB.
* Use a local DB path versioned and optionally included into bundle.
* Write parsing code Syft → `ScanResult`.
* Add retry & clear error mapping (timeout, auth error, network error).
3. **/scan endpoint**
* Validate request.
* Call `IArtifactScanner.ScanAsync`.
* Build a `ScanEvidence` predicate:
```csharp
public sealed class ScanEvidencePredicate
{
public ArtifactRef Artifact { get; init; } = default!;
public Digest ArtifactDigest { get; init; } = default!;
public DateTimeOffset ScannedAt { get; init; }
public string ScannerName { get; init; } = default!;
public string ScannerVersion { get; init; } = default!;
public IReadOnlyList<DiscoveredPackage> Packages { get; init; } = Array.Empty<DiscoveredPackage>();
}
```
* Build an in-toto statement for the predicate.
* Call `IAttestationSigner.SignAsync`, persist:
* Raw envelope to `attestations` table.
* Map to `EvidenceId` + `AttestationId`.
**Acceptance criteria**
* Given a fixed image and fixed scanner DB, repeated `/scan` calls produce identical:
* `ScanResult` (up to ordering).
* `ScanEvidence` payload.
* `InputsLock` proof hash (once implemented).
* E2E test: run scan on a small public image in CI using a pre-bundled scanner DB.
---
### 2.2 Sbomer (`StellaOps.Sbom` + `/sbom`)
**Goal:** Normalize `ScanResult` into a canonical SBOM (CycloneDX/SPDX) + emit SBOM attestation.
#### Models
Create neutral SBOM model (internal):
```csharp
public sealed class CanonicalComponent
{
public string Name { get; init; } = default!;
public string Version { get; init; } = default!;
public string Purl { get; init; } = default!;
public string? License { get; init; }
public Digest Digest { get; init; } = default!;
public string? SourceLocation { get; init; } // file path, layer info
}
public sealed class CanonicalSbom
{
public string SbomId { get; init; } = default!;
public ArtifactRef Artifact { get; init; } = default!;
public Digest ArtifactDigest { get; init; } = default!;
public IReadOnlyList<CanonicalComponent> Components { get; init; } = Array.Empty<CanonicalComponent>();
public DateTimeOffset CreatedAt { get; init; }
public string Format { get; init; } = "CycloneDX-JSON-1.7"; // default
}
```
#### Sbomer service
```csharp
public interface ISbomer
{
CanonicalSbom FromScan(ScanResult scan);
string ToCycloneDxJson(CanonicalSbom sbom);
string ToSpdxJson(CanonicalSbom sbom);
}
```
Implementation details:
* Map OS/deps to PURLs (use existing PURL libs or implement minimal helpers).
* Stable ordering:
* Sort components by `Purl` then `Version` before serialization.
* Hash the SBOM JSON → `Digest` (e.g., `Digest("sha256", "...")`).
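The stable-ordering and hashing steps above can be sketched language-agnostically (Python here; the .NET version would use `System.Text.Json` with equivalent determinism guarantees):

```python
import hashlib, json

def canonical_sbom_digest(components: list[dict]) -> tuple[str, str]:
    """Sort by (purl, version), serialize with stable key order, then hash.
    The same ScanResult must always yield the same digest."""
    ordered = sorted(components, key=lambda c: (c["purl"], c["version"]))
    payload = json.dumps({"components": ordered}, sort_keys=True,
                         separators=(",", ":"))
    return payload, hashlib.sha256(payload.encode()).hexdigest()
```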
#### SBOM attestation & `/sbom` endpoint
* For an `ArtifactRef` (or `ScanEvidence` EvidenceId):
1. Fetch latest `ScanResult` from DB.
2. Call `ISbomer.FromScan`.
3. Serialize to CycloneDX.
4. Emit `SbomProduced` predicate & DSSE envelope.
5. Persist SBOM JSON blob & link to artifact.
**Acceptance criteria**
* Same `ScanResult` always produces bit-identical SBOM JSON.
* Unit tests verifying:
* PURL mapping correctness.
* Stable ordering.
* `/sbom` endpoint can:
* Build SBOM from scan.
* Return existing SBOM if already generated (idempotence).
---
## 3. Attestation Authority & trust log (Phase 3)
**Goal:** Verify all attestations, store them with a trust log, and produce `PolicyVerified` attestations.
### 3.1 Authority service (`StellaOps.Attest` + `StellaOps.WebApi`)
Key interfaces:
```csharp
public interface IAuthority
{
Task<AttestationId> RecordAsync(DsseEnvelope envelope, CancellationToken ct);
Task<Proof> VerifyChainAsync(ArtifactRef artifact, CancellationToken ct);
}
```
Implementation steps:
1. **Attestations store**
* Table `attestations`:
* `id` (AttestationId, PK)
* `artifact_kind` / `artifact_value`
* `predicate_type` (enum)
* `payload_type`
* `payload_hash`
* `envelope_json`
* `created_at`
* `signer_keyid`
* Table `trust_log`:
* `id`
* `attestation_id`
* `status` (verified / failed / pending)
* `reason`
* `verified_at`
* `verification_data_json` (cert chain, Rekor log index, etc.)
2. **Verification pipeline**
* Implement `IAttestationVerifier.VerifyAsync`:
* Check envelope integrity (no duplicate signatures, required fields).
* Verify crypto signature (keys from configuration store or Sigstore if you integrate later).
* `IAuthority.RecordAsync`:
* Verify envelope.
* Save to `attestations`.
* Add entry to `trust_log`.
* `VerifyChainAsync`:
* For a given `ArtifactRef`:
* Load all attestations for that artifact.
* Ensure each is `status=verified`.
* Compute `InputsLock` = hash of:
* Sorted predicate payloads.
* Feeds manifest.
* Policy rules.
* Return `Proof`.
### 3.2 `/attest` API
* **POST /attest**: submit DSSE envelope (for external tools).
* **GET /attest?artifact=`...`**: list attestations + trust status.
* **GET /attest/{id}/proof**: return verification proof (including InputsLock).
**Acceptance criteria**
* Invalid signatures rejected.
* Tampering test: alter a byte in envelope JSON → verification fails.
* `VerifyChainAsync` returns same `Proof.InputsLock` for identical sets of inputs.
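The tampering test has a simple shape: sign the envelope bytes, flip a byte, confirm verification fails. A sketch using stdlib HMAC purely as a stand-in — production verification uses asymmetric keys (Sigstore or a configured trust store), not a shared secret:

```python
import hmac, hashlib

def sign(key: bytes, envelope_bytes: bytes) -> str:
    return hmac.new(key, envelope_bytes, hashlib.sha256).hexdigest()

def verify(key: bytes, envelope_bytes: bytes, sig: str) -> bool:
    # Constant-time compare; any altered byte in the envelope changes the MAC.
    return hmac.compare_digest(sign(key, envelope_bytes), sig)
```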
---
## 4. Graph Store & Policy engine (Phase 4)
**Goal:** Store SBOM, vulnerabilities, reachability, VEX, and query them to make deterministic VEX-gate decisions.
### 4.1 Graph model (`StellaOps.Graph`)
Tables (simplified):
* `artifacts`:
* `id` (PK), `kind`, `value`, `digest_algorithm`, `digest_value`
* `components`:
* `id`, `purl`, `name`, `version`, `license`, `digest_algorithm`, `digest_value`
* `vulnerabilities`:
* `id`, `cve_id`, `severity`, `source` (NVD/OSV/vendor), `data_json`
* `vex_statements`:
* `id`, `cve_id`, `component_purl`, `status` (`not_affected`, `affected`, etc.), `source`, `data_json`
* `edges`:
* `id`, `from_kind`, `from_id`, `to_kind`, `to_id`, `relation` (enum), `evidence_id`, `data_json`
Example `relation` values:
* `artifact_contains_component`
* `component_vulnerable_to`
* `component_reachable_via`
* `vulnerability_overridden_by_vex`
* `artifact_scanned_by`
* `decision_verified_by`
Graph access abstraction:
```csharp
public interface IGraphRepository
{
Task UpsertSbomAsync(CanonicalSbom sbom, EvidenceId evidenceId, CancellationToken ct);
Task ApplyVulnerabilityFactsAsync(IEnumerable<VulnerabilityFact> facts, CancellationToken ct);
Task ApplyReachabilityFactsAsync(IEnumerable<ReachabilityFact> facts, CancellationToken ct);
Task ApplyVexStatementsAsync(IEnumerable<VexStatement> vexStatements, CancellationToken ct);
Task<ArtifactGraphSnapshot> GetSnapshotAsync(ArtifactRef artifact, CancellationToken ct);
}
```
`ArtifactGraphSnapshot` is an in-memory projection used by the policy engine.
### 4.2 Policy engine (`StellaOps.Policy`)
Policy lattice (minimal version):
```csharp
public enum RiskState
{
Clean,
VulnerableNotReachable,
VulnerableReachable,
Unknown
}
public sealed class PolicyEvaluationContext
{
public ArtifactRef Artifact { get; init; } = default!;
public ArtifactGraphSnapshot Snapshot { get; init; } = default!;
public IReadOnlyDictionary<string,string>? Environment { get; init; }
}
public interface IPolicyEngine
{
GateDecision Evaluate(PolicyEvaluationContext context);
}
```
Default policy logic:
1. For each vulnerability affecting a component in the artifact:
* Check for VEX:
* If trusted VEX says `not_affected` → ignore.
* Check reachability:
* If proven reachable → mark as `VulnerableReachable`.
* If proven not reachable → `VulnerableNotReachable`.
* If unknown → `Unknown`.
2. Aggregate:
* If any `Critical/High` in `VulnerableReachable` → `Block`.
* Else if any `Critical/High` in `Unknown` → `Warn` and log Unknowns.
* Else → `Allow`.
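The per-vulnerability classification and the aggregation step can be sketched as two small functions mirroring the `RiskState` enum (Python for brevity; names and shapes are illustrative of, not identical to, the C# contracts):

```python
def classify(vex_not_affected: bool, reachable) -> str:
    """Per-vulnerability state; reachable is True / False / None (unknown)."""
    if vex_not_affected:
        return "Clean"
    if reachable is True:
        return "VulnerableReachable"
    if reachable is False:
        return "VulnerableNotReachable"
    return "Unknown"

def aggregate(findings: list[tuple[str, str]]) -> str:
    """findings: (severity, risk_state) pairs -> Allow / Warn / Block."""
    high = [state for sev, state in findings if sev in ("High", "Critical")]
    if "VulnerableReachable" in high:
        return "Block"
    if "Unknown" in high:
        return "Warn"
    return "Allow"
```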
### 4.3 `/vex-gate` endpoint
Implementation:
* Resolve `ArtifactRef`.
* Build `ArtifactGraphSnapshot` using `IGraphRepository.GetSnapshotAsync`.
* Call `IPolicyEngine.Evaluate`.
* Call `IAuthority.VerifyChainAsync` → `Proof`.
* Emit `PolicyVerified` attestation for this decision.
* Return `GateDecision` + `Proof`.
**Acceptance criteria**
* Given a fixture DB snapshot, calling `/vex-gate` twice yields identical decisions & proof IDs.
* Policy behavior matches the rule text:
* Regression test that modifies severity or reachability → correct decision changes.
---
## 5. Diffs & Unknowns workflow (Phase 5)
### 5.1 Diff engine (`/diff`)
Contracts:
```csharp
public sealed class DiffRequest
{
public string Kind { get; init; } = default!; // "sbom-sbom" | "sbom-runtime"
public string LeftId { get; init; } = default!;
public string RightId { get; init; } = default!;
}
public sealed class DiffComponentChange
{
public string Purl { get; init; } = default!;
public string ChangeType { get; init; } = default!; // "added" | "removed" | "changed"
public string? OldVersion { get; init; }
public string? NewVersion { get; init; }
}
public sealed class DiffResponse
{
public IReadOnlyList<DiffComponentChange> Components { get; init; } = Array.Empty<DiffComponentChange>();
}
```
Implementation:
* SBOM↔SBOM: compare `CanonicalSbom.Components` by PURL (+ version).
* SBOM↔runtime:
* Input runtime snapshot (`process maps`, `loaded libs`, etc.) from agents.
* Map runtime libs to PURLs.
* Determine reachable components from runtime usage → `ReachabilityFact`s into graph.
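The SBOM↔SBOM comparison maps directly onto the `DiffComponentChange` contract. A sketch that keys components by a version-less PURL (a simplification — real PURLs often embed the version, so normalization happens first):

```python
def diff_sboms(left: list[dict], right: list[dict]) -> list[dict]:
    """Compare components by PURL; a version difference yields 'changed'."""
    l = {c["purl"]: c["version"] for c in left}
    r = {c["purl"]: c["version"] for c in right}
    changes = []
    for purl in sorted(l.keys() - r.keys()):
        changes.append({"purl": purl, "changeType": "removed",
                        "oldVersion": l[purl], "newVersion": None})
    for purl in sorted(r.keys() - l.keys()):
        changes.append({"purl": purl, "changeType": "added",
                        "oldVersion": None, "newVersion": r[purl]})
    for purl in sorted(l.keys() & r.keys()):
        if l[purl] != r[purl]:
            changes.append({"purl": purl, "changeType": "changed",
                            "oldVersion": l[purl], "newVersion": r[purl]})
    return changes
```

Sorting each bucket keeps the diff output deterministic, which matters for replay and snapshot tests.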
### 5.2 Unknowns module (`/unknowns`)
Data model:
```csharp
public enum UnknownState
{
New,
Triage,
VendorQuery,
Verified,
Closed
}
public sealed class Unknown
{
public Guid Id { get; init; }
public ArtifactRef Artifact { get; init; } = default!;
public string Type { get; init; } = default!; // "vuln-mapping", "reachability", "vex-trust"
public string Subject { get; init; } = default!; // e.g., "CVE-2024-XXXX / purl:pkg:..."
public UnknownState State { get; set; }
public DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset? SlaDeadline { get; set; }
public string? Owner { get; set; }
public string EvidenceJson { get; init; } = default!; // serialized proof / edges
public string? ResolutionNotes { get; set; }
}
```
API:
* `GET /unknowns`: filter by state, artifact, owner.
* `POST /unknowns`: create manual unknown.
* `PATCH /unknowns/{id}`: update state, owner, notes.
Integration:
* Policy engine:
* For any `Unknown` risk state, auto-create Unknown with SLA if not already present.
* When Unknown resolves (e.g., vendor VEX added), re-run policy evaluation for affected artifact(s).
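The auto-create/re-evaluate contract can be sketched as follows (in-memory Python stand-in; the real module persists `Unknown` rows with SLA deadlines and owners):

```python
# Sketch: create an Unknown at most once per (artifact, type, subject),
# and trigger a policy re-run when it resolves.
def ensure_unknown(store, artifact, subject, unknown_type):
    key = (artifact, unknown_type, subject)
    if key not in store:
        store[key] = {"state": "New", "type": unknown_type, "subject": subject}
        return True   # created
    return False      # already tracked; no duplicate row

def resolve_unknown(store, key, reevaluate):
    store[key]["state"] = "Verified"
    reevaluate(key[0])  # re-run policy evaluation for the affected artifact
```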
**Acceptance criteria**
* When `VulnerableReachability` is `Unknown`, `/vex-gate` both:
* Returns `Warn`.
* Creates an Unknown row.
* Transitioning Unknown to `Verified` triggers re-evaluation (integration test).
---
## 6. Offline / airgapped bundles (Phase 6)
**Goal:** Everything works on a single machine with no network.
### 6.1 Bundle format & IO (`StellaOps.Cli` + `StellaOps.WebApi`)
Directory structure inside ZIP:
```text
/bundle/
feeds/
manifest.json // hashes, timestamps for NVD, OSV, vendor feeds
nvd.json
osv.json
vendor-*.json
sboms/
{artifactDigest}.json
attestations/
*.jsonl // one DSSE envelope per line
proofs/
rekor/
merkle.json
policy/
lattice.json // serialized rules / thresholds
replay/
inputs.lock // hash & metadata of all of the above
```
Implement:
```csharp
public interface IBundleReader
{
Task<Bundle> ReadAsync(string path, CancellationToken ct);
}
public interface IBundleWriter
{
Task WriteAsync(Bundle bundle, string path, CancellationToken ct);
}
```
`Bundle` holds strongly-typed representations of the manifest, SBOMs, attestations, proofs, etc.
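`inputs.lock` is the determinism anchor for the whole bundle, so its construction is worth pinning down. A Python sketch, assuming the lock maps bundle-relative paths to SHA-256 digests (the exact field layout is not specified above):

```python
import hashlib
import json

# Sketch of building replay/inputs.lock: a stable hash over every file in the
# bundle. Canonical JSON (sorted keys, no whitespace) makes the lock itself
# hash identically regardless of traversal order.
def build_inputs_lock(files):
    """files: dict of bundle-relative path -> bytes. Returns (lock_json, lock_hash)."""
    entries = {
        path: hashlib.sha256(data).hexdigest()
        for path, data in files.items()
    }
    lock_json = json.dumps(entries, sort_keys=True, separators=(",", ":"))
    return lock_json, hashlib.sha256(lock_json.encode("utf-8")).hexdigest()
```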
### 6.2 CLI commands
* `stella scan --image registry/app:1.2.3 --out bundle.zip`
* Runs scan + sbom locally.
* Writes bundle with:
* SBOM.
* Scan + Sbom attestations.
* Feeds manifest.
* `stella vex-gate --bundle bundle.zip`
* Loads bundle.
* Runs policy engine locally.
* Prints `Allow/Warn/Block` + proof summary.
**Acceptance criteria**
* Given the same `bundle.zip`, `stella vex-gate` on different machines produces identical decisions and proof hashes.
* `/vex-gate?bundle=/path/to/bundle.zip` in API uses same BundleReader and yields same output as CLI.
---
## 7. Testing & quality plan
### 7.1 Unit tests
* Domain & Contracts:
* Serialization roundtrip for all DTOs.
* Attest:
* DSSE encode/decode.
* Signature verification with test key pair.
* Sbom:
* Known `ScanResult` → expected SBOM JSON snapshot.
* Policy:
* Table-driven tests:
* Cases: {severity, reachable, hasVex} → {Allow/Warn/Block}.
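Those table-driven cases can be sketched directly. The `evaluate` function below is a Python stand-in for the real `IPolicyEngine`: it encodes the section 4.2 rule text plus an assumed "VEX present → Allow" override, so treat it as illustrative rather than authoritative:

```python
# Table-driven policy test sketch; the evaluate() stand-in is an assumption.
def evaluate(severity, reachable, has_vex):
    high = severity in {"Critical", "High"}
    if has_vex:                      # assumed: vendor VEX says not affected
        return "Allow"
    if high and reachable is True:
        return "Block"
    if high and reachable is None:   # reachability unknown
        return "Warn"
    return "Allow"

CASES = [
    ("Critical", True,  False, "Block"),
    ("Critical", None,  False, "Warn"),
    ("Critical", True,  True,  "Allow"),
    ("Low",      True,  False, "Allow"),
]

def run_table():
    return [evaluate(s, r, v) == expected for s, r, v, expected in CASES]
```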
### 7.2 Integration tests
* Scanner:
* Use a tiny test image with known components.
* Graph + Policy:
* Seed DB with:
* 1 artifact, 2 components, 1 vuln, 1 VEX, 1 reachability fact.
* Assert that `/vex-gate` returns expected decision.
### 7.3 E2E scenario
Single test flow:
1. `POST /scan` → EvidenceId.
2. `POST /sbom` → SBOM + SbomProduced attestation.
3. Load dummy vulnerability feed → `ApplyVulnerabilityFactsAsync`.
4. `POST /vex-gate` → Block (no VEX).
5. Add VEX statement → `ApplyVexStatementsAsync`.
6. `POST /vex-gate` → Allow.
Assertions:
* All decisions contain `Proof` with non-empty `InputsLock`.
* `InputsLock` is identical between runs with unchanged inputs.
---
## 8. Concrete backlog (you can paste into Jira)
### Epic 1 - Foundations
* Task: Create solution & project skeleton.
* Task: Implement core domain types (`Digest`, `ArtifactRef`, `EvidenceId`, `Proof`).
* Task: Implement DSSE envelope + JSON serialization.
* Task: Implement basic `IAttestationSigner` with local key pair.
* Task: Define `GateDecision` & `VexGateRequest` contracts.
### Epic 2 - Scanner & Sbomer
* Task: Implement `IArtifactScanner` + `SyftScanner`.
* Task: Implement `/scan` endpoint + attestation.
* Task: Implement `ISbomer` & canonical SBOM model.
* Task: Implement `/sbom` endpoint + SbomProduced attestation.
* Task: Snapshot tests for SBOM determinism.
### Epic 3 - Authority & Trust log
* Task: Design `attestations` & `trust_log` tables (EF Core migrations).
* Task: Implement `IAuthority.RecordAsync` + `VerifyChainAsync`.
* Task: Implement `/attest` endpoints.
* Task: Add proof generation (`InputsLock` hashing).
### Epic 4 - Graph & Policy
* Task: Create graph schema (`artifacts`, `components`, `vulnerabilities`, `edges`, `vex_statements`).
* Task: Implement `IGraphRepository.UpsertSbomAsync`.
* Task: Ingest vulnerability feed (NVD/OSV) into graph facts.
* Task: Implement minimal `IPolicyEngine` with rules.
* Task: Implement `/vex-gate` endpoint.
### Epic 5 - Diff & Unknowns
* Task: Implement SBOM↔SBOM diff logic + `/diff`.
* Task: Create `unknowns` table + API.
* Task: Wire policy engine to auto-create Unknowns.
* Task: Add re-evaluation when Unknown state changes.
### Epic 6 - Offline bundles & CLI
* Task: Implement `BundleReader` / `BundleWriter`.
* Task: Implement `stella scan` and `stella vex-gate`.
* Task: Add `?bundle=` parameter support in APIs.
---
If you'd like, I can next:
* Turn this into actual C# interface files (ready to drop into your repo), or
* Produce a JSON OpenAPI sketch for `/scan`, `/sbom`, `/attest`, `/vex-gate`, `/diff`, `/unknowns`.

---
Here's a compact, practical way to add an **explanation graph** that traces every vulnerability verdict back to raw evidence, so auditors can verify results without trusting an LLM.
---
# What it is (in one line)
A small, immutable graph that connects a **verdict** → **reasoning steps** → **raw evidence** (source scan records, binary symbol/buildID matches, external advisories/feeds), with cryptographic hashes so anyone can replay/verify it.
---
# Minimal data model (vendor-neutral)
```json
{
"explanationGraph": {
"scanId": "uuid",
"artifact": {
"purl": "pkg:docker/redis@7.2.4",
"digest": "sha256:…",
"buildId": "elf:abcd…|pe:…|macho:…"
},
"verdicts": [
{
"verdictId": "uuid",
"cve": "CVE-2024-XXXX",
"status": "affected|not_affected|under_investigation",
"policy": "vex/lattice:v1",
"reasoning": [
{"stepId":"s1","type":"callgraph.reachable","evidenceRef":"e1"},
{"stepId":"s2","type":"version.match","evidenceRef":"e2"},
{"stepId":"s3","type":"vendor.vex.override","evidenceRef":"e3"}
],
"provenance": {
"scanner": "StellaOps.Scanner@1.3.0",
"rulesHash": "sha256:…",
"time": "2025-11-25T12:34:56Z",
"attestation": "dsse:…"
}
}
],
"evidence": [
{
"evidenceId":"e1",
"kind":"binary.callgraph",
"hash":"sha256:…",
"summary":"main -> libssl!EVP_* path present",
"blobPointer":"ipfs://… | file://… | s3://…"
},
{
"evidenceId":"e2",
"kind":"source.scan",
"hash":"sha256:…",
"summary":"Detected libssl 3.0.14 via SONAME + buildid",
"blobPointer":"…"
},
{
"evidenceId":"e3",
"kind":"external.feed",
"hash":"sha256:…",
"summary":"Vendor VEX: CVE not reachable when FIPS mode enabled",
"blobPointer":"…",
"externalRef":{"type":"advisory","id":"VEX-ACME-2025-001","url":"…"}
}
]
}
}
```
---
# How it works (flow)
* **Collect** raw artifacts: scanner findings, binary symbol matches (BuildID / PDB / dSYM), SBOM components, external feeds (NVD, vendor VEX).
* **Normalize** to evidence nodes (immutable blobs with content hash + pointer).
* **Reason** via small, deterministic rules (your lattice/policy). Each rule emits a *reasoning step* that points to evidence.
* **Emit a verdict** with status + full chain of steps.
* **Seal** with DSSE/Sigstore (or your offline signer) so the whole graph is replayable.
---
# Why this helps (auditable AI)
* **No black box**: every “affected/not affected” claim links to verifiable bytes.
* **Deterministic**: same inputs + rules = same verdict (hashes prove it).
* **Reproducible for clients/regulators**: export graph + blobs, they replay locally.
* **LLM-optional**: you can add LLM explanations as *non-authoritative* annotations; the verdict remains policy-driven.
---
# C# dropin (StellaOps style)
```csharp
public record EvidenceNode(
string EvidenceId, string Kind, string Hash, string Summary, string BlobPointer,
ExternalRef? ExternalRef = null);
public record ReasoningStep(string StepId, string Type, string EvidenceRef);
public record Verdict(
string VerdictId, string Cve, string Status, string Policy,
IReadOnlyList<ReasoningStep> Reasoning, Provenance Provenance);
public record Provenance(string Scanner, string RulesHash, DateTimeOffset Time, string Attestation);
public record ExplanationGraph(
Guid ScanId, Artifact Artifact,
IReadOnlyList<Verdict> Verdicts, IReadOnlyList<EvidenceNode> Evidence);
public record Artifact(string Purl, string Digest, string BuildId);
```
* Persist as immutable documents (Mongo collection `explanations`).
* Store large evidence blobs in object storage; keep `hash` + `blobPointer` in Mongo.
* Sign the serialized graph (DSSE) and store the signature alongside.
---
# UI (compact “trace” panel)
* **Top line:** CVE → Status chip (Affected / Not affected / Needs review).
* **Three tabs:** *Evidence*, *Reasoning*, *Provenance*.
* **One-click export:** “Download Replay Bundle (.zip)” → JSON graph + evidence blobs + verify script.
* **Badge:** “Deterministic ✓” when rulesHash + inputs resolve to prior signature.
---
# Ops & replay
* Bundle a tiny CLI: `stellaops-explain verify graph.json --evidence ./blobs/`.
* Verification checks: all hashes match, DSSE signature valid, rulesHash known, verdict derivable from steps.
---
# Where to start (1-week sprint)
* Day 1-2: Model + Mongo collections + signer service.
* Day 3: Scanner adapters emit `EvidenceNode` records; policy engine emits `ReasoningStep`.
* Day 4: Verdict assembly + DSSE signing + export bundle.
* Day 5: Minimal UI trace panel + CLI verifier.
If you want, I can generate the Mongo schemas, a DSSE signing helper, and the React/Angular trace panel stub next.
Here's a concrete implementation plan you can hand to your developers so they're not guessing what to build.
I'll break it down by **phases**, and inside each phase I'll call out **owner**, **deliverables**, and **acceptance criteria**.
---
## Phase 0 - Scope & decisions (½ day)
**Goal:** Lock in the “rules of the game” so nobody bikesheds later.
**Decisions to confirm (write in a short ADR):**
1. **Canonical representation & hashing**
* Format for hashing: **canonical JSON** (stable property ordering, UTF-8, no whitespace).
* Algorithm: **SHA256** for:
* `ExplanationGraph` document
* each `EvidenceNode`
* Hash scope:
* `evidence.hash` = hash of the raw evidence blob (or canonical subset if huge)
* `graphHash` = hash of the entire explanation graph document (minus signature).
2. **Signing**
* Format: **DSSE envelope** (`payloadType = "stellaops/explanation-graph@v1"`).
* Key management: use existing **offline signing key** or Sigstore-style keyless if already in org.
* Signature attached as:
* `provenance.attestation` field inside each verdict **and**
* stored in a separate `explanation_signatures` collection or S3 path for replay.
3. **Storage**
* Metadata: **MongoDB** collection `explanation_graphs`.
* Evidence blobs:
* S3 (or compatible) bucket `stella-explanations/` with layout:
* `evidence/{evidenceId}` or `evidence/{hash}`.
4. **ID formats**
* `scanId`: UUID (string).
* `verdictId`, `evidenceId`, `stepId`: UUID (string).
* `buildId`: reuse existing convention (`elf:<buildid>`, `pe:<guid>`, `macho:<uuid>`).
**Deliverable:** 1-2 page ADR in repo (`/docs/adr/000-explanation-graph.md`).
---
## Phase 1 - Domain model & persistence (backend)
**Owner:** Backend
### 1.1. Define core C# domain models
Place in `StellaOps.Explanations` project or equivalent:
```csharp
public record ArtifactRef(
string Purl,
string Digest,
string BuildId);
public record ExternalRef(
string Type, // "advisory", "vex", "nvd", etc.
string Id,
string Url);
public record EvidenceNode(
string EvidenceId,
string Kind, // "binary.callgraph", "source.scan", "external.feed", ...
string Hash, // sha256 of blob
string Summary,
string BlobPointer, // s3://..., file://..., ipfs://...
ExternalRef? ExternalRef = null);
public record ReasoningStep(
string StepId,
string Type, // "callgraph.reachable", "version.match", ...
string EvidenceRef); // EvidenceId
public record Provenance(
string Scanner,
string RulesHash, // hash of rules/policy bundle used
DateTimeOffset Time,
string Attestation); // DSSE envelope (base64 or JSON)
public record Verdict(
string VerdictId,
string Cve,
string Status, // "affected", "not_affected", "under_investigation"
string Policy, // e.g. "vex.lattice:v1"
IReadOnlyList<ReasoningStep> Reasoning,
Provenance Provenance);
public record ExplanationGraph(
Guid ScanId,
ArtifactRef Artifact,
IReadOnlyList<Verdict> Verdicts,
IReadOnlyList<EvidenceNode> Evidence,
string GraphHash); // sha256 of canonical JSON
```
### 1.2. MongoDB schema
Collection: `explanation_graphs`
Document shape:
```jsonc
{
"_id": "scanId:artifactDigest", // composite key or just ObjectId + separate fields
"scanId": "uuid",
"artifact": {
"purl": "pkg:docker/redis@7.2.4",
"digest": "sha256:...",
"buildId": "elf:abcd..."
},
"verdicts": [ /* Verdict[] */ ],
"evidence": [ /* EvidenceNode[] */ ],
"graphHash": "sha256:..."
}
```
**Indexes:**
* `{ scanId: 1 }`
* `{ "artifact.digest": 1 }`
* `{ "verdicts.cve": 1, "artifact.digest": 1 }` (compound)
* Optional: TTL or archiving mechanism if you don't want to keep these forever.
**Acceptance criteria:**
* You can serialize/deserialize `ExplanationGraph` to Mongo without loss.
* Indexes exist and queries by `scanId`, `artifact.digest`, and `(digest + CVE)` are efficient.
---
## Phase 2 - Evidence ingestion plumbing
**Goal:** Make every relevant raw fact show up as an `EvidenceNode`.
**Owner:** Backend scanner team
### 2.1. Evidence factory service
Create `IEvidenceService`:
```csharp
public interface IEvidenceService
{
Task<EvidenceNode> StoreBinaryCallgraphAsync(
Guid scanId,
ArtifactRef artifact,
byte[] callgraphBytes,
string summary,
ExternalRef? externalRef = null);
Task<EvidenceNode> StoreSourceScanAsync(
Guid scanId,
ArtifactRef artifact,
byte[] scanResultJson,
string summary);
Task<EvidenceNode> StoreExternalFeedAsync(
Guid scanId,
ExternalRef externalRef,
byte[] rawPayload,
string summary);
}
```
Implementation tasks:
1. **Hash computation**
* Compute SHA256 over raw bytes.
* Prefer a helper:
```csharp
// requires: using System.Security.Cryptography;
public static string Sha256Hex(ReadOnlySpan<byte> data)
    => Convert.ToHexString(SHA256.HashData(data)).ToLowerInvariant();
```
2. **Blob storage**
* S3 key format, e.g.: `explanations/{scanId}/{evidenceId}`.
* `BlobPointer` string = `s3://stella-explanations/explanations/{scanId}/{evidenceId}`.
3. **EvidenceNode creation**
* Generate `evidenceId = Guid.NewGuid().ToString("N")`.
* Populate `kind`, `hash`, `summary`, `blobPointer`, `externalRef`.
4. **Graph assembly contract**
* Evidence service **does not** write to Mongo.
* It only uploads blobs and returns `EvidenceNode` objects.
* The **ExplanationGraphBuilder** (next phase) collects them.
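Putting tasks 1-3 together, the evidence path looks roughly like this Python stand-in (the real service is C#; bucket layout follows the Phase 0 sketch and the S3 upload is stubbed out):

```python
import hashlib
import uuid

# Sketch: hash the raw blob, derive the pointer, upload, and return an
# EvidenceNode-shaped dict for the graph builder. No Mongo write here.
def make_evidence_node(scan_id, kind, blob, summary, upload):
    evidence_id = uuid.uuid4().hex
    digest = "sha256:" + hashlib.sha256(blob).hexdigest()
    pointer = f"s3://stella-explanations/explanations/{scan_id}/{evidence_id}"
    upload(pointer, blob)  # real impl: S3 put
    return {
        "evidenceId": evidence_id,
        "kind": kind,
        "hash": digest,
        "summary": summary,
        "blobPointer": pointer,
    }
```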
**Acceptance criteria:**
* Given a callgraph binary, a corresponding `EvidenceNode` is returned with:
* hash matching the blob (verified in tests),
* blob present in S3,
* summary populated.
---
## Phase 3 - Reasoning & policy integration
**Goal:** Instrument your existing VEX / lattice policy engine to emit deterministic **reasoning steps** instead of just a boolean status.
**Owner:** Policy / rules engine team
### 3.1. Expose rule evaluation trace
Assume you already have something like:
```csharp
VulnerabilityStatus Evaluate(ArtifactRef artifact, string cve, Findings findings);
```
Extend it to:
```csharp
public sealed class RuleEvaluationTrace
{
public string StepType { get; init; } // e.g. "version.match"
public string RuleId { get; init; } // "rule:openssl:versionFromElf"
public string Description { get; init; } // human-readable explanation
public string EvidenceKind { get; init; } // to match with EvidenceService
public object EvidencePayload { get; init; } // callgraph bytes, json, etc.
}
public sealed class EvaluationResult
{
public string Status { get; init; } // "affected", etc.
public IReadOnlyList<RuleEvaluationTrace> Trace { get; init; }
}
```
New API:
```csharp
EvaluationResult EvaluateWithTrace(
ArtifactRef artifact, string cve, Findings findings);
```
### 3.2. From trace to ReasoningStep + EvidenceNode
Create `ExplanationGraphBuilder`:
```csharp
public interface IExplanationGraphBuilder
{
Task<ExplanationGraph> BuildAsync(
Guid scanId,
ArtifactRef artifact,
IReadOnlyList<CveFinding> cveFindings,
string scannerName);
}
```
Internal algorithm for each `CveFinding`:
1. Call `EvaluateWithTrace(artifact, cve, finding)` to get `EvaluationResult`.
2. For each `RuleEvaluationTrace`:
* Use `EvidenceService` with appropriate method based on `EvidenceKind`.
* Get back an `EvidenceNode` with `evidenceId`.
* Create `ReasoningStep`:
* `StepId = Guid.NewGuid()`
* `Type = trace.StepType`
* `EvidenceRef = evidenceNode.EvidenceId`
3. Assemble `Verdict`:
```csharp
var verdict = new Verdict(
verdictId: Guid.NewGuid().ToString("N"),
cve: finding.Cve,
status: result.Status,
policy: "vex.lattice:v1",
reasoning: steps,
provenance: new Provenance(
scanner: scannerName,
rulesHash: rulesBundleHash,
time: DateTimeOffset.UtcNow,
attestation: "" // set in Phase 4
)
);
```
4. Collect:
* all `EvidenceNode`s (dedupe by `hash` to avoid duplicates).
* all `Verdict`s.
**Acceptance criteria:**
* Given deterministic inputs (scan + rules bundle hash), repeated runs produce:
* same sequence of `ReasoningStep` types,
* same set of `EvidenceNode.hash` values,
* same `status`.
---
## Phase 4 - Graph hashing & DSSE signing
**Owner:** Security / platform
### 4.1. Canonical JSON for hash
Implement:
```csharp
public static class ExplanationGraphSerializer
{
public static string ToCanonicalJson(ExplanationGraph graph)
{
// no graphHash, no attestation in this step
}
}
```
Key requirements:
* Consistent property ordering (e.g. alphabetical).
* No extra whitespace.
* UTF-8 encoding.
* Primitive formatting options fixed (e.g. date as ISO 8601 with `Z`).
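These requirements pin the serializer down completely, which makes it easy to check in any language. A Python sketch of the same contract (zeroing the nested `attestation` fields is elided here for brevity; the real serializer lives in C#):

```python
import hashlib
import json

# Sketch of canonical JSON: alphabetical keys at every level, no whitespace,
# UTF-8, graphHash zeroed before hashing.
def canonical_json(graph):
    stripped = dict(graph, graphHash="")
    return json.dumps(stripped, sort_keys=True, separators=(",", ":"),
                      ensure_ascii=False)

def graph_hash(graph):
    digest = hashlib.sha256(canonical_json(graph).encode("utf-8")).hexdigest()
    return "sha256:" + digest
```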
### 4.2. Hash and sign
Before persisting:
```csharp
var graphWithoutHash = graph with { GraphHash = "" };
var canonicalJson = ExplanationGraphSerializer.ToCanonicalJson(graphWithoutHash);
var graphHash = Sha256Hex(Encoding.UTF8.GetBytes(canonicalJson));
// sign DSSE envelope
var envelope = dsseSigner.Sign(
payloadType: "stellaops/explanation-graph@v1",
payload: Encoding.UTF8.GetBytes(canonicalJson)
);
// attach
var signedVerdicts = graph.Verdicts
.Select(v => v with
{
Provenance = v.Provenance with { Attestation = envelope.ToJson() }
})
.ToList();
var finalGraph = graph with
{
GraphHash = $"sha256:{graphHash}",
Verdicts = signedVerdicts
};
```
Then write `finalGraph` to Mongo.
**Acceptance criteria:**
* Recomputing `graphHash` from Mongo document (zeroing `graphHash` and `attestation`) matches stored value.
* Verifying DSSE signature with the public key succeeds.
---
## Phase 5 - Backend APIs & export bundle
**Owner:** Backend / API
### 5.1. Read APIs
Add endpoints (REST-ish):
1. **Get graph for scan-artifact**
`GET /explanations/scans/{scanId}/artifacts/{digest}`
* Returns entire `ExplanationGraph` JSON.
2. **Get single verdict**
`GET /explanations/scans/{scanId}/artifacts/{digest}/cves/{cve}`
* Returns `Verdict` + its subset of `EvidenceNode`s.
3. **Search by CVE**
`GET /explanations/search?cve=CVE-2024-XXXX&digest=sha256:...`
* Returns list of `(scanId, artifact, verdictId)`.
### 5.2. Export replay bundle
`POST /explanations/{scanId}/{digest}/export`
Implementation:
* Create a temporary directory.
* Write:
* `graph.json` → `ExplanationGraph` as stored.
* `signature.json` → DSSE envelope alone (optional).
* Evidence blobs:
* For each `EvidenceNode`:
* Download from S3 and store as `evidence/{evidenceId}`.
* Zip the folder: `explanation-{scanId}-{shortDigest}.zip`.
* Stream as download.
### 5.3. CLI verifier
Small .NET / Go CLI:
Commands:
```bash
stellaops-explain verify graph.json --evidence ./evidence
```
Verification steps:
1. Load `graph.json`, parse to `ExplanationGraph`.
2. Strip `graphHash` & `attestation`, reserialize canonical JSON.
3. Recompute SHA256 and compare to `graphHash`.
4. Verify DSSE envelope with public key.
5. For each `EvidenceNode`:
* Read file `./evidence/{evidenceId}`.
* Recompute hash and compare with `evidence.hash`.
Exit with a non-zero code if anything fails; print a short summary.
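Steps 1-5 translate to a short routine. A Python sketch with the DSSE check elided (it needs the public key) and evidence files assumed to be named by `evidenceId`:

```python
import hashlib
import json
import pathlib

# Sketch of verifier steps: recompute graphHash from canonical JSON, then
# re-hash every evidence blob. Returns a list of failure strings (empty = ok).
def verify_bundle(graph, evidence_dir):
    failures = []
    stripped = dict(graph, graphHash="")
    canonical = json.dumps(stripped, sort_keys=True, separators=(",", ":"))
    recomputed = "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    if recomputed != graph["graphHash"]:
        failures.append("graphHash mismatch")
    for node in graph.get("evidence", []):
        blob = pathlib.Path(evidence_dir, node["evidenceId"]).read_bytes()
        if "sha256:" + hashlib.sha256(blob).hexdigest() != node["hash"]:
            failures.append(f"evidence {node['evidenceId']} tampered")
    return failures
```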
**Acceptance criteria:**
* Export bundle roundtrips: `verify` passes on an exported zip.
* APIs documented in OpenAPI / Swagger.
---
## Phase 6 - UI: Explanation trace panel
**Owner:** Frontend
### 6.1. API integration
New calls in frontend client:
* `GET /explanations/scans/{scanId}/artifacts/{digest}`
* Optionally `GET /explanations/.../cves/{cve}` if you want lazy loading per CVE.
### 6.2. Component UX
On the “vulnerability detail” view:
* Add **“Explanation”** tab with three sections:
1. **Verdict summary**
* Badge: `Affected` / `Not affected` / `Under investigation`.
* Text: `Derived using policy {policy}, rules hash {rulesHash[..8]}.`
2. **Reasoning timeline**
* Vertical list of `ReasoningStep`s:
* Icon per type (e.g. “flow” icon for `callgraph.reachable`).
* Title = `Type` (humanized).
* Click to expand underlying `EvidenceNode.summary`.
* Optional “View raw evidence” link (downloads blob via S3 signed URL).
3. **Provenance**
* Show:
* `scanner`
* `rulesHash`
* `time`
* “Attested ✓” if DSSE verifies on the backend (or precomputed).
4. **Export**
* Button: “Download replay bundle (.zip)”
* Calls export endpoint and triggers browser download.
**Acceptance criteria:**
* For any CVE in UI, a user can:
* See why it is (not) affected in at most 2 clicks.
* Download a replay bundle via the UI.
---
## Phase 7 - Testing strategy
**Owner:** QA + all devs
### 7.1. Unit tests
* EvidenceService:
* Hash matches blob contents.
* BlobPointer formats are as expected.
* ExplanationGraphBuilder:
* Given fixed test input, the resulting graph JSON matches golden file.
* Serializer:
* Canonical JSON is stable under property reordering in the code.
### 7.2. Integration tests
* End-to-end fake scan:
* Simulate scanner output + rules.
* Build graph → persist → fetch via API.
* Run CLI verify on exported bundle in CI.
### 7.3. Security tests
* Signature tampering:
* Modify `graph.json` in exported bundle; `verify` must fail.
* Evidence tampering:
* Modify an evidence file; `verify` must fail.
---
## Phase 8 - Rollout
**Owner:** PM / Tech lead
1. **Feature flag**
* Start with explanation graph generation behind a flag for:
* subset of scanners,
* subset of tenants.
2. **Backfill (optional)**
* If useful, run a one-off job that:
* Takes recent scans,
* Rebuilds explanation graphs,
* Stores them in Mongo.
3. **Docs**
* Short doc page for customers:
* “What is an Explanation Graph?”
* “How to verify it with the CLI?”
---
## Developer checklist (TL;DR)
You can literally drop this into Jira as epics/tasks:
1. **Backend**
* [ ] Implement domain models (`ExplanationGraph`, `Verdict`, `EvidenceNode`, etc.).
* [ ] Implement `IEvidenceService` + S3 integration.
* [ ] Extend policy engine to `EvaluateWithTrace`.
* [ ] Implement `ExplanationGraphBuilder`.
* [ ] Implement canonical serializer, hashing, DSSE signing.
* [ ] Implement Mongo persistence + indexes.
* [ ] Implement REST APIs + export ZIP.
2. **Frontend**
* [ ] Wire new APIs into the vulnerability detail view.
* [ ] Build Explanation tab (Summary / Reasoning / Provenance).
* [ ] Implement “Download replay bundle” button.
3. **Tools**
* [ ] Implement `stellaops-explain verify` CLI.
* [ ] Add CI test that runs verify against a sample bundle.
4. **QA**
* [ ] Goldenfile tests for graphs.
* [ ] Signature & evidence tampering tests.
* [ ] UI functional tests on explanations.
---
If you'd like, next step I can turn this into:
* concrete **OpenAPI spec** for the new endpoints, and/or
* a **sample `stellaops-explain verify` CLI skeleton** (C# or Go).

---
Here's a quick win for making your vuln paths auditor-friendly without retraining any models: **add a plain-language `reason` to every graph edge** (why this edge exists). Think “introduced via dynamic import” or “symbol relocation via `ld`”, not jargon soup.
# Why this helps
* **Explains reachability** at a glance (auditors & devs can follow the story).
* **Reduces false-positive fights** (every hop justifies itself).
* **Stable across languages** (no model changes, just metadata).
# Minimal schema change
Add three fields to every edge in your call/dep graph (SBOM→Reachability→Fix plan):
```json
{
"from": "pkg:pypi/requests@2.32.3#requests.sessions.Session.request",
"to": "pkg:pypi/urllib3@2.2.3#urllib3.connectionpool.HTTPConnectionPool.urlopen",
"via": {
"reason": "imported via top-level module dependency",
"evidence": [
"import urllib3 in requests/adapters.py:12",
"pip freeze: urllib3==2.2.3"
],
"provenance": {
"detector": "StellaOps.Scanner.WebService@1.4.2",
"rule_id": "PY-IMPORT-001",
"confidence": "high"
}
}
}
```
### Standard reason glossary (use as enum)
* `declared_dependency` (manifest lock/SBOM edge)
* `static_call` (direct call site with symbol ref)
* `dynamic_import` (e.g., `__import__`, `importlib`, `require(...)`)
* `reflection_call` (C# `MethodInfo.Invoke`, Java reflection)
* `plugin_discovery` (entry points, ServiceLoader, MEF)
* `symbol_relocation` (ELF/PE/MachO relocation binds)
* `plt_got_resolution` (ELF PLT/GOT jump to symbol)
* `ld_preload_injection` (runtime injected .so/.dll)
* `env_config_path` (path read from env/config enables load)
* `taint_propagation` (user input reaches sink)
* `vendor_patch_alias` (function moved/aliased across versions)
# Emission rules (keep it deterministic)
* **One reason per edge**, short, lowercase snake_case from glossary.
* **Up to 3 evidence strings** (file:line or binary section + symbol).
* **Confidence**: `high|medium|low` with a single, stable rubric:
* high = exact symbol/call site or relocation
* medium = heuristic import/loader path
* low = inferred from naming or optional plugin
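A validator enforcing these rules is cheap and worth running at edge-emission time (it is also what "reject free-text" in the checklist below amounts to). A Python sketch, with the glossary copied verbatim from above:

```python
# Sketch of the deterministic-emission checks: glossary-only reasons,
# at most 3 evidence strings, fixed confidence values.
GLOSSARY = {
    "declared_dependency", "static_call", "dynamic_import", "reflection_call",
    "plugin_discovery", "symbol_relocation", "plt_got_resolution",
    "ld_preload_injection", "env_config_path", "taint_propagation",
    "vendor_patch_alias",
}
CONFIDENCE = {"high", "medium", "low"}

def validate_via(via):
    """via: dict with 'reason', 'evidence', 'confidence'. Returns error list."""
    errors = []
    if via["reason"] not in GLOSSARY:
        errors.append(f"unknown reason: {via['reason']}")
    if len(via["evidence"]) > 3:
        errors.append("more than 3 evidence strings")
    if via["confidence"] not in CONFIDENCE:
        errors.append(f"bad confidence: {via['confidence']}")
    return errors
```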
# UI/Report snippet
Render paths like:
```
app → requests → urllib3 → OpenSSL EVP_PKEY_new_raw_private_key
• declared_dependency (poetry.lock)
• static_call (requests.adapters:345)
• symbol_relocation (ELF .rela.plt: _EVP_PKEY_new_raw_private_key)
```
# C# dropin (for your .NET 10 code)
Edge builder with reason/evidence:
```csharp
public sealed record EdgeId(string From, string To);
public sealed record EdgeEvidence(
string Reason, // enum string from glossary
IReadOnlyList<string> Evidence, // file:line, symbol, section
string Confidence, // high|medium|low
string Detector, // component@version
string RuleId // stable rule key
);
public sealed record GraphEdge(EdgeId Id, EdgeEvidence Via);
public static class EdgeFactory
{
public static GraphEdge DeclaredDependency(string from, string to, string manifestPath)
=> new(new EdgeId(from, to),
new EdgeEvidence(
Reason: "declared_dependency",
Evidence: new[] { $"manifest:{manifestPath}" },
Confidence: "high",
Detector: "StellaOps.Scanner.WebService@1.0.0",
RuleId: "DEP-LOCK-001"));
public static GraphEdge SymbolRelocation(string from, string to, string objPath, string section, string symbol)
=> new(new EdgeId(from, to),
new EdgeEvidence(
Reason: "symbol_relocation",
Evidence: new[] { $"{objPath}::{section}:{symbol}" },
Confidence: "high",
Detector: "StellaOps.Scanner.WebService@1.0.0",
RuleId: "BIN-RELOC-101"));
}
```
# Integration checklist (fast path)
* Emit `via.reason/evidence/provenance` for **all** edges (SBOM, source, binary).
* Validate `reason` against glossary; reject free-text.
* Add a “**Why this edge exists**” column in your path tables.
* In JSON/CSV exports, keep columns: `from,to,reason,confidence,evidence0..2,rule_id`.
* In the console, collapse evidence by default; expand on click.
If you want, I'll plug this into your StellaOps graph contracts (Concelier/Cartographer) and produce the enum + validators and a tiny renderer for your docs.
Cool, let's turn this into a concrete, dev-friendly implementation plan you can actually hand to teams.
I'll structure it by phases and by component (schema, producers, APIs, UI, testing, rollout) so you can slice into tickets easily.
---
## 0. Recap of what we're building
**Goal:**
Every edge in your vuln path graph (SBOM → Reachability → Fix plan) carries **machine-readable, auditor-friendly metadata**:
```jsonc
{
"from": "pkg:pypi/requests@2.32.3#requests.sessions.Session.request",
"to": "pkg:pypi/urllib3@2.2.3#urllib3.connectionpool.HTTPConnectionPool.urlopen",
"via": {
"reason": "declared_dependency", // from a controlled enum
"evidence": [
"manifest:requirements.txt:3", // up to 3 short evidence strings
"pip freeze: urllib3==2.2.3"
],
"provenance": {
"detector": "StellaOps.Scanner.WebService@1.4.2",
"rule_id": "PY-IMPORT-001",
"confidence": "high"
}
}
}
```
Standard **reason glossary** (enum):
* `declared_dependency`
* `static_call`
* `dynamic_import`
* `reflection_call`
* `plugin_discovery`
* `symbol_relocation`
* `plt_got_resolution`
* `ld_preload_injection`
* `env_config_path`
* `taint_propagation`
* `vendor_patch_alias`
* `unknown` (fallback only when you truly can't do better)
---
## 1. Design & contracts (shared work for backend & frontend)
### 1.1 Define the canonical edge metadata types
**Owner:** Platform / shared lib team
**Tasks:**
1. In your shared C# library (used by scanners + API), define:
```csharp
public enum EdgeReason
{
Unknown = 0,
DeclaredDependency,
StaticCall,
DynamicImport,
ReflectionCall,
PluginDiscovery,
SymbolRelocation,
PltGotResolution,
LdPreloadInjection,
EnvConfigPath,
TaintPropagation,
VendorPatchAlias
}
public enum EdgeConfidence
{
Low = 0,
Medium,
High
}
public sealed record EdgeProvenance(
string Detector, // e.g., "StellaOps.Scanner.WebService@1.4.2"
string RuleId, // e.g., "PY-IMPORT-001"
EdgeConfidence Confidence
);
public sealed record EdgeVia(
EdgeReason Reason,
IReadOnlyList<string> Evidence,
EdgeProvenance Provenance
);
public sealed record EdgeId(string From, string To);
public sealed record GraphEdge(
EdgeId Id,
EdgeVia Via
);
```
2. Enforce **max 3 evidence strings** via a small helper to avoid accidental spam:
```csharp
public static class EdgeViaFactory
{
private const int MaxEvidence = 3;
public static EdgeVia Create(
EdgeReason reason,
IEnumerable<string> evidence,
string detector,
string ruleId,
EdgeConfidence confidence
)
{
var ev = evidence
.Where(s => !string.IsNullOrWhiteSpace(s))
.Take(MaxEvidence)
.ToArray();
return new EdgeVia(
Reason: reason,
Evidence: ev,
Provenance: new EdgeProvenance(detector, ruleId, confidence)
);
}
}
```
**Acceptance criteria:**
* [ ] EdgeReason enum defined and shared in a reusable package.
* [ ] EdgeVia and EdgeProvenance types exist and are serializable to JSON.
* [ ] Evidence is capped to 3 entries and cannot be null (empty list allowed).
---
### 1.2 API / JSON contract
**Owner:** API team
**Tasks:**
1. Extend your existing graph edge DTO to include `via`:
```csharp
public sealed record GraphEdgeDto
{
public string From { get; init; } = default!;
public string To { get; init; } = default!;
public EdgeViaDto Via { get; init; } = default!;
}
public sealed record EdgeViaDto
{
public string Reason { get; init; } = default!; // enum as string
public string[] Evidence { get; init; } = Array.Empty<string>();
public EdgeProvenanceDto Provenance { get; init; } = default!;
}
public sealed record EdgeProvenanceDto
{
public string Detector { get; init; } = default!;
public string RuleId { get; init; } = default!;
public string Confidence { get; init; } = default!; // "high|medium|low"
}
```
2. Ensure JSON is **additive** (backward compatible):
* `via` is **non-nullable** in responses from the new API version.
* If you must keep a legacy endpoint, add **v2** endpoints that guarantee `via`.
3. Update OpenAPI spec:
* Document `via.reason` as enum string, including allowed values.
* Document `via.provenance.detector`, `rule_id`, `confidence`.
**Acceptance criteria:**
* [ ] OpenAPI / Swagger shows `via.reason` as a string enum + description.
* [ ] New clients can deserialize edges with `via` without custom hacks.
* [ ] Old clients remain unaffected (either keep old endpoint or allow them to ignore `via`).
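To see why the additive contract keeps clients safe, consider a tolerant consumer-side model; a sketch (TypeScript; the wire type names are illustrative) where `via` is optional on the client even though new-version servers always send it:

```typescript
// Tolerant client-side model: `via` is optional so a v1 payload
// still parses; a v2-aware client reads it when present.
interface EdgeViaWire {
  reason: string;
  evidence: string[];
  provenance: { detector: string; rule_id: string; confidence: string };
}

interface GraphEdgeWire {
  from: string;
  to: string;
  via?: EdgeViaWire; // additive field: absent in v1, guaranteed in v2
}

const payload = JSON.stringify({
  from: "app",
  to: "urllib3",
  via: {
    reason: "declared_dependency",
    evidence: ["manifest:requirements.txt"],
    provenance: {
      detector: "StellaOps.Scanner.Sbom@1.0.0",
      rule_id: "DEP-LOCK-001",
      confidence: "high",
    },
  },
});

const edge: GraphEdgeWire = JSON.parse(payload);
```

Because the field is purely additive, old clients that deserialize into the narrower shape simply ignore `via`.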
---
## 2. Producers: add reasons & evidence where edges are created
You likely have 3 main edge producers:
* SBOM / manifest / lockfile analyzers
* Source analyzers (call graph, taint analysis)
* Binary analyzers (ELF/PE/MachO, containers)
Treat each as a mini-project with identical patterns.
---
### 2.1 SBOM / manifest edges
**Owner:** SBOM / dep graph team
**Tasks:**
1. Identify all code paths that create “declared dependency” edges:
* Manifest → Package
* Root module → Imported package (if you store these explicitly)
2. Replace plain edge construction with factory calls:
```csharp
public static class EdgeFactory
{
private const string DetectorName = "StellaOps.Scanner.Sbom@1.0.0";
public static GraphEdge DeclaredDependency(
string from,
string to,
string manifestPath,
string? dependencySpecLine
)
{
var evidence = new List<string>
{
$"manifest:{manifestPath}"
};
if (!string.IsNullOrWhiteSpace(dependencySpecLine))
evidence.Add($"spec:{dependencySpecLine}");
var via = EdgeViaFactory.Create(
EdgeReason.DeclaredDependency,
evidence,
DetectorName,
"DEP-LOCK-001",
EdgeConfidence.High
);
return new GraphEdge(new EdgeId(from, to), via);
}
}
```
3. Make sure each SBOM/manifest edge sets:
* `reason = declared_dependency`
* `confidence = high`
* Evidence includes at least `manifest:<path>` and, if possible, line or spec snippet.
**Acceptance criteria:**
* [ ] Any SBOM-generated edge returns with `via.reason == declared_dependency`.
* [ ] Evidence contains manifest path for ≥ 99% of SBOM edges.
* [ ] Unit tests cover at least: normal manifest, multiple manifests, malformed manifest.
---
### 2.2 Source code call graph edges
**Owner:** Static analysis / call graph team
**Tasks:**
1. Map current edge types → reasons:
* Direct function/method calls → `static_call`
* Reflection (Java/C#) → `reflection_call`
* Dynamic imports (`__import__`, `importlib`, `require(...)`) → `dynamic_import`
* Plugin systems (entry points, ServiceLoader, MEF) → `plugin_discovery`
* Taint / dataflow edges (user input → sink) → `taint_propagation`
2. Implement helper factories:
```csharp
public static class SourceEdgeFactory
{
private const string DetectorName = "StellaOps.Scanner.Source@1.0.0";
public static GraphEdge StaticCall(
string fromSymbol,
string toSymbol,
string filePath,
int lineNumber
)
{
var evidence = new[]
{
$"callsite:{filePath}:{lineNumber}"
};
var via = EdgeViaFactory.Create(
EdgeReason.StaticCall,
evidence,
DetectorName,
"SRC-CALL-001",
EdgeConfidence.High
);
return new GraphEdge(new EdgeId(fromSymbol, toSymbol), via);
}
public static GraphEdge DynamicImport(
string fromSymbol,
string toSymbol,
string filePath,
int lineNumber
)
{
var via = EdgeViaFactory.Create(
EdgeReason.DynamicImport,
new[] { $"importsite:{filePath}:{lineNumber}" },
DetectorName,
"SRC-DYNIMPORT-001",
EdgeConfidence.Medium
);
return new GraphEdge(new EdgeId(fromSymbol, toSymbol), via);
}
// Similar for ReflectionCall, PluginDiscovery, TaintPropagation...
}
```
3. Replace all direct `new GraphEdge(...)` calls in source analyzers with these factories.
**Acceptance criteria:**
* [ ] Direct call edges produce `reason = static_call` with file:line evidence.
* [ ] Reflection/dynamic import edges use correct reasons and mark `confidence = medium` (or high where you're certain).
* [ ] Unit tests check that for a known source file, the resulting edges contain expected `reason`, `evidence`, and `rule_id`.
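The mapping in step 1 can also live in one data-driven table, which keeps each reason and its default confidence in a single reviewable place. A sketch (TypeScript; the edge-kind names are assumptions, not your analyzer's actual identifiers):

```typescript
// Illustrative rubric: analyzer edge kind -> (reason, default confidence).
type Confidence = "high" | "medium" | "low";

const SOURCE_EDGE_RUBRIC: Record<string, { reason: string; confidence: Confidence }> = {
  directCall:      { reason: "static_call",       confidence: "high" },
  reflection:      { reason: "reflection_call",   confidence: "medium" },
  dynamicImport:   { reason: "dynamic_import",    confidence: "medium" },
  pluginDiscovery: { reason: "plugin_discovery",  confidence: "medium" },
  taintFlow:       { reason: "taint_propagation", confidence: "medium" },
};

function classify(kind: string): { reason: string; confidence: Confidence } {
  // Anything the rubric doesn't know falls back to unknown/low.
  return SOURCE_EDGE_RUBRIC[kind] ?? { reason: "unknown", confidence: "low" };
}
```

A table like this makes the confidence rubric auditable and easy to adjust without touching each factory.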
---
### 2.3 Binary / container analyzers
**Owner:** Binary analysis / SCA team
**Tasks:**
1. Map binary features to reasons:
* Symbol relocations + PLT/GOT edges → `symbol_relocation` or `plt_got_resolution`
* LD_PRELOAD or injection edges → `ld_preload_injection`
2. Implement factory:
```csharp
public static class BinaryEdgeFactory
{
private const string DetectorName = "StellaOps.Scanner.Binary@1.0.0";
public static GraphEdge SymbolRelocation(
string fromSymbol,
string toSymbol,
string binaryPath,
string section,
string relocationName
)
{
var evidence = new[]
{
$"{binaryPath}::{section}:{relocationName}"
};
var via = EdgeViaFactory.Create(
EdgeReason.SymbolRelocation,
evidence,
DetectorName,
"BIN-RELOC-101",
EdgeConfidence.High
);
return new GraphEdge(new EdgeId(fromSymbol, toSymbol), via);
}
}
```
3. Wire up all binary edge creation to use this.
**Acceptance criteria:**
* [ ] For a test binary with a known relocation, edges include `reason = symbol_relocation` and section/symbol in evidence.
* [ ] No binary edge is created without `via`.
---
## 3. Storage & migrations
This depends on your backing store, but the pattern is similar.
### 3.1 Relational (SQL) example
**Owner:** Data / infra team
**Tasks:**
1. Add columns:
```sql
ALTER TABLE graph_edges
ADD COLUMN via_reason VARCHAR(64) NOT NULL DEFAULT 'unknown',
ADD COLUMN via_evidence JSONB NOT NULL DEFAULT '[]'::jsonb,
ADD COLUMN via_detector VARCHAR(255) NOT NULL DEFAULT 'unknown',
ADD COLUMN via_rule_id VARCHAR(128) NOT NULL DEFAULT 'unknown',
ADD COLUMN via_confidence VARCHAR(16) NOT NULL DEFAULT 'low';
```
2. Update ORM model:
```csharp
public class EdgeEntity
{
public string From { get; set; } = default!;
public string To { get; set; } = default!;
public string ViaReason { get; set; } = "unknown";
public string[] ViaEvidence { get; set; } = Array.Empty<string>();
public string ViaDetector { get; set; } = "unknown";
public string ViaRuleId { get; set; } = "unknown";
public string ViaConfidence { get; set; } = "low";
}
```
3. Add mapping to domain `GraphEdge`:
```csharp
public static GraphEdge ToDomain(this EdgeEntity e)
{
var via = new EdgeVia(
Reason: Enum.TryParse<EdgeReason>(e.ViaReason, true, out var r) ? r : EdgeReason.Unknown,
Evidence: e.ViaEvidence,
Provenance: new EdgeProvenance(
Detector: e.ViaDetector,
RuleId: e.ViaRuleId,
Confidence: Enum.TryParse<EdgeConfidence>(e.ViaConfidence, true, out var c) ? c : EdgeConfidence.Low
)
);
return new GraphEdge(new EdgeId(e.From, e.To), via);
}
```
4. **Backfill existing data** (optional but recommended):
* For edges with a known “type” column, map to a best-fit `reason`.
* If you can't infer: set `reason = unknown`, `confidence = low`, `detector = "backfill@<version>"`.
**Acceptance criteria:**
* [ ] DB migration runs cleanly in staging and prod.
* [ ] No existing reader breaks: default values keep queries functioning.
* [ ] Edge round-trip (domain → DB → API JSON) retains `via` fields correctly.
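The `Enum.TryParse(..., out var r) ? r : EdgeReason.Unknown` fallback in the mapping above is what keeps backfilled rows from breaking deserialization. The same tolerance, sketched standalone (TypeScript; the reason list is taken from this document):

```typescript
// Tolerant parse: case-insensitive match against known reasons,
// anything unrecognized collapses to "unknown" instead of throwing.
const KNOWN_REASONS = [
  "declared_dependency", "static_call", "dynamic_import",
  "reflection_call", "plugin_discovery", "taint_propagation",
  "symbol_relocation", "plt_got_resolution", "ld_preload_injection",
  "unknown",
] as const;

function parseReason(raw: string): string {
  const norm = raw.trim().toLowerCase();
  return (KNOWN_REASONS as readonly string[]).includes(norm) ? norm : "unknown";
}
```

This is the property the migration defaults rely on: a row written before the migration, or by an older detector, always round-trips to a valid domain value.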
---
## 4. API & service layer
**Owner:** API / service team
**Tasks:**
1. Wire domain model → DTOs:
```csharp
public static GraphEdgeDto ToDto(this GraphEdge edge)
{
return new GraphEdgeDto
{
From = edge.Id.From,
To = edge.Id.To,
Via = new EdgeViaDto
{
Reason = edge.Via.Reason.ToString().ToSnakeCaseLower(), // e.g. "static_call"; ToSnakeCaseLower is a custom extension, not a BCL method
Evidence = edge.Via.Evidence.ToArray(),
Provenance = new EdgeProvenanceDto
{
Detector = edge.Via.Provenance.Detector,
RuleId = edge.Via.Provenance.RuleId,
Confidence = edge.Via.Provenance.Confidence.ToString().ToLowerInvariant()
}
}
};
}
```
2. If you accept edges via API (internal services), validate:
* `reason` must be one of the known values; otherwise reject or coerce to `unknown`.
* `evidence` length ≤ 3.
* Trim whitespace and limit each evidence string length (e.g. 256 chars).
3. Versioning:
* Introduce `/v2/graph/paths` (or similar) that guarantees `via`.
* Keep `/v1/...` unchanged or mark deprecated.
**Acceptance criteria:**
* [ ] Path API returns `via.reason` and `via.evidence` for all edges in new endpoints.
* [ ] Invalid reason strings are rejected or converted to `unknown` with a log.
* [ ] Integration tests cover full flow: repo → scanner → DB → API → JSON.
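The validation rules in step 2 compose into one small sanitizer. A sketch (TypeScript; the reason whitelist here is truncated for brevity and the limits are the ones suggested above):

```typescript
// Ingest-side checks: coerce unrecognized reasons to "unknown",
// trim and length-limit each evidence string, cap the list at three.
const MAX_EVIDENCE = 3;
const MAX_EVIDENCE_LEN = 256;
const KNOWN_REASONS = new Set(["declared_dependency", "static_call", "dynamic_import"]);

function sanitizeVia(reason: string, evidence: string[]) {
  const ev = evidence
    .map((s) => s.trim().slice(0, MAX_EVIDENCE_LEN))
    .filter((s) => s.length > 0)
    .slice(0, MAX_EVIDENCE);
  return {
    reason: KNOWN_REASONS.has(reason) ? reason : "unknown", // coerce, then log
    evidence: ev,
  };
}
```

Whether you coerce or reject is a policy choice; coercion keeps internal producers from failing hard while the log surfaces the bad reason string.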
---
## 5. UI: make paths auditorfriendly
**Owner:** Frontend team
**Tasks:**
1. **Path details UI**:
For each edge in the vulnerability path table:
* Show a **“Reason” column** with a small pill:
* `static_call` → “Static call”
* `declared_dependency` → “Declared dependency”
* etc.
* Below or on hover, show **primary evidence** (first evidence string).
2. **Edge details panel** (drawer/modal):
When user clicks an edge:
* Show:
* From → To (symbols/packages)
* Reason (with friendly description per enum)
* Evidence list (each on its own line)
* Detector, rule id, confidence
3. **Filtering & sorting (optional but powerful)**:
* Filter edges by `reason` (multi-select).
* Filter by `confidence` (e.g. show only high/medium).
* This helps auditors quickly isolate more speculative edges.
4. **UX text / glossary**:
* Add a small “?” tooltip that links to a glossary explaining each reason type in human language.
**Acceptance criteria:**
* [ ] For a given vulnerability, the path view shows a “Reason” column per edge.
* [ ] Clicking an edge reveals all evidence and provenance information.
* [ ] UX has a glossary/tooltip explaining what each reason means in plain English.
---
## 6. Testing strategy
**Owner:** QA + each feature team
### 6.1 Unit tests
* **Factories**: verify correct mapping from input to `EdgeVia`:
* Reason set correctly.
* Evidence trimmed, max 3.
* Confidence matches rubric (high for relocations, medium for heuristic imports, etc.).
* **Serialization**: `EdgeVia` → JSON and back.
### 6.2 Integration tests
Set up **small fixtures**:
1. **Simple dependency project**:
* Example: Python project with `requirements.txt` → `requests` → `urllib3`.
* Expected edges:
* App → requests: `declared_dependency`, evidence includes `requirements.txt`.
* requests → urllib3: `declared_dependency`, plus static call edges.
2. **Dynamic import case**:
* A module using `importlib.import_module("mod")`.
* Ensure edge is `dynamic_import` with `confidence = medium`.
3. **Binary edge case**:
* Test ELF with known symbol relocation.
* Ensure an edge with `reason = symbol_relocation` exists.
### 6.3 End-to-end tests
* Run full scan on a sample repo and:
* Hit path API.
* Assert every edge has non-null `via` fields.
* Spot check a few known edges for exact `reason` and evidence.
**Acceptance criteria:**
* [ ] Automated tests fail if any edge is emitted without `via`.
* [ ] Coverage includes at least one example for each `EdgeReason` you support.
---
## 7. Observability, guardrails & rollout
### 7.1 Metrics & logging
**Owner:** Observability / platform
**Tasks:**
* Emit metrics:
* `% edges with reason != unknown`
* Count by `reason` and `confidence`
* Log warnings when:
* Edge is emitted with `reason = unknown`.
* Evidence is empty for a non-unknown reason.
**Acceptance criteria:**
* [ ] Dashboards showing distribution of edge reasons over time.
* [ ] Alerts if `unknown` reason edges exceed a threshold (e.g. >5%).
---
### 7.2 Rollout plan
**Owner:** PM + tech leads
**Steps:**
1. **Phase 1 – Dark-launch metadata:**
* Start generating & storing `via` for new scans.
* Keep UI unchanged.
* Monitor metrics, unknown ratio, and storage overhead.
2. **Phase 2 – Enable for internal users:**
* Toggle UI on (feature flag for internal / beta users).
* Collect feedback from security engineers and auditors.
3. **Phase 3 – General availability:**
* Enable UI for all.
* Update customer-facing documentation & audit guides.
---
### 7.3 Documentation
**Owner:** Docs / PM
* Short **“Why this edge exists”** section in:
* Product docs (for customers).
* Internal runbooks (for support & SEs).
* Include:
* Table of reasons → human descriptions.
* Examples of path explanations (e.g., “This edge exists because `app` declares `urllib3` in `requirements.txt` and calls it in `client.py:42`”).
---
## 8. Ready-to-use ticket breakdown
You can almost copy-paste these into your tracker:
1. **Shared**: Define EdgeReason, EdgeVia & EdgeProvenance in shared library, plus EdgeViaFactory.
2. **SBOM**: Use EdgeFactory.DeclaredDependency for all manifest-generated edges.
3. **Source**: Wire all call-graph edges to SourceEdgeFactory (static_call, dynamic_import, reflection_call, plugin_discovery, taint_propagation).
4. **Binary**: Wire relocations/PLT/GOT edges to BinaryEdgeFactory (symbol_relocation, plt_got_resolution, ld_preload_injection).
5. **Data**: Add via_* columns/properties to graph_edges storage and map to/from domain.
6. **API**: Extend graph path DTOs to include `via`, update OpenAPI, and implement /v2 endpoints if needed.
7. **UI**: Show edge reason, evidence, and provenance in vulnerability path screens and add filters.
8. **Testing**: Add unit, integration, and end-to-end tests ensuring every edge has non-null `via`.
9. **Observability**: Add metrics and logs for edge reasons and unknown rates.
10. **Docs & rollout**: Write glossary + auditor docs and plan staged rollout.
---
If you tell me a bit about your current storage (e.g., Neo4j vs SQL) and the service names, I can tailor this into an even more literal set of code snippets and migrations to match your stack exactly.
---
Here's a crisp, ready-to-ship concept you can drop into StellaOps: an **Unknowns Registry** that captures ambiguous scanner artifacts (stripped binaries, unverifiable packages, orphaned PURLs, missing digests) and treats them as first-class citizens with probabilistic severity and trust-decay—so you stay transparent without blocking delivery.
### What this solves (in plain terms)
* **No silent drops:** every “can't verify / can't resolve” is tracked, not discarded.
* **Quantified risk:** unknowns still roll into a portfolio-level risk number with confidence intervals.
* **Trust over time:** stale unknowns get *riskier* the longer they remain unresolved.
* **Client confidence:** visibility + trajectory (are unknowns shrinking?) becomes a maturity signal.
### Core data model (CycloneDX/SPDX compatible, attaches to your SBOM spine)
```yaml
UnknownArtifact:
id: urn:stella:unknowns:<uuid>
observedAt: <RFC3339>
origin:
source: scanner|ingest|runtime
feed: <name/version>
evidence: [ filePath, containerDigest, buildId, sectionHints ]
identifiers:
purl?: <string> # orphan/incomplete PURL allowed
hash?: <sha256|null> # missing digest allowed
cpe?: <string|null>
classification:
type: binary|library|package|script|config|other
reason: stripped_binary|missing_signature|no_feed_match|ambiguous_name|checksum_mismatch|other
metrics:
baseUnkScore: 0..1
confidence: 0..1 # model confidence in the *score*
trust: 0..1 # provenance trust (sig/attest, feed quality)
decayPolicyId: <ref>
resolution:
status: unresolved|suppressed|mitigated|confirmed-benign|confirmed-risk
updatedAt: <RFC3339>
notes: <text>
links:
scanId: <ref>
componentId?: <ref to SBOM component if later mapped>
attestations?: [ dsse, in-toto, rekorRef ]
```
### Scoring (simple, explainable, deterministic)
* **Unknown Risk (UR):**
`UR_t = clamp( (B * (1 + A)) * D_t * (1 - T) , 0, 1 )`
* `B` = `baseUnkScore` (heuristics: file entropy, section hints, ELF flags, import tables, size, location)
* `A` = **Environment Amplifier** (runtime proximity: container entrypoint? PID namespace? network caps?)
* `T` = **Trust** (sig/attest/registry reputation/feed pedigree normalized to 0..1)
* `D_t` = **Trust-decay multiplier** over time `t`:
* Linear: `D_t = 1 + k * daysOpen` (e.g., `k = 0.01`)
* or Exponential: `D_t = e^(λ * daysOpen)` (e.g., `λ = 0.005`)
* **Portfolio rollup:** use **P90 of UR_t** across images + **sum of top-N UR_t** to avoid dilution.
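Plugging in the numbers used in the explainability example later in this doc (B = 0.7, A = 0.2, T = 0.1, linear decay with k = 0.01 at day 17) shows how the factors combine:

```typescript
// UR_t = clamp((B * (1 + A)) * D_t * (1 - T), 0, 1), linear decay.
const B = 0.7, A = 0.2, T = 0.1;
const k = 0.01, cap = 2.0, daysOpen = 17;

const D = Math.min(1 + k * daysOpen, cap);                       // 1.17
const UR = Math.min(Math.max(B * (1 + A) * D * (1 - T), 0), 1);  // ≈ 0.885
```

At roughly 0.885 this artifact is already above a 0.8 promotion gate, driven mostly by its low trust and 17 days of decay.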
### Policies & SLOs
* **SLO:** *Unknowns burn-down* ≤ X% week-over-week; *median age* ≤ Y days.
* **Gates:** block promotion when (a) any `UR_t ≥ 0.8`, or (b) more than `M` unknowns with age > `Z` days.
* **Suppressions:** require justification + expiry; suppression reduces `A` but does **not** zero `D_t`.
### Trust-decay policies (pluggable)
```yaml
DecayPolicy:
id: decay:default:v1
kind: linear|exponential|custom
params:
k: 0.01 # linear slope per day
cap: 2.0 # max multiplier
```
### Scanner hooks (where to emit Unknowns)
* **Binary scan:** stripped ELF/MachO/PE; missing buildID; abnormal sections; impossible symbol map.
* **Package map:** PURL inferred from path without registry proof; mismatched checksum; vendor fork detected.
* **Attestation:** DSSE missing / invalid; Sigstore chain unverifiable; Rekor entry not found.
* **Feeds:** component seen in runtime but absent from SBOM (or vice versa).
### Deterministic generation (for replay/audits)
* Include **Unknowns** in the **Scan Manifest** (your deterministic bundle): inputs, ruleset hash, feed hashes, lattice policy version, and the exact classifier thresholds that produced `B`, `A`, `T`. That lets you replay and reproduce UR_t byte-for-byte during audits.
### API surface (StellaOps.Authority)
```
POST /unknowns/ingest # bulk ingest from Scanner/Vexer
GET /unknowns?imageDigest=… # list + filters (status, age, UR buckets)
PATCH /unknowns/{id}/resolve # set status, add evidence, set suppression (with expiry)
GET /unknowns/stats # burn-downs, age histograms, P90 UR_t, top-N contributors
```
### UI slices (Trust Algebra Studio)
* **Risk ribbon:** Unknowns count, P90 UR_t, median age, trend sparkline.
* **Aging board:** columns by age buckets; cards show reason, UR_t, `T`, decay policy, evidence.
* **What-if slider:** adjust `k`/`λ` and see the retroactive effect on release readiness (deterministic preview).
* **Explainability panel:** show `B`, `A`, `T`, `D_t` factors with succinct evidence (e.g., “ELF stripped; no .symtab; no Sigstore; runtime hits PID 1 → A=0.2; trust=0.1; day 17 → D=1.17”).
### Heuristics for `baseUnkScore (B)` (portable across ELF/PE/MachO)
* Section/segment anomalies; entropy outliers; import tables linking to risky APIs; executable heap/stack flags.
* Location & role (PATH proximity to entrypoint, init scripts).
* Size/type priors (tiny droppers vs fat libs).
* Known-unknown patterns (packer signatures, `UPX` traces without an unpack attestation).
### Governance & reporting
* **Weekly Unknowns Report:** totals, burn-down, median age, “top unresolved origins” (by registry / base image), and “time-to-first-evidence.”
* **Vendor scorecards:** if a supplier causes recurring unknowns, reflect it in **Adaptive Trust Economics**.
### Minimal implementation slice (2 sprints)
1. **Sprint A (backend + ingest):** model + tables, ingest endpoint, deterministic manifest hook, basic scoring (linear decay), list/query API, rollup metrics.
2. **Sprint B (UI + policy):** Unknowns board, what-if slider, suppress/resolve workflow w/ expiry, weekly report, policy gates in CI.
---
If you want, I can generate:
* the exact **PostgreSQL schema + indices** (including age/UR materialized views),
* **.NET 10** DTOs, handlers, and a **deterministic scoring library** with unit tests,
* a **Grafana/ClickHouse** dashboard spec for burndown and P90 tracking,
* and a one-page client-facing explainer your sales team can use.
Perfect, let's turn the concept into a concrete, dev-ready implementation plan.
I'll assume:
* You already have:
* A scanner pipeline (binaries, SBOMs, attestations)
* A backend service (StellaOps.Authority)
* A UI (Trust Algebra Studio)
* Observability (OpenTelemetry, ClickHouse/Presto)
You can adapt naming and tech stack as needed.
---
## 0. Scope & success criteria
**Goals**
1. Persist all “unknown-ish” scanner findings (stripped binaries, unverifiable PURLs, missing digests, etc.) as first-class entities.
2. Compute a deterministic **Unknown Risk (UR)** per artifact and roll it up per image/application.
3. Apply **trust-decay** over time and expose burn-down metrics.
4. Provide UI workflows to triage, suppress, and resolve unknowns.
5. Enforce release gates based on unknown risk and age.
**Non-goals (for v1)**
* No full ML; use deterministic heuristics + tunable weights.
* No cross-org multi-tenant policy — single org/single policy set.
* No per-developer responsibility/assignment yet (can add later).
---
## 1. Architecture & components
### 1.1 New/updated components
1. **Unknowns Registry (backend submodule)**
* Lives in your existing backend (e.g., `StellaOps.Authority.Unknowns`).
* Owns DB schema, scoring logic, and API.
2. **Scanner integration**
* Extend `StellaOps.Scanner` (and/or `Vexer`) to emit “unknown” findings into the registry via HTTP or message bus.
3. **UI: Unknowns in Trust Algebra Studio**
* New section/tab: “Unknowns” under each image/app.
* Global “Unknowns board” for portfolio view.
4. **Analytics & jobs**
* Periodic job to recompute trust-decay & UR.
* Weekly report generator (e.g., pushing into ClickHouse, Slack, or email).
---
## 2. Data model (DB schema)
Use a relational DB; here's a concrete schema you can translate into migrations.
### 2.1 Tables
#### `unknown_artifacts`
Represents the current state of each unknown.
* `id` (UUID, PK)
* `created_at` (timestamp)
* `updated_at` (timestamp)
* `first_observed_at` (timestamp, NOT NULL)
* `last_observed_at` (timestamp, NOT NULL)
* `origin_source` (enum: `scanner`, `runtime`, `ingest`)
* `origin_feed` (text), e.g. `binary-scanner@1.4.3`
* `origin_scan_id` (UUID / text), foreign key to `scan_runs` if you have it
* `image_digest` (text, indexed), ties the unknown to a container/image
* `component_id` (UUID, nullable), SBOM component when later mapped
* `file_path` (text, nullable)
* `build_id` (text, nullable), ELF/Mach-O/PE build ID if any
* `purl` (text, nullable)
* `hash_sha256` (text, nullable)
* `cpe` (text, nullable)
* `classification_type` (enum: `binary`, `library`, `package`, `script`, `config`, `other`)
* `classification_reason` (enum:
`stripped_binary`, `missing_signature`, `no_feed_match`,
`ambiguous_name`, `checksum_mismatch`, `other`)
* `status` (enum:
`unresolved`, `suppressed`, `mitigated`, `confirmed_benign`, `confirmed_risk`)
* `status_changed_at` (timestamp)
* `status_changed_by` (text / user-id)
* `notes` (text)
* `decay_policy_id` (FK → `decay_policies`)
* `base_unk_score` (double, 0..1)
* `env_amplifier` (double, 0..1)
* `trust` (double, 0..1)
* `current_decay_multiplier` (double)
* `current_ur` (double, 0..1), Unknown Risk at last recompute
* `current_confidence` (double, 0..1), confidence in `current_ur`
* `is_deleted` (bool), soft delete
**Indexes**
* `idx_unknown_artifacts_image_digest_status`
* `idx_unknown_artifacts_status_created_at`
* `idx_unknown_artifacts_current_ur`
* `idx_unknown_artifacts_last_observed_at`
#### `unknown_artifact_events`
Append-only event log for auditable changes.
* `id` (UUID, PK)
* `unknown_artifact_id` (FK → `unknown_artifacts`)
* `created_at` (timestamp)
* `actor` (text / user-id / system)
* `event_type` (enum:
`created`, `reobserved`, `status_changed`, `note_added`,
`metrics_recomputed`, `linked_component`, `suppression_applied`, `suppression_expired`)
* `payload` (JSONB), diff or event-specific details
Index: `idx_unknown_artifact_events_artifact_id_created_at`
#### `decay_policies`
Defines how trust-decay works.
* `id` (text, PK), e.g. `decay:default:v1`
* `kind` (enum: `linear`, `exponential`)
* `param_k` (double, nullable), slope for linear decay
* `param_lambda` (double, nullable), rate for exponential decay
* `cap` (double, default 2.0)
* `description` (text)
* `is_default` (bool)
#### `unknown_suppressions`
Optional; can also reuse `unknown_artifacts.status` but separate table lets you have multiple suppressions over time.
* `id` (UUID, PK)
* `unknown_artifact_id` (FK)
* `created_at` (timestamp)
* `created_by` (text)
* `reason` (text)
* `expires_at` (timestamp, nullable)
* `active` (bool)
Index: `idx_unknown_suppressions_artifact_active_expires_at`
#### `unknown_image_rollups`
Precomputed rollups per image (for fast dashboards/gates).
* `id` (UUID, PK)
* `image_digest` (text, indexed)
* `computed_at` (timestamp)
* `unknown_count_total` (int)
* `unknown_count_unresolved` (int)
* `unknown_count_high_ur` (int), e.g. UR ≥ 0.8
* `p50_ur` (double)
* `p90_ur` (double)
* `top_n_ur_sum` (double)
* `median_age_days` (double)
---
## 3. Scoring engine implementation
Create a small, deterministic scoring library so the same code can be used in:
* Backend ingest path (for immediate UR)
* Batch recompute job
* “What-if” UI simulations (optionally via a stateless API)
### 3.1 Data types
Define a core model, e.g.:
```ts
type UnknownMetricsInput = {
baseUnkScore: number; // B
envAmplifier: number; // A
trust: number; // T
daysOpen: number; // t
decayPolicy: {
kind: "linear" | "exponential";
k?: number;
lambda?: number;
cap: number;
};
};
type UnknownMetricsOutput = {
decayMultiplier: number; // D_t
unknownRisk: number; // UR_t
};
```
### 3.2 Algorithm
```ts
function computeDecayMultiplier(
daysOpen: number,
policy: DecayPolicy
): number {
if (policy.kind === "linear") {
const raw = 1 + (policy.k ?? 0) * daysOpen;
return Math.min(raw, policy.cap);
}
if (policy.kind === "exponential") {
const lambda = policy.lambda ?? 0;
const raw = Math.exp(lambda * daysOpen);
return Math.min(raw, policy.cap);
}
return 1;
}
function computeUnknownRisk(input: UnknownMetricsInput): UnknownMetricsOutput {
const { baseUnkScore: B, envAmplifier: A, trust: T, daysOpen, decayPolicy } = input;
const D_t = computeDecayMultiplier(daysOpen, decayPolicy);
const raw = (B * (1 + A)) * D_t * (1 - T);
const unknownRisk = Math.max(0, Math.min(raw, 1)); // clamp 0..1
return { decayMultiplier: D_t, unknownRisk };
}
```
### 3.3 Heuristics for `B`, `A`, `T`
Implement these as pure functions with configuration-driven weights:
* `B` (base unknown score):
* Start from prior: by `classification_type` (binary > library > config).
* Adjust up for:
* Stripped binary (no symbols, high entropy)
* Suspicious segments (executable stack/heap)
* Known packer signatures (UPX, etc.)
* Adjust down for:
* Large, well-known dependency path (`/usr/lib/...`)
* Known safe signatures (if partially known).
* `A` (environment amplifier):
* +0.2 if artifact is part of container entrypoint (PID 1).
* +0.1 if file is in a PATH dir (e.g., `/usr/local/bin`).
* +0.1 if the runtime has network capabilities/capabilities flags.
* Cap at 0.5 for v1.
* `T` (trust):
* Start at 0.5.
* +0.3 if registry/signature/attestation chain verified.
* +0.1 if source registry is “trusted vendor list”.
* -0.3 if checksum mismatch or feed conflict.
* Clamp 0..1.
Store the raw factors (`B`, `A`, `T`) on the artifact for transparency and later replays.
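The `T` heuristic above, written as a pure function (TypeScript; the weights are the ones listed and would come from configuration in practice, not hard-coding):

```typescript
// Trust heuristic: start at 0.5, adjust by the rubric above, clamp 0..1.
function computeTrust(signals: {
  attestationVerified: boolean; // sig/attestation chain verified
  trustedVendor: boolean;       // source registry on trusted vendor list
  checksumMismatch: boolean;    // checksum mismatch or feed conflict
}): number {
  let t = 0.5;
  if (signals.attestationVerified) t += 0.3;
  if (signals.trustedVendor) t += 0.1;
  if (signals.checksumMismatch) t -= 0.3;
  return Math.min(Math.max(t, 0), 1);
}
```

Keeping it pure makes the factor trivially replayable for the deterministic scan manifest.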
---
## 4. Scanner integration
### 4.1 Emission format (from scanner → backend)
Define a minimal ingestion contract (JSON over HTTP or a message):
```jsonc
{
"scanId": "urn:scan:1234",
"imageDigest": "sha256:abc123...",
"observedAt": "2025-11-27T12:34:56Z",
"unknowns": [
{
"externalId": "scanner-unique-id-1",
"originSource": "scanner",
"originFeed": "binary-scanner@1.4.3",
"filePath": "/usr/local/bin/stripped",
"buildId": null,
"purl": null,
"hashSha256": "aa...",
"cpe": null,
"classificationType": "binary",
"classificationReason": "stripped_binary",
"rawSignals": {
"entropy": 7.4,
"hasSymbols": false,
"isEntrypoint": true,
"inPathDir": true
}
}
]
}
```
The backend maps `rawSignals` → `B`, `A`, `T`.
### 4.2 Idempotency
* Define a uniqueness key on `(image_digest, file_path, hash_sha256)` for v1.
* On ingest:
* If an artifact exists:
* Update `last_observed_at`.
* Recompute age (`now - first_observed_at`) and UR.
* Add `reobserved` event.
* If not:
* Insert new row with `first_observed_at = observedAt`.
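The uniqueness key can be a single normalized string so the ingest path, dedup queries, and tests all agree on identity. A sketch (`unknownKey` is an illustrative name; nulls are normalized so “no hash” still dedupes consistently):

```typescript
// v1 identity: (image_digest, file_path, hash_sha256), nulls normalized.
function unknownKey(
  imageDigest: string,
  filePath: string | null,
  hashSha256: string | null
): string {
  return [imageDigest, filePath ?? "-", hashSha256 ?? "-"].join("|");
}
```

In the DB this corresponds to a unique index over the three columns (with a fixed sentinel or `COALESCE` for the nullable ones).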
### 4.3 HTTP endpoint
`POST /internal/unknowns/ingest`
* Auth: internal service token.
* Returns a per-unknown mapping to internal `id` and computed UR.
Error handling:
* If invalid payload → 400 with list of errors.
* Partial failure: process valid unknowns, return `failedUnknowns` array with reasons.
---
## 5. Backend API for UI & CI
### 5.1 List unknowns
`GET /unknowns`
Query params:
* `imageDigest` (optional)
* `status` (optional multi: unresolved, suppressed, etc.)
* `minUr`, `maxUr` (optional)
* `maxAgeDays` (optional)
* `page`, `pageSize`
Response:
```jsonc
{
"items": [
{
"id": "urn:stella:unknowns:uuid",
"imageDigest": "sha256:...",
"filePath": "/usr/local/bin/stripped",
"classificationType": "binary",
"classificationReason": "stripped_binary",
"status": "unresolved",
"firstObservedAt": "...",
"lastObservedAt": "...",
"ageDays": 17,
"baseUnkScore": 0.7,
"envAmplifier": 0.2,
"trust": 0.1,
"decayPolicyId": "decay:default:v1",
"decayMultiplier": 1.17,
"currentUr": 0.84,
"currentConfidence": 0.8
}
],
"total": 123
}
```
### 5.2 Get single unknown + event history
`GET /unknowns/{id}`
Include:
* The artifact.
* Latest metrics.
* Recent events (with pagination).
### 5.3 Update status / suppression
`PATCH /unknowns/{id}`
Body options:
```jsonc
{
"status": "suppressed",
"notes": "Reviewed; internal diagnostics binary.",
"suppression": {
"expiresAt": "2025-12-31T00:00:00Z"
}
}
```
Backend:
* Validates transition (cannot unsuppress to “unresolved” without event).
* Writes to `unknown_suppressions`.
* Writes `status_changed` + `suppression_applied` events.
### 5.4 Image rollups
`GET /images/{imageDigest}/unknowns/summary`
Response:
```jsonc
{
"imageDigest": "sha256:...",
"computedAt": "...",
"unknownCountTotal": 40,
"unknownCountUnresolved": 30,
"unknownCountHighUr": 4,
"p50Ur": 0.35,
"p90Ur": 0.82,
"topNUrSum": 2.4,
"medianAgeDays": 9
}
```
This is what CI and UI will mostly query.
---
## 6. Trust-decay job & rollup computation
### 6.1 Periodic recompute job
Schedule (e.g., every hour):
1. Fetch `unknown_artifacts` where:
* `status IN ('unresolved', 'suppressed', 'mitigated')`
* `last_observed_at >= now() - interval '90 days'` (tunable)
2. Compute `daysOpen = now() - first_observed_at`.
3. Compute `D_t` and `UR_t` with scoring library.
4. Update `unknown_artifacts.current_ur`, `current_decay_multiplier`.
5. Append `metrics_recomputed` event (batch size threshold, e.g., only when UR changed > 0.01).
### 6.2 Rollup job
Every X minutes:
1. For each `image_digest` with active unknowns:
* Compute:
* `unknown_count_total`
* `unknown_count_unresolved` (`status = unresolved`)
* `unknown_count_high_ur` (UR ≥ threshold)
* `p50` / `p90` UR (use DB percentile or compute in app)
* `top_n_ur_sum` (sum of top 5 UR)
* `median_age_days`
2. Upsert into `unknown_image_rollups`.
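If your DB lacks percentile functions, the rollup math is simple enough to compute in the app. A sketch (TypeScript, nearest-rank percentiles; function names are illustrative):

```typescript
// Nearest-rank percentile over a pre-sorted ascending array.
function percentile(sorted: number[], p: number): number {
  if (sorted.length === 0) return 0;
  const idx = Math.min(sorted.length - 1, Math.ceil((p / 100) * sorted.length) - 1);
  return sorted[Math.max(0, idx)];
}

// One image's rollup: p50/p90 UR, top-N sum, high-UR count.
function rollup(urs: number[], topN = 5) {
  const sorted = [...urs].sort((a, b) => a - b);
  return {
    p50: percentile(sorted, 50),
    p90: percentile(sorted, 90),
    topNUrSum: sorted.slice(-topN).reduce((s, v) => s + v, 0),
    highUrCount: urs.filter((v) => v >= 0.8).length,
  };
}
```

The top-N sum is what keeps one very risky unknown visible even when dozens of low-UR unknowns would otherwise dilute the percentiles.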
---
## 7. CI / promotion gating
Expose a simple policy evaluation API for CI and deploy pipelines.
### 7.1 Policy definition (config)
Example YAML:
```yaml
unknownsPolicy:
blockIf:
- kind: "anyUrAboveThreshold"
threshold: 0.8
- kind: "countAboveAge"
maxCount: 5
ageDays: 14
warnIf:
- kind: "unknownCountAbove"
maxCount: 50
```
### 7.2 Policy evaluation endpoint
`GET /policy/unknowns/evaluate?imageDigest=sha256:...`
Response:
```jsonc
{
"imageDigest": "sha256:...",
"result": "block", // "ok" | "warn" | "block"
"reasons": [
{
"kind": "anyUrAboveThreshold",
"detail": "1 unknown with UR>=0.8 (max allowed: 0)"
}
],
"summary": {
"unknownCountUnresolved": 30,
"p90Ur": 0.82,
"medianAgeDays": 17
}
}
```
CI can decide to fail build/deploy based on `result`.
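The two `blockIf` rules from the YAML reduce to a few lines of evaluation logic. A sketch (TypeScript; `warn` handling and config loading omitted):

```typescript
type Unknown = { ur: number; ageDays: number };

// Block if any UR crosses the threshold, or if too many unknowns
// are older than the allowed age.
function evaluate(
  unknowns: Unknown[],
  policy = { urThreshold: 0.8, maxCount: 5, ageDays: 14 }
): "ok" | "block" {
  const anyHighUr = unknowns.some((u) => u.ur >= policy.urThreshold);
  const staleCount = unknowns.filter((u) => u.ageDays > policy.ageDays).length;
  return anyHighUr || staleCount > policy.maxCount ? "block" : "ok";
}
```

The endpoint would evaluate this against the image's current rollup and return the matched rule kinds as `reasons`.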
---
## 8. UI implementation (Trust Algebra Studio)
### 8.1 Image detail page: “Unknowns” tab
Components:
1. **Header metrics ribbon**
* Unknowns unresolved, p90 UR, median age, weekly trend sparkline.
* Fetch from `/images/{digest}/unknowns/summary`.
2. **Unknowns table**
* Columns:
* Status pill
* UR (with color + tooltip showing `B`, `A`, `T`, `D_t`)
* Classification type/reason
* File path
* Age
* Last observed
* Filters:
* Status, UR range, age range, reason, type.
3. **Row drawer / detail panel**
* Show:
* All core fields.
* Evidence:
* origin (scanner, feed, runtime)
* raw signals (entropy, sections, etc)
* SBOM component link (if any)
* Timeline (events list)
* Actions:
* Change status (unresolved → suppressed/mitigated/confirmed).
* Add note.
* Set/extend suppression expiry.
### 8.2 Global “Unknowns board”
Goals:
* Portfolio view; triage across many images.
Features:
* Filters by:
* Team/application/service
* Time range for first observed
* UR bucket (0–0.3, 0.3–0.6, 0.6–1)
* Cards/rows per image:
* Unknown counts, p90 UR, median age.
* Trend of unknown count (last N weeks).
* Click through to the image-detail tab.
### 8.3 “What-if” slider (optional v1.1)
On an image or org-level:
* Slider(s) to visualize effect of:
* `k` / `lambda` change (decay speed).
* Trust baseline changes (simulate better attestations).
* Implement by calling a stateless endpoint:
* `POST /unknowns/what-if` with:
* Current unknowns list IDs
* Proposed decay policy
* Returns recalculated URs and hypothetical gate result (but does **not** persist).
---
## 9. Observability & analytics
### 9.1 Metrics
Emit structured events/metrics (OpenTelemetry, etc.):
* Counters:
* `unknowns_ingested_total` (labels: `source`, `classification_type`, `reason`)
* `unknowns_resolved_total` (labels: `status`)
* Gauges:
* `unknowns_unresolved_count` per image/service.
* `unknowns_p90_ur` per image/service.
* `unknowns_median_age_days`.
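As a sketch, the counters and gauges above map onto `System.Diagnostics.Metrics` (which the OpenTelemetry .NET SDK can export); the meter and instrument names are assumptions, not an existing contract:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;

public sealed class UnknownsMetrics
{
    // Meter name is illustrative; register it with your OTel MeterProvider.
    static readonly Meter Meter = new("StellaOps.Unknowns");

    readonly Counter<long> _ingested =
        Meter.CreateCounter<long>("unknowns_ingested_total");
    readonly Counter<long> _resolved =
        Meter.CreateCounter<long>("unknowns_resolved_total");

    public void RecordIngested(string source, string classificationType, string reason) =>
        _ingested.Add(1,
            new KeyValuePair<string, object?>("source", source),
            new KeyValuePair<string, object?>("classification_type", classificationType),
            new KeyValuePair<string, object?>("reason", reason));

    public void RecordResolved(string status) =>
        _resolved.Add(1, new KeyValuePair<string, object?>("status", status));

    // Gauges are observed (pulled) from the latest rollup row rather than pushed.
    public void RegisterGauges(Func<double> unresolvedCount, Func<double> p90Ur)
    {
        Meter.CreateObservableGauge("unknowns_unresolved_count", unresolvedCount);
        Meter.CreateObservableGauge("unknowns_p90_ur", p90Ur);
    }
}
```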
### 9.2 Weekly report generator
Batch job:
1. Compute, per org or team:
* Total unknowns.
* New unknowns this week.
* Resolved unknowns this week.
* Median age.
* Top 10 images by:
* Highest p90 UR.
* Largest number of long-lived unknowns (> X days).
2. Persist into analytics store (ClickHouse) + push into:
* Slack channel / email with a short plaintext summary and link to UI.
---
## 10. Security & compliance
* Ensure all APIs require authentication & proper scopes:
* Scanner ingest: internal service token only.
* UI APIs: user identity + RBAC (e.g., team can only see their images).
* Audit log:
* `unknown_artifact_events` must be immutable and queryable by compliance teams.
* PII:
* Avoid storing user PII in notes; if necessary, apply redaction.
---
## 11. Suggested delivery plan (sprints/epics)
### Sprint 1 – Foundations & ingest path
* [ ] DB migrations: `unknown_artifacts`, `unknown_artifact_events`, `decay_policies`.
* [ ] Implement scoring library (`B`, `A`, `T`, `UR_t`, `D_t`).
* [ ] Implement `/internal/unknowns/ingest` endpoint with idempotency.
* [ ] Extend scanner to emit unknowns and integrate with ingest.
* [ ] Basic `GET /unknowns?imageDigest=...` API.
* [ ] Seed `decay:default:v1` policy.
**Exit criteria:** Unknowns created and UR computed from real scans; queryable via API.
---
### Sprint 2 – Decay, rollups, and CI hook
* [ ] Implement periodic job to recompute decay & UR.
* [ ] Implement rollup job + `unknown_image_rollups` table.
* [ ] Implement `GET /images/{digest}/unknowns/summary`.
* [ ] Implement policy evaluation endpoint for CI.
* [ ] Wire CI to block/warn based on policy.
**Exit criteria:** CI gate can fail a build due to highrisk unknowns; rollups visible via API.
---
### Sprint 3 – UI (Unknowns tab + board)
* [ ] Image detail “Unknowns” tab:
* Metrics ribbon, table, filters.
* Row drawer with evidence & history.
* [ ] Global “Unknowns board” page.
* [ ] Integrate with APIs.
* [ ] Add basic “explainability tooltip” for UR.
**Exit criteria:** Security team can triage unknowns via UI; product teams can see their exposure.
---
### Sprint 4 – Suppression workflow & reporting
* [ ] Implement `PATCH /unknowns/{id}` + suppression rules & expiries.
* [ ] Extend periodic jobs to auto-expire suppressions.
* [ ] Weekly unknowns report job → analytics + Slack/email.
* [ ] Add “trend” sparklines and unknowns burndown in UI.
**Exit criteria:** Unknowns can be suppressed with justification; org gets weekly burndown trends.
---
If you'd like, I can next:
* Turn this into concrete tickets (Jira-style) with story points and acceptance criteria, or
* Generate example migration scripts (SQL) and API contract files (OpenAPI snippet) that your devs can copy-paste.

# Product Advisory Index
This index consolidates the November 2025 product advisories, identifying canonical documents and duplicates.
## Canonical Advisories (Active)
These are the authoritative advisories to reference for implementation:
### CVSS v4.0
- **Canonical:** `25-Nov-2025 - Add CVSS v4.0 Score Receipts for Transparency.md`
- **Sprint:** SPRINT_0190_0001_0001_cvss_v4_receipts.md
- **Status:** New sprint created
### SBOM/VEX Pipeline
- **Canonical:** `27-Nov-2025 - Deep Architecture Brief - SBOMFirst, VEXReady Spine.md`
- **Sprint:** SPRINT_0186_0001_0001_record_deterministic_execution.md (tasks 15a-15f)
- **Supersedes:**
- `24-Nov-2025 - Bridging OpenVEX and CycloneDX for .NET.md` → archive
- `25-Nov-2025 - Revisiting Determinism in SBOM→VEX Pipeline.md` → archive
- `26-Nov-2025 - From SBOM to VEX - Building a Transparent Chain.md` → archive
### Rekor/DSSE Batch Sizing
- **Canonical:** `26-Nov-2025 - Handling Rekor v2 and DSSE AirGap Limits.md`
- **Sprint:** SPRINT_0401_0001_0001_reachability_evidence_chain.md (DSSE tasks)
- **Supersedes:**
- `27-Nov-2025 - Rekor Envelope Size Heuristic.md` → archive (duplicate)
- `27-Nov-2025 - DSSE and Rekor Envelope Size Heuristic.md` → archive (duplicate)
- `27-Nov-2025 - Optimizing DSSE Batch Sizes for Reliable Logging.md` → archive (duplicate)
### Graph Revision IDs
- **Canonical:** `26-Nov-2025 - Use Graph Revision IDs as Public Trust Anchors.md`
- **Sprint:** SPRINT_0401_0001_0001_reachability_evidence_chain.md (existing tasks)
- **Supersedes:**
- `25-Nov-2025 - HashStable Graph Revisions Across Systems.md` → archive (earlier version)
### Reachability Benchmark (Public)
- **Canonical:** `24-Nov-2025 - Designing a Deterministic Reachability Benchmark.md`
- **Sprint:** SPRINT_0513_0001_0001_public_reachability_benchmark.md
- **Related:**
- `26-Nov-2025 - Opening Up a Reachability Dataset.md` → complementary (dataset focus)
### Unknowns Registry
- **Canonical:** `27-Nov-2025 - Managing Ambiguity Through an Unknowns Registry.md`
- **Sprint:** SPRINT_0140_0001_0001_runtime_signals.md (existing implementation)
- **Extends:** `archived/18-Nov-2025 - Unknowns-Registry.md`
- **Status:** Already implemented in Signals module; advisory validates design
### Explainability
- **Canonical (Graphs):** `27-Nov-2025 - Making Graphs Understandable to Humans.md`
- **Canonical (Verdicts):** `27-Nov-2025 - Explainability Layer for Vulnerability Verdicts.md`
- **Sprint:** SPRINT_0401_0001_0001_reachability_evidence_chain.md (UI-CLI tasks)
- **Status:** Complementary advisories - graphs cover edge reasons, verdicts cover audit trails
### VEX Proofs
- **Canonical:** `25-Nov-2025 - Define Safe VEX 'Not Affected' Claims with Proofs.md`
- **Sprint:** SPRINT_0401_0001_0001_reachability_evidence_chain.md (POLICY-VEX tasks)
### Binary Reachability
- **Canonical:** `27-Nov-2025 - Verifying Binary Reachability via DSSE Envelopes.md`
- **Sprint:** SPRINT_0401_0001_0001_reachability_evidence_chain.md (GRAPH-HYBRID tasks)
### Scanner Roadmap
- **Canonical:** `27-Nov-2025 - Blueprint for a 2026Ready Scanner.md`
- **Sprint:** Multiple sprints (0186, 0401, 0512)
- **Status:** High-level roadmap document
## Files to Archive
The following files should be moved to `archived/` as they are superseded:
```
# Duplicates/superseded
24-Nov-2025 - Bridging OpenVEX and CycloneDX for .NET.md
25-Nov-2025 - Revisiting Determinism in SBOM→VEX Pipeline.md
25-Nov-2025 - HashStable Graph Revisions Across Systems.md
26-Nov-2025 - From SBOM to VEX - Building a Transparent Chain.md
27-Nov-2025 - Rekor Envelope Size Heuristic.md
27-Nov-2025 - DSSE and Rekor Envelope Size Heuristic.md
27-Nov-2025 - Optimizing DSSE Batch Sizes for Reliable Logging.md
# Junk/malformed files
24-Nov-2025 - 1 copy 2.md
24-Nov-2025 - Designing a Deterministic Reachability Benchmarkmd (missing dot)
25-Nov-2025 - HalfLife Confidence Decay for Unknownsmd (missing dot)
```
## Sprint Cross-Reference
| Advisory Topic | Sprint ID | Status |
|---------------|-----------|--------|
| CVSS v4.0 | SPRINT_0190_0001_0001 | NEW |
| SPDX 3.0.1 / SBOM | SPRINT_0186_0001_0001 | AUGMENTED |
| Reachability Benchmark | SPRINT_0513_0001_0001 | NEW |
| Reachability Evidence | SPRINT_0401_0001_0001 | EXISTING |
| Unknowns Registry | SPRINT_0140_0001_0001 | EXISTING (implemented) |
| Graph Revision IDs | SPRINT_0401_0001_0001 | EXISTING |
| DSSE/Rekor Batching | SPRINT_0401_0001_0001 | EXISTING |
## Implementation Priority
Based on gap analysis:
1. **P0 - CVSS v4.0** (Sprint 0190) - Industry moving to v4.0, genuine gap
2. **P1 - SPDX 3.0.1** (Sprint 0186 tasks 15a-15f) - Standards compliance
3. **P1 - Public Benchmark** (Sprint 0513) - Differentiation/marketing value
4. **P2 - Explainability** (Sprint 0401) - UX enhancement, existing tasks
5. **P3 - Already Implemented** - Unknowns, Graph IDs, DSSE batching
## Implementer Quick Reference
For each topic, the implementer should read:
1. **Sprint file** - Contains task definitions, dependencies, working directories
2. **Documentation Prerequisites** - Listed in each sprint file
3. **Canonical advisory** - Full product context and rationale
4. **Module AGENTS.md** - If exists, contains module-specific coding guidance
### Key Module Docs to Read Before Implementation
| Module | Architecture Doc | AGENTS.md |
|--------|-----------------|-----------|
| Policy | `docs/modules/policy/architecture.md` | `src/Policy/*/AGENTS.md` |
| Scanner | `docs/modules/scanner/architecture.md` | `src/Scanner/*/AGENTS.md` |
| Sbomer | `docs/modules/sbomer/architecture.md` | `src/Sbomer/*/AGENTS.md` |
| Signals | `docs/modules/signals/architecture.md` | `src/Signals/*/AGENTS.md` |
| Attestor | `docs/modules/attestor/architecture.md` | `src/Attestor/*/AGENTS.md` |
---
*Index created: 2025-11-27*
*Last updated: 2025-11-27*

Here's a practical, first-time-friendly guide to using VEX in StellaOps, plus a concrete .NET pattern you can drop in today.
---
# VEX in a nutshell
* **VEX (Vulnerability Exploitability eXchange)**: a small JSON document that says whether specific CVEs *actually* affect a product/version.
* **OpenVEX**: SBOM-agnostic; references products/components directly (URIs, PURLs, hashes). Great for canonical internal models.
* **CycloneDX VEX / SPDX VEX**: tie VEX statements closely to a specific SBOM instance (component BOM ref IDs). Great when the BOM is your source of truth.
**Our strategy:**
* **Store VEX separately** from SBOMs (deterministic, easier air-gap bundling).
* **Link by strong references** (PURLs + content hashes + optional SBOM component IDs).
* **Translate on ingest** between OpenVEX ↔ CycloneDX VEX as needed so downstream tools stay happy.
---
# Translation model (OpenVEX ↔ CycloneDX VEX)
1. **Identity mapping**
* Prefer **PURL** for packages; fall back to **SHA-256 (or SHA-512)** of the artifact; optionally include the **SBOM `bom-ref`** if known.
2. **Product scope**
* OpenVEX “product” → CycloneDX `affects` with `bom-ref` (if available) or a synthetic ref derived from PURL/hash.
3. **Status mapping**
* `affected | not_affected | under_investigation | fixed` map 1:1.
* Keep **timestamps**, **justification**, **impact statement**, and **origin**.
4. **Evidence**
* Preserve links to advisories, commits, tests; attach as CycloneDX `analysis/evidence` notes (or OpenVEX `metadata/notes`).
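The status mapping in step 3 can be sketched as a pair of lookups. The CycloneDX `analysis.state` names below follow the 1.5 schema and should be checked against whichever spec revision your toolchain targets:

```csharp
using System;

public static class VexStatusMap
{
    public static string ToCycloneDx(string openVexStatus) => openVexStatus switch
    {
        "affected"            => "exploitable",
        "not_affected"        => "not_affected",
        "under_investigation" => "in_triage",
        "fixed"               => "resolved",
        _ => throw new ArgumentOutOfRangeException(nameof(openVexStatus))
    };

    public static string ToOpenVex(string cycloneDxState) => cycloneDxState switch
    {
        "exploitable"            => "affected",
        "not_affected"           => "not_affected",
        "in_triage"              => "under_investigation",
        "resolved"               => "fixed",
        "resolved_with_pedigree" => "fixed",         // collapses on round-trip
        "false_positive"         => "not_affected",  // justification should record why
        _ => throw new ArgumentOutOfRangeException(nameof(cycloneDxState))
    };
}
```

Note the mapping is lossy in one direction (`resolved_with_pedigree` and `false_positive` have no distinct OpenVEX status), which is why justification text must be preserved.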
**Collision rules (deterministic):**
* New statement wins if:
* Newer `timestamp` **and**
* Higher **provenance trust** (signed by vendor/Authority) or equal with a lexicographic tiebreak (issuer keyID).
---
# Storage model (MongoDBfriendly)
* **Collections**
* `vex.documents` – one doc per VEX file (OpenVEX or CycloneDX VEX).
* `vex.statements` – *flattened*, one per (product/component, vuln).
* `artifacts` – canonical component index (PURL, hashes, optional SBOM refs).
* **Reference keys**
* `artifactKey = purl || sha256 || (groupId:name:version for .NET/NuGet)`
* `vulnKey = cveId || ghsaId || internalId`
* **Deterministic IDs**
* `_id = sha256(canonicalize(statement-json-without-signature))`
* **Signatures**
* Keep DSSE/Sigstore envelopes in `vex.documents.signatures[]` for audit & replay.
---
# Air-gap bundling
Package **SBOMs + VEX + artifacts index + trust roots** as a single tarball:
```
/bundle/
  sboms/*.json
  vex/*.json               # OpenVEX & CycloneDX VEX allowed
  index/artifacts.jsonl    # purl, hashes, bom-ref map
  trust/rekor.merkle.roots
  trust/fulcio.certs.pem
  trust/keys/*.pub
  manifest.json            # content list + sha256 + issuedAt
```
* **Deterministic replay:** re-ingest is a pure function of the bundle bytes → identical DB state.
---
# .NET 10 implementation (C#) – deterministic ingestion
### Core models
```csharp
public record ArtifactRef(
    string? Purl,
    string? Sha256,
    string? BomRef);

public enum VexStatus { Affected, NotAffected, UnderInvestigation, Fixed }

public record VexStatement(
    string StatementId,       // sha256 of canonical payload
    ArtifactRef Artifact,
    string VulnId,            // e.g., "CVE-2024-1234"
    VexStatus Status,
    string? Justification,
    string? ImpactStatement,
    DateTimeOffset Timestamp,
    string IssuerKeyId,       // from DSSE/Signing
    int ProvenanceScore);     // Authority policy
```
### Canonicalizer (stable order, no env fields)
```csharp
static string Canonicalize(VexStatement s)
{
    var payload = new {
        artifact = new { s.Artifact.Purl, s.Artifact.Sha256, s.Artifact.BomRef },
        vulnId = s.VulnId,
        status = s.Status.ToString(),
        justification = s.Justification,
        impact = s.ImpactStatement,
        timestamp = s.Timestamp.UtcDateTime
    };
    // Anonymous-type properties serialize in declaration order, which is stable
    var opts = new System.Text.Json.JsonSerializerOptions {
        WriteIndented = false
    };
    string json = System.Text.Json.JsonSerializer.Serialize(payload, opts);
    // Normalize unicode + newline
    json = json.Normalize(NormalizationForm.FormKC).Replace("\r\n", "\n");
    return json;
}

static string Sha256(string s)
{
    using var sha = System.Security.Cryptography.SHA256.Create();
    var bytes = sha.ComputeHash(System.Text.Encoding.UTF8.GetBytes(s));
    return Convert.ToHexString(bytes).ToLowerInvariant();
}
```
### Ingest pipeline
```csharp
public sealed class VexIngestor
{
    readonly IVexParser _parser;        // OpenVEX & CycloneDX adapters
    readonly IArtifactIndex _artifacts;
    readonly IVexRepo _repo;            // Mongo-backed
    readonly IPolicy _policy;           // tie-break rules

    public async Task IngestAsync(Stream vexJson, SignatureEnvelope? sig)
    {
        var doc = await _parser.ParseAsync(vexJson); // yields normalized statements
        var issuer = sig?.KeyId ?? "unknown";

        foreach (var st in doc.Statements)
        {
            var canon = Canonicalize(st);
            var id = Sha256(canon);

            var withMeta = st with {
                StatementId = id,
                IssuerKeyId = issuer,
                ProvenanceScore = _policy.Score(sig, st)
            };

            // Upsert artifact (purl/hash/bomRef)
            await _artifacts.UpsertAsync(withMeta.Artifact);

            // Deterministic merge
            var existing = await _repo.GetAsync(id)
                ?? await _repo.FindByKeysAsync(withMeta.Artifact, st.VulnId);

            if (existing is null || _policy.IsNewerAndStronger(existing, withMeta))
                await _repo.UpsertAsync(withMeta);
        }

        if (sig is not null) await _repo.AttachSignatureAsync(doc.DocumentId, sig);
    }
}
```
### Parsers (adapters)
* `OpenVexParser` – reads OpenVEX; emits `VexStatement` with `ArtifactRef` (PURL/hash).
* `CycloneDxVexParser` – resolves `bom-ref` → looks up PURL/hash via `IArtifactIndex` (if the SBOM is present); if not, stores the `bom-ref` and marks the artifact unresolved for later backfill.
---
# Why this works for StellaOps
* **SBOM-agnostic core** (OpenVEX-first) maps cleanly to your MongoDB canonical stores and `.NET 10` services.
* **SBOM-aware edges** (CycloneDX VEX) are still supported via adapters and `bom-ref` backfill.
* **Deterministic everything**: canonical JSON → SHA-256 IDs → reproducible merges → perfect for audits and offline environments.
* **Air-gap ready**: single bundle with trust roots, replayable on any node.
---
# Next steps (plug-and-play)
1. Implement the two parsers (`OpenVexParser`, `CycloneDxVexParser`).
2. Add the repo/index interfaces to your `StellaOps.Vexer` service:
* `IVexRepo` (Mongo collections `vex.documents`, `vex.statements`)
* `IArtifactIndex` (your canonical PURL/hash map)
3. Wire `Policy` to Authority to score signatures and apply tie-breaks.
4. Add a `bundle ingest` CLI: `vexer ingest /bundle/manifest.json`.
5. Expose GraphQL (HotChocolate) queries:
* `vexStatements(artifactKey, vulnId)`, `vexStatus(artifactKey)`, `evidence(...)`.
If you want, I can generate the exact Mongo schemas, HotChocolate types, and a minimal test bundle to validate the ingest end-to-end.
Below is a complete, developer-ready implementation plan for the **VEX ingestion, translation, canonicalization, storage, and merge-policy pipeline** inside **StellaOps.Vexer**, aligned with your architecture, deterministic requirements, MongoDB model, DSSE/Authority workflow, and `.NET 10` standards.
This is structured so an average developer can follow it step-by-step without ambiguity.
It is broken into phases, each with clear tasks, acceptance criteria, failure modes, interfaces, and code pointers.
---
# StellaOps.Vexer
## Full Implementation Plan (Developer-Executable)
---
# 1. Core Objectives
Develop a deterministic, replayable, SBOM-agnostic but SBOM-compatible VEX subsystem supporting:
* OpenVEX and CycloneDX VEX ingestion.
* Canonicalization → SHA-256 identity.
* Cross-linking to artifacts (purl, hash, bom-ref).
* Merge policies driven by Authority trust/lattice rules.
* Complete offline reproducibility.
* MongoDB canonical storage.
* Exposed through gRPC/REST/GraphQL.
---
# 2. Module Structure (to be implemented)
```
src/StellaOps.Vexer/
  Application/
    Commands/
    Queries/
    Ingest/
    Translation/
    Merge/
    Policies/
  Domain/
    Entities/
    ValueObjects/
    Services/
  Infrastructure/
    Mongo/
    AuthorityClient/
    Hashing/
    Signature/
    BlobStore/
  Presentation/
    GraphQL/
    REST/
    gRPC/
```
Every subfolder must compile in strict mode (treat warnings as errors).
---
# 3. Data Model (MongoDB)
## 3.1 `vex.statements` collection
Document schema:
```json
{
  "_id": "sha256(canonical-json)",
  "artifact": {
    "purl": "pkg:nuget/... or null",
    "sha256": "hex or null",
    "bomRef": "optional ref",
    "resolved": true | false
  },
  "vulnId": "CVE-XXXX-YYYY",
  "status": "affected | not_affected | under_investigation | fixed",
  "justification": "...",
  "impact": "...",
  "timestamp": "2024-01-01T12:34:56Z",
  "issuerKeyId": "FULCIO-KEY-ID",
  "provenanceScore": 0-100,
  "documentId": "UUID of vex.documents entry",
  "sourceFormat": "openvex|cyclonedx",
  "createdAt": "...",
  "updatedAt": "..."
}
```
## 3.2 `vex.documents` collection
```
{
  "_id": "<uuid>",
  "format": "openvex|cyclonedx",
  "rawBlobId": "<blob-id in blobstore>",
  "signatures": [
    {
      "type": "dsse",
      "verified": true,
      "issuerKeyId": "F-123...",
      "timestamp": "...",
      "bundleEvidence": {...}
    }
  ],
  "ingestedAt": "...",
  "statementIds": ["sha256-1", "sha256-2", ...]
}
```
---
# 4. Components to Implement
## 4.1 Parsing Layer
### Interfaces
```csharp
public interface IVexParser
{
    ValueTask<ParsedVexDocument> ParseAsync(Stream jsonStream);
}

public sealed record ParsedVexDocument(
    string DocumentId,
    string Format,
    IReadOnlyList<ParsedVexStatement> Statements);
```
### Tasks
1. Implement `OpenVexParser`.
* Use System.Text.Json source generators.
* Validate OpenVEX schema version.
* Extract product → component mapping.
* Map to internal `ArtifactRef`.
2. Implement `CycloneDxVexParser`.
* Support 1.5+ “vex” extension.
* bom-ref resolution through `IArtifactIndex`.
* Mark unresolved `bom-ref` but store them.
### Acceptance Criteria
* Both parsers produce identical internal representation of statements.
* Unknown fields must not corrupt canonicalization.
* 100% deterministic mapping for same input.
---
## 4.2 Canonicalizer
Implement deterministic ordering, UTF-8 normalization, stable JSON.
### Tasks
1. Create `Canonicalizer` class.
2. Apply:
* Property order: artifact, vulnId, status, justification, impact, timestamp.
* Remove optional metadata (issuerKeyId, provenance).
* Normalize Unicode → NFKC.
* Replace CRLF → LF.
3. Generate SHA-256.
### Interface
```csharp
public interface IVexCanonicalizer
{
    string Canonicalize(VexStatement s);
    string ComputeId(string canonicalJson);
}
```
### Acceptance Criteria
* Hash identical on all OS, time, locale, machines.
* Replaying the same bundle yields same `_id`.
---
## 4.3 Authority / Signature Verification
### Tasks
1. Implement DSSE envelope reader.
2. Integrate Authority client:
* Verify certificate chain (Fulcio/GOST/eIDAS etc).
* Obtain trust lattice score.
* Produce `ProvenanceScore`: int.
### Interface
```csharp
public interface ISignatureVerifier
{
    ValueTask<SignatureVerificationResult> VerifyAsync(Stream payload, Stream envelope);
}
```
### Acceptance Criteria
* If verification fails → Vexer stores document but flags signature invalid.
* Scores map to priority in merge policy.
---
## 4.4 Merge Policies
### Implement Default Policy
1. Newer timestamp wins.
2. If timestamps equal:
* Higher provenance score wins.
* If both equal, lexicographically smaller issuerKeyId wins.
### Interface
```csharp
public interface IVexMergePolicy
{
    bool ShouldReplace(VexStatement existing, VexStatement incoming);
}
```
### Acceptance Criteria
* Merge decisions reproducible.
* Deterministic ordering even when values equal.
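A minimal default implementation might look like this; the `VexStatement` record is a trimmed copy of the contract from the ingestion section so the sketch stands alone:

```csharp
using System;

public interface IVexMergePolicy
{
    bool ShouldReplace(VexStatement existing, VexStatement incoming);
}

// Trimmed to the fields the policy actually inspects.
public sealed record VexStatement(
    string VulnId,
    DateTimeOffset Timestamp,
    string IssuerKeyId,
    int ProvenanceScore);

// Default policy: newer timestamp wins, then higher provenance score,
// then the lexicographically smaller issuer key id (stable tie-break).
public sealed class DefaultVexMergePolicy : IVexMergePolicy
{
    public bool ShouldReplace(VexStatement existing, VexStatement incoming)
    {
        if (incoming.Timestamp != existing.Timestamp)
            return incoming.Timestamp > existing.Timestamp;
        if (incoming.ProvenanceScore != existing.ProvenanceScore)
            return incoming.ProvenanceScore > existing.ProvenanceScore;
        return string.CompareOrdinal(incoming.IssuerKeyId, existing.IssuerKeyId) < 0;
    }
}
```

Because every comparison bottoms out in a total order, the same pair of statements always merges the same way regardless of arrival order.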
---
## 4.5 Ingestion Pipeline
### Steps
1. Accept `multipart/form-data` or referenced blob ID.
2. Parse via correct parser.
3. Verify signature (optional).
4. For each statement:
* Canonicalize.
* Compute `_id`.
* Upsert artifact into `artifacts` (via `IArtifactIndex`).
* Resolve bom-ref (if CycloneDX).
* Existing statement? Apply merge policy.
* Insert or update.
5. Create `vex.documents` entry.
### Class
`VexIngestService`
### Required Methods
```csharp
public Task<IngestResult> IngestAsync(VexIngestRequest request);
```
### Acceptance Tests
* Idempotent: ingesting same VEX repeated → DB unchanged.
* Deterministic under concurrency.
* Air-gap replay produces identical DB state.
---
## 4.6 Translation Layer
### Implement two converters:
* `OpenVexToCycloneDxTranslator`
* `CycloneDxToOpenVexTranslator`
### Rules
* Prefer PURL → hash → synthetic bom-ref.
* Single VEX statement → one CycloneDX “analysis” entry.
* Preserve justification, impact, notes.
### Acceptance Criteria
* Round-trip OpenVEX → CycloneDX → OpenVEX produces equal canonical hashes (except format markers).
---
## 4.7 Artifact Index Backfill
### Reason
CycloneDX VEX may refer to bom-refs not yet known at ingestion.
### Tasks
1. Store unresolved artifacts.
2. Create background `BackfillWorker`:
* Watches `sboms.documents` ingestion events.
* Matches bom-refs.
* Updates statements with resolved PURL/hashes.
* Recomputes canonical JSON + SHA-256 (new version stored as new ID).
3. Marks old unresolved statement as superseded.
### Acceptance Criteria
* Backfilling is monotonic: no overwriting original.
* Deterministic after backfill: same SBOM yields same final ID.
---
## 4.8 Bundle Ingestion (Air-Gap Mode)
### Structure
```
bundle/
  sboms/*.json
  vex/*.json
  index/artifacts.jsonl
  trust/*
  manifest.json
```
### Tasks
1. Implement `BundleIngestService`.
2. Stages:
* Validate manifest + hashes.
* Import trust roots (local only).
* Ingest SBOMs first.
* Ingest VEX documents.
3. Reproduce same IDs on all nodes.
### Acceptance Criteria
* Byte-identical bundle → byte-identical DB.
* Works offline completely.
---
# 5. Interfaces for GraphQL/REST/gRPC
Expose:
## Queries
* `vexStatement(id)`
* `vexStatementsByArtifact(purl/hash)`
* `vexStatus(purl)` → latest merged status
* `vexDocument(id)`
* `affectedComponents(vulnId)`
## Mutations
* `ingestVexDocument`
* `translateVex(format)`
* `exportVexDocument(id, targetFormat)`
* `replayBundle(bundleId)`
All responses must include deterministic IDs.
---
# 6. Detailed Developer Tasks by Sprint
## Sprint 1: Foundation
1. Create solution structure.
2. Add Mongo DB contexts.
3. Implement data entities.
4. Implement hashing + canonicalizer.
5. Implement IVexParser interface.
## Sprint 2: Parsers
1. Implement OpenVexParser.
2. Implement CycloneDxParser.
3. Develop strong unit tests for JSON normalization.
## Sprint 3: Signature & Authority
1. DSSE envelope reader.
2. Call Authority to verify signatures.
3. Produce provenance scores.
## Sprint 4: Merge Policy Engine
1. Implement deterministic lattice merge.
2. Unit tests: 20+ collision scenarios.
## Sprint 5: Ingestion Pipeline
1. Implement ingest service end-to-end.
2. Insert/update logic.
3. Add GraphQL endpoints.
## Sprint 6: Translation Layer
1. OpenVEX↔CycloneDX converter.
2. Tests for round-trip.
## Sprint 7: Backfill System
1. Bom-ref resolver worker.
2. Rehashing logic for updated artifacts.
3. Events linking SBOM ingestion to backfill.
## Sprint 8: Air-Gap Bundle
1. BundleIngestService.
2. Manifest verification.
3. Trust root local loading.
## Sprint 9: Hardening
1. Fuzz parsers.
2. Deterministic stress tests.
3. Concurrency validation.
4. Storage compaction.
---
# 7. Failure Handling Matrix
| Failure | Action | Logged? | Retries |
| ------------------- | -------------------------------------- | ------- | ------- |
| Invalid JSON | Reject document | Yes | 0 |
| Invalid schema | Reject | Yes | 0 |
| Signature invalid | Store document, mark signature invalid | Yes | 0 |
| Artifact unresolved | Store unresolved, enqueue backfill | Yes | 3 |
| Merge conflict | Apply policy | Yes | 0 |
| Canonical mismatch | Hard fail | Yes | 0 |
---
# 8. Developer Unit Test Checklist
### Must-have tests:
* Canonicalization stability (100 samples).
* Identical input twice → identical `_id`.
* Parsing OpenVEX with multi-product definitions.
* Parsing CycloneDX with missing bom-refs.
* Merge policy tie-breakers.
* Air-gap replay reproducibility.
* Translation equivalence.
---
# 9. Deliverables for Developers
They must produce:
1. Interfaces + DTOs + document schemas.
2. Canonicalizer with 100% deterministic output.
3. Two production-grade parsers.
4. Signature verification pipeline.
5. Merge policies aligned with Authority trust model.
6. End-to-end ingestion service.
7. Translation layer.
8. Backfill worker.
9. Air-gap bundle script + service.
10. GraphQL APIs.
---
If you want, I can next produce:
* A full **developer handbook** (60–90 pages).
* Full **technical architecture ADRs**.
* A concrete **scaffold** with compiles-clean `.NET 10` project.
* Complete **test suite specification**.
* A **README.md** for new joiners.

Here's a practical way to make a cross-platform, hash-stable JSON “fingerprint” for things like a `graph_revision_id`, so your hashes don't change between OS/locale settings.
---
### What “canonical JSON” means (in plain terms)
* **Deterministic order:** Always write object properties in a fixed order (e.g., lexicographic).
* **Stable numbers:** Serialize numbers the same way everywhere (no locale, no extra zeros).
* **Normalized text:** Normalize all strings to Unicode **NFC** so accented/combined characters don't vary.
* **Consistent bytes:** Encode as **UTF-8** with **LF** (`\n`) newlines only.
These ideas match the JSON Canonicalization Scheme (RFC 8785)—use it as your north star for stable hashing.
---
### Drop-in C# helper (targets .NET 8/10)
This gives you canonical UTF-8 bytes (`byte[]`) and a SHA-256 hex hash. It:
* Recursively sorts object properties,
* Emits numbers with invariant formatting,
* Normalizes all string values to **NFC**,
* Uses `\n` endings,
* Produces a SHA-256 for `graph_revision_id`.
```csharp
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using System.Text.Json.Nodes;

public static class CanonJson
{
    // Entry point: produce canonical UTF-8 bytes
    public static byte[] ToCanonicalUtf8(object? value)
    {
        // 1) Serialize once to JsonNode to work with types safely
        var initialJson = JsonSerializer.SerializeToNode(
            value,
            new JsonSerializerOptions
            {
                NumberHandling = JsonNumberHandling.AllowReadingFromString,
                Encoder = System.Text.Encodings.Web.JavaScriptEncoder.UnsafeRelaxedJsonEscaping // we will control escaping
            });

        // 2) Canonicalize (sort keys, normalize strings, normalize numbers)
        var canonNode = CanonicalizeNode(initialJson);

        // 3) Write in a deterministic manner
        var sb = new StringBuilder(4096);
        WriteCanonical(canonNode!, sb);

        // 4) Ensure LF only
        var lf = sb.ToString().Replace("\r\n", "\n").Replace("\r", "\n");

        // 5) UTF-8 bytes
        return Encoding.UTF8.GetBytes(lf);
    }

    // Convenience: compute SHA-256 hex for graph_revision_id
    public static string ComputeGraphRevisionId(object? value)
    {
        var bytes = ToCanonicalUtf8(value);
        using var sha = SHA256.Create();
        var hash = sha.ComputeHash(bytes);
        var sb = new StringBuilder(hash.Length * 2);
        foreach (var b in hash) sb.Append(b.ToString("x2"));
        return sb.ToString();
    }

    // --- Internals ---

    private static JsonNode? CanonicalizeNode(JsonNode? node)
    {
        if (node is null) return null;

        switch (node)
        {
            case JsonValue v:
                if (v.TryGetValue<string>(out var s))
                {
                    // Normalize strings to NFC
                    var nfc = s.Normalize(NormalizationForm.FormC);
                    return JsonValue.Create(nfc);
                }
                // Check integral values before double so longs are not silently widened
                if (v.TryGetValue<long>(out var l))
                {
                    return JsonValue.Create(l);
                }
                if (v.TryGetValue<double>(out var d))
                {
                    // RFC-like minimal form: invariant culture, no thousand sep; handle -0 => 0
                    if (d == 0) d = 0; // squash -0
                    return JsonValue.Create(d);
                }
                // Fallback: keep as-is (bool, null)
                return v;

            case JsonArray arr:
                var outArr = new JsonArray();
                foreach (var elem in arr)
                    outArr.Add(CanonicalizeNode(elem));
                return outArr;

            case JsonObject obj:
                // Sort keys lexicographically (RFC 8785 uses code unit order)
                var sorted = new JsonObject();
                foreach (var kvp in obj.OrderBy(k => k.Key, StringComparer.Ordinal))
                    sorted[kvp.Key] = CanonicalizeNode(kvp.Value);
                return sorted;

            default:
                return node;
        }
    }

    // Deterministic writer matching our canonical rules
    private static void WriteCanonical(JsonNode node, StringBuilder sb)
    {
        switch (node)
        {
            case JsonObject obj:
                sb.Append('{');
                bool first = true;
                foreach (var kvp in obj)
                {
                    if (!first) sb.Append(',');
                    first = false;
                    WriteString(kvp.Key, sb); // property name
                    sb.Append(':');
                    WriteCanonical(kvp.Value!, sb);
                }
                sb.Append('}');
                break;

            case JsonArray arr:
                sb.Append('[');
                for (int i = 0; i < arr.Count; i++)
                {
                    if (i > 0) sb.Append(',');
                    WriteCanonical(arr[i]!, sb);
                }
                sb.Append(']');
                break;

            case JsonValue val:
                if (val.TryGetValue<string>(out var s))
                {
                    WriteString(s, sb);
                }
                else if (val.TryGetValue<long>(out var l))
                {
                    sb.Append(l.ToString(CultureInfo.InvariantCulture));
                }
                else if (val.TryGetValue<double>(out var d))
                {
                    // Minimal form close to RFC 8785 guidance:
                    // - No NaN/Infinity in JSON
                    // - Invariant culture, trim trailing zeros and dot
                    if (double.IsNaN(d) || double.IsInfinity(d))
                        throw new InvalidOperationException("Non-finite numbers are not valid in canonical JSON.");
                    if (d == 0) d = 0; // squash -0
                    var sNum = d.ToString("G17", CultureInfo.InvariantCulture);
                    // Trim redundant zeros in exponentless decimals
                    if (sNum.Contains('.') && !sNum.Contains("e") && !sNum.Contains("E"))
                    {
                        sNum = sNum.TrimEnd('0').TrimEnd('.');
                    }
                    sb.Append(sNum);
                }
                else
                {
                    // bool / null
                    if (val.TryGetValue<bool>(out var b))
                        sb.Append(b ? "true" : "false");
                    else
                        sb.Append("null");
                }
                break;

            default:
                sb.Append("null");
                break;
        }
    }

    private static void WriteString(string s, StringBuilder sb)
    {
        sb.Append('"');
        foreach (var ch in s)
        {
            switch (ch)
            {
                case '\"': sb.Append("\\\""); break;
                case '\\': sb.Append("\\\\"); break;
                case '\b': sb.Append("\\b"); break;
                case '\f': sb.Append("\\f"); break;
                case '\n': sb.Append("\\n"); break;
                case '\r': sb.Append("\\r"); break;
                case '\t': sb.Append("\\t"); break;
                default:
                    if (char.IsControl(ch))
                    {
                        sb.Append("\\u");
                        sb.Append(((int)ch).ToString("x4"));
                    }
                    else
                    {
                        sb.Append(ch);
                    }
                    break;
            }
        }
        sb.Append('"');
    }
}
```
**Usage in your code (e.g., StellaOps):**
```csharp
var payload = new {
    graphId = "core-vuln-edges",
    version = 3,
    edges = new[] { new { from = "pkg:nuget/Newtonsoft.Json@13.0.3", to = "pkg:nuget/System.Text.Json@8.0.4" } },
    // NB: volatile fields like generatedAt change the hash on every run;
    // strip or pin them when you want a stable revision id.
    meta = new { generatedAt = DateTime.UtcNow.ToString("yyyy-MM-ddTHH:mm:ssZ") }
};

// Canonical bytes (UTF-8 + LF) for storage/attestation:
var canon = CanonJson.ToCanonicalUtf8(payload);

// Stable revision id (SHA-256 hex):
var graphRevisionId = CanonJson.ComputeGraphRevisionId(payload);
Console.WriteLine(graphRevisionId);
```
---
### Operational tips
* **Freeze locales:** Always run with `CultureInfo.InvariantCulture` when formatting numbers/dates before they hit JSON.
* **Reject non-finite numbers:** Don't allow `NaN`/`Infinity` – they're not valid JSON and will break canonicalization.
* **One writer, everywhere:** Use this same helper in CI, build agents, and runtime so the hash never drifts.
* **Record the scheme:** Store the **canonicalization version** (e.g., `canon_v="JCSlike v1"`) alongside the hash to allow future upgrades without breaking verification.
If you want, I can adapt this to stream very large JSONs (avoid `JsonNode`) or emit a **DSSE**/intoto style envelope with the canonical bytes as the payload for your attestation chain.
Heres a concrete, stepbystep implementation plan you can hand to the devs so they know exactly what to build and how it all fits together.
Ill break it into phases:
1. **Design & scope**
2. **Canonical JSON library**
3. **Graph canonicalization & `graph_revision_id` calculation**
4. **Tooling, tests & cross-platform verification**
5. **Integration & rollout**
---
## 1. Design & scope
### 1.1. Goals
* Produce a **stable, cross-platform hash** (e.g. SHA-256) from JSON content.
* This hash becomes your **`graph_revision_id`** for supply-chain graphs.
* Hash **must not change** due to:
* OS differences (Windows/Linux/macOS)
* Locale differences
* Whitespace/property order differences
* Unicode normalization issues (e.g. accented chars)
### 1.2. Canonicalization strategy (what devs should implement)
You'll use **two levels of canonicalization**:
1. **Domain-level canonicalization (graph)**
Make sure semantically equivalent graphs always serialize to the same in-memory structure:
* Sort arrays (e.g. nodes, edges) in a deterministic way (ID, then type, etc.).
* Remove / ignore non-semantic or unstable fields (timestamps, debug info, transient IDs).
2. **Encoding-level canonicalization (JSON)**
Convert that normalized object into **canonical JSON**:
* Object keys sorted lexicographically (`StringComparer.Ordinal`).
* Strings normalized to **Unicode NFC**.
* Numbers formatted with **InvariantCulture**, no locale effects.
* No NaN/Infinity (reject or map them before hashing).
* UTF-8 output with **LF (`\n`) only**.
You already have a C# canonical JSON helper from me; this plan is about turning it into a production-ready component and wiring it through the system.
---
## 2. Canonical JSON library
**Owner:** backend platform team
**Deliverable:** `StellaOps.CanonicalJson` (or similar) shared library
### 2.1. Project setup
* Create a **.NET class library**:
* `src/StellaOps.CanonicalJson/StellaOps.CanonicalJson.csproj`
* Target same framework as your services (e.g. `net8.0`).
* Add reference to `System.Text.Json`.
### 2.2. Public API design
In `CanonicalJson.cs` (or `CanonJson.cs`):
```csharp
namespace StellaOps.CanonicalJson;
public static class CanonJson
{
// Version of your canonicalization algorithm (important for future changes)
public const string CanonicalizationVersion = "canon-json-v1";
public static byte[] ToCanonicalUtf8<T>(T value);
public static string ToCanonicalString<T>(T value);
public static byte[] ComputeSha256<T>(T value);
public static string ComputeSha256Hex<T>(T value);
}
```
**Behavioral requirements:**
* `ToCanonicalUtf8`:
* Serializes input to a `JsonNode`.
* Applies canonicalization rules (sort keys, normalize strings, normalize numbers).
* Writes minimal JSON with:
* No extra spaces.
* Keys in lexicographic order.
* UTF-8 bytes and LF newlines only.
* `ComputeSha256Hex`:
* Uses `ToCanonicalUtf8` and computes SHA-256.
* Returns lowercase hex string.
### 2.3. Canonicalization rules (dev checklist)
**Objects (`JsonObject`):**
* Sort keys using `StringComparer.Ordinal`.
* Recursively canonicalize child nodes.
**Arrays (`JsonArray`):**
* Preserve order as given by caller.
*(The “graph canonicalization” step will make sure this order is semantically stable before JSON.)*
**Strings:**
* Normalize to **NFC**:
```csharp
var normalized = original.Normalize(NormalizationForm.FormC);
```
* When writing JSON:
* Escape `"`, `\`, control characters (`< 0x20`) using `\uXXXX` format.
* Use `\n`, `\r`, `\t`, `\b`, `\f` for standard escapes.
**Numbers:**
* Support at least `long`, `double`, `decimal`.
* Use **InvariantCulture**:
```csharp
someNumber.ToString("G17", CultureInfo.InvariantCulture);
```
* Normalize `-0` to `0`.
* No grouping separators, no locale decimals.
* Reject `NaN`, `+Infinity`, `-Infinity` with a clear exception.
**Booleans & null:**
* Emit `true`, `false`, `null` (lowercase).
**Newlines:**
* Ensure final string has only `\n`:
```csharp
json = json.Replace("\r\n", "\n").Replace("\r", "\n");
```
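Taken together, the object and string rules above can be sketched as a small `JsonNode` rewriter. This is illustrative only; the number and boolean handling follow the same rules as the helper shown earlier and are delegated to `DeepClone` here:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Text.Json.Nodes;

public static class CanonRules
{
    // Rebuild a JsonNode tree with object keys sorted ordinally and all
    // strings normalized to Unicode NFC; array order is preserved as-is.
    public static JsonNode? Canonicalize(JsonNode? node) => node switch
    {
        JsonObject obj => new JsonObject(
            obj.OrderBy(kv => kv.Key, StringComparer.Ordinal)
               .Select(kv => new KeyValuePair<string, JsonNode?>(
                   kv.Key.Normalize(NormalizationForm.FormC),
                   Canonicalize(kv.Value)))),
        JsonArray arr => new JsonArray(arr.Select(Canonicalize).ToArray()),
        JsonValue v when v.TryGetValue<string>(out var s)
            => JsonValue.Create(s.Normalize(NormalizationForm.FormC)),
        _ => node?.DeepClone() // numbers, booleans, null
    };
}
```

Serializing the result with `ToJsonString()` then yields compact JSON with deterministic key order.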
### 2.4. Error handling & logging
* Throw a **custom exception** for unsupported content:
* `CanonicalJsonException : Exception`.
* Example triggers:
* Non-finite numbers (NaN/Infinity).
* Types that can't be represented in JSON.
* Log the path to the field where canonicalization failed (for debugging).
---
## 3. Graph canonicalization & `graph_revision_id`
This is where the library gets used and where the semantics of the graph are defined.
**Owner:** team that owns your supplychain graph model / graph ingestion.
**Deliverables:**
* Domain-specific canonicalization for graphs.
* Stable `graph_revision_id` computation integrated into services.
### 3.1. Define what goes into the hash
Create a short **spec document** (internal) that answers:
1. **What object is being hashed?**
* For example:
```json
{
"graphId": "core-vuln-edges",
"schemaVersion": "3",
"nodes": [...],
"edges": [...],
"metadata": {
"source": "scanner-x",
"epoch": 1732730885
}
}
```
2. **Which fields are included vs excluded?**
* Include:
* Graph identity (ID, schema version).
* Nodes (with stable key set).
* Edges (with stable key set).
* Exclude or **normalize**:
* Raw timestamps of ingestion.
* Non-deterministic IDs (if they're not part of graph semantics).
* Any environment-specific details.
3. **Versioning:**
* Add:
* `canonicalizationVersion` (from `CanonJson.CanonicalizationVersion`).
* `graphHashSchemaVersion` (separate from graph schema version).
Example JSON passed into `CanonJson`:
```json
{
"graphId": "...",
"graphSchemaVersion": "3",
"graphHashSchemaVersion": "1",
"canonicalizationVersion": "canon-json-v1",
"nodes": [...],
"edges": [...]
}
```
### 3.2. Domain-level canonicalizer
Create a class like `GraphCanonicalizer` in your graph domain assembly:
```csharp
public interface IGraphCanonicalizer<TGraph>
{
object ToCanonicalGraphObject(TGraph graph);
}
```
Implementation tasks:
1. **Choose a deterministic ordering for arrays:**
* Nodes: sort by `(nodeType, nodeId)` or `(packageUrl, version)`.
* Edges: sort by `(from, to, edgeType)`.
2. **Strip / transform unstable fields:**
* Example: external IDs that may change but are not semantically relevant.
* Replace `DateTime` with a normalized string format (if it must be part of the semantics).
3. **Output DTOs with primitive types only:**
* Create DTOs like:
```csharp
public sealed record CanonicalNode(
string Id,
string Type,
string Name,
string? Version,
IReadOnlyDictionary<string, string>? Attributes
);
```
* Use simple `record` types / POCOs that serialize cleanly with `System.Text.Json`.
4. **Combine into a single canonical graph object:**
```csharp
public sealed record CanonicalGraphDto(
string GraphId,
string GraphSchemaVersion,
string GraphHashSchemaVersion,
string CanonicalizationVersion,
IReadOnlyList<CanonicalNode> Nodes,
IReadOnlyList<CanonicalEdge> Edges
);
```
`ToCanonicalGraphObject` returns `CanonicalGraphDto`.
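The deterministic array ordering from step 1 can be sketched like this. The `CanonicalEdge` shape and the `(From, To, EdgeType)` sort key are illustrative assumptions; pick the fields that match your graph semantics:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public sealed record CanonicalEdge(string From, string To, string EdgeType);

public static class GraphOrdering
{
    // Sort edges by (From, To, EdgeType) with ordinal comparison so the
    // resulting array is identical regardless of insertion order.
    public static IReadOnlyList<CanonicalEdge> SortEdges(IEnumerable<CanonicalEdge> edges) =>
        edges.OrderBy(e => e.From, StringComparer.Ordinal)
             .ThenBy(e => e.To, StringComparer.Ordinal)
             .ThenBy(e => e.EdgeType, StringComparer.Ordinal)
             .ToList();
}
```

Nodes get the same treatment with their own key, e.g. `(nodeType, nodeId)`.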
### 3.3. `graph_revision_id` calculator
Add a service:
```csharp
public interface IGraphRevisionCalculator<TGraph>
{
string CalculateRevisionId(TGraph graph);
}
public sealed class GraphRevisionCalculator<TGraph> : IGraphRevisionCalculator<TGraph>
{
private readonly IGraphCanonicalizer<TGraph> _canonicalizer;
public GraphRevisionCalculator(IGraphCanonicalizer<TGraph> canonicalizer)
{
_canonicalizer = canonicalizer;
}
public string CalculateRevisionId(TGraph graph)
{
var canonical = _canonicalizer.ToCanonicalGraphObject(graph);
return CanonJson.ComputeSha256Hex(canonical);
}
}
```
**Wire this up in DI** for all services that handle graph creation/update.
### 3.4. Persistence & APIs
1. **Database schema:**
* Add a `graph_revision_id` column (string, length 64) to graph tables/collections.
* Optionally add `graph_hash_schema_version` and `canonicalization_version` columns for debugging.
2. **Write path:**
* On graph creation/update:
* Build the domain model.
* Use `GraphRevisionCalculator` to get `graph_revision_id`.
* Store it alongside the graph.
3. **Read path & APIs:**
* Ensure all relevant APIs return `graph_revision_id` for clients.
* If you use it in attestation / DSSE payloads, include it there too.
---
## 4. Tooling, tests & crossplatform verification
This is where you make sure it **actually behaves identically** on all platforms and input variations.
### 4.1. Unit tests for `CanonJson`
Create a dedicated test project: `tests/StellaOps.CanonicalJson.Tests`.
**Test categories & examples:**
1. **Property ordering:**
* Input 1: `{"b":1,"a":2}`
* Input 2: `{"a":2,"b":1}`
* Assert: `ToCanonicalString` is identical + same hash.
2. **Whitespace variations:**
* Input with lots of spaces/newlines vs compact.
* Canonical outputs must match.
3. **Unicode normalization:**
* One string using precomposed characters.
* Same text using combining characters.
* Canonical output must match (NFC).
4. **Number formatting:**
* `1`, `1.0`, `1.0000000000` → must canonicalize to the same representation.
* `-0.0` → canonicalizes to `0`.
5. **Booleans & null:**
* Check exact lowercase output: `true`, `false`, `null`.
6. **Error behaviors:**
* Try serializing `double.NaN` → expect `CanonicalJsonException`.
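Categories 3, 4, and 6 above can be pinned down with a small helper pair that mirrors the rules from section 2.3 (same trimming and `-0` logic as the earlier helper; shown standalone here for testability):

```csharp
using System;
using System.Globalization;
using System.Text;

public static class CanonChecks
{
    // Unicode rule: precomposed "é" (U+00E9) and "e" + combining acute
    // (U+0301) must canonicalize to the same string after NFC.
    public static string Nfc(string s) => s.Normalize(NormalizationForm.FormC);

    // Number rule: invariant G17, trim trailing zeros, squash -0,
    // reject non-finite values.
    public static string FormatNumber(double d)
    {
        if (double.IsNaN(d) || double.IsInfinity(d))
            throw new InvalidOperationException("Non-finite numbers are not valid in canonical JSON.");
        if (d == 0) d = 0; // squash -0
        var s = d.ToString("G17", CultureInfo.InvariantCulture);
        return s.Contains('.') && !s.Contains('e') && !s.Contains('E')
            ? s.TrimEnd('0').TrimEnd('.')
            : s;
    }
}
```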
### 4.2. Integration tests for graph hashing
Create tests in graph service test project:
1. Build two graphs that are **semantically identical** but:
* Nodes/edges inserted in different order.
* Fields ordered differently.
* Different whitespace in strings (if your app might introduce such).
2. Assert:
* `CalculateRevisionId` yields the same result.
* Canonical DTOs match expected snapshots (optional snapshot tests).
3. Build graphs that differ in a meaningful way (e.g., extra edge).
* Assert that `graph_revision_id` is different.
### 4.3. Cross-platform smoke tests
**Goal:** Prove same hash on Windows, Linux and macOS.
Implementation idea:
1. Add a small console tool: `StellaOps.CanonicalJson.Tool`:
* Usage:
`stella-canon hash graph.json`
* Prints:
* Canonical JSON (optional flag).
* SHA-256 hex.
2. In CI:
* Run the same test JSON on:
* Windows runner.
* Linux runner.
* Assert hashes are equal (store expected in a test harness or artifact).
---
## 5. Integration into your pipelines & rollout
### 5.1. Where to compute `graph_revision_id`
Decide (and document) **one place** where the ID is authoritative, for example:
* After ingestion + normalization step, **before** persisting to your graph store.
* Or in a dedicated “graph revision service” used by ingestion pipelines.
Implementation:
* Update the ingestion service:
1. Parse incoming data into internal graph model.
2. Apply domain canonicalizer → `CanonicalGraphDto`.
3. Use `GraphRevisionCalculator` → `graph_revision_id`.
4. Persist graph + revision ID.
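Steps 1-4 above amount to a thin orchestration layer. A hypothetical sketch, with the revision calculator and graph store stood in by delegates (the real service would take `IGraphRevisionCalculator<TGraph>` and your store interface):

```csharp
using System;

public sealed class GraphIngestionService<TGraph>
{
    private readonly Func<TGraph, string> _calculateRevisionId; // calculator stand-in
    private readonly Action<TGraph, string> _persist;           // graph store stand-in

    public GraphIngestionService(Func<TGraph, string> calculateRevisionId,
                                 Action<TGraph, string> persist)
    {
        _calculateRevisionId = calculateRevisionId;
        _persist = persist;
    }

    // The graph arrives already parsed (step 1); canonicalize + hash happen
    // inside the calculator (steps 2-3); persist graph + id together (step 4).
    public string Ingest(TGraph graph)
    {
        var revisionId = _calculateRevisionId(graph);
        _persist(graph, revisionId);
        return revisionId;
    }
}
```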
### 5.2. Migration / backfill plan
If you already have graphs in production:
1. Add new columns/fields for `graph_revision_id` (nullable).
2. Write a migration job:
* Fetch existing graph.
* Canonicalize + hash.
* Store `graph_revision_id`.
3. For a transition period:
* Accept both “old” and “new” graphs.
* Use `graph_revision_id` where available; fall back to legacy IDs when necessary.
4. After backfill is complete:
* Make `graph_revision_id` mandatory for new graphs.
* Phase out any legacy revision logic.
### 5.3. Feature flag & safety
* Gate the use of `graph_revision_id` in high-risk flows (e.g., attestations, policy decisions) behind a **feature flag**:
* `graphRevisionIdEnabled`.
* Roll out gradually:
* Start in staging.
* Then a subset of production tenants.
* Monitor for:
* Unexpected changes in revision IDs on unchanged graphs.
* Errors from `CanonicalJsonException`.
---
## 6. Documentation for developers & operators
Have a short internal doc (or page) with:
1. **Canonical JSON spec summary:**
* Sorting rules.
* Unicode NFC requirement.
* Number format rules.
* Non-finite numbers not allowed.
2. **Graph hashing spec:**
* Fields included in the hash.
* Fields explicitly ignored.
* Array ordering rules for nodes/edges.
* Current:
* `graphHashSchemaVersion = "1"`
* `CanonicalizationVersion = "canon-json-v1"`
3. **Examples:**
* Sample graph JSON input.
* Canonical JSON output.
* Expected SHA256.
4. **Operational guidance:**
* How to run the CLI tool to debug:
* “Why did this graph get a new `graph_revision_id`?”
* What to do on canonicalization errors (usually indicates bad data).
---
If you'd like, the next step I can take is drafting the **actual C# projects and folder structure** (with file names + stub code) so your team can just copy/paste the skeleton into the repo and start filling in the domain-specific bits.
---
Here's a crisp, practical idea to harden StellaOps: make the SBOM → VEX pipeline **deterministic and verifiable** by treating it as a series of signed, hash-anchored state transitions, so every rebuild yields the *same* provenance envelope you can mathematically check across air-gapped nodes.
---
### What this means (plain English)
* **SBOM** (what's inside): list of packages, files, and their hashes.
* **VEX** (what's affected): statements like “CVE-2024-1234 is **not** exploitable here because X.”
* **Deterministic**: same inputs → byte-identical outputs, every time.
* **Verifiable transitions**: each step (ingest → normalize → resolve → reachability → VEX) emits a signed attestation that pins its inputs/outputs by content hash.
---
### Minimal design you can drop into StellaOps
1. **Canonicalize everything**
* Sort JSON keys, normalize whitespace/line endings.
* Freeze timestamps by recording them only in an outer envelope (not inside payloads used for hashing).
2. **Edge-level attestations**
* For each dependency edge in the reachability graph `(nodeA → nodeB via symbol S)`, emit a tiny DSSE payload:
* `{edge_id, from_purl, to_purl, rule_id, witness_hashes[]}`
* Hash is over the canonical payload; sign via DSSE (Sigstore or your Authority PKI).
3. **Step attestations (pipeline states)**
* For each stage (`Sbomer`, `Scanner`, `Vexer/Excititor`, `Concelier`):
* Emit `predicateType`: `stellaops.dev/attestations/<stage>`
* Include `input_digests[]`, `output_digests[]`, `parameters_digest`, `tool_version`
* Sign with stage key; record the public key (or cert chain) in Authority.
4. **Provenance envelope**
* Build a top-level DSSE that includes:
* Merkle root of **all** edge attestations.
* Merkle roots of each stages outputs.
* Mapping table of `PURL ↔ buildID (ELF/PE/MachO)` for stable identity.
5. **Replay manifest**
* A single, declarative file that pins:
* Feeds (CPE/CVE/VEX sources + exact digests)
* Rule/lattice versions and parameters
* Container image and layer SHA-256 digests
* Platform toggles (e.g., PQC on/off)
* Running **replay** on this manifest must reproduce the same Merkle roots.
6. **Air-gap sync**
* Export only the envelopes + Merkle roots + public certs.
* On the target, verify chains and recompute roots from the replay manifest—no internet required.
---
### Slim C# shapes (DTOs) for DSSE predicates
```csharp
public record EdgeAttestation(
string EdgeId,
string FromPurl,
string ToPurl,
string RuleId,
string[] WitnessHashes, // e.g., CFG slice, symbol tables, lineage JSON
string CanonicalAlgo = "SHA256");
public record StepAttestation(
string Stage, // "Sbomer" | "Scanner" | "Excititor" | "Concelier"
string ToolVersion,
string[] InputDigests,
string[] OutputDigests,
string ParametersDigest, // hash of canonicalized params
DateTimeOffset StartedAt,
DateTimeOffset FinishedAt);
public record ProvenanceEnvelope(
string ReplayManifestDigest,
string EdgeMerkleRoot,
Dictionary<string,string> StageMerkleRoots, // stage -> root
Dictionary<string,string> PurlToBuildId); // stable identity map
```
---
### Determinism checklist (quick win)
* Canonical JSON (stable key order) everywhere.
* No wall-clock timestamps inside hashed payloads.
* Only reference inputs by digest, never by URL.
* Lock rule sets / lattice policies by digest.
* Normalize file paths (POSIX style) and line endings.
* Container images by **digest**, not tags.
---
### Why its worth it
* **Auditability:** every VEX claim is backed by a verifiable graph path with signed edges.
* **Reproducibility:** regulators (and customers) can replay your exact scan and get identical roots.
* **Integrity at scale:** air-gapped sites can validate without trusting your network, just the math.
If you want, I'll turn this into ready-to-paste `.proto` contracts + a small .NET library (`StellaOps.Attestations`) with DSSE signing/verification helpers and Merkle builders.
Got it — let's turn that sketch into a concrete implementation plan your devs can actually execute.
I'll structure this as:
1. **Objectives & scope**
2. **High-level architecture**
3. **Workstreams & milestones**
4. **Detailed tasks per workstream**
5. **Rollout, testing, and ops**
You can copy/paste this straight into a tracking system and break it into tickets.
---
## 1. Objectives & scope
### Primary objectives
* Make the **SBOM → VEX pipeline deterministic**:
* Same inputs (SBOM, feeds, rules, images) → **bit-identical** provenance & VEX outputs.
* Make the pipeline **verifiable**:
* Each step emits **signed attestations** with content hashes.
* Attestations are **chainable** from raw SBOM to VEX & reports.
* Make outputs **replayable** and **air-gap friendly**:
* A single **Replay Manifest** can reconstruct pipeline outputs on another node and verify Merkle roots match.
### Out of scope (for this phase)
* New vulnerability scanning engines.
* New UI views (beyond minimal “show provenance / verify”).
* Key management redesign (we'll integrate with existing Authority / PKI).
---
## 2. High-level architecture
### New shared library
**Library name (example):** `StellaOps.Attestations` (or similar)
Provides:
* Canonical serialization:
* Deterministic JSON encoder (stable key ordering, normalized formatting).
* Hashing utilities:
* SHA-256 (and extension point for future algorithms).
* DSSE wrapper:
* `Sign(payload, keyRef)` → DSSE envelope.
* `Verify(dsse, keyResolver)` → payload + key metadata.
* Merkle utilities:
* Build Merkle trees from lists of digests.
* DTOs:
* `EdgeAttestation`, `StepAttestation`, `ProvenanceEnvelope`, `ReplayManifest`.
### Components that will integrate the library
* **Sbomer** outputs SBOM + StepAttestation.
* **Scanner** consumes SBOM, produces findings + StepAttestation.
* **Excititor / Vexer** takes findings + reachability graph → VEX + EdgeAttestations + StepAttestation.
* **Concelier** takes SBOM + VEX → reports + StepAttestation + ProvenanceEnvelope.
* **Authority** manages keys and verification (possibly separate microservice or shared module).
---
## 3. Workstreams & milestones
Break this into parallel workstreams:
1. **WS1 – Canonicalization & hashing**
2. **WS2 – DSSE & key integration**
3. **WS3 – Attestation schemas & Merkle envelopes**
4. **WS4 – Pipeline integration (Sbomer, Scanner, Excititor, Concelier)**
5. **WS5 – Replay engine & CLI**
6. **WS6 – Verification / air-gap support**
7. **WS7 – Testing, observability, and rollout**
Each workstream below has concrete tasks + “Definition of Done” (DoD).
---
## 4. Detailed tasks per workstream
### WS1 – Canonicalization & hashing
**Goal:** A small, well-tested core that makes everything deterministic.
#### Tasks
1. **Define canonical JSON format**
* Decision doc:
* Use UTF-8.
* No insignificant whitespace.
* Keys always sorted lexicographically.
* No embedded timestamps or non-deterministic fields inside hashed payloads.
* Implement:
* `CanonicalJsonSerializer.Serialize<T>(T value) : string/byte[]`.
2. **Define deterministic string normalization rules**
* Normalize line endings in any text: `\n` only.
* Normalize paths:
* Use POSIX style `/`.
* Remove trailing slashes (except root).
* Normalize numeric formatting:
* No scientific notation.
* Fixed decimal rules, if relevant.
3. **Implement hashing helper**
* `Digest` type:
```csharp
public record Digest(string Algorithm, string Value); // Algorithm = "SHA256"
```
* `Hashing.ComputeDigest(byte[] data) : Digest`.
* `Hashing.ComputeDigestCanonical<T>(T value) : Digest` (serialize canonically then hash).
4. **Add unit tests & golden files**
* Golden tests:
* Same input object → same canonical JSON & digest, regardless of property order, culture, runtime.
* Hash of JSON must match precomputed values (store `.golden` files in repo).
* Edge cases:
* Unicode strings.
* Nested objects.
* Arrays with different order (order preserved, but ensure same input → same output).
#### DoD
* Canonical serializer & hashing utilities available in `StellaOps.Attestations`.
* Test suite with >95% coverage for serializer + hashing.
* Simple CLI or test harness:
* `stella-attest dump-canonical <json>` → prints canonical JSON & digest.
---
### WS2 – DSSE & key integration
**Goal:** Standardize how we sign and verify attestations.
#### Tasks
1. **Select DSSE representation**
* Use JSON DSSE envelope:
```json
{
"payloadType": "stellaops.dev/attestation/edge@v1",
"payload": "<base64 of canonical JSON>",
"signatures": [{ "keyid": "...", "sig": "..." }]
}
```
2. **Implement DSSE API in library**
* Interfaces:
```csharp
public interface ISigner {
Task<Signature> SignAsync(byte[] payload, string keyRef);
}
public interface IVerifier {
Task<VerificationResult> VerifyAsync(Envelope envelope);
}
```
* Helpers:
* `Dsse.CreateEnvelope(payloadType, canonicalPayloadBytes, signer, keyRef)`.
* `Dsse.VerifyEnvelope(envelope, verifier)`.
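One detail worth pinning down early: per the DSSE specification, the signer does not sign the raw payload but its pre-authentication encoding (PAE), which binds the payload type to the bytes. A sketch:

```csharp
using System;
using System.Linq;
using System.Text;

public static class DssePae
{
    // DSSE v1 pre-authentication encoding:
    //   "DSSEv1" SP LEN(type) SP type SP LEN(payload) SP payload
    // These are the bytes handed to ISigner, not the raw payload.
    public static byte[] Encode(string payloadType, byte[] payload)
    {
        var typeBytes = Encoding.UTF8.GetBytes(payloadType);
        var header = Encoding.UTF8.GetBytes(
            $"DSSEv1 {typeBytes.Length} {payloadType} {payload.Length} ");
        return header.Concat(payload).ToArray();
    }
}
```

`Dsse.CreateEnvelope` would base64-encode the payload for the envelope but sign `Encode(payloadType, payloadBytes)`.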
3. **Integrate with Authority / PKI**
* Add `AuthoritySigner` / `AuthorityVerifier` implementations:
* `keyRef` is an ID understood by Authority (service name, stage name, or explicit key ID).
* Ensure we can:
* Request signing of arbitrary bytes.
* Resolve the public key used to sign.
4. **Key usage conventions**
* Define mapping:
* `sbomer` key.
* `scanner` key.
* `excititor` key.
* `concelier` key.
* Optional: use distinct keys per environment (dev/stage/prod) but **include environment** in attestation metadata.
5. **Tests**
* Round-trip: sign then verify sample payloads.
* Negative tests:
* Tampered payload → verification fails.
* Tampered signatures → verification fails.
#### DoD
* DSSE envelope creation/verification implemented and tested.
* Authority integration with mock/fake for unit tests.
* Documentation for developers:
* “How to emit an attestation: 5-line example.”
---
### WS3 – Attestation schemas & Merkle envelopes
**Goal:** Standardize the data models for all attestations and envelopes.
#### Tasks
1. **Define EdgeAttestation schema**
Fields (concrete draft):
```csharp
public record EdgeAttestation(
string EdgeId, // deterministic ID
string FromPurl, // e.g. pkg:maven/...
string ToPurl,
string? FromSymbol, // optional (symbol, API, entry point)
string? ToSymbol,
string RuleId, // which reachability rule fired
Digest[] WitnessDigests, // digests of evidence payloads
string CanonicalAlgo = "SHA256"
);
```
* `EdgeId` convention (document in ADR):
* E.g. `sha256(fromPurl + "→" + toPurl + "|" + ruleId + "|" + fromSymbol + "|" + toSymbol)` (before hashing, canonicalize strings).
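The `EdgeId` convention above could be implemented as follows. The separator choices and NFC step are the draft's suggestions, not a fixed standard; whatever the ADR settles on must be applied identically everywhere:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class EdgeIds
{
    // Deterministic EdgeId: NFC-normalize each part, join with fixed
    // separators ("→" between purls, "|" between fields), SHA-256, lowercase hex.
    public static string Compute(string fromPurl, string toPurl, string ruleId,
                                 string? fromSymbol = null, string? toSymbol = null)
    {
        static string N(string? s) => (s ?? "").Normalize(NormalizationForm.FormC);
        var material = $"{N(fromPurl)}\u2192{N(toPurl)}|{N(ruleId)}|{N(fromSymbol)}|{N(toSymbol)}";
        return Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(material)))
                      .ToLowerInvariant();
    }
}
```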
2. **Define StepAttestation schema**
```csharp
public record StepAttestation(
string Stage, // "Sbomer" | "Scanner" | ...
string ToolVersion,
Digest[] InputDigests, // SBOM digest, feed digests, image digests
Digest[] OutputDigests, // outputs of this stage
Digest ParametersDigest, // hash of canonicalized params (flags, rule sets, etc.)
DateTimeOffset StartedAt,
DateTimeOffset FinishedAt,
string Environment, // dev/stage/prod/airgap
string NodeId // machine or logical node name
);
```
* Note: `StartedAt` / `FinishedAt` are **not** included in any hashed payload used for determinism; they're OK as metadata but not part of Merkle roots.
3. **Define ProvenanceEnvelope schema**
```csharp
public record ProvenanceEnvelope(
Digest ReplayManifestDigest,
Digest EdgeMerkleRoot,
Dictionary<string, Digest> StageMerkleRoots, // stage -> root digest
Dictionary<string, string> PurlToBuildId // PURL -> build-id string
);
```
4. **Define ReplayManifest schema**
```csharp
public record ReplayManifest(
string PipelineVersion,
Digest SbomDigest,
Digest[] FeedDigests, // CVE, CPE, VEX sources
Digest[] RuleSetDigests, // reachability + policy rules
Digest[] ContainerImageDigests,
string[] PlatformToggles // e.g. ["pqc=on", "mode=strict"]
);
```
5. **Implement Merkle utilities**
* Provide:
* `Digest Merkle.BuildRoot(IEnumerable<Digest> leaves)`.
* Deterministic rules:
* Sort leaves by `Value` (digest hex string) before building.
* If odd number of leaves, duplicate last leaf or define explicit strategy and document it.
* Tie into:
* Edges → `EdgeMerkleRoot`.
* Per stage attestation list → stage-specific root.
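A sketch of `Merkle.BuildRoot` over hex digest strings, using the sort-leaves and duplicate-last-leaf strategy mentioned above (one possible choice; whichever rule you pick must be documented and never changed silently):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;

public static class Merkle
{
    // Sort leaves ordinally, then repeatedly hash adjacent pairs with
    // SHA-256, duplicating the last leaf at odd-sized levels.
    public static string BuildRoot(IEnumerable<string> leafHexDigests)
    {
        var level = leafHexDigests
            .OrderBy(d => d, StringComparer.Ordinal)
            .Select(Convert.FromHexString)
            .ToList();
        if (level.Count == 0) throw new InvalidOperationException("No leaves.");
        while (level.Count > 1)
        {
            if (level.Count % 2 == 1) level.Add(level[^1]); // duplicate last leaf
            var next = new List<byte[]>(level.Count / 2);
            for (int i = 0; i < level.Count; i += 2)
                next.Add(SHA256.HashData(level[i].Concat(level[i + 1]).ToArray()));
            level = next;
        }
        return Convert.ToHexString(level[0]).ToLowerInvariant();
    }
}
```

Because leaves are sorted first, the root is independent of the order in which attestations were produced.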
6. **Schema documentation**
* Markdown/ADR file:
* Field definitions.
* Which fields are hashed vs. metadata only.
* How `EdgeId`, Merkle roots, and PURL→BuildId mapping are generated.
#### DoD
* DTOs implemented in shared library.
* Merkle root builder implemented and tested.
* Schema documented and shared across teams.
---
### WS4 – Pipeline integration
**Goal:** Each stage emits StepAttestations and (for reachability) EdgeAttestations, and Concelier emits ProvenanceEnvelope.
Well do this stage by stage.
#### WS4.A – Sbomer integration
**Tasks**
1. Identify **SBOM hash**:
* After generating SBOM, serialize canonically and compute `Digest`.
2. Collect **inputs**:
* Input sources digests (e.g., image digests, source artifact digests).
3. Collect **parameters**:
* All relevant configuration into a `SbomerParams` object:
* E.g. `scanDepth`, `excludedPaths`, `sbomFormat`.
* Canonicalize and compute `ParametersDigest`.
4. Emit **StepAttestation**:
* Create DTO.
* Canonicalize & hash for Merkle tree use.
* Wrap in DSSE envelope with `payloadType = "stellaops.dev/attestation/step@v1"`.
* Store envelope:
* Append to standard location (e.g. `<artifact-root>/attestations/sbomer-step.dsse.json`).
5. Add config flag:
* `--emit-attestations` (default: off initially, later: on by default).
#### WS4.B – Scanner integration
**Tasks**
1. Take SBOM digest as an **InputDigest**.
2. Collect feed digests:
* Each CVE/CPE/VEX feed file → canonical hash.
3. Compute `ScannerParams` digest:
* E.g. `severityThreshold`, `downloaderOptions`, `scanMode`.
4. Emit **StepAttestation** (same pattern as Sbomer).
5. Tag scanner outputs:
* The vulnerability findings file(s) should be content-addressable (the filename includes the digest, or a meta manifest stores the mapping).
#### WS4.C – Excititor/Vexer integration
**Tasks**
1. Integrate reachability graph emission:
* From final graph, **generate EdgeAttestations**:
* One per edge `(from, to, rule)`.
* For each edge, compute witness digests:
* E.g. serialized CFG slice, symbol table snippet, call chain.
* Those witness artifacts should be stored under canonical paths:
* `<artifact-root>/witnesses/<edge-id>/<witness-type>.json`.
2. Canonicalize & hash each EdgeAttestation.
3. Build **Merkle root** over all edge attestation digests.
4. Emit **Excititor StepAttestation**:
* Inputs: SBOM, scanner findings, feeds, rule sets.
* Outputs: VEX document(s), EdgeMerkleRoot digest.
* Params: reachability flags, rule definitions digest.
5. Store:
* Edge attestations:
* Either:
* One DSSE per edge (possibly a lot of files).
* Or a **batch file** containing a list of attestations wrapped into a single DSSE.
* Prefer: **batch** for performance; define `EdgeAttestationBatch` DTO.
* VEX output(s) with deterministic file naming.
#### WS4.D – Concelier integration
**Tasks**
1. Gather all **StepAttestations** & **EdgeMerkleRoot**:
* Input: references (paths) to stage outputs + their DSSE envelopes.
2. Build `PurlToBuildId` map:
* For each component:
* Extract PURL from SBOM.
* Extract build-id from binary metadata.
3. Build **StageMerkleRoots**:
* For each stage, compute Merkle root of its StepAttestations.
* In simplest version: 1 step attestation per stage → root is just its digest.
4. Construct **ReplayManifest**:
* From final pipeline context (SBOM, feeds, rules, images, toggles).
* Compute `ReplayManifestDigest` and store manifest file (e.g. `replay-manifest.json`).
5. Construct **ProvenanceEnvelope**:
* Fill fields with digests.
* Canonicalize and sign with Concelier key (DSSE).
6. Store outputs:
* `provenance-envelope.dsse.json`.
* `replay-manifest.json` (unsigned) + optional signed manifest.
#### WS4 DoD
* All four stages can:
* Emit StepAttestations (and EdgeAttestations where applicable).
* Produce a final ProvenanceEnvelope.
* Feature can be toggled via config.
* Pipelines run endtoend in CI with attestation emission enabled.
---
### WS5 – Replay engine & CLI
**Goal:** Given a ReplayManifest, rerun the pipeline and verify that all Merkle roots and digests match.
#### Tasks
1. Implement a **Replay Orchestrator** library:
* Input:
* Path/URL to `replay-manifest.json`.
* Responsibilities:
* Verify manifests own digest (if signed).
* Fetch or confirm presence of:
* SBOM.
* Feeds.
* Rule sets.
* Container images.
* Spin up each stage with parameters reconstructed from the manifest:
* Ensure versions and flags match.
* Implementation: shared orchestration code reusing existing pipeline entrypoints.
2. Implement **CLI tool**: `stella-attest replay`
* Commands:
* `stella-attest replay run --manifest <path> --out <dir>`.
* Runs pipeline and emits fresh attestations.
* `stella-attest replay verify --manifest <path> --envelope <path> --attest-dir <dir>`:
* Compares:
* Replay Merkle roots vs. `ProvenanceEnvelope`.
* Stage roots.
* Edge root.
* Emits a verification report (JSON + human-readable).
3. Verification logic:
* Steps:
1. Parse ProvenanceEnvelope (verify DSSE signature).
2. Compute Merkle roots from the new replay's attestations.
3. Compare:
* `ReplayManifestDigest` in envelope vs digest of manifest used.
* `EdgeMerkleRoot` vs recalculated root.
* `StageMerkleRoots[stage]` vs recalculated stage roots.
4. Output:
* `verified = true/false`.
* If false, list mismatches with digests.
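The comparison step (3) and report output (4) might be sketched like this; DSSE signature verification and root recomputation (steps 1-2) are assumed to have happened before `Compare` is called, and the type names are illustrative:

```csharp
using System;
using System.Collections.Generic;

public sealed record VerificationReport(bool Verified, IReadOnlyList<string> Mismatches);

public static class EnvelopeVerifier
{
    // Compare the roots recorded in the ProvenanceEnvelope against the
    // roots recalculated from the fresh replay; list every mismatch.
    public static VerificationReport Compare(
        string envelopeEdgeRoot, string recalculatedEdgeRoot,
        IReadOnlyDictionary<string, string> envelopeStageRoots,
        IReadOnlyDictionary<string, string> recalculatedStageRoots)
    {
        var mismatches = new List<string>();
        if (envelopeEdgeRoot != recalculatedEdgeRoot)
            mismatches.Add($"edge root: {envelopeEdgeRoot} != {recalculatedEdgeRoot}");
        foreach (var (stage, expected) in envelopeStageRoots)
        {
            recalculatedStageRoots.TryGetValue(stage, out var actual);
            if (actual != expected)
                mismatches.Add($"stage {stage}: {expected} != {actual ?? "<missing>"}");
        }
        return new VerificationReport(mismatches.Count == 0, mismatches);
    }
}
```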
4. Tests:
* Replay the same pipeline on same machine → must match.
* Replay on different machine (CI job simulating different environment) → must match.
* Injected change in feed or rule set → deliberate mismatch detected.
#### DoD
* `stella-attest replay` works locally and in CI.
* Documentation: “How to replay a run and verify determinism.”
---
### WS6 – Verification / air-gap support
**Goal:** Allow verification in environments without outward network access.
#### Tasks
1. **Define export bundle format**
* Bundle includes:
* `provenance-envelope.dsse.json`.
* `replay-manifest.json`.
* All DSSE attestation files.
* All witness artifacts (or digests only if storage is local).
* Public key material or certificate chains needed to verify signatures.
* Represent as:
* Tarball or zip: e.g. `stella-bundle-<pipeline-id>.tar.gz`.
* Manifest file listing contents and digests.
2. **Implement exporter**
* CLI: `stella-attest export --run-id <id> --out bundle.tar.gz`.
* Internally:
* Collect paths to all relevant artifacts for the run.
* Canonicalize folder structure (e.g. `/sbom`, `/scanner`, `/vex`, `/attestations`, `/witnesses`).
3. **Implement offline verifier**
* CLI: `stella-attest verify-bundle --bundle <path>`.
* Steps:
* Unpack bundle to temp dir.
* Verify:
* Attestation signatures via included public keys.
* Merkle roots and digests as in WS5.
* Do **not** attempt network calls.
4. **Documentation / runbook**
* “How to verify a Stella Ops run in an air-gapped environment.”
* Include:
* How to move bundles (e.g. via USB, secure file transfer).
* What to do if verification fails.
#### DoD
* Bundles can be exported from a connected environment and verified in a disconnected environment using only the bundle contents.
---
### WS7 – Testing, observability, and rollout
**Goal:** Make this robust, observable, and gradually enable in prod.
#### Tasks
1. **Integration tests**
* Full pipeline scenario:
* Start from known SBOM + feeds + rules.
* Run pipeline twice and:
* Compare final outputs: `ProvenanceEnvelope`, VEX doc, final reports.
* Compare digests & Merkle roots.
* Edge cases:
* Different machines (simulate via CI jobs with different runners).
* Missing or corrupted attestation file → verify that verification fails with clear error.
2. **Property-based tests** (optional but great)
* Generate random but structured SBOMs and graphs.
* Ensure:
* Canonicalization is idempotent.
* Hashing is consistent.
* Merkle roots are stable for repeated runs.
3. **Observability**
* Add logging around:
* Attestation creation & signing.
* Verification failures.
* Replay runs.
* Add metrics:
* Number of attestations per run.
* Time spent in canonicalization / hashing / signing.
* Verification success/fail counts.
4. **Rollout plan**
1. **Phase 0 (dev only)**:
* Attestation emission enabled by default in dev.
* Verification run in CI only.
2. **Phase 1 (staging)**:
* Enable dual-path:
* Old behaviour + new attestations.
* Run replay+verify in staging pipeline.
3. **Phase 2 (production, non-enforced)**:
* Enable attestation emission in prod.
* Verification runs “sidecar” but does not block.
4. **Phase 3 (production, enforced)**:
* CI/CD gates:
* Fails if:
* Signatures invalid.
* Merkle roots mismatch.
* Envelope/manifest missing.
5. **Documentation**
* Developer docs:
* “How to emit a StepAttestation from your service.”
* “How to add new fields without breaking determinism.”
* Operator docs:
* “How to run replay & verification.”
* “How to interpret failures and debug.”
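The property-based checks in task 2 could be hand-rolled as below (a generator library such as FsCheck could feed these with random structured inputs). `Canonicalizer.Canonicalize` is a placeholder name for your canonical JSON serializer:

```csharp
using System.Security.Cryptography;
using System.Text;

// Hand-rolled property checks for determinism. `Canonicalizer` is hypothetical;
// substitute your own canonicalization entry point.
public static class DeterminismProperties
{
    public static void AssertCanonicalizationIdempotent(string json)
    {
        var once  = Canonicalizer.Canonicalize(json);
        var twice = Canonicalizer.Canonicalize(once);
        if (once != twice)
            throw new InvalidOperationException("canonicalization is not idempotent");
    }

    public static void AssertHashStable(string json)
    {
        byte[] h1 = SHA256.HashData(Encoding.UTF8.GetBytes(Canonicalizer.Canonicalize(json)));
        byte[] h2 = SHA256.HashData(Encoding.UTF8.GetBytes(Canonicalizer.Canonicalize(json)));
        if (!h1.AsSpan().SequenceEqual(h2))
            throw new InvalidOperationException("hashing is not stable across runs");
    }
}
```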
#### DoD
* All new functionality covered by automated tests.
* Observability dashboards / alerts configured.
* Rollout phases defined with clear criteria for moving to the next phase.
---
## 5. How to turn this into tickets
You can break this down roughly like:
* **Epic 1:** Attestation core library (WS1 + WS2 + WS3).
* **Epic 2:** Stage integrations (WS4 A–D).
* **Epic 3:** Replay & verification tooling (WS5 + WS6).
* **Epic 4:** Testing, observability, rollout (WS7).
If you want, next step I can:
* Turn each epic into **Jira-style stories** with acceptance criteria.
* Or produce **sample code stubs** (interfaces + minimal implementations) matching this plan.

---
I'm sharing this because it closely aligns with your strategy for building strong supply-chain and attestation moats — these are emerging standards you'll want to embed into your architecture now.
### DSSE + in-toto: The event spine
* The Dead Simple Signing Envelope (DSSE) spec defines a minimal JSON envelope for signing arbitrary data — “transparent transport for signed statements”. ([GitHub][1])
* The in-toto Attestation model builds on DSSE as the envelope, with a statement + predicate about the artifact (e.g., build/cohort metadata). ([Legit Security][2])
* In your architecture: using DSSE-signed in-toto attestations across Scanner → Sbomer → Vexer → Scorer → Attestor gives you a unified “event spine” of provenance and attestations.
* That means every step emits a verifiable signed statement that links the tooling involved — which helps achieve deterministic replayability and audit integrity.
### CycloneDX v1.7: SBOM + cryptography assurance
* Version 1.7 of CycloneDX was released October 21, 2025 and introduces **advanced cryptography, data-provenance transparency, and IP visibility** for the software supply chain. ([CycloneDX][3])
* It introduces a “Cryptography Registry” to standardize naming / classification of crypto algorithms in BOMs — relevant for PQC readiness, global cryptographic standards like GOST/SM, etc. ([CycloneDX][4])
* If you emit SBOMs in CycloneDX v1.7 format (and include CBOM/crypto details), you're aligning with modern supply-chain trust expectations — satisfying your moat #1 (crypto-sovereign readiness) and #2 (deterministic manifests).
### Sigstore Rekor v2: Logging the provenance chain
* Rekor v2 reached GA on October 10, 2025; the redesign introduces a “tile-backed transparency log implementation” to simplify ops and reduce costs. ([Sigstore Blog][5])
* Rekor supports auditing of signing events, monitors to verify append-only consistency, and log inclusion proofs. ([Sigstore][6])
* By bundling your provenance/SBOM/VEX/scores and recording those in Rekor v2, you're closing your chain of custody with immutable log entries — supports your “Proof-of-Integrity Graph” moat (point #4).
### Why this matters for your architecture
* With each scan or stage (Scanner → Sbomer → Vexer → Scorer → Attestor) producing a DSSE-signed in-toto statement, you have a canonical spine of events.
* Emitting SBOMs in CycloneDX v1.7 ensures you not only list components but crypto metadata, attestation pointers, and versions ready for future-proofing.
* Recording all artifacts (attestations, SBOM, VEX, scores) into Rekor v2 gives you external public verifiability and auditability — minimal trust surface, maximal transparency.
* These standards map directly to several of your moats: crypto-sovereign readiness, deterministic replayable scans, provenance graphs, trust-ledger.
If you like, I can pull together **mappings** of your internal modules (Scanner, Sbomer, Vexer, etc) to these standards and provide a **reference implementation skeleton** in .NET 10 (you indicated you're working with that).
[1]: https://github.com/secure-systems-lab/dsse?utm_source=chatgpt.com "secure-systems-lab/dsse - Dead Simple Signing Envelope"
[2]: https://www.legitsecurity.com/blog/slsa-provenance-blog-series-part-1-what-is-software-attestation?utm_source=chatgpt.com "SLSA Provenance Blog Series, Part 1: What Is Software ..."
[3]: https://cyclonedx.org/news/cyclonedx-v1.7-released/?utm_source=chatgpt.com "CycloneDX v1.7 Delivers Advanced Cryptography, ..."
[4]: https://cyclonedx.org/registry/cryptography/?utm_source=chatgpt.com "Cryptography Registry"
[5]: https://blog.sigstore.dev/rekor-v2-ga/?utm_source=chatgpt.com "Rekor v2 GA - Cheaper to run, simpler to maintain"
[6]: https://docs.sigstore.dev/logging/overview/?utm_source=chatgpt.com "Rekor"
Got it — let's turn your vision into something devs can actually build against.
Below is a **concrete implementation plan** you can paste into an internal doc / ticketing system and refine into epics & stories.
---
## 0. Assumptions & Target End-State
**Assumptions**
* Services: `Scanner → Sbomer → Vexer → Scorer → Attestor` (plus shared infra).
* Language: .NET (8/10) for your services.
* You want:
* **DSSE-signed in-toto attestations** as the event “spine”. ([GitHub][1])
* **CycloneDX 1.7 SBOM + VEX** for inventory + exploitability. ([CycloneDX][2])
* **Rekor v2** as the transparency log, with Sigstore bundles for offline verification. ([Sigstore Blog][3])
**Target picture**
For every artifact *A* (image / binary / model):
1. Each stage emits a **DSSE-signed in-toto attestation**:
* Scanner → scan predicate
* Sbomer → CycloneDX 1.7 SBOM predicate
* Vexer → VEX predicate
* Scorer → score predicate
* Attestor → final decision predicate
2. Each attestation is:
* Signed with your keys or Sigstore keyless.
* Logged to Rekor (v2) and optionally packaged into a Sigstore bundle.
3. A consumer can:
* Fetch all attestations for *A*, verify signatures + Rekor proofs, read SBOM/VEX, and understand the score.
The rest of this plan is: **how to get there step-by-step.**
---
## 1. Core Data Contracts (Must Be Done First)
### 1.1 Define the canonical envelope and statement
**Standards to follow**
* **DSSE Envelope** from secure-systems-lab (`envelope.proto`). ([GitHub][1])
* **In-toto Attestation “Statement”** model (subject + predicateType + predicate). ([SLSA][4])
**Deliverable: internal spec**
Create a short internal spec (Markdown) for developers:
* `ArtifactIdentity`
* `algorithm`: `sha256` | `sha512` | etc.
* `digest`: hex string.
* Optional: `name`, `version`, `buildPipelineId`.
* `InTotoStatement<TPredicate>`
* `_type`: fixed: `https://in-toto.io/Statement/v1`
* `subject`: list of `ArtifactIdentity`.
* `predicateType`: string (URL-ish).
* `predicate`: generic JSON (stage-specific payload).
* `DsseEnvelope`
* `payloadType`: e.g. `application/vnd.in-toto+json`
* `payload`: base64 of the JSON `InTotoStatement`.
* `signatures[]`: `{ keyid, sig }`.
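For orientation, a serialized envelope would look roughly like this (values are placeholders, not real material; per the DSSE spec the signature is computed over the PAE encoding of `payloadType` and `payload`, not over the raw payload):

```json
{
  "payloadType": "application/vnd.in-toto+json",
  "payload": "<base64 of the InTotoStatement JSON>",
  "signatures": [
    { "keyid": "kms-signing-key-1", "sig": "<base64 signature over PAE(payloadType, payload)>" }
  ]
}
```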
### 1.2 Implement the .NET representation
**Tasks**
1. **Generate DSSE envelope types**
* Use `envelope.proto` from DSSE repo and generate C# types; or reuse the Grafeas `Envelope` class which is explicitly aligned with DSSE. ([Google Cloud][5])
* Project: `Attestations.Core`.
2. **Define generic Statement & Predicate types**
In `Attestations.Core`:
```csharp
public record ArtifactIdentity(string Algorithm, string Digest, string? Name = null, string? Version = null);
public record InTotoStatement<TPredicate>(
string _Type,   // maps to the JSON "_type" field when serialized
IReadOnlyList<ArtifactIdentity> Subject,
string PredicateType,
TPredicate Predicate
);
public record DsseSignature(string KeyId, byte[] Sig);
public record DsseEnvelope(
string PayloadType,
byte[] Payload,
IReadOnlyList<DsseSignature> Signatures
);
```
3. **Define predicate contracts for each stage**
Example:
```csharp
public static class PredicateTypes
{
public const string ScanV1 = "https://example.com/attestations/scan/v1";
public const string SbomV1 = "https://example.com/attestations/sbom/cyclonedx-1.7";
public const string VexV1 = "https://example.com/attestations/vex/cyclonedx";
public const string ScoreV1 = "https://example.com/attestations/score/v1";
public const string VerdictV1= "https://example.com/attestations/verdict/v1";
}
```
Then define concrete predicates:
* `ScanPredicateV1`
* `SbomPredicateV1` (likely mostly a pointer to a CycloneDX doc)
* `VexPredicateV1` (pointer to VEX doc + summary)
* `ScorePredicateV1`
* `VerdictPredicateV1` (attest/deny + reasoning)
**Definition of done**
* All services share a single `Attestations.Core` library.
* There is a test that serializes + deserializes `InTotoStatement` and `DsseEnvelope` and matches the JSON format expected by intoto tooling.
---
## 2. Signing & Key Management Layer
### 2.1 Abstraction: decouple from crypto choice
Create an internal package: `Attestations.Signing`.
```csharp
public interface IArtifactSigner
{
Task<DsseEnvelope> SignStatementAsync<TPredicate>(
InTotoStatement<TPredicate> statement,
CancellationToken ct = default);
}
public interface IArtifactVerifier
{
Task VerifyAsync(DsseEnvelope envelope, CancellationToken ct = default);
}
```
Backends to implement:
1. **KMS-backed signer** (e.g., AWS KMS, GCP KMS, Azure Key Vault).
2. **Sigstore keyless / cosign integration**:
* For now you can wrap the **cosign CLI**, which already understands intoto attestations and Rekor. ([Sigstore][6])
* Later, replace with a native HTTP client against Sigstore services.
### 2.2 Key & algorithm strategy
* Default: **ECDSA P-256** or **Ed25519** keys, stored in KMS.
* Wrap all usage via `IArtifactSigner`/`IArtifactVerifier`.
* Keep room for **PQC migration** by never letting services call crypto APIs directly; only use the abstraction.
**Definition of done**
* CLI or small test harness that:
* Creates a dummy `InTotoStatement`,
* Signs it via `IArtifactSigner`,
* Verifies via `IArtifactVerifier`,
* Fails verification if payload is tampered.
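The harness described in the DoD could be sketched like this, using the interfaces above. The predicate type URL is an example value, and the exact exception a verifier throws on tampering is implementation-defined:

```csharp
// Sign a dummy statement, verify it, then flip one payload byte and expect
// verification to fail.
public static async Task RunSigningHarnessAsync(IArtifactSigner signer, IArtifactVerifier verifier)
{
    var statement = new InTotoStatement<object>(
        "https://in-toto.io/Statement/v1",
        new[] { new ArtifactIdentity("sha256", new string('0', 64)) },
        "https://example.com/attestations/test/v1",
        new { hello = "world" });

    var envelope = await signer.SignStatementAsync(statement);
    await verifier.VerifyAsync(envelope); // must succeed

    // Tamper: flip one payload byte; the signature must no longer verify.
    var payload = (byte[])envelope.Payload.Clone();
    payload[0] ^= 0xFF;
    var tampered = envelope with { Payload = payload };

    bool rejected = false;
    try { await verifier.VerifyAsync(tampered); }
    catch { rejected = true; } // exact exception type depends on the verifier implementation
    if (!rejected) throw new InvalidOperationException("tampered envelope verified — harness failed");
}
```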
---
## 3. Service-by-Service Integration
For each component we'll define **inputs → behavior → attestation output**.
### 3.1 Scanner
**Goal**
For each artifact, emit a **scan attestation** with normalized findings.
**Tasks**
1. Extend Scanner to normalize findings to a canonical model:
* Vulnerability id (CVE / GHSA / etc).
* Affected package (`purl`, version).
* Severity, source (NVD, OSV, etc).
2. Define `ScanPredicateV1`:
```csharp
public record ScanPredicateV1(
string ScannerName,
string ScannerVersion,
DateTimeOffset ScanTime,
string ScanConfigurationId,
IReadOnlyList<ScanFinding> Findings
);
```
3. After each scan completes:
* Build `ArtifactIdentity` from the artifact digest.
* Build `InTotoStatement<ScanPredicateV1>` with `PredicateTypes.ScanV1`.
* Call `IArtifactSigner.SignStatementAsync`.
* Save `DsseEnvelope` to an **Attestation Store** (see section 4).
* Publish an event `scan.attestation.created` on your message bus with the attestation id.
**Definition of done**
* Every scan results in a stored DSSE envelope with `ScanV1` predicate.
* A consumer service can query by artifact digest and get all scan attestations.
---
### 3.2 Sbomer (CycloneDX 1.7)
**Goal**
Generate **CycloneDX 1.7 SBOMs** and attest to them.
CycloneDX provides a .NET library and tools for producing and consuming SBOMs. ([GitHub][7])
CycloneDX 1.7 adds a cryptography registry, data-provenance and IP transparency. ([CycloneDX][2])
**Tasks**
1. Add CycloneDX .NET library
* NuGet: `CycloneDX.Core` (and optional `CycloneDX.Utils`). ([NuGet][8])
2. SBOM generation process
* Input: artifact digest + build metadata (e.g., manifest, lock file).
* Generate a **CycloneDX 1.7 SBOM**:
* Fill `metadata.component`, `bomRef`, and dependency graph.
* Include crypto material using the **Cryptography Registry** (algorithms, key sizes, modes) when relevant. ([CycloneDX][9])
* Include data provenance (tool name/version, timestamp).
3. Storage
* Store SBOM documents (JSON) in object storage: `sboms/{artifactDigest}/cyclonedx-1.7.json`.
* Index them in the Attestation DB (see section 4).
4. `SbomPredicateV1`
```csharp
public record SbomPredicateV1(
string Format, // "CycloneDX"
string Version, // "1.7"
Uri Location, // URL to the SBOM blob
string? HashAlgorithm,
string? HashDigest // hash of the SBOM document itself
);
```
5. After SBOM generation:
* Create statement with `PredicateTypes.SbomV1`.
* Sign via `IArtifactSigner`.
* Store DSSE envelope + publish `sbom.attestation.created`.
**Definition of done**
* For any scanned artifact, you can fetch:
* A CycloneDX 1.7 SBOM, and
* A DSSE-signed in-toto SBOM attestation pointing to it.
---
### 3.3 Vexer (CycloneDX VEX / CSAF)
**Goal**
Turn “raw vulnerability findings” into **VEX documents** that say whether each vulnerability is exploitable, using CycloneDX VEX representation. ([CycloneDX][10])
**Tasks**
1. Model VEX status mapping
* Example statuses: `affected`, `not_affected`, `fixed`, `under_investigation`.
* Derive rules from:
* Reachability analysis, config, feature usage.
* Business logic (e.g., vulnerability only affects optional module not shipped).
2. Generate VEX docs
* Use the same CycloneDX .NET library to emit **CycloneDX VEX** documents.
* Store them: `vex/{artifactDigest}/cyclonedx-vex.json`.
3. `VexPredicateV1`
```csharp
public record VexPredicateV1(
string Format, // "CycloneDX-VEX"
string Version,
Uri Location,
string? HashAlgorithm,
string? HashDigest,
int TotalVulnerabilities,
int ExploitableVulnerabilities
);
```
4. After VEX generation:
* Build statement with `PredicateTypes.VexV1`.
* Sign, store, publish `vex.attestation.created`.
**Definition of done**
* For an artifact with scan results, there is a VEX doc and attestation that:
* Marks each vulnerability with exploitability status.
* Can be consumed by `Scorer` to prioritize risk.
---
### 3.4 Scorer
**Goal**
Compute a **trust/risk score** based on SBOM + VEX + other signals, and attest to it.
**Tasks**
1. Scoring model v1
* Inputs:
* Count of exploitable vulns by severity.
* Presence/absence of required attestations (scan, sbom, vex).
* Age of last scan.
* Output:
* `RiskScore` (0–100 or letter grade).
* `RiskTier` (“low”, “medium”, “high”).
* Reasons (top 3 contributors).
2. `ScorePredicateV1`
```csharp
public record ScorePredicateV1(
double Score,
string Tier,
DateTimeOffset CalculatedAt,
IReadOnlyList<string> Reasons
);
```
3. When triggered (new VEX or SBOM):
* Recompute score for the artifact.
* Create attestation, sign, store, publish `score.attestation.created`.
**Definition of done**
* A consumer can call “/artifacts/{digest}/score” and:
* Verify the DSSE envelope,
* Read a deterministic `ScorePredicateV1`.
---
### 3.5 Attestor (Final Verdict + Rekor integration)
**Goal**
Emit the **final verdict attestation** and push evidences to Rekor / Sigstore bundle.
**Tasks**
1. `VerdictPredicateV1`
```csharp
public record VerdictPredicateV1(
string Decision, // "allow" | "deny" | "quarantine"
string PolicyVersion,
DateTimeOffset DecidedAt,
IReadOnlyList<string> Reasons,
string? RequestedBy,
string? Environment // "prod", "staging", etc.
);
```
2. Policy evaluation:
* Input: all attestations for artifact (scan, sbom, vex, score).
* Apply policy (e.g., “no critical exploitable vulns”, “score ≥ 70”).
* Produce `allow` / `deny`.
3. Rekor integration (v2ready)
* Rekor provides an HTTP API and CLI for recording signed metadata. ([Sigstore][11])
* Rekor v2 uses a modern tile-backed log for better cost/ops (you don't need details, just that the API remains similar). ([Sigstore Blog][3])
**Implementation options:**
* **Option A: CLI wrapper**
* Use `rekor-cli` via a sidecar container.
* Call `rekor-cli upload` with the DSSE payload or Sigstore bundle.
* **Option B: Native HTTP client**
* Generate client from Rekor OpenAPI in .NET.
* Implement:
```csharp
public interface IRekorClient
{
Task<RekorEntryRef> UploadDsseAsync(DsseEnvelope envelope, CancellationToken ct);
}
public record RekorEntryRef(
string Uuid,
long LogIndex,
byte[] SignedEntryTimestamp);
```
4. Sigstore bundle support
* A **Sigstore bundle** packages:
* Verification material (cert, Rekor SET, timestamps),
* Signature content (DSSE envelope). ([Sigstore][12])
* You can:
* Store bundles alongside DSSE envelopes: `bundles/{artifactDigest}/{stage}.json`.
* Expose them in an API for offline verification.
5. After producing final verdict:
* Sign verdict statement.
* Upload verdict attestation (and optionally previous key attestations) to Rekor.
* Store Rekor entry ref (`uuid`, `index`, `SET`) in DB.
* Publish `verdict.attestation.created`.
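The policy evaluation in task 2 could be sketched as below. It only sees the VEX summary counts and the score; the “score ≥ 70” threshold and the decision strings mirror the examples in this section, and a real implementation would also check severity per finding:

```csharp
// Minimal policy evaluation sketch using the predicate records defined earlier.
public static VerdictPredicateV1 EvaluatePolicy(
    VexPredicateV1 vex, ScorePredicateV1 score, string policyVersion, string environment)
{
    var reasons = new List<string>();
    if (vex.ExploitableVulnerabilities > 0)
        reasons.Add($"{vex.ExploitableVulnerabilities} exploitable vulnerabilities present");
    if (score.Score < 70)
        reasons.Add($"risk score {score.Score} below threshold 70");

    return new VerdictPredicateV1(
        Decision: reasons.Count == 0 ? "allow" : "deny",
        PolicyVersion: policyVersion,
        DecidedAt: DateTimeOffset.UtcNow,
        Reasons: reasons,
        RequestedBy: null,
        Environment: environment);
}
```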
**Definition of done**
* For a given artifact, you can:
* Retrieve a verdict DSSE envelope.
* Verify its signature and Rekor inclusion.
* Optionally retrieve a Sigstore bundle for fully offline verification.
---
## 4. Attestation Store & Data Model
Create an **“Attestation Service”** that all others depend on for reading/writing.
### 4.1 Database schema (simplified)
Relational schema example:
* `artifacts`
* `id` (PK)
* `algorithm`
* `digest`
* `name`
* `version`
* `attestations`
* `id` (PK)
* `artifact_id` (FK)
* `stage` (`scan`, `sbom`, `vex`, `score`, `verdict`)
* `predicate_type`
* `dsse_envelope_json`
* `created_at`
* `signer_key_id`
* `rekor_entries`
* `id` (PK)
* `attestation_id` (FK)
* `uuid`
* `log_index`
* `signed_entry_timestamp` (bytea)
* `sboms`
* `id`
* `artifact_id`
* `format` (CycloneDX)
* `version` (1.7)
* `location`
* `hash_algorithm`
* `hash_digest`
* `vex_documents`
* `id`
* `artifact_id`
* `format`
* `version`
* `location`
* `hash_algorithm`
* `hash_digest`
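In PostgreSQL terms, the first three tables could be sketched as follows (types and constraints are illustrative; `sboms` and `vex_documents` follow the same pattern):

```sql
CREATE TABLE artifacts (
    id         BIGSERIAL PRIMARY KEY,
    algorithm  TEXT NOT NULL,
    digest     TEXT NOT NULL,
    name       TEXT,
    version    TEXT,
    UNIQUE (algorithm, digest)
);

CREATE TABLE attestations (
    id                 BIGSERIAL PRIMARY KEY,
    artifact_id        BIGINT NOT NULL REFERENCES artifacts(id),
    stage              TEXT NOT NULL CHECK (stage IN ('scan','sbom','vex','score','verdict')),
    predicate_type     TEXT NOT NULL,
    dsse_envelope_json JSONB NOT NULL,
    created_at         TIMESTAMPTZ NOT NULL DEFAULT now(),
    signer_key_id      TEXT
);

CREATE TABLE rekor_entries (
    id                     BIGSERIAL PRIMARY KEY,
    attestation_id         BIGINT NOT NULL REFERENCES attestations(id),
    uuid                   TEXT NOT NULL,
    log_index              BIGINT,
    signed_entry_timestamp BYTEA
);
```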
### 4.2 Attestation Service API
Provide a REST/gRPC API:
* `GET /artifacts/{algo}:{digest}/attestations`
* `GET /attestations/{id}`
* `GET /artifacts/{algo}:{digest}/sbom`
* `GET /artifacts/{algo}:{digest}/vex`
* `GET /artifacts/{algo}:{digest}/score`
* `GET /artifacts/{algo}:{digest}/bundle` (optional, Sigstore bundle)
**Definition of done**
* All other services call Attestation Service instead of touching the DB directly.
* You can fetch the full “attestation chain” for a given artifact from one place.
---
## 5. Observability & QA
### 5.1 Metrics
For each service:
* `attestations_emitted_total{stage}`
* `attestation_sign_errors_total{stage}`
* `rekor_upload_errors_total`
* `attestation_verification_failures_total`
### 5.2 Tests
1. **Contract tests**
* JSON produced for `InTotoStatement` and `DsseEnvelope` is validated by:
* in-toto reference tooling.
* DSSE reference implementations. ([GitHub][1])
2. **Endtoend flow**
* Seed a mini pipeline with a test artifact:
* Build → Scan → SBOM → VEX → Score → Verdict.
* Use an external verifier (e.g., cosign, in-toto attestation verifier) to:
* Verify DSSE signatures.
* Verify Rekor entries and/or Sigstore bundles. ([Sigstore][6])
3. **Failure scenarios**
* Corrupt payload (verification must fail).
* Missing VEX (policy should deny or fall back to stricter rules).
* Rekor offline (system should continue but mark entries as “not logged”).
---
## 6. Phased Rollout Plan (HighLevel)
You can translate this into epics:
1. **Epic 1 Core Attestation Platform**
* Implement `Attestations.Core` & `Attestations.Signing`.
* Implement Attestation Service + DB schema.
* Build small CLI / test harness.
2. **Epic 2 Scanner Integration**
* Normalize findings.
* Emit scan attestations only (no SBOM/VEX yet).
3. **Epic 3 CycloneDX SBOMs**
* Integrate CycloneDX .NET library.
* Generate 1.7 SBOMs for each artifact.
* Emit SBOM attestations.
4. **Epic 4 VEXer**
* Implement VEX derivation logic + CycloneDX VEX docs.
* Emit VEX attestations.
5. **Epic 5 Scorer & Policy**
* Implement scoring model v1.
* Implement policy engine.
* Emit Score + Verdict attestations.
6. **Epic 6 Rekor & Bundles**
* Stand up Rekor (or integrate with public instance).
* Implement Rekor client and Sigstore bundle support.
* Wire Attestor to log final (and optionally intermediate) attestations.
7. **Epic 7 UX & Docs**
* Build UI (or CLI) to visualize:
* Artifact → SBOM → VEX → Score → Verdict.
* Document how other teams integrate (what events to listen to, which APIs to call).
---
If you'd like, I can next:
* Turn this into **Jira-style epics & stories** with acceptance criteria; or
* Draft the actual **C# interfaces** and a project structure (`src/Attestations.Core`, `src/Attestations.Signing`, services, etc.).
[1]: https://github.com/secure-systems-lab/dsse?utm_source=chatgpt.com "secure-systems-lab/dsse - Dead Simple Signing Envelope"
[2]: https://cyclonedx.org/news/cyclonedx-v1.7-released/?utm_source=chatgpt.com "CycloneDX v1.7 Delivers Advanced Cryptography, ..."
[3]: https://blog.sigstore.dev/rekor-v2-ga/?utm_source=chatgpt.com "Rekor v2 GA - Cheaper to run, simpler to maintain"
[4]: https://slsa.dev/blog/2023/05/in-toto-and-slsa?utm_source=chatgpt.com "in-toto and SLSA"
[5]: https://cloud.google.com/dotnet/docs/reference/Grafeas.V1/latest/Grafeas.V1.Envelope?utm_source=chatgpt.com "Grafeas v1 API - Class Envelope (3.10.0) | .NET client library"
[6]: https://docs.sigstore.dev/cosign/verifying/attestation/?utm_source=chatgpt.com "In-Toto Attestations"
[7]: https://github.com/CycloneDX/cyclonedx-dotnet-library?utm_source=chatgpt.com "NET library to consume and produce CycloneDX Software ..."
[8]: https://www.nuget.org/packages/CycloneDX.Core/?utm_source=chatgpt.com "CycloneDX.Core 10.0.1"
[9]: https://cyclonedx.org/registry/cryptography/?utm_source=chatgpt.com "Cryptography Registry"
[10]: https://cyclonedx.org/capabilities/vex/?utm_source=chatgpt.com "Vulnerability Exploitability eXchange (VEX)"
[11]: https://docs.sigstore.dev/logging/overview/?utm_source=chatgpt.com "Rekor"
[12]: https://docs.sigstore.dev/about/bundle/?utm_source=chatgpt.com "Sigstore Bundle Format"

---
Here's a quick sizing rule of thumb for Sigstore attestations so you don't hit Rekor limits.
* **Base64 bloat:** DSSE wraps your JSON statement and then Base64-encodes it. Base64 turns every 3 bytes into 4, so size ≈ `ceil(P/3)*4` (about **+33–37%** on top of your raw JSON). ([Stack Overflow][1])
* **DSSE envelope fields:** Expect a small extra overhead for JSON keys like `payloadType`, `payload`, and `signatures` (and the signature itself). Sigstore's bundle/DSSE examples show the structure used. ([Sigstore][2])
* **Public Rekor cap:** The **public Rekor instance rejects uploads over 100KB**. If your DSSE (after Base64 + JSON fields) exceeds that, shard/split the attestation or run your own Rekor. ([GitHub][3])
* **Reality check:** Teams routinely run into size errors when large statements are uploaded—the whole DSSE payload is sent to Rekor during verification/ingest. ([GitHub][4])
### Practical guidance
* Keep a **single attestation well under ~70–80KB raw JSON** if it will be wrapped and Base64'd (gives headroom for signatures/keys).
* Prefer **compact JSON** (no whitespace), **short key names**, and **avoid huge embedded fields** (e.g., trim SBOM evidence or link it by digest/URI).
* For big evidence sets, publish **multiple attestations** (logical shards) or **self-host Rekor**. ([GitHub][3])
If you want, I can add a tiny calculator snippet that takes your payload bytes and estimates the final DSSE+Base64 size vs. the 100KB limit.
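Here is a version of that calculator. The ~600-byte envelope overhead (JSON keys, keyid, one signature) is an assumed average — measure your own envelopes and adjust:

```csharp
// Rough estimator for final DSSE size vs. the public Rekor 100KB cap.
public static class DsseSizeEstimator
{
    private const int EnvelopeOverheadBytes = 600;       // assumption: JSON keys + one signature
    public  const int PublicRekorLimitBytes = 100 * 1024;

    public static int EstimateEnvelopeBytes(int rawPayloadBytes)
    {
        int base64Bytes = ((rawPayloadBytes + 2) / 3) * 4; // 4 output bytes per 3 input bytes
        return base64Bytes + EnvelopeOverheadBytes;
    }

    public static bool FitsPublicRekor(int rawPayloadBytes) =>
        EstimateEnvelopeBytes(rawPayloadBytes) <= PublicRekorLimitBytes;
}
```

For example, a 70KB raw statement estimates to roughly 94KB after Base64 and envelope overhead — inside the cap, but with little margin.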
[1]: https://stackoverflow.com/questions/4715415/base64-what-is-the-worst-possible-increase-in-space-usage?utm_source=chatgpt.com "Base64: What is the worst possible increase in space usage?"
[2]: https://docs.sigstore.dev/about/bundle/?utm_source=chatgpt.com "Sigstore Bundle Format"
[3]: https://github.com/sigstore/rekor?utm_source=chatgpt.com "sigstore/rekor: Software Supply Chain Transparency Log"
[4]: https://github.com/sigstore/cosign/issues/3599?utm_source=chatgpt.com "Attestations require uploading entire payload to rekor #3599"

---
Here's a quick, practical heads-up on publishing attestations to Sigstore/Rekor without pain, plus a drop-in pattern you can adapt today.
---
## Why this matters (plain English)
* **Rekor** is a public transparency log for your build proofs.
* **DSSE attestations** (e.g., in-toto, SLSA) are uploaded **in full**—not streamed—so big blobs hit **payload limits** and fail.
* Thousands of tiny attestations also hurt you: **API overhead, retries, and throttling** skyrocket.
The sweet spot: **chunk your evidence sensibly**, keep each DSSE envelope small enough for Rekor, and add **retry + resume** so partial batches don't nuke your whole publish step.
---
## Design rules of thumb
* **Target envelope size:** keep each DSSE (Base64-encoded) comfortably **<1–2MB** (tunable per your CI and Rekor instance limits; the public Rekor instance caps uploads far lower, at 100KB).
* **Shard by artifact + section:** e.g., split SBOMs by package namespace, split provenance by step/log segments, split test evidence by suite.
* **Stable chunking keys:** deterministic chunk IDs (e.g., `artifactDigest + section + seqNo`) so retries can **idempotently** republish.
* **Batch with backoff:** publish N envelopes, exponential backoff on 429/5xx, **resume from last success**.
* **Record mapping:** keep a **local index**: `chunkId → rekorUUID`, so you can later reconstruct the full evidence set.
* **Verify before delete:** only discard local chunk files **after** Rekor inclusion proof is verified.
* **Observability:** metrics for envelopes/s, bytes/s, retry count, and final inclusion rate.
---
## Minimal workflow (pseudo)
1. **Produce evidence** and split it into chunks
2. **Wrap each chunk in DSSE** (sign once per chunk)
3. **Publish to Rekor** with retry + idempotency
4. **Store rekor UUID + inclusion proof**
5. **Emit a manifest** that lists all chunk IDs for downstream recomposition
---
## C# sketch (fits .NET 10 style)
```csharp
public sealed record ChunkRef(string Artifact, string Section, int Part, string ChunkId);
public sealed record PublishResult(ChunkRef Ref, string RekorUuid, string InclusionHash);
public interface IChunker {
  IEnumerable<(ChunkRef Ref, ReadOnlyMemory<byte> Payload)> Split(ArtifactEvidence evidence, int targetBytes);
}
public interface IDsseSigner {
  // Returns serialized DSSE envelope (JSON) ready to upload.
  // ReadOnlyMemory (not Span) so payloads can flow through async code.
  byte[] Sign(ReadOnlyMemory<byte> payload, string payloadType);
}
public interface IRekorClient {
  // Idempotent publish: returns existing UUID if duplicate body digest
  Task<(string uuid, string inclusionHash)> UploadAsync(ReadOnlyMemory<byte> dsseEnvelope, CancellationToken ct);
}
public sealed class Publisher {
  private readonly IChunker _chunker;
  private readonly IDsseSigner _signer;
  private readonly IRekorClient _rekor;
  private readonly ICheckpointStore _store; // chunkId -> (uuid, inclusionHash)
  public Publisher(IChunker c, IDsseSigner s, IRekorClient r, ICheckpointStore st) =>
    (_chunker, _signer, _rekor, _store) = (c, s, r, st);
  public async IAsyncEnumerable<PublishResult> PublishAsync(
    ArtifactEvidence ev, int targetBytes, string payloadType,
    [System.Runtime.CompilerServices.EnumeratorCancellation] CancellationToken ct = default)
  {
    foreach (var (refInfo, chunk) in _chunker.Split(ev, targetBytes)) {
      if (_store.TryGet(refInfo.ChunkId, out var cached)) {
        yield return new PublishResult(refInfo, cached.uuid, cached.inclusionHash);
        continue;
      }
      var envelope = _signer.Sign(chunk, payloadType);
      // Retry with jitter/backoff. C# forbids `yield return` inside a try block
      // that has a catch clause, so capture the result and yield outside the try.
      var delay = TimeSpan.FromMilliseconds(200);
      for (int attempt = 1; ; attempt++) {
        (string uuid, string inclusionHash)? result = null;
        try {
          result = await _rekor.UploadAsync(envelope, ct);
        } catch (TransientHttpException) when (attempt < 6) {
          await Task.Delay(delay + TimeSpan.FromMilliseconds(Random.Shared.Next(0, 250)), ct);
          delay = TimeSpan.FromMilliseconds(Math.Min(delay.TotalMilliseconds * 2, 5000));
        }
        if (result is { } r) {
          _store.Put(refInfo.ChunkId, r.uuid, r.inclusionHash);
          yield return new PublishResult(refInfo, r.uuid, r.inclusionHash);
          break;
        }
      }
    }
  }
}
```
**Notes:**
* Implement `IChunker` so splits are **deterministic** (e.g., package groups of an SBOM or line-bounded log slices).
* Make `IRekorClient.UploadAsync` **idempotent** by hashing the DSSE envelope and using Rekor's response on duplicates.
* `ICheckpointStore` can be a local SQLite/JSON file in CI artifacts; export it with your build.
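A naive JSON-file checkpoint store is adequate for a CI job that exports the file as a build artifact (it is not safe for concurrent writers — swap in SQLite if needed). The `ICheckpointStore` shape below matches how the `Publisher` sketch uses it:

```csharp
using System.Text.Json;

public interface ICheckpointStore {
  bool TryGet(string chunkId, out (string uuid, string inclusionHash) entry);
  void Put(string chunkId, string uuid, string inclusionHash);
}

public sealed class JsonFileCheckpointStore : ICheckpointStore {
  private readonly string _path;
  private readonly Dictionary<string, string[]> _map;

  public JsonFileCheckpointStore(string path) {
    _path = path;
    _map = File.Exists(path)
      ? JsonSerializer.Deserialize<Dictionary<string, string[]>>(File.ReadAllText(path)) ?? new()
      : new();
  }

  public bool TryGet(string chunkId, out (string uuid, string inclusionHash) entry) {
    if (_map.TryGetValue(chunkId, out var v)) { entry = (v[0], v[1]); return true; }
    entry = default;
    return false;
  }

  public void Put(string chunkId, string uuid, string inclusionHash) {
    _map[chunkId] = new[] { uuid, inclusionHash };
    File.WriteAllText(_path, JsonSerializer.Serialize(_map)); // write-through on every put
  }
}
```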
---
## What to chunk (practical presets)
* **SBOM (CycloneDX/SPDX):** per dependency namespace/layer; keep each file ~300–800KB before DSSE.
* **Provenance (in-toto/SLSA):** one DSSE per build step or per 10–50KB of logs/evidence.
* **Test proofs:** group per suite; avoid single mega-sized JUnit JSONs.
---
## “Done” checklist
* [ ] Envelopes consistently under your Rekor size ceiling (leave 30–40% headroom).
* [ ] Idempotent retry with resume (no duplicate spam).
* [ ] Local index mapping `chunkId → rekorUUID` stored in CI artifacts.
* [ ] Inclusion proofs verified and archived.
* [ ] A recomposition manifest that lists all chunk IDs for auditors.
If you want, I can tailor this to StellaOps (naming, namespaces, and your Rekor mirror strategy) and drop in a ready-to-compile module for your `.NET 10` solution.
Cool, let's turn that sketch into something your devs can actually pick up and build.
I'll lay this out like an implementation guide: architecture, project layout, per-component specs, config, and a suggested rollout plan.
---
## 1. Objectives & constraints
**Primary goals**
* Publish DSSE attestations into Rekor:
* Avoid size limits (chunking).
* Avoid throttling (batching & retry).
* Ensure idempotency & resumability.
* Keep it **framework-agnostic** inside `.NET 10` (can run in any CI).
* Make verification/auditing easy (manifest + inclusion proofs).
**Nonfunctional**
* Deterministic behavior: same inputs → same chunk IDs & envelopes.
* Observable: metrics and logs for troubleshooting.
* Testable: clear seams/interfaces for mocking Rekor & signing.
---
## 2. High-level architecture
Core pipeline (per build / artifact):
1. **Evidence input** — you pass in provenance/SBOM/test data as `ArtifactEvidence`.
2. **Chunker** splits oversized evidence into multiple chunks with stable IDs.
3. **DSSE Signer** wraps each chunk in a DSSE envelope.
4. **Rekor client** publishes envelopes to the Rekor log with retry/backoff.
5. **Checkpoint store** remembers which chunks were already published.
6. **Manifest builder** emits a manifest mapping artifact all Rekor entries.
Text diagram:
```text
[ArtifactEvidence]
|
v
IChunker ---> [ChunkRef + Payload] x N
|
v
IDsseSigner ---> [DSSE Envelope] x N
|
v
IRekorClient (with retry & backoff)
|
v
ICheckpointStore <--> ManifestBuilder
|
v
[attestations_manifest.json] + inclusion proofs
```
---
## 3. Project & namespace layout
Example solution layout:
```text
src/
SupplyChain.Attestations.Core/
Chunking/
Signing/
Publishing/
Models/
Manifest/
SupplyChain.Attestations.Rekor/
RekorClient/
Models/
SupplyChain.Attestations.Cli/
Program.cs
Commands/ # e.g., publish-attestations
tests/
SupplyChain.Attestations.Core.Tests/
SupplyChain.Attestations.Rekor.Tests/
SupplyChain.Attestations.IntegrationTests/
```
You can of course rename to match your org.
---
## 4. Data models & contracts
### 4.1 Core domain models
```csharp
public sealed record ArtifactEvidence(
string ArtifactId, // e.g., image digest, package id, etc.
string ArtifactType, // "container-image", "nuget-package", ...
string ArtifactDigest, // canonical digest (sha256:...)
IReadOnlyList<EvidenceBlob> EvidenceBlobs // SBOM, provenance, tests, etc.
);
public sealed record EvidenceBlob(
string Section, // "sbom", "provenance", "tests", "logs"
string ContentType, // "application/json", "text/plain"
ReadOnlyMemory<byte> Content
);
public sealed record ChunkRef(
string ArtifactId,
string Section, // from EvidenceBlob.Section
int Part, // 0-based index
string ChunkId // stable identifier
);
```
**ChunkId generation rule (deterministic):**
```csharp
// Pseudo:
ChunkId = Base64Url( SHA256( $"{ArtifactDigest}|{Section}|{Part}" ) )
```
Store both `ChunkRef` and hashes in the manifest so it's reproducible.
### 4.2 Rekor publication result
```csharp
public sealed record PublishResult(
ChunkRef Ref,
string RekorUuid,
string InclusionHash, // hash used for inclusion proof
string LogIndex // optional, if returned by Rekor
);
```
### 4.3 Manifest format
A single build emits `attestations_manifest.json`:
```jsonc
{
"schemaVersion": "1.0",
"buildId": "build-2025-11-27T12:34:56Z",
"artifact": {
"id": "my-app@sha256:abcd...",
"type": "container-image",
"digest": "sha256:abcd..."
},
"chunks": [
{
"chunkId": "aBcD123...",
"section": "sbom",
"part": 0,
"rekorUuid": "1234-5678-...",
"inclusionHash": "deadbeef...",
"logIndex": "42"
}
]
}
```
Define a C# model mirroring this and serialize with `System.Text.Json`.
---
## 5. Component-level design
### 5.1 Chunker
**Interface**
```csharp
public sealed record ChunkingOptions(
    int TargetMaxBytes, // e.g., 800_000 bytes pre-DSSE
    int HardMaxBytes    // e.g., 1_000_000 bytes pre-DSSE
);
public interface IChunker
{
IEnumerable<(ChunkRef Ref, ReadOnlyMemory<byte> Payload)> Split(
ArtifactEvidence evidence,
ChunkingOptions options
);
}
```
**Behavior**
* For each `EvidenceBlob`:
  * If `Content.Length <= TargetMaxBytes` → 1 chunk.
  * Else:
    * Split on **logical boundaries** if possible:
      * SBOM JSON: split by package list segments.
      * Logs: split by line boundaries.
      * Tests: split by test suite / file.
    * If not easily splittable (opaque binary), hard-chunk by byte window.
* Ensure **each chunk** respects `HardMaxBytes`.
* Generate `ChunkRef.Part` sequentially (0, 1, 2, …) per `(ArtifactId, Section)`.
* Generate `ChunkId` with the deterministic rule above.
**Implementation plan**
* Start with a **simple hard-byte chunker**:
  * Always split at `TargetMaxBytes` boundaries.
* Add optional **format-aware chunkers**:
  * `SbomChunkerDecorator` detects JSON SBOM structure and splits on package groups.
  * `LogChunkerDecorator` splits on lines.
* Use the decorator pattern or strategy pattern, all implementing `IChunker`.
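A minimal sketch of that hard-byte chunker (the chunk-ID rule from section 4.1 is inlined so the class is self-contained; byte windows never exceed `TargetMaxBytes`, so `HardMaxBytes` is implicitly respected):

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;

public sealed class ByteWindowChunker : IChunker
{
    public IEnumerable<(ChunkRef Ref, ReadOnlyMemory<byte> Payload)> Split(
        ArtifactEvidence evidence, ChunkingOptions options)
    {
        foreach (var blob in evidence.EvidenceBlobs)
        {
            // One chunk per TargetMaxBytes window, parts numbered 0,1,2,...
            for (int part = 0, offset = 0;
                 offset < blob.Content.Length;
                 part++, offset += options.TargetMaxBytes)
            {
                var length = Math.Min(options.TargetMaxBytes, blob.Content.Length - offset);
                yield return (
                    new ChunkRef(evidence.ArtifactId, blob.Section, part,
                                 ChunkId(evidence.ArtifactDigest, blob.Section, part)),
                    blob.Content.Slice(offset, length));
            }
        }
    }

    // Deterministic ID: Base64Url(SHA256($"{digest}|{section}|{part}")).
    private static string ChunkId(string digest, string section, int part)
    {
        var hash = SHA256.HashData(Encoding.UTF8.GetBytes($"{digest}|{section}|{part}"));
        return Convert.ToBase64String(hash).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    }
}
```

Format-aware decorators can wrap this class and fall back to it for opaque payloads.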
---
### 5.2 DSSE signer
We abstract away how keys are managed.
**Interface**
```csharp
public interface IDsseSigner
{
// payload: raw bytes of the evidence chunk
// payloadType: DSSE payloadType string, e.g. "application/vnd.in-toto+json"
byte[] Sign(ReadOnlySpan<byte> payload, string payloadType);
}
```
**Responsibilities**
* Create DSSE envelope:
  * `payloadType`: from config (per section or global).
  * `payload`: base64url of the chunk.
  * `signatures`: one or more signatures (key ID + signature bytes).
* Serialize to **JSON** as UTF-8 `byte[]`.
**Implementation plan**
* Implement `KeyBasedDsseSigner`:
* Uses a configured private key (e.g., from a KMS, HSM, or file).
* Accept an `IDsseCryptoProvider` dependency for the actual signature primitive (RSA/ECDSA/Ed25519).
* Keep space for future `KeylessDsseSigner` (Sigstore Fulcio/OIDC), but not required for v1.
**Config mapping**
* `payloadType` default: `"application/vnd.in-toto+json"`.
* Allow overrides per section: e.g., SBOM vs test logs.
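A sketch of the serialization half (note: per the DSSE spec the signature is computed over the PAE of `payloadType` and the payload, not the raw chunk, so the signature bytes are assumed to be produced elsewhere by the crypto provider):

```csharp
using System;
using System.Text.Json;

public static class DsseEnvelopes
{
    // Serializes a DSSE envelope to compact UTF-8 JSON.
    // The signature must already cover the DSSE PAE of (payloadType, payload).
    public static byte[] Serialize(
        ReadOnlySpan<byte> chunkPayload, string payloadType,
        byte[] signature, string keyId)
    {
        var envelope = new
        {
            payloadType,
            payload = Convert.ToBase64String(chunkPayload),
            signatures = new[]
            {
                new { keyid = keyId, sig = Convert.ToBase64String(signature) }
            }
        };
        return JsonSerializer.SerializeToUtf8Bytes(envelope);
    }
}
```

`KeyBasedDsseSigner` would call this after obtaining the signature from the crypto provider.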
---
### 5.3 Rekor client
**Interface**
```csharp
public interface IRekorClient
{
Task<(string Uuid, string InclusionHash, string? LogIndex)> UploadAsync(
ReadOnlySpan<byte> dsseEnvelope,
CancellationToken ct = default
);
}
```
**Responsibilities**
* Wrap HTTP client to Rekor:
* Build the proper Rekor entry for DSSE (log entry with DSSE envelope).
* Send HTTP POST to Rekor API.
* Parse UUID and inclusion information.
* Handle **duplicate entries**:
* If Rekor responds “entry already exists”, return the existing UUID instead of failing.
* Surface **clear exceptions**:
* `TransientHttpException` (for retryable 429/5xx).
* `PermanentHttpException` (4xx like 400/413).
**Implementation plan**
* Implement `RekorClient` using `HttpClientFactory`.
* Add config:
* `BaseUrl` (e.g., your Rekor instance).
* `TimeoutSeconds`.
* `MaxRequestBodyBytes` (for safety).
**Retry classification**
* Retry on:
* 429 (Too Many Requests).
* 5xx (server errors).
* Network timeouts / transient socket errors.
* No retry on:
* 4xx (except 408 if you want).
* 413 Payload Too Large (signals a chunking issue).
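As a sketch, the classification above can live in one helper (exception type names follow this guide's own `TransientHttpException` / `PermanentHttpException`; 408 handling is omitted):

```csharp
using System;

public sealed class TransientHttpException : Exception
{
    public TransientHttpException(int statusCode)
        : base($"Retryable Rekor error: HTTP {statusCode}") { }
}

public sealed class PermanentHttpException : Exception
{
    public PermanentHttpException(int statusCode)
        : base($"Non-retryable Rekor error: HTTP {statusCode}") { }
}

public static class RekorErrorClassifier
{
    // Returns null for success; otherwise the exception the client should throw.
    public static Exception? Classify(int statusCode)
    {
        if (statusCode is >= 200 and < 300) return null;
        if (statusCode == 429 || statusCode >= 500)
            return new TransientHttpException(statusCode);
        return new PermanentHttpException(statusCode); // includes 413 => chunking bug
    }
}
```

Network-level failures (`HttpRequestException`, timeouts) should be mapped to `TransientHttpException` at the call site.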
---
### 5.4 Checkpoint store
Used to allow **resume** and **idempotency**.
**Interface**
```csharp
public sealed record CheckpointEntry(
string ChunkId,
string RekorUuid,
string InclusionHash,
string? LogIndex
);
public interface ICheckpointStore
{
bool TryGet(string chunkId, out CheckpointEntry entry);
void Put(CheckpointEntry entry);
void Flush(); // to persist to disk or remote store
}
```
**Implementation plan (v1)**
* Use a simple **file-based JSON** store per build:
* Path derived from build ID: e.g., `.attestations/checkpoints.json`.
* Internal representation: `Dictionary<string, CheckpointEntry>`.
* At end of run, `Flush()` writes out the file.
* On start of run, if file exists:
  * Load existing checkpoints → support resume.
**Future options**
* Plug in a distributed store (an `ICheckpointStore` implementation backed by Redis, SQL, etc.) for multi-stage pipelines.
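A minimal file-backed sketch (single writer per build assumed; no file locking):

```csharp
using System.Collections.Generic;
using System.IO;
using System.Text.Json;

public sealed class FileCheckpointStore : ICheckpointStore
{
    private readonly string _path;
    private readonly Dictionary<string, CheckpointEntry> _entries;

    public FileCheckpointStore(string path)
    {
        _path = path;
        // Resume support: load any checkpoints left by a previous run.
        _entries = File.Exists(path)
            ? JsonSerializer.Deserialize<Dictionary<string, CheckpointEntry>>(
                  File.ReadAllText(path)) ?? new()
            : new();
    }

    public bool TryGet(string chunkId, out CheckpointEntry entry) =>
        _entries.TryGetValue(chunkId, out entry);

    public void Put(CheckpointEntry entry) => _entries[entry.ChunkId] = entry;

    public void Flush()
    {
        var dir = Path.GetDirectoryName(_path);
        if (!string.IsNullOrEmpty(dir)) Directory.CreateDirectory(dir);
        File.WriteAllText(_path, JsonSerializer.Serialize(_entries));
    }
}
```

Calling `Flush()` per chunk instead of per run trades a little I/O for resilience against mid-run crashes.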
---
### 5.5 Publisher / Orchestrator
Use a slightly enhanced version of what we sketched before.
**Interface**
```csharp
public sealed record AttestationPublisherOptions(
int TargetChunkBytes,
int HardChunkBytes,
string PayloadType,
int MaxAttempts,
TimeSpan InitialBackoff,
TimeSpan MaxBackoff
);
public sealed class AttestationPublisher
{
public AttestationPublisher(
IChunker chunker,
IDsseSigner signer,
IRekorClient rekor,
ICheckpointStore checkpointStore,
ILogger<AttestationPublisher> logger,
AttestationPublisherOptions options
) { ... }
public async IAsyncEnumerable<PublishResult> PublishAsync(
ArtifactEvidence evidence,
[System.Runtime.CompilerServices.EnumeratorCancellation] CancellationToken ct = default
);
}
```
**Algorithm**
For each `(ChunkRef, Payload)` from `IChunker.Split`:
1. Check `ICheckpointStore.TryGet(ChunkId)`:
   * If found → yield cached `PublishResult` (idempotency).
2. Build DSSE envelope via `_signer.Sign(payload, options.PayloadType)`.
3. Retry loop:
   * Try `_rekor.UploadAsync(envelope, ct)`.
   * On success:
     * Create `CheckpointEntry`, store via `_checkpointStore.Put`.
     * Yield `PublishResult`.
   * On `TransientHttpException`:
     * If attempts ≥ `MaxAttempts` → surface as failure.
     * Else → exponential backoff with jitter and repeat.
   * On `PermanentHttpException`:
     * Log error and surface (no retry).
At the end of the run, call `_checkpointStore.Flush()`.
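The backoff step could use capped exponential growth with full jitter, for example:

```csharp
using System;

public static class Backoff
{
    // attempt is 1-based; full jitter: pick uniformly in
    // [0, min(max, initial * 2^(attempt-1))].
    public static TimeSpan Next(int attempt, TimeSpan initial, TimeSpan max)
    {
        var exp = initial.TotalMilliseconds * Math.Pow(2, attempt - 1);
        var capped = Math.Min(exp, max.TotalMilliseconds);
        return TimeSpan.FromMilliseconds(Random.Shared.NextDouble() * capped);
    }
}
```

Full jitter keeps many CI jobs from retrying in lockstep after a shared 429/5xx burst.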
---
### 5.6 Manifest builder
**Responsibility**
Turn a set of `PublishResult` items into one manifest JSON.
**Interface**
```csharp
public interface IManifestBuilder
{
AttestationManifest Build(
ArtifactEvidence artifact,
IReadOnlyCollection<PublishResult> results,
string buildId,
DateTimeOffset publishedAtUtc
);
}
public interface IManifestWriter
{
Task WriteAsync(AttestationManifest manifest, string path, CancellationToken ct = default);
}
```
**Implementation plan**
* `JsonManifestBuilder`: pure mapping from models to manifest DTO.
* `FileSystemManifestWriter`: writes to a configurable path (e.g., `artifacts/attestations_manifest.json`).
---
## 6. Configuration & wiring
### 6.1 Options class
```csharp
public sealed class AttestationConfig
{
public string RekorBaseUrl { get; init; } = "";
public int RekorTimeoutSeconds { get; init; } = 30;
public int TargetChunkBytes { get; init; } = 800_000;
public int HardChunkBytes { get; init; } = 1_000_000;
public string DefaultPayloadType { get; init; } = "application/vnd.in-toto+json";
public int MaxAttempts { get; init; } = 5;
public int InitialBackoffMs { get; init; } = 200;
public int MaxBackoffMs { get; init; } = 5000;
public string CheckpointFilePath { get; init; } = ".attestations/checkpoints.json";
public string ManifestOutputPath { get; init; } = "attestations_manifest.json";
}
```
### 6.2 Example `appsettings.json` for CLI
```json
{
"Attestation": {
"RekorBaseUrl": "https://rekor.example.com",
"TargetChunkBytes": 800000,
"HardChunkBytes": 1000000,
"DefaultPayloadType": "application/vnd.in-toto+json",
"MaxAttempts": 5,
"InitialBackoffMs": 200,
"MaxBackoffMs": 5000,
"CheckpointFilePath": ".attestations/checkpoints.json",
"ManifestOutputPath": "attestations_manifest.json"
}
}
```
Wire via `IOptions<AttestationConfig>` in your DI container.
---
## 7. Observability & logging
### 7.1 Metrics (suggested)
Expose via your monitoring stack (Prometheus, App Insights, etc.):
* `attestations_chunks_total` labeled by `section`, `artifact_type`.
* `attestations_rekor_publish_success_total` labeled by `section`.
* `attestations_rekor_publish_failure_total` labeled by `section`, `failure_type` (4xx, 5xx, client_error).
* `attestations_rekor_latency_seconds` histogram.
* `attestations_chunk_size_bytes` histogram.
### 7.2 Logging
Log at **INFO**:
* Start/end of attestation publishing for each artifact.
* Number of chunks per section.
* Rekor UUID info (non-sensitive, OK to log).
Log at **DEBUG**:
* Exact Rekor request payload sizes.
* Retry attempts and backoff durations.
Log at **WARN/ERROR**:
* 4xx errors.
* Exhausted retries.
Include correlation IDs (build ID, artifact digest, chunk ID) in structured logs.
---
## 8. Testing strategy
### 8.1 Unit tests
* `ChunkerTests`
  * Small payload → 1 chunk.
  * Large payload → multiple chunks with no overlap and full coverage.
  * Deterministic `ChunkId` generation (same input → same IDs).
* `DsseSignerTests`
  * Given a fixed key and payload → DSSE envelope matches golden snapshot.
* `RekorClientTests`
* Mock `HttpMessageHandler`:
* 200 OK -> parse UUID, inclusion hash.
* 409 / “already exists” -> treat as success.
* 429 & 5xx -> throw `TransientHttpException`.
* 4xx -> throw `PermanentHttpException`.
* `CheckpointStoreTests`
* Put/TryGet behavior.
* Flush and reload from disk.
### 8.2 Integration tests
Against a **local or staging Rekor**:
* Publish single small attestation.
* Publish large SBOM that must be chunked.
* Simulate transient failure: first request 500, then 200; verify retry.
* Restart the test mid-flow, rerun; ensure already-published chunks are skipped.
### 8.3 E2E in CI
* For a test project:
* Build → produce dummy SBOM/provenance.
* Run CLI to publish attestations.
* Archive:
* `attestations_manifest.json`.
* `checkpoints.json`.
* Optional: run a verification script that:
* Reads manifest.
* Queries Rekor for each UUID and validates inclusion.
---
## 9. CI integration (example)
Example GitHub Actions step (adapt as needed):
```yaml
- name: Publish attestations
run: |
    dotnet SupplyChain.Attestations.Cli.dll publish \
--artifact-id "${{ env.IMAGE_DIGEST }}" \
--artifact-type "container-image" \
--sbom "build/sbom.json" \
--provenance "build/provenance.json" \
--tests "build/test-results.json" \
--config "attestation.appsettings.json"
env:
ATTESTATION_SIGNING_KEY: ${{ secrets.ATTESTATION_SIGNING_KEY }}
```
The CLI command should:
1. Construct `ArtifactEvidence` from the input files.
2. Use DI to build `AttestationPublisher` and dependencies.
3. Stream results, build manifest, write outputs.
4. Exit non-zero if any chunk fails to publish.
---
## 10. Implementation roadmap (dev-oriented)
You can translate this into epics/stories; here's a logical order:
**Epic 1: Core models & chunking**
* Story 1: Define `ArtifactEvidence`, `EvidenceBlob`, `ChunkRef`, `PublishResult`.
* Story 2: Implement `IChunker` with a simple byte-based splitter.
* Story 3: Deterministic `ChunkId` generation + tests.
**Epic 2: Signing & DSSE envelopes**
* Story 4: Implement `IDsseSigner` + `KeyBasedDsseSigner`.
* Story 5: DSSE envelope serialization tests (golden snapshots).
* Story 6: Wire in an abstract crypto provider so you can swap key sources later.
**Epic 3: Rekor client**
* Story 7: Implement `IRekorClient` using `HttpClient`.
* Story 8: Error classification & `TransientHttpException` / `PermanentHttpException`.
* Story 9: Integration tests with staging/local Rekor.
**Epic 4: Publisher, checkpoints, manifest**
* Story 10: Implement `ICheckpointStore` (file-based JSON).
* Story 11: Implement `AttestationPublisher` with retry/backoff.
* Story 12: Implement `IManifestBuilder` + `IManifestWriter`.
* Story 13: Create manifest schema and sample.
**Epic 5: CLI & CI integration**
* Story 14: Implement CLI `publish` command.
* Story 15: Wire config (appsettings + env overrides).
* Story 16: Add CI job template + docs for teams.
**Epic 6: Observability & hardening**
* Story 17: Add metrics & structured logging.
* Story 18: Load testing with large SBOMs/logs.
* Story 19: Final documentation: “How to add attestations to your pipeline”.
---
If you'd like, I can next:
* Draft the exact C# interfaces and one full concrete implementation (e.g., `FileCheckpointStore`), or
* Write the CLI `publish` command skeleton that wires everything together.

---
Here's a quick sizing rule of thumb for Sigstore attestations so you don't hit Rekor limits.
* **Base64 bloat:** DSSE wraps your JSON statement and then Base64-encodes it. Base64 turns every 3 bytes into 4, so size ≈ `ceil(P/3)*4` (about **+33–37%** on top of your raw JSON). ([Stack Overflow][1])
* **DSSE envelope fields:** Expect a small extra overhead for JSON keys like `payloadType`, `payload`, and `signatures` (and the signature itself). Sigstore's bundle/DSSE examples show the structure used. ([Sigstore][2])
* **Public Rekor cap:** The **public Rekor instance rejects uploads over 100 KB**. If your DSSE (after Base64 + JSON fields) exceeds that, shard/split the attestation or run your own Rekor. ([GitHub][3])
* **Reality check:** Teams routinely run into size errors when large statements are uploaded; the whole DSSE payload is sent to Rekor during verification/ingest. ([GitHub][4])
### Practical guidance
* Keep a **single attestation well under ~70–80 KB raw JSON** if it will be wrapped and Base64'd (gives headroom for signatures/keys).
* Prefer **compact JSON** (no whitespace), **short key names**, and **avoid huge embedded fields** (e.g., trim SBOM evidence or link it by digest/URI).
* For big evidence sets, publish **multiple attestations** (logical shards) or **self-host Rekor**. ([GitHub][3])
If you want, I can add a tiny calculator snippet that takes your payload bytes and estimates the final DSSE+Base64 size vs. the 100 KB limit.
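Here is that calculator as a small C# sketch; the ~300-byte envelope overhead is an assumption (JSON keys, a keyid, one Base64-encoded signature), so measure your real envelopes and adjust:

```csharp
public static class DsseSizeEstimator
{
    private const int RekorLimitBytes = 100_000;   // public Rekor cap
    private const int EnvelopeOverheadBytes = 300; // assumed keys + signature; tune this

    // Estimated size of the final DSSE envelope for a raw JSON payload.
    public static int EstimateEnvelopeBytes(int rawPayloadBytes)
    {
        var base64Payload = 4 * ((rawPayloadBytes + 2) / 3); // ceil(P/3)*4
        return base64Payload + EnvelopeOverheadBytes;
    }

    public static bool FitsPublicRekor(int rawPayloadBytes) =>
        EstimateEnvelopeBytes(rawPayloadBytes) <= RekorLimitBytes;
}
```

For example, a 70 KB statement estimates to roughly 94 KB after Base64 plus overhead, which is why ~70–80 KB raw JSON is the suggested ceiling.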
[1]: https://stackoverflow.com/questions/4715415/base64-what-is-the-worst-possible-increase-in-space-usage?utm_source=chatgpt.com "Base64: What is the worst possible increase in space usage?"
[2]: https://docs.sigstore.dev/about/bundle/?utm_source=chatgpt.com "Sigstore Bundle Format"
[3]: https://github.com/sigstore/rekor?utm_source=chatgpt.com "sigstore/rekor: Software Supply Chain Transparency Log"
[4]: https://github.com/sigstore/cosign/issues/3599?utm_source=chatgpt.com "Attestations require uploading entire payload to rekor #3599"
Here's a concrete, developer-friendly implementation plan you can hand to the team.
I'll assume:
* You're using **Sigstore (Fulcio + Rekor + DSSE)**.
* You're pushing to the **public Rekor instance**, which enforces a **100 KB per-entry size limit**. ([GitHub][1])
* Attestations are JSON in a DSSE envelope and are produced in CI/CD.
You can copy this into a design doc and turn sections into tickets.
---
## 1. Goals & non-goals
**Goals**
1. Ensure **all Rekor uploads succeed** without hitting the 100KB limit.
2. Provide a **deterministic pipeline**: same inputs → same set of attestations.
3. Avoid losing security signal: large data (SBOMs, logs, etc.) should still be verifiable via references.
**Non-goals**
* Changing Rekor itself (we'll treat it as a black box).
* Redesigning your whole supply chain; we're just changing how attestations are structured and uploaded.
---
## 2. Architecture changes (high-level)
Add three core pieces:
1. **Attestation Builder**: constructs one or more JSON statements per artifact.
2. **Size Guardrail & Sharder**: checks size *before* upload; splits or externalizes data if needed.
3. **Rekor Client Wrapper**: calls Rekor, handles size errors, and reports metrics.
Rough flow:
```text
CI job
→ gather metadata (subject digest, build info, SBOM, test results, etc.)
→ Attestation Builder (domain logic)
→ Size Guardrail & Sharder (JSON + DSSE + size checks)
→ Rekor Client Wrapper (upload + logging + metrics)
```
---
## 3. Config & constants (Ticket group A)
**A1: Add config**
* Add a configuration object / env variables:
```yaml
REKOR_MAX_ENTRY_BYTES: 100000 # current public limit, but treat as configurable
REKOR_SIZE_SAFETY_MARGIN: 0.9 # 90% of the limit as “soft” max
ATTESTATION_JSON_SOFT_MAX: 80000 # e.g. 80 KB JSON before DSSE/base64
```
* Make **`REKOR_MAX_ENTRY_BYTES`** overridable so:
* you can bump it for a private Rekor deployment.
* tests can simulate different limits.
**Definition of done**
* Config is available in whoever builds attestations (CI job, shared library, etc.).
* Unit tests read these values and assert behavior around boundary values.
---
## 4. Attestation schema guidelines (Ticket group B)
**B1: Define / revise schema**
For each statement type (e.g., SLSA, SBOM, test results):
* Mark **required vs optional** fields.
* Identify **large fields**:
* SBOM JSON
* long log lines
* full dependency lists
* coverage details
**Rule:**
> Large data should **not** be inlined; it should be stored externally and referenced by digest.
Add a standard “external evidence” shape:
```json
{
"externalEvidence": [
{
"type": "sbom-spdx-json",
"uri": "https://artifacts.example.com/sbom/<build-id>.json",
"digest": "sha256:abcd...",
"sizeBytes": 123456
}
]
}
```
**B2: Budget fields**
* For each statement type, estimate typical sizes:
* Fixed overhead (keys, small fields).
* Variable data (e.g., components length).
* Document a **rule of thumb**:
“Total JSON payload for type X should be ≤ 80 KB; otherwise we split or externalize.”
**Definition of done**
* Schema docs updated with “size budget” notes.
* New `externalEvidence` (or equivalent) field defined and versioned.
---
## 5. Size Guardrail & Estimator (Ticket group C)
This is the core safety net.
### C1: Implement JSON size estimator
Language-agnostic idea:
```pseudo
function jsonBytes(payloadObject): int {
jsonString = JSON.stringify(payloadObject, no_whitespace)
return length(utf8_encode(jsonString))
}
```
* Always **minify** (no pretty printing) for the final payload.
* Use UTF-8 byte length, not character count.
### C2: DSSE + base64 size estimator
Instead of guessing, **actually build the envelope** before upload:
```pseudo
function buildDsseEnvelope(statementJson: string, signature: bytes, keyId: string): string {
envelope = {
"payloadType": "application/vnd.in-toto+json",
"payload": base64_encode(statementJson),
"signatures": [
{
"sig": base64_encode(signature),
"keyid": keyId
}
]
}
return JSON.stringify(envelope, no_whitespace)
}
function envelopeBytes(envelopeJson: string): int {
return length(utf8_encode(envelopeJson))
}
```
**Rule:** if `envelopeBytes(envelopeJson) > REKOR_MAX_ENTRY_BYTES * REKOR_SIZE_SAFETY_MARGIN`, we consider this envelope **too big** and trigger sharding / externalization logic before calling Rekor.
> Note: This means you temporarily sign once to measure size. That's acceptable; signing is cheap compared to a failing Rekor upload.
### C3: Guardrail function
```pseudo
function ensureWithinRekorLimit(envelopeJson: string) {
bytes = envelopeBytes(envelopeJson)
if bytes > REKOR_MAX_ENTRY_BYTES {
throw new OversizeAttestationError(bytes, REKOR_MAX_ENTRY_BYTES)
}
}
```
**Definition of done**
* Utility functions for `jsonBytes`, `buildDsseEnvelope`, `envelopeBytes`, and `ensureWithinRekorLimit`.
* Unit tests:
* Below limit → pass.
* Exactly at limit → pass.
* Above limit → throws `OversizeAttestationError`.
---
## 6. Sharding / externalization strategy (Ticket group D)
This is where you decide *what to do* when a statement is too big.
### D1: Strategy decision
Implement in this order:
1. **Externalize big blobs** (preferred).
2. If still too big, **shard** into multiple attestations.
#### 1) Externalization rules
Examples:
* SBOM:
* Write full SBOM to artifact store or object storage (S3, GCS, internal).
* In attestation, keep only:
* URI
* hash
* size
* format
* Test logs:
* Keep only summary + URI to full logs.
Implement a helper:
```pseudo
function externalizeIfLarge(fieldName, dataBytes, thresholdBytes): RefOrInline {
if length(dataBytes) <= thresholdBytes {
return { "inline": true, "value": dataBytes }
} else {
uri = uploadToArtifactStore(dataBytes)
digest = sha256(dataBytes)
return {
"inline": false,
"uri": uri,
"digest": "sha256:" + digest
}
}
}
```
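A C# sketch of the same helper (`uploadToArtifactStore` is a placeholder for your own storage client, passed in as a delegate):

```csharp
using System;
using System.Security.Cryptography;

public sealed record RefOrInline(bool Inline, byte[]? Value, string? Uri, string? Digest);

public static class Evidence
{
    // Keeps small blobs inline; uploads large ones and returns a digest reference.
    public static RefOrInline ExternalizeIfLarge(
        byte[] data, int thresholdBytes, Func<byte[], string> uploadToArtifactStore)
    {
        if (data.Length <= thresholdBytes)
            return new RefOrInline(true, data, null, null);

        var uri = uploadToArtifactStore(data);
        var digest = Convert.ToHexString(SHA256.HashData(data)).ToLowerInvariant();
        return new RefOrInline(false, null, uri, "sha256:" + digest);
    }
}
```

The digest is computed before upload, so a verifier can later fetch the URI and compare hashes without trusting the store.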
#### 2) Sharding rules
Example for SBOMlike data: if you have a big `components` list:
```pseudo
MAX_COMPONENTS_PER_ATTESTATION = 1000 # tune this via tests
function shardComponents(components[]):
chunks = chunk(components, MAX_COMPONENTS_PER_ATTESTATION)
attestations = []
for each chunk in chunks:
att = baseStatement()
att["components"] = chunk
attestations.append(att)
return attestations
```
After sharding:
* Each chunk becomes its **own statement** (and its own DSSE envelope + Rekor entry).
* Each statement should include:
* The same **subject (artifact digest)**.
* A `shardId` and `shardCount`, or a `groupId` (e.g., build ID) to relate them.
Example:
```json
{
"_sharding": {
"groupId": "build-1234-sbom",
"shardIndex": 0,
"shardCount": 3
}
}
```
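In C#, the sharding rule might be sketched as follows (a dictionary-based statement model is assumed purely for illustration):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class Sharding
{
    // Splits a large components list into several statements, each carrying
    // _sharding metadata so verifiers can reassemble the group.
    public static List<Dictionary<string, object>> ShardComponents(
        Dictionary<string, object> baseStatement,
        IReadOnlyList<object> components,
        string groupId,
        int maxPerAttestation = 1000)
    {
        var chunks = components
            .Select((component, index) => (component, index))
            .GroupBy(x => x.index / maxPerAttestation)
            .Select(g => g.Select(x => x.component).ToList())
            .ToList();

        return chunks.Select((chunk, shardIndex) =>
            new Dictionary<string, object>(baseStatement)
            {
                ["components"] = chunk,
                ["_sharding"] = new Dictionary<string, object>
                {
                    ["groupId"] = groupId,
                    ["shardIndex"] = shardIndex,
                    ["shardCount"] = chunks.Count
                }
            }).ToList();
    }
}
```

Each returned statement then goes through its own DSSE envelope and size guardrail before upload.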
**D2: Integration with size guardrail**
Flow:
1. Build full statement.
2. If `jsonBytes(statement) <= ATTESTATION_JSON_SOFT_MAX`: use as-is.
3. Else:
   * Try externalizing big fields.
   * Re-measure JSON size.
4. If still above `ATTESTATION_JSON_SOFT_MAX`:
* Apply sharding (e.g., split `components` list).
5. For each shard:
* Build DSSE envelope.
* Run `ensureWithinRekorLimit`.
If after sharding a single shard **still** exceeds Rekor's limit, you must:
* Fail the pipeline with a **clear error**.
* Log enough diagnostics to adjust your thresholds or schemas.
**Definition of done**
* Implementation for:
* `externalizeIfLarge`,
* `shardComponents` (or equivalent for your large arrays),
* `_sharding` metadata.
* Tests:
* Large SBOM → multiple attestations, each under size limit.
* Externalization correctly moves large fields out and keeps digests.
---
## 7. Rekor client wrapper (Ticket group E)
### E1: Wrap Rekor interactions
Create a small abstraction:
```pseudo
class RekorClient {
function uploadDsseEnvelope(envelopeJson: string): LogEntryRef {
ensureWithinRekorLimit(envelopeJson)
response = http.post(REKOR_URL + "/api/v1/log/entries", body=envelopeJson)
if response.statusCode == 201 or response.statusCode == 200:
return parseLogEntryRef(response.body)
else if response.statusCode == 413 or isSizeError(response.body):
throw new RekorSizeLimitError(response.statusCode, response.body)
else:
throw new RekorUploadError(response.statusCode, response.body)
}
}
```
* The `ensureWithinRekorLimit` call should prevent most 413s.
* `isSizeError` should inspect message strings that mention “size”, “100 KB”, etc., just in case Rekor's error handling changes.
### E2: Error handling strategy
On `RekorSizeLimitError`:
* Mark the build as **failed** (or at least **noncompliant**).
* Emit a structured log event:
```json
{
"event": "rekor_upload_oversize",
"envelopeBytes": 123456,
"rekorMaxBytes": 100000,
"buildId": "build-1234"
}
```
* (Optional) Attach the JSON size breakdown for debugging.
**Definition of done**
* Wrapper around existing Rekor client (or direct HTTP).
* Tests for:
* Successful upload.
* Simulated 413 / size error → recognized and surfaced cleanly.
---
## 8. CI/CD integration (Ticket group F)
### F1: Where to run this
Integrate in your pipeline step that currently does signing, e.g.:
```text
build → test → sign → attest → rekor-upload → deploy
```
Change to:
```text
build → test → sign → build-attestations (w/ size control)
→ upload-all-attestations-to-rekor
→ deploy
```
### F2: Multi-entry handling
If sharding is used:
* The pipeline should treat **“all relevant attestations uploaded successfully”** as a success condition.
* Store a manifest per build:
```json
{
"buildId": "build-1234",
"subjectDigest": "sha256:abcd...",
"attestationEntries": [
{
"type": "slsa",
"rekorLogIndex": 123456,
"shardIndex": 0,
"shardCount": 1
},
{
"type": "sbom",
"rekorLogIndex": 123457,
"shardIndex": 0,
"shardCount": 3
}
]
}
```
This manifest can be stored in your artifact store and used later by verifiers.
**Definition of done**
* CI job updated.
* Build manifest persisted.
* Documentation updated so ops/security know where to find attestation references.
---
## 9. Verification path updates (Ticket group G)
If you shard or externalize, your **verifiers** need to understand that.
### G1: Verify external evidence
* When verifying, for each `externalEvidence` entry:
* Fetch the blob from its URI.
* Compute its digest.
* Compare with the digest in the attestation.
* Decide whether verifiers:
* Must fetch all external evidence (strict), or
* Are allowed to do “metadata-only” verification if evidence URLs look trustworthy.
### G2: Verify sharded attestations
* Given a build ID or subject digest:
* Look up all Rekor entries for that subject (or use your manifest).
* Group by `_sharding.groupId`.
* Ensure all shards are present (`shardCount`).
* Verify each shard's signature and subject digest.
**Definition of done**
* Verifier code updated to:
* Handle `externalEvidence`.
* Handle `_sharding` metadata.
* Integration test:
* End-to-end: build → shard → upload → verify all shards and external evidence.
---
## 10. Observability & guardrails (Ticket group H)
**H1: Metrics**
Add these metrics:
* `attestation_json_size_bytes` (per type).
* `rekor_envelope_size_bytes` (per type).
* Counters:
* `attestation_sharded_total`
* `attestation_externalized_total`
* `rekor_upload_oversize_total`
**H2: Alerts**
* If `rekor_upload_oversize_total` > 0 over some window → alert.
* If average `rekor_envelope_size_bytes` stays above 70–80% of the limit for a sustained period → investigate schema growth.
---
## 11. Suggested ticket breakdown
You can cut this into roughly these tickets:
1. **Config & constants for Rekor size limits** (A).
2. **Schema update: support externalEvidence + sharding metadata** (B).
3. **Implement JSON & DSSE size estimation utilities** (C1–C3).
4. **Implement externalization of SBOMs/logs and sizeaware builder** (D1).
5. **Implement sharding for large arrays (e.g., components)** (D1–D2).
6. **Wrap Rekor client with size checks and error handling** (E).
7. **CI pipeline integration + build manifest** (F).
8. **Verifier changes for sharding + external evidence** (G).
9. **Metrics & alerts for attestation/Rekor sizes** (H).
---
If you tell me what language / stack you're using (Go, Java, Python, Node, etc.), I can turn this into more concrete code snippets and even example modules.
[1]: https://github.com/sigstore/rekor?utm_source=chatgpt.com "sigstore/rekor: Software Supply Chain Transparency Log"