Implement ledger metrics for observability and add tests for Ruby packages endpoints

- Added `LedgerMetrics` class to record write latency and total events for ledger operations.
- Created comprehensive tests for Ruby packages endpoints, covering scenarios for missing inventory, successful retrieval, and identifier handling.
- Introduced `TestSurfaceSecretsScope` for managing environment variables during tests.
- Developed `ProvenanceMongoExtensions` for attaching DSSE provenance and trust information to event documents.
- Implemented `EventProvenanceWriter` and `EventWriter` classes for managing event provenance in MongoDB.
- Established MongoDB indexes for efficient querying of events based on provenance and trust.
- Added models and JSON parsing logic for DSSE provenance and trust information.
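The `LedgerMetrics` change pairs a total-events counter with write-latency samples. A minimal Python stand-in for that shape (illustrative only — the `record_write` context manager and field names are assumptions, not the actual class API, which presumably sits on the platform's metrics backend):

```python
import time
from contextlib import contextmanager

class LedgerMetrics:
    """Sketch of the ledger metrics described above: a monotonically
    increasing event counter plus raw write-latency samples."""

    def __init__(self):
        self.total_events = 0
        self.write_latencies_ms = []  # a real backend would bucket these into a histogram

    @contextmanager
    def record_write(self, event_count=1):
        """Time one ledger write, recording its latency and event count."""
        start = time.perf_counter()
        try:
            yield
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000.0
            self.write_latencies_ms.append(elapsed_ms)
            self.total_events += event_count

metrics = LedgerMetrics()
with metrics.record_write(event_count=3):
    pass  # stand-in for an actual ledger append
print(metrics.total_events)             # 3
print(len(metrics.write_latencies_ms))  # 1
```

Recording latency in a `finally` block keeps the sample even when the write raises, which matters for observing failure-path latency.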
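`TestSurfaceSecretsScope` scopes environment variables to a single test and restores the previous state afterward. A minimal Python equivalent of that pattern (the function name and keyword interface are assumptions, not the real helper's API):

```python
import os
from contextlib import contextmanager

@contextmanager
def surface_secrets_scope(**overrides):
    """Set environment variables for the duration of a test, restoring
    prior values (or their absence) on exit, even if the test raises."""
    saved = {key: os.environ.get(key) for key in overrides}
    try:
        os.environ.update({k: str(v) for k, v in overrides.items()})
        yield
    finally:
        for key, previous in saved.items():
            if previous is None:
                os.environ.pop(key, None)
            else:
                os.environ[key] = previous

with surface_secrets_scope(SURFACE_SECRET="test-token"):
    assert os.environ["SURFACE_SECRET"] == "test-token"
assert "SURFACE_SECRET" not in os.environ
```

Restoring in `finally` is what makes such a scope safe to nest and safe under test failures; without it, one failing test could leak secrets into every test that runs after it.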
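The DSSE provenance models parse envelopes that wrap a base64-encoded in-toto statement alongside `payloadType` and `signatures` fields. A hedged sketch of the parsing step (the envelope layout follows the DSSE spec; the function itself is illustrative, and it deliberately skips signature verification, which the real writer would have to perform before trusting the payload):

```python
import base64
import json

def parse_dsse_provenance(envelope_json: str) -> dict:
    """Decode a DSSE envelope and return its in-toto statement payload."""
    envelope = json.loads(envelope_json)
    if envelope.get("payloadType") != "application/vnd.in-toto+json":
        raise ValueError(f"unexpected payloadType: {envelope.get('payloadType')}")
    # NOTE: signatures are not verified here; a production path must
    # check each entry in envelope["signatures"] first.
    payload = base64.b64decode(envelope["payload"])
    return json.loads(payload)

statement = {
    "_type": "https://in-toto.io/Statement/v1",
    "predicateType": "https://slsa.dev/provenance/v1",
}
envelope = json.dumps({
    "payloadType": "application/vnd.in-toto+json",
    "payload": base64.b64encode(json.dumps(statement).encode()).decode(),
    "signatures": [{"keyid": "demo", "sig": ""}],
})
print(parse_dsse_provenance(envelope)["predicateType"])  # https://slsa.dev/provenance/v1
```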
This commit is contained in: master
Date: 2025-11-13 09:29:09 +02:00
Parent: 151f6b35cc
Commit: 61f963fd52
101 changed files with 5881 additions and 1776 deletions

Active items only. Completed/historic work now resides in docs/implplan/archived.
| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| 110.A AdvisoryAI | Advisory AI Guild · Docs Guild · SBOM Service Guild | Sprint 100.A Attestor (closed 2025-11-09 per `docs/implplan/archived/SPRINT_100_identity_signing.md`) | DOING | Guardrail regression suite (AIAI-31-009) closed 2025-11-12 with the new `AdvisoryAI:Guardrails` configuration; console doc (DOCS-AIAI-31-004) remains DOING while SBOM/CLI/Policy/DevOps dependencies unblock screenshots/runbook work. |
| 110.B Concelier | Concelier Core & WebService Guilds · Observability Guild · AirGap Guilds (Importer/Policy/Time) | Sprint 100.A Attestor | DOING | Paragraph chunk API shipped 2025-11-07; structured field/caching (CONCELIER-AIAI-31-002) is mid-implementation, telemetry (CONCELIER-AIAI-31-003) closed 2025-11-12, and air-gap/console/attestation tracks are held by Link-Not-Merge + Cartographer schema. |
| 110.C Excititor | Excititor WebService/Core Guilds · Observability Guild · Evidence Locker Guild | Sprint 100.A Attestor | DOING | Normalized justification projections (EXCITITOR-AIAI-31-001) landed; chunk API, telemetry, docs, attestation, and mirror backlog stay queued behind Link-Not-Merge / Evidence Locker prerequisites. |
| 110.D Mirror | Mirror Creator Guild · Exporter Guild · CLI Guild · AirGap Time Guild | Sprint 100.A Attestor | TODO | Wave remains TODO: MIRROR-CRT-56-001 has no owner, so DSSE/TUF, OCI/time-anchor, CLI, and scheduling integrations cannot proceed. |
## Status snapshot (2025-11-13)
- **Advisory AI (110.A)** Guardrail regression suite AIAI-31-009 closed on 2025-11-12 with the `AdvisoryAI:Guardrails` binding and sub-400ms batch budgets; DOCS-AIAI-31-004 is the lone DOING item, blocked on Console screenshots (CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001) plus SBOM evidence. SBOM-AIAI-31-003 and DOCS-AIAI-31-005/006/008/009 stay BLOCKED until SBOM-AIAI-31-001, CLI-VULN-29-001, CLI-VEX-30-001, POLICY-ENGINE-31-001, and DEVOPS-AIAI-31-001 land (ETAs requested for 2025-11-14).
- **Concelier (110.B)** Paragraph chunk API shipped on 2025-11-07 and telemetry (CONCELIER-AIAI-31-003) landed 2025-11-12; structured field/caching (CONCELIER-AIAI-31-002) is DOING but cannot release until Link-Not-Merge plus Cartographer schema (`CARTO-GRAPH-21-002`) finalize. Air-gap (CONCELIER-AIRGAP-56..58), console (CONCELIER-CONSOLE-23-001..003), attestation (CONCELIER-ATTEST-73-001/002), and overdue connector refreshes (FEEDCONN-ICSCISA-02-012 / FEEDCONN-KISA-02-008) remain TODO.
- **Excititor (110.C)** Normalized VEX justifications (EXCITITOR-AIAI-31-001) are live; chunk API, telemetry, docs, attestation, air-gap, and connector parity tracks (EXCITITOR-AIAI-31-002/003/004, `EXCITITOR-ATTEST-*`, `EXCITITOR-AIRGAP-*`, `EXCITITOR-CONN-TRUST-01-001`) are queued behind the same Link-Not-Merge schema plus Evidence Locker contract.
- **Mirror (110.D)** MIRROR-CRT-56-001 still lacks an owner, so DSSE/TUF, OCI/time-anchor, CLI, Export Center, and AirGap Time integrations (MIRROR-CRT-56/57/58, EXPORT-OBS-51/54, AIRGAP-TIME-57-001) cannot start; kickoff moved to 2025-11-15 unless staffing is resolved sooner.
### Wave health (RAG snapshot)
| Wave | Health | Drivers |
| --- | --- | --- |
| 110.A AdvisoryAI | 🔶 Watching | Only DOCS-AIAI-31-004 is active; waiting on SBOM/CLI/Policy/DevOps ETAs to restart remaining doc/SBOM tasks. |
| 110.B Concelier | 🔶 Watching | Structured caching is in-flight but Link-Not-Merge schema + connector refreshes remain unresolved. |
| 110.C Excititor | 🔶 Watching | Downstream work entirely gated by Link-Not-Merge + Evidence Locker contract; ready to move once schemas approved. |
| 110.D Mirror | 🔴 Blocked | MIRROR-CRT-56-001 still unstaffed; kickoff on 2025-11-15 must assign owner or sprint slips. |
### Wave task tracker (refreshed 2025-11-13)
#### 110.A AdvisoryAI
| Task ID | State | Notes |
| --- | --- | --- |
| DOCS-AIAI-31-004 | DOING | Console guardrail doc drafted; screenshots/runbook copy blocked on CONSOLE-VULN-29-001, CONSOLE-VEX-30-001, and SBOM evidence feeds. |
| AIAI-31-009 | DONE (2025-11-12) | Guardrail regression suite + `AdvisoryAI:Guardrails` config binding merged with perf budgets. |
| AIAI-31-008 | TODO | Remote inference packaging waits on policy knobs (AIAI-31-006/007). |
| SBOM-AIAI-31-003 | BLOCKED | Needs SBOM-AIAI-31-001 outputs plus CLI-VULN/CLI-VEX deliverables. |
| DOCS-AIAI-31-005/006/008/009 | BLOCKED | Await SBOM/CLI/Policy/DevOps artifacts listed above. |
#### 110.B Concelier
| Task ID | State | Notes |
| --- | --- | --- |
| CONCELIER-AIAI-31-002 | DOING | Structured field/caching implementation underway; blocked on Link-Not-Merge schema + CARTO-GRAPH-21-002. |
| CONCELIER-AIAI-31-003 | DONE (2025-11-12) | Telemetry counters/histograms live for Advisory AI dashboards. |
| CONCELIER-AIRGAP-56-001..58-001 | TODO | Air-gap bundles waiting on schema + attestation payloads. |
| CONCELIER-CONSOLE-23-001..003 | TODO | Console overlays blocked by Link-Not-Merge delivery. |
| CONCELIER-ATTEST-73-001/002 | TODO | Attestation metadata wiring queued behind structured caching. |
| FEEDCONN-ICSCISA-02-012 / FEEDCONN-KISA-02-008 | BLOCKED | Connector provenance refreshes overdue; need feed owner schedule. |
#### 110.C Excititor
| Task ID | State | Notes |
| --- | --- | --- |
| EXCITITOR-AIAI-31-001 | DONE (2025-11-09) | Normalized VEX justification projections live at `/v1/vex/observations/{vulnerabilityId}/{productKey}`. |
| EXCITITOR-AIAI-31-002 | TODO | Chunk API pending Link-Not-Merge schema + Evidence Locker ingest plan. |
| EXCITITOR-AIAI-31-003 | TODO | Telemetry/guardrail instrumentation blocked on chunk schema readiness. |
| EXCITITOR-AIAI-31-004 | TODO | Docs/OpenAPI alignment follows chunk API. |
| EXCITITOR-ATTEST-01-003 / 73-001 / 73-002 | TODO | Attestation payload work waiting on chunk normalization + Evidence Locker scope. |
| EXCITITOR-AIRGAP-56/57/58 · EXCITITOR-CONN-TRUST-01-001 | TODO | Air-gap + connector parity require Link-Not-Merge + attestation readiness. |
#### 110.D Mirror
| Task ID | State | Notes |
| --- | --- | --- |
| MIRROR-CRT-56-001 | TODO | Deterministic assembler lacks owner; kickoff reset to 2025-11-15. |
| MIRROR-CRT-56-002 | TODO | DSSE/TUF design blocked on MIRROR-CRT-56-001 code path. |
| MIRROR-CRT-57-001/002 | TODO | OCI/time-anchor workstreams depend on assembler completion. |
| MIRROR-CRT-58-001/002 | TODO | Export/CLI automation waiting on MIRROR-CRT-56-001. |
| EXPORT-OBS-51-001 / 54-001 · AIRGAP-TIME-57-001 · CLI-AIRGAP-56-001 · PROV-OBS-53-001 | TODO | Require assembler baseline and staffing commitments. |
### In-flight focus (DOING items)
| Task ID | Remaining work | Blockers | Target date | Owners |
| --- | --- | --- | --- | --- |
| DOCS-AIAI-31-004 | Capture Console screenshots + guardrail ribbon copy, finalize runbook text. | CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001 outputs; SBOM evidence feeds. | 2025-11-15 | Docs Guild · Advisory AI Guild |
| CONCELIER-AIAI-31-002 | Implement structured field/caching API + regression fixtures. | Link-Not-Merge schema (`CONCELIER-GRAPH-21-001/002`, `CARTO-GRAPH-21-002`). | 2025-11-16 | Concelier Core · Concelier WebService Guilds |
| CONCELIER-GRAPH-21-001/002 · CARTO-GRAPH-21-002 | Finalize projection schema + change events, publish migration guide. | Cross-guild review on 2025-11-14. | 2025-11-14 | Concelier Core · Cartographer Guild · SBOM Service Guild |
| MIRROR-CRT-56-001 staffing | Assign engineering owner, scope kickoff, and start assembler implementation. | Needs Mirror/Exporter/AirGap leadership approval. | 2025-11-15 | Mirror Creator Guild · Exporter Guild |
### Dependency status watchlist (2025-11-13)
| Dependency | Status | Impacted work | Owner(s) / follow-up |
| --- | --- | --- | --- |
| SBOM/CLI/Policy/DevOps deliverables (SBOM-AIAI-31-001/003, CLI-VULN-29-001, CLI-VEX-30-001, POLICY-ENGINE-31-001, DEVOPS-AIAI-31-001) | ETAs requested for 2025-11-14. | DOCS-AIAI-31-004/005/006/008/009, SBOM-AIAI-31-003, AIAI-31-008. | SBOM Service · CLI · Policy · DevOps guilds |
| Link-Not-Merge schema (CONCELIER-LNM-21-001..003, CONCELIER-GRAPH-21-001/002, CARTO-GRAPH-21-002) | Review on 2025-11-14. | CONCELIER-AIAI-31-002, CONCELIER-AIRGAP-56..58, EXCITITOR-AIAI-31-002/003/004, EXCITITOR-ATTEST-*, Mirror consumers. | Concelier Core · Cartographer Guild · Platform Events Guild |
| Connector refreshes (FEEDCONN-ICSCISA-02-012 / FEEDCONN-KISA-02-008) | Overdue since 2025-10-23/24. | Advisory AI feed coverage + telemetry accuracy. | Concelier Feed Owners |
| MIRROR-CRT-56-001 staffing | Owner not yet assigned; kickoff moved to 2025-11-15. | Entire Mirror wave + Export Center + AirGap Time work. | Mirror Creator Guild · Exporter Guild · AirGap Time Guild |
| Evidence Locker attestation contract | Drafting; needs Excititor + Concelier alignment. | EXCITITOR-ATTEST-* and CONCELIER-ATTEST-73-001/002. | Evidence Locker Guild · Excititor Guild · Concelier Guild |
### Upcoming checkpoints (2025-11-13 → 2025-11-15)
| Date (UTC) | Session | Goal / expected exit | Impacted wave(s) | Prep owner(s) |
| --- | --- | --- | --- | --- |
| 2025-11-14 | Advisory AI customer surfaces follow-up | Capture SBOM/CLI/Policy/DevOps ETAs so DOCS-AIAI backlog can resume. | 110.A | Advisory AI · SBOM · CLI · Policy · DevOps guild leads |
| 2025-11-14 | Link-Not-Merge schema review | Approve CARTO-GRAPH-21-002 + CONCELIER-GRAPH-21-001/002 payloads, document migration. | 110.B · 110.C | Concelier Core · Cartographer Guild · SBOM Service Guild |
| 2025-11-15 | Excititor attestation sequencing | Sequence EXCITITOR-AIAI-31-002/003 and slot EXCITITOR-ATTEST-01-003 / 73-001 / 73-002 with Evidence Locker. | 110.C | Excititor Web/Core · Evidence Locker Guild |
| 2025-11-15 | Mirror evidence kickoff | Assign MIRROR-CRT-56-001 owner, confirm EXPORT-OBS/AIRGAP-TIME staffing, outline DSSE/TUF + OCI milestones. | 110.D | Mirror Creator · Exporter · AirGap Time · Security guilds |
### Meeting prep checklist
| Session | Pre-reads / artifacts | Open questions | Prep owner(s) |
| --- | --- | --- | --- |
| Advisory AI customer surfaces (2025-11-14) | SBOM-AIAI-31-001 projection draft, CLI-VULN/CLI-VEX scope notes, POLICY-ENGINE-31-001 knob proposal, DEVOPS-AIAI-31-001 runbook outline. | Exact delivery dates for each artifact? Any blockers requiring interim screenshots or mock SBOM data? | Advisory AI Guild · SBOM Service · CLI · Policy · DevOps guilds |
| Link-Not-Merge schema review (2025-11-14) | Latest `CONCELIER-GRAPH-21-001/002` + `CARTO-GRAPH-21-002` payloads, migration guide draft, event contract examples. | Are there unresolved fields/tenant tags? How will backfill/replay be handled? Do Advisory AI consumers need an adapter? | Concelier Core · Cartographer Guild · SBOM Service Guild · Platform Events Guild |
| Excititor attestation sequencing (2025-11-15) | EXCITITOR-AIAI-31-002/003 design notes, Evidence Locker contract draft, attestation backlog order (`EXCITITOR-ATTEST-*`). | Which attestation payload ships first? What telemetry/rollout gates are required? How will Evidence Locker validate manifests? | Excititor Web/Core · Evidence Locker Guild |
| Mirror evidence kickoff (2025-11-15) | MIRROR-CRT-56-001 scope brief, EXPORT-OBS-51/54 staffing plan, AIRGAP-TIME-57-001 requirements, DSSE/TUF design outline. | Who owns MIRROR-CRT-56-001? Can Export/AirGap lend engineers immediately? Do we need interim manual bundles before assembler lands? | Mirror Creator · Exporter · AirGap Time · Security guilds |
### Target outcomes (through 2025-11-15)
| Deliverable | Target date | Status | Dependencies / notes |
| --- | --- | --- | --- |
| DOCS-AIAI-31-004 publication | 2025-11-15 | DOING | Needs Console screenshots + SBOM feeds once SBOM/CLI ETAs are confirmed. |
| SBOM/CLI/Policy/DevOps ETA commitments | 2025-11-14 | PENDING | Advisory AI follow-up must end with written delivery dates. |
| Link-Not-Merge schema approval | 2025-11-14 | PENDING | Requires agreement on CONCELIER-GRAPH-21-001/002 + CARTO-GRAPH-21-002 payloads. |
| Excititor attestation sequencing plan | 2025-11-15 | PENDING | Dependent on Evidence Locker contract + attestation backlog ordering. |
| MIRROR-CRT-56-001 owner assignment | 2025-11-15 | PENDING | Must exit kickoff with named engineer + sprint scope. |
### Awaiting updates (blocking follow-ups)
| Update needed | Why it matters | Requested from | When requested |
| --- | --- | --- | --- |
| Written SBOM-AIAI-31-001/003, CLI-VULN-29-001, CLI-VEX-30-001, POLICY-ENGINE-31-001, DEVOPS-AIAI-31-001 ETAs | Unblocks DOCS-AIAI-31-004/005/006/008/009 and SBOM-AIAI-31-003 | SBOM Service, CLI, Policy, DevOps guild leads | 2025-11-13 stand-up |
| Confirmation that Link-Not-Merge pre-read comments are resolved | Determines whether schema can be approved on 2025-11-14 | Concelier Core · Cartographer Guild · SBOM Service Guild | 2025-11-13 meeting prep |
| Evidence Locker sign-off on attestation contract draft | Required before Excititor attestation sequencing on 2025-11-15 | Evidence Locker Guild | 2025-11-13 |
| Mirror/Exporter leadership agreement on MIRROR-CRT-56-001 owner | Without it, the 2025-11-15 kickoff has no accountable engineer | Mirror Creator Guild · Exporter Guild · AirGap Time Guild | 2025-11-13 |
### Pre-read distribution status (as of 2025-11-13 22:31 UTC)
| Session | Pre-read packet | Status | Owner(s) |
| --- | --- | --- | --- |
| Advisory AI follow-up (2025-11-14) | SBOM kit draft + CLI/Policy/DevOps notes | Docs compiled; waiting for guild leads to append ETA fields before sharing. | Advisory AI Guild |
| Link-Not-Merge review (2025-11-14) | Schema redlines + migration doc | Circulated to Concelier/Cartographer/SBOM; comments due morning of 2025-11-14. | Concelier Core · Cartographer Guild |
| Excititor attestation sequencing (2025-11-15) | Evidence Locker contract draft + backlog order | Draft complete; Evidence Locker reviewing telemetry requirements. | Excititor Web/Core · Evidence Locker Guild |
| Mirror kickoff (2025-11-15) | MIRROR-CRT-56-001 scope brief + staffing proposal | Outline sent to Mirror/Exporter leadership; pending confirmation of available engineers. | Mirror Creator Guild · Exporter Guild |
### Decisions needed (before 2025-11-15)
| Decision | Blocking work | Accountable owner(s) | Due date |
| --- | --- | --- | --- |
| Provide SBOM/CLI/Policy/DevOps delivery dates | DOCS-AIAI-31-004/005/006/008/009, SBOM-AIAI-31-003, AIAI-31-008 | SBOM Service · CLI · Policy · DevOps guilds | 2025-11-14 |
| Approve Link-Not-Merge + CARTO schema | CONCELIER-AIAI-31-002, EXCITITOR-AIAI-31-002/003/004, air-gap + attestation tasks | Concelier Core · Cartographer Guild · SBOM Service Guild | 2025-11-14 |
| Assign MIRROR-CRT-56-001 owner | All Mirror/Export/AirGap downstream work | Mirror Creator Guild · Exporter Guild · AirGap Time Guild | 2025-11-15 |
| Confirm Evidence Locker attestation scope | EXCITITOR-ATTEST-* and CONCELIER-ATTEST-73-001/002 | Evidence Locker Guild · Excititor Guild · Concelier Guild | 2025-11-15 |
| Approve DOCS-AIAI-31-004 screenshot plan | Publication of console guardrail doc | Docs Guild · Console Guild | 2025-11-15 |
### Action item tracker (status as of 2025-11-13)
| Item | Status | Next step | Owner(s) | Due |
| --- | --- | --- | --- | --- |
| SBOM-AIAI-31-001 projection kit | Pending ETA | Provide delivery date + artifact checklist during 2025-11-14 call. | SBOM Service Guild | 2025-11-14 |
| CLI-VULN-29-001 / CLI-VEX-30-001 scope alignment | In progress | Confirm parameter set + release vehicle to unblock docs. | CLI Guild | 2025-11-14 |
| POLICY-ENGINE-31-001 guardrail knob | Drafting | Share config snippet + rollout plan with Advisory AI. | Policy Guild | 2025-11-14 |
| DEVOPS-AIAI-31-001 deployment runbooks | Not started | Outline automation coverage and ops checklist. | DevOps Guild | 2025-11-15 |
| Link-Not-Merge schema redlines | Circulated | Sign off during 2025-11-14 review, publish migration notes. | Concelier Core · Cartographer Guild · SBOM Service Guild | 2025-11-14 |
| MIRROR-CRT-56-001 staffing plan | Not started | Name owner + confirm initial sprint scope. | Mirror Creator Guild · Exporter Guild | 2025-11-15 |
### Standup agenda (2025-11-13)
| Track | Questions to cover | Owner ready to report |
| --- | --- | --- |
| 110.A Advisory AI | Are SBOM/CLI/Policy/DevOps guilds ready to commit ETAs so DOCS-AIAI backlog can resume? | Advisory AI Guild · Docs Guild |
| 110.B Concelier | Link-Not-Merge review prep status and connector refresh recovery plan? | Concelier Core · Concelier WebService Guilds |
| 110.C Excititor | Evidence Locker contract + attestation sequencing ready for 2025-11-15 session? | Excititor Web/Core Guilds · Evidence Locker Guild |
| 110.D Mirror | Who is owning MIRROR-CRT-56-001 and what runway is needed? | Mirror Creator Guild · Exporter Guild |
| Cross-track | Any new risks requiring leadership escalation before 2025-11-14 checkpoints? | Sprint 110 leads |
### Standup agenda (2025-11-14)
| Track | Confirmation needed | Follow-ups if “no” | Reporter |
| --- | --- | --- | --- |
| 110.A Advisory AI | Did SBOM/CLI/Policy/DevOps provide ETAs + artifact checklists? | Escalate to guild leads immediately; flag DOCS backlog as red. | Advisory AI Guild |
| 110.B Concelier | Is Link-Not-Merge schema ready for review (no open comments)? | Capture blockers, inform Cartographer + Advisory AI, update schema review agenda. | Concelier Core |
| 110.C Excititor | Has Evidence Locker acknowledged the attestation contract + backlog order? | Schedule follow-up session before the 15th; unblock by providing an interim contract. | Excititor Web/Core |
| 110.D Mirror | Is MIRROR-CRT-56-001 owner confirmed before kickoff? | Escalate to Mirror/Exporter leadership; re-plan kickoff if still unstaffed. | Mirror Creator Guild |
| Cross-track | Any new dependencies discovered that affect Nov 15 deliverables? | Add to Awaiting Updates + contingency plan. | Sprint 110 leads |
### Standup agenda (2025-11-15)
| Track | Key question | Owner ready to report |
| --- | --- | --- |
| 110.A Advisory AI | Did SBOM/CLI/Policy/DevOps artifacts land and unblock DOCS/SBOM backlog? | Advisory AI Guild · Docs Guild |
| 110.B Concelier | Were Link-Not-Merge schemas approved and migrations kicked off? | Concelier Core · Cartographer Guild |
| 110.C Excititor | Is the attestation sequencing plan locked with Evidence Locker sign-off? | Excititor Web/Core Guilds · Evidence Locker Guild |
| 110.D Mirror | Is MIRROR-CRT-56-001 staffed with a sprint plan after kickoff? | Mirror Creator Guild · Exporter Guild · AirGap Time Guild |
| Cross-track | Any spillover risks or re-scoping needed after the checkpoints? | Sprint 110 leads |
### Outcome capture template (use after Nov 14–15 checkpoints)
| Session | Date | Outcome | Follow-up tasks |
| --- | --- | --- | --- |
| Advisory AI follow-up | 2025-11-14 | _TBD_ | _TBD_ |
| Link-Not-Merge review | 2025-11-14 | _TBD_ | _TBD_ |
| Excititor attestation sequencing | 2025-11-15 | _TBD_ | _TBD_ |
| Mirror evidence kickoff | 2025-11-15 | _TBD_ | _TBD_ |
### Contingency playbook (reviewed 2025-11-13)
| Risk trigger | Immediate response | Owner | Escalation window |
| --- | --- | --- | --- |
| Link-Not-Merge review slips | Document unresolved schema fields, escalate to runtime governance, evaluate interim adapter for Advisory AI. | Concelier Core · Cartographer Guild | Escalate by 2025-11-15 governance call. |
| SBOM/CLI/Policy/DevOps ETAs miss 2025-11-14 | Flag DOCS-AIAI backlog as “red”, source temporary screenshots/mock data, escalate to Advisory AI leadership. | Docs Guild · Advisory AI Guild | Escalate by 2025-11-15 stand-up. |
| MIRROR-CRT-56-001 still unstaffed on 2025-11-15 | Reassign engineers from Export/Excititor backlog, drop lower-priority Mirror scope, publish revised schedule. | Mirror Creator Guild · Exporter Guild · AirGap Time Guild | Escalate by 2025-11-15 kickoff retro. |
| Connector refreshes slip another week | Limit Advisory AI exposure to stale feeds, publish customer comms, add feeds to incident review. | Concelier Feed Owners | Escalate by 2025-11-18. |
| Evidence Locker contract stalls | Delay attestation tasks, focus on telemetry/docs, involve Platform Governance. | Evidence Locker Guild · Excititor Guild | Escalate by 2025-11-17. |
## Downstream dependencies (2025-11-13)
| Wave | Dependent sprint(s) | Impact if delayed |
| --- | --- | --- |
| 110.A AdvisoryAI | Advisory AI customer rollout (Docs, Console, CLI), `SPRINT_120_excititor_ii.md`, `SPRINT_140_runtime_signals.md` | SBOM/CLI/Policy/DevOps lag keeps Advisory AI docs + guardrails blocked and stalls downstream Scanner/Policy/Vuln Explorer adoption. |
| 110.B Concelier | `SPRINT_140_runtime_signals.md`, `SPRINT_185_shared_replay_primitives.md`, Concelier console/air-gap/attest waves | Link-Not-Merge schema + observation APIs gate Concelier graph, telemetry, and orchestrator waves; Console/advisor UIs stay blocked. |
| 110.C Excititor | `SPRINT_120_excititor_ii.md`–`SPRINT_124_excititor_vi.md` | VEX chunk/attestation phases cannot progress until chunk/telemetry deliverables land, delaying Lens, Policy, and Advisory AI parity. |
| 110.D Mirror | `SPRINT_125_mirror.md` | Export Center, CLI, and air-gap bundles rely on MIRROR-CRT-56-001; no downstream mirror automation can begin until the deterministic assembler is complete. |
## Interlocks & owners (2025-11-13)
| Interlock | Participants | Needed artifact(s) | Status / notes |
| --- | --- | --- | --- |
| Advisory AI customer surfaces | Advisory AI Guild · SBOM Service Guild · CLI Guild · Policy Guild · DevOps Guild | `SBOM-AIAI-31-001`, `SBOM-AIAI-31-003`, `CLI-VULN-29-001`, `CLI-VEX-30-001`, `POLICY-ENGINE-31-001`, `DEVOPS-AIAI-31-001` | ETAs due 2025-11-14 to unblock DOCS-AIAI backlog and SBOM-AIAI-31-003. |
| Link-Not-Merge contract | Concelier Core/WebService Guilds · Cartographer Guild · Platform Events Guild | `CONCELIER-LNM-21-001`–`21-203`, `CARTO-GRAPH-21-002`, `CONCELIER-GRAPH-21-001/002`, `CONCELIER-CONSOLE-23-001..003` | Schema review on 2025-11-14 to unblock CONCELIER-AIAI-31-002/003 and downstream console/air-gap tasks. |
| VEX justification + attestation | Excititor Web/Core Guilds · Observability Guild · Evidence Locker Guild · Cartographer Guild | `EXCITITOR-AIAI-31-001`–`31-004`, `EXCITITOR-ATTEST-01-003`, `EXCITITOR-ATTEST-73-001/002`, `EXCITITOR-AIRGAP-56/57/58-*`, `EXCITITOR-CONN-TRUST-01-001` | Attestation sequencing meeting on 2025-11-15 to finalize Evidence Locker contract + backlog order. |
| Mirror evidence kickoff | Mirror Creator Guild · Exporter Guild · AirGap Time Guild · Security Guild · CLI Guild | `MIRROR-CRT-56/57/58-*`, `EXPORT-OBS-51-001`, `EXPORT-OBS-54-001`, `AIRGAP-TIME-57-001`, `CLI-AIRGAP-56-001`, `PROV-OBS-53-001` | Kickoff scheduled 2025-11-15; objective is to assign MIRROR-CRT-56-001 owner and confirm downstream staffing. |
### Upcoming checkpoints
| Date (UTC) | Focus | Agenda / expected exit |
| --- | --- | --- |
| 2025-11-10 | Advisory AI customer surfaces | Confirm SBOM-AIAI-31-001 delivery slot, align CLI-VULN/CLI-VEX scope owners, and capture POLICY-ENGINE-31-001 + DEVOPS-AIAI-31-001 readiness so DOCS-AIAI-31-005/006/008/009 can resume. |
| 2025-11-11 | Link-Not-Merge contract | Cartographer to present CARTO-GRAPH-21-002 schema draft, Concelier to commit dates for CONCELIER-LNM-21-001..003 and CONCELIER-AIAI-31-002/003 telemetry wiring. |
| 2025-11-11 | VEX justification + attestation | Walk EXCITITOR-AIAI-31-001 output, sequence EXCITITOR-AIAI-31-002/003, and lock attestation backlog order (`EXCITITOR-ATTEST-01-003`, `-73-001`, `-73-002`). |
| 2025-11-12 | Mirror evidence kickoff | Assign MIRROR-CRT-56-001 lead, confirm EXPORT-OBS-51-001/AIRGAP-TIME-57-001 owners, and outline DSSE/TUF design reviews for MIRROR-CRT-56-002. |
## Coordination log
| Date | Notes |
| --- | --- |
| 2025-11-13 | Snapshot, wave tracker, decision/action lists, and contingency plan refreshed ahead of 2025-11-14/15 checkpoints; awaiting SBOM/CLI/Policy/DevOps ETAs, Link-Not-Merge approval, and Mirror staffing outcomes. |
| 2025-11-09 | Sprint file captured initial wave detail references, interlocks, and risks pending SBOM/CLI/POLICY/DevOps, Link-Not-Merge, Excititor justification, and Mirror assembler commitments. |
## Risk log (2025-11-13)
| Risk | Impact | Mitigation / owner |
| --- | --- | --- |
| SBOM/CLI/Policy/DevOps deliverables slip past 2025-11-14 | Advisory AI docs + SBOM feeds remain blocked, delaying customer rollout + dependent sprints. | Capture ETAs during 2025-11-14 interlock; escalate to Advisory AI leadership if not committed. |
| Link-Not-Merge schema delays (`CONCELIER-LNM-21-*`, `CARTO-GRAPH-21-002`) | Concelier/Excititor evidence APIs, console views, and air-gap tracks cannot progress; Advisory AI loses deterministic feeds. | Land schema review on 2025-11-14; publish migration plan and unblock CONCELIER-AIAI-31-002 + EXCITITOR-AIAI-31-002 immediately after approval. |
| Excititor attestation backlog stalls | Advisory AI cannot cite VEX evidence; attestation + air-gap tasks idle; Mirror parity slips. | Use 2025-11-15 sequencing session to lock order, then reserve engineering capacity for attestation tickets. |
| Mirror assembler lacks staffing (`MIRROR-CRT-56-001`) | DSSE/TUF, OCI/time-anchor, CLI, Export Center automations cannot start, blocking Sprint 125 altogether. | Assign owner during 2025-11-15 kickoff; reallocate Export/AirGap engineers if no volunteer surfaces. |
| Connector provenance refreshes remain overdue | Advisory AI may serve stale evidence for ICSCISA/KISA feeds. | Feed owners to publish remediation plan and temporary mitigations by 2025-11-15 stand-up. |
@@ -8,7 +8,8 @@ Summary: Ingestion & Evidence focus on AdvisoryAI.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
DOCS-AIAI-31-006 | DONE (2025-11-13) | `/docs/policy/assistant-parameters.md` now documents inference modes, guardrail phrases, budgets, and cache/queue knobs (POLICY-ENGINE-31-001 inputs captured via `AdvisoryAiServiceOptions`). | Docs Guild, Policy Guild (docs)
> 2025-11-13: Published `docs/policy/assistant-parameters.md`, added env-var mapping tables, and linked the page from Advisory AI architecture so guild owners can trace DOCS-AIAI-31-006 to Sprint 111.
DOCS-AIAI-31-008 | BLOCKED (2025-11-03) | Publish `/docs/sbom/remediation-heuristics.md` (feasibility scoring, blast radius). Dependencies: SBOM-AIAI-31-001. | Docs Guild, SBOM Service Guild (docs)
DOCS-AIAI-31-009 | BLOCKED (2025-11-03) | Create `/docs/runbooks/assistant-ops.md` for warmup, cache priming, model outages, scaling. Dependencies: DEVOPS-AIAI-31-001. | Docs Guild, DevOps Guild (docs)
SBOM-AIAI-31-003 | TODO (2025-11-03) | Publish the Advisory AI hand-off kit for `/v1/sbom/context`, share base URL/API key + tenant header contract, and run a joint end-to-end retrieval smoke test with Advisory AI. Dependencies: SBOM-AIAI-31-001. | SBOM Service Guild, Advisory AI Guild (src/SbomService/StellaOps.SbomService)
@@ -31,7 +32,7 @@ DOCS-AIAI-31-005 | BLOCKED (2025-11-03) | Publish `/docs/advisory-ai/cli.md` cov
> 2025-11-03: DOCS-AIAI-31-002 marked DONE – `docs/advisory-ai/architecture.md` published describing pipeline, deterministic tooling, caching, and profile governance (Docs Guild).
> 2025-11-03: DOCS-AIAI-31-004 marked BLOCKED – Console widgets/endpoints (CONSOLE-VULN-29-001, CONSOLE-VEX-30-001, EXCITITOR-CONSOLE-23-001) still pending; cannot document UI flows yet.
> 2025-11-03: DOCS-AIAI-31-005 marked BLOCKED – CLI implementation (`stella advise run`, CLI-VULN-29-001, CLI-VEX-30-001) plus AIAI-31-004C not shipped; doc blocked until commands exist.
> 2025-11-03: DOCS-AIAI-31-006 initially blocked (POLICY-ENGINE-31-001 pending); resolved 2025-11-13 once the guardrail/inference bindings shipped and the parameter doc landed.
> 2025-11-07: DOCS-AIAI-31-007 marked DONE – `/docs/security/assistant-guardrails.md` now documents redaction rules, blocked phrases, telemetry, and alert procedures.
> 2025-11-03: DOCS-AIAI-31-008 marked BLOCKED – Waiting on SBOM heuristics delivery (SBOM-AIAI-31-001).
> 2025-11-03: DOCS-AIAI-31-009 marked BLOCKED – DevOps runbook inputs (DEVOPS-AIAI-31-001) outstanding.
@@ -49,3 +50,49 @@ DOCS-AIAI-31-005 | BLOCKED (2025-11-03) | Publish `/docs/advisory-ai/cli.md` cov
> 2025-11-04: AIAI-31-002 completed – `AddSbomContext` typed client registered in WebService/Worker, BaseAddress/tenant headers sourced from configuration, and retriever HTTP-mapping tests extended.
> 2025-11-04: AIAI-31-003 completed – deterministic toolset integrated with orchestrator cache, property/range tests broadened, and dependency analysis outputs now hashed for replay.
> 2025-11-04: AIAI-31-004A ongoing – WebService/Worker queue wiring emits initial metrics, SBOM context hashing feeds cache keys, and replay docs updated ahead of guardrail implementation.
## Blockers & dependencies (2025-11-13)
| Blocked item | Dependency | Owner(s) | Notes |
| --- | --- | --- | --- |
| DOCS-AIAI-31-004 (`/docs/advisory-ai/console.md`) | CONSOLE-VULN-29-001 · CONSOLE-VEX-30-001 · EXCITITOR-CONSOLE-23-001 | Docs Guild · Console Guild | Screenshots + a11y copy cannot be captured until Console widgets + Excititor feeds ship. |
| DOCS-AIAI-31-005 (`/docs/advisory-ai/cli.md`) | CLI-VULN-29-001 · CLI-VEX-30-001 · AIAI-31-004C | Docs Guild · CLI Guild | CLI verbs + outputs not available; doc work paused. |
| DOCS-AIAI-31-008 (`/docs/sbom/remediation-heuristics.md`) | SBOM-AIAI-31-001 | Docs Guild · SBOM Service Guild | Needs heuristics kit + API contract. |
| DOCS-AIAI-31-009 (`/docs/runbooks/assistant-ops.md`) | DEVOPS-AIAI-31-001 | Docs Guild · DevOps Guild | Runbook automation steps pending DevOps guidance. |
| SBOM-AIAI-31-003 (`/v1/sbom/context` hand-off kit) | SBOM-AIAI-31-001 | SBOM Service Guild · Advisory AI Guild | Requires base `/v1/sbom/context` projection + smoke test plan. |
| AIAI-31-008 (on-prem/remote inference packaging) | AIAI-31-006..007 (guardrail knobs, security guidance) | Advisory AI Guild · DevOps Guild | Needs finalized guardrail knob doc (done) plus DevOps runbooks before shipping containers/manifests. |
## Next actions (target: 2025-11-15)
| Owner(s) | Action | Status |
| --- | --- | --- |
| Docs Guild · Console Guild | Capture screenshot checklist + copy snippets for DOCS-AIAI-31-004 once Console widgets land; pre-draft alt text now. | Pending widgets |
| SBOM Service Guild | Publish SBOM-AIAI-31-001 projection doc + ETA for hand-off kit; unblock SBOM-AIAI-31-003 and remediation heuristics doc. | Pending |
| CLI Guild | Share outline of `stella advise` verbs (CLI-VULN/CLI-VEX) so docs can prep structure before GA. | Pending |
| DevOps Guild | Provide first draft of DEVOPS-AIAI-31-001 runbook so DOCS-AIAI-31-009 can start. | Pending |
| Advisory AI Guild | Scope packaging work for AIAI-31-008 (container manifests, Helm/Compose) now that guardrail knobs doc (DOCS-AIAI-31-006) is live. | In planning |
## Dependency watchlist
| Dependency | Latest update | Impact |
| --- | --- | --- |
| CONSOLE-VULN-29-001 / CONSOLE-VEX-30-001 | DOING as of 2025-11-08; telemetry not yet exposed to docs. | Blocks DOCS-AIAI-31-004 screenshots + instructions. |
| EXCITITOR-CONSOLE-23-001 | Not started (per Console backlog). | Required for console doc data feed references. |
| SBOM-AIAI-31-001 | ETA requested during Sprint 110 follow-up (2025-11-14). | Gate for SBOM-AIAI-31-003 & DOCS-AIAI-31-008. |
| DEVOPS-AIAI-31-001 | Awaiting runbook draft. | Gate for DOCS-AIAI-31-009 + AIAI-31-008 packaging guidance. |
## Standup prompts
1. Are Console owners on track to deliver widget screenshots/data before 2025-11-15 so DOCS-AIAI-31-004 can close?
2. Has SBOM-AIAI-31-001 published a projection kit and smoke-test plan to unlock SBOM-AIAI-31-003/DOCS-AIAI-31-008?
3. When will CLI-VULN-29-001 / CLI-VEX-30-001 expose a beta so DOCS-AIAI-31-005 can resume?
4. Does DevOps have a draft for DEVOPS-AIAI-31-001 (needed for DOCS-AIAI-31-009) and the packaging work in AIAI-31-008?
## Risks (snapshot 2025-11-13)
| Risk | Impact | Mitigation / owner |
| --- | --- | --- |
| Console dependencies miss 2025-11-15 | DOCS-AIAI-31-004 misses sprint goal, delaying Advisory AI UI documentation. | Escalate via Console stand-up; consider temporary mock screenshots if needed. |
| SBOM-AIAI-31-001 slips again | SBOM hand-off kit + remediation heuristics doc stay blocked, delaying customer enablement. | SBOM Guild to commit date during Sprint 110 follow-up; escalate if no date. |
| CLI backlog deprioritized | DOCS-AIAI-31-005 + CLI enablement slide. | Request interim CLI output samples; coordinate with CLI guild for priority. |
| DevOps runbook not ready | DOCS-AIAI-31-009 + packaging work (AIAI-31-008) suspended. | DevOps to share outline even if final automation pending; iterate doc in parallel. |
@@ -1,24 +1,99 @@
# Sprint 112 · Concelier.I — Canonical Evidence & Provenance (Rebaseline 2025-11-13)
Phase 110.B keeps Concelier focused on ingestion fidelity and evidence APIs. All active work here assumes Advisory AI consumes *canonical* advisory documents (no merge transforms) and that every field we emit carries exact provenance anchors.
## Canonical Model Commitments
- **Single source of truth:** `/advisories/{key}/chunks` must render from the canonical `Advisory` aggregate (document id + latest observation set), never from derived cache copies.
- **Provenance anchors:** Each structured field cites both the Mongo `_id` of the backing observation document and the JSON Pointer into that observation (`observationPath`). This mirrors how GHSA's GraphQL `securityAdvisory.references` and Cisco PSIRT's `openVuln` feeds expose source handles, so downstream tooling can reconcile fields deterministically.
- **Deterministic ordering:** Sort structured entries by `(fieldType, observationPath, sourceId)` to keep cache keys and telemetry stable across nodes. We are keeping this policy “as-is” for now to avoid churn in Advisory AI prompts.
- **External parity:** Continue mapping fields named in competitor docs (GitHub Security Advisory GraphQL, Red Hat CVE data API, Cisco PSIRT openVuln) so migrations remain predictable.
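The ordering commitment above is easy to pin down concretely. A minimal Python sketch follows; the record shape and names (`field_type`, `observation_path`, `source_id`) are illustrative stand-ins, since the real types live in the C# Concelier services:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class StructuredEntry:
    field_type: str        # e.g. "cvss", "affectedRange", "workaround"
    observation_path: str  # JSON Pointer into the backing observation document
    source_id: str         # upstream source handle, e.g. "ghsa", "redhat"
    document_id: str       # id of the backing observation document
    value: str

def sort_entries(entries):
    """Deterministic ordering: (fieldType, observationPath, sourceId).

    A plain tuple sort keeps cache keys and telemetry stable across
    nodes, regardless of the order in which sources were ingested.
    """
    return sorted(entries, key=lambda e: (e.field_type, e.observation_path, e.source_id))
```

Because the key is a total order over the three fields, re-sorting any permutation of the same entries yields an identical list, which is what makes the downstream cache keys reproducible.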
## Workstream A — Advisory AI Structured Fields (AIAI-31)
Task ID | State | Exit criteria | Owners
--- | --- | --- | ---
CONCELIER-AIAI-31-002 `Structured fields` | DOING | 1) Program.cs endpoint fully rewritten to resolve the canonical advisory (via `IAdvisoryStore`/`IAliasStore`) and issue structured field entries. 2) Cache key = `tenant + AdvisoryFingerprint`. 3) Responses contain `{chunkId, fingerprint, entries[], provenance.documentId, provenance.observationPath}` with deterministic ordering. 4) Tests updated (`StatementProvenanceEndpointAttachesMetadata`, new structured chunk fixture) and Mongo2Go coverage passes. | Concelier WebService Guild (src/Concelier/StellaOps.Concelier.WebService)
CONCELIER-AIAI-31-003 `Advisory AI telemetry` | DONE (2025-11-12) | OTEL counters (`advisory_ai_chunk_requests_total`, `advisory_ai_chunk_cache_hits_total`, `advisory_ai_guardrail_blocks_total`) tagged with tenant/result/cache. Nothing further planned unless guardrail policy changes. | Concelier WebService Guild · Observability Guild
### Implementation checklist (kept inline until CONCELIER-AIAI-31-002 ships)
1. Add `ResolveAdvisoryAsync` helper with alias fallback + tenant guard.
2. Update `AdvisoryChunkCacheKey` to include `AdvisoryFingerprint`.
3. Rewrite `/advisories/{key}/chunks` handler to call the structured builder and emit provenance anchors.
4. Refresh telemetry tests to assert `Response.Entries.Count`.
5. Extend docs (`docs/provenance/inline-dsse.md` + Advisory AI API reference) with the structured schema mirroring GHSA / Cisco references.
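Step 2 of the checklist (cache key = `tenant + AdvisoryFingerprint`) can be sketched as below. The function name and the hashing choice are assumptions for illustration, not the actual `AdvisoryChunkCacheKey` implementation:

```python
import hashlib

def advisory_chunk_cache_key(tenant: str, advisory_fingerprint: str) -> str:
    """Sketch of a read-through cache key for /advisories/{key}/chunks.

    Scoping by tenant prevents cross-tenant cache bleed; including the
    advisory fingerprint invalidates entries whenever the canonical
    aggregate changes, with no TTL guesswork needed.
    """
    material = f"{tenant}\n{advisory_fingerprint}".encode("utf-8")
    return hashlib.sha256(material).hexdigest()
```

Any change to either component produces a new key, so stale chunk responses simply stop being addressable rather than needing explicit eviction.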
## Workstream B — Mirror & Offline Provenance (AIRGAP-56/57/58)
Task ID | State | Exit criteria / notes | Owners
--- | --- | --- | ---
CONCELIER-AIRGAP-56-001 `Mirror ingestion adapters` | TODO | Implement read paths for Offline Kit bundles, persist `bundleId`, `merkleRoot`, and maintain append-only ledger comparisons. | Concelier Core Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)
CONCELIER-AIRGAP-56-002 `Bundle catalog linking` | TODO | Every observation/linkset stores `{bundleId, merkleRoot, observationPath}` so exported evidence can cite provenance exactly once; depends on 56-001. | Concelier Core Guild · AirGap Importer Guild
CONCELIER-AIRGAP-57-001 `Sealed-mode source restrictions` | TODO | Feature flag + policy that rejects non-mirror connectors with actionable diagnostics; depends on 56-001. | Concelier Core Guild · AirGap Policy Guild
CONCELIER-AIRGAP-57-002 `Staleness annotations` | TODO | Compute `fetchedAt/publishedAt/clockSource` deltas per bundle and expose via observation APIs without mutating evidence; depends on 56-002. | Concelier Core Guild · AirGap Time Guild
CONCELIER-AIRGAP-58-001 `Portable advisory evidence` | TODO | Package advisory observations/linksets + provenance notes (document id + observationPath) into timeline-bound portable bundles with verifier instructions; depends on 57-002. | Concelier Core Guild · Evidence Locker Guild
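The staleness annotation in CONCELIER-AIRGAP-57-002 reduces to a fetched-vs-published delta plus a clock source tag. A hedged sketch follows; the field names mirror the table above, but the exact schema is not final:

```python
from datetime import datetime, timezone

def staleness_annotation(fetched_at: datetime, published_at: datetime, clock_source: str) -> dict:
    """Per-bundle staleness metadata (assumed shape, not the final contract).

    The evidence itself is never mutated; this annotation travels
    alongside the observation so consoles/CLI can flag out-of-date
    advisories in sealed deployments.
    """
    delta = fetched_at - published_at
    return {
        "fetchedAt": fetched_at.isoformat(),
        "publishedAt": published_at.isoformat(),
        "staleSeconds": int(delta.total_seconds()),
        "clockSource": clock_source,  # e.g. a time-anchor vs. the local wall clock
    }
```

Recording the clock source matters in air-gapped clusters, where the local wall clock may drift from the bundle's time anchor.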
## Workstream C — Transparency & Attestor (ATTEST-73)
Task ID | State | Exit criteria / notes | Owners
--- | --- | --- | ---
CONCELIER-ATTEST-73-001 `ScanResults attestation inputs` | TODO | Emit `{observationDigest, linksetDigest, documentId}` pairs required by Attestor so DSSE bundles include the same provenance anchors Advisory AI emits. | Concelier Core Guild · Attestor Service Guild
CONCELIER-ATTEST-73-002 `Transparency metadata` | TODO | Read APIs expose `bundleId`, Rekor references, and observation paths for external transparency explorers; depends on 73-001. | Concelier Core Guild
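For CONCELIER-ATTEST-73-001, the emitted digests only stay stable if the observation JSON is canonicalised before hashing. A sketch of that idea, where JSON key-sorting stands in for whatever canonical form the Attestor contract ultimately settles on:

```python
import hashlib
import json

def observation_digest(observation: dict) -> str:
    """Sketch of a stable digest over a raw observation document.

    Canonicalising (sorted keys, compact separators) before hashing
    means the same evidence always yields the same digest, which the
    Attestor can then embed in a DSSE bundle unchanged.
    """
    canonical = json.dumps(observation, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Without canonicalisation, two semantically identical documents with different key order would produce different digests and break transparency proofs.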
## Workstream D — Console & Search Surfaces (CONSOLE-23)
Task ID | State | Exit criteria / notes | Owners
--- | --- | --- | ---
CONCELIER-CONSOLE-23-001 `Advisory aggregation views` | TODO | `/console/advisories` returns grouped linksets with per-source severity/status chips plus `{documentId, observationPath}` provenance references (matching GHSA + Red Hat CVE browser expectations); depends on CONCELIER-LNM-21-201/202. | Concelier WebService Guild · BE-Base Platform Guild
CONCELIER-CONSOLE-23-002 `Dashboard deltas API` | TODO | Deterministic “new/modified/conflicting” sets referencing linkset IDs and field paths rather than computed verdicts; depends on 23-001. | Concelier WebService Guild
CONCELIER-CONSOLE-23-003 `Search fan-out helpers` | TODO | CVE/GHSA/PURL lookups return observation excerpts, provenance anchors, and cache hints so tenants can preview evidence safely; reuse structured field taxonomy from Workstream A. | Concelier WebService Guild
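The determinism requirement in CONCELIER-CONSOLE-23-002 is largely sorted set arithmetic over linkset fingerprints. A sketch under assumptions (snapshots as `linksetId -> fingerprint` maps; per-source conflict detection omitted):

```python
def advisory_deltas(previous: dict, current: dict) -> dict:
    """Sketch of deterministic dashboard deltas between two snapshots.

    Classifies linksets as new (absent before) or modified (fingerprint
    changed). Results are sorted so repeated runs over the same two
    snapshots emit byte-identical payloads.
    """
    new = sorted(k for k in current if k not in previous)
    modified = sorted(k for k in current if k in previous and current[k] != previous[k])
    return {"new": new, "modified": modified}
```

The "conflicting" bucket from the task description would need per-source field values rather than a single fingerprint, so it is deliberately left out of this sketch.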
## Workstream E — Tenant Scope & AOC Guardrails
Task ID | State | Exit criteria / notes | Owners
--- | --- | --- | ---
CONCELIER-CORE-AOC-19-013 `Authority tenant scope smoke coverage` | TODO | Expand smoke/e2e suites so Authority tokens + tenant headers are mandatory for ingest/read paths (including the new provenance endpoint). Must assert no merge-side effects and that provenance anchors always round-trip. | Concelier Core Guild (src/Concelier/__Libraries/StellaOps.Concelier.Core)
## Recent Updates
- 2025-11-12: CONCELIER-AIAI-31-003 shipped OTEL counters for Advisory AI chunk traffic; dashboards now display cache hit ratios and guardrail blocks per tenant.
- 2025-11-13: Sprint rebaseline complete; structured field scope locked to canonical model + provenance anchors, matching competitor schemas for short-term parity.
## Current status (2025-11-13)
| Workstream | State | Notes |
| --- | --- | --- |
| A – Advisory AI structured fields | 🔶 DOING | CONCELIER-AIAI-31-002 code work in progress; schema locked, telemetry landed, release blocked on Link-Not-Merge + CARTO schemas. |
| B – Mirror & offline provenance | 🔴 BLOCKED | No work can start until MIRROR-CRT-56-001 staffing and Offline Kit bundle contracts finalize. |
| C – Transparency & Attestor | 🔴 BLOCKED | Waiting on Workstream A output plus attestation backlog sequencing (Sprint 110/Excititor). |
| D – Console & search surfaces | 🔶 WATCHING | Scoped, but dependent on Link-Not-Merge + the Console backlog; preparing schema docs in parallel. |
| E – Tenant scope & AOC guardrails | 🔶 WATCHING | Requires Authority smoke coverage; no active engineering yet, but tests are ready to clone once the structured endpoint stabilizes. |
## Blockers & dependencies
| Dependency | Impacted work | Owner(s) | Status |
| --- | --- | --- | --- |
| Link-Not-Merge schema (`CONCELIER-LNM-21-*`, `CARTO-GRAPH-21-002`) | Workstream A release, Workstream D APIs | Concelier Core · Cartographer Guild · Platform Events Guild | Review scheduled 2025-11-14; approval required before shipping structured fields/console APIs. |
| MIRROR-CRT-56-001 staffing | Workstream B (AIRGAP-56/57/58) | Mirror Creator Guild · Exporter Guild · AirGap Time Guild | Owner not assigned (per Sprint 110); kickoff on 2025-11-15 must resolve. |
| Evidence Locker attestation contract | Workstream C (ATTEST-73) | Evidence Locker Guild · Concelier Core | Needs alignment with Excititor attestation plan on 2025-11-15. |
| Authority scope smoke coverage (`CONCELIER-CORE-AOC-19-013`) | Workstream E | Concelier Core · Authority Guild | Waiting on structured endpoint readiness + AUTH-SIG-26-001 validation. |
## Next actions (target: 2025-11-16)
| Workstream | Owner(s) | Action | Status |
| --- | --- | --- | --- |
| A | Concelier WebService Guild | Finish `ResolveAdvisoryAsync`, cache key update, and structured response builder; prep PR for review once schema approved. | In progress |
| A | Docs Guild | Draft structured field schema appendix referencing provenance anchors for Advisory AI docs. | Pending |
| B | Concelier Core + Mirror leadership | Join 2025-11-15 kickoff, capture MIRROR-CRT-56-001 owner, and align bundle metadata contract. | Pending |
| C | Concelier Core + Evidence Locker | Produce attestation payload outline so ATTEST-73-001 can start immediately after sequencing meeting. | Pending |
| D | Concelier WebService Guild | Prepare `/console/advisories` API spec (field list, provenance references) so implementation can begin once Link-Not-Merge clears. | Drafting |
| E | Concelier Core | Clone Authority smoke suites to cover new structured endpoint once Workstream A enters review. | Pending |
## Standup prompts
1. Has Link-Not-Merge schema review resolved all blocking comments? If not, what fields remain at risk?
2. Who will own MIRROR-CRT-56-001 after the 2025-11-15 kickoff, and do we have staffing for follow-on AIRGAP tasks?
3. Did Evidence Locker accept the attestation contract draft, enabling ATTEST-73-001 to move forward?
4. Are Authority/AOC smoke tests ready to clone once structured fields release, or do we need additional scope from AUTH-SIG-26-001?
## Risks (snapshot 2025-11-13)
| Risk | Impact | Mitigation / owner |
| --- | --- | --- |
| Link-Not-Merge schema slips past 2025-11-14 | Structured fields + console APIs stay unreleased, blocking Advisory AI and Console surfaces. | Push for schema sign-off during 2025-11-14 review; prep fallback adapter if necessary. |
| Mirror staffing unresolved | AirGap provenance work (AIRGAP-56/57/58) cannot start, delaying Offline Kit parity. | Escalate at 2025-11-15 kickoff; consider borrowing engineers from Evidence Locker or Export guilds. |
| Evidence Locker contract delay | ATTEST-73 work cannot begin, leaving Advisory AI without attested provenance. | Align with Excititor/Evidence Locker owners during 2025-11-15 sequencing session; draft interim spec. |
| Authority smoke coverage gap | AOC guardrails may regress when structured endpoint ships. | Schedule paired testing with Authority guild once Workstream A PR is ready. |
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
EXCITITOR-AIAI-31-001 `Justification enrichment` | DONE (2025-11-12) | Expose normalized VEX justifications, product scope trees, and paragraph/JSON-pointer anchors via `VexObservation` projections so Advisory AI can cite raw evidence without invoking any consensus logic. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-002 `VEX chunk API` | TODO | Ship `/vex/evidence/chunks` with tenant/policy filters, streaming raw statements, signature metadata, and scope scores for Retrieval-Augmented Generation clients; the response must stay aggregation-only and reference observation/linkset IDs. Depends on EXCITITOR-AIAI-31-001. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-003 `Telemetry & guardrails` | IN REVIEW (2025-11-13) | Instrument the new evidence APIs with request counters, chunk sizes, signature verification failure meters, and AOC guard violations so Lens/Advisory AI teams can detect misuse quickly. Depends on EXCITITOR-AIAI-31-002. | Excititor WebService Guild, Observability Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIAI-31-004 `Schema & docs alignment` | TODO | Update OpenAPI/SDK/docs to codify the Advisory-AI evidence contract (fields, determinism guarantees, pagination) and describe how consumers map observation IDs back to raw storage. | Excititor WebService Guild, Docs Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-AIRGAP-56-001 `Mirror-first ingestion` | TODO | Wire mirror bundle ingestion paths that preserve upstream digests, bundle IDs, and provenance metadata exactly so offline Advisory-AI/Lens deployments can replay evidence with AOC parity. | Excititor Core Guild (src/Excititor/__Libraries/StellaOps.Excititor.Core)
EXCITITOR-AIRGAP-57-001 `Sealed-mode enforcement` | TODO | Enforce sealed-mode policies that disable external connectors, emit actionable remediation errors, and record staleness annotations that Advisory AI can surface as “evidence freshness” signals. Depends on EXCITITOR-AIRGAP-56-001. | Excititor Core Guild, AirGap Policy Guild (src/Excititor/__Libraries/StellaOps.Excititor.Core)
EXCITITOR-ATTEST-73-001 `VEX attestation payloads` | TODO | Emit attestation pay…
EXCITITOR-ATTEST-73-002 `Chain provenance` | TODO | Provide APIs that link attestation IDs back to observation/linkset/product tuples, enabling Advisory AI to cite provenance without any derived verdict. Depends on EXCITITOR-ATTEST-73-001. | Excititor Core Guild (src/Excititor/__Libraries/StellaOps.Excititor.Core)
EXCITITOR-CONN-TRUST-01-001 `Connector provenance parity` | TODO | Update MSRC, Oracle, Ubuntu, and Stella mirror connectors to emit signer fingerprints, issuer tiers, and bundle references while remaining aggregation-only; document how Lens consumers should interpret these hints. | Excititor Connectors Guild (src/Excititor/__Libraries/StellaOps.Excititor.Connectors.*)
## Task clusters & readiness
### Advisory-AI evidence APIs
- **Delivered:** `EXCITITOR-AIAI-31-001` (`/v1/vex/observations/{vulnerabilityId}/{productKey}` projection API) landed 2025-11-12 with normalized justifications and anchors.
- **In flight:** `EXCITITOR-AIAI-31-003` (instrumentation + guardrails) and `EXCITITOR-AIAI-31-004` (OpenAPI/SDK/docs alignment).
- **Dependencies:** needs `EXCITITOR-AIAI-31-002` (projection service plumbing), confirmed completed via the architecture doc; the observability pipeline requires Ops dashboards.
- **Ready-to-start checklist:** finalize request/response examples in OpenAPI, add replayable telemetry fixtures, and attach Advisory-AI contract summary to this sprint doc.
### AirGap ingestion & portable bundles
- **Scope:** `EXCITITOR-AIRGAP-56/57/58` (mirror-first ingestion, sealed-mode enforcement, portable evidence bundles).
- **Dependencies:** relies on Attestor DSSE verification (Sprint 100.A) and AirGap policy toggles; Evidence Locker partnership needed for portable bundle format.
- **Ready-to-start checklist:**
1. Secure mirror bundle schema from Export Center (Sprint 162) and attach sample manifests.
2. Document sealed-mode error catalog + diagnostics surfaced to Advisory AI/Lens during offline enforcement.
3. Define bundle manifest → timeline ID mapping for Advisory AI, referencing Export Center + TimelineIndexer contracts.
### Attestation & provenance chain
- **Tasks:** `EXCITITOR-ATTEST-01-003`, `EXCITITOR-ATTEST-73-001`, `EXCITITOR-ATTEST-73-002`.
- **Dependencies:** Attestor service readiness (Sprint 100.A) plus DSSE payload contract; requires `IVexAttestationVerifier` plan doc referenced in repo.
- **Ready-to-start checklist:**
1. Finish verifier test harness & deterministic diagnostics.
2. Capture sample attestation payload spec (supplier identity, justification summary, scope metadata) and attach here.
3. Describe provenance linkage for `/v1/vex/attestations/{id}` + observation/linkset/product tuples in docs.
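For the payload spec in step 2, the envelope side is already fixed: DSSE signatures cover the pre-authentication encoding (PAE) defined by the DSSE spec, independent of whatever payload we settle on. A sketch of that encoding:

```python
def dsse_pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE v1 pre-authentication encoding: the exact bytes that get signed.

    PAE(type, body) = "DSSEv1" SP len(type) SP type SP len(body) SP body,
    with lengths counted in bytes, per the DSSE specification.
    """
    pt = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(pt)).encode("ascii"), pt,
        str(len(payload)).encode("ascii"), payload,
    ])
```

Because both lengths are explicit, no delimiter inside the payload can confuse a verifier, which is why attestation payload contents can evolve without touching the signing path.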
### Connector provenance parity
- **Task:** `EXCITITOR-CONN-TRUST-01-001` (MSRC/Oracle/Ubuntu/Stella connectors).
- **Dependencies:** Source feeds must already emit signer metadata; align with AOC aggregator guardrails; ensure docs outline how Lens consumes trust hints.
- **Ready-to-start checklist:**
1. Inventory current connector coverage + signer metadata availability.
2. Define signer fingerprint + issuer tier schema shared across connectors (document in module README).
3. Update acceptance tests under `src/Excititor/__Libraries/StellaOps.Excititor.Connectors.*` to assert provenance payload.
## Dependencies & blockers
- Attestor DSSE verification (`EXCITITOR-ATTEST-01-003`, Sprint 100.A) gates `EXCITITOR-ATTEST-73-001/002` and portable bundles.
- Export Center mirror bundle schema (Sprint 162) and EvidenceLocker portable bundle format (Sprint 160/161) must land before `EXCITITOR-AIRGAP-56/58` can proceed; target sync 2025-11-15.
- Observability stack (Ops/Signals wave) must expose span/metric sinks before `EXCITITOR-AIAI-31-003` instrumentation merges; waiting on Ops telemetry MR.
- Security review pending for connector provenance fingerprints to ensure no secrets leak in aggregation-only mode; Docs/Security review scheduled 2025-11-18.
## Documentation references
- `docs/modules/excititor/architecture.md` — authoritative data model, APIs, and guardrails for Excititor.
- `docs/modules/excititor/README.md#latest-updates` — consensus beta + Advisory-AI integration context.
- `docs/modules/excititor/mirrors.md` — AirGap/mirror ingestion checklist referenced by `EXCITITOR-AIRGAP-56/57`.
- `docs/modules/excititor/operations/*` — observability + sealed-mode runbooks feeding `EXCITITOR-AIAI-31-003` instrumentation requirements.
- `docs/modules/excititor/implementation_plan.md` — per-module workstream alignment table (mirrors Sprint 200 documentation process).
## Action tracker
| Focus | Action | Owner(s) | Due | Status |
| --- | --- | --- | --- | --- |
| Advisory-AI APIs | Publish finalized OpenAPI schema + SDK notes for projection API (`EXCITITOR-AIAI-31-004`). | Excititor WebService Guild · Docs Guild | 2025-11-15 | In review (draft shared 2025-11-13) |
| Observability | Wire metrics/traces for `/v1/vex/observations/**` and document dashboards (`EXCITITOR-AIAI-31-003`). | Excititor WebService Guild · Observability Guild | 2025-11-16 | Blocked (code + ops runbook ready; waiting on Ops span sink deploy) |
| AirGap | Capture mirror bundle schema + sealed-mode toggle requirements for `EXCITITOR-AIRGAP-56/57`. | Excititor Core Guild · AirGap Policy Guild | 2025-11-17 | Pending |
| Portable bundles | Draft bundle manifest + EvidenceLocker linkage notes for `EXCITITOR-AIRGAP-58-001`. | Excititor Core Guild · Evidence Locker Guild | 2025-11-18 | Pending |
| Attestation | Complete verifier suite + diagnostics for `EXCITITOR-ATTEST-01-003`. | Excititor Attestation Guild | 2025-11-16 | In progress (verifier harness 80% complete) |
| Connectors | Inventory signer metadata + plan rollout for MSRC/Oracle/Ubuntu/Stella connectors (`EXCITITOR-CONN-TRUST-01-001`). | Excititor Connectors Guild | 2025-11-19 | Pending (schema draft expected 2025-11-14) |
## Upcoming checkpoints (UTC)
| Date | Session / Owner | Goal | Fallback |
| --- | --- | --- | --- |
| 2025-11-14 | Connector provenance schema review (Connectors + Security Guilds) | Approve signer fingerprint + issuer tier schema for `EXCITITOR-CONN-TRUST-01-001`. | If schema not ready, keep task blocked and request interim metadata list from connectors. |
| 2025-11-15 | Export Center mirror schema sync (Export Center + Excititor + AirGap) | Receive mirror bundle manifest to unblock `EXCITITOR-AIRGAP-56/57` (schema still pending). | If delayed, escalate to Sprint 162 leads and use placeholder spec with clearly marked TODO. |
| 2025-11-16 | Attestation verifier rehearsal (Excititor Attestation Guild) | Demo `IVexAttestationVerifier` harness + diagnostics to unblock `EXCITITOR-ATTEST-73-*`. | If issues persist, log BLOCKED status in attestation plan and re-forecast completion. |
| 2025-11-18 | Observability span sink deploy (Ops/Signals Guild) | Enable telemetry pipeline needed for `EXCITITOR-AIAI-31-003`. | If deploy slips, implement temporary counters/logs and keep action tracker flagged as blocked. |
## Risks & mitigations
| Risk | Severity | Impact | Mitigation |
| --- | --- | --- | --- |
| Observability sinks not ready for `EXCITITOR-AIAI-31-003` | Medium | Advisory-AI misuse would go undetected | Coordinate with Ops to reuse Signals dashboards; ship log-only fallback. |
| Mirror bundle schema slips (Export Center/AirGap) | High | Blocks sealed-mode + portable bundles | Use placeholder schema from `docs/modules/export-center/architecture.md` and note deltas; escalate to Export Center leads. |
| Attestation verifier misses 2025-11-16 target | High | Attestation payload tasks cannot start | Daily stand-ups with Attestation Guild; parallelize diagnostics while verifier finalizes. |
| Connector signer metadata incomplete | Medium | Trust parity story delayed | Stage connector-specific TODOs; allow partial rollout with feature flags. |
## Status log
- 2025-11-12 — Snapshot refreshed; EXCITITOR-AIAI-31-001 marked DONE, remaining tasks pending on observability, AirGap bundle schemas, and attestation verifier completion.
- 2025-11-13 — Added readiness checklists per task cluster plus action tracker; awaiting outcomes from Export Center mirror schema delivery and Attestor verifier rehearsals before flipping AirGap/Attestation tasks to DOING.
- 2025-11-13 (EOD) — OpenAPI draft for `EXCITITOR-AIAI-31-004` shared for review; Observability wiring blocked until Ops deploys span sink, noted above.
- 2025-11-14 — Connector provenance schema review scheduled; awaiting schema draft delivery before meeting. Export Center mirror schema still pending, keeping `EXCITITOR-AIRGAP-56/57` blocked.
- 2025-11-14 — `EXCITITOR-AIAI-31-003` instrumentation (request counters, chunk histogram, signature failure + guard-violation meters) merged into Excititor WebService; telemetry export remains blocked on Ops span sink rollout.
- 2025-11-14 (PM) — Published `docs/modules/excititor/operations/observability.md` documenting the new evidence metrics so Ops/Lens can hook dashboards while waiting for the span sink deployment.
> 2025-11-12: EXCITITOR-AIAI-31-001 delivered `/v1/vex/observations/{vulnerabilityId}/{productKey}` backed by the new `IVexObservationProjectionService`, returning normalized statements (scope tree, anchors, document metadata) so Advisory AI and Console can cite raw VEX evidence without touching consensus logic.
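The paragraph/JSON-pointer anchors returned by the projection API let consumers dereference the exact raw statement they cite. A sketch of resolving an RFC 6901 pointer against a raw VEX document (the document shape below is hypothetical, not the published contract):

```python
def resolve_json_pointer(document, pointer: str):
    """Resolve an RFC 6901 JSON Pointer such as '/statements/0/justification'."""
    if pointer == "":
        return document  # empty pointer refers to the whole document
    node = document
    for token in pointer.lstrip("/").split("/"):
        # Unescape per RFC 6901: '~1' -> '/', then '~0' -> '~'.
        token = token.replace("~1", "/").replace("~0", "~")
        node = node[int(token)] if isinstance(node, list) else node[token]
    return node

raw_vex = {  # hypothetical raw statement shape for illustration only
    "statements": [
        {"justification": "component_not_present",
         "products": ["pkg:gem/rails@7.1.0"]}
    ]
}
```

An anchor like `/statements/0/justification` therefore pins a citation to one field of one raw statement, with no consensus-derived indirection in between.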
EXCITITOR-GRAPH-21-002 `Overlay enrichment` | BLOCKED (2025-10-27) | Ensure over…
EXCITITOR-GRAPH-21-005 `Inspector indexes` | BLOCKED (2025-10-27) | Add indexes/materialized views for VEX lookups by PURL/policy to support Cartographer inspector performance; document migrations. Dependencies: EXCITITOR-GRAPH-21-002. | Excititor Storage Guild (src/Excititor/__Libraries/StellaOps.Excititor.Storage.Mongo)
EXCITITOR-GRAPH-24-101 `VEX summary API` | TODO | Provide endpoints delivering VEX status summaries per component/asset for Vuln Explorer integration. Dependencies: EXCITITOR-GRAPH-21-005. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-GRAPH-24-102 `Evidence batch API` | TODO | Add batch VEX observation retrieval optimized for Graph overlays/tooltips. Dependencies: EXCITITOR-GRAPH-24-101. | Excititor WebService Guild (src/Excititor/StellaOps.Excititor.WebService)
EXCITITOR-LNM-21-001 `VEX observation model` | IN REVIEW (2025-11-14) | Schema defined in `docs/modules/excititor/vex_observations.md`, covering fields, indexes, determinism rules, and AOC metadata. `DOCS-LNM-22-002` can now consume this contract. | Excititor Core Guild (docs/modules/excititor/vex_observations.md)
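For context on the index work the schema calls out, a hedged sketch of the kind of index key documents an immutable `vex_observations` collection implies. Field and index names here are assumptions drawn from this sprint doc; `docs/modules/excititor/vex_observations.md` holds the authoritative contract:

```python
# Hypothetical index keys for the immutable `vex_observations` collection,
# expressed as the (field, direction) tuples a pymongo create_index call takes.
VEX_OBSERVATION_INDEXES = [
    # Tenant-scoped lookup behind /v1/vex/observations/{vulnerabilityId}/{productKey}.
    {"keys": [("tenantId", 1), ("vulnerabilityId", 1), ("productKey", 1)],
     "options": {"name": "vex_obs_tenant_vuln_product"}},
    # Provenance queries: which observations came from a given upstream digest.
    {"keys": [("provenance.upstreamDigest", 1)],
     "options": {"name": "vex_obs_upstream_digest"}},
    # Determinism/replay: stable, unique ordering by ingest sequence per tenant.
    {"keys": [("tenantId", 1), ("ingestSequence", 1)],
     "options": {"name": "vex_obs_tenant_sequence", "unique": True}},
]

def index_names(indexes):
    """List the declared index names, e.g. for a migration smoke check."""
    return [ix["options"]["name"] for ix in indexes]
```

Keeping the tenant field first in each compound key matches the tenant-scoped guardrails elsewhere in this sprint: every query path starts from `tenantId`.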
Focus: Policy & Reasoning focus on Findings (phase I).
| 7 | LEDGER-AIRGAP-57-001 | TODO | Link findings evidence snapshots to portable evidence bundles and ensure cross-enclave verification works (Deps: LEDGER-AIRGAP-56-002) | Findings Ledger Guild, Evidence Locker Guild / src/Findings/StellaOps.Findings.Ledger |
| 8 | LEDGER-AIRGAP-58-001 | TODO | Emit timeline events for bundle import impacts (new findings, remediation changes) with sealed-mode context (Deps: LEDGER-AIRGAP-57-001) | Findings Ledger Guild, AirGap Controller Guild / src/Findings/StellaOps.Findings.Ledger |
| 9 | LEDGER-ATTEST-73-001 | TODO | Persist pointers from findings to verification reports and attestation envelopes for explainability | Findings Ledger Guild, Attestor Service Guild / src/Findings/StellaOps.Findings.Ledger |
## Findings.I scope & goals
- Deliver ledger observability baselines (`LEDGER-29-007/008/009`) so Policy teams can trust ingestion, anchoring, and replay at >5M findings/tenant.
- Extend ledger provenance to cover orchestrator jobs, air-gapped bundle imports, and attestation evidence (`LEDGER-34-101`, `LEDGER-AIRGAP-*`, `LEDGER-ATTEST-73-001`).
- Ship deployment collateral (Helm/Compose, backup/restore, offline kit) and documentation so downstream guilds can adopt without bespoke guidance.
### Entry criteria
- Sprint 110.A AdvisoryAI deliverables must be complete (raw findings parity, provenance contracts).
- Observability Guild approves metric names/labels for `ledger_*` series.
- Mirror bundle schemas (AirGap kits) published so `LEDGER-AIRGAP-*` tasks can reference stable fields.
### Exit criteria
- Metrics/logs/dashboards live in ops telemetry packs with alert wiring.
- Determinism/load harness produces signed report for 5M findings/tenant scenario.
- Deployment manifests + offline kit instructions reviewed by DevOps/AirGap guilds.
- Ledger records referential pointers to orchestrator runs, bundle provenance, and attestation envelopes.
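The determinism exit criterion implies a replayable digest over the event stream: canonicalize each event, hash in ingest order, and compare digests across replays. A minimal sketch (the canonicalization rules and hash choice are assumptions, not the harness contract):

```python
import hashlib
import json

def replay_digest(events) -> str:
    """Fold a ledger event stream into one deterministic digest.

    Each event is canonicalized (sorted keys, compact separators) so two
    replays of the same stream hash identically; any divergence in content
    or ordering flips the digest.
    """
    h = hashlib.sha256()
    for event in events:
        canonical = json.dumps(event, sort_keys=True,
                               separators=(",", ":"), ensure_ascii=False)
        h.update(canonical.encode("utf-8"))
        h.update(b"\n")  # unambiguous event boundary
    return h.hexdigest()
```

At the 5M-findings scale this streams in constant memory, and the final hex digest is the value a signed determinism report would carry.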
## Task clusters & owners
| Cluster | Linked tasks | Owners | Status snapshot | Notes |
| --- | --- | --- | --- | --- |
| Observability & diagnostics | LEDGER-29-007/008 | Findings Ledger Guild · Observability Guild · QA Guild | TODO | Metric/log spec captured in `docs/modules/findings-ledger/observability.md`; determinism harness spec added in `docs/modules/findings-ledger/replay-harness.md`; sequencing captured in `docs/modules/findings-ledger/implementation_plan.md`; awaiting Observability sign-off + Grafana JSON export (target 2025-11-15). |
| Deployment & backup | LEDGER-29-009 | Findings Ledger Guild · DevOps Guild | TODO | Baseline deployment/backup guide published (`docs/modules/findings-ledger/deployment.md`); need to align Compose/Helm overlays + automate migrations. |
| Orchestrator provenance | LEDGER-34-101 | Findings Ledger Guild | TODO | Blocked until Orchestrator exports job ledger payload; coordinate with Sprint 150.A. |
| Air-gap provenance & staleness | LEDGER-AIRGAP-56/57/58 series | Findings Ledger Guild · AirGap Guilds · Evidence Locker Guild | TODO | Requirements captured in `docs/modules/findings-ledger/airgap-provenance.md`; blocked on mirror bundle schema freeze + AirGap controller inputs. |
| Attestation linkage | LEDGER-ATTEST-73-001 | Findings Ledger Guild · Attestor Service Guild | TODO | Waiting on attestation payload pointers from NOTIFY-ATTEST-74-001 work to reuse DSSE IDs. |
## Milestones & dependencies
| Target date | Milestone | Dependency / owner | Notes |
| --- | --- | --- | --- |
| 2025-11-15 | Metrics + dashboard schema sign-off | Observability Guild | Unblocks LEDGER-29-007 instrumentation PR. |
| 2025-11-18 | Determinism + replay harness dry-run at 5M findings | QA Guild | Required before LEDGER-29-008 can close. |
| 2025-11-20 | Helm/Compose manifests + backup doc review | DevOps Guild · AirGap Controller Guild | Needed for LEDGER-29-009 + LEDGER-AIRGAP-56-001. |
| 2025-11-22 | Mirror bundle provenance schema freeze | AirGap Time Guild | Enables LEDGER-AIRGAP-56/57/58 sequencing. |
| 2025-11-25 | Orchestrator ledger export contract signed | Orchestrator Guild | Prereq for LEDGER-34-101 linkage. |
## Risks & mitigations
- **Metric churn** — Observability schema changes could slip the schedule. Mitigation: lock metric names by Nov 15 and document in `docs/observability/policy.md`.
- **Replay workload** — 5M findings load tests may exceed lab capacity. Mitigation: leverage existing QA replay rig, capture CPU/memory budgets for runbooks.
- **Air-gap drift** — Mirror bundle format still moving. Mitigation: version provenance schema, gate LEDGER-AIRGAP-* merge until doc + manifest updates reviewed.
- **Cross-guild lag** — Orchestrator/Attestor dependencies may delay provenance pointers. Mitigation: weekly sync notes in sprint log; add feature flags so ledger work can merge behind toggles.
## External dependency tracker
| Dependency | Current state (2025-11-13) | Impact |
| --- | --- | --- |
| Sprint 110.A AdvisoryAI | DONE | Enables Findings.I start; monitor regressions. |
| Observability metric schema | IN REVIEW | Blocks LEDGER-29-007/008 dashboards. |
| Orchestrator job export contract | TODO | Required for LEDGER-34-101; tracked in Sprint 150.A wave table. |
| Mirror bundle schema | DRAFT | Needed for LEDGER-AIRGAP-56/57/58 messaging + manifests. |
| Attestation pointer schema | DRAFT | Needs alignment with NOTIFY-ATTEST-74-001 to reuse DSSE IDs. |
## Coordination log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-13 09:30 | Documented Findings.I scope, milestones, and external dependencies; awaiting Observability + Orchestrator inputs before flipping any tasks to DOING. | Findings Ledger Guild |
| 2025-11-13 10:45 | Published `docs/modules/findings-ledger/observability.md` detailing metrics/logs/alerts required for LEDGER-29-007/008; sent draft to Observability Guild for review. | Findings Ledger Guild |
| 2025-11-13 11:20 | Added `docs/modules/findings-ledger/deployment.md` covering Compose/Helm rollout, migrations, backup/restore, and offline workflows for LEDGER-29-009. | Findings Ledger Guild |
| 2025-11-13 11:50 | Added `docs/modules/findings-ledger/replay-harness.md` outlining fixtures, CLI workflow, and reporting for LEDGER-29-008 determinism tests. | Findings Ledger Guild |
| 2025-11-13 12:05 | Drafted `docs/modules/findings-ledger/implementation_plan.md` summarizing phase sequencing and dependencies for Findings.I. | Findings Ledger Guild |
| 2025-11-13 12:25 | Authored `docs/modules/findings-ledger/airgap-provenance.md` detailing bundle provenance, staleness, evidence snapshot, and timeline requirements for LEDGER-AIRGAP-56/57/58. | Findings Ledger Guild |
Dependency: Sprint 135 - 6. Scanner.VI — Scanner & Surface focus on Scanner (p…
| Task ID | State | Summary | Owner / Source | Depends On |
| --- | --- | --- | --- | --- |
| `SCANNER-ENV-01` | TODO (2025-11-06) | Replace ad-hoc environment reads with `StellaOps.Scanner.Surface.Env` helpers for cache roots and CAS endpoints. | Scanner Worker Guild (src/Scanner/StellaOps.Scanner.Worker) | — |
| `SCANNER-ENV-02` | TODO (2025-11-06) | Wire Surface.Env helpers into WebService hosting (cache roots, feature flags) and document configuration. | Scanner WebService Guild, Ops Guild (src/Scanner/StellaOps.Scanner.WebService) | SCANNER-ENV-01 |
| `SCANNER-ENV-03` | TODO | Adopt Surface.Env helpers for plugin configuration (cache roots, CAS endpoints, feature toggles). | BuildX Plugin Guild (src/Scanner/StellaOps.Scanner.Sbomer.BuildXPlugin) | SCANNER-ENV-02 |
| `SURFACE-ENV-01` | DONE (2025-11-13) | Draft `surface-env.md` enumerating environment variables, defaults, and air-gap behaviour for Surface consumers. | Scanner Guild, Zastava Guild (src/Scanner/__Libraries/StellaOps.Scanner.Surface.Env) | — |
| `SURFACE-ENV-02` | DOING (2025-11-02) | Implement strongly-typed env accessors with validation and deterministic logging inside `StellaOps.Scanner.Surface.Env`. | Scanner Guild (src/Scanner/__Libraries/StellaOps.Scanner.Surface.Env) | SURFACE-ENV-01 |
| `SURFACE-ENV-03` | TODO | Adopt the env helper across Scanner Worker/WebService/BuildX plug-ins. | Scanner Guild (src/Scanner/__Libraries/StellaOps.Scanner.Surface.Env) | SURFACE-ENV-02 |
| `SURFACE-ENV-04` | TODO | Wire env helper into Zastava Observer/Webhook containers. | Zastava Guild (src/Scanner/__Libraries/StellaOps.Scanner.Surface.Env) | SURFACE-ENV-02 |
| Task ID | State | Summary | Owner / Source | Depends On |
| --- | --- | --- | --- | --- |
| `SCANNER-ENG-0008` | TODO | Maintain EntryTrace heuristic cadence per `docs/benchmarks/scanner/scanning-gaps-stella-misses-from-competitors.md`, including quarterly pattern reviews + explain-trace updates. | EntryTrace Guild, QA Guild (src/Scanner/__Libraries/StellaOps.Scanner.EntryTrace) | — |
| `SCANNER-ENG-0009` | DONE (2025-11-13) | Ruby analyzer parity shipped: runtime graph + capability signals, observation payload, Mongo-backed `ruby.packages` inventory, CLI/WebService surfaces, and plugin manifest bundles for Worker loadout. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ANALYZERS-RUBY-28-001..012 |
| `SCANNER-ENG-0010` | TODO | Ship the PHP analyzer pipeline (composer lock, autoload graph, capability signals) to close comparison gaps. | PHP Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Php) | SCANNER-ANALYZERS-PHP-27-001..012 |
| `SCANNER-ENG-0011` | TODO | Scope the Deno runtime analyzer (lockfile resolver, import graphs) based on competitor techniques to extend beyond Sprint 130 coverage. | Language Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Deno) | — |
| `SCANNER-ENG-0012` | TODO | Evaluate Dart analyzer requirements (pubspec parsing, AOT artifacts) and split implementation tasks. | Language Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Dart) | — |
| `SCANNER-ENG-0013` | TODO | Plan Swift Package Manager coverage (Package.resolved, xcframeworks, runtime hints) with policy hooks. | Swift Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Swift) | — |
| `SCANNER-ENG-0014` | TODO | Align Kubernetes/VM target coverage between Scanner and Zastava per comparison findings; publish joint roadmap. | Runtime Guild, Zastava Guild (docs/modules/scanner) | — |
| `SCANNER-ENG-0015` | DONE (2025-11-13) | DSSE/Rekor operator playbook published (`docs/modules/scanner/operations/dsse-rekor-operator-guide.md`) with config/env tables, rollout phases, runbook snippets, offline verification steps, and SLA/alert guidance. | Export Center Guild, Scanner Guild (docs/modules/scanner) | — |
| `SCANNER-ENG-0015` | DOING (2025-11-09) | Document DSSE/Rekor operator enablement guidance and rollout levers surfaced in the gap analysis. | Export Center Guild, Scanner Guild (docs/modules/scanner) | — | | `SCANNER-ENG-0015` | DONE (2025-11-13) | DSSE/Rekor operator playbook published (`docs/modules/scanner/operations/dsse-rekor-operator-guide.md`) with config/env tables, rollout phases, runbook snippets, offline verification steps, and SLA/alert guidance. | Export Center Guild, Scanner Guild (docs/modules/scanner) | — |
| `SCANNER-ENG-0016` | DONE (2025-11-10) | RubyLockCollector and vendor ingestion finalized: Bundler config overrides honoured, workspace lockfiles merged, vendor bundles normalised, and deterministic fixtures added. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0009 | | `SCANNER-ENG-0016` | DONE (2025-11-10) | RubyLockCollector and vendor ingestion finalized: Bundler config overrides honoured, workspace lockfiles merged, vendor bundles normalised, and deterministic fixtures added. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0009 |
| `SCANNER-ENG-0017` | DONE (2025-11-09) | Build the runtime require/autoload graph builder with tree-sitter Ruby per design §4.4 and integrate EntryTrace hints. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0016 | | `SCANNER-ENG-0017` | DONE (2025-11-09) | Build the runtime require/autoload graph builder with tree-sitter Ruby per design §4.4 and integrate EntryTrace hints. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0016 |
| `SCANNER-ENG-0018` | DONE (2025-11-09) | Emit Ruby capability + framework surface signals as defined in design §4.5 with policy predicate hooks. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0017 | | `SCANNER-ENG-0018` | DONE (2025-11-09) | Emit Ruby capability + framework surface signals as defined in design §4.5 with policy predicate hooks. | Ruby Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0017 |
| `SCANNER-ENG-0019` | DOING (2025-11-10) | Ship Ruby CLI verbs (`stella ruby inspect|resolve`) and Offline Kit packaging per design §4.6. | Ruby Analyzer Guild, CLI Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0016..0018 | | `SCANNER-ENG-0019` | DONE (2025-11-13) | Ruby CLI verbs now resolve inventories by scan ID, digest, or image reference; Scanner.WebService fallbacks + CLI client encoding ensure `--image` works for both digests and tagged references, and tests cover the new lookup flow. | Ruby Analyzer Guild, CLI Guild (src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby) | SCANNER-ENG-0016..0018 |
| `SCANNER-LIC-0001` | DONE (2025-11-10) | Tree-sitter licensing captured, `NOTICE.md` updated, and Offline Kit now mirrors `third-party-licenses/` with ruby artifacts. | Scanner Guild, Legal Guild (docs/modules/scanner) | SCANNER-ENG-0016 | | `SCANNER-LIC-0001` | DONE (2025-11-10) | Tree-sitter licensing captured, `NOTICE.md` updated, and Offline Kit now mirrors `third-party-licenses/` with ruby artifacts. | Scanner Guild, Legal Guild (docs/modules/scanner) | SCANNER-ENG-0016 |
| `SCANNER-POLICY-0001` | DONE (2025-11-10) | Ruby predicates shipped: Policy Engine exposes `sbom.any_component` + `ruby.*`, tests updated, DSL/offline-kit docs refreshed. | Policy Guild, Ruby Analyzer Guild (docs/modules/scanner) | SCANNER-ENG-0018 | | `SCANNER-POLICY-0001` | DONE (2025-11-10) | Ruby predicates shipped: Policy Engine exposes `sbom.any_component` + `ruby.*`, tests updated, DSL/offline-kit docs refreshed. | Policy Guild, Ruby Analyzer Guild (docs/modules/scanner) | SCANNER-ENG-0018 |
| `SCANNER-CLI-0001` | DONE (2025-11-10) | Coordinate CLI UX/help text for new Ruby verbs and update CLI docs/golden outputs. | CLI Guild, Ruby Analyzer Guild (src/Cli/StellaOps.Cli) | SCANNER-ENG-0019 | | `SCANNER-CLI-0001` | DONE (2025-11-10) | Coordinate CLI UX/help text for new Ruby verbs and update CLI docs/golden outputs. | CLI Guild, Ruby Analyzer Guild (src/Cli/StellaOps.Cli) | SCANNER-ENG-0019 |
@@ -32,3 +32,14 @@
- `SCANNER-ENG-0009`: 2025-11-12 — Added bundler-version metadata to observation payloads, introduced the `complex-app` fixture to cover vendor caches/BUNDLE_PATH overrides, and taught `stellaops-cli ruby inspect` to print the observation banner (bundler/runtime/capabilities) alongside JSON `observation` blocks.
- `SCANNER-ENG-0009`: 2025-11-12 — Ruby package inventories now flow into `RubyPackageInventoryStore`; `SurfaceManifestStageExecutor` builds the package list, persists it via Mongo, and Scanner.WebService exposes the data through `GET /api/scans/{scanId}/ruby-packages` for CLI/Policy consumers.
- `SCANNER-ENG-0009`: 2025-11-12 — Ruby package inventory API now returns a typed envelope (scanId/imageDigest/generatedAt + packages) backed by `ruby.packages`; Worker/WebService DI registers the real store when Mongo is enabled, CLI `ruby resolve` consumes the new payload/warns when inventories are still warming, and docs/OpenAPI references were refreshed.
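The envelope fields above (scanId/imageDigest/generatedAt + packages) come from the task notes; the per-package fields (`name`, `version`, `source`) and the sample values below are hypothetical. A minimal Python sketch of how a consumer might parse and summarize the payload:

```python
import json

# Hypothetical response body from GET /api/scans/{scanId}/ruby-packages.
raw = """
{
  "scanId": "scan-1234",
  "imageDigest": "sha256:abc123",
  "generatedAt": "2025-11-12T10:00:00Z",
  "packages": [
    {"name": "rails", "version": "7.1.2", "source": "rubygems"},
    {"name": "rack", "version": "3.0.8", "source": "rubygems"}
  ]
}
"""

def summarize_inventory(payload: str) -> dict:
    """Parse the typed envelope and return a stable summary for display."""
    doc = json.loads(payload)
    # Sort for deterministic output regardless of server ordering.
    packages = sorted(doc.get("packages", []), key=lambda p: (p["name"], p["version"]))
    return {
        "scan_id": doc["scanId"],
        "digest": doc["imageDigest"],
        "count": len(packages),
        "names": [p["name"] for p in packages],
    }

print(summarize_inventory(raw))
```

The sort step mirrors the determinism expectations the tracker applies elsewhere (fixtures, golden outputs): consumers should not depend on server-side ordering.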
### Updates — 2025-11-13
- `SCANNER-ENG-0009`: Verified Worker DI registers `IRubyPackageInventoryStore` when Mongo is enabled and falls back to `NullRubyPackageInventoryStore` for in-memory/unit scenarios; confirmed Scanner.WebService endpoint + CLI client exercise the same store contract.
- `SCANNER-ENG-0009`: Cross-checked docs/manifests so operators can trace the new `/api/scans/{scanId}/ruby-packages` endpoint from `docs/modules/scanner/architecture.md` and the CLI reference; plugin drop under `plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Ruby` now mirrors the analyzer assembly + manifest for Worker hot-load.
- `SCANNER-ENG-0009`: Targeted tests cover analyzer fixtures, Worker persistence, and the WebService endpoint:
`dotnet test src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests.csproj --nologo --verbosity minimal`
`dotnet test src/Scanner/__Tests/StellaOps.Scanner.Worker.Tests/StellaOps.Scanner.Worker.Tests.csproj --nologo --verbosity minimal`
`dotnet test src/Scanner/__Tests/StellaOps.Scanner.WebService.Tests/StellaOps.Scanner.WebService.Tests.csproj --nologo --verbosity minimal --filter "FullyQualifiedName~RubyPackages"`
- `SCANNER-ENG-0015`: DSSE & Rekor operator guide expanded with configuration/env var map, rollout runbook, verification snippets, and alert/SLO recommendations so Export Center + Ops can enable attestations deterministically.
- `SCANNER-ENG-0019`: Scanner.WebService now maps digest/reference identifiers back to canonical scan IDs, CLI backend encodes path segments, and regression tests (`RubyPackagesEndpointsTests`, `StellaOps.Cli.Tests --filter Ruby`) cover the new resolution path so `stella ruby resolve --image` works for both digests and tagged references.
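The digest/reference lookup above hinges on the identifier surviving as a single URL path segment. The real client lives in the .NET CLI; this Python sketch (with hypothetical identifiers) just illustrates the encoding rule, assuming every reserved character is percent-encoded:

```python
from urllib.parse import quote

def encode_image_segment(identifier: str) -> str:
    """Percent-encode an image digest or tagged reference so it survives
    as one URL path segment (":" and "/" included via safe="")."""
    return quote(identifier, safe="")

# Hypothetical identifiers for illustration.
digest = "sha256:9f86d081884c7d659a2feaa0c55ad015"
tagged = "registry.example.com/team/app:1.2.3"

print(encode_image_segment(digest))   # colon becomes %3A
print(encode_image_segment(tagged))   # slashes become %2F, colon %3A
```

Without `safe=""`, `/` would pass through untouched and a tagged reference would split into multiple path segments on the server side.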


@@ -13,14 +13,14 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| 140.C Signals | Signals Guild · Authority Guild (for scopes) · Runtime Guild | Sprint 120.A AirGap; Sprint 130.A Scanner | DOING | API skeleton and callgraph ingestion are active; runtime facts endpoint still depends on the same shared prerequisites. |
| 140.D Zastava | Zastava Observer/Webhook Guilds · Security Guild | Sprint 120.A AirGap; Sprint 130.A Scanner | TODO | Surface.FS integration waits on Scanner surface caches; prep sealed-mode env helpers meanwhile. |
# Status snapshot (2025-11-13)
- **140.A Graph** GRAPH-INDEX-28-007/008/009/010 remain TODO while Scanner surface artifacts and SBOM projection schemas are outstanding; clustering/backfill/fixture scaffolds are staged but cannot progress until analyzer payloads arrive.
- **140.B SbomService** Advisory AI, console, and orchestrator tracks stay TODO; SBOM-SERVICE-21-001..004 remain BLOCKED waiting for Concelier Link-Not-Merge (`CONCELIER-GRAPH-21-001`) plus Cartographer schema (`CARTO-GRAPH-21-002`), and AirGap parity must be re-validated once schemas land. Teams are refining projection docs so we can flip to DOING as soon as payloads land.
- **140.C Signals** SIGNALS-24-001 shipped on 2025-11-09; SIGNALS-24-002 is DOING with callgraph retrieval live but CAS promotion + signed manifest tooling still pending; SIGNALS-24-003 is DOING after JSON/NDJSON ingestion merged, yet provenance/context enrichment and runtime feed reconciliation remain in-flight. Scoring/cache work (SIGNALS-24-004/005) stays BLOCKED until runtime uploads publish consistently and scope propagation validation (post `AUTH-SIG-26-001`) completes.
- **140.D Zastava** ZASTAVA-ENV/SECRETS/SURFACE tracks remain TODO because Surface.FS cache outputs from Scanner are still unavailable; guilds continue prepping Surface.Env helper adoption and sealed-mode scaffolding.
## Wave task tracker (refreshed 2025-11-13)
### 140.A Graph
@@ -79,7 +79,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| SIGNALS-24-002 | Promote callgraph CAS buckets to prod scopes, publish signed manifest metadata, document retention/GC policy, wire alerts for failed graph retrievals. | 2025-11-14 | Signals Guild, Platform Storage Guild |
| SIGNALS-24-003 | Finalize provenance/context enrichment (Authority scopes + runtime metadata), support NDJSON batch provenance, backfill existing facts, and validate AOC contract. | 2025-11-15 | Signals Guild, Runtime Guild, Authority Guild |
## Wave readiness checklist (2025-11-13)
| Wave | Entry criteria | Prep status | Next checkpoint |
| --- | --- | --- | --- |
@@ -88,7 +88,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| 140.C Signals | CAS promotion approval + runtime provenance contract + AUTH-SIG-26-001 sign-off. | HOST + callgraph retrieval merged; CAS/provenance work tracked in DOING table above. | 2025-11-13 runtime sync to approve CAS rollout + schema freeze. |
| 140.D Zastava | Surface.FS cache availability + Surface.Env helper specs published. | Env/secrets design notes ready; waiting for Scanner cache drop and Surface.FS API stubs. | 2025-11-15 Surface guild office hours to confirm helper adoption plan. |
### Signals DOING activity log (updates through 2025-11-13)
| Date | Update | Owners |
| --- | --- | --- |
@@ -96,7 +96,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| 2025-11-11 | Completed NDJSON ingestion soak test (JSON/NDJSON + gzip) and documented provenance enrichment mapping required from Authority scopes; open PR wiring AOC metadata pending review. | Signals Guild, Runtime Guild |
| 2025-11-09 | Runtime facts ingestion endpoint + streaming NDJSON support merged with sealed-mode gating; next tasks are provenance enrichment and scoring linkage. | Signals Guild, Runtime Guild |
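For reference, the JSON/NDJSON + gzip shape exercised by the soak test can be sketched in a few lines; the fact fields here are hypothetical stand-ins, not the real runtime schema:

```python
import gzip
import json

# Hypothetical runtime facts; the real schema lives with the Signals ingestion API.
facts = [
    {"event": "proc_start", "pid": 101},
    {"event": "lib_load", "pid": 101, "path": "/usr/lib/librack.so"},
]

# Producer side: one JSON object per line, then gzip the whole stream.
ndjson = "\n".join(json.dumps(f, sort_keys=True) for f in facts) + "\n"
compressed = gzip.compress(ndjson.encode("utf-8"))

# Ingestion side: decompress, split on newlines, parse each non-empty record.
decoded = [
    json.loads(line)
    for line in gzip.decompress(compressed).decode("utf-8").splitlines()
    if line
]
print(len(decoded))
```

NDJSON keeps each record independently parseable, which is what makes streaming ingestion and partial-batch recovery practical.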
## Dependency status watchlist (2025-11-13)
| Dependency | Status | Latest detail | Owner(s) / follow-up |
| --- | --- | --- | --- |
@@ -106,7 +106,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Sprint 130 Scanner surface artifacts | ETA pending | Analyzer artifact publication schedule still outstanding; Graph/Zastava need cache outputs and manifests. | Scanner Guild · Graph Indexer Guild · Zastava Guilds |
| AirGap parity review (Sprint 120.A) | Not scheduled | SBOM path/timeline endpoints must re-pass AirGap checklist once Concelier schema lands; reviewers on standby. | AirGap Guild · SBOM Service Guild |
## Upcoming checkpoints (updated 2025-11-13)
| Date | Session | Goal | Impacted wave(s) | Prep owner(s) |
| --- | --- | --- | --- | --- |
@@ -124,7 +124,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Concelier schema review (2025-11-14) | Link-Not-Merge schema redlines, Cartographer webhook contract, AirGap parity checklist, SBOM-SERVICE-21-001 scaffolding plan. | Final field list for relationships/scopes? Event payload metadata requirements? AirGap review schedule & owners? | Concelier Core · Cartographer Guild · SBOM Service Guild · AirGap Guild |
| Surface guild office hours (2025-11-15) | Surface.Env helper adoption notes, sealed-mode test harness outline, Surface.FS API stub timeline. | Can Surface.FS caches publish before Analyzer drop? Any additional sealed-mode requirements? Who owns Surface.Env rollout in Observer/Webhook repos? | Surface Guild · Zastava Observer/Webhook Guilds |
## Target outcomes (through 2025-11-15, refreshed 2025-11-13)
| Deliverable | Target date | Status | Dependencies / notes |
| --- | --- | --- | --- |
@@ -134,6 +134,48 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Concelier Link-Not-Merge schema ratified | 2025-11-14 | BLOCKED | Requires `CONCELIER-GRAPH-21-001` + `CARTO-GRAPH-21-002` agreement; AirGap review scheduled after sign-off. |
| Surface.Env helper adoption checklist | 2025-11-15 | TODO | Zastava guild preparing sealed-mode test harness; depends on Surface guild office hours outcomes. |
## Decisions needed (before 2025-11-15, refreshed 2025-11-13)
| Decision | Blocking work | Accountable owner(s) | Due date |
| --- | --- | --- | --- |
| Approve CAS bucket policies + signed manifest rollout | Closing SIGNALS-24-002; enabling scoring/cache prep | Platform Storage Guild · Signals Guild | 2025-11-13 |
| Freeze runtime provenance schema + scope propagation fixtures | Completing SIGNALS-24-003 enrichment/backfill | Runtime Guild · Authority Guild | 2025-11-13 |
| Publish Sprint 130 analyzer artifact drop schedule | Starting GRAPH-INDEX-28-007 and ZASTAVA-SURFACE-01/02 | Scanner Guild | 2025-11-13 |
| Ratify Link-Not-Merge schema + change event contract | Kicking off SBOM-SERVICE-21-001/002 and Graph overlays | Concelier Core · Cartographer Guild · SBOM Service Guild | 2025-11-14 |
| Schedule AirGap parity review for SBOM endpoints | Allowing Advisory AI adoption and AirGap sign-off | AirGap Guild · SBOM Service Guild | 2025-11-14 |
| Assign owner for Surface.Env helper rollout (Observer vs Webhook) | Executing ZASTAVA-ENV-01/02 once caches drop | Surface Guild · Zastava Guilds | 2025-11-15 |
## Contingency playbook (reviewed 2025-11-13)
| Risk trigger | Immediate response | Owner | Escalation window |
| --- | --- | --- | --- |
| CAS promotion review slips past 2025-11-13 | Switch SIGNALS-24-002 to “red”, keep staging in shadow bucket, and escalate to Platform Storage leadership for expedited review. | Signals Guild | Escalate by 2025-11-14 stand-up. |
| Runtime provenance schema disputes persist | Freeze ingestion on current schema, log breaking field requests, and schedule joint Runtime/Authority architecture review. | Runtime Guild · Authority Guild | Escalate by 2025-11-14 EOD. |
| Scanner cannot provide analyzer artifact ETA | Raise blocker in Scanner leadership channel, request interim mock manifests, and re-plan Graph/Zastava scope to focus on harness/test prep. | Graph Indexer Guild · Zastava Guilds | Escalate by 2025-11-14 midday. |
| Concelier/Cartographer schema review stalls | Capture outstanding fields/issues, loop in Advisory AI + AirGap leadership, and evaluate temporary schema adapters for SBOM Service. | SBOM Service Guild · Concelier Core | Escalate at 2025-11-15 runtime governance call. |
| Surface.Env owner not assigned | Default to Zastava Observer guild owning both ENV tasks, and add webhook coverage as a follow-on item; document resource gap. | Surface Guild · Zastava Observer Guild | Escalate by 2025-11-16. |
## Action item tracker (status as of 2025-11-13)
| Item | Status | Next step | Owner(s) | Due |
| --- | --- | --- | --- | --- |
| CAS checklist feedback | In review | Platform Storage to mark checklist “approved” or add blockers before runtime sync. | Platform Storage Guild | 2025-11-13 |
| Signed manifest PRs | Ready for merge | Signals to merge once CAS checklist approved, then deploy to staging. | Signals Guild | 2025-11-14 |
| Provenance schema appendix | Drafted | Runtime/Authority to publish final appendix + fixtures to repo. | Runtime Guild · Authority Guild | 2025-11-13 |
| Scanner artifact roadmap | Draft in Scanner doc | Publish final ETA + delivery format after readiness sync. | Scanner Guild | 2025-11-13 |
| Link-Not-Merge schema redlines | Circulated | Concelier/Cartographer/SBOM to sign off during Nov 14 review. | Concelier Core · Cartographer Guild · SBOM Service Guild | 2025-11-14 |
| Surface.Env adoption checklist | Outline ready | Surface guild to confirm owner and add step-by-step instructions post office hours. | Surface Guild · Zastava Guilds | 2025-11-15 |
## Standup agenda (2025-11-13)
| Track | Questions / updates to cover | Owner ready to report |
| --- | --- | --- |
| 140.A Graph | Did Scanner commit to an analyzer artifact ETA? If not, what mock data or alternate scope can Graph tackle? | Graph Indexer Guild |
| 140.B SbomService | Are Concelier/CARTO reviewers aligned on schema redlines ahead of the Nov 14 meeting? Any AirGap checklist prep gaps? | SBOM Service Guild |
| 140.C Signals | Status of CAS approval + signed manifest merges? Is provenance schema appendix ready for publication? Any blockers for runtime backfill? | Signals Guild · Runtime Guild · Authority Guild |
| 140.D Zastava | What dependencies remain besides Surface.FS cache drop? Do we have a draft owner for Surface.Env rollout? | Zastava Guilds |
| Cross-track | Upcoming decisions/risks from the contingency playbook that need leadership visibility today? | Sprint 140 leads |
# Blockers & coordination
- **Concelier Link-Not-Merge / Cartographer schemas** SBOM-SERVICE-21-001..004 cannot start until `CONCELIER-GRAPH-21-001` and `CARTO-GRAPH-21-002` deliver the projection payloads.
@@ -155,7 +197,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Scanner Guild | Publish Sprint 130 surface artifact roadmap + Surface.FS cache drop timeline so Graph/Zastava can schedule start dates; provide mock datasets if slips extend past 2025-11-15. |
| Zastava Guilds | Convert Surface.Env helper adoption notes into a ready-to-execute checklist, align sealed-mode tests, and be prepared to start once Surface.FS caches are announced. |
# Downstream dependency rollup (snapshot: 2025-11-13)
| Track | Dependent sprint(s) | Impact if delayed |
| --- | --- | --- |
@@ -179,7 +221,7 @@ This file now only tracks the runtime & signals status snapshot. Active backlog
| Date | Notes |
| --- | --- |
| 2025-11-13 | Snapshot, wave tracker, meeting prep, and action items refreshed ahead of Nov 13 checkpoints; awaiting outcomes before flipping statuses. |
| 2025-11-11 | Runtime + Signals ran NDJSON ingestion soak test; Authority flagged remaining provenance fields for schema freeze ahead of 2025-11-13 sync. |
| 2025-11-09 | Sprint 140 snapshot refreshed; awaiting Scanner surface artifact ETA, Concelier/CARTO schema delivery, and Signals host merge before any wave can advance to DOING. |
# Sprint 140 - Runtime & Signals


@@ -8,8 +8,114 @@ This file now only tracks the export & evidence status snapshot. Active backlog
| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| 160.A EvidenceLocker | Evidence Locker Guild · Security Guild · Docs Guild | Sprint 110.A AdvisoryAI; Sprint 120.A AirGap; Sprint 130.A Scanner; Sprint 150.A Orchestrator | BLOCKED (2025-11-12) | Waiting for orchestrator capsule data and AdvisoryAI evidence bundles to stabilize before wiring ingestion APIs. |
| 160.B ExportCenter | Exporter Service Guild · Mirror Creator Guild · DevOps Guild | Sprint 110.A AdvisoryAI; Sprint 120.A AirGap; Sprint 130.A Scanner; Sprint 150.A Orchestrator | BLOCKED (2025-11-12) | Profiles can begin once EvidenceLocker contracts are published; keep DSSE/attestation specs ready. |
| 160.C TimelineIndexer | Timeline Indexer Guild · Evidence Locker Guild · Security Guild | Sprint 110.A AdvisoryAI; Sprint 120.A AirGap; Sprint 130.A Scanner; Sprint 150.A Orchestrator | BLOCKED (2025-11-12) | Postgres/RLS scaffolding drafted; hold for event schemas from orchestrator/notifications. |
# Sprint 160 - Export & Evidence
## Detail trackers & next actions
### 160.A EvidenceLocker
- Detail trackers: [SPRINT_161_evidencelocker.md](./SPRINT_161_evidencelocker.md) (wave entry) and [SPRINT_187_evidence_locker_cli_integration.md](./SPRINT_187_evidence_locker_cli_integration.md) for CLI/replay integration follow-ups.
- Task radar (all TODO as of 2025-11-12):
- `EVID-REPLAY-187-001` — add Evidence Locker replay bundle ingestion/retention APIs and document storage policy (`src/EvidenceLocker/StellaOps.EvidenceLocker`, `docs/modules/evidence-locker/architecture.md`).
- `RUNBOOK-REPLAY-187-004` & `CLI-REPLAY-187-002` — CLI + ops readiness for replay bundles (`docs/runbooks/replay_ops.md`, CLI module).
- `EVID-CRYPTO-90-001` — route hashing/signing/bundle encryption through `ICryptoProviderRegistry`/`ICryptoHash` per `docs/security/crypto-routing-audit-2025-11-07.md`.
- Contracts: bundle packaging + DSSE layout documented in `docs/modules/evidence-locker/bundle-packaging.md` (`EVID-OBS-54-002`); portable/incident modes live under `docs/modules/evidence-locker/incident-mode.md`.
- Gating dependencies: orchestrator capsule schema (`docs/events/orchestrator-scanner-events.md`), AdvisoryAI evidence bundle payload notes, and replay ledger requirements from `docs/replay/DETERMINISTIC_REPLAY.md`.
- Ready-to-start checklist: finalize ingest schema deltas, stage Replay Ledger ops drills, and publish the API surface summary into `SPRINT_161_evidencelocker.md` before moving items to DOING.
#### EvidenceLocker task snapshot (2025-11-12)
| Task ID | Scope | State | Notes / Owners |
| --- | --- | --- | --- |
| EVID-REPLAY-187-001 | Replay bundle ingestion + retention APIs | TODO | Evidence Locker Guild · docs/modules/evidence-locker/architecture.md |
| CLI-REPLAY-187-002 | CLI record/verify/replay UX | TODO | CLI Guild · `docs/modules/cli/architecture.md` |
| RUNBOOK-REPLAY-187-004 | Replay ops runbook + drills | TODO | Docs/Ops Guild · `/docs/runbooks/replay_ops.md` |
| EVID-CRYPTO-90-001 | Sovereign crypto routing | TODO | Evidence Locker + Security Guilds · `ICryptoProviderRegistry` integration |
### 160.B ExportCenter
- Detail trackers: [SPRINT_162_exportcenter_i.md](./SPRINT_162_exportcenter_i.md) (mirror/bootstrap/attestation jobs, `DVOFF-64-002`, `EXPORT-AIRGAP-56/57/58`, `EXPORT-ATTEST-74/75`, `EXPORT-OAS-61/62`) and [SPRINT_163_exportcenter_ii.md](./SPRINT_163_exportcenter_ii.md) (service automation, observability, notification hooks, crypto routing `EXPORT-CRYPTO-90-001`).
- Task radar highlights:
- Mirror & bootstrap: `EXPORT-AIRGAP-56-001/002/003/004/005` and `EXPORT-AIRGAP-57-001`, `EXPORT-AIRGAP-58-001` — build mirror bundles, bootstrap packs, portable evidence exports, and notifications.
- Attestation bundles: `EXPORT-ATTEST-74-001/002` and `EXPORT-ATTEST-75-001/002` — job implementation, CI/offline integration, CLI verify/import, and documentation (`docs/modules/attestor/airgap.md`).
- API/OAS: `EXPORT-OAS-61-001/002`, `EXPORT-OAS-62-001`, `EXPORT-OAS-63-001` — refreshed OpenAPI, discovery endpoint, SDK updates, deprecation headers.
- Service/observability: `EXPORT-SVC-35-001…005`, `EXPORT-OBS-50/51/52`, plus `EXPORT-CRYPTO-90-001` ensuring crypto routing parity with Evidence Locker.
- Dependencies: EvidenceLocker contracts + DSSE proofs define digests; orchestration relies on Orchestrator events + Scheduler readiness; crypto routing must stay aligned with `docs/security/crypto-routing-audit-2025-11-07.md`.
- Ready-to-start checklist: confirm sealed bundle spec (from EvidenceLocker) is frozen, reconcile crypto provider matrix with RootPack deployments, and prep the DevPortal verification CLI scaffolding so `DVOFF-64-002` can move immediately.
#### ExportCenter task snapshot (2025-11-12)
| Task ID | Scope | State | Notes / Owners |
| --- | --- | --- | --- |
| DVOFF-64-002 | DevPortal bundle verification CLI | TODO | DevPortal Offline + AirGap Controller Guilds |
| EXPORT-AIRGAP-56-001/002 | Mirror bundle + bootstrap pack profiles | TODO | Exporter + Mirror Creator + DevOps Guilds |
| EXPORT-AIRGAP-57-001 | Portable evidence export mode | TODO | Exporter Service + Evidence Locker Guild |
| EXPORT-ATTEST-74-001/002 | Attestation bundle job + CI integration | TODO | Attestation Bundle + Exporter Guilds |
| EXPORT-ATTEST-75-001/002 | CLI verify/import + offline kit integration | TODO | Attestation Bundle + CLI + Exporter Guilds |
| EXPORT-OAS-61/62/63 | OpenAPI refresh, discovery, SDK + deprecation headers | TODO | Exporter Service + API Governance + SDK Guilds |
| EXPORT-CRYPTO-90-001 | Sovereign crypto routing | TODO | Exporter Service + Security Guilds |
### 160.C TimelineIndexer
- Detail tracker: [SPRINT_165_timelineindexer.md](./SPRINT_165_timelineindexer.md) (TIMELINE-OBS-52-001…004 and TIMELINE-OBS-53-001 covering migrations, ingestion pipeline, APIs, RLS, and evidence linkage).
- Task radar:
- `TIMELINE-OBS-52-001` — bootstrap service + Postgres migrations with deterministic scripts and RLS scaffolding.
- `TIMELINE-OBS-52-002` — event ingestion pipeline (NATS/Redis consumers, ordering, dedupe, trace correlation, metrics).
- `TIMELINE-OBS-52-003` — REST/gRPC APIs with filtering/pagination + OpenAPI contracts.
- `TIMELINE-OBS-52-004` — finalize RLS, scope checks, audit logging, legal hold enforcement tests.
- `TIMELINE-OBS-53-001` — evidence linkage endpoint returning signed manifest references.
- Dependencies: needs orchestrator/notifications event schemas plus EvidenceLocker digest references to land before Postgres migrations can be frozen; export bundle IDs must be stable to hydrate `/timeline/{id}/evidence`.
- Ready-to-start checklist: secure the event schema package, stage Postgres migration plan (including RLS policies) for review, and align ingest ordering semantics with Scheduler/ExportCenter event cadence.
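
The `(event_id, tenant_id)` dedupe rule in `TIMELINE-OBS-52-002` amounts to order-preserving, first-delivery-wins filtering; a minimal sketch, with field names assumed from the task description:

```python
def dedupe_ordered(events):
    """Order-preserving dedupe keyed on (event_id, tenant_id):
    first delivery wins, replayed deliveries are dropped."""
    seen = set()
    kept = []
    for event in events:
        key = (event["event_id"], event["tenant_id"])
        if key in seen:
            continue  # replayed delivery of the same tenant-scoped event
        seen.add(key)
        kept.append(event)
    return kept

events = [
    {"event_id": "e1", "tenant_id": "t1", "seq": 1},
    {"event_id": "e1", "tenant_id": "t1", "seq": 2},  # duplicate delivery
    {"event_id": "e1", "tenant_id": "t2", "seq": 3},  # same id, other tenant
]
print([e["seq"] for e in dedupe_ordered(events)])  # → [1, 3]
```

In the service the same key would also back a Postgres unique constraint so dedupe survives consumer restarts.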
#### TimelineIndexer task snapshot (2025-11-12)
| Task ID | Scope | State | Notes / Owners |
| --- | --- | --- | --- |
| TIMELINE-OBS-52-001 | Service bootstrap + Postgres migrations/RLS | TODO | Timeline Indexer Guild |
| TIMELINE-OBS-52-002 | Event ingestion pipeline + metrics | TODO | Timeline Indexer Guild |
| TIMELINE-OBS-52-003 | REST/gRPC APIs + OpenAPI contracts | TODO | Timeline Indexer Guild |
| TIMELINE-OBS-52-004 | RLS policies, audit logging, legal hold tests | TODO | Timeline Indexer + Security Guilds |
| TIMELINE-OBS-53-001 | Evidence linkage endpoint | TODO | Timeline Indexer + Evidence Locker Guilds |
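
`/timeline/{id}/evidence` (`TIMELINE-OBS-53-001`) is essentially a lookup from a timeline event's bundle IDs to sealed-manifest references supplied by Evidence Locker. A hedged sketch; the index shape and field names are assumptions, not the agreed contract:

```python
def evidence_refs(event, digest_index):
    """Resolve signed-manifest references for one timeline event.
    digest_index maps bundle_id -> {"digest": ...}; in the real service
    it would be hydrated from Evidence Locker's sealed-bundle records."""
    refs = []
    for bundle_id in event.get("bundle_ids", []):
        entry = digest_index.get(bundle_id)
        if entry is None:
            continue  # bundle not yet sealed: omit rather than guess
        refs.append({"bundle_id": bundle_id, **entry})
    return refs

index = {"b1": {"digest": "sha256:abc123"}}
event = {"event_id": "e1", "bundle_ids": ["b1", "b2"]}
print(evidence_refs(event, index))  # only b1 resolves; b2 is unsealed
```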
## Interlocks & readiness signals
| Dependency | Owner / Source | Impacts | Status / Next signal |
| --- | --- | --- | --- |
| Orchestrator capsule & notifications schema (`docs/events/orchestrator-scanner-events.md`) | Orchestrator Service Guild · Notifications Guild (Sprint 150.A + 140 wave) | 160.A, 160.B, 160.C | Pending schema drop scheduled for the 2025-11-15 sync; will unblock EvidenceLocker ingestion, ExportCenter notifications, and TimelineIndexer ordering once envelopes freeze. |
| AdvisoryAI evidence bundle schema & payload notes (Sprint 110.A) | AdvisoryAI Guild | 160.A, 160.B | Still stabilizing; EvidenceLocker cannot finalize DSSE manifests or digests until this contract lands. Follow up in AdvisoryAI stand-up on 2025-11-14. |
| Replay ledger spec alignment (`docs/replay/DETERMINISTIC_REPLAY.md`, `/docs/runbooks/replay_ops.md`) | Replay Delivery Guild (Sprint 187) | 160.A | Replay ops runbook exists (2025-11-03); EvidenceLocker must incorporate retention API shape before DOING. Track in EVID-REPLAY-187-001. |
| Crypto routing parity (`docs/security/crypto-routing-audit-2025-11-07.md`) | Security Guild + Export/Evidence teams (`EVID-CRYPTO-90-001`, `EXPORT-CRYPTO-90-001`) | 160.A, 160.B | Audit published 2025-11-07; both guilds must wire `ICryptoProviderRegistry` before enabling sovereign profiles. Target: 2025-11-18 readiness review. |
| DevPortal verification CLI scaffolding (`DVOFF-64-002`) | DevPortal Offline Guild (Sprint 162) | 160.B | CLI still TODO; keep `stella devportal verify bundle.tgz` prototype ready so that once bundle contracts are signed, DOING can start within same sprint. |
## Upcoming checkpoints (UTC)
| Date | Session / Owner | Target outcome | Fallback / Escalation |
| --- | --- | --- | --- |
| 2025-11-14 | AdvisoryAI stand-up (AdvisoryAI Guild) | Freeze evidence bundle schema + payload notes so EvidenceLocker can finalize DSSE manifests (blocked). | If schema slips, log BLOCKED status in Sprint 110 tracker and re-evaluate at 2025-11-18 review. |
| 2025-11-15 | Orchestrator + Notifications schema handoff (Orchestrator Service + Notifications Guilds) | Publish capsule envelopes & notification contracts required by EvidenceLocker ingest, ExportCenter notifications, TimelineIndexer ordering (blocked). | If envelopes not ready, escalate to Wave 150/140 leads and leave blockers noted here; defer DOING flips. |
| 2025-11-18 | Sovereign crypto readiness review (Security Guild + Evidence/Export teams) | Validate `ICryptoProviderRegistry` wiring plan for `EVID-CRYPTO-90-001` & `EXPORT-CRYPTO-90-001`; green-light sovereign modes (blocked). | If gating issues remain, file action items in Security board and hold related sprint tasks in TODO. |
| 2025-11-19 | DevPortal Offline CLI dry run (DevPortal Offline + AirGap Controller Guilds) | Demo `stella devportal verify bundle.tgz` using sample manifest to prove readiness once EvidenceLocker spec lands (blocked awaiting schema). | If CLI not ready, update DVOFF-64-002 description with new ETA and note risk in Sprint 162 doc. |
## Action tracker
| Wave | Immediate action | Owner(s) | Due | Status |
| --- | --- | --- | --- | --- |
| 160.A EvidenceLocker | Draft ingest schema summary + Replay Ledger API notes into `SPRINT_161_evidencelocker.md` once orchestrator + AdvisoryAI schemas land. | Evidence Locker Guild · Replay Delivery Guild | 2025-11-16 | Pending (blocked on Nov-14/15 checkpoints) |
| 160.A EvidenceLocker | Validate crypto provider registry plan for `EVID-CRYPTO-90-001` ahead of the Nov-18 review. | Evidence Locker Guild · Security Guild | 2025-11-17 | Risk: awaiting Security design feedback |
| 160.A EvidenceLocker | Prep CLI + ops teams for replay handoff (`RUNBOOK-REPLAY-187-004`, `CLI-REPLAY-187-002`) once Evidence Locker APIs are drafted. | CLI Guild · Ops Guild · Evidence Locker Guild | 2025-11-18 | Pending |
| 160.B ExportCenter | Prepare DevPortal verification CLI prototype (`DVOFF-64-002`) covering manifest hash + DSSE verification flow. | DevPortal Offline Guild · AirGap Controller Guild | 2025-11-19 | In progress (design draft shared; waiting on bundle schema) |
| 160.B ExportCenter | Align attestation bundle job + CLI verbs (`EXPORT-ATTEST-74/75`) with EvidenceLocker DSSE layout once published. | Exporter Service Guild · Attestation Bundle Guild · CLI Guild | 2025-11-20 | Pending |
| 160.B ExportCenter | Stage crypto routing hooks in exporter service (`EXPORT-CRYPTO-90-001`) tied to the Nov-18 review. | Exporter Service Guild · Security Guild | 2025-11-18 | Pending |
| 160.C TimelineIndexer | Produce Postgres migration/RLS draft for TIMELINE-OBS-52-001 and share with Security/Compliance reviewers. | Timeline Indexer Guild · Security Guild | 2025-11-18 | Pending |
| 160.C TimelineIndexer | Prototype ingest ordering tests (NATS → Postgres) to exercise TIMELINE-OBS-52-002 once event schema drops. | Timeline Indexer Guild | 2025-11-19 | Pending |
| 160.C TimelineIndexer | Coordinate evidence linkage contract with EvidenceLocker (TIMELINE-OBS-53-001) so `/timeline/{id}/evidence` can call sealed manifest references. | Timeline Indexer Guild · Evidence Locker Guild | 2025-11-20 | Pending |
## Risks & mitigations
| Risk | Impacted wave(s) | Severity | Mitigation / Owner |
| --- | --- | --- | --- |
| AdvisoryAI schema slips past 2025-11-14, delaying DSSE manifest freeze. | 160.A, 160.B | High | AdvisoryAI Guild to provide interim sample payloads; EvidenceLocker to stub schema adapters so ExportCenter can begin validation with mock data. |
| Orchestrator/Notifications schema handoff misses 2025-11-15 window. | 160.A, 160.B, 160.C | High | Escalate to Wave 150/140 leads, record BLOCKED status in both sprint docs, and schedule daily schema stand-ups until envelopes land. |
| Sovereign crypto routing design not ready by 2025-11-18 review. | 160.A, 160.B | Medium | Security Guild to publish `ICryptoProviderRegistry` reference implementation; Evidence/Export guilds to nominate fallback providers per profile. |
| DevPortal verification CLI lacks signed bundle fixtures for dry run. | 160.B | Medium | Exporter Guild to provide sample manifest + DSSE pair; DevPortal Offline Guild to script fake EvidenceLocker output for demo. |
| TimelineIndexer Postgres/RLS plan not reviewed before coding. | 160.C | Medium | Timeline Indexer Guild to share migration plan with Security/Compliance for async review; unblock coding by securing written approval in sprint doc. |
## Status log
- 2025-11-12 — Snapshot refreshed; all Export & Evidence waves remain BLOCKED pending orchestrator capsule data, AdvisoryAI bundle schemas, and EvidenceLocker contracts. Re-evaluate readiness after the orchestrator + notifications schema handoff (target sync: 2025-11-15).
- 2025-11-12 (EOD) — Added checkpoint calendar, action tracker, and risk table to keep Wave 160 aligned on pre-work while dependencies stabilize; next update scheduled immediately after the AdvisoryAI + Orchestrator handoffs.

Depends on: Sprint 110.A - AdvisoryAI, Sprint 120.A - AirGap, Sprint 130.A - Scanner, Sprint 150.A - Orchestrator
Summary: Export & Evidence focus on EvidenceLocker.
## Task board (snapshot: 2025-11-12)
| Task ID | State | Description | Owners (Source) |
| --- | --- | --- | --- |
| EVID-OBS-54-002 | TODO | Finalize deterministic bundle packaging + DSSE layout per `docs/modules/evidence-locker/bundle-packaging.md`, ensuring parity with portable/incident modes. | Evidence Locker Guild (`src/EvidenceLocker/StellaOps.EvidenceLocker`) |
| EVID-REPLAY-187-001 | TODO | Implement replay bundle ingestion + retention APIs and document storage policy updates referencing `docs/replay/DETERMINISTIC_REPLAY.md`. | Evidence Locker Guild · Replay Delivery Guild |
| CLI-REPLAY-187-002 | TODO | Add `scan --record`, `verify`, `replay`, `diff` CLI verbs with offline bundle resolution; sync golden tests. | CLI Guild (`src/Cli/StellaOps.Cli`) |
| RUNBOOK-REPLAY-187-004 | TODO | Publish `/docs/runbooks/replay_ops.md` coverage for retention enforcement, RootPack rotation, and verification drills. | Docs Guild · Ops Guild |
| EVID-CRYPTO-90-001 | TODO | Route hashing/signing/bundle encryption through `ICryptoProviderRegistry`/`ICryptoHash` for sovereign crypto providers. | Evidence Locker Guild · Security Guild |
## Dependencies & readiness
- Waiting on AdvisoryAI evidence bundle schema + payload notes (Sprint 110.A) to freeze DSSE manifest format.
- Waiting on orchestrator + notifications capsule schema (Sprint 150.A / Sprint 140.A handoff) to finalize ingest API fields.
- Replay Ledger alignment requires `docs/replay/DETERMINISTIC_REPLAY.md` sections 2, 8, and 9 to be reflected in Evidence Locker + CLI before DOING.
- Crypto routing must follow `docs/security/crypto-routing-audit-2025-11-07.md` and align with Export Centers `EXPORT-CRYPTO-90-001` for consistency.
## Ready-to-start checklist
1. Capture orchestrator capsule + AdvisoryAI schema diffs in this sprint doc (attach sample payloads).
2. Draft Replay Ledger API summary + CLI command notes here so `EVID-REPLAY-187-001` can flip to DOING.
3. Confirm `ICryptoProviderRegistry` design with Security Guild ahead of 2025-11-18 readiness review.
4. Ensure docs/ops owners have outline for replay runbook before CLI/EvidenceLocker work begins.

EXPORT-ATTEST-75-002 | TODO | Document `/docs/modules/attestor/airgap.md` with bundle workflows and verification steps. Dependencies: EXPORT-ATTEST-75-001. | Attestation Bundle Guild, Docs Guild (src/ExportCenter/StellaOps.ExportCenter.AttestationBundles)
EXPORT-OAS-61-001 | TODO | Update Exporter OAS covering profiles, runs, downloads, devportal exports with standard error envelope and examples. | Exporter Service Guild, API Contracts Guild (src/ExportCenter/StellaOps.ExportCenter)
EXPORT-OAS-61-002 | TODO | Provide `/.well-known/openapi` discovery endpoint with version metadata and ETag. Dependencies: EXPORT-OAS-61-001. | Exporter Service Guild (src/ExportCenter/StellaOps.ExportCenter)
EXPORT-OAS-62-001 | TODO | Ensure SDKs include export profile/run clients with streaming download helpers; add smoke tests. Dependencies: EXPORT-OAS-61-002. | Exporter Service Guild, SDK Generator Guild (src/ExportCenter/StellaOps.ExportCenter)
## Task snapshot (2025-11-12)
- Mirror/bootstrap profiles: `EXPORT-AIRGAP-56-001/002`, `EXPORT-AIRGAP-57-001`, `EXPORT-AIRGAP-58-001` (bundle builds, bootstrap packs, notification fan-out).
- Attestation bundles: `EXPORT-ATTEST-74-001/002`, `EXPORT-ATTEST-75-001/002` plus docs entry to wire CLI + offline kit workflows.
- DevPortal verification: `DVOFF-64-002` (hash/signature verification CLI) aligns with EvidenceLocker sealed bundle contracts.
- API/OAS + SDK: `EXPORT-OAS-61/62` ensures clients and discovery endpoints reflect export surfaces.
## Dependencies & blockers
- Waiting on EvidenceLocker bundle contracts (Sprint 161) to freeze DSSE layouts for mirror/attestation/CLI tasks.
- Orchestrator + Notifications schema (Sprint 150.A / 140) must be published to emit ready events (`EXPORT-AIRGAP-58-001`).
- Sovereign crypto requirements tracked via `EXPORT-CRYPTO-90-001` (Sprint 163) and Security Guild audit (2025-11-07).
- DevPortal CLI prototype requires sample manifests from Exporter + EvidenceLocker coordination to rehearse Nov-19 dry run.
## Ready-to-start checklist
1. Import EvidenceLocker sample manifests once AdvisoryAI + orchestrator schemas freeze; attach to this doc.
2. Align export profile configs with AirGap/DevOps to ensure OCI bootstrap pack dependencies are available offline.
3. Prep `stella devportal verify bundle.tgz` demo script + fixtures ahead of Nov-19 dry run.
4. Stage telemetry hooks for notification events to integrate with TimelineIndexer once events begin emitting.
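
For the `stella devportal verify bundle.tgz` rehearsal (`DVOFF-64-002`), the manifest-hash half of the flow can be prototyped as below. The manifest layout (`manifest.json` with a `files` map of sha256 digests) is an assumption pending the EvidenceLocker spec, and the real CLI must additionally verify the DSSE envelope:

```python
import hashlib
import json
import pathlib
import tempfile

def verify_bundle(bundle_dir: pathlib.Path) -> bool:
    """Check every file listed in manifest.json against its sha256.
    Hash check only; DSSE signature verification is out of scope here."""
    manifest = json.loads((bundle_dir / "manifest.json").read_text())
    for name, expected in manifest["files"].items():
        actual = hashlib.sha256((bundle_dir / name).read_bytes()).hexdigest()
        if actual != expected:
            return False
    return True

with tempfile.TemporaryDirectory() as tmp:
    root = pathlib.Path(tmp)
    (root / "sbom.json").write_bytes(b"{}")
    (root / "manifest.json").write_text(json.dumps(
        {"files": {"sbom.json": hashlib.sha256(b"{}").hexdigest()}}))
    print(verify_bundle(root))  # → True
```

Fixtures shaped like this would let the Nov-19 dry run proceed even before signed EvidenceLocker output exists.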

EXPORT-SVC-35-004 | TODO | Build mirror (full) adapter producing filesystem layout, indexes, manifests, and README with download-only distribution. Dependencies: EXPORT-SVC-35-003. | Exporter Service Guild (src/ExportCenter/StellaOps.ExportCenter)
EXPORT-SVC-35-005 | TODO | Implement manifest/provenance writer and KMS signing/attestation (detached + embedded) for bundle outputs. Dependencies: EXPORT-SVC-35-004. | Exporter Service Guild (src/ExportCenter/StellaOps.ExportCenter)
EXPORT-CRYPTO-90-001 | TODO | Ensure manifest hashing, signing, and bundle encryption flows route through `ICryptoProviderRegistry`/`ICryptoHash` so RootPack deployments can select CryptoPro/PKCS#11 providers per `docs/security/crypto-routing-audit-2025-11-07.md`. | Exporter Service Guild, Security Guild (src/ExportCenter/StellaOps.ExportCenter)
## Task snapshot (2025-11-12)
- Service core: `EXPORT-SVC-35-001…005` hardens planner, worker, adapters, and provenance writers for deterministic outputs.
- Observability/audit: `EXPORT-OBS-50/51/52` ensure traces, metrics, and audit logs capture tenants, profiles, DSSE digests.
- API lifecycle: `EXPORT-OAS-63-001` delivers deprecation headers + notifications for legacy endpoints.
- Crypto parity: `EXPORT-CRYPTO-90-001` wires sovereign provider support matching EvidenceLocker design.
## Dependencies & blockers
- Requires Sprint 162 (phase I) outputs and EvidenceLocker contracts to supply DSSE digests for observability tests.
- Depends on Security Guild publishing the crypto routing reference ahead of the 2025-11-18 readiness review.
- Needs orchestrator/notifications schema finalization to define audit trail payloads and event IDs.
- Export planner/worker queue relies on Orchestrator/Scheduler telemetry readiness (Sprint 150), still in BLOCKED state.
## Ready-to-start checklist
1. Mirror the EvidenceLocker DSSE manifest schema into exporter tests once AdvisoryAI + orchestrator schemas freeze.
2. Define telemetry schema (traces/logs/metrics) per Observability guidelines and attach to this doc.
3. Draft deprecation communication plan for legacy endpoints with API Governance before coding `EXPORT-OAS-63-001`.
4. Stage crypto provider configuration (default, CryptoPro, PKCS#11) for fast integration after the Nov-18 review.

TIMELINE-OBS-52-002 | TODO | Implement event ingestion pipeline (NATS/Redis consumers) with ordering guarantees, dedupe on `(event_id, tenant_id)`, correlation to trace IDs, and backpressure metrics. Dependencies: TIMELINE-OBS-52-001. | Timeline Indexer Guild (src/TimelineIndexer/StellaOps.TimelineIndexer)
TIMELINE-OBS-52-003 | TODO | Expose REST/gRPC APIs for timeline queries (`GET /timeline`, `/timeline/{id}`) with filters, pagination, and tenant enforcement. Provide OpenAPI + contract tests. Dependencies: TIMELINE-OBS-52-002. | Timeline Indexer Guild (src/TimelineIndexer/StellaOps.TimelineIndexer)
TIMELINE-OBS-52-004 | TODO | Finalize RLS policies, scope checks (`timeline:read`), and audit logging for query access. Include integration tests for cross-tenant isolation and legal hold markers. Dependencies: TIMELINE-OBS-52-003. | Timeline Indexer Guild, Security Guild (src/TimelineIndexer/StellaOps.TimelineIndexer)
TIMELINE-OBS-53-001 | TODO | Link timeline events to evidence bundle digests + attestation subjects; expose `/timeline/{id}/evidence` endpoint returning signed manifest references. Dependencies: TIMELINE-OBS-52-004. | Timeline Indexer Guild, Evidence Locker Guild (src/TimelineIndexer/StellaOps.TimelineIndexer)
## Task snapshot (2025-11-12)
- Core service: `TIMELINE-OBS-52-001/002` cover Postgres migrations/RLS scaffolding and NATS/Redis ingestion with deterministic ordering + metrics.
- API surface: `TIMELINE-OBS-52-003/004` expose REST/gRPC query endpoints, RLS policies, audit logging, and legal-hold tests.
- Evidence linkage: `TIMELINE-OBS-53-001` joins timeline events to EvidenceLocker digests for `/timeline/{id}/evidence`.
## Dependencies & blockers
- Waiting on orchestrator + notifications schema (Wave 150/140) to finalize ingestion payload and event IDs.
- Requires EvidenceLocker bundle digest schema to link timeline entries to sealed manifests.
- Needs Scheduler/Orchestrator queue readiness for ingestion ordering semantics (impacting 52-002).
- Security/Compliance review required for Postgres RLS migrations before coding begins.
## Ready-to-start checklist
1. Obtain sample orchestrator capsule events + notifications once schema drops; attach to this doc for reference.
2. Draft Postgres migration + RLS design and share with Security/Compliance for approval.
3. Define ingestion ordering tests (NATS to Postgres) and expected metrics/alerts.
4. Align evidence linkage contract with EvidenceLocker (bundle IDs, DSSE references) prior to implementing `TIMELINE-OBS-53-001`.
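
Checklist item 2's tenant-isolation expectations can also be expressed as an app-layer guard mirroring the Postgres RLS policy (the database policy remains the authoritative boundary; this sketch is defence in depth, with field names assumed from `TIMELINE-OBS-52-004`):

```python
class TimelineAccessError(Exception):
    """Raised when a caller lacks the timeline:read scope."""

def query_timeline(events, caller_tenant, scopes, audit_log):
    """Enforce the timeline:read scope, audit the access, and filter
    results to the caller's tenant. Postgres RLS stays authoritative."""
    if "timeline:read" not in scopes:
        raise TimelineAccessError("missing timeline:read scope")
    audit_log.append({"tenant": caller_tenant, "action": "timeline.query"})
    return [e for e in events if e["tenant_id"] == caller_tenant]

audit = []
events = [{"tenant_id": "t1", "id": 1}, {"tenant_id": "t2", "id": 2}]
print(query_timeline(events, "t1", {"timeline:read"}, audit))  # only t1 rows
```

Cross-tenant isolation tests for 52-004 would assert exactly this behavior against the real API plus the RLS policy underneath.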

This file now only tracks the notifications & telemetry status snapshot. Active items only; completed/historic work now resides in docs/implplan/archived.
| Wave | Guild owners | Shared prerequisites | Status | Notes |
| --- | --- | --- | --- | --- |
| 170.A Notifier | Notifications Service Guild · Attestor Service Guild · Observability Guild | Sprint 150.A Orchestrator | **DOING (2025-11-12)** | Scope confirmation + template/OAS prep underway; execution tracked in `SPRINT_171_notifier_i.md` (NOTIFY-ATTEST/OAS/OBS/RISK series). |
| 170.B Telemetry | Telemetry Core Guild · Observability Guild · Security Guild | Sprint 150.A Orchestrator | **DOING (2025-11-12)** | Bootstrapping `StellaOps.Telemetry.Core` plus adoption runway in `SPRINT_174_telemetry.md`; waiting on Orchestrator/Policy hosts to consume new helpers. |
# Sprint 170 - Notifications & Telemetry
## Wave 170.A Notifier readiness
### Scope & goals
- Deliver attestation/key-rotation alert templates plus routing so Attestor/Signer incidents surface immediately (NOTIFY-ATTEST-74-001/002).
- Refresh Notifier OpenAPI/SDK surface (`NOTIFY-OAS-61-001` → `NOTIFY-OAS-63-001`) so Console/CLI teams can self-serve the new endpoints.
- Wire SLO/incident inputs into rules (NOTIFY-OBS-51-001/55-001) and extend risk-profile routing (NOTIFY-RISK-66-001 → NOTIFY-RISK-68-001) without regressing quiet-hours/dedup.
- Preserve Offline Kit and documentation parity (NOTIFY-DOC-70-001 — done, NOTIFY-AIRGAP-56-002 — done) while adding the new rule surfaces.
### Entry criteria
- Orchestrator job attest events flowing to Notify bus (Sprint 150.A dependency) with test fixtures approved by Attestor Guild.
- Quiet-hours/digest backlog reconciled (no pending blockers in `docs/notifications/*.md`).
- Observability Guild sign-off on telemetry fields reused by Notifier SLO webhooks.
### Exit criteria
- All NOTIFY-ATTEST/OAS/OBS/RISK tasks in `SPRINT_171_notifier_i.md` moved to DONE with accompanying doc updates.
- Templates promoted to Offline Kit manifests and sample payloads stored under `docs/notifications/templates.md`.
- Incident mode notifications exercised in staging with audit logs + DSSE evidence attached.
### Task clusters & owners
| Cluster | Linked tasks | Owners | Status snapshot | Notes |
| --- | --- | --- | --- | --- |
| Attestation / key lifecycle alerts | NOTIFY-ATTEST-74-001/74-002 | Notifications Service Guild · Attestor Service Guild | TODO → DOING (prep) | Template scaffolding drafted; awaiting Rekor witness payload contract freeze. |
| API/OAS refresh & SDK parity | NOTIFY-OAS-61-001 → NOTIFY-OAS-63-001 | Notifications Service Guild · API Contracts Guild · SDK Generator Guild | TODO | Contract doc outline in review; SDK generator blocked on `/notifications/rules` schema finalization (target 2025-11-15). |
| Observability-driven triggers | NOTIFY-OBS-51-001/55-001 | Notifications Service Guild · Observability Guild | TODO | Depends on Telemetry team exposing SLO webhook payload shape (see TELEMETRY-OBS-51-001). |
| Risk profile routing | NOTIFY-RISK-66-001 → NOTIFY-RISK-68-001 | Notifications Service Guild · Risk Engine Guild · Policy Guild | TODO | Requires Policy's risk profile metadata (POLICY-RISK-40-002) export; follow up in Sprint 175. |
| Docs & offline parity | NOTIFY-DOC-70-001, NOTIFY-AIRGAP-56-002 | Notifications Service Guild · DevOps Guild | DONE | Remains reference for GA checklists; keep untouched unless new surfaces appear. |
### Observability checkpoints
- Align metric names/labels with `docs/notifications/architecture.md#12-observability-prometheus--otel` before promoting new dashboards.
- Ensure Notifier spans/logs include tenant, ruleId, actionId, and `attestation_event_id` for attestation-triggered templates.
- Capture incident notification smoke tests via `ops/devops/telemetry/tenant_isolation_smoke.py` once Telemetry wave lands.
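
The span/log fields listed above can be enforced with a small attribute builder so attestation-triggered templates never emit partially correlated telemetry; a sketch (the helper name and validation behavior are hypothetical, not the agreed Notifier API):

```python
def attestation_span_attributes(tenant, rule_id, action_id, attestation_event_id):
    """Build the attribute set every attestation-triggered Notifier
    span/log must carry; refuse to emit incomplete correlation data."""
    attrs = {
        "tenant": tenant,
        "ruleId": rule_id,
        "actionId": action_id,
        "attestation_event_id": attestation_event_id,
    }
    missing = [key for key, value in attrs.items() if not value]
    if missing:
        raise ValueError(f"span attributes missing: {missing}")
    return attrs

print(attestation_span_attributes("t1", "rule-7", "act-2", "evt-9"))
```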
## Wave 170.B Telemetry bootstrap
### Scope & goals
- Ship `StellaOps.Telemetry.Core` bootstrap + propagation helpers (TELEMETRY-OBS-50-001/50-002).
- Provide golden-signal helpers + scrubbing/PII safety nets (TELEMETRY-OBS-51-001/51-002) so service teams can onboard without bespoke plumbing.
- Implement incident + sealed-mode toggles (TELEMETRY-OBS-55-001/56-001) and document the integration contract for Orchestrator, Policy, Task Runner, Gateway (`WEB-OBS-50-001`).
### Entry criteria
- Orchestrator + Policy hosts expose extension points for telemetry bootstrap (tracked via Sprint 150.A and IDs ORCH-OBS-50-001 / POLICY-OBS-50-001).
- Observability Guild reviewed storage footprint impacts for Prometheus/Tempo/Loki per module (docs/modules/telemetry/architecture.md §2).
- Security Guild signs off on redaction defaults + tenant override audit logging.
### Exit criteria
- Core library published to `/local-nugets` and referenced by at least Orchestrator & Policy in integration branches.
- Context propagation middleware validated through HTTP/gRPC/job smoke tests with deterministic trace IDs.
- Incident/sealed-mode toggles wired into CLI + Notify hooks (NOTIFY-OBS-55-001) with runbooks updated under `docs/notifications/architecture.md`.
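
The deterministic trace IDs from the exit criteria can be derived by hashing stable job identity; a sketch under the assumption that the real `StellaOps.Telemetry.Core` helper may salt or format differently:

```python
import hashlib

def deterministic_trace_id(tenant: str, job_id: str) -> str:
    """Derive a stable 16-byte trace id from tenant + job identity so
    re-runs of the same job correlate to the same trace."""
    digest = hashlib.sha256(f"{tenant}:{job_id}".encode()).hexdigest()
    return digest[:32]  # W3C trace ids are 16 bytes / 32 hex chars

print(deterministic_trace_id("tenant-a", "job-42"))
```

Determinism is what makes the HTTP/gRPC/job smoke tests reproducible across replays.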
### Task clusters & owners
| Cluster | Linked tasks | Owners | Status snapshot | Notes |
| --- | --- | --- | --- | --- |
| Bootstrap & propagation | TELEMETRY-OBS-50-001/50-002 | Telemetry Core Guild | TODO → DOING (scaffolding) | Collector profile templates staged; need service metadata detector + sample host integration PRs. |
| Metrics helpers + scrubbing | TELEMETRY-OBS-51-001/51-002 | Telemetry Core Guild · Observability Guild · Security Guild | TODO | Roslyn analyzer spec drafted; waiting on scrub policy from Security (POLICY-SEC-42-003). |
| Incident & sealed-mode controls | TELEMETRY-OBS-55-001/56-001 | Telemetry Core Guild · Observability Guild | TODO | Requires CLI toggle contract (CLI-OBS-12-001) and Notify incident payload spec (NOTIFY-OBS-55-001). |
### Tooling & validation
- Smoke: `ops/devops/telemetry/smoke_otel_collector.py` + `tenant_isolation_smoke.py` to run for each profile (default/forensic/airgap).
- Offline bundle packaging: `ops/devops/telemetry/package_offline_bundle.py` to include updated collectors, dashboards, manifest digests.
- Incident simulation: reuse `ops/devops/telemetry/generate_dev_tls.sh` for local collector certs during sealed-mode testing.
## Shared milestones & dependencies
| Target date | Milestone | Owners | Dependency notes |
| --- | --- | --- | --- |
| 2025-11-13 | Finalize attestation payload schema + template variables | Notifications Service Guild · Attestor Service Guild | Unblocks NOTIFY-ATTEST-74-001/002 + Telemetry incident span labels. |
| 2025-11-15 | Publish draft Notifier OAS + SDK snippets | Notifications Service Guild · API Contracts Guild | Required for CLI/UI adoption; prereq for NOTIFY-OAS-61/62 series. |
| 2025-11-18 | Land Telemetry.Core bootstrap sample in Orchestrator | Telemetry Core Guild · Orchestrator Guild | Demonstrates TELEMETRY-OBS-50-001 viability; prerequisite for Policy adoption + Notify SLO hooks. |
| 2025-11-20 | Incident/quiet-hour end-to-end rehearsal | Notifications Service Guild · Telemetry Core Guild · Observability Guild | Validates TELEMETRY-OBS-55-001 + NOTIFY-OBS-55-001 + CLI toggle contract. |
| 2025-11-22 | Offline kit bundle refresh (notifications + telemetry assets) | DevOps Guild · Notifications Service Guild · Telemetry Core Guild | Ensure docs/ops/offline-kit manifests reference new templates/configs. |
## Risks & mitigations
- **Telemetry data drift in sealed mode.** Mitigate by enforcing `IEgressPolicy` checks (TELEMETRY-OBS-56-001) and documenting fallback exporters; schedule smoke runs after each config change.
- **Template/API divergence.** Maintain single source of truth in `SPRINT_171_notifier_i.md` tasks; require API Contracts review before merging SDK updates to avoid drift with UI consumers.
- **Observability storage overhead.** Coordinate with Ops Guild to project Prometheus/Tempo growth when SLO webhooks + incident toggles increase cardinality; adjust retention per docs/modules/telemetry/architecture.md §2.
- **Cross-sprint dependency churn.** Track ORCH-OBS-50-001, POLICY-OBS-50-001, WEB-OBS-50-001 weekly; if they slip, re-baseline Telemetry wave deliverables or gate Notifier observability triggers accordingly.
## Task mirror snapshot (reference: Sprint 171 & 174 trackers)
### Wave 170.A Notifier (Sprint 171 mirror)
- **Open tasks:** 11 (NOTIFY-ATTEST/OAS/OBS/RISK series).
- **Done tasks:** 2 (NOTIFY-DOC-70-001, NOTIFY-AIRGAP-56-002) serve as baselines for doc/offline parity.
| Category | Task IDs | Current state | Notes |
| --- | --- | --- | --- |
| Attestation + key lifecycle | NOTIFY-ATTEST-74-001/002 | **DOING / TODO** | Template creation in progress (74-001) with doc updates in `docs/notifications/templates.md`; wiring (74-002) waiting on schema freeze & template hand-off. |
| API/OAS + SDK refresh | NOTIFY-OAS-61-001 → 63-001 | **DOING / TODO** | OAS doc updates underway (61-001); downstream endpoints/SDK items remain TODO until schema merged. |
| Observability-driven triggers | NOTIFY-OBS-51-001/55-001 | TODO | Depends on Telemetry SLO webhook schema + incident toggle contract. |
| Risk routing | NOTIFY-RISK-66-001 → 68-001 | TODO | Policy/Risk metadata export (POLICY-RISK-40-002) required before implementation. |
| Completed prerequisites | NOTIFY-DOC-70-001, NOTIFY-AIRGAP-56-002 | DONE | Keep as reference for documentation/offline-kit parity. |
### Wave 170.B Telemetry (Sprint 174 mirror)
- **Open tasks:** 6 (TELEMETRY-OBS-50/51/55/56 series).
- **Done tasks:** 0 (wave not yet started in Sprint 174 beyond in-progress scaffolding).
| Category | Task IDs | Current state | Notes |
| --- | --- | --- | --- |
| Bootstrap & propagation | TELEMETRY-OBS-50-001/002 | **DOING / TODO** | Core bootstrap coding active (50-001); propagation adapters (50-002) queued pending package publication. |
| Metrics helpers & scrubbing | TELEMETRY-OBS-51-001/002 | TODO | Roslyn analyzer + scrub policy review pending Security Guild approval. |
| Incident & sealed-mode controls | TELEMETRY-OBS-55-001/56-001 | TODO | Requires CLI toggle contract (CLI-OBS-12-001) and Notify incident payload spec (NOTIFY-OBS-55-001). |
## External dependency tracker
| Dependency | Source sprint / doc | Current state (as of 2025-11-12) | Impact on waves |
| --- | --- | --- | --- |
| Sprint 150.A Orchestrator (wave table) | `SPRINT_150_scheduling_automation.md` | TODO | Blocks Notifier template wiring + Telemetry consumption of job events until orchestration telemetry lands. |
| ORCH-OBS-50-001 `orchestrator instrumentation` | `docs/implplan/archived/tasks.md` excerpt / Sprint 150 backlog | TODO | Needed for Telemetry.Core sample + Notify SLO hooks; monitor for slip. |
| POLICY-OBS-50-001 `policy instrumentation` | Sprint 150 backlog | TODO | Required before Telemetry helpers can be adopted by Policy + risk routing. |
| WEB-OBS-50-001 `gateway telemetry core adoption` | Sprint 214/215 backlogs | TODO | Ensures web/gateway emits trace IDs that Notify incident payload references. |
| POLICY-RISK-40-002 `risk profile metadata export` | Sprint 215+ (Policy) | TODO | Prerequisite for NOTIFY-RISK-66/67/68 payload enrichment. |
## Coordination log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-12 10:15 | Wave rows flipped to DOING; baseline scope/entry/exit criteria recorded for both waves. | Observability Guild · Notifications Service Guild |
| 2025-11-12 14:40 | Added task mirror + dependency tracker + milestone table to keep the Sprint 170 snapshot aligned with Sprint 171/174 execution plans. | Observability Guild |
| 2025-11-12 18:05 | Marked NOTIFY-ATTEST-74-001, NOTIFY-OAS-61-001, and TELEMETRY-OBS-50-001 as DOING in their sprint trackers; added status notes reflecting in-flight work vs. gated follow-ups. | Notifications Service Guild · Telemetry Core Guild |
| 2025-11-12 19:20 | Documented attestation template suite (Section 7 in `docs/notifications/templates.md`) to unblock NOTIFY-ATTEST-74-001 deliverables and updated sprint mirrors accordingly. | Notifications Service Guild |
| 2025-11-12 19:32 | Synced notifications architecture doc to reference the new attestation template suite so downstream teams see the dependency in one place. | Notifications Service Guild |
| 2025-11-12 19:45 | Updated notifications overview + rules docs with `tmpl-attest-*` requirements so rule authors/operators share the same contract. | Notifications Service Guild |
| 2025-11-12 20:05 | Published baseline Offline Kit templates under `offline/notifier/templates/attestation/` for Slack/Email/Webhook so NOTIFY-ATTEST-74-002 wiring has ready-made artefacts. | Notifications Service Guild |


@@ -7,9 +7,9 @@ Depends on: Sprint 150.A - Orchestrator
Summary: Notifications & Telemetry focus on Notifier (phase I).
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
NOTIFY-ATTEST-74-001 | **DOING (2025-11-12)** | Create notification templates for verification failures, expiring attestations, key revocations, and transparency anomalies. | Notifications Service Guild, Attestor Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-ATTEST-74-002 | TODO | Wire notifications to key rotation/revocation events and transparency witness failures. Dependencies: NOTIFY-ATTEST-74-001. | Notifications Service Guild, KMS Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-OAS-61-001 | **DOING (2025-11-12)** | Update notifier OAS with rules, templates, incidents, quiet hours endpoints using standard error envelope and examples. | Notifications Service Guild, API Contracts Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-OAS-61-002 | TODO | Implement `/.well-known/openapi` discovery endpoint with scope metadata. Dependencies: NOTIFY-OAS-61-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-OAS-62-001 | TODO | Provide SDK usage examples for rule CRUD, incident ack, and quiet hours; ensure SDK smoke tests. Dependencies: NOTIFY-OAS-61-002. | Notifications Service Guild, SDK Generator Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-OAS-63-001 | TODO | Emit deprecation headers and Notifications templates for retiring notifier APIs. Dependencies: NOTIFY-OAS-62-001. | Notifications Service Guild, API Governance Guild (src/Notifier/StellaOps.Notifier)
@@ -20,3 +20,29 @@ NOTIFY-RISK-67-001 | TODO | Notify stakeholders when risk profiles are published
NOTIFY-RISK-68-001 | TODO | Support per-profile routing rules, quiet hours, and dedupe for risk alerts; integrate with CLI/Console preferences. Dependencies: NOTIFY-RISK-67-001. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-DOC-70-001 | DONE (2025-11-02) | Document the split between legacy `src/Notify` libraries and the new `src/Notifier` runtime, updating architecture docs with rationale/cross-links. | Notifications Service Guild (src/Notifier/StellaOps.Notifier)
NOTIFY-AIRGAP-56-002 | DONE | Provide Bootstrap Pack notifier configurations with deterministic secrets handling and offline validation steps. Dependencies: NOTIFY-AIRGAP-56-001. | Notifications Service Guild, DevOps Guild (src/Notifier/StellaOps.Notifier)
## Status notes (2025-11-12 UTC)
- **NOTIFY-ATTEST-74-001**: Template matrix (verification failure, expiring attestation, key revoke, witness anomaly) drafted; Section 7 added to `docs/notifications/templates.md` plus cross-references in `notifications/overview.md` and `notifications/rules.md` so rule authors and operators use the canonical `tmpl-attest-*` suite; baseline template exports now live under `offline/notifier/templates/attestation/*.template.json`; waiting on Attestor schema freeze (due 2025-11-13) before locking copy and localization tokens.
- **NOTIFY-OAS-61-001**: OpenAPI document restructure underway; shared error envelope + examples added, but `quietHours` and `incident` sections still need review with the API Contracts Guild.
- **NOTIFY-OBS-51-001 / NOTIFY-OBS-55-001**: Remain TODO pending the Telemetry SLO webhook schema + incident toggle contract; coordinate with the TELEMETRY-OBS-50/55 tasks.
- **NOTIFY-RISK-66-001 → NOTIFY-RISK-68-001**: Blocked by the Policy export (`POLICY-RISK-40-002`) supplying profile metadata; revisit once the Policy sprint publishes the feed.
## Milestones & dependencies
| Target date | Milestone | Owner(s) | Notes / dependencies |
| --- | --- | --- | --- |
| 2025-11-13 | Finalize attestation payload schema + localization tokens | Notifications Service Guild · Attestor Service Guild | Required to close NOTIFY-ATTEST-74-001 and unblock NOTIFY-ATTEST-74-002 wiring work. |
| 2025-11-15 | Draft Notifier OAS published for review | Notifications Service Guild · API Contracts Guild | Enables follow-on `.well-known` endpoint and SDK tasks (NOTIFY-OAS-61-002/62-001). |
| 2025-11-18 | Incident payload contract agreed with Telemetry & Ops | Notifications Service Guild · Observability Guild | Needed before NOTIFY-OBS-51-001/55-001 can move to DOING. |
| 2025-11-20 | Risk profile metadata export available (`POLICY-RISK-40-002`) | Notifications Service Guild · Policy Guild | Gate for NOTIFY-RISK-66-001 → NOTIFY-RISK-68-001 implementation. |
## Coordination log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-12 18:05 | Marked NOTIFY-ATTEST-74-001 and NOTIFY-OAS-61-001 as DOING; documented current blockers in status notes. | Notifications Service Guild |
| 2025-11-12 19:20 | Added attestation template suite (Section 7 of `docs/notifications/templates.md`) covering template keys/helpers/samples to support NOTIFY-ATTEST-74-001 deliverables. | Notifications Service Guild |
| 2025-11-12 19:32 | Updated `docs/notifications/architecture.md` rendering section to reference the new `tmpl-attest-*` suite so architecture + template docs stay in sync. | Notifications Service Guild |
| 2025-11-12 19:45 | Synced `docs/notifications/overview.md` and `docs/notifications/rules.md` with the attestation template requirements so operators and rule authors see the mandated keys. | Notifications Service Guild |
| 2025-11-12 20:05 | Added baseline template exports under `offline/notifier/templates/attestation/` (Slack/Email/Webhook variants) to seed Offline Kit bundles. | Notifications Service Guild |


@@ -7,9 +7,31 @@ Depends on: Sprint 150.A - Orchestrator
Summary: Notifications & Telemetry focus on Telemetry.
Task ID | State | Task description | Owners (Source)
--- | --- | --- | ---
TELEMETRY-OBS-50-001 | **DOING (2025-11-12)** | Create `StellaOps.Telemetry.Core` library with structured logging facade, OpenTelemetry configuration helpers, and deterministic bootstrap (service name/version detection, resource attributes). Publish sample usage for web/worker hosts. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-50-002 | TODO | Implement context propagation middleware/adapters for HTTP, gRPC, background jobs, and CLI invocations, carrying `trace_id`, `tenant_id`, `actor`, and imposed-rule metadata. Provide test harness covering async resume scenarios. Dependencies: TELEMETRY-OBS-50-001. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-001 | TODO | Ship metrics helpers for golden signals (histograms, counters, gauges) with exemplar support and cardinality guards. Provide Roslyn analyzer preventing unsanitised labels. Dependencies: TELEMETRY-OBS-50-002. | Telemetry Core Guild, Observability Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-51-002 | TODO | Implement redaction/scrubbing filters for secrets/PII enforced at logger sink, configurable per-tenant with TTL, including audit of overrides. Add determinism tests verifying stable field order and timestamp normalization. Dependencies: TELEMETRY-OBS-51-001. | Telemetry Core Guild, Security Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-55-001 | TODO | Provide incident mode toggle API that adjusts sampling, enables extended retention tags, and records activation trail for services. Ensure toggle honored by all hosting templates and integrates with Config/FeatureFlag providers. Dependencies: TELEMETRY-OBS-51-002. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
TELEMETRY-OBS-56-001 | TODO | Add sealed-mode telemetry helpers (drift metrics, seal/unseal spans, offline exporters) and ensure hosts can disable external exporters when sealed. Dependencies: TELEMETRY-OBS-55-001. | Telemetry Core Guild (src/Telemetry/StellaOps.Telemetry.Core)
## Status notes (2025-11-12 UTC)
- **TELEMETRY-OBS-50-001**: Core bootstrap scaffolding live in branch `feature/telemetry-core-bootstrap`; resource detector + profile manifest generator in review; sample Orchestrator host integration slated for 2025-11-18.
- **TELEMETRY-OBS-50-002**: Awaiting the merged bootstrap package before wiring propagation adapters; draft design covers HTTP/gRPC/job/CLI interceptors plus tenant/actor propagation tests.
- **TELEMETRY-OBS-51-001/51-002**: On hold until propagation middleware stabilizes; Security Guild still reviewing the scrub policy (POLICY-SEC-42-003).
- **TELEMETRY-OBS-55-001/56-001**: Incident/sealed-mode APIs blocked on the CLI toggle contract (CLI-OBS-12-001) and the Notify incident payload spec (NOTIFY-OBS-55-001); keep coordinating with the Notifier team.
## Milestones & dependencies
| Target date | Milestone | Owner(s) | Notes / dependencies |
| --- | --- | --- | --- |
| 2025-11-18 | Land Telemetry.Core bootstrap sample in Orchestrator | Telemetry Core Guild · Orchestrator Guild | Demonstrates TELEMETRY-OBS-50-001 deliverable; prerequisite for propagation middleware adoption. |
| 2025-11-19 | Publish propagation adapter API draft | Telemetry Core Guild | Needed for TELEMETRY-OBS-50-002 and downstream service adoption. |
| 2025-11-21 | Security sign-off on scrub policy (POLICY-SEC-42-003) | Telemetry Core Guild · Security Guild | Unlocks TELEMETRY-OBS-51-001/51-002 implementation. |
| 2025-11-22 | Incident/CLI toggle contract agreed (CLI-OBS-12-001 + NOTIFY-OBS-55-001) | Telemetry Core Guild · Notifications Service Guild · CLI Guild | Required before TELEMETRY-OBS-55-001/56-001 can advance. |
## Coordination log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-12 18:05 | Marked TELEMETRY-OBS-50-001 as DOING and captured branch/progress details in status notes. | Telemetry Core Guild |


@@ -13,3 +13,20 @@ RUNBOOK-REPLAY-187-004 | TODO | Publish `/docs/runbooks/replay_ops.md` covering
EVID-CRYPTO-90-001 | TODO | Route Evidence Locker hashing/signing (manifest digests, DSSE assembly, bundle encryption) through `ICryptoProviderRegistry`/`ICryptoHash` so sovereign profiles (e.g., `ru-offline`) can swap providers per `docs/security/crypto-routing-audit-2025-11-07.md`. | Evidence Locker Guild, Security Guild (`src/EvidenceLocker/StellaOps.EvidenceLocker`)
> 2025-11-03: `/docs/runbooks/replay_ops.md` created — Evidence Locker, CLI, Attestor teams can transition replay delivery tasks to **DOING** alongside Ops runbook rehearsals.
## Task snapshot (2025-11-12)
- EvidenceLocker: `EVID-REPLAY-187-001` (replay ingestion/retention) and `EVID-CRYPTO-90-001` (sovereign crypto).
- CLI/Docs: `CLI-REPLAY-187-002` plus `RUNBOOK-REPLAY-187-004` ensure offline workflows + ops readiness.
- Attestor: `RUNBOOK-REPLAY-187-003` documents replay ledger integration with DSSE/attest flows.
## Dependencies & blockers
- Tied to Sprint 160 wave progress (EvidenceLocker DSSE schema + orchestrator capsule events).
- Requires Scanner Record Mode (Sprint 186) payload stability to drive replay ingestion.
- CLI/Attestor work depends on EvidenceLocker API schema freeze.
- Sovereign crypto readiness review on 2025-11-18 must approve provider registry usage.
## Ready-to-start checklist
1. Capture sample replay bundle payloads from Scanner record mode for CLI + Attestor reference.
2. Align EvidenceLocker API design with Replay Delivery + Ops teams, documenting endpoints before coding.
3. Schedule joint review covering `/docs/runbooks/replay_ops.md` with EvidenceLocker, CLI, Attestor, Ops.
4. Confirm `ICryptoProviderRegistry` coverage for replay bundle signing/encryption ahead of the Nov-18 review.


@@ -52,8 +52,8 @@ _Theme:_ Finish the provable reachability pipeline (graph CAS → replay → DSS
| UNCERTAINTY-SCORER-401-025 | TODO | Implement the entropy-aware risk scorer (`riskScore = base × reach × trust × (1 + entropyBoost)`) and wire it into finding writes. | Signals Guild (`src/Signals/StellaOps.Signals.Application`, `docs/uncertainty/README.md`) |
| UNCERTAINTY-POLICY-401-026 | TODO | Update policy guidance (Concelier/Excitors) with uncertainty gates (U1/U2/U3), sample YAML rules, and remediation actions. | Policy Guild · Concelier Guild (`docs/policy/dsl.md`, `docs/uncertainty/README.md`) |
| UNCERTAINTY-UI-401-027 | TODO | Surface uncertainty chips/tooltips in the Console (React UI) + CLI output (risk score + entropy states). | UI Guild · CLI Guild (`src/UI/StellaOps.UI`, `src/Cli/StellaOps.Cli`, `docs/uncertainty/README.md`) |
| PROV-INLINE-401-028 | DONE | Extend Authority/Feedser event writers to attach inline DSSE + Rekor references on every SBOM/VEX/scan event using `StellaOps.Provenance.Mongo`. | Authority Guild · Feedser Guild (`docs/provenance/inline-dsse.md`, `src/__Libraries/StellaOps.Provenance.Mongo`) |
| PROV-BACKFILL-401-029 | DOING | Backfill historical Mongo events with DSSE/Rekor metadata by resolving known attestations per subject digest (wiring ingestion helpers + endpoint tests in progress). | Platform Guild (`docs/provenance/inline-dsse.md`, `scripts/publish_attestation_with_provenance.sh`) |
| PROV-INDEX-401-030 | TODO | Deploy provenance indexes (`events_by_subject_kind_provenance`, etc.) and expose compliance/replay queries. | Platform Guild · Ops Guild (`docs/provenance/inline-dsse.md`, `ops/mongo/indices/events_provenance_indices.js`) |
> Use `docs/reachability/DELIVERY_GUIDE.md` for architecture context, dependencies, and acceptance tests.
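The scorer formula named in UNCERTAINTY-SCORER-401-025 composes multiplicatively, so it can be sketched in a few lines. This is a minimal illustration, not the Signals Guild implementation; the function name and the `[0, 1]` clamp on `entropy_boost` are assumptions.

```python
def risk_score(base: float, reach: float, trust: float, entropy_boost: float) -> float:
    """Entropy-aware score: riskScore = base * reach * trust * (1 + entropyBoost).

    `base` is the baseline severity, `reach`/`trust` are multipliers in [0, 1],
    and `entropy_boost` inflates the score when reachability evidence is noisy.
    """
    # Illustrative guard: cap the boost so noisy entropy inputs can at most
    # double the product (the real scorer may bound this differently).
    entropy_boost = min(max(entropy_boost, 0.0), 1.0)
    return base * reach * trust * (1.0 + entropy_boost)
```

For example, a base score of 7.5 with reach 0.8, trust 0.9, and a 0.25 entropy boost yields 6.75, while fully reachable, fully trusted, zero-entropy findings keep their base score.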


@@ -1,6 +1,7 @@
# Advisory AI architecture
> Captures the retrieval, guardrail, and inference packaging requirements defined in the Advisory AI implementation plan and related module guides.
> Configuration knobs (inference modes, guardrails, cache/queue budgets) now live in [`docs/policy/assistant-parameters.md`](../../policy/assistant-parameters.md) per DOCS-AIAI-31-006.
## 1) Goals


@@ -4,6 +4,7 @@ Excititor converts heterogeneous VEX feeds into raw observations and linksets th
## Latest updates (2025-11-05)
- Link-Not-Merge readiness: release note [Excitor consensus beta](../../updates/2025-11-05-excitor-consensus-beta.md) captures how Excititor feeds power the Excititor consensus beta (sample payload in [consensus JSON](../../vex/consensus-json.md)).
- Added [observability guide](operations/observability.md) describing the evidence metrics emitted by `EXCITITOR-AIAI-31-003` (request counters, statement histogram, signature status, guard violations) so Ops/Lens can alert on misuse.
- README now points policy/UI teams to the upcoming consensus integration work.
- DSSE packaging for consensus bundles and Export Center hooks are documented in the [beta release note](../../updates/2025-11-05-excitor-consensus-beta.md); operators mirroring Excititor exports must verify detached JWS artefacts (`bundle.json.jws`) alongside each bundle.
- Follow-ups called out in the release note (Policy weighting knobs `POLICY-ENGINE-30-101`, CLI verb `CLI-VEX-30-002`) remain in-flight and are tracked in `/docs/implplan/SPRINT_200_documentation_process.md`.


@@ -2,7 +2,7 @@
> Consolidates the VEX ingestion guardrails from Epic 1 with consensus and AI-facing requirements from Epics 7 and 8. This is the authoritative architecture record for Excititor.
> **Scope.** This document specifies the **Excititor** service: its purpose, trust model, data structures, observation/linkset pipelines, APIs, plug-in contracts, storage schema, performance budgets, testing matrix, and how it integrates with Concelier, Policy Engine, and evidence surfaces. It is implementation-ready. The immutable observation store schema lives in [`vex_observations.md`](./vex_observations.md).
---


@@ -0,0 +1,41 @@
# Excititor Observability Guide
> Added 2025-11-14 alongside Sprint 119 (`EXCITITOR-AIAI-31-003`). Complements the AirGap/mirror runbooks under the same folder.
Excititor's evidence APIs now emit first-class OpenTelemetry metrics so Lens, Advisory AI, and Ops can detect misuse or missing provenance without paging through logs. This document lists the counters/histograms shipped by the WebService (`src/Excititor/StellaOps.Excititor.WebService`) and how to hook them into your exporters/dashboards.
## Telemetry prerequisites
- Enable `Excititor:Telemetry` in the service configuration (`appsettings.*`), ensuring **metrics** export is on. The WebService automatically adds the evidence meter (`StellaOps.Excititor.WebService.Evidence`) alongside the ingestion meter.
- Deploy at least one OTLP or console exporter (see `TelemetryExtensions.ConfigureExcititorTelemetry`). If your region lacks OTLP transport, fall back to scraping the console exporter for smoke tests.
- Coordinate with the Ops/Signals guild to provision the span/metric sinks referenced in `docs/modules/platform/architecture-overview.md#observability`.
## Metrics reference
| Metric | Type | Description | Key dimensions |
| --- | --- | --- | --- |
| `excititor.vex.observation.requests` | Counter | Number of `/v1/vex/observations/{vulnerabilityId}/{productKey}` requests handled. | `tenant`, `outcome` (`success`, `error`, `cancelled`), `truncated` (`true/false`) |
| `excititor.vex.observation.statement_count` | Histogram | Distribution of statements returned per observation projection request. | `tenant`, `outcome` |
| `excititor.vex.signature.status` | Counter | Signature status per statement (missing vs. unverified). | `tenant`, `status` (`missing`, `unverified`) |
| `excititor.vex.aoc.guard_violations` | Counter | Aggregated count of Aggregation-Only Contract violations detected by the WebService (ingest + `/vex/aoc/verify`). | `tenant`, `surface` (`ingest`, `aoc_verify`, etc.), `code` (AOC error code) |
> All metrics originate from the `EvidenceTelemetry` helper (`src/Excititor/StellaOps.Excititor.WebService/Telemetry/EvidenceTelemetry.cs`). When disabled (telemetry off), the helper is inert.
### Dashboard hints
- **Advisory-AI readiness:** alert when `excititor.vex.signature.status{status="missing"}` spikes for a tenant, indicating connectors aren't supplying signatures.
- **Guardrail monitoring:** graph `excititor.vex.aoc.guard_violations` per `code` to catch upstream feed regressions before they pollute Evidence Locker or Lens caches.
- **Capacity planning:** histogram percentiles of `excititor.vex.observation.statement_count` feed API sizing (higher counts mean Advisory AI is requesting broad scopes).
## Operational steps
1. **Enable telemetry**: set `Excititor:Telemetry:EnableMetrics=true`, configure OTLP endpoints/headers as described in `TelemetryExtensions`.
2. **Add dashboards**: import panels referencing the metrics above (see Grafana JSON snippets in Ops repo once merged).
3. **Alerting**: add rules for high guard violation rates and missing signatures. Tie alerts back to connectors via tenant metadata.
4. **Post-deploy checks**: after each release, verify metrics emit by curling `/v1/vex/observations/...`, watching the console exporter (dev) or OTLP (prod).
## Related documents
- `docs/modules/excititor/architecture.md`: API contract, AOC guardrails, connector responsibilities.
- `docs/modules/excititor/mirrors.md`: AirGap/mirror ingestion checklist (feeds into `EXCITITOR-AIRGAP-56/57`).
- `docs/modules/platform/architecture-overview.md#observability`: platform-wide telemetry guidance.

View File

@@ -0,0 +1,131 @@
# VEX Observation Model (`vex_observations`)
> Authored 2025-11-14 for Sprint 120 (`EXCITITOR-LNM-21-001`). This document is the canonical schema description for Excititor's immutable observation records. It unblocks downstream documentation tasks (`DOCS-LNM-22-002`) and aligns the WebService/Worker data structures with Mongo persistence.
Excititor ingests heterogeneous VEX statements, normalizes them under the Aggregation-Only Contract (AOC), and persists each normalized statement as a **VEX observation**. These observations are the source of truth for:
- Advisory AI citation APIs (`/v1/vex/observations/{vulnerabilityId}/{productKey}`)
- Graph/Vuln Explorer overlays (batch observation APIs)
- Evidence Locker + portable bundle manifests
- Policy Engine materialization and audit trails
All observation documents are immutable. New information creates a new observation record linked by `observationId`; supersedence happens through Graph/Lens layers, not by mutating this collection.
## Storage & routing
| Aspect | Value |
| --- | --- |
| Collection | `vex_observations` (Mongo) |
| Upstream generator | `VexObservationProjectionService` (WebService) and Worker normalization pipeline |
| Primary key | `{tenant, observationId}` |
| Required indexes | `{tenant, vulnerabilityId}`, `{tenant, productKey}`, `{tenant, document.digest}`, `{tenant, providerId, status}` |
| Source of truth for | `/v1/vex/observations`, Graph batch APIs, Excititor → Evidence Locker replication |
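
The required indexes from the table above can be provisioned with a short script. The sketch below is illustrative: the index key tuples mirror the table, while the `ensure_indexes` helper name and the pymongo-style collection wiring are assumptions, not the service's actual migration code.

```python
# Compound index specifications for vex_observations, mirroring the table above.
# Each entry is a list of (field, direction) pairs in pymongo's create_index format.
VEX_OBSERVATION_INDEXES = [
    [("tenant", 1), ("observationId", 1)],        # primary key -> created unique
    [("tenant", 1), ("vulnerabilityId", 1)],
    [("tenant", 1), ("productKey", 1)],
    [("tenant", 1), ("document.digest", 1)],
    [("tenant", 1), ("providerId", 1), ("status", 1)],
]

def ensure_indexes(collection) -> None:
    """Apply the specs against a pymongo-style collection handle.

    Only the first spec (the primary key) is created unique; the rest are
    plain compound indexes. The collection handle itself is an assumption.
    """
    for position, keys in enumerate(VEX_OBSERVATION_INDEXES):
        collection.create_index(keys, unique=(position == 0))
```

Calling `ensure_indexes(db["vex_observations"])` at startup is idempotent in MongoDB: `create_index` is a no-op when an identical index already exists.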
## Canonical document shape
```jsonc
{
"tenant": "default",
"observationId": "vex:obs:sha256:...",
"vulnerabilityId": "CVE-2024-12345",
"productKey": "pkg:maven/org.example/app@1.2.3",
"providerId": "ubuntu-csaf",
"status": "affected", // matches VexClaimStatus enum
"justification": {
"type": "component_not_present",
"reason": "Package not shipped in this profile",
"detail": "Binary not in base image"
},
"detail": "Free-form vendor detail",
"confidence": {
"score": 0.9,
"level": "high",
"method": "vendor"
},
"signals": {
"severity": {
"scheme": "cvss3.1",
"score": 7.8,
"label": "High",
"vector": "CVSS:3.1/..."
},
"kev": true,
"epss": 0.77
},
"scope": {
"key": "pkg:deb/ubuntu/apache2@2.4.58-1",
"purls": [
"pkg:deb/ubuntu/apache2@2.4.58-1",
"pkg:docker/example/app@sha256:..."
],
"cpes": ["cpe:2.3:a:apache:http_server:2.4.58:*:*:*:*:*:*:*"]
},
"anchors": [
"#/statements/0/justification",
"#/statements/0/detail"
],
"document": {
"format": "csaf",
"digest": "sha256:abc123...",
"revision": "2024-10-22T09:00:00Z",
"sourceUri": "https://ubuntu.com/security/notices/USN-0000-1",
"signature": {
"type": "cosign",
"issuer": "https://token.actions.githubusercontent.com",
"keyId": "ubuntu-vex-prod",
"verifiedAt": "2024-10-22T09:01:00Z",
"transparencyLogReference": "rekor://UUID",
"trust": {
"tenantId": "default",
"issuerId": "ubuntu",
"effectiveWeight": 0.9,
"tenantOverrideApplied": false,
"retrievedAtUtc": "2024-10-22T09:00:30Z"
}
}
},
"aoc": {
"guardVersion": "2024.10.0",
"violations": [], // non-empty -> stored + surfaced
"ingestedAt": "2024-10-22T09:00:05Z",
"retrievedAt": "2024-10-22T08:59:59Z"
},
"metadata": {
"provider-hint": "Mainline feed",
"source-channel": "mirror"
}
}
```
### Field notes
- **`tenant`**: logical tenant resolved by the WebService from headers or default configuration.
- **`observationId`**: deterministic hash (sha256) over `{tenant, vulnerabilityId, productKey, providerId, statementDigest}`. Never reused.
- **`status` + `justification`**: follow the OpenVEX semantics enforced by `StellaOps.Excititor.Core.VexClaim`.
- **`scope`**: includes the canonical `key` plus normalized PURLs/CPEs; deterministic ordering.
- **`anchors`**: optional JSON-pointer hints pointing to the source document sections; stored as trimmed strings.
- **`document.signature`**: mirrors `VexSignatureMetadata`; empty if the upstream feed lacks signatures.
- **`aoc.violations`**: stored if the guard detected non-fatal issues; fatal issues never create an observation.
- **`metadata`**: reserved for deterministic provider hints; keys follow the `vex.*` prefix guidance.
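
The deterministic `observationId` derivation can be sketched as follows. Only the input field set `{tenant, vulnerabilityId, productKey, providerId, statementDigest}` and the `vex:obs:sha256:` prefix come from this spec; the separator byte and exact concatenation order below are assumptions for illustration.

```python
import hashlib

def observation_id(tenant: str, vulnerability_id: str, product_key: str,
                   provider_id: str, statement_digest: str) -> str:
    """Illustrative sketch: hash the five identifying fields with a fixed
    separator so identical inputs always yield the same observationId.
    The unit-separator joiner is an assumption, not the service's scheme."""
    payload = "\x1f".join(
        [tenant, vulnerability_id, product_key, provider_id, statement_digest]
    )
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return f"vex:obs:sha256:{digest}"
```

Because the function is pure, re-ingesting the same statement reproduces the same id, while any changed field (a new statement digest, a different provider) yields a fresh id and therefore a new write-once document.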
## Determinism & AOC guarantees
1. **Write-once**: once inserted, observation documents never change. New evidence creates a new `observationId`.
2. **Sorted collections**: arrays (`anchors`, `purls`, `cpes`) are sorted lexicographically before persistence.
3. **Guard metadata**: `aoc.guardVersion` records the guard library version (`docs/aoc/guard-library.md`), enabling audits.
4. **Signatures**: only verification metadata proven by the Worker is stored; the WebService never recomputes trust.
5. **Time normalization**: all timestamps are stored as UTC ISO-8601 strings (Mongo `DateTime`).
## API mapping
| API | Source fields | Notes |
| --- | --- | --- |
| `/v1/vex/observations/{vuln}/{product}` | `tenant`, `vulnerabilityId`, `productKey`, `scope`, `statements[]` | Response uses `VexObservationProjectionService` to render `statements`, `document`, and `signature` fields. |
| `/vex/aoc/verify` | `document.digest`, `providerId`, `aoc` | Replays guard validation for recent digests; guard violations here align with `aoc.violations`. |
| Evidence batch API (Graph) | `statements[]`, `scope`, `signals`, `anchors` | Format optimized for overlays; reduces `document` to digest/URI. |
## Related work
- `EXCITITOR-GRAPH-24-*` relies on this schema to build overlays.
- `DOCS-LNM-22-002` (Link-Not-Merge documentation) references this file.
- `EXCITITOR-ATTEST-73-*` uses `document.digest` + `signature` to embed provenance in attestation payloads.

View File

@@ -0,0 +1,61 @@
# Findings Ledger — Air-Gap Provenance Extensions (LEDGER-AIRGAP-56/57/58)
> **Scope:** How ledger events capture mirror bundle provenance, staleness metrics, evidence snapshots, and sealed-mode timeline events for air-gapped deployments.
## 1. Requirements recap
- **LEDGER-AIRGAP-56-001:** Record mirror bundle metadata (`bundle_id`, `merkle_root`, `time_anchor`, `source_region`) whenever advisories/VEX/policies are imported offline. Tie import provenance to each affected ledger event.
- **LEDGER-AIRGAP-56-002:** Surface staleness metrics and enforce risk-critical export blocks when imported data exceeds freshness SLAs; emit remediation guidance.
- **LEDGER-AIRGAP-57-001:** Link findings evidence snapshots (portable bundles) so cross-enclave verification can attest to the same ledger hash.
- **LEDGER-AIRGAP-58-001:** Emit sealed-mode timeline events describing bundle impacts (new findings, remediation deltas) for Console and Notify.
## 2. Schema additions
| Entity | Field | Type | Notes |
| --- | --- | --- | --- |
| `ledger_events.event_body` | `airgap.bundle` | object | `{ "bundleId", "merkleRoot", "timeAnchor", "sourceRegion", "importedAt", "importOperator" }` recorded on import events. |
| `ledger_events.event_body` | `airgap.evidenceSnapshot` | object | `{ "bundleUri", "dsseDigest", "expiresAt" }` for findings evidence bundles. |
| `ledger_projection` | `airgap.stalenessSeconds` | integer | Age of newest data feeding the finding projection. |
| `ledger_projection` | `airgap.bundleId` | string | Last bundle influencing the projection row. |
| `timeline_events` (new view) | `airgapImpact` | object | Materials needed for LEDGER-AIRGAP-58-001 timeline feed (finding counts, severity deltas). |
Canonical JSON must sort object keys (`bundleId`, `importOperator`, …) to keep hashes deterministic.
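
A minimal sketch of that canonicalization, assuming the standard library's key-sorted, compact JSON form is an acceptable stand-in for the service's canonicalizer:

```python
import json

def canonical_json(obj) -> str:
    # Sort keys recursively and use compact separators so equal objects
    # always serialize to identical bytes before hashing.
    return json.dumps(obj, sort_keys=True, separators=(",", ":"), ensure_ascii=False)
```

Hashing `canonical_json(event_body)` instead of the raw serialized form makes the ledger hash independent of the key order the importer happened to emit.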
## 3. Import workflow
1. **Mirror bundle validation:** AirGap controller verifies bundle signature/manifest before ingest; saves metadata for ledger enrichment.
2. **Event enrichment:** The importer populates `airgap.bundle` fields on each event produced from the bundle. `bundleId` equals the manifest digest (SHA-256). `merkleRoot` is the bundle's manifest Merkle root; `timeAnchor` is the authoritative timestamp from the bundle.
3. **Anchoring:** Merkle batching includes bundle metadata; anchor references in `ledger_merkle_roots.anchor_reference` use format `airgap::<bundleId>` when not externally anchored.
4. **Projection staleness:** Projector updates `airgap.stalenessSeconds` comparing current time with `bundle.timeAnchor` per artifact scope; CLI + Console read the value to display freshness indicators.
## 4. Staleness enforcement
- Config option `AirGapPolicies:FreshnessThresholdSeconds` (default 604800 = 7 days) sets the allowable age.
- Export workflows check `airgap.stalenessSeconds`; when over threshold the service raises `ERR_AIRGAP_STALE` and supplies remediation message referencing the last bundle (`bundleId`, `timeAnchor`, `importOperator`).
- Metrics (`ledger_airgap_staleness_seconds`) track distribution per tenant for dashboards.
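
The enforcement logic above can be sketched as below. The `ERR_AIRGAP_STALE` code and the 604800-second default come from this document; the function names and the remediation message shape are assumptions.

```python
from datetime import datetime, timezone

# AirGapPolicies:FreshnessThresholdSeconds default (7 days).
DEFAULT_THRESHOLD_SECONDS = 604_800

def staleness_seconds(time_anchor_iso: str, now: datetime) -> int:
    """Age of the newest imported data, derived from the bundle timeAnchor."""
    anchor = datetime.fromisoformat(time_anchor_iso.replace("Z", "+00:00"))
    return int((now - anchor).total_seconds())

def check_export(time_anchor_iso: str, now: datetime,
                 threshold: int = DEFAULT_THRESHOLD_SECONDS) -> None:
    """Raise ERR_AIRGAP_STALE when the last bundle exceeds the freshness SLA."""
    age = staleness_seconds(time_anchor_iso, now)
    if age > threshold:
        raise RuntimeError(
            f"ERR_AIRGAP_STALE: imported data is {age}s old "
            f"(threshold {threshold}s); import a fresher mirror bundle"
        )
```

In the real service the remediation message additionally references `bundleId` and `importOperator` so operators know which import to refresh.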
## 5. Evidence snapshots
- Evidence bundles (`airgap.evidenceSnapshot`) reference portable DSSE packages stored in Evidence Locker (`bundleUri` like `file://offline/evidence/<bundleId>.tar`).
- CLI command `stella ledger evidence link` attaches evidence snapshots to findings after bundle generation; ledger event records both DSSE digest and expiration.
- Timeline entries and Console detail views display “Evidence snapshot available” with download instructions suited for sealed environments.
## 6. Timeline events (LEDGER-AIRGAP-58-001)
- New derived view `timeline_airgap_impacts` emits JSON objects such as:
```json
{
"tenant": "tenant-a",
"bundleId": "bundle-sha256:…",
"newFindings": 42,
"resolvedFindings": 18,
"criticalDelta": +5,
"timeAnchor": "2025-10-30T11:00:00Z",
"sealedMode": true
}
```
- Console + Notify subscribe to `ledger.airgap.timeline` events to show sealed-mode summaries.
## 7. Offline kit considerations
- Include bundle provenance schema, staleness policy config, CLI scripts (`stella airgap bundle import`, `stella ledger evidence link`), and sample manifests.
- Provide validation script `scripts/ledger/validate-airgap-bundle.sh` verifying manifest signatures, timestamps, and ledger enrichment before ingest.
- Document sealed-mode toggles ensuring no external egress occurs when importing bundles.
---
*Draft 2025-11-13 for LEDGER-AIRGAP-56/57/58 planning.*

View File

@@ -0,0 +1,129 @@
# Findings Ledger Deployment & Operations Guide
> **Applies to:** `StellaOps.Findings.Ledger` writer + projector services (Sprint 120).
> **Audience:** Platform/DevOps engineers bringing up Findings Ledger across dev/stage/prod and air-gapped sites.
## 1. Prerequisites
| Component | Requirement |
| --- | --- |
| Database | PostgreSQL 14+ with `citext`, `uuid-ossp`, `pgcrypto`, and `pg_partman`. Provision dedicated database/user per environment. |
| Storage | Minimum 200 GB SSD per production environment (ledger + projection + Merkle tables). |
| TLS & identity | Authority reachable for service-to-service JWTs; mTLS optional but recommended. |
| Secrets | Store DB connection string, encryption keys (`LEDGER__ATTACHMENTS__ENCRYPTIONKEY`), signing credentials for Merkle anchoring in secrets manager. |
| Observability | OTLP collector endpoint (or Loki/Prometheus endpoints) configured; see `docs/modules/findings-ledger/observability.md`. |
## 2. Docker Compose deployment
1. **Create env files**
```bash
cp deploy/compose/env/ledger.env.example ledger.env
cp etc/secrets/ledger.postgres.secret.example ledger.postgres.env
# Populate LEDGER__DB__CONNECTIONSTRING, LEDGER__ATTACHMENTS__ENCRYPTIONKEY, etc.
```
2. **Add ledger service overlay** (append to the Compose file in use, e.g. `docker-compose.prod.yaml`):
```yaml
services:
findings-ledger:
image: stellaops/findings-ledger:${STELLA_VERSION:-2025.11.0}
restart: unless-stopped
env_file:
- ledger.env
- ledger.postgres.env
environment:
ASPNETCORE_URLS: http://0.0.0.0:8080
LEDGER__DB__CONNECTIONSTRING: ${LEDGER__DB__CONNECTIONSTRING}
LEDGER__OBSERVABILITY__ENABLED: "true"
LEDGER__MERKLE__ANCHORINTERVAL: "00:05:00"
ports:
- "8188:8080"
depends_on:
- postgres
volumes:
- ./etc/ledger/appsettings.json:/app/appsettings.json:ro
```
3. **Run migrations then start services**
```bash
dotnet run --project src/Findings/StellaOps.Findings.Ledger.Migrations \
-- --connection "$LEDGER__DB__CONNECTIONSTRING"
docker compose --env-file ledger.env --env-file ledger.postgres.env \
-f deploy/compose/docker-compose.prod.yaml up -d findings-ledger
```
4. **Smoke test**
```bash
curl -sf http://localhost:8188/health/ready
curl -sf http://localhost:8188/metrics | grep ledger_write_latency_seconds
```
## 3. Helm deployment
1. **Create secret**
```bash
kubectl create secret generic findings-ledger-secrets \
--from-literal=LEDGER__DB__CONNECTIONSTRING="$CONN_STRING" \
--from-literal=LEDGER__ATTACHMENTS__ENCRYPTIONKEY="$ENC_KEY" \
--dry-run=client -o yaml | kubectl apply -f -
```
2. **Helm values excerpt**
```yaml
services:
findingsLedger:
enabled: true
image:
repository: stellaops/findings-ledger
tag: 2025.11.0
envFromSecrets:
- name: findings-ledger-secrets
env:
LEDGER__OBSERVABILITY__ENABLED: "true"
LEDGER__MERKLE__ANCHORINTERVAL: "00:05:00"
resources:
requests: { cpu: "500m", memory: "1Gi" }
limits: { cpu: "2", memory: "4Gi" }
probes:
readinessPath: /health/ready
livenessPath: /health/live
```
3. **Install/upgrade**
```bash
helm upgrade --install stellaops deploy/helm/stellaops \
-f deploy/helm/stellaops/values-prod.yaml
```
4. **Verify**
```bash
kubectl logs deploy/stellaops-findings-ledger | grep "Ledger started"
kubectl port-forward svc/stellaops-findings-ledger 8080 &
curl -sf http://127.0.0.1:8080/metrics | head
```
## 4. Backups & restores
| Task | Command / guidance |
| --- | --- |
| Online backup | `pg_dump -Fc --dbname="$LEDGER_DB" --file ledger-$(date -u +%Y%m%d).dump` (daily full dumps; continuous WAL archiving covers the intervals between dumps). |
| Point-in-time recovery | Enable WAL archiving; document target `recovery_target_time`. |
| Projection rebuild | After restore, run `dotnet run --project tools/LedgerReplayHarness -- --connection "$LEDGER_DB" --tenant all` to regenerate projections and verify hashes. |
| Evidence bundles | Store Merkle root anchors + replay DSSE bundles alongside DB backups for audit parity. |
## 5. Offline / air-gapped workflow
- Use `stella ledger observability snapshot --out offline/ledger/metrics.tar.gz` before exporting Offline Kits. Include:
- `ledger_write_latency_seconds` summaries
- `ledger_merkle_anchor_duration_seconds` histogram
- Latest `ledger_merkle_roots` rows (export via `psql \copy`)
- Package ledger service binaries + migrations using `ops/offline-kit/build_offline_kit.py --include ledger`.
- Document sealed-mode restrictions: disable outbound attachments unless egress policy allows Evidence Locker endpoints; set `LEDGER__ATTACHMENTS__ALLOWEGRESS=false`.
## 6. Post-deploy checklist
- [ ] Health + metrics endpoints respond.
- [ ] Merkle anchors writing to `ledger_merkle_roots`.
- [ ] Projection lag < 30s (`ledger_projection_lag_seconds`).
- [ ] Grafana dashboards imported under “Findings Ledger”.
- [ ] Backups scheduled + restore playbook tested.
- [ ] Offline snapshot taken (air-gapped sites).
---
*Draft prepared 2025-11-13 for LEDGER-29-009/LEDGER-AIRGAP-56-001 planning. Update once Compose/Helm overlays are merged.*

View File

@@ -0,0 +1,45 @@
# Implementation Plan — Findings Ledger (Sprint 120)
## Phase 1 Observability baselines (LEDGER-29-007)
- Instrument writer/projector with metrics listed in `observability.md` (`ledger_write_latency_seconds`, `ledger_events_total`, `ledger_projection_lag_seconds`, etc.).
- Emit structured logs (Serilog JSON) including chain/sequence/hash metadata.
- Wire OTLP exporters, ensure `/metrics` endpoint exposes histogram buckets with exemplars.
- Publish Grafana dashboards + alert rules (Policy SLO pack).
- Deliver doc updates + sample Grafana JSON in repo (`docs/observability/dashboards/findings-ledger/`).
## Phase 2 Determinism harness (LEDGER-29-008)
- Finalize NDJSON fixtures for ≥5M findings/tenant (per tenant/test scenario).
- Implement `tools/LedgerReplayHarness` CLI as specified in `replay-harness.md`.
- Add GitHub/Gitea pipeline job(s) running nightly (1M) + weekly (5M) harness plus DSSE signing.
- Capture CPU/memory/latency metrics and commit signed reports for validation.
- Provide runbook for QA + Ops to rerun harness in their environments.
## Phase 3 Deployment & backup collateral (LEDGER-29-009)
- Integrate ledger service into Compose (`docker-compose.prod.yaml`) and Helm values.
- Automate PostgreSQL migrations (DatabaseMigrator invocation pre-start).
- Document backup cadence (pg_dump + WAL archiving) and projection rebuild process (call harness).
- Ensure Offline Kit packaging pulls binaries, migrations, harness, and default dashboards.
## Phase 4 Provenance & air-gap extensions
- LEDGER-34-101: ingest orchestrator run export metadata, index by artifact hash, expose audit endpoint.
- LEDGER-AIRGAP-56/57/58: extend ledger events to capture bundle provenance, staleness metrics, timeline events.
- LEDGER-ATTEST-73-001: store attestation pointers (DSSE IDs, Rekor metadata) for explainability.
- For each extension, update schema doc + workflow inference doc to describe newly recorded fields and tenant-safe defaults.
## Dependencies & sequencing
1. AdvisoryAI Sprint 110.A completion (raw findings parity).
2. Observability schema approval (Nov 15) to unblock Phase 1 instrumentation.
3. QA lab capacity for 5M replay (Nov 18 checkpoint).
4. DevOps review of Compose/Helm overlays (Nov 20).
5. Orchestrator export schema freeze (Nov 25) for provenance linkage.
## Deliverables checklist
- [ ] Metrics/logging/tracing implementation merged, dashboards exported.
- [ ] Harness CLI + fixtures + signed reports committed.
- [ ] Compose/Helm overlays + backup/restore runbooks validated.
- [ ] Air-gap provenance fields documented + implemented.
- [ ] Sprint tracker and release notes updated after each phase.
---
*Draft: 2025-11-13. Update when sequencing or dependencies change.*

View File

@@ -0,0 +1,65 @@
# Findings Ledger Observability Profile (Sprint 120)
> **Audience:** Findings Ledger Guild · Observability Guild · DevOps · AirGap Controller Guild
> **Scope:** Metrics, logs, traces, dashboards, and alert contracts required by LEDGER-29-007/008/009. Complements the schema spec and workflow docs.
## 1. Telemetry stack & conventions
- **Export path:** .NET OpenTelemetry SDK → OTLP → shared collector → Prometheus/Tempo/Loki. Enable via `observability.enabled=true` in `appsettings`.
- **Namespace prefix:** `ledger.*` for metrics, `Ledger.*` for logs/traces. Labels follow `tenant`, `chain`, `policy`, `status`, `reason`, `anchor`.
- **Time provenance:** All timestamps emitted in UTC ISO-8601. When metrics/logs include monotonic durations they must derive from `TimeProvider`.
## 2. Metrics
| Metric | Type | Labels | Description / target |
| --- | --- | --- | --- |
| `ledger_write_latency_seconds` | Histogram | `tenant`, `event_type` | End-to-end append latency (API ingress → persisted). P95 ≤120ms. |
| `ledger_events_total` | Counter | `tenant`, `event_type`, `source` (`policy`, `workflow`, `orchestrator`) | Incremented per committed event. Mirrors Merkle leaf count. |
| `ledger_ingest_backlog_events` | Gauge | `tenant` | Number of events buffered in the writer queue. Alert when >5000 for 5 min. |
| `ledger_projection_lag_seconds` | Gauge | `tenant` | Wall-clock difference between latest ledger event and projection tail. Target <30s. |
| `ledger_projection_rebuild_seconds` | Histogram | `tenant` | Duration of replay/rebuild operations triggered by LEDGER-29-008 harness. |
| `ledger_merkle_anchor_duration_seconds` | Histogram | `tenant` | Time to batch + anchor events. Target <60s per 10k events. |
| `ledger_merkle_anchor_failures_total` | Counter | `tenant`, `reason` (`db`, `signing`, `network`) | Alerts at >0 within 15 min. |
| `ledger_attachments_encryption_failures_total` | Counter | `tenant`, `stage` (`encrypt`, `sign`, `upload`) | Ensures secure attachment pipeline stays healthy. |
| `ledger_db_connections_active` | Gauge | `role` (`writer`, `projector`) | Helps tune pool size. |
| `ledger_app_version_info` | Gauge | `version`, `git_sha` | Static metric for fleet observability. |
### Derived dashboards
- **Writer health:** `ledger_write_latency_seconds` (P50/P95/P99), backlog gauge, event throughput.
- **Projection health:** `ledger_projection_lag_seconds`, rebuild durations, conflict counts (from logs).
- **Anchoring:** Anchor duration histogram, failure counter, root hash timeline.
## 3. Logs & traces
- **Log structure:** Serilog JSON with fields `tenant`, `chainId`, `sequence`, `eventId`, `eventType`, `actorId`, `policyVersion`, `hash`, `merkleRoot`.
- **Log levels:** `Information` for success summaries (sampled), `Warning` for retried operations, `Error` for failed writes/anchors.
- **Correlation:** Each API request includes `requestId` + `traceId` logged with events. Projector logs capture `replayId` and `rebuildReason`.
- **Secrets:** Ensure `event_body` is never logged; log only metadata/hashes.
## 4. Alerts
| Alert | Condition | Response |
| --- | --- | --- |
| **LedgerWriteSLA** | `ledger_write_latency_seconds` P95 > 0.12s for 3 intervals | Check DB contention, review queue backlog, scale writer. |
| **LedgerBacklogGrowing** | `ledger_ingest_backlog_events` > 5000 for 5 min | Inspect upstream policy runs; ensure the projector is keeping up. |
| **ProjectionLag** | `ledger_projection_lag_seconds` > 60s | Trigger rebuild, verify change streams. |
| **AnchorFailure** | `ledger_merkle_anchor_failures_total` increase > 0 | Collect logs, rerun anchor, verify signing service. |
| **AttachmentSecurityError** | `ledger_attachments_encryption_failures_total` increase > 0 | Audit attachments pipeline; check key material and storage endpoints. |
Alerts integrate with Notifier channel `ledger.alerts`. For air-gapped deployments emit to local syslog + CLI incident scripts.
## 5. Testing & determinism harness
- **Replay harness:** CLI `dotnet run --project tools/LedgerReplayHarness` executes deterministic replays at 5M findings/tenant. Metrics emitted: `ledger_projection_rebuild_seconds` with `scenario` label.
- **Property tests:** Seeded tests ensure `ledger_events_total` and Merkle leaf counts match after replay.
- **CI gating:** `LEDGER-29-008` requires harness output uploaded as signed JSON (`harness-report.json` + DSSE) and referenced in sprint notes.
## 6. Offline & air-gap guidance
- Collect metrics/log snapshots via `stella ledger observability snapshot --out offline/ledger/metrics.tar.gz`. Include `ledger_write_latency_seconds` summary, anchor root history, and projection lag samples.
- Include default Grafana JSON under `offline/telemetry/dashboards/ledger/*.json`. Dashboards use the metrics above; filter by `tenant`.
- Ensure sealed-mode doc (`docs/modules/findings-ledger/schema.md` §3.3) references `ledger_attachments_encryption_failures_total` so Ops can confirm encryption pipeline health without remote telemetry.
## 7. Runbook pointers
- **Anchoring issues:** Refer to `docs/modules/findings-ledger/schema.md` §3 for root structure, `ops/devops/telemetry/package_offline_bundle.py` for diagnostics.
- **Projection rebuilds:** `docs/modules/findings-ledger/workflow-inference.md` for chain rules; `scripts/ledger/replay.sh` (LEDGER-29-008 deliverable) for deterministic replays.
---
*Draft compiled 2025-11-13 for LEDGER-29-007/008 planning. Update when metrics or alerts change.*

View File

@@ -0,0 +1,86 @@
# Findings Ledger Replay & Determinism Harness (LEDGER-29-008)
> **Audience:** Findings Ledger Guild · QA Guild · Policy Guild
> **Purpose:** Define the reproducible harness for 5M findings/tenant replay tests and determinism validation required by LEDGER-29-008.
## 1. Goals
- Reproduce ledger + projection state from canonical event fixtures with byte-for-byte determinism.
- Stress test writer/projector throughput at ≥5M findings per tenant, capturing CPU/memory/latency profiles.
- Produce signed reports (DSSE) that CI and auditors can review before shipping.
## 2. Architecture
```
Fixtures (.ndjson) → Harness Runner → Ledger Writer API → Postgres Ledger DB
↘ Projector (same DB) ↘ Metrics snapshot
```
- **Fixtures:** `fixtures/ledger/*.ndjson`, sorted by `sequence_no`, containing canonical JSON envelopes with precomputed hashes.
- **Runner:** `tools/LedgerReplayHarness` (console app) feeds events, waits for projector catch-up, and verifies projection hashes.
- **Validation:** After replay, the runner re-reads ledger/projection tables, recomputes hashes, and compares to fixture expectations.
- **Reporting:** Generates `harness-report.json` with metrics (latency histogram, insertion throughput, projection lag) plus a DSSE signature.
## 3. CLI usage
```bash
dotnet run --project tools/LedgerReplayHarness \
-- --fixture fixtures/ledger/tenant-a.ndjson \
--connection "Host=postgres;Username=stellaops;Password=***;Database=findings_ledger" \
--tenant tenant-a \
--maxParallel 8 \
--report out/harness/tenant-a-report.json
```
Options:
| Option | Description |
| --- | --- |
| `--fixture` | Path to NDJSON file (supports multiple). |
| `--connection` | Postgres connection string (writer + projector share). |
| `--tenant` | Tenant identifier; harness ensures partitions exist. |
| `--maxParallel` | Batch concurrency (default 4). |
| `--report` | Output path for report JSON; `.sig` generated alongside. |
| `--metrics-endpoint` | Optional Prometheus scrape URI for live metrics snapshot. |
## 4. Verification steps
1. **Hash validation:** Recompute `event_hash` for each appended event and ensure matches fixture.
2. **Sequence integrity:** Confirm gapless sequences per chain; harness aborts on mismatch.
3. **Projection determinism:** Compare projector-derived `cycle_hash` with expected value from fixture metadata.
4. **Performance:** Capture P50/P95 latencies for `ledger_write_latency_seconds` and ensure targets (<120ms P95) met.
5. **Resource usage:** Sample CPU/memory via `dotnet-counters` or `kubectl top` and store in report.
6. **Merkle root check:** Rebuild Merkle tree from events and ensure root equals database `ledger_merkle_roots` entry.
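
Steps 1 and 2 above (hash validation and sequence integrity) can be sketched as follows. The `sequence_no` and `event_hash` field names follow the fixture description; the exact canonical form the hash is computed over is an assumption here.

```python
import hashlib
import json

def event_hash(body: dict) -> str:
    # Hash the canonical (key-sorted, compact) JSON form of the event body.
    canonical = json.dumps(body, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_chain(events) -> None:
    """Abort on the first sequence gap or hash mismatch, mirroring the
    harness behaviour described above."""
    expected = 1
    for ev in events:
        if ev["sequence_no"] != expected:
            raise ValueError(
                f"sequence gap: expected {expected}, got {ev['sequence_no']}"
            )
        if event_hash(ev["body"]) != ev["event_hash"]:
            raise ValueError(f"hash mismatch at sequence {ev['sequence_no']}")
        expected += 1
```

The real harness additionally compares projection `cycle_hash` values and rebuilds the Merkle tree (steps 3 and 6); those checks follow the same recompute-and-compare pattern.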
## 5. Output report schema
```json
{
"tenant": "tenant-a",
"fixtures": ["fixtures/ledger/tenant-a.ndjson"],
"eventsWritten": 5123456,
"durationSeconds": 1422.4,
"latencyP95Ms": 108.3,
"projectionLagMaxSeconds": 18.2,
"cpuPercentMax": 72.5,
"memoryMbMax": 3580,
"merkleRoot": "3f1a…",
"status": "pass",
"timestamp": "2025-11-13T11:45:00Z"
}
```
The harness writes `harness-report.json` plus `harness-report.json.sig` (DSSE) and `metrics-snapshot.prom` for archival.
## 6. CI integration
- New pipeline job `ledger-replay-harness` runs nightly with reduced dataset (1M findings) to detect regressions quickly.
- Full 5M run executes weekly and before releases; artifacts uploaded to `out/qa/findings-ledger/`.
- Gates: merge blocked if harness `status != pass` or latencies exceed thresholds.
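
The merge gate over `harness-report.json` might look like the sketch below; the latency threshold mirrors the <120 ms P95 target stated earlier, and the function name is illustrative rather than the pipeline's actual step.

```python
def harness_gate(report: dict, max_latency_p95_ms: float = 120.0) -> bool:
    """Return True only when the harness passed and P95 latency met target.

    Field names (`status`, `latencyP95Ms`) match the report schema above.
    """
    if report.get("status") != "pass":
        return False
    return report.get("latencyP95Ms", float("inf")) <= max_latency_p95_ms
```

A CI step would load the signed report, call `harness_gate`, and fail the job on `False`, leaving the DSSE signature check to a separate verification step.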
## 7. Air-gapped execution
- Include fixtures + harness binaries inside Offline Kit under `offline/ledger/replay/`.
- Provide `run-harness.sh` script that sets env vars, executes runner, and exports reports.
- Operators attach signed reports to audit trails, verifying hashed fixtures before import.
---
*Draft prepared 2025-11-13 for LEDGER-29-008. Update when CLI options or thresholds change.*

View File

@@ -3,3 +3,4 @@
| Task ID | State | Notes |
| --- | --- | --- |
| `SCANNER-POLICY-0001` | DONE (2025-11-10) | Ruby component predicates implemented in engine/tests, DSL docs updated, offline kit verifies `seed-data/analyzers/ruby/git-sources`. |
| `DOCS-AIAI-31-006` | DONE (2025-11-13) | Published `docs/policy/assistant-parameters.md` capturing Advisory AI configuration knobs (inference/guardrails/cache/queue) and linked it from the module architecture dossier. |

View File

@@ -263,9 +263,10 @@ The emitted `buildId` metadata is preserved in component hashes, diff payloads,
### 5.6 DSSE attestation (via Signer/Attestor)
* WebService constructs **predicate** with `image_digest`, `stellaops_version`, `license_id`, `policy_digest?` (when emitting **final reports**), timestamps.
* Calls **Signer** (requires **OpTok + PoE**); Signer verifies **entitlement + scanner image integrity** and returns **DSSE bundle**.
* **Attestor** logs to **Rekor v2**; returns `{uuid,index,proof}` → stored in `artifacts.rekor`.
* Operator enablement runbooks (toggles, env-var map, rollout guidance) live in [`operations/dsse-rekor-operator-guide.md`](operations/dsse-rekor-operator-guide.md) per SCANNER-ENG-0015.
---

View File

@@ -40,35 +40,49 @@ Surface.Env exposes `ISurfaceEnvironment` which returns an immutable `SurfaceEnv
| Variable | Description | Default | Notes |
|----------|-------------|---------|-------|
| `SCANNER_SURFACE_FS_ENDPOINT` | Base URI for Surface.FS / RustFS / S3-compatible store. | _required_ | Throws `SurfaceEnvironmentException` when `RequireSurfaceEndpoint = true`. When disabled (tests), builder falls back to `https://surface.invalid` so validation can fail fast. Also binds `Surface:Fs:Endpoint` from `IConfiguration`. |
| `SCANNER_SURFACE_FS_BUCKET` | Bucket/container used for manifests and artefacts. | `surface-cache` | Must be unique per tenant; validators enforce a non-empty value. |
| `SCANNER_SURFACE_FS_REGION` | Optional region for S3-compatible stores. | `null` | Needed only when the backing store requires it (AWS/GCS). |
| `SCANNER_SURFACE_CACHE_ROOT` | Local directory for warm caches. | `<temp>/stellaops/surface` | Directory is created if missing. Override to `/var/lib/stellaops/surface` (or another fast SSD) in production. |
| `SCANNER_SURFACE_CACHE_QUOTA_MB` | Soft limit for on-disk cache usage. | `4096` | Enforced range 64–262144 MB; validation emits `SURFACE_ENV_CACHE_QUOTA_INVALID` outside the range. |
| `SCANNER_SURFACE_PREFETCH_ENABLED` | Enables manifest prefetch threads. | `false` | Workers honour this before analyzer execution. |
| `SCANNER_SURFACE_TENANT` | Tenant namespace used by cache + secret resolvers. | `TenantResolver(...)` or `"default"` | Default resolver may pull from Authority claims; override via env for multi-tenant pools. |
| `SCANNER_SURFACE_FEATURES` | Comma-separated feature switches. | `""` | Compared against `SurfaceEnvironmentOptions.KnownFeatureFlags`; unknown flags raise warnings. |
| `SCANNER_SURFACE_TLS_CERT_PATH` | Path to PEM/PKCS#12 file for client auth. | `null` | When present, `SurfaceEnvironmentBuilder` loads the certificate into `SurfaceTlsConfiguration`. |
| `SCANNER_SURFACE_TLS_KEY_PATH` | Optional private-key path when cert/key are stored separately. | `null` | Stored in `SurfaceTlsConfiguration` for hosts that need to hydrate the key themselves. |
### 3.2 Secrets provider keys
| Variable | Description | Notes |
|----------|-------------|-------|
| `SCANNER_SURFACE_SECRETS_PROVIDER` | Provider ID (`kubernetes`, `file`, `inline`, future back-ends). | Defaults to `kubernetes`; validators reject unknown values via `SURFACE_SECRET_PROVIDER_UNKNOWN`. |
| `SCANNER_SURFACE_SECRETS_ROOT` | Path or base namespace for the provider. | Required for the `file` provider (e.g., `/etc/stellaops/secrets`). |
| `SCANNER_SURFACE_SECRETS_NAMESPACE` | Kubernetes namespace used by the secrets provider. | Mandatory when `provider = kubernetes`. |
| `SCANNER_SURFACE_SECRETS_FALLBACK_PROVIDER` | Optional secondary provider ID. | Enables tiered lookups (e.g., `kubernetes``inline`) without changing code. |
| `SCANNER_SURFACE_SECRETS_ALLOW_INLINE` | Allows returning inline secrets (useful for tests). | Defaults to `false`; production deployments should keep this disabled. |
| `SCANNER_SURFACE_SECRETS_TENANT` | Tenant override for secret lookups. | Defaults to `SCANNER_SURFACE_TENANT` or the tenant resolver result. |
### 3.3 Component-specific prefixes
`SurfaceEnvironmentOptions.Prefixes` controls the order in which suffixes are probed. Every suffix listed above is combined with each prefix (e.g., `SCANNER_SURFACE_FS_ENDPOINT`, `ZASTAVA_SURFACE_FS_ENDPOINT`) and finally the bare suffix (`SURFACE_FS_ENDPOINT`). Configure prefixes per host so local overrides win but global scanner defaults remain available:
| Component | Suggested prefixes (first match wins) | Notes |
|-----------|---------------------------------------|-------|
| Scanner.Worker / WebService | `SCANNER` | Default already added by `AddSurfaceEnvironment`. |
| Zastava Observer/Webhook (planned) | `ZASTAVA`, `SCANNER` | Call `options.AddPrefix("ZASTAVA")` before relying on `ZASTAVA_*` overrides. |
| Future CLI / BuildX plug-ins | `CLI`, `SCANNER` | Allows per-user overrides without breaking shared env files. |
This approach means operators can define a single env file (`SCANNER_*`) and override only the handful of settings that diverge for a specific component by introducing an additional prefix.
### 3.4 Configuration precedence
The builder resolves every suffix using the following precedence:

1. Environment variables using the configured prefixes (e.g., `ZASTAVA_SURFACE_FS_ENDPOINT`, then `SCANNER_SURFACE_FS_ENDPOINT`, then the bare `SURFACE_FS_ENDPOINT`).
2. Configuration values under the `Surface:*` section (for example `Surface:Fs:Endpoint`, `Surface:Cache:Root` in `appsettings.json` or Helm values).
3. Hard-coded defaults baked into `SurfaceEnvironmentBuilder` (temporary directory, `surface-cache` bucket, etc.).
`SurfaceEnvironmentOptions.RequireSurfaceEndpoint` controls whether a missing endpoint results in an exception (default: `true`). Other values fall back to the default listed in §3.1/3.2 and are further validated by the Surface.Validation pipeline.
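
The prefix and precedence rules above can be illustrated with a shared env file plus a single per-component override. This is a sketch: the values and the Zastava prefix ordering (`ZASTAVA` before `SCANNER`) are illustrative assumptions, not shipped defaults.

```bash
# Shared scanner env file, sourced by every component (lowest env layer).
SCANNER_SURFACE_FS_ENDPOINT=https://surface-cache.svc.cluster.local
SCANNER_SURFACE_FS_BUCKET=surface-cache
SCANNER_SURFACE_CACHE_QUOTA_MB=8192

# A Zastava pod configured with prefixes ["ZASTAVA", "SCANNER"] overrides
# only the cache root; all other suffixes fall through to SCANNER_* values.
ZASTAVA_SURFACE_CACHE_ROOT=/var/lib/stellaops/zastava-surface
```

With this layout, changing a fleet-wide setting means editing one file; component drift stays limited to the explicitly prefixed keys.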
## 4. API Surface
@@ -79,65 +93,99 @@ public interface ISurfaceEnvironment
    IReadOnlyDictionary<string, string> RawVariables { get; }
}
public sealed record SurfaceEnvironmentSettings(
    Uri SurfaceFsEndpoint,
    string SurfaceFsBucket,
    string? SurfaceFsRegion,
    DirectoryInfo CacheRoot,
    int CacheQuotaMegabytes,
    bool PrefetchEnabled,
    IReadOnlyCollection<string> FeatureFlags,
    SurfaceSecretsConfiguration Secrets,
    string Tenant,
    SurfaceTlsConfiguration Tls)
{
    public DateTimeOffset CreatedAtUtc { get; init; }
}
public sealed record SurfaceSecretsConfiguration(
    string Provider,
    string Tenant,
    string? Root,
    string? Namespace,
    string? FallbackProvider,
    bool AllowInline);

public sealed record SurfaceTlsConfiguration(
    string? CertificatePath,
    string? PrivateKeyPath,
    X509Certificate2Collection? ClientCertificates);
```
`ISurfaceEnvironment.RawVariables` captures the exact env/config keys that produced the snapshot so operators can export them in diagnostics bundles.
`SurfaceEnvironmentOptions` configures how the snapshot is built:
* `ComponentName` – used in logs/validation output.
* `Prefixes` – ordered list of env prefixes (see §3.3). Defaults to `["SCANNER"]`.
* `RequireSurfaceEndpoint` – throw when no endpoint is provided (default `true`).
* `TenantResolver` – delegate invoked when `SCANNER_SURFACE_TENANT` is absent.
* `KnownFeatureFlags` – recognised feature switches; unexpected values raise warnings.
Example registration:
```csharp
builder.Services.AddSurfaceEnvironment(options =>
{
options.ComponentName = "Scanner.Worker";
options.AddPrefix("ZASTAVA"); // optional future override
options.KnownFeatureFlags.Add("validation");
options.TenantResolver = sp => sp.GetRequiredService<ITenantContext>().TenantId;
});
```
Consumers access `ISurfaceEnvironment.Settings` and pass the record into Surface.FS, Surface.Secrets, cache, and validation helpers. The interface memoises results so repeated access is cheap.
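
A minimal consumption sketch, using only the record members shown above (the probe class itself is illustrative, not part of the library):

```csharp
public sealed class SurfaceStartupProbe
{
    private readonly ISurfaceEnvironment _environment;
    private readonly ILogger<SurfaceStartupProbe> _logger;

    public SurfaceStartupProbe(ISurfaceEnvironment environment, ILogger<SurfaceStartupProbe> logger)
    {
        _environment = environment;
        _logger = logger;
    }

    public void LogResolvedSettings()
    {
        // Settings is a memoised snapshot; repeated access does not re-read env vars.
        var settings = _environment.Settings;
        _logger.LogInformation(
            "Surface endpoint {Endpoint}, bucket {Bucket}, cache root {CacheRoot}, tenant {Tenant}",
            settings.SurfaceFsEndpoint,
            settings.SurfaceFsBucket,
            settings.CacheRoot.FullName,
            settings.Tenant);
    }
}
```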
## 5. Validation
`SurfaceEnvironmentBuilder` only throws `SurfaceEnvironmentException` for malformed inputs (non-integer quota, invalid URI, missing required variable when `RequireSurfaceEndpoint = true`). The richer validation pipeline lives in `StellaOps.Scanner.Surface.Validation` and runs via `services.AddSurfaceValidation()`:

1. **SurfaceEndpointValidator** checks for a non-placeholder endpoint and bucket (`SURFACE_ENV_MISSING_ENDPOINT`, `SURFACE_FS_BUCKET_MISSING`).
2. **SurfaceCacheValidator** verifies the cache directory exists/is writable and that the quota is positive (`SURFACE_ENV_CACHE_DIR_UNWRITABLE`, `SURFACE_ENV_CACHE_QUOTA_INVALID`).
3. **SurfaceSecretsValidator** validates provider names, required namespace/root fields, and tenant presence (`SURFACE_SECRET_PROVIDER_UNKNOWN`, `SURFACE_SECRET_CONFIGURATION_MISSING`, `SURFACE_ENV_TENANT_MISSING`).

Validators emit `SurfaceValidationIssue` instances with codes defined in `SurfaceValidationIssueCodes`. `LoggingSurfaceValidationReporter` writes structured log entries (Info/Warning/Error) using the component name, issue code, and remediation hint. Hosts fail startup if any issue has `Error` severity; warnings allow startup but surface actionable hints.
## 6. Integration Guidance
- **Scanner Worker**: register `AddSurfaceEnvironment`, `AddSurfaceValidation`, `AddSurfaceFileCache`, and `AddSurfaceSecrets` before analyzers and other services (see `src/Scanner/StellaOps.Scanner.Worker/Program.cs`). `SurfaceCacheOptionsConfigurator` already binds the cache root from `ISurfaceEnvironment`.
- **Scanner WebService**: identical wiring, plus `SurfacePointerService`/`ScannerSurfaceSecretConfigurator` reuse the resolved settings (`Program.cs` demonstrates the pattern).
- **Zastava Observer/Webhook**: will reuse the same helper once the service adds `AddSurfaceEnvironment(options => options.AddPrefix("ZASTAVA"))` so per-component overrides function without diverging defaults.
- **Scheduler / CLI / BuildX (future)**: treat `ISurfaceEnvironment` as read-only input; secret lookup, cache plumbing, and validation happen before any queue/enqueue work.
Readiness probes should invoke `ISurfaceValidatorRunner` (registered by `AddSurfaceValidation`) and fail the endpoint when any issue is returned. The Scanner Worker/WebService hosted services already run the validators on startup; other consumers should follow the same pattern.

### 6.1 Validation output

`LoggingSurfaceValidationReporter` produces log entries that include:

```
Surface validation issue for component Scanner.Worker: SURFACE_ENV_MISSING_ENDPOINT - Surface FS endpoint is missing or invalid. Hint: Set SCANNER_SURFACE_FS_ENDPOINT to the RustFS/S3 endpoint.
```
Treat `SurfaceValidationIssueCodes.*` with severity `Error` as hard blockers (readiness must fail). `Warning` entries flag configuration drift (for example, missing namespaces) but allow startup so staging/offline runs can proceed. The codes appear in both the structured log state and the reporter payload, making it easy to alert on them.
## 7. Security & Observability
- Surface.Env never logs raw values; only suffix names and issue codes appear in logs. `RawVariables` is intended for diagnostics bundles and should be treated as sensitive metadata.
- TLS certificates are loaded into memory and not re-serialised; only the configured paths are exposed to downstream services.
- To emit metrics, register a custom `ISurfaceValidationReporter` (e.g., wrapping Prometheus counters) in addition to the logging reporter.
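
A metrics-emitting reporter can be sketched with `System.Diagnostics.Metrics`. This is an assumption-laden sketch: the exact `ISurfaceValidationReporter` member shape and the `SurfaceValidationIssue` properties should be checked against `StellaOps.Scanner.Surface.Validation` before use.

```csharp
using System.Diagnostics.Metrics;

public sealed class MeterSurfaceValidationReporter : ISurfaceValidationReporter
{
    private static readonly Meter Meter = new("StellaOps.Scanner.Surface");
    private static readonly Counter<long> Issues =
        Meter.CreateCounter<long>("surface_validation_issues_total");

    public void Report(SurfaceValidationIssue issue)
    {
        // Tag by code/severity only; never attach resolved configuration values.
        Issues.Add(1,
            new KeyValuePair<string, object?>("code", issue.Code),
            new KeyValuePair<string, object?>("severity", issue.Severity.ToString()));
    }
}
```

Register it alongside the logging reporter so structured logs and counters stay in sync.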
## 8. Offline & Air-Gap Support
- Defaults assume no public network access; point `SCANNER_SURFACE_FS_ENDPOINT` at an internal RustFS/S3 mirror.
- Offline bundles must capture an env file (Ops track this under the Offline Kit tasks) so operators can seed `SCANNER_*` values before first boot.
- Keep `docs/modules/devops/runbooks/zastava-deployment.md` in sync so Zastava deployments reuse the same env contract.
## 9. Testing Strategy


@@ -46,6 +46,17 @@
- Export Center profile with `attestations.bundle=true`.
- Rekor log snapshots mirrored (ORAS bundle or rsync of `/var/log/rekor`) for disconnected verification.
### 3.1 Configuration & env-var map
| Service | Key(s) | Env override | Notes |
|---------|--------|--------------|-------|
| Scanner WebService / Worker | `scanner.attestation.signerEndpoint`<br>`scanner.attestation.attestorEndpoint`<br>`scanner.attestation.requireDsse`<br>`scanner.attestation.uploadArtifacts` | `SCANNER__ATTESTATION__SIGNERENDPOINT`<br>`SCANNER__ATTESTATION__ATTESTORENDPOINT`<br>`SCANNER__ATTESTATION__REQUIREDSSE`<br>`SCANNER__ATTESTATION__UPLOADARTIFACTS` | Worker/WebService share the same config. Set `requireDsse=false` while observing, then flip to `true`. `uploadArtifacts=true` stores DSSE+Rekor bundles next to SBOM artefacts. |
| Signer | `signer.attestorEndpoint`<br>`signer.keyProvider`<br>`signer.fulcio.endpoint` | `SIGNER__ATTESTORENDPOINT` etc. | `attestorEndpoint` lets Signer push DSSE payloads downstream; key provider controls BYO KMS/HSM vs Fulcio. |
| Attestor | `attestor.rekor.api`<br>`attestor.rekor.publicKeyPath`<br>`attestor.rekor.offlineMirrorPath`<br>`attestor.retry.maxAttempts` | `ATTESTOR__REKOR__API`<br>`ATTESTOR__REKOR__PUBLICKEYPATH`<br>`ATTESTOR__REKOR__OFFLINEMIRRORPATH`<br>`ATTESTOR__RETRY__MAXATTEMPTS` | Mirror path points at the local snapshot directory used in sealed/air-gapped modes. |
| Export Center | `exportProfiles.<name>.includeAttestations`<br>`exportProfiles.<name>.includeRekorProofs` | `EXPORTCENTER__EXPORTPROFILES__SECURE-DEFAULT__INCLUDEATTESTATIONS` etc. | Use profiles to gate which bundles include DSSE/Rekor data; keep a “secure-default” profile enabled across tiers. |
> **Tip:** Every key above follows the ASP.NET Core double-underscore pattern. For Compose/Helm, add environment variables directly; for Offline Kit overrides, drop `appsettings.Offline.json` with the same sections.
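
As a concrete example of the double-underscore mapping, a Compose override enabling DSSE in observe-only mode might look like this sketch (the service name is illustrative; the keys follow the table above):

```yaml
services:
  scanner-web:
    environment:
      SCANNER__ATTESTATION__SIGNERENDPOINT: "https://signer.internal:8443"
      SCANNER__ATTESTATION__ATTESTORENDPOINT: "https://attestor.internal:8444"
      SCANNER__ATTESTATION__REQUIREDSSE: "false"   # observe first, flip to true at enforcement
      SCANNER__ATTESTATION__UPLOADARTIFACTS: "true"
```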
---
## 4. Enablement workflow
@@ -161,6 +172,38 @@ Roll forward per environment; keep the previous phases toggles for hot rollba
---
## 8. Operational runbook & SLO guardrails
| Step | Owner | Target / Notes |
|------|-------|----------------|
| Health gate | Ops/SRE | `attestor_rekor_success_total` ≥ 99.5% over a rolling hour, `rekor_inclusion_latency_p95` ≤ 30s. Alert when retries spike or queue depth > 50. |
| Cutover dry-run | Scanner team | Set `SCANNER__ATTESTATION__REQUIREDSSE=false`, watch metrics + Attestor queue for 24h, capture Rekor proofs per environment. |
| Enforce | Platform | Flip `requireDsse=true`, promote Policy rule from `warn` → `deny`, notify AppSec + release managers. |
| Audit proof pack | Export Center | Run secure profile nightly; confirm `attestations/` + `rekor/` trees attached to Offline Kit. Store bundle hash in Evidence Locker. |
| Verification spot-check | AppSec | Weekly `stellaops-cli attest verify --bundle latest.tar --rekor-key rekor.pub --json` saved to ticket for auditors. |
| Rollback | Ops/SRE | If Rekor outage exceeds 15 min: set `requireDsse=false`, keep policy in `warn`, purge Attestor queue once log recovers, then re-enable. Document the waiver in the sprint log. |
**Dashboards & alerts**
- Grafana panel: Rekor inclusion latency (p50/p95) + Attestor retry rate.
- Alert when `attestationPending=true` events exceed 5 per minute for >5 minutes.
- Logs must include `rekorUuid`, `rekorLogIndex`, `attestationDigest` for SIEM correlation.
**Runbook snippets**
```bash
# test Rekor health + key mismatch
rekor-cli loginfo --rekor_server "${ATTESTOR__REKOR__API}" --format json | jq .rootHash
# replay stranded payloads after outage
stellaops-attestor replay --since "2025-11-13T00:00:00Z" \
--rekor ${ATTESTOR__REKOR__API} --rekor-key /etc/rekor/rekor.pub
# verify a single DSSE file against Rekor proof bundle
stellaops-cli attest verify --envelope artifacts/scan123/attest/sbom.dsse.json \
--rekor-proof artifacts/scan123/rekor/entry.json --rekor-key rekor.pub
```
---
## References
- Gap analysis: `docs/benchmarks/scanner/scanning-gaps-stella-misses-from-competitors.md#dsse-rekor-operator-enablement-trivy-grype-snyk`
@@ -168,4 +211,3 @@ Roll forward per environment; keep the previous phases toggles for hot rollba
- Export Center profiles: `docs/modules/export-center/architecture.md`
- Policy Engine predicates: `docs/modules/policy/architecture.md`
- CLI reference: `docs/09_API_CLI_REFERENCE.md`


@@ -62,11 +62,12 @@ Failures during evaluation are logged with correlation IDs and surfaced through
## 3. Rendering & connectors

- **Template resolution.** The renderer picks the template in this order: action template → channel default template → locale fallback → built-in minimal template. Locale negotiation reduces `en-US` to `en-us`.
- **Helpers & partials.** Exposed helpers mirror the list in [`notifications/templates.md`](templates.md#3-variables-helpers-and-context). Plug-ins may register additional helpers but must remain deterministic and side-effect free.
- **Attestation lifecycle suite.** Sprint 171 introduced dedicated `tmpl-attest-*` templates for verification failures, expiring attestations, key rotations, and transparency anomalies (see [`templates.md` §7](templates.md#7-attestation--signing-lifecycle-templates-notify-attest-74-001)). Rule actions referencing those templates must populate the attestation context fields so channels stay consistent online/offline.
- **Rendering output.** `NotifyDeliveryRendered` captures:
  - `channelType`, `format`, `locale`
  - `title`, `body`, optional `summary`, `textBody`
  - `target` (redacted where necessary)
  - `attachments[]` (safe URLs or references)
  - `bodyHash` (lowercase SHA-256) for audit parity


@@ -21,7 +21,7 @@ Notifications Studio turns raw platform events into concise, tenant-scoped alert
|------------|--------------|----------|
| Rules engine | Declarative matchers for event kinds, severities, namespaces, VEX context, KEV flags, and more. | [`notifications/rules.md`](rules.md) |
| Channel catalog | Slack, Teams, Email, Webhook connectors loaded via restart-time plug-ins; metadata stored without secrets. | [`notifications/architecture.md`](architecture.md) |
| Templates | Locale-aware, deterministic rendering via safe helpers; channel defaults plus tenant-specific overrides, including the attestation lifecycle suite (`tmpl-attest-*`). | [`notifications/templates.md`](templates.md#7-attestation--signing-lifecycle-templates-notify-attest-74-001) |
| Digests | Coalesce bursts into periodic summaries with deterministic IDs and audit trails. | [`notifications/digests.md`](digests.md) |
| Delivery ledger | Tracks rendered payload hashes, attempts, throttles, and outcomes for every action. | [`modules/notify/architecture.md`](../modules/notify/architecture.md#7-data-model-mongo) |
| Ack tokens | DSSE-signed acknowledgement tokens with webhook allowlists and escalation guardrails enforced by Authority. | [`modules/notify/architecture.md`](../modules/notify/architecture.md#81-ack-tokens--escalation-workflows) |
@@ -44,7 +44,7 @@ The Notify WebService fronts worker state with REST APIs used by the UI and CLI.
| Area | Guidance |
|------|----------|
| **Tenancy** | Each rule, channel, template, and delivery belongs to exactly one tenant. Cross-tenant sharing is intentionally unsupported. |
| **Determinism** | Configuration persistence normalises strings and sorts collections. Template rendering produces identical `bodyHash` values when inputs match; attestation events always reference the canonical `tmpl-attest-*` keys documented in the template guide. |
| **Scaling** | Workers scale horizontally; per-tenant rule snapshots are cached and refreshed from Mongo change streams. Redis (or equivalent) guards throttles and locks. |
| **Offline** | Offline Kits include plug-ins, default templates, and seed rules. Operators can edit YAML/JSON manifests before air-gapped deployment. |
| **Security** | Channel secrets use indirection (`secretRef`), Authority-protected OAuth clients secure API access, and delivery payloads are redacted before storage where required. |


@@ -81,11 +81,24 @@ Each rule requires at least one action. Actions are deduplicated and sorted by `
| `throttle` | ISO8601 duration? | Optional throttle TTL (`PT300S`, `PT1H`). Prevents duplicate deliveries when the same idempotency hash appears before expiry. |
| `locale` | string? | BCP-47 tag (stored lower-case). Template lookup falls back to channel locale then `en-us`. |
| `enabled` | bool | Disabled actions skip rendering but remain stored. |
| `metadata` | map<string,string> | Connector-specific hints (priority, layout, etc.). |
### 4.0 Attestation lifecycle templates

Rules targeting attestation/signing events (`attestor.verification.failed`, `attestor.attestation.expiring`, `authority.keys.revoked`, `attestor.transparency.anomaly`) must reference the dedicated template keys documented in [`notifications/templates.md` §7](templates.md#7-attestation--signing-lifecycle-templates-notify-attest-74-001) so payloads remain deterministic across channels and Offline Kits:
| Event kind | Required template key | Notes |
| --- | --- | --- |
| `attestor.verification.failed` | `tmpl-attest-verify-fail` | Include failure code, Rekor UUID/index, last good attestation link. |
| `attestor.attestation.expiring` | `tmpl-attest-expiry-warning` | Surface issued/expires timestamps, time remaining, renewal instructions. |
| `authority.keys.revoked` / `authority.keys.rotated` | `tmpl-attest-key-rotation` | List rotation batch ID, impacted services, remediation steps. |
| `attestor.transparency.anomaly` | `tmpl-attest-transparency-anomaly` | Highlight Rekor/witness metadata and anomaly classification. |
Locale-specific variants keep the same template key while varying `locale`; rule actions shouldn't create ad-hoc templates for these events.
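
A rule action wired to one of these events might look like the following sketch. The `channel` and `template` field names are illustrative assumptions about the action schema; `throttle`, `locale`, `enabled`, and `metadata` follow the action table above.

```json
{
  "channel": "slack:sec-alerts",
  "template": "tmpl-attest-verify-fail",
  "locale": "en-us",
  "throttle": "PT300S",
  "enabled": true,
  "metadata": { "priority": "high" }
}
```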
### 4.1 Evaluation order
1. Verify channel exists and is enabled; disabled channels mark the delivery as `Dropped`.
2. Apply throttle idempotency key: `hash(ruleId|actionId|event.kind|scope.digest|delta.hash|dayBucket)`. Hits are logged as `Throttled`.
3. If the action defines a digest window other than `instant`, append the event to the open window and defer delivery until flush.
4. When delivery proceeds, the renderer resolves the template, locale, and metadata before invoking the connector.
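
The throttle key in step 2 can be sketched as a digest over the pipe-joined fields. The choice of SHA-256 and lowercase hex here is an assumption (mirroring the `bodyHash` convention); the authoritative implementation lives in the Notify worker.

```csharp
using System.Security.Cryptography;
using System.Text;

static string ThrottleKey(string ruleId, string actionId, string eventKind,
    string scopeDigest, string deltaHash, DateOnly dayBucket)
{
    // Join the fields in the documented order, then hash deterministically.
    var material = string.Join('|',
        ruleId, actionId, eventKind, scopeDigest, deltaHash,
        dayBucket.ToString("yyyy-MM-dd"));
    var digest = SHA256.HashData(Encoding.UTF8.GetBytes(material));
    return Convert.ToHexString(digest).ToLowerInvariant();
}
```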


@@ -19,7 +19,7 @@ Templates shape the payload rendered for each channel when a rule action fires.
| Field | Type | Notes |
|-------|------|-------|
| `templateId` | string | Stable identifier (UUID/slug). |
| `tenantId` | string | Must match the tenant header in API calls. |
| `channelType` | [`NotifyChannelType`](../modules/notify/architecture.md#5-channels--connectors-plug-ins) | Determines connector payload envelope. |
| `key` | string | Human-readable key referenced by rules (`tmpl-critical`). |
@@ -109,22 +109,95 @@ When delivering via email, connectors automatically attach a plain-text alternat
---
## 5. Preview and validation

- `POST /channels/{id}/test` accepts an optional `templateId` and sample payload to produce a rendered preview without dispatching the event. Results include channel type, target, title/summary, locale, body hash, and connector metadata.
- UI previews rely on the same API and highlight connector fallbacks (e.g., Teams adaptive card vs. text fallback).
- Offline Kit scenarios can call `/internal/notify/templates/normalize` to ensure bundled templates match the canonical schema before packaging.
---
## 6. Best practices
- Keep channel-specific limits in mind (Slack block/character quotas, Teams adaptive card size, email line length). Lean on digests to summarise long lists.
- Provide locale-specific versions for high-volume tenants; Notify selects the closest locale, falling back to `en-us`.
- Store connector-specific hints (`metadata.layout`, `metadata.emoji`) in template metadata rather than rules when they affect rendering.
- Version template bodies through metadata (e.g., `metadata.revision: "2025-10-28"`) so tenants can track changes over time.
- Run test previews whenever introducing new helpers to confirm body hashes remain stable across environments.
---
## 7. Attestation & signing lifecycle templates (NOTIFY-ATTEST-74-001)
Attestation lifecycle events (verification failures, expiring attestations, key revocations, transparency anomalies) reuse the same structural context so operators can differentiate urgency while reusing channels. Every template **must** surface:
- **Subject** (`payload.subject.digest`, `payload.subject.repository`, `payload.subject.tag`).
- **Attestation metadata** (`payload.attestation.kind`, `payload.attestation.id`, `payload.attestation.issuedAt`, `payload.attestation.expiresAt`).
- **Signer/Key fingerprint** (`payload.signer.kid`, `payload.signer.algorithm`, `payload.signer.rotationId`).
- **Traceability** (`payload.links.console`, `payload.links.rekor`, `payload.links.docs`).
### 7.1 Template keys & channels
| Event | Template key | Required channels | Optional channels | Notes |
| --- | --- | --- | --- | --- |
| Verification failure (`attestor.verification.failed`) | `tmpl-attest-verify-fail` | Slack `sec-alerts`, Email `supply-chain@`, Webhook (Pager/SOC) | Teams `risk-war-room`, Custom SIEM feed | Include failure code, Rekor UUID, last-known good attestation link. |
| Expiring attestation (`attestor.attestation.expiring`) | `tmpl-attest-expiry-warning` | Email summary, Slack reminder | Digest window (daily) | Provide expiration window, renewal instructions, `expiresIn` helper. |
| Key revocation/rotation (`authority.keys.revoked`, `authority.keys.rotated`) | `tmpl-attest-key-rotation` | Email + Webhook | Slack (if SOC watches channel) | Add rotation batch ID, impacted tenants/services, remediation steps. |
| Transparency anomaly (`attestor.transparency.anomaly`) | `tmpl-attest-transparency-anomaly` | Slack high-priority, Webhook, PagerDuty | Email follow-up | Show Rekor index delta, witness ID, anomaly classification, recommended actions. |
Assign these keys when creating templates so rule actions can reference them deterministically (`actions[].template: "tmpl-attest-verify-fail"`).
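An illustrative rule-action fragment showing that binding (a sketch only — the authoritative schema lives in `docs/notifications/rules.md`, and the channel reference format shown here is an assumption):

```json
{
  "match": { "eventKinds": ["attestor.verification.failed"] },
  "actions": [
    {
      "channel": "slack:sec-alerts",
      "template": "tmpl-attest-verify-fail",
      "digest": "instant"
    }
  ]
}
```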
### 7.2 Context helpers
- `attestation_status_badge status`: renders ✅/⚠️/❌ depending on verdict (`valid`, `expiring`, `failed`).
- `expires_in expiresAt now`: returns human-readable duration, constrained to deterministic units (h/d).
- `fingerprint key`: shortens long key IDs/pems, exposing the last 10 characters.
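The helpers are deterministic by design. An illustrative JavaScript sketch of `expires_in` and `fingerprint` — the shipped implementations live in the Notify renderer, so the function names, rounding rules, and ellipsis prefix here are assumptions:

```javascript
// Sketch of the deterministic template helpers; rounding rules are assumptions.
function expiresIn(expiresAt, now) {
  const ms = new Date(expiresAt) - new Date(now);
  const hours = Math.floor(ms / 3600000);
  // Constrain output to deterministic h/d units, per the helper contract.
  return hours >= 24 ? `${Math.floor(hours / 24)}d` : `${hours}h`;
}

function fingerprint(keyId) {
  // Shorten long key IDs, exposing only the last 10 characters.
  return keyId.length <= 10 ? keyId : `…${keyId.slice(-10)}`;
}

console.log(expiresIn("2025-11-15T00:00:00Z", "2025-11-13T00:00:00Z")); // "2d"
console.log(fingerprint("sha256-abcdef0123456789"));                    // "…0123456789"
```

Because both helpers avoid wall-clock reads and locale-dependent formatting, the same event payload always renders to the same body hash.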
### 7.3 Slack sample (verification failure)
```hbs
:rotating_light: {{attestation_status_badge payload.failure.status}} verification failed for `{{payload.subject.digest}}`
Signer: `{{fingerprint payload.signer.kid}}` ({{payload.signer.algorithm}})
Reason: `{{payload.failure.reasonCode}}` — {{payload.failure.reason}}
Last valid attestation: {{link "Console report" payload.links.console}}
Rekor entry: {{link "Transparency log" payload.links.rekor}}
```
### 7.4 Email sample (expiring attestation)
```hbs
<h2>Attestation expiry notice</h2>
<p>The attestation for <code>{{payload.subject.repository}}</code> (digest {{payload.subject.digest}}) expires on <strong>{{payload.attestation.expiresAt}}</strong>.</p>
<ul>
<li>Issued: {{payload.attestation.issuedAt}}</li>
<li>Signer: {{payload.signer.kid}} ({{payload.signer.algorithm}})</li>
<li>Time remaining: {{expires_in payload.attestation.expiresAt event.ts}}</li>
</ul>
<p>Please rotate the attestation before expiry. Reference <a href="{{payload.links.docs}}">renewal steps</a>.</p>
```
### 7.5 Webhook sample (transparency anomaly)
```json
{
"event": "attestor.transparency.anomaly",
"tenantId": "{{event.tenant}}",
"subjectDigest": "{{payload.subject.digest}}",
"rekorIndex": "{{payload.transparency.rekorIndex}}",
"witnessId": "{{payload.transparency.witnessId}}",
"anomaly": "{{payload.transparency.classification}}",
"detailsUrl": "{{payload.links.console}}",
"recommendation": "{{payload.recommendation}}"
}
```
### 7.6 Offline kit guidance
- Bundle these templates (JSON export) under `offline/notifier/templates/attestation/`.
- Baseline English templates for Slack, Email, and Webhook ship in the repository at `offline/notifier/templates/attestation/*.template.json`; copy and localise them per tenant as needed.
- Provide localized variants for `en-us` and `de-de` at minimum; additional locales can be appended per customer.
- Include preview fixtures in Offline Kit smoke tests to guarantee channel render parity when air-gapped.
---
> **Imposed rule reminder:** Work of this type or tasks of this type on this component must also be applied everywhere else it should be applied.

View File

@@ -0,0 +1,110 @@
# Advisory AI Assistant Parameters
_Primary audience: platform operators & policy authors • Updated: 2025-11-13_
This note centralises the tunable knobs that control Advisory AI's planner, retrieval stack, inference clients, and guardrails. All options live under the `AdvisoryAI` configuration section and can be set via `appsettings.*` files or environment variables using ASP.NET Core's double-underscore convention (`ADVISORYAI__Inference__Mode`, etc.).
| Area | Key(s) | Environment variable | Default | Notes |
| --- | --- | --- | --- | --- |
| Inference mode | `AdvisoryAI:Inference:Mode` | `ADVISORYAI__INFERENCE__MODE` | `Local` | `Local` runs the deterministic pipeline only; `Remote` posts sanitized prompts to `Remote.BaseAddress`. |
| Remote base URI | `AdvisoryAI:Inference:Remote:BaseAddress` | `ADVISORYAI__INFERENCE__REMOTE__BASEADDRESS` | — | Required when `Mode=Remote`. HTTPS strongly recommended. |
| Remote API key | `AdvisoryAI:Inference:Remote:ApiKey` | `ADVISORYAI__INFERENCE__REMOTE__APIKEY` | — | Injected as `Authorization: Bearer <key>` when present. |
| Remote timeout | `AdvisoryAI:Inference:Remote:TimeoutSeconds` | `ADVISORYAI__INFERENCE__REMOTE__TIMEOUTSECONDS` | `30` | Failing requests fall back to the sanitized prompt with `inference.fallback_reason=remote_timeout`. |
| Guardrail prompt cap | `AdvisoryAI:Guardrails:MaxPromptLength` | `ADVISORYAI__GUARDRAILS__MAXPROMPTLENGTH` | `16000` | Prompts longer than the cap are blocked with `prompt_too_long`. |
| Guardrail citations | `AdvisoryAI:Guardrails:RequireCitations` | `ADVISORYAI__GUARDRAILS__REQUIRECITATIONS` | `true` | When `true`, at least one citation must accompany every prompt. |
| Guardrail phrase seeds | `AdvisoryAI:Guardrails:BlockedPhrases[]`<br>`AdvisoryAI:Guardrails:BlockedPhraseFile` | `ADVISORYAI__GUARDRAILS__BLOCKEDPHRASES__0`<br>`ADVISORYAI__GUARDRAILS__BLOCKEDPHRASEFILE` | See defaults below | File paths are resolved relative to the content root; phrases are merged, de-duped, and lower-cased. |
| Plan cache TTL | `AdvisoryAI:PlanCache:DefaultTimeToLive`* | `ADVISORYAI__PLANCACHE__DEFAULTTIMETOLIVE` | `00:10:00` | Controls how long cached plans are reused. (`CleanupInterval` defaults to `00:05:00`). |
| Queue capacity | `AdvisoryAI:Queue:Capacity` | `ADVISORYAI__QUEUE__CAPACITY` | `1024` | Upper bound on in-memory tasks when using the default queue. |
| Queue wait interval | `AdvisoryAI:Queue:DequeueWaitInterval` | `ADVISORYAI__QUEUE__DEQUEUEWAITINTERVAL` | `00:00:01` | Back-off between queue polls when empty. |
> \* The plan-cache section is bound via `AddOptions<AdvisoryPlanCacheOptions>()`; override by adding an `AdvisoryAI__PlanCache` block to the host configuration.
---
## 1. Inference knobs & “temperature”
Advisory AI supports two inference modes:
- **Local (default)**: The orchestrator emits deterministic prompts and the worker returns the sanitized prompt verbatim. This mode is offline-friendly and does **not** call any external LLMs. There is no stochastic “temperature” here; the pipeline is purely rule-based.
- **Remote**: Sanitized prompts, citations, and metadata are POSTed to `Remote.BaseAddress + Remote.Endpoint` (default `/v1/inference`). Remote providers control sampling temperature on their side. StellaOps treats remote responses deterministically: we record the provider's `modelId`, token usage, and any metadata they return. If your remote tier exposes a temperature knob, set it there; Advisory AI simply forwards the prompt.
### Remote inference quick sample
```json
{
"AdvisoryAI": {
"Inference": {
"Mode": "Remote",
"Remote": {
"BaseAddress": "https://inference.internal",
"Endpoint": "/v1/inference",
"ApiKey": "${ADVISORYAI_REMOTE_KEY}",
"TimeoutSeconds": 45
}
}
}
}
```
## 2. Guardrail configuration
| Setting | Default | Explanation |
| --- | --- | --- |
| `MaxPromptLength` | 16000 chars | Upper bound enforced after redaction. Increase cautiously—remote providers typically cap prompts at 32k tokens. |
| `RequireCitations` | `true` | Forces each prompt to include at least one citation. Disable only when testing synthetic prompts. |
| `BlockedPhrases[]` | `ignore previous instructions`, `disregard earlier instructions`, `you are now the system`, `override the system prompt`, `please jailbreak` | Inline list merged with the optional file. Comparisons are case-insensitive. |
| `BlockedPhraseFile` | — | Points to a newline-delimited list. Relative paths resolve against the content root (`AdvisoryAI.Hosting` sticks to AppContext base). |
Violations surface in the response metadata (`guardrail.violations[*]`) and increment `advisory_ai_guardrail_blocks_total`. Console consumes the same payload for its ribbon state.
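The merge-and-check behaviour described above can be sketched as follows — the real guardrail lives in the Advisory AI worker, so the function names and violation labels here are illustrative:

```javascript
// Merge inline phrases with an optional file list: trim, lower-case, de-dupe.
function mergeBlockedPhrases(inline, fileLines = []) {
  const all = [...inline, ...fileLines]
    .map(p => p.trim().toLowerCase())
    .filter(p => p.length > 0);
  return [...new Set(all)];
}

// Return guardrail violations for a sanitized prompt (case-insensitive match).
function checkPrompt(prompt, phrases, maxLength = 16000) {
  const violations = [];
  if (prompt.length > maxLength) violations.push("prompt_too_long");
  const lower = prompt.toLowerCase();
  for (const phrase of phrases) {
    if (lower.includes(phrase)) violations.push(`blocked_phrase:${phrase}`);
  }
  return violations;
}

const phrases = mergeBlockedPhrases(
  ["Ignore previous instructions", "please jailbreak"],
  ["ignore previous instructions"] // duplicate from the file is dropped
);
console.log(phrases.length);                            // 2
console.log(checkPrompt("Please JAILBREAK now", phrases)); // one blocked_phrase violation
```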
## 3. Retrieval & ranking weights (per-task)
Each task type (Summary, Conflict, Remediation) inherits the defaults below. Override any value via `AdvisoryAI:Tasks:<TaskType>:<Property>`.
| Task | `StructuredMaxChunks` | `VectorTopK` | `VectorQueries` (default) | `SbomMaxTimelineEntries` | `SbomMaxDependencyPaths` | `IncludeBlastRadius` |
| --- | --- | --- | --- | --- | --- | --- |
| Summary | 25 | 5 | `Summarize key facts`, `What is impacted?` | 10 | 20 | ✔ |
| Conflict | 30 | 6 | `Highlight conflicting statements`, `Where do sources disagree?` | 8 | 15 | ✖ |
| Remediation | 35 | 6 | `Provide remediation steps`, `Outline mitigations and fixes` | 12 | 25 | ✔ |
These knobs act as weighting levers: lower `VectorTopK` emphasises deterministic evidence; higher values favour breadth. `StructuredMaxChunks` bounds how many CSAF/OSV/VEX chunks reach the prompt, keeping token budgets predictable.
## 4. Token budgets
`AdvisoryTaskBudget` holds `PromptTokens` and `CompletionTokens` per task. Defaults:
| Task | Prompt tokens | Completion tokens |
| --- | --- | --- |
| Summary | 2048 | 512 |
| Conflict | 2048 | 512 |
| Remediation | 2048 | 640 |
Override via `AdvisoryAI:Tasks:Summary:Budget:PromptTokens`, etc. The worker records actual consumption in the response metadata (`inference.prompt_tokens`, `inference.completion_tokens`).
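For example, raising the remediation budgets might look like the fragment below (keys mirror the table above; treat the exact shape as a sketch against the current binder):

```json
{
  "AdvisoryAI": {
    "Tasks": {
      "Remediation": {
        "Budget": {
          "PromptTokens": 3072,
          "CompletionTokens": 768
        }
      }
    }
  }
}
```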
## 5. Cache TTLs & queue directories
- **Plan cache TTLs**: In-memory and file-system caches honour `AdvisoryAI:PlanCache:DefaultTimeToLive` (default 10 minutes) and `CleanupInterval` (default 5 minutes). Shorten the TTL to reduce stale plans or increase it to favour offline reuse. Both values accept ISO 8601 or `hh:mm:ss` time spans.
- **Queue & storage paths**: `AdvisoryAI:Queue:DirectoryPath`, `AdvisoryAI:Storage:PlanCacheDirectory`, and `AdvisoryAI:Storage:OutputDirectory` default to `data/advisory-ai/{queue,plans,outputs}` under the content root; override these when mounting RWX volumes in sovereign clusters.
- **Output TTLs**: Output artefacts inherit the host file-system retention policies. Combine `DefaultTimeToLive` with a cron or systemd timer to prune `outputs/` periodically when operating in remote-inference-heavy environments.
### Example: raised TTL & custom queue path
```json
{
"AdvisoryAI": {
"PlanCache": {
"DefaultTimeToLive": "00:20:00",
"CleanupInterval": "00:05:00"
},
"Queue": {
"DirectoryPath": "/var/lib/advisory-ai/queue"
}
}
}
```
## 6. Operational notes
- Updates to **guardrail phrases** take effect only on host reload. When distributing blocked-phrase files via Offline Kits, keep filenames stable and version them through Git so QA can diff changes.
- **Temperature / sampling** remains a remote-provider concern. StellaOps records the provider's `modelId` and exposes fallback metadata so policy authors can audit when sanitized prompts were returned instead of model output.
- Always track changes in `docs/implplan/SPRINT_111_advisoryai.md` (task `DOCS-AIAI-31-006`) when promoting this document so the guild can trace which parameters were added per sprint.

View File

@@ -69,6 +69,21 @@ This document defines how StellaOps records provenance for SBOM, VEX, scan, a
3. **Attach** the provenance block before appending the event to Mongo, using `StellaOps.Provenance.Mongo` helpers.
4. **Backfill** historical events by resolving known subjects → attestation digests and running an update script.
### 2.1 Supplying metadata from Concelier statements
Concelier ingestion jobs can now inline provenance when they create advisory statements. Add an `AdvisoryProvenance` entry with `kind = "dsse"` (or `dsse-metadata` / `attestation-dsse`) and set `value` to the same JSON emitted by the CI snippet. `AdvisoryEventLog` and `AdvisoryMergeService` automatically parse that entry, hydrate `AdvisoryStatementInput.Provenance/Trust`, and persist the metadata alongside the statement.
```json
{
"source": "attestor",
"kind": "dsse",
"value": "{ \"dsse\": { \"envelopeDigest\": \"sha256:…\", \"payloadType\": \"application/vnd.in-toto+json\" }, \"trust\": { \"verified\": true, \"verifier\": \"Authority@stella\" } }",
"recordedAt": "2025-11-10T00:00:00Z"
}
```
Providing the metadata during ingestion keeps new statements self-contained and reduces the surface that the `/events/statements/{statementId}/provenance` endpoint needs to backfill later.
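The hydration step can be pictured as follows — an illustrative sketch only, since the real logic lives in `AdvisoryEventLog`/`AdvisoryMergeService`; the function name and return shape here are assumptions:

```javascript
// Sketch: hydrate statement provenance/trust from an AdvisoryProvenance entry.
const DSSE_KINDS = new Set(["dsse", "dsse-metadata", "attestation-dsse"]);

function hydrateFromProvenance(entry) {
  if (!DSSE_KINDS.has(entry.kind)) return null;
  const parsed = JSON.parse(entry.value); // value carries the CI-emitted JSON
  return { provenance: parsed.dsse ?? null, trust: parsed.trust ?? null };
}

const entry = {
  source: "attestor",
  kind: "dsse",
  value: '{"dsse":{"envelopeDigest":"sha256:abc","payloadType":"application/vnd.in-toto+json"},"trust":{"verified":true,"verifier":"Authority@stella"}}',
  recordedAt: "2025-11-10T00:00:00Z"
};

const hydrated = hydrateFromProvenance(entry);
console.log(hydrated.trust.verified); // true
```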
Reference helper: `src/__Libraries/StellaOps.Provenance.Mongo/ProvenanceMongoExtensions.cs`.
---
@@ -202,3 +217,17 @@ rules:
| `PROV-INDEX-401-030` | Create Mongo indexes and expose helper queries for audits. |
Keep this document updated when new attestation types or mirror/witness policies land.
---
## 9. Feedser API for provenance updates
Feedser exposes a lightweight endpoint for attaching provenance after an event is recorded:
```
POST /events/statements/{statementId}/provenance
Headers: X-Stella-Tenant, Authorization (if Authority is enabled)
Body: { "dsse": { ... }, "trust": { ... } }
```
The body matches the JSON emitted by `publish_attestation_with_provenance.sh`. Feedser validates the payload, ensures `trust.verified = true`, and then calls `AttachStatementProvenanceAsync` so the DSSE metadata lands inline on the target statement. Clients receive HTTP 202 on success, 400 on malformed input, and 404 if the statement id is unknown.
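The validation rule can be sketched as a plain predicate — illustrative only (the Feedser handler's actual names differ), returning the status codes listed above:

```javascript
// Sketch of the server-side validation described above (names illustrative):
// 202 when a dsse block is present and trust.verified === true, 400 otherwise.
function validateProvenanceBody(body) {
  if (!body || typeof body !== "object") return 400;
  if (!body.dsse || typeof body.dsse !== "object") return 400;
  if (!body.trust || body.trust.verified !== true) return 400;
  return 202;
}

console.log(validateProvenanceBody({ dsse: { envelopeDigest: "sha256:abc" }, trust: { verified: true } })); // 202
console.log(validateProvenanceBody({ dsse: {}, trust: { verified: false } })); // 400
```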

View File

@@ -0,0 +1,19 @@
# 2025-11-12 Notifications Attestation Template Suite
## Summary
- Introduced the canonical `tmpl-attest-*` template family covering verification failures, expiring attestations, key rotations, and transparency anomalies.
- Synchronized overview, rules, and architecture docs so operators, rule authors, and implementers share the same guidance for attestation-triggered notifications.
- Captured Offline Kit expectations and helper usage so the upcoming NOTIFY-ATTEST-74-002 wiring work has stable artefacts to reference.
## Details
- `docs/notifications/templates.md` now includes Section 7 with required fields, helper references, Slack/Email/Webhook samples, and Offline Kit packaging notes for the attestation lifecycle templates.
- Baseline exported templates for each required channel now live under `offline/notifier/templates/attestation/*.template.json` so Offline Kit consumers inherit the canonical payloads immediately.
- `docs/notifications/overview.md` highlights that template capabilities include the attestation suite and reiterates determinism requirements around the `tmpl-attest-*` keys.
- `docs/notifications/rules.md` adds Section 4.0, mandating the new template keys for `attestor.*` and `authority.keys.*` events so rules do not drift.
- `docs/notifications/architecture.md` references the template suite inside the rendering pipeline description, reminding service owners to populate attestation context fields.
- Sprint trackers (`SPRINT_170_notifications_telemetry.md`, `SPRINT_171_notifier_i.md`) note the documentation progress for NOTIFY-ATTEST-74-001.
## Follow-ups
- [ ] Finalise the attestation event schema on 2025-11-13 so the documented templates can be localised and promoted to Offline Kits.
- [ ] Export the new templates into Offline Kit manifests (`offline/notifier/templates/attestation/`) once schemas lock.
- [ ] Update rule/controller defaults so attestation-triggered rules reference the documented template keys by default.

View File

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-expiry-warning-email-en-us",
"tenantId": "bootstrap",
"channelType": "email",
"key": "tmpl-attest-expiry-warning",
"locale": "en-us",
"renderMode": "html",
"format": "email",
"description": "Expiry warning for attestations approaching their expiration window.",
"body": "<h2>Attestation expiry notice</h2>\n<p>The attestation for <code>{{payload.subject.repository}}</code> (digest {{payload.subject.digest}}) expires on <strong>{{payload.attestation.expiresAt}}</strong>.</p>\n<ul>\n <li>Issued: {{payload.attestation.issuedAt}}</li>\n <li>Signer: <code>{{payload.signer.kid}}</code> ({{payload.signer.algorithm}})</li>\n <li>Time remaining: {{expires_in payload.attestation.expiresAt event.ts}}</li>\n</ul>\n<p>Please rotate the attestation before expiry using <a href=\"{{payload.links.docs}}\">these instructions</a>.</p>\n<p>Console: <a href=\"{{payload.links.console}}\">{{payload.links.console}}</a></p>\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-key-rotation-email-en-us",
"tenantId": "bootstrap",
"channelType": "email",
"key": "tmpl-attest-key-rotation",
"locale": "en-us",
"renderMode": "html",
"format": "email",
"description": "Email bulletin for attestation key rotation or revocation events.",
"body": "<h2>Attestation key rotation notice</h2>\n<p>Authority rotated or revoked signing keys at {{payload.rotation.executedAt}}.</p>\n<ul>\n <li>Rotation batch: {{payload.rotation.batchId}}</li>\n <li>Impacted services: {{payload.rotation.impactedServices}}</li>\n <li>Reason: {{payload.rotation.reason}}</li>\n</ul>\n<p>Recommended action: {{payload.recommendation}}</p>\n<p>Docs: <a href=\"{{payload.links.docs}}\">Rotation playbook</a></p>\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-key-rotation-webhook-en-us",
"tenantId": "bootstrap",
"channelType": "webhook",
"key": "tmpl-attest-key-rotation",
"locale": "en-us",
"renderMode": "json",
"format": "webhook",
"description": "Webhook payload for attestation key rotation/revocation events.",
"body": "{\n \"event\": \"authority.keys.rotated\",\n \"tenantId\": \"{{event.tenant}}\",\n \"batchId\": \"{{payload.rotation.batchId}}\",\n \"executedAt\": \"{{payload.rotation.executedAt}}\",\n \"impactedServices\": \"{{payload.rotation.impactedServices}}\",\n \"reason\": \"{{payload.rotation.reason}}\",\n \"links\": {\n \"docs\": \"{{payload.links.docs}}\",\n \"console\": \"{{payload.links.console}}\"\n }\n}\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-transparency-anomaly-slack-en-us",
"tenantId": "bootstrap",
"channelType": "slack",
"key": "tmpl-attest-transparency-anomaly",
"locale": "en-us",
"renderMode": "markdown",
"format": "slack",
"description": "Slack alert for transparency witness anomalies.",
"body": ":warning: Transparency anomaly detected for `{{payload.subject.digest}}`\nWitness: `{{payload.transparency.witnessId}}` ({{payload.transparency.classification}})\nRekor index: {{payload.transparency.rekorIndex}}\nAnomaly window: {{payload.transparency.windowStart}} → {{payload.transparency.windowEnd}}\nRecommended action: {{payload.recommendation}}\nConsole details: {{link \"Open in Console\" payload.links.console}}\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-transparency-anomaly-webhook-en-us",
"tenantId": "bootstrap",
"channelType": "webhook",
"key": "tmpl-attest-transparency-anomaly",
"locale": "en-us",
"renderMode": "json",
"format": "webhook",
"description": "Webhook payload for Rekor transparency anomalies.",
"body": "{\n \"event\": \"attestor.transparency.anomaly\",\n \"tenantId\": \"{{event.tenant}}\",\n \"subjectDigest\": \"{{payload.subject.digest}}\",\n \"witnessId\": \"{{payload.transparency.witnessId}}\",\n \"classification\": \"{{payload.transparency.classification}}\",\n \"rekorIndex\": {{payload.transparency.rekorIndex}},\n \"window\": {\n \"start\": \"{{payload.transparency.windowStart}}\",\n \"end\": \"{{payload.transparency.windowEnd}}\"\n },\n \"links\": {\n \"console\": \"{{payload.links.console}}\",\n \"rekor\": \"{{payload.links.rekor}}\"\n },\n \"recommendation\": \"{{payload.recommendation}}\"\n}\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-verify-fail-email-en-us",
"tenantId": "bootstrap",
"channelType": "email",
"key": "tmpl-attest-verify-fail",
"locale": "en-us",
"renderMode": "html",
"format": "email",
"description": "Email notice for attestation verification failures.",
"body": "<h2>Attestation verification failure</h2>\n<p>The attestation for <code>{{payload.subject.repository}}</code> (digest {{payload.subject.digest}}) failed verification at {{event.ts}}.</p>\n<ul>\n <li>Reason: <code>{{payload.failure.reasonCode}}</code> — {{payload.failure.reason}}</li>\n <li>Signer: <code>{{payload.signer.kid}}</code> ({{payload.signer.algorithm}})</li>\n <li>Rekor entry: <a href=\"{{payload.links.rekor}}\">{{payload.links.rekor}}</a></li>\n <li>Last valid attestation: <a href=\"{{payload.links.console}}\">Console report</a></li>\n</ul>\n<p>{{payload.recommendation}}</p>\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-verify-fail-slack-en-us",
"tenantId": "bootstrap",
"channelType": "slack",
"key": "tmpl-attest-verify-fail",
"locale": "en-us",
"renderMode": "markdown",
"format": "slack",
"description": "Slack alert for attestation verification failures with Rekor traceability.",
"body": ":rotating_light: {{attestation_status_badge payload.failure.status}} verification failed for `{{payload.subject.digest}}`\nSigner: `{{fingerprint payload.signer.kid}}` ({{payload.signer.algorithm}})\nReason: `{{payload.failure.reasonCode}}` — {{payload.failure.reason}}\nLast valid attestation: {{link \"Console\" payload.links.console}}\nRekor entry: {{link \"Transparency log\" payload.links.rekor}}\nRecommended action: {{payload.recommendation}}\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,16 @@
{
"schemaVersion": "notify.template@1",
"templateId": "tmpl-attest-verify-fail-webhook-en-us",
"tenantId": "bootstrap",
"channelType": "webhook",
"key": "tmpl-attest-verify-fail",
"locale": "en-us",
"renderMode": "json",
"format": "webhook",
"description": "JSON payload for Pager/SOC integrations on attestation verification failures.",
"body": "{\n \"event\": \"attestor.verification.failed\",\n \"tenantId\": \"{{event.tenant}}\",\n \"subjectDigest\": \"{{payload.subject.digest}}\",\n \"repository\": \"{{payload.subject.repository}}\",\n \"reasonCode\": \"{{payload.failure.reasonCode}}\",\n \"reason\": \"{{payload.failure.reason}}\",\n \"signer\": {\n \"kid\": \"{{payload.signer.kid}}\",\n \"algorithm\": \"{{payload.signer.algorithm}}\"\n },\n \"rekor\": {\n \"url\": \"{{payload.links.rekor}}\",\n \"uuid\": \"{{payload.rekor.uuid}}\",\n \"index\": {{payload.rekor.index}}\n },\n \"recommendation\": \"{{payload.recommendation}}\"\n}\n",
"metadata": {
"author": "notifications-bootstrap",
"version": "2025-11-12"
}
}

View File

@@ -0,0 +1,33 @@
// Index 1: core lookup subject + kind + Rekor presence
db.events.createIndex(
{
"subject.digest.sha256": 1,
"kind": 1,
"provenance.dsse.rekor.logIndex": 1
},
{
name: "events_by_subject_kind_provenance"
}
);
// Index 2: compliance gap by kind + verified + Rekor presence
db.events.createIndex(
{
"kind": 1,
"trust.verified": 1,
"provenance.dsse.rekor.logIndex": 1
},
{
name: "events_unproven_by_kind"
}
);
// Index 3: generic Rekor index scan for debugging / bulk audit
db.events.createIndex(
{
"provenance.dsse.rekor.logIndex": 1
},
{
name: "events_by_rekor_logindex"
}
);
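Index 2 backs compliance audits such as “events of a given kind that claim verification but carry no Rekor entry.” The equivalent filter, sketched as a plain predicate over event documents (field paths match the indexes above; the function name is illustrative):

```javascript
// Plain-JS mirror of the Mongo compliance-gap filter served by index 2:
// kind matches, trust.verified is true, but no Rekor log index is attached.
function isUnprovenEvent(event, kind) {
  return event.kind === kind
    && event.trust?.verified === true
    && event.provenance?.dsse?.rekor?.logIndex === undefined;
}

const events = [
  { kind: "sbom", trust: { verified: true }, provenance: { dsse: { rekor: { logIndex: 42 } } } },
  { kind: "sbom", trust: { verified: true }, provenance: { dsse: {} } },
];
console.log(events.filter(e => isUnprovenEvent(e, "sbom")).length); // 1
```

In Mongo itself the same gap query would test `"provenance.dsse.rekor.logIndex": { $exists: false }`, which is exactly the key order index 2 covers.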

View File

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -euo pipefail
# Inputs (typically provided by CI/CD)
IMAGE_REF="${IMAGE_REF:?missing IMAGE_REF}" # e.g. ghcr.io/org/app:tag
ATTEST_PATH="${ATTEST_PATH:?missing ATTEST_PATH}" # DSSE envelope file path
REKOR_URL="${REKOR_URL:-https://rekor.sigstore.dev}"
KEY_REF="${KEY_REF:-cosign.key}" # could be KMS / keyless etc.
OUT_META_JSON="${OUT_META_JSON:-provenance-meta.json}"
# 1) Upload DSSE envelope to Rekor with JSON output
rekor-cli upload \
--rekor_server "${REKOR_URL}" \
--artifact "${ATTEST_PATH}" \
--type dsse \
--format json > rekor-upload.json
LOG_INDEX=$(jq '.LogIndex' rekor-upload.json)
UUID=$(jq -r '.UUID' rekor-upload.json)
INTEGRATED_TIME=$(jq '.IntegratedTime' rekor-upload.json)
# 2) Compute envelope SHA256
ENVELOPE_SHA256=$(sha256sum "${ATTEST_PATH}" | awk '{print $1}')
# 3) Extract key metadata (example for local file key; adapt for Fulcio/KMS)
# For keyless/Fulcio you'd normally extract the cert from cosign verify-attestation.
KEY_ID="${KEY_ID:-${KEY_REF}}"
KEY_ALGO="${KEY_ALGO:-unknown}"
KEY_ISSUER="${KEY_ISSUER:-unknown}"
# 4) Optional: resolve image digest (if not already known in CI)
IMAGE_DIGEST="${IMAGE_DIGEST:-}"
if [ -z "${IMAGE_DIGEST}" ]; then
IMAGE_DIGEST="$(cosign triangulate "${IMAGE_REF}")"
fi
# 5) Emit provenance sidecar
cat > "${OUT_META_JSON}" <<EOF
{
"subject": {
"imageRef": "${IMAGE_REF}",
"digest": {
"sha256": "${IMAGE_DIGEST}"
}
},
"attestation": {
"path": "${ATTEST_PATH}",
"envelopeDigest": "sha256:${ENVELOPE_SHA256}",
"payloadType": "application/vnd.in-toto+json"
},
"dsse": {
"envelopeDigest": "sha256:${ENVELOPE_SHA256}",
"payloadType": "application/vnd.in-toto+json",
"key": {
"keyId": "${KEY_ID}",
"issuer": "${KEY_ISSUER}",
"algo": "${KEY_ALGO}"
},
"rekor": {
"logIndex": ${LOG_INDEX},
"uuid": "${UUID}",
"integratedTime": ${INTEGRATED_TIME}
}
}
}
EOF
echo "Provenance metadata written to ${OUT_META_JSON}"

View File

@@ -901,7 +901,8 @@ internal sealed class BackendOperationsClient : IBackendOperationsClient
        throw new ArgumentException("Scan identifier is required.", nameof(scanId));
    }
-        using var request = CreateRequest(HttpMethod.Get, $"api/scans/{scanId}/ruby-packages");
+        var encodedScanId = Uri.EscapeDataString(scanId);
+        using var request = CreateRequest(HttpMethod.Get, $"api/scans/{encodedScanId}/ruby-packages");
        await AuthorizeRequestAsync(request, cancellationToken).ConfigureAwait(false);
        using var response = await _httpClient.SendAsync(request, cancellationToken).ConfigureAwait(false);

View File

@@ -1,26 +1,72 @@
using System.Collections.Generic;
using System.Text.Json.Serialization;

namespace StellaOps.Concelier.WebService.Contracts;

public sealed record AdvisoryStructuredFieldResponse(
    string AdvisoryKey,
    int Total,
    bool Truncated,
    IReadOnlyList<AdvisoryStructuredFieldEntry> Entries);

public sealed record AdvisoryStructuredFieldEntry(
    string Type,
    string DocumentId,
    string FieldPath,
    string ChunkId,
    AdvisoryStructuredFieldContent Content,
    AdvisoryStructuredFieldProvenance Provenance);

public sealed record AdvisoryStructuredFieldContent
{
    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? Title { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? Description { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? Url { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public string? Note { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public AdvisoryStructuredFixContent? Fix { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public AdvisoryStructuredCvssContent? Cvss { get; init; }

    [JsonIgnore(Condition = JsonIgnoreCondition.WhenWritingNull)]
    public AdvisoryStructuredAffectedContent? Affected { get; init; }
}

public sealed record AdvisoryStructuredFixContent(
    string? PackageType,
    string? PackageIdentifier,
    string? FixedVersion,
    string? ReferenceUrl);

public sealed record AdvisoryStructuredCvssContent(
    string Version,
    string Vector,
    double BaseScore,
    string Severity);

public sealed record AdvisoryStructuredAffectedContent(
    string PackageType,
    string PackageIdentifier,
    string? Platform,
    string RangeKind,
    string? IntroducedVersion,
    string? FixedVersion,
    string? LastAffectedVersion,
    string? RangeExpression,
    string? Status);

public sealed record AdvisoryStructuredFieldProvenance(
    string Source,
    string Kind,
    string? Value,
    DateTimeOffset RecordedAt,
    IReadOnlyList<string> FieldMask);
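Because every optional property on `AdvisoryStructuredFieldContent` carries `JsonIgnoreCondition.WhenWritingNull`, a serialized entry exposes only the content block that is populated for its section. A hypothetical `cvss` entry might serialize roughly as follows (camelCase naming and all field values are illustrative assumptions, not taken from a real advisory or the service's serializer options):

```json
{
  "type": "cvss",
  "documentId": "observation:example-vendor:EXAMPLE-1",
  "fieldPath": "/cvssMetrics/0",
  "chunkId": "A1B2C3D4E5F60718",
  "content": {
    "cvss": {
      "version": "3.1",
      "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
      "baseScore": 9.8,
      "severity": "critical"
    }
  },
  "provenance": {
    "source": "example-vendor",
    "kind": "map",
    "value": null,
    "recordedAt": "2025-11-12T00:00:00+00:00",
    "fieldMask": []
  }
}
```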


@@ -24,9 +24,9 @@ using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Core.Jobs;
using StellaOps.Concelier.Core.Observations;
using StellaOps.Concelier.Core.Linksets;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.WebService.Diagnostics;
using Serilog;
using StellaOps.Concelier.Merge;
@@ -50,6 +50,10 @@ using StellaOps.Concelier.WebService.Contracts;
using StellaOps.Concelier.Core.Aoc;
using StellaOps.Concelier.Core.Raw;
using StellaOps.Concelier.RawModels;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Aliases;
using StellaOps.Provenance.Mongo;

var builder = WebApplication.CreateBuilder(args);
@@ -812,6 +816,8 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    [FromServices] IAdvisoryObservationQueryService observationService,
    [FromServices] AdvisoryChunkBuilder chunkBuilder,
    [FromServices] IAdvisoryChunkCache chunkCache,
    [FromServices] IAdvisoryStore advisoryStore,
    [FromServices] IAliasStore aliasStore,
    [FromServices] IAdvisoryAiTelemetry telemetry,
    [FromServices] TimeProvider timeProvider,
    CancellationToken cancellationToken) =>
@@ -854,21 +860,37 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    var sectionFilter = BuildFilterSet(context.Request.Query["section"]);
    var formatFilter = BuildFilterSet(context.Request.Query["format"]);

    var resolution = await ResolveAdvisoryAsync(
        normalizedKey,
        advisoryStore,
        aliasStore,
        cancellationToken).ConfigureAwait(false);

    if (resolution is null)
    {
        telemetry.TrackChunkFailure(tenant, normalizedKey, "advisory_not_found", "not_found");
        return Problem(context, "Advisory not found", StatusCodes.Status404NotFound, ProblemTypes.NotFound, $"No advisory found for {normalizedKey}.");
    }

    var (advisory, aliasList, fingerprint) = resolution.Value;
    var aliasCandidates = aliasList.IsDefaultOrEmpty
        ? ImmutableArray.Create(advisory.AdvisoryKey)
        : aliasList;

    var queryOptions = new AdvisoryObservationQueryOptions(
        tenant,
        aliases: aliasCandidates,
        limit: observationLimit);

    var observationResult = await observationService.QueryAsync(queryOptions, cancellationToken).ConfigureAwait(false);
    if (observationResult.Observations.IsDefaultOrEmpty || observationResult.Observations.Length == 0)
    {
        telemetry.TrackChunkFailure(tenant, advisory.AdvisoryKey, "advisory_not_found", "not_found");
        return Problem(context, "Advisory not found", StatusCodes.Status404NotFound, ProblemTypes.NotFound, $"No observations available for {advisory.AdvisoryKey}.");
    }

    var observations = observationResult.Observations.ToArray();
    var buildOptions = new AdvisoryChunkBuildOptions(
        advisory.AdvisoryKey,
        chunkLimit,
        observationLimit,
        sectionFilter,
@@ -884,7 +906,7 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    if (cacheDuration > TimeSpan.Zero)
    {
        var cacheKey = AdvisoryChunkCacheKey.Create(tenant, advisory.AdvisoryKey, buildOptions, observations, fingerprint);
        if (chunkCache.TryGet(cacheKey, out var cachedResult))
        {
            buildResult = cachedResult;
@@ -892,13 +914,13 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
        }
        else
        {
            buildResult = chunkBuilder.Build(buildOptions, advisory, observations);
            chunkCache.Set(cacheKey, buildResult, cacheDuration);
        }
    }
    else
    {
        buildResult = chunkBuilder.Build(buildOptions, advisory, observations);
    }

    var duration = timeProvider.GetElapsedTime(requestStart);
@@ -907,13 +929,13 @@ var advisoryChunksEndpoint = app.MapGet("/advisories/{advisoryKey}/chunks", asyn
    telemetry.TrackChunkResult(new AdvisoryAiChunkRequestTelemetry(
        tenant,
        advisory.AdvisoryKey,
        "ok",
        buildResult.Response.Truncated,
        cacheHit,
        observations.Length,
        buildResult.Telemetry.SourceCount,
        buildResult.Response.Entries.Count,
        duration,
        guardrailCounts));
@@ -1055,6 +1077,52 @@ app.MapGet("/concelier/advisories/{vulnerabilityKey}/replay", async (
    return JsonResult(response);
});
var statementProvenanceEndpoint = app.MapPost("/events/statements/{statementId:guid}/provenance", async (
Guid statementId,
HttpContext context,
[FromServices] IAdvisoryEventLog eventLog,
CancellationToken cancellationToken) =>
{
if (!TryResolveTenant(context, requireHeader: true, out var tenant, out var tenantError))
{
return tenantError;
}
var authorizationError = EnsureTenantAuthorized(context, tenant);
if (authorizationError is not null)
{
return authorizationError;
}
try
{
using var document = await JsonDocument.ParseAsync(context.Request.Body, cancellationToken: cancellationToken).ConfigureAwait(false);
var (dsse, trust) = ProvenanceJsonParser.Parse(document.RootElement);
if (!trust.Verified)
{
return Problem(context, "Unverified provenance", StatusCodes.Status400BadRequest, ProblemTypes.Validation, "trust.verified must be true.");
}
await eventLog.AttachStatementProvenanceAsync(statementId, dsse, trust, cancellationToken).ConfigureAwait(false);
}
catch (JsonException ex)
{
return Problem(context, "Invalid provenance payload", StatusCodes.Status400BadRequest, ProblemTypes.Validation, ex.Message);
}
catch (InvalidOperationException ex)
{
return Problem(context, "Statement not found", StatusCodes.Status404NotFound, ProblemTypes.NotFound, ex.Message);
}
return Results.Accepted($"/events/statements/{statementId}");
});
if (authorityConfigured)
{
statementProvenanceEndpoint.RequireAuthorization(AdvisoryIngestPolicyName);
}
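The endpoint accepts any JSON body that `ProvenanceJsonParser` can split into DSSE and trust parts, then rejects payloads whose trust block is not marked verified. A minimal sketch of that validation order, assuming a payload shape with a boolean `trust.verified` field (the real parser lives in `StellaOps.Provenance.Mongo` and may expect a richer shape):

```python
import json

def attach_provenance(body: bytes) -> tuple[int, str]:
    """Mimics the endpoint's checks: parse errors -> 400,
    unverified trust -> 400, otherwise 202 Accepted."""
    try:
        document = json.loads(body)
    except json.JSONDecodeError as ex:
        return 400, f"Invalid provenance payload: {ex}"
    trust = document.get("trust", {})
    if trust.get("verified") is not True:
        return 400, "trust.verified must be true."
    return 202, "Accepted"
```

Under these assumptions, `attach_provenance(b'{"dsse": {}, "trust": {"verified": true}}')` is accepted, while an unverified or malformed body maps to a 400 problem response.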
var loggingEnabled = concelierOptions.Telemetry?.EnableLogging ?? true;
if (loggingEnabled)
@@ -1250,6 +1318,149 @@ IResult? EnsureTenantAuthorized(HttpContext context, string tenant)
    return null;
}
async Task<(Advisory Advisory, ImmutableArray<string> Aliases, string Fingerprint)?> ResolveAdvisoryAsync(
string advisoryKey,
IAdvisoryStore advisoryStore,
IAliasStore aliasStore,
CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(advisoryStore);
ArgumentNullException.ThrowIfNull(aliasStore);
var directCandidates = new List<string>();
if (!string.IsNullOrWhiteSpace(advisoryKey))
{
var trimmed = advisoryKey.Trim();
if (!string.IsNullOrWhiteSpace(trimmed))
{
directCandidates.Add(trimmed);
var upper = trimmed.ToUpperInvariant();
if (!string.Equals(upper, trimmed, StringComparison.Ordinal))
{
directCandidates.Add(upper);
}
}
}
foreach (var candidate in directCandidates.Distinct(StringComparer.OrdinalIgnoreCase))
{
var advisory = await advisoryStore.FindAsync(candidate, cancellationToken).ConfigureAwait(false);
if (advisory is not null)
{
return CreateResolution(advisory);
}
}
var aliasMatches = new List<AliasRecord>();
foreach (var (scheme, value) in BuildAliasLookups(advisoryKey))
{
var records = await aliasStore.GetByAliasAsync(scheme, value, cancellationToken).ConfigureAwait(false);
if (records.Count > 0)
{
aliasMatches.AddRange(records);
}
}
if (aliasMatches.Count == 0)
{
return null;
}
foreach (var candidate in aliasMatches
.OrderByDescending(record => record.UpdatedAt)
.ThenBy(record => record.AdvisoryKey, StringComparer.Ordinal)
.Select(record => record.AdvisoryKey)
.Distinct(StringComparer.OrdinalIgnoreCase))
{
var advisory = await advisoryStore.FindAsync(candidate, cancellationToken).ConfigureAwait(false);
if (advisory is not null)
{
return CreateResolution(advisory);
}
}
return null;
}
static (Advisory Advisory, ImmutableArray<string> Aliases, string Fingerprint) CreateResolution(Advisory advisory)
{
var fingerprint = AdvisoryFingerprint.Compute(advisory);
var aliases = BuildAliasQuery(advisory);
return (advisory, aliases, fingerprint);
}
static ImmutableArray<string> BuildAliasQuery(Advisory advisory)
{
var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
if (!string.IsNullOrWhiteSpace(advisory.AdvisoryKey))
{
set.Add(advisory.AdvisoryKey.Trim());
}
foreach (var alias in advisory.Aliases)
{
if (!string.IsNullOrWhiteSpace(alias))
{
set.Add(alias.Trim());
}
}
if (set.Count == 0)
{
return ImmutableArray<string>.Empty;
}
var ordered = set
.OrderBy(static value => value, StringComparer.OrdinalIgnoreCase)
.ToList();
var canonical = advisory.AdvisoryKey?.Trim();
if (!string.IsNullOrWhiteSpace(canonical))
{
ordered.RemoveAll(value => string.Equals(value, canonical, StringComparison.OrdinalIgnoreCase));
ordered.Insert(0, canonical);
}
return ordered.ToImmutableArray();
}
static IReadOnlyList<(string Scheme, string Value)> BuildAliasLookups(string? candidate)
{
var pairs = new List<(string Scheme, string Value)>();
var seen = new HashSet<string>(StringComparer.Ordinal);
void Add(string scheme, string? value)
{
if (string.IsNullOrWhiteSpace(scheme) || string.IsNullOrWhiteSpace(value))
{
return;
}
var trimmed = value.Trim();
if (trimmed.Length == 0)
{
return;
}
var key = $"{scheme}\u0001{trimmed}";
if (seen.Add(key))
{
pairs.Add((scheme, trimmed));
}
}
if (AliasSchemeRegistry.TryNormalize(candidate, out var normalized, out var scheme))
{
Add(scheme, normalized);
}
Add(AliasStoreConstants.UnscopedScheme, candidate);
Add(AliasStoreConstants.PrimaryScheme, candidate);
return pairs;
}
ImmutableHashSet<string> BuildFilterSet(StringValues values)
{
    if (values.Count == 0)


@@ -3,8 +3,7 @@ using System.Collections.Immutable;
using System.Globalization;
using System.Linq;
using System.Text;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.WebService.Contracts;
using StellaOps.Cryptography;
@@ -21,7 +20,24 @@ internal sealed record AdvisoryChunkBuildOptions(
internal sealed class AdvisoryChunkBuilder
{
    private const string SectionWorkaround = "workaround";
    private const string SectionFix = "fix";
    private const string SectionCvss = "cvss";
    private const string SectionAffected = "affected";

    private static readonly ImmutableArray<string> SectionOrder = ImmutableArray.Create(
        SectionWorkaround,
        SectionFix,
        SectionCvss,
        SectionAffected);

    private static readonly ImmutableHashSet<string> WorkaroundKinds = ImmutableHashSet.Create(
        StringComparer.OrdinalIgnoreCase,
        "workaround",
        "mitigation",
        "temporary_fix",
        "work-around");

    private readonly ICryptoHash _hash;

    public AdvisoryChunkBuilder(ICryptoHash hash)
@@ -31,275 +47,330 @@ internal sealed class AdvisoryChunkBuilder
    public AdvisoryChunkBuildResult Build(
        AdvisoryChunkBuildOptions options,
        Advisory advisory,
        IReadOnlyList<AdvisoryObservation> observations)
    {
        ArgumentNullException.ThrowIfNull(options);
        ArgumentNullException.ThrowIfNull(advisory);
        ArgumentNullException.ThrowIfNull(observations);

        var vendorIndex = new ObservationIndex(observations);
        var chunkLimit = Math.Max(1, options.ChunkLimit);
        var entries = new List<AdvisoryStructuredFieldEntry>(chunkLimit);
        var total = 0;
        var truncated = false;
        var sectionFilter = options.SectionFilter ?? ImmutableHashSet<string>.Empty;

        foreach (var section in SectionOrder)
        {
            if (!ShouldInclude(sectionFilter, section))
            {
                continue;
            }

            IReadOnlyList<AdvisoryStructuredFieldEntry> bucket = section switch
            {
                SectionWorkaround => BuildWorkaroundEntries(advisory, vendorIndex),
                SectionFix => BuildFixEntries(advisory, vendorIndex),
                SectionCvss => BuildCvssEntries(advisory, vendorIndex),
                SectionAffected => BuildAffectedEntries(advisory, vendorIndex),
                _ => Array.Empty<AdvisoryStructuredFieldEntry>()
            };

            if (bucket.Count == 0)
            {
                continue;
            }

            total += bucket.Count;
            if (entries.Count >= chunkLimit)
            {
                truncated = true;
                continue;
            }

            var remaining = chunkLimit - entries.Count;
            if (bucket.Count <= remaining)
            {
                entries.AddRange(bucket);
            }
            else
            {
                entries.AddRange(bucket.Take(remaining));
                truncated = true;
            }
        }

        var response = new AdvisoryStructuredFieldResponse(
            options.AdvisoryKey,
            total,
            truncated,
            entries);

        var telemetry = new AdvisoryChunkTelemetrySummary(
            vendorIndex.SourceCount,
            truncated,
            ImmutableDictionary<AdvisoryChunkGuardrailReason, int>.Empty);

        return new AdvisoryChunkBuildResult(response, telemetry);
    }

    private IReadOnlyList<AdvisoryStructuredFieldEntry> BuildWorkaroundEntries(Advisory advisory, ObservationIndex index)
    {
        if (advisory.References.Length == 0)
        {
            return Array.Empty<AdvisoryStructuredFieldEntry>();
        }

        var list = new List<AdvisoryStructuredFieldEntry>();
        for (var i = 0; i < advisory.References.Length; i++)
        {
            var reference = advisory.References[i];
            if (string.IsNullOrWhiteSpace(reference.Kind) || !WorkaroundKinds.Contains(reference.Kind))
            {
                continue;
            }

            var content = new AdvisoryStructuredFieldContent
            {
                Title = reference.SourceTag ?? reference.Kind,
                Description = reference.Summary,
                Url = reference.Url
            };

            list.Add(CreateEntry(
                SectionWorkaround,
                index.Resolve(reference.Provenance),
                $"/references/{i}",
                content,
                reference.Provenance));
        }

        return list.Count == 0 ? Array.Empty<AdvisoryStructuredFieldEntry>() : list;
    }

    private IReadOnlyList<AdvisoryStructuredFieldEntry> BuildFixEntries(Advisory advisory, ObservationIndex index)
    {
        if (advisory.AffectedPackages.Length == 0)
        {
            return Array.Empty<AdvisoryStructuredFieldEntry>();
        }

        var list = new List<AdvisoryStructuredFieldEntry>();
        for (var packageIndex = 0; packageIndex < advisory.AffectedPackages.Length; packageIndex++)
        {
            var package = advisory.AffectedPackages[packageIndex];
            for (var rangeIndex = 0; rangeIndex < package.VersionRanges.Length; rangeIndex++)
            {
                var range = package.VersionRanges[rangeIndex];
                if (string.IsNullOrWhiteSpace(range.FixedVersion))
                {
                    continue;
                }

                var fix = new AdvisoryStructuredFixContent(
                    package.Type,
                    package.Identifier,
                    range.FixedVersion,
                    null);

                var content = new AdvisoryStructuredFieldContent
                {
                    Fix = fix,
                    Note = package.Provenance.FirstOrDefault()?.Value
                };

                list.Add(CreateEntry(
                    SectionFix,
                    index.Resolve(range.Provenance),
                    $"/affectedPackages/{packageIndex}/versionRanges/{rangeIndex}/fix",
                    content,
                    range.Provenance));
            }
        }

        return list.Count == 0 ? Array.Empty<AdvisoryStructuredFieldEntry>() : list;
    }

    private IReadOnlyList<AdvisoryStructuredFieldEntry> BuildCvssEntries(Advisory advisory, ObservationIndex index)
    {
        if (advisory.CvssMetrics.Length == 0)
        {
            return Array.Empty<AdvisoryStructuredFieldEntry>();
        }

        var list = new List<AdvisoryStructuredFieldEntry>(advisory.CvssMetrics.Length);
        for (var i = 0; i < advisory.CvssMetrics.Length; i++)
        {
            var metric = advisory.CvssMetrics[i];
            var cvss = new AdvisoryStructuredCvssContent(
                metric.Version,
                metric.Vector,
                metric.BaseScore,
                metric.BaseSeverity);

            var content = new AdvisoryStructuredFieldContent
            {
                Cvss = cvss
            };

            list.Add(CreateEntry(
                SectionCvss,
                index.Resolve(metric.Provenance),
                $"/cvssMetrics/{i}",
                content,
                metric.Provenance));
        }

        return list;
    }

    private IReadOnlyList<AdvisoryStructuredFieldEntry> BuildAffectedEntries(Advisory advisory, ObservationIndex index)
    {
        if (advisory.AffectedPackages.Length == 0)
        {
            return Array.Empty<AdvisoryStructuredFieldEntry>();
        }

        var list = new List<AdvisoryStructuredFieldEntry>();
        for (var packageIndex = 0; packageIndex < advisory.AffectedPackages.Length; packageIndex++)
        {
            var package = advisory.AffectedPackages[packageIndex];
            var status = package.Statuses.Length > 0 ? package.Statuses[0].Status : null;
            for (var rangeIndex = 0; rangeIndex < package.VersionRanges.Length; rangeIndex++)
            {
                var range = package.VersionRanges[rangeIndex];
                var affected = new AdvisoryStructuredAffectedContent(
                    package.Type,
                    package.Identifier,
                    package.Platform,
                    range.RangeKind,
                    range.IntroducedVersion,
                    range.FixedVersion,
                    range.LastAffectedVersion,
                    range.RangeExpression,
                    status);

                var content = new AdvisoryStructuredFieldContent
                {
                    Affected = affected
                };

                list.Add(CreateEntry(
                    SectionAffected,
                    index.Resolve(range.Provenance),
                    $"/affectedPackages/{packageIndex}/versionRanges/{rangeIndex}",
                    content,
                    range.Provenance));
            }
        }

        return list.Count == 0 ? Array.Empty<AdvisoryStructuredFieldEntry>() : list;
    }

    private AdvisoryStructuredFieldEntry CreateEntry(
        string type,
        string documentId,
        string fieldPath,
        AdvisoryStructuredFieldContent content,
        AdvisoryProvenance provenance)
    {
        var fingerprint = string.Concat(documentId, '|', fieldPath);
        var chunkId = CreateChunkId(fingerprint);
        return new AdvisoryStructuredFieldEntry(
            type,
            documentId,
            fieldPath,
            chunkId,
            content,
            new AdvisoryStructuredFieldProvenance(
                provenance.Source,
                provenance.Kind,
                provenance.Value,
                provenance.RecordedAt,
                NormalizeFieldMask(provenance.FieldMask)));
    }

    private static IReadOnlyList<string> NormalizeFieldMask(ImmutableArray<string> mask)
        => mask.IsDefaultOrEmpty ? Array.Empty<string>() : mask;

    private string CreateChunkId(string input)
    {
        var bytes = Encoding.UTF8.GetBytes(input);
        var digest = _hash.ComputeHash(bytes, HashAlgorithms.Sha256);
        return Convert.ToHexString(digest.AsSpan(0, 8));
    }

    private static bool ShouldInclude(ImmutableHashSet<string> filter, string type)
        => filter.Count == 0 || filter.Contains(type);

    private sealed class ObservationIndex
    {
        private const string UnknownObservationId = "unknown";

        private readonly Dictionary<string, AdvisoryObservation> _byVendor;
        private readonly Dictionary<string, AdvisoryObservation> _byObservationId;
        private readonly Dictionary<string, AdvisoryObservation> _byUpstreamId;
        private readonly string _fallbackId;

        public ObservationIndex(IReadOnlyList<AdvisoryObservation> observations)
        {
            _byVendor = new Dictionary<string, AdvisoryObservation>(StringComparer.OrdinalIgnoreCase);
            _byObservationId = new Dictionary<string, AdvisoryObservation>(StringComparer.OrdinalIgnoreCase);
            _byUpstreamId = new Dictionary<string, AdvisoryObservation>(StringComparer.OrdinalIgnoreCase);

            foreach (var observation in observations)
            {
                _byObservationId[observation.ObservationId] = observation;
                if (!string.IsNullOrWhiteSpace(observation.Source.Vendor))
                {
                    _byVendor[observation.Source.Vendor] = observation;
                }

                if (!string.IsNullOrWhiteSpace(observation.Upstream.UpstreamId))
                {
                    _byUpstreamId[observation.Upstream.UpstreamId] = observation;
                }
            }

            _fallbackId = observations.Count > 0 ? observations[0].ObservationId : UnknownObservationId;
            SourceCount = observations.Count;
        }

        public int SourceCount { get; }

        public string Resolve(AdvisoryProvenance provenance)
        {
            if (!string.IsNullOrWhiteSpace(provenance.Value))
            {
                if (_byObservationId.TryGetValue(provenance.Value, out var obs))
                {
                    return obs.ObservationId;
                }

                if (_byUpstreamId.TryGetValue(provenance.Value, out obs))
                {
                    return obs.ObservationId;
                }
            }

            if (!string.IsNullOrWhiteSpace(provenance.Source) &&
                _byVendor.TryGetValue(provenance.Source, out var vendorMatch))
            {
                return vendorMatch.ObservationId;
            }

            return _fallbackId;
        }
    }
}

internal sealed record AdvisoryChunkBuildResult(
    AdvisoryStructuredFieldResponse Response,
    AdvisoryChunkTelemetrySummary Telemetry);

internal sealed record AdvisoryChunkTelemetrySummary(


@@ -53,7 +53,8 @@ internal readonly record struct AdvisoryChunkCacheKey(string Value)
        string tenant,
        string advisoryKey,
        AdvisoryChunkBuildOptions options,
        IReadOnlyList<AdvisoryObservation> observations,
        string advisoryFingerprint)
    {
        var builder = new StringBuilder();
        builder.Append(tenant);
@@ -70,6 +71,8 @@ internal readonly record struct AdvisoryChunkCacheKey(string Value)
        builder.Append('|');
        AppendSet(builder, options.FormatFilter);
        builder.Append('|');
        builder.Append(advisoryFingerprint);
        builder.Append('|');

        foreach (var observation in observations
            .OrderBy(static o => o.ObservationId, StringComparer.Ordinal))
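Folding the canonical-advisory fingerprint into the cache key means any change to the merged advisory invalidates cached chunk builds even when the observation set is identical, while sorting observation IDs keeps the key independent of retrieval order. A sketch of that key shape (the separator, field order, and final hashing step are illustrative assumptions):

```python
import hashlib

def cache_key(tenant: str, advisory_key: str, fingerprint: str,
              observation_ids: list[str]) -> str:
    # Observations are sorted so key construction is order-independent.
    parts = [tenant, advisory_key, fingerprint, *sorted(observation_ids)]
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()
```

Two requests over the same observations in different order produce the same key; bumping the fingerprint produces a different one.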


@@ -0,0 +1,20 @@
using System.Security.Cryptography;
using System.Text;
using StellaOps.Concelier.Core;
using StellaOps.Concelier.Models;
namespace StellaOps.Concelier.WebService.Services;
internal static class AdvisoryFingerprint
{
public static string Compute(Advisory advisory)
{
ArgumentNullException.ThrowIfNull(advisory);
var canonical = CanonicalJsonSerializer.Serialize(advisory);
var bytes = Encoding.UTF8.GetBytes(canonical);
using var sha256 = SHA256.Create();
var hash = sha256.ComputeHash(bytes);
return Convert.ToHexString(hash);
}
}
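`AdvisoryFingerprint` hashes the canonical JSON form of the advisory, so two advisories that serialize identically share a fingerprint regardless of in-memory construction order. The same idea in a sketch, using a sorted-key JSON dump as a stand-in for `CanonicalJsonSerializer`:

```python
import hashlib
import json

def compute_fingerprint(advisory: dict) -> str:
    # Stand-in for CanonicalJsonSerializer: stable key order, no whitespace.
    canonical = json.dumps(advisory, sort_keys=True, separators=(",", ":"))
    # Uppercase hex mirrors .NET's Convert.ToHexString output.
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest().upper()
```

Reordering input fields does not change the fingerprint, which is what makes it safe to use as a cache-key component.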


@@ -0,0 +1,76 @@
using System;
using System.Text.Json;
using StellaOps.Concelier.Models;
using StellaOps.Provenance.Mongo;
namespace StellaOps.Concelier.Core.Events;
public static class AdvisoryDsseMetadataResolver
{
private static readonly string[] CandidateKinds =
{
"dsse",
"dsse-metadata",
"attestation",
"attestation-dsse"
};
public static bool TryResolve(Advisory advisory, out DsseProvenance? dsse, out TrustInfo? trust)
{
dsse = null;
trust = null;
if (advisory is null || advisory.Provenance.IsDefaultOrEmpty || advisory.Provenance.Length == 0)
{
return false;
}
foreach (var entry in advisory.Provenance)
{
if (!IsCandidateKind(entry.Kind) || string.IsNullOrWhiteSpace(entry.Value))
{
continue;
}
try
{
using var document = JsonDocument.Parse(entry.Value);
(dsse, trust) = ProvenanceJsonParser.Parse(document.RootElement);
if (dsse is not null && trust is not null)
{
return true;
}
}
catch (JsonException)
{
// Ignore malformed payloads; other provenance entries may contain valid DSSE metadata.
}
catch (InvalidOperationException)
{
                // Same as above; fall through to the remaining provenance entries.
}
}
dsse = null;
trust = null;
return false;
}
private static bool IsCandidateKind(string? kind)
{
if (string.IsNullOrWhiteSpace(kind))
{
return false;
}
foreach (var candidate in CandidateKinds)
{
if (string.Equals(candidate, kind, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}
return false;
}
}
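The resolver's scan — match the provenance kind case-insensitively against a candidate list, attempt to parse the value as JSON, and keep going past malformed entries so a later entry can still win — can be sketched as:

```python
import json

CANDIDATE_KINDS = {"dsse", "dsse-metadata", "attestation", "attestation-dsse"}

def try_resolve(provenance_entries):
    """provenance_entries: iterable of (kind, value) string pairs.
    Returns the first successfully parsed candidate payload, or None."""
    for kind, value in provenance_entries:
        if not kind or kind.lower() not in CANDIDATE_KINDS:
            continue
        if not value or value.isspace():
            continue
        try:
            return json.loads(value)
        except json.JSONDecodeError:
            continue  # malformed payload: later entries may still be valid
    return None
```

Here a non-candidate kind is skipped outright, a candidate with unparseable JSON is tolerated, and the first well-formed candidate payload is returned.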


@@ -1,30 +1,35 @@
 using System;
 using System.Collections.Immutable;
 using System.Text.Json;
 using StellaOps.Concelier.Models;
+using StellaOps.Provenance.Mongo;
 
 namespace StellaOps.Concelier.Core.Events;
 
 /// <summary>
 /// Input payload for appending a canonical advisory statement to the event log.
 /// </summary>
 public sealed record AdvisoryStatementInput(
     string VulnerabilityKey,
     Advisory Advisory,
     DateTimeOffset AsOf,
     IReadOnlyCollection<Guid> InputDocumentIds,
     Guid? StatementId = null,
-    string? AdvisoryKey = null);
+    string? AdvisoryKey = null,
+    DsseProvenance? Provenance = null,
+    TrustInfo? Trust = null);
 
 /// <summary>
 /// Input payload for appending an advisory conflict entry aligned with an advisory statement snapshot.
 /// </summary>
 public sealed record AdvisoryConflictInput(
     string VulnerabilityKey,
     JsonDocument Details,
     DateTimeOffset AsOf,
     IReadOnlyCollection<Guid> StatementIds,
-    Guid? ConflictId = null);
+    Guid? ConflictId = null,
+    DsseProvenance? Provenance = null,
+    TrustInfo? Trust = null);
 
 /// <summary>
 /// Append request encapsulating statement and conflict batches sharing a single persistence window.
@@ -70,24 +75,28 @@ public sealed record AdvisoryConflictSnapshot(
 /// <summary>
 /// Persistence-facing representation of an advisory statement used by repositories.
 /// </summary>
 public sealed record AdvisoryStatementEntry(
     Guid StatementId,
     string VulnerabilityKey,
     string AdvisoryKey,
     string CanonicalJson,
     ImmutableArray<byte> StatementHash,
     DateTimeOffset AsOf,
     DateTimeOffset RecordedAt,
-    ImmutableArray<Guid> InputDocumentIds);
+    ImmutableArray<Guid> InputDocumentIds,
+    DsseProvenance? Provenance = null,
+    TrustInfo? Trust = null);
 
 /// <summary>
 /// Persistence-facing representation of an advisory conflict used by repositories.
 /// </summary>
 public sealed record AdvisoryConflictEntry(
     Guid ConflictId,
     string VulnerabilityKey,
     string CanonicalJson,
     ImmutableArray<byte> ConflictHash,
     DateTimeOffset AsOf,
     DateTimeOffset RecordedAt,
-    ImmutableArray<Guid> StatementIds);
+    ImmutableArray<Guid> StatementIds,
+    DsseProvenance? Provenance = null,
+    TrustInfo? Trust = null);

View File

@@ -6,10 +6,11 @@ using System.Linq;
 using System.Security.Cryptography;
 using System.Text;
 using System.Text.Encodings.Web;
 using System.Text.Json;
 using System.Threading;
 using System.Threading.Tasks;
 using StellaOps.Concelier.Models;
+using StellaOps.Provenance.Mongo;
 
 namespace StellaOps.Concelier.Core.Events;
@@ -78,14 +79,26 @@ public sealed class AdvisoryEventLog : IAdvisoryEventLog
             .Select(ToStatementSnapshot)
             .ToImmutableArray();
 
         var conflictSnapshots = conflicts
             .OrderByDescending(static entry => entry.AsOf)
             .ThenByDescending(static entry => entry.RecordedAt)
             .Select(ToConflictSnapshot)
             .ToImmutableArray();
 
         return new AdvisoryReplay(normalizedKey, asOf, statementSnapshots, conflictSnapshots);
     }
 
+    public ValueTask AttachStatementProvenanceAsync(
+        Guid statementId,
+        DsseProvenance provenance,
+        TrustInfo trust,
+        CancellationToken cancellationToken)
+    {
+        ArgumentNullException.ThrowIfNull(provenance);
+        ArgumentNullException.ThrowIfNull(trust);
+        return _repository.AttachStatementProvenanceAsync(statementId, provenance, trust, cancellationToken);
+    }
+
     private static AdvisoryStatementSnapshot ToStatementSnapshot(AdvisoryStatementEntry entry)
     {
@@ -134,10 +147,10 @@ public sealed class AdvisoryEventLog : IAdvisoryEventLog
         ArgumentNullException.ThrowIfNull(statement.Advisory);
 
         var vulnerabilityKey = NormalizeKey(statement.VulnerabilityKey, nameof(statement.VulnerabilityKey));
         var advisory = CanonicalJsonSerializer.Normalize(statement.Advisory);
         var advisoryKey = string.IsNullOrWhiteSpace(statement.AdvisoryKey)
             ? advisory.AdvisoryKey
             : statement.AdvisoryKey.Trim();
 
         if (string.IsNullOrWhiteSpace(advisoryKey))
         {
@@ -149,30 +162,33 @@ public sealed class AdvisoryEventLog : IAdvisoryEventLog
             throw new ArgumentException("Advisory key in payload must match provided advisory key.", nameof(statement));
         }
 
         var canonicalJson = CanonicalJsonSerializer.Serialize(advisory);
         var hashBytes = ComputeHash(canonicalJson);
         var asOf = statement.AsOf.ToUniversalTime();
         var inputDocuments = statement.InputDocumentIds?.Count > 0
             ? statement.InputDocumentIds
                 .Where(static id => id != Guid.Empty)
                 .Distinct()
                 .OrderBy(static id => id)
                 .ToImmutableArray()
             : ImmutableArray<Guid>.Empty;
 
+        var (provenance, trust) = ResolveStatementMetadata(advisory, statement.Provenance, statement.Trust);
+
         entries.Add(new AdvisoryStatementEntry(
             statement.StatementId ?? Guid.NewGuid(),
             vulnerabilityKey,
             advisoryKey,
             canonicalJson,
             hashBytes,
             asOf,
             recordedAt,
-            inputDocuments));
+            inputDocuments,
+            provenance,
+            trust));
     }
 
     return entries;
 }
 
 private static IReadOnlyCollection<AdvisoryConflictEntry> BuildConflictEntries(
     IReadOnlyCollection<AdvisoryConflictInput> conflicts,
@@ -202,23 +218,44 @@ public sealed class AdvisoryEventLog : IAdvisoryEventLog
                 .ToImmutableArray()
             : ImmutableArray<Guid>.Empty;
 
         entries.Add(new AdvisoryConflictEntry(
             conflict.ConflictId ?? Guid.NewGuid(),
             vulnerabilityKey,
             canonicalJson,
             hashBytes,
             asOf,
             recordedAt,
-            statementIds));
+            statementIds,
+            conflict.Provenance,
+            conflict.Trust));
     }
 
     return entries;
 }
 
+private static (DsseProvenance?, TrustInfo?) ResolveStatementMetadata(
+    Advisory advisory,
+    DsseProvenance? suppliedProvenance,
+    TrustInfo? suppliedTrust)
+{
+    if (suppliedProvenance is not null && suppliedTrust is not null)
+    {
+        return (suppliedProvenance, suppliedTrust);
+    }
+
+    if (AdvisoryDsseMetadataResolver.TryResolve(advisory, out var resolvedProvenance, out var resolvedTrust))
+    {
+        suppliedProvenance ??= resolvedProvenance;
+        suppliedTrust ??= resolvedTrust;
+    }
+
+    return (suppliedProvenance, suppliedTrust);
+}
+
 private static string NormalizeKey(string value, string parameterName)
 {
     if (string.IsNullOrWhiteSpace(value))
     {
         throw new ArgumentException("Value must be provided.", parameterName);
     }
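
The precedence in `ResolveStatementMetadata` — caller-supplied metadata wins, and anything the caller omitted is back-filled from the advisory's own provenance entries — can be sketched in Python (the function and parameter names here are illustrative, not part of the codebase):

```python
def resolve_statement_metadata(resolved, supplied_provenance, supplied_trust):
    """Mirror of ResolveStatementMetadata's precedence rules.

    `resolved` stands in for AdvisoryDsseMetadataResolver.TryResolve's
    output: a (provenance, trust) tuple, or None when no DSSE entry parses.
    """
    # Fully supplied metadata short-circuits the resolver entirely.
    if supplied_provenance is not None and supplied_trust is not None:
        return supplied_provenance, supplied_trust
    resolved_provenance, resolved_trust = resolved if resolved else (None, None)
    # ??= semantics: only fill the slots the caller left empty.
    return (supplied_provenance if supplied_provenance is not None else resolved_provenance,
            supplied_trust if supplied_trust is not None else resolved_trust)

print(resolve_statement_metadata(("p", "t"), None, "caller-trust"))
# → ('p', 'caller-trust')
```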

View File

@@ -1,15 +1,22 @@
 using System;
 using System.Threading;
 using System.Threading.Tasks;
+using StellaOps.Provenance.Mongo;
 
 namespace StellaOps.Concelier.Core.Events;
 
 /// <summary>
 /// High-level API for recording and replaying advisory statements with deterministic as-of queries.
 /// </summary>
 public interface IAdvisoryEventLog
 {
     ValueTask AppendAsync(AdvisoryEventAppendRequest request, CancellationToken cancellationToken);
 
     ValueTask<AdvisoryReplay> ReplayAsync(string vulnerabilityKey, DateTimeOffset? asOf, CancellationToken cancellationToken);
+
+    ValueTask AttachStatementProvenanceAsync(
+        Guid statementId,
+        DsseProvenance provenance,
+        TrustInfo trust,
+        CancellationToken cancellationToken);
 }

View File

@@ -2,7 +2,8 @@ using System;
 using System.Collections.Generic;
 using System.Collections.Immutable;
 using System.Threading;
 using System.Threading.Tasks;
+using StellaOps.Provenance.Mongo;
 
 namespace StellaOps.Concelier.Core.Events;
@@ -19,13 +20,19 @@ public interface IAdvisoryEventRepository
         IReadOnlyCollection<AdvisoryConflictEntry> conflicts,
         CancellationToken cancellationToken);
 
     ValueTask<IReadOnlyList<AdvisoryStatementEntry>> GetStatementsAsync(
         string vulnerabilityKey,
         DateTimeOffset? asOf,
         CancellationToken cancellationToken);
 
     ValueTask<IReadOnlyList<AdvisoryConflictEntry>> GetConflictsAsync(
         string vulnerabilityKey,
         DateTimeOffset? asOf,
         CancellationToken cancellationToken);
+
+    ValueTask AttachStatementProvenanceAsync(
+        Guid statementId,
+        DsseProvenance provenance,
+        TrustInfo trust,
+        CancellationToken cancellationToken);
 }

View File

@@ -19,6 +19,7 @@
     <ProjectReference Include="..\StellaOps.Concelier.RawModels\StellaOps.Concelier.RawModels.csproj" />
     <ProjectReference Include="..\StellaOps.Concelier.Normalization\StellaOps.Concelier.Normalization.csproj" />
     <ProjectReference Include="..\..\..\__Libraries\StellaOps.Ingestion.Telemetry\StellaOps.Ingestion.Telemetry.csproj" />
+    <ProjectReference Include="..\..\..\__Libraries\StellaOps.Provenance.Mongo\StellaOps.Provenance.Mongo.csproj" />
     <ProjectReference Include="../../../__Libraries/StellaOps.Plugin/StellaOps.Plugin.csproj" />
     <ProjectReference Include="../../../Aoc/__Libraries/StellaOps.Aoc/StellaOps.Aoc.csproj" />
   </ItemGroup>

View File

@@ -6,13 +6,14 @@ using System.Linq;
 using System.Threading;
 using System.Threading.Tasks;
 using Microsoft.Extensions.Logging;
 using StellaOps.Concelier.Core;
 using StellaOps.Concelier.Core.Events;
 using StellaOps.Concelier.Models;
 using StellaOps.Concelier.Storage.Mongo.Advisories;
 using StellaOps.Concelier.Storage.Mongo.Aliases;
 using StellaOps.Concelier.Storage.Mongo.MergeEvents;
 using System.Text.Json;
+using StellaOps.Provenance.Mongo;
 
 namespace StellaOps.Concelier.Merge.Services;
@@ -139,39 +140,45 @@ public sealed class AdvisoryMergeService
         return new AdvisoryMergeResult(seedAdvisoryKey, canonicalKey, component, inputs, before, merged, conflictSummaries);
     }
 
     private async Task<IReadOnlyList<MergeConflictSummary>> AppendEventLogAsync(
         string vulnerabilityKey,
         IReadOnlyList<Advisory> inputs,
         Advisory merged,
         IReadOnlyList<MergeConflictDetail> conflicts,
         CancellationToken cancellationToken)
     {
         var recordedAt = _timeProvider.GetUtcNow();
         var statements = new List<AdvisoryStatementInput>(inputs.Count + 1);
         var statementIds = new Dictionary<Advisory, Guid>(ReferenceEqualityComparer.Instance);
 
         foreach (var advisory in inputs)
         {
             var statementId = Guid.NewGuid();
             statementIds[advisory] = statementId;
+            var (provenance, trust) = ResolveDsseMetadata(advisory);
             statements.Add(new AdvisoryStatementInput(
                 vulnerabilityKey,
                 advisory,
                 DetermineAsOf(advisory, recordedAt),
                 InputDocumentIds: Array.Empty<Guid>(),
                 StatementId: statementId,
-                AdvisoryKey: advisory.AdvisoryKey));
+                AdvisoryKey: advisory.AdvisoryKey,
+                Provenance: provenance,
+                Trust: trust));
         }
 
         var canonicalStatementId = Guid.NewGuid();
         statementIds[merged] = canonicalStatementId;
+        var (canonicalProvenance, canonicalTrust) = ResolveDsseMetadata(merged);
         statements.Add(new AdvisoryStatementInput(
             vulnerabilityKey,
             merged,
             recordedAt,
             InputDocumentIds: Array.Empty<Guid>(),
             StatementId: canonicalStatementId,
-            AdvisoryKey: merged.AdvisoryKey));
+            AdvisoryKey: merged.AdvisoryKey,
+            Provenance: canonicalProvenance,
+            Trust: canonicalTrust));
 
         var conflictMaterialization = BuildConflictInputs(conflicts, vulnerabilityKey, statementIds, canonicalStatementId, recordedAt);
         var conflictInputs = conflictMaterialization.Inputs;
@@ -198,15 +205,22 @@ public sealed class AdvisoryMergeService
             }
         }
 
         return conflictSummaries.Count == 0
             ? Array.Empty<MergeConflictSummary>()
             : conflictSummaries.ToArray();
     }
 
+    private static (DsseProvenance?, TrustInfo?) ResolveDsseMetadata(Advisory advisory)
+    {
+        return AdvisoryDsseMetadataResolver.TryResolve(advisory, out var dsse, out var trust)
+            ? (dsse, trust)
+            : (null, null);
+    }
+
     private static DateTimeOffset DetermineAsOf(Advisory advisory, DateTimeOffset fallback)
     {
         return (advisory.Modified ?? advisory.Published ?? fallback).ToUniversalTime();
     }
 
     private static ConflictMaterialization BuildConflictInputs(
         IReadOnlyList<MergeConflictDetail> conflicts,

View File

@@ -27,31 +27,43 @@ public sealed class AdvisoryConflictDocument
     [BsonElement("statementIds")]
     public List<string> StatementIds { get; set; } = new();
 
     [BsonElement("details")]
     public BsonDocument Details { get; set; } = new();
+
+    [BsonElement("provenance")]
+    [BsonIgnoreIfNull]
+    public BsonDocument? Provenance { get; set; }
+
+    [BsonElement("trust")]
+    [BsonIgnoreIfNull]
+    public BsonDocument? Trust { get; set; }
 }
 
 internal static class AdvisoryConflictDocumentExtensions
 {
     public static AdvisoryConflictDocument FromRecord(AdvisoryConflictRecord record)
         => new()
         {
             Id = record.Id.ToString(),
             VulnerabilityKey = record.VulnerabilityKey,
             ConflictHash = record.ConflictHash,
             AsOf = record.AsOf.UtcDateTime,
             RecordedAt = record.RecordedAt.UtcDateTime,
             StatementIds = record.StatementIds.Select(static id => id.ToString()).ToList(),
             Details = (BsonDocument)record.Details.DeepClone(),
+            Provenance = record.Provenance is null ? null : (BsonDocument)record.Provenance.DeepClone(),
+            Trust = record.Trust is null ? null : (BsonDocument)record.Trust.DeepClone(),
         };
 
     public static AdvisoryConflictRecord ToRecord(this AdvisoryConflictDocument document)
         => new(
             Guid.Parse(document.Id),
             document.VulnerabilityKey,
             document.ConflictHash,
             DateTime.SpecifyKind(document.AsOf, DateTimeKind.Utc),
             DateTime.SpecifyKind(document.RecordedAt, DateTimeKind.Utc),
             document.StatementIds.Select(static value => Guid.Parse(value)).ToList(),
-            (BsonDocument)document.Details.DeepClone());
+            (BsonDocument)document.Details.DeepClone(),
+            document.Provenance is null ? null : (BsonDocument)document.Provenance.DeepClone(),
+            document.Trust is null ? null : (BsonDocument)document.Trust.DeepClone());
 }

View File

@@ -4,11 +4,13 @@ using MongoDB.Bson;
 namespace StellaOps.Concelier.Storage.Mongo.Conflicts;
 
 public sealed record AdvisoryConflictRecord(
     Guid Id,
     string VulnerabilityKey,
     byte[] ConflictHash,
     DateTimeOffset AsOf,
     DateTimeOffset RecordedAt,
     IReadOnlyList<Guid> StatementIds,
-    BsonDocument Details);
+    BsonDocument Details,
+    BsonDocument? Provenance = null,
+    BsonDocument? Trust = null);

View File

@@ -1,224 +1,425 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.IO;
using System.Linq;
using System.Text;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Storage.Mongo.Conflicts;
using StellaOps.Concelier.Storage.Mongo.Statements;
using StellaOps.Provenance.Mongo;

namespace StellaOps.Concelier.Storage.Mongo.Events;

public sealed class MongoAdvisoryEventRepository : IAdvisoryEventRepository
{
    private readonly IAdvisoryStatementStore _statementStore;
    private readonly IAdvisoryConflictStore _conflictStore;

    public MongoAdvisoryEventRepository(
        IAdvisoryStatementStore statementStore,
        IAdvisoryConflictStore conflictStore)
    {
        _statementStore = statementStore ?? throw new ArgumentNullException(nameof(statementStore));
        _conflictStore = conflictStore ?? throw new ArgumentNullException(nameof(conflictStore));
    }

    public async ValueTask InsertStatementsAsync(
        IReadOnlyCollection<AdvisoryStatementEntry> statements,
        CancellationToken cancellationToken)
    {
        if (statements is null)
        {
            throw new ArgumentNullException(nameof(statements));
        }

        if (statements.Count == 0)
        {
            return;
        }

        var records = statements
            .Select(static entry =>
            {
                var payload = BsonDocument.Parse(entry.CanonicalJson);
                var (provenanceDoc, trustDoc) = BuildMetadata(entry.Provenance, entry.Trust);

                return new AdvisoryStatementRecord(
                    entry.StatementId,
                    entry.VulnerabilityKey,
                    entry.AdvisoryKey,
                    entry.StatementHash.ToArray(),
                    entry.AsOf,
                    entry.RecordedAt,
                    payload,
                    entry.InputDocumentIds.ToArray(),
                    provenanceDoc,
                    trustDoc);
            })
            .ToList();

        await _statementStore.InsertAsync(records, cancellationToken).ConfigureAwait(false);
    }

    public async ValueTask InsertConflictsAsync(
        IReadOnlyCollection<AdvisoryConflictEntry> conflicts,
        CancellationToken cancellationToken)
    {
        if (conflicts is null)
        {
            throw new ArgumentNullException(nameof(conflicts));
        }

        if (conflicts.Count == 0)
        {
            return;
        }

        var records = conflicts
            .Select(static entry =>
            {
                var payload = BsonDocument.Parse(entry.CanonicalJson);
                var (provenanceDoc, trustDoc) = BuildMetadata(entry.Provenance, entry.Trust);

                return new AdvisoryConflictRecord(
                    entry.ConflictId,
                    entry.VulnerabilityKey,
                    entry.ConflictHash.ToArray(),
                    entry.AsOf,
                    entry.RecordedAt,
                    entry.StatementIds.ToArray(),
                    payload,
                    provenanceDoc,
                    trustDoc);
            })
            .ToList();

        await _conflictStore.InsertAsync(records, cancellationToken).ConfigureAwait(false);
    }

    public async ValueTask<IReadOnlyList<AdvisoryStatementEntry>> GetStatementsAsync(
        string vulnerabilityKey,
        DateTimeOffset? asOf,
        CancellationToken cancellationToken)
    {
        var records = await _statementStore
            .GetStatementsAsync(vulnerabilityKey, asOf, cancellationToken)
            .ConfigureAwait(false);

        if (records.Count == 0)
        {
            return Array.Empty<AdvisoryStatementEntry>();
        }

        var entries = records
            .Select(static record =>
            {
                var advisory = CanonicalJsonSerializer.Deserialize<Advisory>(record.Payload.ToJson());
                var canonicalJson = CanonicalJsonSerializer.Serialize(advisory);
                var (provenance, trust) = ParseMetadata(record.Provenance, record.Trust);

                return new AdvisoryStatementEntry(
                    record.Id,
                    record.VulnerabilityKey,
                    record.AdvisoryKey,
                    canonicalJson,
                    record.StatementHash.ToImmutableArray(),
                    record.AsOf,
                    record.RecordedAt,
                    record.InputDocumentIds.ToImmutableArray(),
                    provenance,
                    trust);
            })
            .ToList();

        return entries;
    }

    public async ValueTask<IReadOnlyList<AdvisoryConflictEntry>> GetConflictsAsync(
        string vulnerabilityKey,
        DateTimeOffset? asOf,
        CancellationToken cancellationToken)
    {
        var records = await _conflictStore
            .GetConflictsAsync(vulnerabilityKey, asOf, cancellationToken)
            .ConfigureAwait(false);

        if (records.Count == 0)
        {
            return Array.Empty<AdvisoryConflictEntry>();
        }

        var entries = records
            .Select(static record =>
            {
                var canonicalJson = Canonicalize(record.Details);
                var (provenance, trust) = ParseMetadata(record.Provenance, record.Trust);

                return new AdvisoryConflictEntry(
                    record.Id,
                    record.VulnerabilityKey,
                    canonicalJson,
                    record.ConflictHash.ToImmutableArray(),
                    record.AsOf,
                    record.RecordedAt,
                    record.StatementIds.ToImmutableArray(),
                    provenance,
                    trust);
            })
            .ToList();

        return entries;
    }

    public async ValueTask AttachStatementProvenanceAsync(
        Guid statementId,
        DsseProvenance dsse,
        TrustInfo trust,
        CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(dsse);
        ArgumentNullException.ThrowIfNull(trust);

        var (provenanceDoc, trustDoc) = BuildMetadata(dsse, trust);

        if (provenanceDoc is null || trustDoc is null)
        {
            throw new InvalidOperationException("Failed to build provenance documents.");
        }

        await _statementStore
            .UpdateProvenanceAsync(statementId, provenanceDoc, trustDoc, cancellationToken)
            .ConfigureAwait(false);
    }

    private static readonly JsonWriterOptions CanonicalWriterOptions = new()
    {
        Encoder = JavaScriptEncoder.UnsafeRelaxedJsonEscaping,
        Indented = false,
        SkipValidation = false,
    };

    private static string Canonicalize(BsonDocument document)
    {
        using var json = JsonDocument.Parse(document.ToJson());
        using var stream = new MemoryStream();
        using (var writer = new Utf8JsonWriter(stream, CanonicalWriterOptions))
        {
            WriteCanonical(json.RootElement, writer);
        }

        return Encoding.UTF8.GetString(stream.ToArray());
    }

    private static (BsonDocument? Provenance, BsonDocument? Trust) BuildMetadata(DsseProvenance? provenance, TrustInfo? trust)
    {
        if (provenance is null || trust is null)
        {
            return (null, null);
        }

        var metadata = new BsonDocument();
        metadata.AttachDsseProvenance(provenance, trust);

        var provenanceDoc = metadata.TryGetValue("provenance", out var provenanceValue)
            ? (BsonDocument)provenanceValue.DeepClone()
            : null;
        var trustDoc = metadata.TryGetValue("trust", out var trustValue)
            ? (BsonDocument)trustValue.DeepClone()
            : null;

        return (provenanceDoc, trustDoc);
    }

    private static (DsseProvenance?, TrustInfo?) ParseMetadata(BsonDocument? provenanceDoc, BsonDocument? trustDoc)
    {
        DsseProvenance? dsse = null;
        if (provenanceDoc is not null &&
            provenanceDoc.TryGetValue("dsse", out var dsseValue) &&
            dsseValue is BsonDocument dsseBody)
        {
            if (TryGetString(dsseBody, "envelopeDigest", out var envelopeDigest) &&
                TryGetString(dsseBody, "payloadType", out var payloadType) &&
                dsseBody.TryGetValue("key", out var keyValue) &&
                keyValue is BsonDocument keyDoc &&
                TryGetString(keyDoc, "keyId", out var keyId))
            {
                var keyInfo = new DsseKeyInfo
                {
                    KeyId = keyId,
                    Issuer = GetOptionalString(keyDoc, "issuer"),
                    Algo = GetOptionalString(keyDoc, "algo"),
                };

                dsse = new DsseProvenance
                {
                    EnvelopeDigest = envelopeDigest,
                    PayloadType = payloadType,
                    Key = keyInfo,
                    Rekor = ParseRekor(dsseBody),
                    Chain = ParseChain(dsseBody)
                };
            }
        }

        TrustInfo? trust = null;
        if (trustDoc is not null)
        {
            trust = new TrustInfo
            {
                Verified = trustDoc.TryGetValue("verified", out var verifiedValue) && verifiedValue.ToBoolean(),
                Verifier = GetOptionalString(trustDoc, "verifier"),
                Witnesses = trustDoc.TryGetValue("witnesses", out var witnessValue) && witnessValue.IsInt32 ? witnessValue.AsInt32 : (int?)null,
                PolicyScore = trustDoc.TryGetValue("policyScore", out var scoreValue) && scoreValue.IsNumeric ? scoreValue.AsDouble : (double?)null
            };
        }

        return (dsse, trust);
    }

    private static DsseRekorInfo? ParseRekor(BsonDocument dsseBody)
    {
        if (!dsseBody.TryGetValue("rekor", out var rekorValue) || !rekorValue.IsBsonDocument)
        {
            return null;
        }

        var rekorDoc = rekorValue.AsBsonDocument;
        if (!TryGetInt64(rekorDoc, "logIndex", out var logIndex))
        {
            return null;
        }

        return new DsseRekorInfo
        {
            LogIndex = logIndex,
            Uuid = GetOptionalString(rekorDoc, "uuid") ?? string.Empty,
            IntegratedTime = TryGetInt64(rekorDoc, "integratedTime", out var integratedTime) ? integratedTime : null,
            MirrorSeq = TryGetInt64(rekorDoc, "mirrorSeq", out var mirrorSeq) ? mirrorSeq : null
        };
    }

    private static IReadOnlyCollection<DsseChainLink>? ParseChain(BsonDocument dsseBody)
    {
        if (!dsseBody.TryGetValue("chain", out var chainValue) || !chainValue.IsBsonArray)
        {
            return null;
        }

        var links = new List<DsseChainLink>();
        foreach (var element in chainValue.AsBsonArray)
        {
            if (!element.IsBsonDocument)
{
continue;
}
var linkDoc = element.AsBsonDocument;
if (!TryGetString(linkDoc, "type", out var type) ||
!TryGetString(linkDoc, "id", out var id) ||
!TryGetString(linkDoc, "digest", out var digest))
{
continue;
}
links.Add(new DsseChainLink
{
Type = type,
Id = id,
Digest = digest
});
}
return links.Count == 0 ? null : links;
}
private static bool TryGetString(BsonDocument document, string name, out string value)
{
if (document.TryGetValue(name, out var bsonValue) && bsonValue.IsString)
{
value = bsonValue.AsString;
return true;
}
value = string.Empty;
return false;
}
private static string? GetOptionalString(BsonDocument document, string name)
=> document.TryGetValue(name, out var bsonValue) && bsonValue.IsString ? bsonValue.AsString : null;
private static bool TryGetInt64(BsonDocument document, string name, out long value)
{
if (document.TryGetValue(name, out var bsonValue))
{
if (bsonValue.IsInt64)
{
value = bsonValue.AsInt64;
return true;
}
if (bsonValue.IsInt32)
{
value = bsonValue.AsInt32;
return true;
}
if (bsonValue.IsString && long.TryParse(bsonValue.AsString, out var parsed))
{
value = parsed;
return true;
}
}
value = 0;
return false;
}
private static void WriteCanonical(JsonElement element, Utf8JsonWriter writer)
{
switch (element.ValueKind)
{
case JsonValueKind.Object:
writer.WriteStartObject();
foreach (var property in element.EnumerateObject().OrderBy(static p => p.Name, StringComparer.Ordinal))
{
writer.WritePropertyName(property.Name);
WriteCanonical(property.Value, writer);
}
writer.WriteEndObject();
break;
case JsonValueKind.Array:
writer.WriteStartArray();
foreach (var item in element.EnumerateArray())
{
WriteCanonical(item, writer);
}
writer.WriteEndArray();
break;
case JsonValueKind.String:
writer.WriteStringValue(element.GetString());
break;
case JsonValueKind.Number:
writer.WriteRawValue(element.GetRawText());
break;
case JsonValueKind.True:
writer.WriteBooleanValue(true);
break;
case JsonValueKind.False:
writer.WriteBooleanValue(false);
break;
case JsonValueKind.Null:
writer.WriteNullValue();
break;
default:
writer.WriteRawValue(element.GetRawText());
break;
}
}
}
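The `Canonicalize`/`WriteCanonical` pair above re-serializes a BSON document with ordinally sorted object keys and no insignificant whitespace, so byte-identical output (and therefore a stable hash) is produced for logically equal documents. A rough standalone sketch of the same idea in Python — illustrative only, the production path stays in `Utf8JsonWriter`, and escaping details differ:

```python
import json

def canonicalize(value) -> str:
    """Serialize with ordinally sorted keys and compact separators,
    mirroring the WriteCanonical recursion above."""
    # sort_keys orders by Unicode code point, which matches
    # StringComparer.Ordinal for the ASCII property names used here.
    return json.dumps(value, sort_keys=True, separators=(",", ":"))

doc = {"trust": {"verified": True}, "provenance": {"dsse": {"logIndex": 42}}}
print(canonicalize(doc))
# → {"provenance":{"dsse":{"logIndex":42}},"trust":{"verified":true}}
```

Because key order no longer depends on insertion order, two writers that attach `provenance` and `trust` in different sequences still hash to the same digest.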


@@ -28,7 +28,15 @@ public sealed class AdvisoryStatementDocument
public DateTime RecordedAt { get; set; }

[BsonElement("payload")]
public BsonDocument Payload { get; set; } = new();

[BsonElement("provenance")]
[BsonIgnoreIfNull]
public BsonDocument? Provenance { get; set; }

[BsonElement("trust")]
[BsonIgnoreIfNull]
public BsonDocument? Trust { get; set; }

[BsonElement("inputDocuments")]
public List<string> InputDocuments { get; set; } = new();
@@ -37,26 +45,30 @@ public sealed class AdvisoryStatementDocument
internal static class AdvisoryStatementDocumentExtensions
{
public static AdvisoryStatementDocument FromRecord(AdvisoryStatementRecord record)
=> new()
{
Id = record.Id.ToString(),
VulnerabilityKey = record.VulnerabilityKey,
AdvisoryKey = record.AdvisoryKey,
StatementHash = record.StatementHash,
AsOf = record.AsOf.UtcDateTime,
RecordedAt = record.RecordedAt.UtcDateTime,
Payload = (BsonDocument)record.Payload.DeepClone(),
Provenance = record.Provenance is null ? null : (BsonDocument)record.Provenance.DeepClone(),
Trust = record.Trust is null ? null : (BsonDocument)record.Trust.DeepClone(),
InputDocuments = record.InputDocumentIds.Select(static id => id.ToString()).ToList(),
};

public static AdvisoryStatementRecord ToRecord(this AdvisoryStatementDocument document)
=> new(
Guid.Parse(document.Id),
document.VulnerabilityKey,
document.AdvisoryKey,
document.StatementHash,
DateTime.SpecifyKind(document.AsOf, DateTimeKind.Utc),
DateTime.SpecifyKind(document.RecordedAt, DateTimeKind.Utc),
(BsonDocument)document.Payload.DeepClone(),
document.InputDocuments.Select(static value => Guid.Parse(value)).ToList(),
document.Provenance is null ? null : (BsonDocument)document.Provenance.DeepClone(),
document.Trust is null ? null : (BsonDocument)document.Trust.DeepClone());
}


@@ -4,12 +4,14 @@ using MongoDB.Bson;
namespace StellaOps.Concelier.Storage.Mongo.Statements;

public sealed record AdvisoryStatementRecord(
Guid Id,
string VulnerabilityKey,
string AdvisoryKey,
byte[] StatementHash,
DateTimeOffset AsOf,
DateTimeOffset RecordedAt,
BsonDocument Payload,
IReadOnlyList<Guid> InputDocumentIds,
BsonDocument? Provenance = null,
BsonDocument? Trust = null);


@@ -3,23 +3,31 @@ using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

namespace StellaOps.Concelier.Storage.Mongo.Statements;

public interface IAdvisoryStatementStore
{
ValueTask InsertAsync(
IReadOnlyCollection<AdvisoryStatementRecord> statements,
CancellationToken cancellationToken,
IClientSessionHandle? session = null);

ValueTask<IReadOnlyList<AdvisoryStatementRecord>> GetStatementsAsync(
string vulnerabilityKey,
DateTimeOffset? asOf,
CancellationToken cancellationToken,
IClientSessionHandle? session = null);

ValueTask UpdateProvenanceAsync(
Guid statementId,
BsonDocument provenance,
BsonDocument trust,
CancellationToken cancellationToken,
IClientSessionHandle? session = null);
}

public sealed class AdvisoryStatementStore : IAdvisoryStatementStore
{
@@ -63,13 +71,13 @@ public sealed class AdvisoryStatementStore : IAdvisoryStatementStore
}
}

public async ValueTask<IReadOnlyList<AdvisoryStatementRecord>> GetStatementsAsync(
string vulnerabilityKey,
DateTimeOffset? asOf,
CancellationToken cancellationToken,
IClientSessionHandle? session = null)
{
ArgumentException.ThrowIfNullOrWhiteSpace(vulnerabilityKey);

var filter = Builders<AdvisoryStatementDocument>.Filter.Eq(document => document.VulnerabilityKey, vulnerabilityKey);
@@ -88,6 +96,31 @@ public sealed class AdvisoryStatementStore : IAdvisoryStatementStore
.ToListAsync(cancellationToken)
.ConfigureAwait(false);

return documents.Select(static document => document.ToRecord()).ToList();
}

public async ValueTask UpdateProvenanceAsync(
Guid statementId,
BsonDocument provenance,
BsonDocument trust,
CancellationToken cancellationToken,
IClientSessionHandle? session = null)
{
ArgumentNullException.ThrowIfNull(provenance);
ArgumentNullException.ThrowIfNull(trust);

var filter = Builders<AdvisoryStatementDocument>.Filter.Eq(document => document.Id, statementId.ToString());
var update = Builders<AdvisoryStatementDocument>.Update
.Set(document => document.Provenance, provenance)
.Set(document => document.Trust, trust);

var result = session is null
? await _collection.UpdateOneAsync(filter, update, cancellationToken: cancellationToken).ConfigureAwait(false)
: await _collection.UpdateOneAsync(session, filter, update, cancellationToken: cancellationToken).ConfigureAwait(false);

if (result.MatchedCount == 0)
{
throw new InvalidOperationException($"Statement {statementId} not found.");
}
}
}
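`UpdateProvenanceAsync` follows a common backfill pattern: a targeted `$set` on the statement document, then a matched-count check so a missing statement surfaces as an error instead of a silent no-op. A minimal in-memory sketch of that contract (the `InMemoryStatementStore` class is hypothetical, for illustration only):

```python
class InMemoryStatementStore:
    """Toy stand-in for the Mongo-backed store; holds documents by _id."""

    def __init__(self):
        self._docs = {}

    def insert(self, statement_id: str, doc: dict) -> None:
        self._docs[statement_id] = dict(doc)

    def update_provenance(self, statement_id: str, provenance: dict, trust: dict) -> None:
        # Mirrors the $set update plus the MatchedCount == 0 guard above.
        if statement_id not in self._docs:
            raise LookupError(f"Statement {statement_id} not found.")
        self._docs[statement_id]["provenance"] = provenance
        self._docs[statement_id]["trust"] = trust

store = InMemoryStatementStore()
store.insert("stmt-1", {"payload": {}})
store.update_provenance("stmt-1", {"dsse": {"envelopeDigest": "sha256:abc"}}, {"verified": True})
print(store._docs["stmt-1"]["trust"])  # {'verified': True}
```

Raising on a missing match is the design choice worth noting: backfill jobs should fail loudly when they target a statement that was never inserted.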


@@ -15,5 +15,6 @@
<ProjectReference Include="..\StellaOps.Concelier.Core\StellaOps.Concelier.Core.csproj" />
<ProjectReference Include="..\StellaOps.Concelier.Models\StellaOps.Concelier.Models.csproj" />
<ProjectReference Include="..\..\..\__Libraries\StellaOps.Ingestion.Telemetry\StellaOps.Ingestion.Telemetry.csproj" />
<ProjectReference Include="..\..\..\__Libraries\StellaOps.Provenance.Mongo\StellaOps.Provenance.Mongo.csproj" />
</ItemGroup>
</Project>


@@ -4,21 +4,22 @@ using System.Collections.Immutable;
using System.Linq;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models;
using StellaOps.Provenance.Mongo;
using Xunit;

namespace StellaOps.Concelier.Core.Tests.Events;

public sealed class AdvisoryEventLogTests
{
[Fact]
public async Task AppendAsync_PersistsCanonicalStatementEntries()
{
var repository = new FakeRepository();
var timeProvider = new FixedTimeProvider(DateTimeOffset.UtcNow);
var log = new AdvisoryEventLog(repository, timeProvider);

var advisory = new Advisory(
"adv-1",
@@ -48,9 +49,54 @@ public sealed class AdvisoryEventLogTests
Assert.Equal("cve-2025-0001", entry.VulnerabilityKey);
Assert.Equal("adv-1", entry.AdvisoryKey);
Assert.Equal(DateTimeOffset.Parse("2025-10-03T00:00:00Z"), entry.AsOf);
Assert.Contains("\"advisoryKey\":\"adv-1\"", entry.CanonicalJson);
Assert.NotEqual(ImmutableArray<byte>.Empty, entry.StatementHash);
}

[Fact]
public async Task AppendAsync_AttachesDsseMetadataFromAdvisoryProvenance()
{
var repository = new FakeRepository();
var timeProvider = new FixedTimeProvider(DateTimeOffset.Parse("2025-11-11T00:00:00Z"));
var log = new AdvisoryEventLog(repository, timeProvider);

var dsseMetadata = new AdvisoryProvenance(
source: "attestor",
kind: "dsse",
value: BuildDsseMetadataJson(),
recordedAt: DateTimeOffset.Parse("2025-11-10T00:00:00Z"));

var advisory = new Advisory(
"adv-2",
"DSSE-backed",
summary: null,
language: "en",
published: DateTimeOffset.Parse("2025-11-09T00:00:00Z"),
modified: DateTimeOffset.Parse("2025-11-10T00:00:00Z"),
severity: "medium",
exploitKnown: false,
aliases: new[] { "CVE-2025-7777" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { dsseMetadata });

var statementInput = new AdvisoryStatementInput(
VulnerabilityKey: "CVE-2025-7777",
Advisory: advisory,
AsOf: DateTimeOffset.Parse("2025-11-10T12:00:00Z"),
InputDocumentIds: Array.Empty<Guid>());

await log.AppendAsync(new AdvisoryEventAppendRequest(new[] { statementInput }), CancellationToken.None);

var entry = Assert.Single(repository.InsertedStatements);
Assert.NotNull(entry.Provenance);
Assert.NotNull(entry.Trust);
Assert.Equal("sha256:feedface", entry.Provenance!.EnvelopeDigest);
Assert.Equal(1337, entry.Provenance.Rekor!.LogIndex);
Assert.True(entry.Trust!.Verified);
Assert.Equal("Authority@stella", entry.Trust.Verifier);
}

[Fact]
public async Task AppendAsync_PersistsConflictsWithCanonicalizedJson()
@@ -190,8 +236,8 @@ public sealed class AdvisoryEventLogTests
Assert.Equal("{\"reason\":\"conflict\"}", replay.Conflicts[0].CanonicalJson);
}

private sealed class FakeRepository : IAdvisoryEventRepository
{
public List<AdvisoryStatementEntry> InsertedStatements { get; } = new();
public List<AdvisoryConflictEntry> InsertedConflicts { get; } = new();
@@ -217,21 +263,61 @@ public sealed class AdvisoryEventLogTests
string.Equals(entry.VulnerabilityKey, vulnerabilityKey, StringComparison.Ordinal) &&
(!asOf.HasValue || entry.AsOf <= asOf.Value)).ToList());

public ValueTask<IReadOnlyList<AdvisoryConflictEntry>> GetConflictsAsync(string vulnerabilityKey, DateTimeOffset? asOf, CancellationToken cancellationToken)
=> ValueTask.FromResult<IReadOnlyList<AdvisoryConflictEntry>>(StoredConflicts.Where(entry =>
string.Equals(entry.VulnerabilityKey, vulnerabilityKey, StringComparison.Ordinal) &&
(!asOf.HasValue || entry.AsOf <= asOf.Value)).ToList());

public ValueTask AttachStatementProvenanceAsync(
Guid statementId,
DsseProvenance provenance,
TrustInfo trust,
CancellationToken cancellationToken)
=> ValueTask.CompletedTask;
}

private sealed class FixedTimeProvider : TimeProvider
{
private readonly DateTimeOffset _now;

public FixedTimeProvider(DateTimeOffset now)
{
_now = now.ToUniversalTime();
}

public override DateTimeOffset GetUtcNow() => _now;
}

private static string BuildDsseMetadataJson()
{
var payload = new
{
dsse = new
{
envelopeDigest = "sha256:feedface",
payloadType = "application/vnd.in-toto+json",
key = new
{
keyId = "cosign:SHA256-PKIX:fixture",
issuer = "Authority@stella",
algo = "Ed25519"
},
rekor = new
{
logIndex = 1337,
uuid = "11111111-2222-3333-4444-555555555555",
integratedTime = 1731081600
}
},
trust = new
{
verified = true,
verifier = "Authority@stella",
witnesses = 1,
policyScore = 1.0
}
};

return JsonSerializer.Serialize(payload, new JsonSerializerOptions(JsonSerializerDefaults.Web));
}
}
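`BuildDsseMetadataJson` fixes the wire shape the event log consumes: a `dsse` object (envelope digest, payload type, signing key, optional Rekor entry) alongside a `trust` object. A small Python sketch of a tolerant reader for that shape, in the spirit of the `ParseMetadata` helpers earlier in the diff — illustrative only; field names follow the fixture above:

```python
import json

def parse_metadata(raw: str) -> dict:
    """Pull the core (envelopeDigest, logIndex, verified) fields out of a
    dsse/trust payload, returning None for absent optional values."""
    data = json.loads(raw)
    dsse = data.get("dsse") or {}
    rekor = dsse.get("rekor") or {}
    trust = data.get("trust") or {}
    return {
        "envelope_digest": dsse.get("envelopeDigest"),
        "log_index": rekor.get("logIndex"),
        "verified": bool(trust.get("verified", False)),
        "verifier": trust.get("verifier"),
    }

fixture = json.dumps({
    "dsse": {"envelopeDigest": "sha256:feedface",
             "rekor": {"logIndex": 1337, "integratedTime": 1731081600}},
    "trust": {"verified": True, "verifier": "Authority@stella"},
})
print(parse_metadata(fixture))
```

As in the C# parsers, absent optional sections (`rekor`, `trust`) degrade to `None`/`False` rather than raising, so partially attested statements still round-trip.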


@@ -3,11 +3,12 @@ using System.Collections.Generic;
using System.Collections.Immutable;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Core.Noise;
using StellaOps.Concelier.Models;
using StellaOps.Provenance.Mongo;
using Xunit;

namespace StellaOps.Concelier.Core.Tests.Noise;

@@ -249,12 +250,19 @@ public sealed class NoisePriorServiceTests
_replay = replay;
}

public ValueTask AppendAsync(AdvisoryEventAppendRequest request, CancellationToken cancellationToken)
=> throw new NotSupportedException("Append operations are not required for tests.");

public ValueTask<AdvisoryReplay> ReplayAsync(string vulnerabilityKey, DateTimeOffset? asOf, CancellationToken cancellationToken)
=> ValueTask.FromResult(_replay);

public ValueTask AttachStatementProvenanceAsync(
Guid statementId,
DsseProvenance provenance,
TrustInfo trust,
CancellationToken cancellationToken)
=> ValueTask.CompletedTask;
}

private sealed class FakeNoisePriorRepository : INoisePriorRepository
{

@@ -1,110 +1,223 @@
using System;
using System.Collections.Immutable;
using System.Linq;
using System.Text;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Storage.Mongo.Conflicts;
using StellaOps.Concelier.Storage.Mongo.Events;
using StellaOps.Concelier.Storage.Mongo.Statements;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Testing;
using StellaOps.Cryptography;
using StellaOps.Provenance.Mongo;
using Xunit;

namespace StellaOps.Concelier.Storage.Mongo.Tests;

[Collection("mongo-fixture")]
public sealed class MongoAdvisoryEventRepositoryTests
{
private readonly IMongoDatabase _database;
private readonly MongoAdvisoryEventRepository _repository;
private static readonly ICryptoHash Hash = CryptoHashFactory.CreateDefault();

public MongoAdvisoryEventRepositoryTests(MongoIntegrationFixture fixture)
{
_database = fixture.Database ?? throw new ArgumentNullException(nameof(fixture.Database));
var statementStore = new AdvisoryStatementStore(_database);
var conflictStore = new AdvisoryConflictStore(_database);
_repository = new MongoAdvisoryEventRepository(statementStore, conflictStore);
}

[Fact]
public async Task InsertAndFetchStatements_RoundTripsCanonicalPayload()
{
var advisory = CreateSampleAdvisory("CVE-2025-7777", "Sample advisory");
var canonicalJson = CanonicalJsonSerializer.Serialize(advisory);
var digest = Hash.ComputeHash(Encoding.UTF8.GetBytes(canonicalJson), HashAlgorithms.Sha256);
var hash = ImmutableArray.Create(digest);

var entry = new AdvisoryStatementEntry(
Guid.NewGuid(),
"CVE-2025-7777",
"CVE-2025-7777",
canonicalJson,
hash,
DateTimeOffset.Parse("2025-10-19T14:00:00Z"),
DateTimeOffset.Parse("2025-10-19T14:05:00Z"),
ImmutableArray<Guid>.Empty);

await _repository.InsertStatementsAsync(new[] { entry }, CancellationToken.None);

var results = await _repository.GetStatementsAsync("CVE-2025-7777", null, CancellationToken.None);
var snapshot = Assert.Single(results);
Assert.Equal(entry.StatementId, snapshot.StatementId);
Assert.Equal(entry.CanonicalJson, snapshot.CanonicalJson);
Assert.True(entry.StatementHash.SequenceEqual(snapshot.StatementHash));
}

[Fact]
public async Task InsertAndFetchConflicts_PreservesDetails()
{
var detailJson = CanonicalJsonSerializer.Serialize(new ConflictPayload("severity", "mismatch"));
var digest = Hash.ComputeHash(Encoding.UTF8.GetBytes(detailJson), HashAlgorithms.Sha256);
var hash = ImmutableArray.Create(digest);
var statementIds = ImmutableArray.Create(Guid.NewGuid(), Guid.NewGuid());

var entry = new AdvisoryConflictEntry(
Guid.NewGuid(),
"CVE-2025-4242",
detailJson,
hash,
DateTimeOffset.Parse("2025-10-19T15:00:00Z"),
DateTimeOffset.Parse("2025-10-19T15:05:00Z"),
statementIds);

await _repository.InsertConflictsAsync(new[] { entry }, CancellationToken.None);

var results = await _repository.GetConflictsAsync("CVE-2025-4242", null, CancellationToken.None);
var conflict = Assert.Single(results);
Assert.Equal(entry.CanonicalJson, conflict.CanonicalJson);
Assert.True(entry.StatementIds.SequenceEqual(conflict.StatementIds));
Assert.True(entry.ConflictHash.SequenceEqual(conflict.ConflictHash));
}

[Fact]
public async Task InsertStatementsAsync_PersistsProvenanceMetadata()
{
var advisory = CreateSampleAdvisory("CVE-2025-8888", "Metadata coverage");
var canonicalJson = CanonicalJsonSerializer.Serialize(advisory);
var digest = Hash.ComputeHash(Encoding.UTF8.GetBytes(canonicalJson), HashAlgorithms.Sha256);
var hash = ImmutableArray.Create(digest);
var (dsse, trust) = CreateSampleDsseMetadata();

var entry = new AdvisoryStatementEntry(
Guid.NewGuid(),
"CVE-2025-8888",
"CVE-2025-8888",
canonicalJson,
hash,
DateTimeOffset.Parse("2025-10-20T10:00:00Z"),
DateTimeOffset.Parse("2025-10-20T10:05:00Z"),
ImmutableArray<Guid>.Empty,
dsse,
trust);

await _repository.InsertStatementsAsync(new[] { entry }, CancellationToken.None);

var statements = _database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryStatements);
var stored = await statements
.Find(Builders<BsonDocument>.Filter.Eq("_id", entry.StatementId.ToString()))
.FirstOrDefaultAsync();

Assert.NotNull(stored);
var provenance = stored!["provenance"].AsBsonDocument["dsse"].AsBsonDocument;
Assert.Equal(dsse.EnvelopeDigest, provenance["envelopeDigest"].AsString);
Assert.Equal(dsse.Key.KeyId, provenance["key"].AsBsonDocument["keyId"].AsString);

var trustDoc = stored["trust"].AsBsonDocument;
Assert.Equal(trust.Verifier, trustDoc["verifier"].AsString);
Assert.Equal(trust.Witnesses, trustDoc["witnesses"].AsInt32);

var roundTrip = await _repository.GetStatementsAsync("CVE-2025-8888", null, CancellationToken.None);
var hydrated = Assert.Single(roundTrip);
Assert.NotNull(hydrated.Provenance);
Assert.NotNull(hydrated.Trust);
Assert.Equal(dsse.EnvelopeDigest, hydrated.Provenance!.EnvelopeDigest);
Assert.Equal(trust.Verifier, hydrated.Trust!.Verifier);
}

[Fact]
public async Task AttachStatementProvenanceAsync_BackfillsExistingRecord()
{
var advisory = CreateSampleAdvisory("CVE-2025-9999", "Backfill metadata");
var canonicalJson = CanonicalJsonSerializer.Serialize(advisory);
var digest = Hash.ComputeHash(Encoding.UTF8.GetBytes(canonicalJson), HashAlgorithms.Sha256);
var hash = ImmutableArray.Create(digest);

var entry = new AdvisoryStatementEntry(
Guid.NewGuid(),
"CVE-2025-9999",
"CVE-2025-9999",
canonicalJson,
hash,
DateTimeOffset.Parse("2025-10-21T10:00:00Z"),
DateTimeOffset.Parse("2025-10-21T10:05:00Z"),
ImmutableArray<Guid>.Empty);

await _repository.InsertStatementsAsync(new[] { entry }, CancellationToken.None);

var (dsse, trust) = CreateSampleDsseMetadata();
await _repository.AttachStatementProvenanceAsync(entry.StatementId, dsse, trust, CancellationToken.None);

var statements = await _repository.GetStatementsAsync("CVE-2025-9999", null, CancellationToken.None);
var updated = Assert.Single(statements);
Assert.NotNull(updated.Provenance);
Assert.NotNull(updated.Trust);
Assert.Equal(dsse.EnvelopeDigest, updated.Provenance!.EnvelopeDigest);
Assert.Equal(trust.Verifier, updated.Trust!.Verifier);
}

private static Advisory CreateSampleAdvisory(string key, string summary)
{
var provenance = new AdvisoryProvenance("nvd", "document", key, DateTimeOffset.Parse("2025-10-18T00:00:00Z"), new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
key,
key,
summary,
"en",
DateTimeOffset.Parse("2025-10-17T00:00:00Z"),
DateTimeOffset.Parse("2025-10-18T00:00:00Z"),
"medium",
exploitKnown: false,
aliases: new[] { key },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}

private static (DsseProvenance Provenance, TrustInfo Trust) CreateSampleDsseMetadata()
{
var provenance = new DsseProvenance
{
EnvelopeDigest = "sha256:deadbeef",
PayloadType = "application/vnd.in-toto+json",
Key = new DsseKeyInfo
{
KeyId = "cosign:SHA256-PKIX:TEST",
Issuer = "fulcio",
Algo = "ECDSA"
},
Rekor = new DsseRekorInfo
{
LogIndex = 42,
Uuid = Guid.Parse("2d4d5f7c-1111-4a01-b9cb-aa42022a0a8c").ToString(),
IntegratedTime = 1_700_000_000
}
};

var trust = new TrustInfo
{
Verified = true,
Verifier = "Authority@stella",
Witnesses = 2,
PolicyScore = 0.9
};

return (provenance, trust);
}

private sealed record ConflictPayload(string Type, string Reason);
}


@@ -1,7 +1,6 @@
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;
-using FluentAssertions;
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Concelier.WebService.Services;
using StellaOps.Concelier.WebService.Diagnostics;
@@ -12,7 +11,7 @@ namespace StellaOps.Concelier.WebService.Tests;
public sealed class AdvisoryAiTelemetryTests : IDisposable
{
    private readonly MeterListener _listener;
-    private readonly List<Measurement<long>> _guardrailMeasurements = new();
+    private readonly List<(long Value, KeyValuePair<string, object?>[] Tags)> _guardrailMeasurements = new();
    public AdvisoryAiTelemetryTests()
    {
@@ -31,7 +30,7 @@ public sealed class AdvisoryAiTelemetryTests : IDisposable
if (instrument.Meter.Name == AdvisoryAiMetrics.MeterName &&
    instrument.Name == "advisory_ai_guardrail_blocks_total")
{
-    _guardrailMeasurements.Add(new Measurement<long>(measurement, tags, state));
+    _guardrailMeasurements.Add((measurement, tags.ToArray()));
}
});
_listener.Start();
@@ -58,10 +57,20 @@ public sealed class AdvisoryAiTelemetryTests : IDisposable
    Duration: TimeSpan.FromMilliseconds(5),
    GuardrailCounts: guardrailCounts));
-_guardrailMeasurements.Should().ContainSingle();
-var measurement = _guardrailMeasurements[0];
-measurement.Value.Should().Be(2);
-measurement.Tags.Should().Contain(tag => tag.Key == "cache" && (string?)tag.Value == "hit");
+var measurement = Assert.Single(_guardrailMeasurements);
+Assert.Equal(2, measurement.Value);
+var cacheHitTagFound = false;
+foreach (var tag in measurement.Tags)
+{
+    if (tag.Key == "cache" && (string?)tag.Value == "hit")
+    {
+        cacheHitTagFound = true;
+        break;
+    }
+}
+Assert.True(cacheHitTagFound, "guardrail measurement should be tagged with cache hit outcome.");
}
public void Dispose()


@@ -31,6 +31,7 @@ using StellaOps.Concelier.Core.Jobs;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Merge.Services;
using StellaOps.Concelier.Storage.Mongo;
+using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Observations;
using StellaOps.Concelier.Core.Raw;
using StellaOps.Concelier.WebService.Jobs;
@@ -265,42 +266,46 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
    CreateAdvisoryRawDocument("tenant-a", "nvd", "tenant-a:chunk:newest", newerHash, newestRaw.DeepClone().AsBsonDocument),
    CreateAdvisoryRawDocument("tenant-a", "nvd", "tenant-a:chunk:older", olderHash, olderRaw.DeepClone().AsBsonDocument));
+await SeedCanonicalAdvisoriesAsync(
+    CreateStructuredAdvisory("CVE-2025-0001", "GHSA-2025-0001", "tenant-a:chunk:newest", newerCreatedAt));
using var client = _factory.CreateClient();
-var response = await client.GetAsync("/advisories/cve-2025-0001/chunks?tenant=tenant-a&section=summary&format=csaf");
+var response = await client.GetAsync("/advisories/cve-2025-0001/chunks?tenant=tenant-a&section=workaround");
response.EnsureSuccessStatusCode();
var payload = await response.Content.ReadAsStringAsync();
using var document = JsonDocument.Parse(payload);
var root = document.RootElement;
-Assert.Equal("cve-2025-0001", root.GetProperty("advisoryKey").GetString());
+Assert.Equal("CVE-2025-0001", root.GetProperty("advisoryKey").GetString());
Assert.Equal(1, root.GetProperty("total").GetInt32());
Assert.False(root.GetProperty("truncated").GetBoolean());
-var chunk = Assert.Single(root.GetProperty("chunks").EnumerateArray());
-Assert.Equal("summary", chunk.GetProperty("section").GetString());
-Assert.Equal("summary.intro", chunk.GetProperty("paragraphId").GetString());
-var text = chunk.GetProperty("text").GetString();
-Assert.False(string.IsNullOrWhiteSpace(text));
-Assert.Contains("deterministic summary paragraph", text, StringComparison.OrdinalIgnoreCase);
-var metadata = chunk.GetProperty("metadata");
-Assert.Equal("summary.intro", metadata.GetProperty("path").GetString());
-Assert.Equal("csaf", metadata.GetProperty("format").GetString());
-var sources = root.GetProperty("sources").EnumerateArray().ToArray();
-Assert.Equal(2, sources.Length);
-Assert.Equal("tenant-a:chunk:newest", sources[0].GetProperty("observationId").GetString());
-Assert.Equal("tenant-a:chunk:older", sources[1].GetProperty("observationId").GetString());
-Assert.All(
-    sources,
-    source => Assert.True(string.Equals("csaf", source.GetProperty("format").GetString(), StringComparison.OrdinalIgnoreCase)));
+var entry = Assert.Single(root.GetProperty("entries").EnumerateArray());
+Assert.Equal("workaround", entry.GetProperty("type").GetString());
+Assert.Equal("tenant-a:chunk:newest", entry.GetProperty("documentId").GetString());
+Assert.Equal("/references/0", entry.GetProperty("fieldPath").GetString());
+Assert.False(string.IsNullOrWhiteSpace(entry.GetProperty("chunkId").GetString()));
+var content = entry.GetProperty("content");
+Assert.Equal("Vendor guidance", content.GetProperty("title").GetString());
+Assert.Equal("Apply configuration change immediately.", content.GetProperty("description").GetString());
+Assert.Equal("https://vendor.example/workaround", content.GetProperty("url").GetString());
+var provenance = entry.GetProperty("provenance");
+Assert.Equal("nvd", provenance.GetProperty("source").GetString());
+Assert.Equal("workaround", provenance.GetProperty("kind").GetString());
+Assert.Equal("tenant-a:chunk:newest", provenance.GetProperty("value").GetString());
+Assert.Contains(
+    "/references/0",
+    provenance.GetProperty("fieldMask").EnumerateArray().Select(element => element.GetString()));
}
[Fact]
public async Task AdvisoryChunksEndpoint_ReturnsNotFoundWhenAdvisoryMissing()
{
await SeedObservationDocumentsAsync(BuildSampleObservationDocuments());
+await SeedCanonicalAdvisoriesAsync();
using var client = _factory.CreateClient();
var response = await client.GetAsync("/advisories/cve-2099-9999/chunks?tenant=tenant-a");
@@ -526,6 +531,12 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
public async Task AdvisoryChunksEndpoint_EmitsRequestAndCacheMetrics()
{
await SeedObservationDocumentsAsync(BuildSampleObservationDocuments());
+await SeedCanonicalAdvisoriesAsync(
+    CreateStructuredAdvisory(
+        "CVE-2025-0001",
+        "GHSA-2025-0001",
+        "tenant-a:nvd:alpha:1",
+        new DateTimeOffset(2025, 1, 5, 0, 0, 0, TimeSpan.Zero)));
using var client = _factory.CreateClient();
@@ -588,6 +599,12 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
    raw);
await SeedObservationDocumentsAsync(new[] { document });
+await SeedCanonicalAdvisoriesAsync(
+    CreateStructuredAdvisory(
+        "CVE-2025-GUARD",
+        "GHSA-2025-GUARD",
+        "tenant-a:chunk:1",
+        new DateTimeOffset(2025, 2, 1, 0, 0, 0, TimeSpan.Zero)));
using var client = _factory.CreateClient();
@@ -1936,6 +1953,111 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
    }
}
[Fact]
public async Task StatementProvenanceEndpointAttachesMetadata()
{
var tenant = "tenant-provenance";
var vulnerabilityKey = "CVE-2025-9200";
var statementId = Guid.NewGuid();
var recordedAt = DateTimeOffset.Parse("2025-03-01T00:00:00Z", CultureInfo.InvariantCulture);
using (var scope = _factory.Services.CreateScope())
{
var eventLog = scope.ServiceProvider.GetRequiredService<IAdvisoryEventLog>();
var advisory = new Advisory(
advisoryKey: vulnerabilityKey,
title: "Provenance seed",
summary: "Ready for DSSE metadata",
language: "en",
published: recordedAt.AddDays(-1),
modified: recordedAt,
severity: "high",
exploitKnown: false,
aliases: new[] { vulnerabilityKey },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: Array.Empty<AdvisoryProvenance>());
var statementInput = new AdvisoryStatementInput(
vulnerabilityKey,
advisory,
recordedAt,
InputDocumentIds: Array.Empty<Guid>(),
StatementId: statementId,
AdvisoryKey: advisory.AdvisoryKey);
await eventLog.AppendAsync(new AdvisoryEventAppendRequest(new[] { statementInput }), CancellationToken.None);
}
try
{
using var client = _factory.CreateClient();
client.DefaultRequestHeaders.Add("X-Stella-Tenant", tenant);
var response = await client.PostAsync(
$"/events/statements/{statementId}/provenance?tenant={tenant}",
new StringContent(BuildProvenancePayload(), Encoding.UTF8, "application/json"));
Assert.Equal(HttpStatusCode.Accepted, response.StatusCode);
using var validationScope = _factory.Services.CreateScope();
var database = validationScope.ServiceProvider.GetRequiredService<IMongoDatabase>();
var statements = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryStatements);
var stored = await statements
.Find(Builders<BsonDocument>.Filter.Eq("_id", statementId.ToString()))
.FirstOrDefaultAsync();
Assert.NotNull(stored);
var dsse = stored!["provenance"].AsBsonDocument["dsse"].AsBsonDocument;
Assert.Equal("sha256:feedface", dsse["envelopeDigest"].AsString);
var trustDoc = stored["trust"].AsBsonDocument;
Assert.True(trustDoc["verified"].AsBoolean);
Assert.Equal("Authority@stella", trustDoc["verifier"].AsString);
}
finally
{
using var cleanupScope = _factory.Services.CreateScope();
var database = cleanupScope.ServiceProvider.GetRequiredService<IMongoDatabase>();
var statements = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryStatements);
await statements.DeleteOneAsync(Builders<BsonDocument>.Filter.Eq("_id", statementId.ToString()));
}
}
private static string BuildProvenancePayload()
{
var payload = new
{
dsse = new
{
envelopeDigest = "sha256:feedface",
payloadType = "application/vnd.in-toto+json",
key = new
{
keyId = "cosign:SHA256-PKIX:fixture",
issuer = "Authority@stella",
algo = "Ed25519"
},
rekor = new
{
logIndex = 1337,
uuid = "11111111-2222-3333-4444-555555555555",
integratedTime = 1731081600
}
},
trust = new
{
verified = true,
verifier = "Authority@stella",
witnesses = 1,
policyScore = 1.0
}
};
return JsonSerializer.Serialize(payload, new JsonSerializerOptions(JsonSerializerDefaults.Web));
}
private sealed class TempDirectory : IDisposable
{
public string Path { get; }
@@ -1978,6 +2100,121 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
private sealed record ProblemDocument(string? Type, string? Title, int? Status, string? Detail, string? Instance);
private async Task SeedCanonicalAdvisoriesAsync(params Advisory[] advisories)
{
using var scope = _factory.Services.CreateScope();
var database = scope.ServiceProvider.GetRequiredService<IMongoDatabase>();
await DropCollectionIfExistsAsync(database, MongoStorageDefaults.Collections.Advisory);
await DropCollectionIfExistsAsync(database, MongoStorageDefaults.Collections.Alias);
if (advisories.Length == 0)
{
return;
}
var store = scope.ServiceProvider.GetRequiredService<IAdvisoryStore>();
foreach (var advisory in advisories)
{
await store.UpsertAsync(advisory, CancellationToken.None);
}
}
private static async Task DropCollectionIfExistsAsync(IMongoDatabase database, string collectionName)
{
try
{
await database.DropCollectionAsync(collectionName);
}
catch (MongoCommandException ex) when (ex.CodeName == "NamespaceNotFound" || ex.Message.Contains("ns not found", StringComparison.OrdinalIgnoreCase))
{
}
}
private static Advisory CreateStructuredAdvisory(
string advisoryKey,
string alias,
string observationId,
DateTimeOffset recordedAt)
{
const string WorkaroundTitle = "Vendor guidance";
const string WorkaroundSummary = "Apply configuration change immediately.";
const string WorkaroundUrl = "https://vendor.example/workaround";
var reference = new AdvisoryReference(
WorkaroundUrl,
kind: "workaround",
sourceTag: WorkaroundTitle,
summary: WorkaroundSummary,
new AdvisoryProvenance(
"nvd",
"workaround",
observationId,
recordedAt,
new[] { "/references/0" }));
var affectedRange = new AffectedVersionRange(
rangeKind: "semver",
introducedVersion: "1.0.0",
fixedVersion: "1.1.0",
lastAffectedVersion: null,
rangeExpression: ">=1.0.0,<1.1.0",
new AdvisoryProvenance(
"nvd",
"affected",
observationId,
recordedAt,
new[] { "/affectedPackages/0/versionRanges/0" }));
var affectedPackage = new AffectedPackage(
type: AffectedPackageTypes.SemVer,
identifier: "pkg:npm/demo",
versionRanges: new[] { affectedRange },
statuses: Array.Empty<AffectedPackageStatus>(),
provenance: new[]
{
new AdvisoryProvenance(
"nvd",
"affected",
observationId,
recordedAt,
new[] { "/affectedPackages/0" })
},
normalizedVersions: Array.Empty<NormalizedVersionRule>());
var cvss = new CvssMetric(
"3.1",
"CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
9.8,
"critical",
new AdvisoryProvenance(
"nvd",
"cvss",
observationId,
recordedAt,
new[] { "/cvssMetrics/0" }));
var advisory = new Advisory(
advisoryKey,
title: "Fixture advisory",
summary: "Structured payload fixture",
language: "en",
published: recordedAt,
modified: recordedAt,
severity: "critical",
exploitKnown: false,
aliases: string.IsNullOrWhiteSpace(alias) ? new[] { advisoryKey } : new[] { advisoryKey, alias },
references: new[] { reference },
affectedPackages: new[] { affectedPackage },
cvssMetrics: new[] { cvss },
provenance: new[]
{
new AdvisoryProvenance("nvd", "advisory", observationId, recordedAt)
});
return advisory;
}
private async Task SeedAdvisoryRawDocumentsAsync(params BsonDocument[] documents)
{
var client = new MongoClient(_runner.ConnectionString);


@@ -7,6 +7,7 @@ using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;
using StellaOps.Excititor.WebService.Options;
+using StellaOps.Excititor.WebService.Telemetry;
using StellaOps.Ingestion.Telemetry;
namespace StellaOps.Excititor.WebService.Extensions;
@@ -64,6 +65,7 @@ internal static class TelemetryExtensions
{
    metrics
        .AddMeter(IngestionTelemetry.MeterName)
+        .AddMeter(EvidenceTelemetry.MeterName)
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddRuntimeInstrumentation();


@@ -29,6 +29,7 @@ using StellaOps.Excititor.WebService.Options;
using StellaOps.Excititor.WebService.Services;
using StellaOps.Excititor.Core.Aoc;
using StellaOps.Excititor.WebService.Contracts;
+using StellaOps.Excititor.WebService.Telemetry;
using MongoDB.Driver;
using MongoDB.Bson;
@@ -216,6 +217,7 @@ app.MapPost("/ingest/vex", async (
}
catch (ExcititorAocGuardException guardException)
{
+    EvidenceTelemetry.RecordGuardViolations(tenant, "ingest", guardException);
    logger.LogWarning(
        guardException,
        "AOC guard rejected VEX ingest tenant={Tenant} digest={Digest}",
@@ -478,8 +480,27 @@ app.MapGet("/v1/vex/observations/{vulnerabilityId}/{productKey}", async (
    since,
    limit);
-var result = await projectionService.QueryAsync(request, cancellationToken).ConfigureAwait(false);
-var statements = result.Statements
+VexObservationProjectionResult result;
+try
+{
+    result = await projectionService.QueryAsync(request, cancellationToken).ConfigureAwait(false);
+}
+catch (OperationCanceledException)
+{
+    EvidenceTelemetry.RecordObservationOutcome(tenant, "cancelled");
+    throw;
+}
+catch
+{
+    EvidenceTelemetry.RecordObservationOutcome(tenant, "error");
+    throw;
+}
+var projectionStatements = result.Statements;
+EvidenceTelemetry.RecordObservationOutcome(tenant, "success", projectionStatements.Count, result.Truncated);
+EvidenceTelemetry.RecordSignatureStatus(tenant, projectionStatements);
+var statements = projectionStatements
    .Select(ToResponse)
    .ToList();
@@ -575,6 +596,7 @@ app.MapPost("/aoc/verify", async (
}
catch (ExcititorAocGuardException guardException)
{
+    EvidenceTelemetry.RecordGuardViolations(tenant, "aoc_verify", guardException);
    checkedCount++;
    foreach (var violation in guardException.Violations)
    {


@@ -6,6 +6,7 @@ using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Excititor.Core;
+using StellaOps.Excititor.Storage.Mongo;
namespace StellaOps.Excititor.WebService.Services;


@@ -0,0 +1,154 @@
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;
using StellaOps.Excititor.Core.Aoc;
using StellaOps.Excititor.WebService.Services;
namespace StellaOps.Excititor.WebService.Telemetry;
internal static class EvidenceTelemetry
{
public const string MeterName = "StellaOps.Excititor.WebService.Evidence";
private static readonly Meter Meter = new(MeterName);
private static readonly Counter<long> ObservationRequestCounter =
Meter.CreateCounter<long>(
"excititor.vex.observation.requests",
unit: "requests",
description: "Number of observation projection requests handled by the evidence APIs.");
private static readonly Histogram<int> ObservationStatementHistogram =
Meter.CreateHistogram<int>(
"excititor.vex.observation.statement_count",
unit: "statements",
description: "Distribution of statements returned per observation projection request.");
private static readonly Counter<long> SignatureStatusCounter =
Meter.CreateCounter<long>(
"excititor.vex.signature.status",
unit: "statements",
description: "Signature verification status counts for observation statements.");
private static readonly Counter<long> GuardViolationCounter =
Meter.CreateCounter<long>(
"excititor.vex.aoc.guard_violations",
unit: "violations",
description: "Aggregated count of AOC guard violations detected by Excititor evidence APIs.");
public static void RecordObservationOutcome(string? tenant, string outcome, int returnedCount = 0, bool truncated = false)
{
var normalizedTenant = NormalizeTenant(tenant);
var tags = new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("outcome", outcome),
new KeyValuePair<string, object?>("truncated", truncated),
};
ObservationRequestCounter.Add(1, tags);
if (!string.Equals(outcome, "success", StringComparison.OrdinalIgnoreCase))
{
return;
}
ObservationStatementHistogram.Record(
returnedCount,
new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("outcome", outcome),
});
}
public static void RecordSignatureStatus(string? tenant, IReadOnlyList<VexObservationStatementProjection> statements)
{
if (statements is null || statements.Count == 0)
{
return;
}
var normalizedTenant = NormalizeTenant(tenant);
var missing = 0;
var unverified = 0;
foreach (var statement in statements)
{
var signature = statement.Signature;
if (signature is null)
{
missing++;
continue;
}
if (signature.VerifiedAt is null)
{
unverified++;
}
}
if (missing > 0)
{
SignatureStatusCounter.Add(
missing,
new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("status", "missing"),
});
}
if (unverified > 0)
{
SignatureStatusCounter.Add(
unverified,
new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("status", "unverified"),
});
}
}
public static void RecordGuardViolations(string? tenant, string surface, ExcititorAocGuardException exception)
{
var normalizedTenant = NormalizeTenant(tenant);
var normalizedSurface = NormalizeSurface(surface);
if (exception.Violations.IsDefaultOrEmpty)
{
GuardViolationCounter.Add(
1,
new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("surface", normalizedSurface),
new KeyValuePair<string, object?>("code", exception.PrimaryErrorCode),
});
return;
}
foreach (var violation in exception.Violations)
{
var code = string.IsNullOrWhiteSpace(violation.ErrorCode)
? exception.PrimaryErrorCode
: violation.ErrorCode;
GuardViolationCounter.Add(
1,
new[]
{
new KeyValuePair<string, object?>("tenant", normalizedTenant),
new KeyValuePair<string, object?>("surface", normalizedSurface),
new KeyValuePair<string, object?>("code", code),
});
}
}
private static string NormalizeTenant(string? tenant)
=> string.IsNullOrWhiteSpace(tenant) ? "default" : tenant;
private static string NormalizeSurface(string? surface)
=> string.IsNullOrWhiteSpace(surface) ? "unknown" : surface.ToLowerInvariant();
}
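The telemetry class above collapses empty tenants to `default` and lower-cases surface names so the tag sets stay bounded. A minimal standalone sketch of that normalization (free functions mirroring the private helpers, not an exported API):

```csharp
using System;

// Mirrors EvidenceTelemetry's private helpers: blank tenants collapse to
// "default" and surfaces are lower-cased before being attached as metric tags.
static string NormalizeTenant(string? tenant) =>
    string.IsNullOrWhiteSpace(tenant) ? "default" : tenant;

static string NormalizeSurface(string? surface) =>
    string.IsNullOrWhiteSpace(surface) ? "unknown" : surface.ToLowerInvariant();

Console.WriteLine(NormalizeTenant(null));       // default
Console.WriteLine(NormalizeTenant("tenant-a")); // tenant-a
Console.WriteLine(NormalizeSurface("Ingest"));  // ingest
Console.WriteLine(NormalizeSurface("  "));      // unknown
```

Keeping the normalization in one place means every counter in the class reports the same label values for the same caller.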


@@ -0,0 +1,30 @@
using System.Diagnostics;
using System.Diagnostics.Metrics;
namespace StellaOps.Findings.Ledger.Observability;
internal static class LedgerMetrics
{
private static readonly Meter Meter = new("StellaOps.Findings.Ledger");
private static readonly Histogram<double> WriteLatencySeconds = Meter.CreateHistogram<double>(
"ledger_write_latency_seconds",
unit: "s",
description: "Latency of successful ledger append operations.");
private static readonly Counter<long> EventsTotal = Meter.CreateCounter<long>(
"ledger_events_total",
description: "Number of ledger events appended.");
public static void RecordWriteSuccess(TimeSpan duration, string? tenantId, string? eventType, string? source)
{
var tags = new TagList
{
{ "tenant", tenantId ?? string.Empty },
{ "event_type", eventType ?? string.Empty },
{ "source", source ?? string.Empty }
};
WriteLatencySeconds.Record(duration.TotalSeconds, tags);
EventsTotal.Add(1, tags);
}
}
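One way to sanity-check instruments like these locally is a `MeterListener` subscribed to the same meter name. The sketch below declares equivalent instruments itself (it does not reference the internal class) and confirms the counter increment is observed:

```csharp
using System;
using System.Diagnostics;
using System.Diagnostics.Metrics;

// Stand-in instruments published under the same meter name as LedgerMetrics.
using var meter = new Meter("StellaOps.Findings.Ledger");
var latency = meter.CreateHistogram<double>("ledger_write_latency_seconds", unit: "s");
var events = meter.CreateCounter<long>("ledger_events_total");

long observed = 0;
using var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =>
{
    if (instrument.Meter.Name == "StellaOps.Findings.Ledger")
    {
        l.EnableMeasurementEvents(instrument);
    }
};
// Only long measurements are captured here, so the counter is counted
// while the double-valued histogram is ignored.
listener.SetMeasurementEventCallback<long>((inst, value, tags, state) => observed += value);
listener.Start();

var tagList = new TagList
{
    { "tenant", "tenant-a" },
    { "event_type", "finding.status_changed" },
    { "source", "policy_run" }
};
latency.Record(0.012, tagList);
events.Add(1, tagList);
Console.WriteLine(observed); // 1
```

The tag names (`tenant`, `event_type`, `source`) match those recorded by `RecordWriteSuccess`; the event type value is a hypothetical example.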


@@ -1,8 +1,10 @@
+using System.Diagnostics;
using System.Text.Json.Nodes;
using Microsoft.Extensions.Logging;
using StellaOps.Findings.Ledger.Domain;
using StellaOps.Findings.Ledger.Hashing;
using StellaOps.Findings.Ledger.Infrastructure;
+using StellaOps.Findings.Ledger.Observability;
namespace StellaOps.Findings.Ledger.Services;
@@ -29,6 +31,8 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
public async Task<LedgerWriteResult> AppendAsync(LedgerEventDraft draft, CancellationToken cancellationToken)
{
+    var stopwatch = Stopwatch.StartNew();
    var validationErrors = ValidateDraft(draft);
    if (validationErrors.Count > 0)
    {
@@ -95,6 +99,9 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
{
    await _repository.AppendAsync(record, cancellationToken).ConfigureAwait(false);
    await _merkleAnchorScheduler.EnqueueAsync(record, cancellationToken).ConfigureAwait(false);
+    stopwatch.Stop();
+    LedgerMetrics.RecordWriteSuccess(stopwatch.Elapsed, draft.TenantId, draft.EventType, DetermineSource(draft));
}
catch (Exception ex) when (IsDuplicateKeyException(ex))
{
@@ -116,6 +123,21 @@ public sealed class LedgerEventWriteService : ILedgerEventWriteService
    return LedgerWriteResult.Success(record);
}
private static string DetermineSource(LedgerEventDraft draft)
{
if (draft.SourceRunId.HasValue)
{
return "policy_run";
}
return draft.ActorType switch
{
"operator" => "workflow",
"integration" => "integration",
_ => "system"
};
}
private static bool IsDuplicateKeyException(Exception exception)
{
if (exception is null)
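The `DetermineSource` mapping added above can be exercised in isolation. This is a standalone copy with the same switch arms, written as a free function purely for illustration:

```csharp
using System;

// Mirrors DetermineSource: a policy run id wins outright, otherwise the
// actor type selects the source label attached to ledger metrics.
static string DetermineSource(Guid? sourceRunId, string actorType)
{
    if (sourceRunId.HasValue)
    {
        return "policy_run";
    }
    return actorType switch
    {
        "operator" => "workflow",
        "integration" => "integration",
        _ => "system"
    };
}

Console.WriteLine(DetermineSource(Guid.NewGuid(), "operator")); // policy_run
Console.WriteLine(DetermineSource(null, "operator"));           // workflow
Console.WriteLine(DetermineSource(null, "bot"));                // system
```

Note the precedence: even an `operator` actor is labelled `policy_run` when a run id is present.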


@@ -168,12 +168,16 @@ internal static class ScanEndpoints
var snapshot = await coordinator.GetAsync(parsed, context.RequestAborted).ConfigureAwait(false);
if (snapshot is null)
{
-    return ProblemResultFactory.Create(
-        context,
-        ProblemTypes.NotFound,
-        "Scan not found",
-        StatusCodes.Status404NotFound,
-        detail: "Requested scan could not be located.");
+    snapshot = await TryResolveSnapshotAsync(scanId, coordinator, cancellationToken).ConfigureAwait(false);
+    if (snapshot is null)
+    {
+        return ProblemResultFactory.Create(
+            context,
+            ProblemTypes.NotFound,
+            "Scan not found",
+            StatusCodes.Status404NotFound,
+            detail: "Requested scan could not be located.");
+    }
}
SurfacePointersDto? surfacePointers = null;
@@ -282,10 +286,12 @@ internal static class ScanEndpoints
private static async Task<IResult> HandleEntryTraceAsync(
    string scanId,
+    IScanCoordinator coordinator,
    IEntryTraceResultStore resultStore,
    HttpContext context,
    CancellationToken cancellationToken)
{
+    ArgumentNullException.ThrowIfNull(coordinator);
    ArgumentNullException.ThrowIfNull(resultStore);
    if (!ScanId.TryParse(scanId, out var parsed))
@@ -298,15 +304,25 @@ internal static class ScanEndpoints
        detail: "Scan identifier is required.");
}
-var result = await resultStore.GetAsync(parsed.Value, cancellationToken).ConfigureAwait(false);
+var targetScanId = parsed.Value;
+var result = await resultStore.GetAsync(targetScanId, cancellationToken).ConfigureAwait(false);
if (result is null)
{
-    return ProblemResultFactory.Create(
-        context,
-        ProblemTypes.NotFound,
-        "EntryTrace not found",
-        StatusCodes.Status404NotFound,
-        detail: "EntryTrace data is not available for the requested scan.");
+    var snapshot = await TryResolveSnapshotAsync(scanId, coordinator, cancellationToken).ConfigureAwait(false);
+    if (snapshot is not null && !string.Equals(snapshot.ScanId.Value, targetScanId, StringComparison.Ordinal))
+    {
+        result = await resultStore.GetAsync(snapshot.ScanId.Value, cancellationToken).ConfigureAwait(false);
+    }
+    if (result is null)
+    {
+        return ProblemResultFactory.Create(
+            context,
+            ProblemTypes.NotFound,
+            "EntryTrace not found",
+            StatusCodes.Status404NotFound,
+            detail: "EntryTrace data is not available for the requested scan.");
+    }
}
var response = new EntryTraceResponse(
@@ -321,10 +337,12 @@ internal static class ScanEndpoints
private static async Task<IResult> HandleRubyPackagesAsync(
    string scanId,
+    IScanCoordinator coordinator,
    IRubyPackageInventoryStore inventoryStore,
    HttpContext context,
    CancellationToken cancellationToken)
{
+    ArgumentNullException.ThrowIfNull(coordinator);
    ArgumentNullException.ThrowIfNull(inventoryStore);
    if (!ScanId.TryParse(scanId, out var parsed))
@@ -340,12 +358,27 @@ internal static class ScanEndpoints
var inventory = await inventoryStore.GetAsync(parsed.Value, cancellationToken).ConfigureAwait(false);
if (inventory is null)
{
-    return ProblemResultFactory.Create(
-        context,
-        ProblemTypes.NotFound,
-        "Ruby packages not found",
-        StatusCodes.Status404NotFound,
-        detail: "Ruby package inventory is not available for the requested scan.");
+    RubyPackageInventory? fallback = null;
+    if (!LooksLikeScanId(scanId))
+    {
+        var snapshot = await TryResolveSnapshotAsync(scanId, coordinator, cancellationToken).ConfigureAwait(false);
+        if (snapshot is not null)
+        {
+            fallback = await inventoryStore.GetAsync(snapshot.ScanId.Value, cancellationToken).ConfigureAwait(false);
+        }
+    }
+    if (fallback is null)
+    {
+        return ProblemResultFactory.Create(
+            context,
+            ProblemTypes.NotFound,
+            "Ruby packages not found",
+            StatusCodes.Status404NotFound,
+            detail: "Ruby package inventory is not available for the requested scan.");
+    }
+    inventory = fallback;
}
var response = new RubyPackagesResponse
@@ -420,4 +453,130 @@ internal static class ScanEndpoints
    var trimmed = segment.Trim('/');
    return "/" + trimmed;
}
private static async ValueTask<ScanSnapshot?> TryResolveSnapshotAsync(
string identifier,
IScanCoordinator coordinator,
CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(coordinator);
if (string.IsNullOrWhiteSpace(identifier))
{
return null;
}
var trimmed = identifier.Trim();
var decoded = Uri.UnescapeDataString(trimmed);
if (LooksLikeScanId(decoded))
{
return null;
}
var (reference, digest) = ExtractTargetHints(decoded);
if (reference is null && digest is null)
{
return null;
}
return await coordinator.TryFindByTargetAsync(reference, digest, cancellationToken).ConfigureAwait(false);
}
private static (string? Reference, string? Digest) ExtractTargetHints(string identifier)
{
if (string.IsNullOrWhiteSpace(identifier))
{
return (null, null);
}
var trimmed = identifier.Trim();
if (TryExtractDigest(trimmed, out var digest, out var reference))
{
return (reference, digest);
}
return (trimmed, null);
}
private static bool TryExtractDigest(string candidate, out string? digest, out string? reference)
{
var atIndex = candidate.IndexOf('@');
if (atIndex >= 0 && atIndex < candidate.Length - 1)
{
var digestCandidate = candidate[(atIndex + 1)..];
if (IsDigestValue(digestCandidate))
{
digest = digestCandidate.ToLowerInvariant();
reference = candidate[..atIndex].Trim();
if (string.IsNullOrWhiteSpace(reference))
{
reference = null;
}
return true;
}
}
if (IsDigestValue(candidate))
{
digest = candidate.ToLowerInvariant();
reference = null;
return true;
}
digest = null;
reference = null;
return false;
}
private static bool IsDigestValue(string value)
{
var separatorIndex = value.IndexOf(':');
if (separatorIndex <= 0 || separatorIndex >= value.Length - 1)
{
return false;
}
var algorithm = value[..separatorIndex];
var digestPart = value[(separatorIndex + 1)..];
if (string.IsNullOrWhiteSpace(algorithm) || string.IsNullOrWhiteSpace(digestPart) || digestPart.Length < 32)
{
return false;
}
foreach (var c in digestPart)
{
if (!IsHexChar(c))
{
return false;
}
}
return true;
}
private static bool LooksLikeScanId(string value)
{
if (value.Length != 40)
{
return false;
}
foreach (var c in value)
{
if (!IsHexChar(c))
{
return false;
}
}
return true;
}
private static bool IsHexChar(char c)
=> (c >= '0' && c <= '9')
|| (c >= 'a' && c <= 'f')
|| (c >= 'A' && c <= 'F');
}


@@ -2,9 +2,11 @@ using StellaOps.Scanner.WebService.Domain;
namespace StellaOps.Scanner.WebService.Services;
public interface IScanCoordinator
{
ValueTask<ScanSubmissionResult> SubmitAsync(ScanSubmission submission, CancellationToken cancellationToken);
ValueTask<ScanSnapshot?> GetAsync(ScanId scanId, CancellationToken cancellationToken);
ValueTask<ScanSnapshot?> TryFindByTargetAsync(string? reference, string? digest, CancellationToken cancellationToken);
}


@@ -7,11 +7,13 @@ namespace StellaOps.Scanner.WebService.Services;
public sealed class InMemoryScanCoordinator : IScanCoordinator
{
private sealed record ScanEntry(ScanSnapshot Snapshot);
private readonly ConcurrentDictionary<string, ScanEntry> scans = new(StringComparer.OrdinalIgnoreCase);
private readonly ConcurrentDictionary<string, string> scansByDigest = new(StringComparer.OrdinalIgnoreCase);
private readonly ConcurrentDictionary<string, string> scansByReference = new(StringComparer.OrdinalIgnoreCase);
private readonly TimeProvider timeProvider;
private readonly IScanProgressPublisher progressPublisher;
public InMemoryScanCoordinator(TimeProvider timeProvider, IScanProgressPublisher progressPublisher)
{
@@ -37,12 +39,12 @@ public sealed class InMemoryScanCoordinator : IScanCoordinator
eventData[$"meta.{pair.Key}"] = pair.Value;
}
ScanEntry entry = scans.AddOrUpdate(
scanId.Value,
_ => new ScanEntry(new ScanSnapshot(
scanId,
normalizedTarget,
ScanStatus.Pending,
now,
now,
null)),
@@ -59,22 +61,87 @@ public sealed class InMemoryScanCoordinator : IScanCoordinator
return new ScanEntry(snapshot);
}
return existing;
});
IndexTarget(scanId.Value, normalizedTarget);
var created = entry.Snapshot.CreatedAt == now;
var state = entry.Snapshot.Status.ToString();
progressPublisher.Publish(scanId, state, created ? "queued" : "requeued", eventData);
return ValueTask.FromResult(new ScanSubmissionResult(entry.Snapshot, created));
}
public ValueTask<ScanSnapshot?> GetAsync(ScanId scanId, CancellationToken cancellationToken)
{
if (scans.TryGetValue(scanId.Value, out var entry))
{
return ValueTask.FromResult<ScanSnapshot?>(entry.Snapshot);
}
return ValueTask.FromResult<ScanSnapshot?>(null);
}
public ValueTask<ScanSnapshot?> TryFindByTargetAsync(string? reference, string? digest, CancellationToken cancellationToken)
{
if (!string.IsNullOrWhiteSpace(digest))
{
var normalizedDigest = NormalizeDigest(digest);
if (normalizedDigest is not null &&
scansByDigest.TryGetValue(normalizedDigest, out var digestScanId) &&
scans.TryGetValue(digestScanId, out var digestEntry))
{
return ValueTask.FromResult<ScanSnapshot?>(digestEntry.Snapshot);
}
}
if (!string.IsNullOrWhiteSpace(reference))
{
var normalizedReference = NormalizeReference(reference);
if (normalizedReference is not null &&
scansByReference.TryGetValue(normalizedReference, out var referenceScanId) &&
scans.TryGetValue(referenceScanId, out var referenceEntry))
{
return ValueTask.FromResult<ScanSnapshot?>(referenceEntry.Snapshot);
}
}
return ValueTask.FromResult<ScanSnapshot?>(null);
}
private void IndexTarget(string scanId, ScanTarget target)
{
if (!string.IsNullOrWhiteSpace(target.Digest))
{
scansByDigest[target.Digest!] = scanId;
}
if (!string.IsNullOrWhiteSpace(target.Reference))
{
scansByReference[target.Reference!] = scanId;
}
}
private static string? NormalizeDigest(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
var trimmed = value.Trim();
return trimmed.Contains(':', StringComparison.Ordinal)
? trimmed.ToLowerInvariant()
: null;
}
private static string? NormalizeReference(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
return value.Trim();
}
}


@@ -2,7 +2,7 @@
| Task ID | State | Notes |
| --- | --- | --- |
| `SCANNER-ENG-0009` | DONE (2025-11-13) | Ruby analyzer parity landed end-to-end: Mongo-backed `ruby.packages` inventories, WebService `/api/scans/{scanId}/ruby-packages`, CLI `ruby resolve` + observations, plugin manifest packaging, and targeted tests (`StellaOps.Scanner.Analyzers.Lang.Ruby.Tests`, `StellaOps.Scanner.Worker.Tests`, `StellaOps.Scanner.WebService.Tests --filter FullyQualifiedName~RubyPackages`). |
| `SCANNER-ENG-0016` | DONE (2025-11-10) | RubyLockCollector merged with vendor cache ingestion; workspace overrides, bundler groups, git/path fixture, and offline-kit mirror updated. |
| `SCANNER-ENG-0017` | DONE (2025-11-09) | Build runtime require/autoload graph builder with tree-sitter Ruby per design §4.4, feed EntryTrace hints. |
| `SCANNER-ENG-0018` | DONE (2025-11-09) | Emit Ruby capability + framework surface signals, align with design §4.5 / Sprint 138. |


@@ -0,0 +1,327 @@
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.IO;
using System.Net;
using System.Net.Http.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.DependencyInjection;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.EntryTrace.Serialization;
using StellaOps.Scanner.Storage.Catalog;
using StellaOps.Scanner.Storage.Repositories;
using StellaOps.Scanner.WebService.Contracts;
using StellaOps.Scanner.WebService.Domain;
using StellaOps.Scanner.WebService.Services;
using Xunit;
namespace StellaOps.Scanner.WebService.Tests;
public sealed class RubyPackagesEndpointsTests
{
[Fact]
public async Task GetRubyPackagesReturnsNotFoundWhenInventoryMissing()
{
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var response = await client.GetAsync("/api/v1/scans/scan-ruby-missing/ruby-packages");
Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
}
[Fact]
public async Task GetRubyPackagesReturnsInventory()
{
const string scanId = "scan-ruby-existing";
const string digest = "sha256:feedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedface";
var generatedAt = DateTime.UtcNow.AddMinutes(-10);
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory();
using (var serviceScope = factory.Services.CreateScope())
{
var repository = serviceScope.ServiceProvider.GetRequiredService<RubyPackageInventoryRepository>();
var document = new RubyPackageInventoryDocument
{
ScanId = scanId,
ImageDigest = digest,
GeneratedAtUtc = generatedAt,
Packages = new List<RubyPackageDocument>
{
new()
{
Id = "pkg:gem/rack@3.1.0",
Name = "rack",
Version = "3.1.0",
Source = "rubygems",
Platform = "ruby",
Groups = new List<string> { "default" },
RuntimeUsed = true,
Provenance = new RubyPackageProvenance("rubygems", "Gemfile.lock", "Gemfile.lock")
}
}
};
await repository.UpsertAsync(document, CancellationToken.None);
}
using var client = factory.CreateClient();
var response = await client.GetAsync($"/api/v1/scans/{scanId}/ruby-packages");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<RubyPackagesResponse>();
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Single(payload.Packages);
Assert.Equal("rack", payload.Packages[0].Name);
Assert.Equal("rubygems", payload.Packages[0].Source);
}
[Fact]
public async Task GetRubyPackagesAllowsDigestIdentifier()
{
const string reference = "ghcr.io/demo/ruby-service:1.2.3";
const string digest = "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
var generatedAt = DateTime.UtcNow.AddMinutes(-5);
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory();
string? scanId = null;
using (var scope = factory.Services.CreateScope())
{
var coordinator = scope.ServiceProvider.GetRequiredService<IScanCoordinator>();
var submission = new ScanSubmission(
new ScanTarget(reference, digest),
Force: false,
ClientRequestId: null,
Metadata: new Dictionary<string, string>());
var result = await coordinator.SubmitAsync(submission, CancellationToken.None);
scanId = result.Snapshot.ScanId.Value;
var resolved = await coordinator.TryFindByTargetAsync(reference, digest, CancellationToken.None);
Assert.NotNull(resolved);
var repository = scope.ServiceProvider.GetRequiredService<RubyPackageInventoryRepository>();
var document = new RubyPackageInventoryDocument
{
ScanId = scanId,
ImageDigest = digest,
GeneratedAtUtc = generatedAt,
Packages = new List<RubyPackageDocument>
{
new()
{
Id = "pkg:gem/rails@7.1.0",
Name = "rails",
Version = "7.1.0",
Source = "rubygems"
}
}
};
await repository.UpsertAsync(document, CancellationToken.None);
}
using var client = factory.CreateClient();
var encodedDigest = Uri.EscapeDataString(digest);
var response = await client.GetAsync($"/api/v1/scans/{encodedDigest}/ruby-packages");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<RubyPackagesResponse>();
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Single(payload.Packages);
Assert.Equal("rails", payload.Packages[0].Name);
}
[Fact]
public async Task GetRubyPackagesAllowsReferenceIdentifier()
{
const string reference = "ghcr.io/demo/ruby-service:latest";
const string digest = "sha512:abcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcdefabcd";
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory();
string? scanId = null;
using (var scope = factory.Services.CreateScope())
{
var coordinator = scope.ServiceProvider.GetRequiredService<IScanCoordinator>();
var submission = new ScanSubmission(
new ScanTarget(reference, digest),
Force: false,
ClientRequestId: "cli-test",
Metadata: new Dictionary<string, string>());
var result = await coordinator.SubmitAsync(submission, CancellationToken.None);
scanId = result.Snapshot.ScanId.Value;
var resolved = await coordinator.TryFindByTargetAsync(reference, digest, CancellationToken.None);
Assert.NotNull(resolved);
var repository = scope.ServiceProvider.GetRequiredService<RubyPackageInventoryRepository>();
var document = new RubyPackageInventoryDocument
{
ScanId = scanId,
ImageDigest = digest,
GeneratedAtUtc = DateTime.UtcNow.AddMinutes(-2),
Packages = new List<RubyPackageDocument>
{
new()
{
Id = "pkg:gem/sidekiq@7.2.1",
Name = "sidekiq",
Version = "7.2.1",
Source = "rubygems"
}
}
};
await repository.UpsertAsync(document, CancellationToken.None);
}
using var client = factory.CreateClient();
var encodedReference = Uri.EscapeDataString(reference);
var response = await client.GetAsync($"/api/v1/scans/{encodedReference}/ruby-packages");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<RubyPackagesResponse>();
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Single(payload.Packages);
Assert.Equal("sidekiq", payload.Packages[0].Name);
}
[Fact]
public async Task GetEntryTraceAllowsDigestIdentifier()
{
const string reference = "ghcr.io/demo/app:2.0.0";
const string digest = "sha256:111122223333444455556666777788889999aaaabbbbccccddddeeeeffff0000";
var generatedAt = DateTimeOffset.UtcNow.AddMinutes(-1);
var plan = new EntryTracePlan(
ImmutableArray.Create("/app/bin/app"),
ImmutableDictionary<string, string>.Empty,
"/workspace",
"appuser",
"/app/bin/app",
EntryTraceTerminalType.Native,
"ruby",
0.85,
ImmutableDictionary<string, string>.Empty);
var terminal = new EntryTraceTerminal(
"/app/bin/app",
EntryTraceTerminalType.Native,
"ruby",
0.85,
ImmutableDictionary<string, string>.Empty,
"appuser",
"/workspace",
ImmutableArray<string>.Empty);
var graph = new EntryTraceGraph(
EntryTraceOutcome.Resolved,
ImmutableArray<EntryTraceNode>.Empty,
ImmutableArray<EntryTraceEdge>.Empty,
ImmutableArray<EntryTraceDiagnostic>.Empty,
ImmutableArray.Create(plan),
ImmutableArray.Create(terminal));
var ndjson = EntryTraceNdjsonWriter.Serialize(
graph,
new EntryTraceNdjsonMetadata("scan-placeholder", digest, generatedAt));
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory(configureServices: services =>
{
services.AddSingleton<IEntryTraceResultStore, RecordingEntryTraceResultStore>();
});
string? canonicalScanId = null;
using (var scope = factory.Services.CreateScope())
{
var coordinator = scope.ServiceProvider.GetRequiredService<IScanCoordinator>();
var submission = new ScanSubmission(
new ScanTarget(reference, digest),
Force: false,
ClientRequestId: null,
Metadata: new Dictionary<string, string>());
var result = await coordinator.SubmitAsync(submission, CancellationToken.None);
canonicalScanId = result.Snapshot.ScanId.Value;
var store = (RecordingEntryTraceResultStore)scope.ServiceProvider.GetRequiredService<IEntryTraceResultStore>();
store.Set(new EntryTraceResult(canonicalScanId, digest, generatedAt, graph, ndjson));
}
using var client = factory.CreateClient();
var encodedDigest = Uri.EscapeDataString(digest);
var response = await client.GetAsync($"/api/v1/scans/{encodedDigest}/entrytrace");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<EntryTraceResponse>();
Assert.NotNull(payload);
Assert.Equal(canonicalScanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Equal(graph.Plans.Length, payload.Graph.Plans.Length);
}
}
internal sealed class TestSurfaceSecretsScope : IDisposable
{
private readonly string? _provider;
private readonly string? _root;
public TestSurfaceSecretsScope()
{
{
_provider = Environment.GetEnvironmentVariable("SURFACE_SECRETS_PROVIDER");
_root = Environment.GetEnvironmentVariable("SURFACE_SECRETS_ROOT");
Environment.SetEnvironmentVariable("SURFACE_SECRETS_PROVIDER", "file");
Environment.SetEnvironmentVariable("SURFACE_SECRETS_ROOT", Path.GetTempPath());
}
public void Dispose()
{
Environment.SetEnvironmentVariable("SURFACE_SECRETS_PROVIDER", _provider);
Environment.SetEnvironmentVariable("SURFACE_SECRETS_ROOT", _root);
}
}
internal sealed class RecordingEntryTraceResultStore : IEntryTraceResultStore
{
private readonly ConcurrentDictionary<string, EntryTraceResult> _entries = new(StringComparer.OrdinalIgnoreCase);
public void Set(EntryTraceResult result)
{
if (result is null)
{
throw new ArgumentNullException(nameof(result));
}
_entries[result.ScanId] = result;
}
public Task<EntryTraceResult?> GetAsync(string scanId, CancellationToken cancellationToken)
{
if (_entries.TryGetValue(scanId, out var value))
{
return Task.FromResult<EntryTraceResult?>(value);
}
return Task.FromResult<EntryTraceResult?>(null);
}
public Task StoreAsync(EntryTraceResult result, CancellationToken cancellationToken)
{
Set(result);
return Task.CompletedTask;
}
}


@@ -1,665 +1,84 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http.Json;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc.Testing;
using Microsoft.AspNetCore.TestHost;
using Microsoft.Extensions.DependencyInjection;
using StellaOps.Scanner.Core.Contracts;
using StellaOps.Scanner.EntryTrace;
using StellaOps.Scanner.EntryTrace.Serialization;
using StellaOps.Scanner.Storage.Catalog;
using StellaOps.Scanner.Storage.Repositories;
using StellaOps.Scanner.Storage.ObjectStore;
using StellaOps.Scanner.WebService.Contracts;
using StellaOps.Scanner.WebService.Domain;
using StellaOps.Scanner.WebService.Services;
using Xunit;
namespace StellaOps.Scanner.WebService.Tests;
public sealed partial class ScansEndpointsTests
{
[Fact]
public async Task SubmitScanReturnsAcceptedAndStatusRetrievable()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "ghcr.io/demo/app:1.0.0" },
Force = false
};
var response = await client.PostAsJsonAsync("/api/v1/scans", request);
Assert.Equal(HttpStatusCode.Accepted, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(payload);
Assert.False(string.IsNullOrWhiteSpace(payload!.ScanId));
Assert.Equal("Pending", payload.Status);
Assert.True(payload.Created);
Assert.False(string.IsNullOrWhiteSpace(payload.Location));
var statusResponse = await client.GetAsync(payload.Location);
Assert.Equal(HttpStatusCode.OK, statusResponse.StatusCode);
var status = await statusResponse.Content.ReadFromJsonAsync<ScanStatusResponse>();
Assert.NotNull(status);
Assert.Equal(payload.ScanId, status!.ScanId);
Assert.Equal("Pending", status.Status);
Assert.Equal("ghcr.io/demo/app:1.0.0", status.Image.Reference);
}
[Fact]
public async Task SubmitScanIsDeterministicForIdenticalPayloads()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "registry.example.com/acme/app:latest" },
Force = false,
ClientRequestId = "client-123",
Metadata = new Dictionary<string, string> { ["origin"] = "unit-test" }
};
var first = await client.PostAsJsonAsync("/api/v1/scans", request);
var firstPayload = await first.Content.ReadFromJsonAsync<ScanSubmitResponse>();
var second = await client.PostAsJsonAsync("/api/v1/scans", request);
var secondPayload = await second.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(firstPayload);
Assert.NotNull(secondPayload);
Assert.Equal(firstPayload!.ScanId, secondPayload!.ScanId);
Assert.True(firstPayload.Created);
Assert.False(secondPayload.Created);
}
[Fact]
public async Task ScanStatusIncludesSurfacePointersWhenArtifactsExist()
{
const string digest = "sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef";
var digestValue = digest.Split(':', 2)[1];
using var factory = new ScannerApplicationFactory();
const string manifestDigest = "sha256:b2efc2d1f8b042b7f168bcb7d4e2f8e91d36b8306bd855382c5f847efc2c1111";
const string graphDigest = "sha256:9a0d4f8c7b6a5e4d3c2b1a0f9e8d7c6b5a4f3e2d1c0b9a8f7e6d5c4b3a291819";
const string ndjsonDigest = "sha256:3f2e1d0c9b8a7f6e5d4c3b2a1908f7e6d5c4b3a29181726354433221100ffeec";
const string fragmentsDigest = "sha256:aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55aa55";
using (var scope = factory.Services.CreateScope())
{
var artifactRepository = scope.ServiceProvider.GetRequiredService<ArtifactRepository>();
var linkRepository = scope.ServiceProvider.GetRequiredService<LinkRepository>();
var now = DateTime.UtcNow;
async Task InsertAsync(
ArtifactDocumentType type,
ArtifactDocumentFormat format,
string artifactDigest,
string mediaType,
string ttlClass)
{
var artifactId = CatalogIdFactory.CreateArtifactId(type, artifactDigest);
var document = new ArtifactDocument
{
Id = artifactId,
Type = type,
Format = format,
MediaType = mediaType,
BytesSha256 = artifactDigest,
SizeBytes = 2048,
Immutable = true,
RefCount = 1,
TtlClass = ttlClass,
CreatedAtUtc = now,
UpdatedAtUtc = now
};
await artifactRepository.UpsertAsync(document, CancellationToken.None).ConfigureAwait(false);
var link = new LinkDocument
{
Id = CatalogIdFactory.CreateLinkId(LinkSourceType.Image, digest, artifactId),
FromType = LinkSourceType.Image,
FromDigest = digest,
ArtifactId = artifactId,
CreatedAtUtc = now
};
await linkRepository.UpsertAsync(link, CancellationToken.None).ConfigureAwait(false);
}
await InsertAsync(
ArtifactDocumentType.ImageBom,
ArtifactDocumentFormat.CycloneDxJson,
digest,
"application/vnd.cyclonedx+json; version=1.6; view=inventory",
"default").ConfigureAwait(false);
await InsertAsync(
ArtifactDocumentType.SurfaceManifest,
ArtifactDocumentFormat.SurfaceManifestJson,
manifestDigest,
"application/vnd.stellaops.surface.manifest+json",
"surface.manifest").ConfigureAwait(false);
await InsertAsync(
ArtifactDocumentType.SurfaceEntryTrace,
ArtifactDocumentFormat.EntryTraceGraphJson,
graphDigest,
"application/json",
"surface.payload").ConfigureAwait(false);
await InsertAsync(
ArtifactDocumentType.SurfaceEntryTrace,
ArtifactDocumentFormat.EntryTraceNdjson,
ndjsonDigest,
"application/x-ndjson",
"surface.payload").ConfigureAwait(false);
await InsertAsync(
ArtifactDocumentType.SurfaceLayerFragment,
ArtifactDocumentFormat.ComponentFragmentJson,
fragmentsDigest,
"application/json",
"surface.payload").ConfigureAwait(false);
}
using var client = factory.CreateClient();
var submitRequest = new ScanSubmitRequest
{
Image = new ScanImageDescriptor
{
Digest = digest
}
};
var submitResponse = await client.PostAsJsonAsync("/api/v1/scans", submitRequest);
submitResponse.EnsureSuccessStatusCode();
var submission = await submitResponse.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(submission);
var statusResponse = await client.GetAsync($"/api/v1/scans/{submission!.ScanId}");
statusResponse.EnsureSuccessStatusCode();
var status = await statusResponse.Content.ReadFromJsonAsync<ScanStatusResponse>();
Assert.NotNull(status);
Assert.NotNull(status!.Surface);
var surface = status.Surface!;
Assert.Equal("default", surface.Tenant);
Assert.False(string.IsNullOrWhiteSpace(surface.ManifestDigest));
Assert.NotNull(surface.ManifestUri);
Assert.Contains("cas://scanner-artifacts/", surface.ManifestUri, StringComparison.Ordinal);
var manifest = surface.Manifest;
Assert.Equal(digest, manifest.ImageDigest);
Assert.Equal(surface.Tenant, manifest.Tenant);
Assert.NotEqual(default, manifest.GeneratedAt);
var artifactsByKind = manifest.Artifacts.ToDictionary(a => a.Kind, StringComparer.Ordinal);
Assert.Equal(5, artifactsByKind.Count);
static string BuildUri(ArtifactDocumentType type, ArtifactDocumentFormat format, string digestValue)
=> $"cas://scanner-artifacts/{ArtifactObjectKeyBuilder.Build(type, format, digestValue, "scanner")}";
var inventory = artifactsByKind["sbom-inventory"];
Assert.Equal(digest, inventory.Digest);
Assert.Equal("cdx-json", inventory.Format);
Assert.Equal("application/vnd.cyclonedx+json; version=1.6; view=inventory", inventory.MediaType);
Assert.Equal("inventory", inventory.View);
Assert.Equal(BuildUri(ArtifactDocumentType.ImageBom, ArtifactDocumentFormat.CycloneDxJson, digest), inventory.Uri);
var manifestArtifact = artifactsByKind["surface.manifest"];
Assert.Equal(manifestDigest, manifestArtifact.Digest);
Assert.Equal("surface.manifest", manifestArtifact.Format);
Assert.Equal("application/vnd.stellaops.surface.manifest+json", manifestArtifact.MediaType);
Assert.Null(manifestArtifact.View);
Assert.Equal(BuildUri(ArtifactDocumentType.SurfaceManifest, ArtifactDocumentFormat.SurfaceManifestJson, manifestDigest), manifestArtifact.Uri);
var graphArtifact = artifactsByKind["entrytrace.graph"];
Assert.Equal(graphDigest, graphArtifact.Digest);
Assert.Equal("entrytrace.graph", graphArtifact.Format);
Assert.Equal("application/json", graphArtifact.MediaType);
Assert.Null(graphArtifact.View);
Assert.Equal(BuildUri(ArtifactDocumentType.SurfaceEntryTrace, ArtifactDocumentFormat.EntryTraceGraphJson, graphDigest), graphArtifact.Uri);
var ndjsonArtifact = artifactsByKind["entrytrace.ndjson"];
Assert.Equal(ndjsonDigest, ndjsonArtifact.Digest);
Assert.Equal("entrytrace.ndjson", ndjsonArtifact.Format);
Assert.Equal("application/x-ndjson", ndjsonArtifact.MediaType);
Assert.Null(ndjsonArtifact.View);
Assert.Equal(BuildUri(ArtifactDocumentType.SurfaceEntryTrace, ArtifactDocumentFormat.EntryTraceNdjson, ndjsonDigest), ndjsonArtifact.Uri);
var fragmentsArtifact = artifactsByKind["layer.fragments"];
Assert.Equal(fragmentsDigest, fragmentsArtifact.Digest);
Assert.Equal("layer.fragments", fragmentsArtifact.Format);
Assert.Equal("application/json", fragmentsArtifact.MediaType);
Assert.Equal("inventory", fragmentsArtifact.View);
Assert.Equal(BuildUri(ArtifactDocumentType.SurfaceLayerFragment, ArtifactDocumentFormat.ComponentFragmentJson, fragmentsDigest), fragmentsArtifact.Uri);
}
[Fact]
public async Task SubmitScanValidatesImageDescriptor()
{
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new
{
image = new { reference = "", digest = "" }
};
var response = await client.PostAsJsonAsync("/api/v1/scans", request);
Assert.Equal(HttpStatusCode.BadRequest, response.StatusCode);
}
[Fact]
public async Task SubmitScanPropagatesRequestAbortedToken()
{
RecordingCoordinator coordinator = null!;
using var factory = new ScannerApplicationFactory(configuration =>
{
configuration["scanner:authority:enabled"] = "false";
}, services =>
{
services.AddSingleton<IScanCoordinator>(sp =>
{
coordinator = new RecordingCoordinator(
sp.GetRequiredService<IHttpContextAccessor>(),
sp.GetRequiredService<TimeProvider>(),
sp.GetRequiredService<IScanProgressPublisher>());
return coordinator;
});
});
using var client = factory.CreateClient(new WebApplicationFactoryClientOptions
{
AllowAutoRedirect = false
});
var cts = new CancellationTokenSource();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "example.com/demo:1.0" }
};
var response = await client.PostAsJsonAsync("/api/v1/scans", request, cts.Token);
Assert.Equal(HttpStatusCode.Accepted, response.StatusCode);
Assert.NotNull(coordinator);
Assert.True(coordinator.TokenMatched);
Assert.True(coordinator.LastToken.CanBeCanceled);
}
[Fact]
public async Task EntryTraceEndpointReturnsStoredResult()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var scanId = $"scan-entrytrace-{Guid.NewGuid():n}";
var graph = new EntryTraceGraph(
EntryTraceOutcome.Resolved,
ImmutableArray<EntryTraceNode>.Empty,
ImmutableArray<EntryTraceEdge>.Empty,
ImmutableArray<EntryTraceDiagnostic>.Empty,
ImmutableArray.Create(new EntryTracePlan(
ImmutableArray.Create("/bin/bash", "-lc", "./start.sh"),
ImmutableDictionary<string, string>.Empty,
"/workspace",
"root",
"/bin/bash",
EntryTraceTerminalType.Script,
"bash",
0.9,
ImmutableDictionary<string, string>.Empty)),
ImmutableArray.Create(new EntryTraceTerminal(
"/bin/bash",
EntryTraceTerminalType.Script,
"bash",
0.9,
ImmutableDictionary<string, string>.Empty,
"root",
"/workspace",
ImmutableArray<string>.Empty)));
var ndjson = new List<string> { "{\"kind\":\"entry\"}" };
var response = await client.PostAsJsonAsync("/api/v1/scans", new
{
image = new { reference = string.Empty, digest = string.Empty }
});
using (var scope = factory.Services.CreateScope())
{
var repository = scope.ServiceProvider.GetRequiredService<EntryTraceRepository>();
await repository.UpsertAsync(new EntryTraceDocument
{
ScanId = scanId,
ImageDigest = "sha256:entrytrace",
GeneratedAtUtc = DateTime.UtcNow,
GraphJson = EntryTraceGraphSerializer.Serialize(graph),
Ndjson = ndjson
}, CancellationToken.None).ConfigureAwait(false);
}
using var client = factory.CreateClient();
var response = await client.GetAsync($"/api/v1/scans/{scanId}/entrytrace");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<EntryTraceResponse>(SerializerOptions, CancellationToken.None);
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal("sha256:entrytrace", payload.ImageDigest);
Assert.Equal(graph.Outcome, payload.Graph.Outcome);
Assert.Single(payload.Graph.Plans);
Assert.Equal("/bin/bash", payload.Graph.Plans[0].TerminalPath);
Assert.Single(payload.Graph.Terminals);
Assert.Equal(ndjson, payload.Ndjson);
}
[Fact]
public async Task RubyPackagesEndpointReturnsNotFoundWhenMissing()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var response = await client.GetAsync("/api/v1/scans/scan-ruby-missing/ruby-packages");
Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
}
[Fact]
public async Task RubyPackagesEndpointReturnsInventory()
{
const string scanId = "scan-ruby-existing";
const string digest = "sha256:feedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedfacefeedface";
var generatedAt = DateTime.UtcNow.AddMinutes(-10);
using var factory = new ScannerApplicationFactory();
using (var scope = factory.Services.CreateScope())
{
var repository = scope.ServiceProvider.GetRequiredService<RubyPackageInventoryRepository>();
var document = new RubyPackageInventoryDocument
{
ScanId = scanId,
ImageDigest = digest,
GeneratedAtUtc = generatedAt,
Packages = new List<RubyPackageDocument>
{
new()
{
Id = "pkg:gem/rack@3.1.0",
Name = "rack",
Version = "3.1.0",
Source = "rubygems",
Platform = "ruby",
Groups = new List<string> { "default" },
RuntimeUsed = true,
Provenance = new RubyPackageProvenance("rubygems", "Gemfile.lock", "Gemfile.lock")
}
}
};
await repository.UpsertAsync(document, CancellationToken.None).ConfigureAwait(false);
}
using var client = factory.CreateClient();
var response = await client.GetAsync($"/api/v1/scans/{scanId}/ruby-packages");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<RubyPackagesResponse>();
Assert.NotNull(payload);
Assert.Equal(scanId, payload!.ScanId);
Assert.Equal(digest, payload.ImageDigest);
Assert.Single(payload.Packages);
Assert.Equal("rack", payload.Packages[0].Name);
Assert.Equal("rubygems", payload.Packages[0].Source);
}
private sealed class RecordingCoordinator : IScanCoordinator
{
private readonly IHttpContextAccessor accessor;
private readonly InMemoryScanCoordinator inner;
public RecordingCoordinator(IHttpContextAccessor accessor, TimeProvider timeProvider, IScanProgressPublisher publisher)
{
this.accessor = accessor;
inner = new InMemoryScanCoordinator(timeProvider, publisher);
}
public CancellationToken LastToken { get; private set; }
public bool TokenMatched { get; private set; }
public async ValueTask<ScanSubmissionResult> SubmitAsync(ScanSubmission submission, CancellationToken cancellationToken)
{
LastToken = cancellationToken;
TokenMatched = accessor.HttpContext?.RequestAborted.Equals(cancellationToken) ?? false;
return await inner.SubmitAsync(submission, cancellationToken);
}
public ValueTask<ScanSnapshot?> GetAsync(ScanId scanId, CancellationToken cancellationToken)
=> inner.GetAsync(scanId, cancellationToken);
}
[Fact]
public async Task ProgressStreamReturnsInitialPendingEvent()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "ghcr.io/demo/app:2.0.0" }
};
var submit = await client.PostAsJsonAsync("/api/v1/scans", request);
var submitPayload = await submit.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(submitPayload);
var response = await client.GetAsync($"/api/v1/scans/{submitPayload!.ScanId}/events?format=jsonl", HttpCompletionOption.ResponseHeadersRead);
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
Assert.Equal("application/x-ndjson", response.Content.Headers.ContentType?.MediaType);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
var line = await reader.ReadLineAsync();
Assert.False(string.IsNullOrWhiteSpace(line));
var envelope = JsonSerializer.Deserialize<ProgressEnvelope>(line!, SerializerOptions);
Assert.NotNull(envelope);
Assert.Equal(submitPayload.ScanId, envelope!.ScanId);
Assert.Equal("Pending", envelope.State);
Assert.Equal(1, envelope.Sequence);
Assert.NotEqual(default, envelope.Timestamp);
}
[Fact]
public async Task ProgressStreamYieldsSubsequentEvents()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "registry.example.com/acme/app:stream" }
};
var submit = await client.PostAsJsonAsync("/api/v1/scans", request);
var submitPayload = await submit.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(submitPayload);
var publisher = factory.Services.GetRequiredService<IScanProgressPublisher>();
var response = await client.GetAsync($"/api/v1/scans/{submitPayload!.ScanId}/events?format=jsonl", HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
var firstLine = await reader.ReadLineAsync();
Assert.NotNull(firstLine);
var firstEnvelope = JsonSerializer.Deserialize<ProgressEnvelope>(firstLine!, SerializerOptions);
Assert.NotNull(firstEnvelope);
Assert.Equal("Pending", firstEnvelope!.State);
_ = Task.Run(async () =>
{
await Task.Delay(50);
publisher.Publish(new ScanId(submitPayload.ScanId), "Running", "worker-started", new Dictionary<string, object?>
{
["stage"] = "download"
});
});
ProgressEnvelope? envelope = null;
string? line;
do
{
line = await reader.ReadLineAsync();
if (line is null)
{
break;
}
if (line.Length == 0)
{
continue;
}
envelope = JsonSerializer.Deserialize<ProgressEnvelope>(line, SerializerOptions);
}
while (envelope is not null && envelope.State == "Pending");
Assert.NotNull(envelope);
Assert.Equal("Running", envelope!.State);
Assert.True(envelope.Sequence >= 2);
Assert.Contains(envelope.Data.Keys, key => key == "stage");
}
[Fact]
public async Task ProgressStreamSupportsServerSentEvents()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "ghcr.io/demo/app:3.0.0" }
};
var submit = await client.PostAsJsonAsync("/api/v1/scans", request);
var submitPayload = await submit.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(submitPayload);
var response = await client.GetAsync($"/api/v1/scans/{submitPayload!.ScanId}/events", HttpCompletionOption.ResponseHeadersRead);
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
Assert.Equal("text/event-stream", response.Content.Headers.ContentType?.MediaType);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
var idLine = await reader.ReadLineAsync();
var eventLine = await reader.ReadLineAsync();
var dataLine = await reader.ReadLineAsync();
var separator = await reader.ReadLineAsync();
Assert.Equal("id: 1", idLine);
Assert.Equal("event: pending", eventLine);
Assert.NotNull(dataLine);
Assert.StartsWith("data: ", dataLine, StringComparison.Ordinal);
Assert.Equal(string.Empty, separator);
var json = dataLine!["data: ".Length..];
var envelope = JsonSerializer.Deserialize<ProgressEnvelope>(json, SerializerOptions);
Assert.NotNull(envelope);
Assert.Equal(submitPayload.ScanId, envelope!.ScanId);
Assert.Equal("Pending", envelope.State);
Assert.Equal(1, envelope.Sequence);
Assert.True(envelope.Timestamp.UtcDateTime <= DateTime.UtcNow);
}
[Fact]
public async Task ProgressStreamDataKeysAreSortedDeterministically()
{
using var factory = new ScannerApplicationFactory();
using var client = factory.CreateClient();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "ghcr.io/demo/app:sorted" }
};
var submit = await client.PostAsJsonAsync("/api/v1/scans", request);
var submitPayload = await submit.Content.ReadFromJsonAsync<ScanSubmitResponse>();
Assert.NotNull(submitPayload);
var publisher = factory.Services.GetRequiredService<IScanProgressPublisher>();
var response = await client.GetAsync($"/api/v1/scans/{submitPayload!.ScanId}/events?format=jsonl", HttpCompletionOption.ResponseHeadersRead);
await using var stream = await response.Content.ReadAsStreamAsync();
using var reader = new StreamReader(stream);
// Drain the initial pending event.
_ = await reader.ReadLineAsync();
_ = Task.Run(async () =>
{
await Task.Delay(25);
publisher.Publish(
new ScanId(submitPayload.ScanId),
"Running",
"stage-change",
new Dictionary<string, object?>
{
["zeta"] = 1,
["alpha"] = 2,
["Beta"] = 3
});
});
string? line;
JsonDocument? document = null;
while ((line = await reader.ReadLineAsync()) is not null)
{
if (string.IsNullOrWhiteSpace(line))
{
continue;
}
var parsed = JsonDocument.Parse(line);
if (parsed.RootElement.TryGetProperty("state", out var state) &&
string.Equals(state.GetString(), "Running", StringComparison.OrdinalIgnoreCase))
{
document = parsed;
break;
}
parsed.Dispose();
}
Assert.NotNull(document);
using (document)
{
var data = document!.RootElement.GetProperty("data");
var names = data.EnumerateObject().Select(p => p.Name).ToArray();
Assert.Equal(new[] { "alpha", "Beta", "zeta" }, names);
}
}
[Fact]
public async Task SubmitScanPropagatesRequestAbortedToken()
{
using var secrets = new TestSurfaceSecretsScope();
RecordingCoordinator coordinator = null!;
using var factory = new ScannerApplicationFactory(configuration =>
{
configuration["scanner:authority:enabled"] = "false";
}, configureServices: services =>
{
services.AddSingleton<IScanCoordinator>(sp =>
{
coordinator = new RecordingCoordinator(
sp.GetRequiredService<IHttpContextAccessor>(),
sp.GetRequiredService<TimeProvider>(),
sp.GetRequiredService<IScanProgressPublisher>());
return coordinator;
});
});
using var client = factory.CreateClient(new WebApplicationFactoryClientOptions
{
AllowAutoRedirect = false
});
using var cts = new CancellationTokenSource();
var request = new ScanSubmitRequest
{
Image = new ScanImageDescriptor { Reference = "example.com/demo:1.0" }
};
var response = await client.PostAsJsonAsync("/api/v1/scans", request, cts.Token);
Assert.Equal(HttpStatusCode.Accepted, response.StatusCode);
Assert.NotNull(coordinator);
Assert.True(coordinator!.TokenMatched);
Assert.True(coordinator.LastToken.CanBeCanceled);
}
[Fact]
public async Task GetEntryTraceReturnsStoredResult()
{
using var secrets = new TestSurfaceSecretsScope();
var scanId = $"scan-{Guid.NewGuid():n}";
var generatedAt = DateTimeOffset.UtcNow;
var plan = new EntryTracePlan(
ImmutableArray.Create("/usr/local/bin/app"),
ImmutableDictionary<string, string>.Empty,
@@ -668,17 +87,19 @@ public sealed class ScansEndpointsTests
"/usr/local/bin/app",
EntryTraceTerminalType.Native,
"go",
0.9,
ImmutableDictionary<string, string>.Empty);
var terminal = new EntryTraceTerminal(
"/usr/local/bin/app",
EntryTraceTerminalType.Native,
"go",
0.9,
ImmutableDictionary<string, string>.Empty,
"appuser",
"/workspace",
ImmutableArray<string>.Empty);
var graph = new EntryTraceGraph(
EntryTraceOutcome.Resolved,
ImmutableArray<EntryTraceNode>.Empty,
@@ -686,59 +107,67 @@ public sealed class ScansEndpointsTests
ImmutableArray<EntryTraceDiagnostic>.Empty,
ImmutableArray.Create(plan),
ImmutableArray.Create(terminal));
var ndjson = EntryTraceNdjsonWriter.Serialize(graph, new EntryTraceNdjsonMetadata(scanId, "sha256:test", generatedAt));
var storedResult = new EntryTraceResult(scanId, "sha256:test", generatedAt, graph, ndjson);
using var factory = new ScannerApplicationFactory(configureServices: services =>
{
services.AddSingleton<IEntryTraceResultStore>(new StubEntryTraceResultStore(storedResult));
});
using var client = factory.CreateClient();
var response = await client.GetAsync($"/api/v1/scans/{scanId}/entrytrace");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<EntryTraceResponse>();
Assert.NotNull(payload);
Assert.Equal(storedResult.ScanId, payload!.ScanId);
Assert.Equal(storedResult.ImageDigest, payload.ImageDigest);
Assert.Equal(storedResult.GeneratedAtUtc, payload.GeneratedAt);
Assert.Equal(storedResult.Graph.Plans.Length, payload.Graph.Plans.Length);
Assert.Equal(storedResult.Ndjson, payload.Ndjson);
}
[Fact]
public async Task GetEntryTraceReturnsNotFoundWhenMissing()
{
using var secrets = new TestSurfaceSecretsScope();
using var factory = new ScannerApplicationFactory(configureServices: services =>
{
services.AddSingleton<IEntryTraceResultStore>(new StubEntryTraceResultStore(null));
});
using var client = factory.CreateClient();
var response = await client.GetAsync("/api/v1/scans/scan-missing/entrytrace");
Assert.Equal(HttpStatusCode.NotFound, response.StatusCode);
}
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
{
Converters = { new JsonStringEnumConverter() }
};
private sealed record ProgressEnvelope(
string ScanId,
int Sequence,
string State,
string? Message,
DateTimeOffset Timestamp,
string CorrelationId,
Dictionary<string, JsonElement> Data);
private sealed class RecordingCoordinator : IScanCoordinator
{
private readonly IHttpContextAccessor _accessor;
private readonly InMemoryScanCoordinator _inner;
public RecordingCoordinator(IHttpContextAccessor accessor, TimeProvider timeProvider, IScanProgressPublisher publisher)
{
_accessor = accessor;
_inner = new InMemoryScanCoordinator(timeProvider, publisher);
}
public CancellationToken LastToken { get; private set; }
public bool TokenMatched { get; private set; }
public async ValueTask<ScanSubmissionResult> SubmitAsync(ScanSubmission submission, CancellationToken cancellationToken)
{
LastToken = cancellationToken;
TokenMatched = _accessor.HttpContext?.RequestAborted.Equals(cancellationToken) ?? false;
return await _inner.SubmitAsync(submission, cancellationToken);
}
public ValueTask<ScanSnapshot?> GetAsync(ScanId scanId, CancellationToken cancellationToken)
=> _inner.GetAsync(scanId, cancellationToken);
public ValueTask<ScanSnapshot?> TryFindByTargetAsync(string? reference, string? digest, CancellationToken cancellationToken)
=> _inner.TryFindByTargetAsync(reference, digest, cancellationToken);
}
private sealed class StubEntryTraceResultStore : IEntryTraceResultStore
{
@@ -760,8 +189,6 @@ public sealed class ScansEndpointsTests
}
public Task StoreAsync(EntryTraceResult result, CancellationToken cancellationToken)
=> Task.CompletedTask;
}
}

View File

@@ -0,0 +1,23 @@
using System;
namespace StellaOps.Scanner.WebService.Tests;
internal sealed class TestSurfaceSecretsScope : IDisposable
{
private readonly string? _provider;
private readonly string? _root;
public TestSurfaceSecretsScope()
{
_provider = Environment.GetEnvironmentVariable("SURFACE_SECRETS_PROVIDER");
_root = Environment.GetEnvironmentVariable("SURFACE_SECRETS_ROOT");
Environment.SetEnvironmentVariable("SURFACE_SECRETS_PROVIDER", "file");
Environment.SetEnvironmentVariable("SURFACE_SECRETS_ROOT", Path.GetTempPath());
}
public void Dispose()
{
Environment.SetEnvironmentVariable("SURFACE_SECRETS_PROVIDER", _provider);
Environment.SetEnvironmentVariable("SURFACE_SECRETS_ROOT", _root);
}
}

View File

@@ -0,0 +1,99 @@
using System.Collections.Generic;
using System.Linq;
using MongoDB.Bson;
using StellaOps.Provenance.Mongo;
using Xunit;
namespace StellaOps.Events.Mongo.Tests;
public sealed class ProvenanceMongoExtensionsTests
{
[Fact]
public void AttachDsseProvenance_WritesNestedDocuments()
{
var document = new BsonDocument
{
{ "kind", "VEX" },
{ "subject", new BsonDocument("digest", new BsonDocument("sha256", "sha256:abc")) }
};
var dsse = new DsseProvenance
{
EnvelopeDigest = "sha256:deadbeef",
PayloadType = "application/vnd.in-toto+json",
Key = new DsseKeyInfo
{
KeyId = "cosign:SHA256-PKIX:TEST",
Issuer = "fulcio",
Algo = "ECDSA"
},
Rekor = new DsseRekorInfo
{
LogIndex = 123,
Uuid = Guid.Parse("2d4d5f7c-1111-4a01-b9cb-aa42022a0a8c").ToString(),
IntegratedTime = 1_699_999_999,
MirrorSeq = 10
},
Chain = new List<DsseChainLink>
{
new()
{
Type = "build",
Id = "att:build#1",
Digest = "sha256:chain"
}
}
};
var trust = new TrustInfo
{
Verified = true,
Verifier = "Authority@stella",
Witnesses = 2,
PolicyScore = 0.9
};
document.AttachDsseProvenance(dsse, trust);
var provenanceDoc = document["provenance"].AsBsonDocument["dsse"].AsBsonDocument;
Assert.Equal("sha256:deadbeef", provenanceDoc["envelopeDigest"].AsString);
Assert.Equal(123, provenanceDoc["rekor"].AsBsonDocument["logIndex"].AsInt64);
Assert.Equal("att:build#1", provenanceDoc["chain"].AsBsonArray.Single().AsBsonDocument["id"].AsString);
var trustDoc = document["trust"].AsBsonDocument;
Assert.True(trustDoc["verified"].AsBoolean);
Assert.Equal(2, trustDoc["witnesses"].AsInt32);
Assert.Equal(0.9, trustDoc["policyScore"].AsDouble);
}
[Fact]
public void BuildProvenVexFilter_TargetsKindSubjectAndVerified()
{
var filter = ProvenanceMongoExtensions.BuildProvenVexFilter("VEX", "sha256:123");
Assert.Equal("VEX", filter["kind"].AsString);
Assert.Equal("sha256:123", filter["subject.digest.sha256"].AsString);
Assert.True(filter.Contains("provenance.dsse.rekor.logIndex"));
Assert.True(filter.Contains("trust.verified"));
}
[Fact]
public void BuildUnprovenEvidenceFilter_FlagsMissingTrustOrRekor()
{
var filter = ProvenanceMongoExtensions.BuildUnprovenEvidenceFilter(new[] { "SBOM", "VEX" });
var kindClause = filter["kind"].AsBsonDocument["$in"].AsBsonArray.Select(v => v.AsString).ToArray();
Assert.Contains("SBOM", kindClause);
Assert.Contains("VEX", kindClause);
var orConditions = filter["$or"].AsBsonArray;
Assert.Equal(2, orConditions.Count);
var trustCondition = orConditions[0].AsBsonDocument;
Assert.Equal("$ne", trustCondition["trust.verified"].AsBsonDocument.Elements.Single().Name);
var rekorCondition = orConditions[1].AsBsonDocument;
Assert.Equal("$exists", rekorCondition["provenance.dsse.rekor.logIndex"].AsBsonDocument.Elements.Single().Name);
Assert.False(rekorCondition["provenance.dsse.rekor.logIndex"].AsBsonDocument["$exists"].AsBoolean);
}
}

View File

@@ -0,0 +1,21 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net10.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<IsPackable>false</IsPackable>
<IsTestProject>true</IsTestProject>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.14.0" />
<PackageReference Include="MongoDB.Driver" Version="3.5.0" />
<PackageReference Include="xunit" Version="2.9.2" />
<PackageReference Include="xunit.runner.visualstudio" Version="2.8.2" />
<PackageReference Include="coverlet.collector" Version="6.0.4" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="../StellaOps.Events.Mongo/StellaOps.Events.Mongo.csproj" />
</ItemGroup>
</Project>

View File

@@ -0,0 +1,82 @@
using System.IO;
using System.Text.Json;
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Provenance.Mongo;
namespace StellaOps.Events.Mongo;
public sealed class EventProvenanceWriter
{
private readonly IMongoCollection<BsonDocument> _events;
public EventProvenanceWriter(IMongoDatabase database, string collectionName = "events")
{
if (database is null) throw new ArgumentNullException(nameof(database));
if (string.IsNullOrWhiteSpace(collectionName)) throw new ArgumentException("Collection name is required", nameof(collectionName));
_events = database.GetCollection<BsonDocument>(collectionName);
}
public Task AttachAsync(string eventId, DsseProvenance dsse, TrustInfo trust, CancellationToken cancellationToken = default)
{
var filter = BuildIdFilter(eventId);
return AttachAsync(filter, dsse, trust, cancellationToken);
}
public async Task AttachAsync(FilterDefinition<BsonDocument> filter, DsseProvenance dsse, TrustInfo trust, CancellationToken cancellationToken = default)
{
if (filter is null) throw new ArgumentNullException(nameof(filter));
if (dsse is null) throw new ArgumentNullException(nameof(dsse));
if (trust is null) throw new ArgumentNullException(nameof(trust));
var update = BuildUpdateDefinition(dsse, trust);
var result = await _events.UpdateOneAsync(filter, update, cancellationToken: cancellationToken).ConfigureAwait(false);
if (result.MatchedCount == 0)
{
throw new InvalidOperationException("Target event document not found.");
}
}
public async Task AttachFromJsonAsync(string eventId, string provenanceMetaJson, TrustInfo? trustOverride = null, CancellationToken cancellationToken = default)
{
if (string.IsNullOrWhiteSpace(provenanceMetaJson)) throw new ArgumentException("JSON payload is required.", nameof(provenanceMetaJson));
using var document = JsonDocument.Parse(provenanceMetaJson);
await AttachFromJsonElementAsync(eventId, document.RootElement, trustOverride, cancellationToken).ConfigureAwait(false);
}
public async Task AttachFromJsonAsync(string eventId, Stream provenanceMetaStream, TrustInfo? trustOverride = null, CancellationToken cancellationToken = default)
{
if (provenanceMetaStream is null) throw new ArgumentNullException(nameof(provenanceMetaStream));
var (dsse, trust) = await ProvenanceJsonParser.ParseAsync(provenanceMetaStream, trustOverride, cancellationToken).ConfigureAwait(false);
await AttachAsync(eventId, dsse, trust, cancellationToken).ConfigureAwait(false);
}
private Task AttachFromJsonElementAsync(string eventId, JsonElement root, TrustInfo? trustOverride, CancellationToken cancellationToken)
{
var (dsse, trust) = ProvenanceJsonParser.Parse(root, trustOverride);
return AttachAsync(eventId, dsse, trust, cancellationToken);
}
private static FilterDefinition<BsonDocument> BuildIdFilter(string eventId)
{
if (string.IsNullOrWhiteSpace(eventId)) throw new ArgumentException("Event identifier is required.", nameof(eventId));
return ObjectId.TryParse(eventId, out var objectId)
? Builders<BsonDocument>.Filter.Eq("_id", objectId)
: Builders<BsonDocument>.Filter.Eq("_id", eventId);
}
private static UpdateDefinition<BsonDocument> BuildUpdateDefinition(DsseProvenance dsse, TrustInfo trust)
{
var temp = new BsonDocument();
temp.AttachDsseProvenance(dsse, trust);
return Builders<BsonDocument>.Update
.Set("provenance", temp["provenance"])
.Set("trust", temp["trust"]);
}
}
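A minimal usage sketch for the writer above. The connection string, database name, and event id are placeholders, and a reachable MongoDB instance is assumed:

```csharp
using MongoDB.Driver;
using StellaOps.Events.Mongo;
using StellaOps.Provenance.Mongo;

// Hypothetical wiring; substitute real deployment values.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("stellaops");
var writer = new EventProvenanceWriter(database);

var dsse = new DsseProvenance
{
    EnvelopeDigest = "sha256:deadbeef",
    PayloadType = "application/vnd.in-toto+json",
    Key = new DsseKeyInfo { KeyId = "cosign:SHA256-PKIX:TEST", Algo = "ECDSA" }
};
var trust = new TrustInfo { Verified = true, Verifier = "Authority@stella" };

// AttachAsync throws InvalidOperationException when no event matches the id filter.
await writer.AttachAsync("673f0c2ab7f1e23d9c0a1111", dsse, trust);
```

Note that `BuildIdFilter` accepts either a parseable ObjectId or a plain string `_id`, so the id format shown here is only one possibility.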

View File

@@ -0,0 +1,25 @@
using MongoDB.Bson;
using MongoDB.Driver;
using StellaOps.Provenance.Mongo;
namespace StellaOps.Events.Mongo;
public sealed class EventWriter
{
private readonly IMongoCollection<BsonDocument> _events;
public EventWriter(IMongoDatabase db, string collectionName = "events")
{
_events = db.GetCollection<BsonDocument>(collectionName);
}
public async Task AppendEventAsync(
BsonDocument eventDoc,
DsseProvenance dsse,
TrustInfo trust,
CancellationToken ct = default)
{
eventDoc.AttachDsseProvenance(dsse, trust);
await _events.InsertOneAsync(eventDoc, cancellationToken: ct);
}
}

View File

@@ -0,0 +1,45 @@
using MongoDB.Bson;
using MongoDB.Driver;
namespace StellaOps.Events.Mongo;
public static class MongoIndexes
{
public static Task EnsureEventIndexesAsync(IMongoDatabase db, CancellationToken ct = default)
{
var events = db.GetCollection<BsonDocument>("events");
var models = new[]
{
new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys
.Ascending("subject.digest.sha256")
.Ascending("kind")
.Ascending("provenance.dsse.rekor.logIndex"),
new CreateIndexOptions
{
Name = "events_by_subject_kind_provenance"
}),
new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys
.Ascending("kind")
.Ascending("trust.verified")
.Ascending("provenance.dsse.rekor.logIndex"),
new CreateIndexOptions
{
Name = "events_unproven_by_kind"
}),
new CreateIndexModel<BsonDocument>(
Builders<BsonDocument>.IndexKeys
.Ascending("provenance.dsse.rekor.logIndex"),
new CreateIndexOptions
{
Name = "events_by_rekor_logindex"
})
};
return events.Indexes.CreateManyAsync(models, ct);
}
}
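Index creation with stable names and identical key specs is idempotent in MongoDB, so the helper above can run on every service start; a sketch of a startup hook, with the URI and database name as placeholders:

```csharp
using MongoDB.Driver;
using StellaOps.Events.Mongo;

// Hypothetical startup wiring; reuse the application's existing IMongoDatabase where available.
var client = new MongoClient("mongodb://localhost:27017");
var database = client.GetDatabase("stellaops");
await MongoIndexes.EnsureEventIndexesAsync(database);
```

If an index with the same name but different keys or options already exists, `CreateManyAsync` fails, so name changes require an explicit drop first.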

View File

@@ -0,0 +1,17 @@
<Project Sdk="Microsoft.NET.Sdk">
<PropertyGroup>
<TargetFramework>net10.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="MongoDB.Driver" Version="3.5.0" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="../__Libraries/StellaOps.Provenance.Mongo/StellaOps.Provenance.Mongo.csproj" />
</ItemGroup>
</Project>

View File

@@ -0,0 +1,43 @@
using System.Collections.Generic;
using MongoDB.Bson;
namespace StellaOps.Provenance.Mongo;
public sealed class DsseKeyInfo
{
public string KeyId { get; set; } = default!; // e.g. "cosign:SHA256-PKIX:..."
public string? Issuer { get; set; } // e.g. Fulcio issuer, KMS URI, X.509 CN
public string? Algo { get; set; } // "ECDSA" | "RSA" | "Ed25519" | "Dilithium"
}
public sealed class DsseRekorInfo
{
public long LogIndex { get; set; } // Rekor log index
public string Uuid { get; set; } = default!; // Rekor entry UUID
public long? IntegratedTime { get; set; } // unix timestamp (seconds)
public long? MirrorSeq { get; set; } // optional mirror sequence in Proof-Market ledger
}
public sealed class DsseChainLink
{
public string Type { get; set; } = default!; // e.g. "build" | "sbom" | "scan"
public string Id { get; set; } = default!; // e.g. "att:build#..."
public string Digest { get; set; } = default!; // sha256 of DSSE envelope or payload
}
public sealed class DsseProvenance
{
public string EnvelopeDigest { get; set; } = default!; // sha256 of envelope (not payload)
public string PayloadType { get; set; } = default!; // "application/vnd.in-toto+json"
public DsseKeyInfo Key { get; set; } = new();
public DsseRekorInfo? Rekor { get; set; }
public IReadOnlyCollection<DsseChainLink>? Chain { get; set; }
}
public sealed class TrustInfo
{
public bool Verified { get; set; } // local cryptographic verification
public string? Verifier { get; set; } // e.g. "Authority@stella"
public int? Witnesses { get; set; } // number of verified transparency witnesses
public double? PolicyScore { get; set; } // lattice / policy score (0..1)
}

View File

@@ -0,0 +1,203 @@
using System;
using System.Collections.Generic;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
namespace StellaOps.Provenance.Mongo;
public static class ProvenanceJsonParser
{
public static (DsseProvenance Dsse, TrustInfo Trust) Parse(JsonElement root, TrustInfo? trustOverride = null)
{
var dsse = ParseDsse(root);
var trust = trustOverride ?? ParseTrust(root) ?? throw new InvalidOperationException("Provenance metadata missing trust block.");
return (dsse, trust);
}
public static (DsseProvenance Dsse, TrustInfo Trust) Parse(string json, TrustInfo? trustOverride = null)
{
using var document = JsonDocument.Parse(json);
return Parse(document.RootElement, trustOverride);
}
public static async Task<(DsseProvenance Dsse, TrustInfo Trust)> ParseAsync(
Stream utf8JsonStream,
TrustInfo? trustOverride = null,
CancellationToken cancellationToken = default)
{
var document = await JsonDocument.ParseAsync(utf8JsonStream, cancellationToken: cancellationToken).ConfigureAwait(false);
using (document)
{
return Parse(document.RootElement, trustOverride);
}
}
private static DsseProvenance ParseDsse(JsonElement root)
{
if (!root.TryGetProperty("dsse", out var dsseElement) || dsseElement.ValueKind != JsonValueKind.Object)
{
throw new InvalidOperationException("Provenance metadata missing dsse block.");
}
var keyElement = GetRequiredProperty(dsseElement, "key");
var dsse = new DsseProvenance
{
EnvelopeDigest = GetRequiredString(dsseElement, "envelopeDigest"),
PayloadType = GetRequiredString(dsseElement, "payloadType"),
Key = new DsseKeyInfo
{
KeyId = GetRequiredString(keyElement, "keyId"),
Issuer = GetOptionalString(keyElement, "issuer"),
Algo = GetOptionalString(keyElement, "algo"),
},
Chain = ParseChain(dsseElement)
};
if (dsseElement.TryGetProperty("rekor", out var rekorElement) && rekorElement.ValueKind == JsonValueKind.Object)
{
dsse.Rekor = new DsseRekorInfo
{
LogIndex = GetInt64(rekorElement, "logIndex"),
Uuid = GetRequiredString(rekorElement, "uuid"),
IntegratedTime = GetOptionalInt64(rekorElement, "integratedTime"),
MirrorSeq = GetOptionalInt64(rekorElement, "mirrorSeq")
};
}
return dsse;
}
private static IReadOnlyCollection<DsseChainLink>? ParseChain(JsonElement dsseElement)
{
if (!dsseElement.TryGetProperty("chain", out var chainElement) || chainElement.ValueKind != JsonValueKind.Array || chainElement.GetArrayLength() == 0)
{
return null;
}
var links = new List<DsseChainLink>(chainElement.GetArrayLength());
foreach (var entry in chainElement.EnumerateArray())
{
if (entry.ValueKind != JsonValueKind.Object)
{
continue;
}
var type = GetOptionalString(entry, "type");
var id = GetOptionalString(entry, "id");
var digest = GetOptionalString(entry, "digest");
if (string.IsNullOrEmpty(type) || string.IsNullOrEmpty(id) || string.IsNullOrEmpty(digest))
{
continue;
}
links.Add(new DsseChainLink
{
Type = type,
Id = id,
Digest = digest
});
}
return links.Count == 0 ? null : links;
}
private static TrustInfo? ParseTrust(JsonElement root)
{
if (!root.TryGetProperty("trust", out var trustElement) || trustElement.ValueKind != JsonValueKind.Object)
{
return null;
}
var trust = new TrustInfo
{
Verified = trustElement.TryGetProperty("verified", out var verified) && verified.ValueKind == JsonValueKind.True,
Verifier = GetOptionalString(trustElement, "verifier"),
Witnesses = trustElement.TryGetProperty("witnesses", out var witnessesElement) && witnessesElement.TryGetInt32(out var witnesses)
? witnesses
: null,
PolicyScore = trustElement.TryGetProperty("policyScore", out var scoreElement) && scoreElement.TryGetDouble(out var score)
? score
: null
};
return trust;
}

private static JsonElement GetRequiredProperty(JsonElement parent, string name)
{
if (!parent.TryGetProperty(name, out var property) || property.ValueKind == JsonValueKind.Null)
{
throw new InvalidOperationException($"Provenance metadata missing required property {name}.");
}
return property;
}

private static string GetRequiredString(JsonElement parent, string name)
{
var element = GetRequiredProperty(parent, name);
if (element.ValueKind is JsonValueKind.String)
{
var value = element.GetString();
if (!string.IsNullOrWhiteSpace(value))
{
return value;
}
}
throw new InvalidOperationException($"Provenance metadata property {name} must be a non-empty string.");
}

private static string? GetOptionalString(JsonElement parent, string name)
{
if (!parent.TryGetProperty(name, out var element))
{
return null;
}
return element.ValueKind == JsonValueKind.String ? element.GetString() : null;
}

private static long GetInt64(JsonElement parent, string name)
{
if (!parent.TryGetProperty(name, out var element))
{
throw new InvalidOperationException($"Provenance metadata missing {name}.");
}
// TryGetInt64 throws when ValueKind is not Number, so check the kind before calling it;
// otherwise the string fallback below is unreachable.
if (element.ValueKind == JsonValueKind.Number && element.TryGetInt64(out var value))
{
return value;
}
if (element.ValueKind == JsonValueKind.String && long.TryParse(element.GetString(), out value))
{
return value;
}
throw new InvalidOperationException($"Provenance metadata property {name} must be an integer.");
}

private static long? GetOptionalInt64(JsonElement parent, string name)
{
if (!parent.TryGetProperty(name, out var element))
{
return null;
}
// Same guard as GetInt64: TryGetInt64 throws on non-Number kinds.
if (element.ValueKind == JsonValueKind.Number && element.TryGetInt64(out var value))
{
return value;
}
if (element.ValueKind == JsonValueKind.String && long.TryParse(element.GetString(), out value))
{
return value;
}
return null;
}
}
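For reference, a provenance document that the chain and trust parsers above would accept could look like the sketch below. Field names follow the accessors in this file; all values are illustrative placeholders, not real identifiers or digests:

```json
{
  "chain": [
    { "type": "sbom", "id": "<artifact-id>", "digest": "sha256:<hex>" }
  ],
  "trust": {
    "verified": true,
    "verifier": "<verifier-name>",
    "witnesses": 2,
    "policyScore": 0.95
  }
}
```

Chain entries missing any of `type`, `id`, or `digest` are skipped rather than rejected, and a missing or non-object `trust` property yields a null `TrustInfo`.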


@@ -0,0 +1,142 @@
using MongoDB.Bson;

namespace StellaOps.Provenance.Mongo;

public static class ProvenanceMongoExtensions
{
private const string ProvenanceFieldName = "provenance";
private const string DsseFieldName = "dsse";
private const string TrustFieldName = "trust";
private const string ChainFieldName = "chain";

private static BsonValue StringOrNull(string? value) =>
value is null ? BsonNull.Value : new BsonString(value);
/// <summary>
/// Attach DSSE provenance + trust info to an event document in-place.
/// Designed for generic BsonDocument-based event envelopes.
/// </summary>
public static BsonDocument AttachDsseProvenance(
this BsonDocument eventDoc,
DsseProvenance dsse,
TrustInfo trust)
{
if (eventDoc is null) throw new ArgumentNullException(nameof(eventDoc));
if (dsse is null) throw new ArgumentNullException(nameof(dsse));
if (trust is null) throw new ArgumentNullException(nameof(trust));
var dsseDoc = new BsonDocument
{
{ "envelopeDigest", dsse.EnvelopeDigest },
{ "payloadType", dsse.PayloadType },
{ "key", new BsonDocument
{
{ "keyId", dsse.Key.KeyId },
{ "issuer", StringOrNull(dsse.Key.Issuer) },
{ "algo", StringOrNull(dsse.Key.Algo) }
}
}
};
if (dsse.Rekor is not null)
{
var rekorDoc = new BsonDocument
{
{ "logIndex", dsse.Rekor.LogIndex },
{ "uuid", dsse.Rekor.Uuid }
};
if (dsse.Rekor.IntegratedTime is not null)
rekorDoc.Add("integratedTime", dsse.Rekor.IntegratedTime);
if (dsse.Rekor.MirrorSeq is not null)
rekorDoc.Add("mirrorSeq", dsse.Rekor.MirrorSeq);
dsseDoc.Add("rekor", rekorDoc);
}
if (dsse.Chain is not null && dsse.Chain.Count > 0)
{
var chainArray = new BsonArray();
foreach (var link in dsse.Chain)
{
chainArray.Add(new BsonDocument
{
{ "type", link.Type },
{ "id", link.Id },
{ "digest", link.Digest }
});
}
dsseDoc.Add(ChainFieldName, chainArray);
}
var trustDoc = new BsonDocument
{
{ "verified", trust.Verified },
{ "verifier", StringOrNull(trust.Verifier) }
};
if (trust.Witnesses is not null)
trustDoc.Add("witnesses", trust.Witnesses);
if (trust.PolicyScore is not null)
trustDoc.Add("policyScore", trust.PolicyScore);
var provenanceDoc = new BsonDocument
{
{ DsseFieldName, dsseDoc }
};
eventDoc[ProvenanceFieldName] = provenanceDoc;
eventDoc[TrustFieldName] = trustDoc;
return eventDoc;
}

/// <summary>
/// Helper to query for "cryptographically proven" events:
/// kind + subject.digest.sha256 + presence of Rekor logIndex + trust.verified = true.
/// </summary>
public static BsonDocument BuildProvenVexFilter(
string kind,
string subjectDigestSha256)
{
return new BsonDocument
{
{ "kind", kind },
{ "subject.digest.sha256", subjectDigestSha256 },
{ $"{ProvenanceFieldName}.{DsseFieldName}.rekor.logIndex", new BsonDocument("$exists", true) },
{ $"{TrustFieldName}.verified", true }
};
}

/// <summary>
/// Helper to query for events influencing policy without solid provenance.
/// </summary>
public static BsonDocument BuildUnprovenEvidenceFilter(
IEnumerable<string> kinds)
{
var kindsArray = new BsonArray(kinds);
return new BsonDocument
{
{
"kind", new BsonDocument("$in", kindsArray)
},
{
"$or", new BsonArray
{
new BsonDocument
{
{ $"{TrustFieldName}.verified", new BsonDocument("$ne", true) }
},
new BsonDocument
{
{ $"{ProvenanceFieldName}.{DsseFieldName}.rekor.logIndex",
new BsonDocument("$exists", false) }
}
}
}
};
}
}
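As a sketch of the resulting document shape (values illustrative), an event enriched by `AttachDsseProvenance` ends up with a `provenance.dsse` subdocument and a top-level `trust` subdocument, which is exactly what `BuildProvenVexFilter` keys on:

```json
{
  "kind": "<event-kind>",
  "subject": { "digest": { "sha256": "<hex>" } },
  "provenance": {
    "dsse": {
      "envelopeDigest": "<digest>",
      "payloadType": "<payload-type>",
      "key": { "keyId": "<key-id>", "issuer": null, "algo": "<algo>" },
      "rekor": { "logIndex": 123456, "uuid": "<uuid>", "integratedTime": 1700000000 }
    }
  },
  "trust": { "verified": true, "verifier": "<verifier-name>", "witnesses": 2 }
}
```

`BuildProvenVexFilter(kind, sha)` then translates to `{ "kind": kind, "subject.digest.sha256": sha, "provenance.dsse.rekor.logIndex": { "$exists": true }, "trust.verified": true }`, so only events carrying both a Rekor inclusion and a positive verification outcome match; `BuildUnprovenEvidenceFilter` is its complement over a set of kinds.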
