feat: Enhance MongoDB storage with event publishing and outbox support
- Added `MongoAdvisoryObservationEventPublisher` and `NatsAdvisoryObservationEventPublisher` for event publishing.
- Registered `IAdvisoryObservationEventPublisher` to choose between NATS and MongoDB based on configuration.
- Introduced `MongoAdvisoryObservationEventOutbox` for outbox pattern implementation.
- Updated service collection to include new event publishers and outbox.
- Added a new hosted service `AdvisoryObservationTransportWorker` for processing events.

feat: Update project dependencies

- Added `NATS.Client.Core` package to the project for NATS integration.

test: Add unit tests for AdvisoryLinkset normalization

- Created `AdvisoryLinksetNormalizationConfidenceTests` to validate confidence score calculations.

fix: Adjust confidence assertion in `AdvisoryObservationAggregationTests`

- Updated confidence assertion to allow a range instead of a fixed value.

test: Implement tests for AdvisoryObservationEventFactory

- Added `AdvisoryObservationEventFactoryTests` to ensure correct mapping and hashing of observation events.

chore: Configure test project for Findings Ledger

- Created `Directory.Build.props` for test project configuration.
- Added `StellaOps.Findings.Ledger.Exports.Unit.csproj` for unit tests related to findings ledger exports.

feat: Implement export contracts for findings ledger

- Defined export request and response contracts in `ExportContracts.cs`.
- Created various export item records for findings, VEX, advisories, and SBOMs.

feat: Add export functionality to Findings Ledger Web Service

- Implemented endpoints for exporting findings, VEX, advisories, and SBOMs.
- Integrated `ExportQueryService` for handling export logic and pagination.

test: Add tests for Node language analyzer phase 22

- Implemented `NodePhase22SampleLoaderTests` to validate loading of NDJSON fixtures.
- Created sample NDJSON file for testing.

chore: Set up isolated test environment for Node tests

- Added `node-isolated.runsettings` for isolated test execution.
- Created `node-tests-isolated.sh` script for running tests in isolation.
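The outbox flow this commit introduces can be sketched language-agnostically. The Python below is illustrative only — the actual implementation is C#, and every name here is hypothetical: events are appended with an unset `publishedAt`, and a background worker (the role `AdvisoryObservationTransportWorker` plays) drains unpublished entries, publishes them, then stamps `publishedAt`.

```python
import datetime


class InMemoryOutbox:
    """Hypothetical stand-in for the Mongo-backed outbox collection."""

    def __init__(self):
        self.entries = []

    def append(self, event):
        # publishedAt stays None until the transport worker confirms publish.
        self.entries.append({"event": event, "publishedAt": None})

    def pending(self):
        return [e for e in self.entries if e["publishedAt"] is None]


def drain_outbox(outbox, publish):
    """One worker pass: publish each pending entry, then stamp publishedAt.

    Stamping *after* publish gives at-least-once delivery: a crash between
    publish and stamp causes a re-publish on the next pass, never a lost event.
    """
    for entry in outbox.pending():
        publish(entry["event"])
        entry["publishedAt"] = datetime.datetime.now(datetime.timezone.utc)


# Usage with a fake transport that records published events.
outbox = InMemoryOutbox()
outbox.append({"kind": "advisory.observation.updated@1", "id": "obs-1"})
outbox.append({"kind": "advisory.observation.updated@1", "id": "obs-2"})
sent = []
drain_outbox(outbox, sent.append)
assert len(sent) == 2 and outbox.pending() == []
```

The stamp-after-publish ordering trades duplicate deliveries for zero lost events, which is why consumers of the stream should be idempotent.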
@@ -22,8 +22,8 @@
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-CONCELIER-GRAPH-21-002-PLATFORM-EVENTS-S | TODO | Due 2025-11-21 · Accountable: Concelier Core Guild · Scheduler Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Concelier Core Guild · Scheduler Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Platform Events/Scheduler contract for `sbom.observation.updated` not defined; no event publisher plumbing in repo. <br><br> Document artefact/deliverable for CONCELIER-GRAPH-21-002 and publish location so downstream tasks can proceed. |
| P2 | PREP-CONCELIER-LNM-21-002-WAITING-ON-FINALIZE | TODO | Due 2025-11-21 · Accountable: Concelier Core Guild · Data Science Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Concelier Core Guild · Data Science Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Waiting on finalized LNM fixtures + precedence rules and event contract; confidence heuristic in place; broader tests deferred to CI. <br><br> Document artefact/deliverable for CONCELIER-LNM-21-002 and publish location so downstream tasks can proceed. |
| P1 | PREP-CONCELIER-GRAPH-21-002-PLATFORM-EVENTS-S | DONE (2025-11-20) | Due 2025-11-21 · Accountable: Concelier Core Guild · Scheduler Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Concelier Core Guild · Scheduler Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Event contract published at `docs/modules/concelier/events/advisory.observation.updated@1.md` (+schema/sample). Downstream may proceed with publishers/consumers. |
| P2 | PREP-CONCELIER-LNM-21-002-WAITING-ON-FINALIZE | DONE (2025-11-20) | Due 2025-11-21 · Accountable: Concelier Core Guild · Data Science Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Concelier Core Guild · Data Science Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Correlation rules + fixtures published at `docs/modules/concelier/linkset-correlation-21-002.md` with samples under `docs/samples/lnm/`. Downstream linkset builder can proceed. |
| 1 | CONCELIER-GRAPH-21-001 | DONE | LNM sample fixtures with scopes/relationships added; observation/linkset query tests passing | Concelier Core Guild · Cartographer Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Extend SBOM normalization so relationships/scopes are stored as raw observation metadata with provenance pointers for graph joins. |
| 2 | CONCELIER-GRAPH-21-002 | BLOCKED | PREP-CONCELIER-GRAPH-21-002-PLATFORM-EVENTS-S | Concelier Core Guild · Scheduler Guild (`src/Concelier/__Libraries/StellaOps.Concelier.Core`) | Publish `sbom.observation.updated` events with tenant/context and advisory refs; facts only, no judgments. |
| 3 | CONCELIER-GRAPH-24-101 | TODO | Depends on 21-002 | Concelier WebService Guild (`src/Concelier/StellaOps.Concelier.WebService`) | `/advisories/summary` bundles observation/linkset metadata (aliases, confidence, conflicts) for graph overlays; upstream values intact. |
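To make the observation-event rows above concrete, a hypothetical payload might look like the following — every field name here is illustrative, not the published contract; the authoritative schema and sample live at `docs/modules/concelier/events/advisory.observation.updated@1.md`:

```json
{
  "event": "advisory.observation.updated@1",
  "tenant": "tenant-alpha",
  "observationId": "obs-0001",
  "advisoryRefs": ["CVE-2025-0001"],
  "occurredAt": "2025-11-20T00:00:00Z"
}
```

Consistent with the "facts only, no judgments" rule, such a payload carries identifiers and provenance but no severity or verdict fields.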
@@ -43,6 +43,13 @@
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-20 | Wired optional NATS transport for `advisory.observation.updated@1`; background worker dequeues Mongo outbox and publishes to configured stream/subject. | Implementer |
| 2025-11-20 | Wired advisory.observation.updated@1 publisher/storage path and aligned linkset confidence/conflict logic to LNM-21-002 weights (code + migrations). | Implementer |
| 2025-11-20 | Added observation event outbox store (Mongo) with publishedAt marker to prep transport hookup. | Implementer |
| 2025-11-20 | Documented observation event transport config in `docs/modules/concelier/operations/observation-events.md`. | Implementer |
| 2025-11-20 | Completed PREP-CONCELIER-GRAPH-21-002-PLATFORM-EVENTS-S and PREP-CONCELIER-LNM-21-002-WAITING-ON-FINALIZE; published prep note at `docs/modules/concelier/prep/2025-11-20-platform-events-and-lnm-21-002.md`. | Implementer |
| 2025-11-20 | Linked existing `advisory.observation.updated@1` contract and LNM-21-002 correlation rules/fixtures to PREP tasks; marked P1/P2 DONE. | Planning |
| 2025-11-20 | Started PREP-CONCELIER-GRAPH-21-002 and PREP-CONCELIER-LNM-21-002 (statuses → DOING) after confirming no other owner activity. | Planning |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-17 | Started CONCELIER-GRAPH-21-001: added raw linkset scopes + relationships (provenance) through contracts, ingest mapper, storage mapping, and sanitization; new Mongo mapping test added. | Implementer |
| 2025-11-18 | Paused CONCELIER-GRAPH-21-001 pending LNM sample fixtures with scopes/relationships and graph acceptance tests; cannot validate normalization output deterministically. | Implementer |
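The optional NATS transport logged above is feature-flagged with a configured stream/subject. As a rough illustration, the binding might be expressed in appsettings form like this — all keys below are hypothetical, not the actual `AdvisoryObservationEventPublisherOptions` shape:

```json
{
  "Concelier": {
    "ObservationEvents": {
      "Transport": "nats",
      "Nats": {
        "Url": "nats://nats.internal:4222",
        "Stream": "CONCELIER_EVENTS",
        "Subject": "advisory.observation.updated.v1"
      }
    }
  }
}
```

When `Transport` is left at its default, the Mongo outbox still accumulates events, so enabling NATS later only requires draining the backlog.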
@@ -65,6 +72,11 @@
- Link-Not-Merge v1 frozen 2025-11-17; schema captured in `docs/modules/concelier/link-not-merge-schema.md` (add-only evolution); fixtures pending for tasks 1–2, 5–15.
- Graph event pipeline depends on Scheduler/Platform Events alignment to avoid non-deterministic downstream joins.
- Storage backfill (21-102) and object-store move (21-103) must preserve provenance metadata to avoid regression in Offline Kit and replay.
- Prep note published for `advisory.observation.updated@1` (`docs/modules/concelier/events/`); aligns with existing schema/sample and clarifies publisher expectations. Legacy `sbom.observation.updated` alias to be cleaned up during implementation sign-off.
- Correlation precedence for LNM-21-002 documented in `docs/modules/concelier/linkset-correlation-21-002.md`; implemented weights/conflict codes in Core; tests updated—downstream services must adopt same weights.
- Observation sink now emits `advisory.observation.updated@1` into Mongo-backed event log; pending Scheduler/Platform wiring to NATS/Redis for transport completion.
- Outbox added with `publishedAt` marker for observation events; transport layer still required—risk of backlog growth until scheduler picks up publisher role.
- Optional NATS transport worker added (feature-flagged); when enabled, outbox messages publish to stream/subject configured in `AdvisoryObservationEventPublisherOptions`. Ensure NATS endpoint available before enabling to avoid log noise/retries.
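The shared-weights requirement noted above (downstream services must adopt the same LNM-21-002 weights) can be illustrated with a minimal sketch. The weights and signal names below are hypothetical — the authoritative table lives in `docs/modules/concelier/linkset-correlation-21-002.md` — but the point carries: producers and consumers sharing one weight table compute identical, clamped scores.

```python
# Hypothetical weights and signal names; the authoritative table lives in
# docs/modules/concelier/linkset-correlation-21-002.md.
SIGNAL_WEIGHTS = {"alias": 0.5, "purl": 0.3, "cpe": 0.2}


def linkset_confidence(signals):
    """Sum the weights of matched signals, clamped to [0.0, 1.0].

    Sorting the de-duplicated signals first fixes the floating-point
    summation order, so every service sharing the weight table computes
    exactly the same score for the same evidence.
    """
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in sorted(set(signals)))
    return min(1.0, max(0.0, score))


assert abs(linkset_confidence({"alias", "purl"}) - 0.8) < 1e-9
assert linkset_confidence({"unrecognised"}) == 0.0
```

Range-based assertions (as adopted in `AdvisoryObservationAggregationTests`) are the right way to test such scores, since float summation makes exact equality brittle.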
## Next Checkpoints
- Next LNM schema review: align with CARTO-GRAPH/LNM owners (date TBD); unblock tasks 1–2 and 5–15.
@@ -75,4 +87,4 @@
| --- | --- | --- | --- |
| Link-Not-Merge schema finalization (CONCELIER-LNM-21-001+) | Tasks 1–15 | Concelier Core · Cartographer · Platform Events | Resolved: v1 frozen 2025-11-17 with add-only rule; fixtures pending. |
| Scheduler / Platform Events contract for `sbom.observation.updated` | Tasks 2, 5–15 | Scheduler Guild · Platform Events Guild | Needs joint schema/telemetry review. |
| Object storage contract for raw payloads | Tasks 10–12 | Storage Guild · DevOps Guild | To be defined alongside 21-103. |
@@ -25,18 +25,18 @@
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-LEDGER-EXPORT-35-001-NO-HTTP-API-SURFACE | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | No HTTP/API surface or contract to host export endpoints; needs API scaffold + filters spec. <br><br> Document artefact/deliverable for LEDGER-EXPORT-35-001 and publish location so downstream tasks can proceed. |
| P2 | PREP-LEDGER-OAS-61-001-ABSENT-OAS-BASELINE-AN | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; API Contracts Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; API Contracts Guild / src/Findings/StellaOps.Findings.Ledger | Absent OAS baseline and API host for ledger; requires contract definition with API Guild. <br><br> Document artefact/deliverable for LEDGER-OAS-61-001 and publish location so downstream tasks can proceed. |
| P3 | PREP-LEDGER-OAS-61-002-DEPENDS-ON-61-001-CONT | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Depends on 61-001 contract + HTTP surface. <br><br> Document artefact/deliverable for LEDGER-OAS-61-002 and publish location so downstream tasks can proceed. |
| P4 | PREP-LEDGER-OAS-62-001-SDK-GENERATION-PENDING | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; SDK Generator Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; SDK Generator Guild / src/Findings/StellaOps.Findings.Ledger | SDK generation pending 61-002. <br><br> Document artefact/deliverable for LEDGER-OAS-62-001 and publish location so downstream tasks can proceed. |
| P5 | PREP-LEDGER-OAS-63-001-DEPENDENT-ON-SDK-VALID | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; API Governance Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; API Governance Guild / src/Findings/StellaOps.Findings.Ledger | Dependent on SDK validation (62-001). <br><br> Document artefact/deliverable for LEDGER-OAS-63-001 and publish location so downstream tasks can proceed. |
| P1 | PREP-LEDGER-EXPORT-35-001-NO-HTTP-API-SURFACE | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Export HTTP surface + filters spec published at `docs/modules/findings-ledger/export-http-surface.md`; downstream tasks may proceed against documented contract. |
| P2 | PREP-LEDGER-OAS-61-001-ABSENT-OAS-BASELINE-AN | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild; API Contracts Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; API Contracts Guild / src/Findings/StellaOps.Findings.Ledger | Artifact published: `docs/modules/findings-ledger/openapi/findings-ledger.v1.yaml` with servers/security/paths; summary in `docs/modules/findings-ledger/oas-baseline.md`. |
| P3 | PREP-LEDGER-OAS-61-002-DEPENDS-ON-61-001-CONT | DOING (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Depends on 61-001 contract + HTTP surface. <br><br> Document artefact/deliverable for LEDGER-OAS-61-002 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/findings-ledger/prep/2025-11-20-ledger-oas-prep.md`. |
| P4 | PREP-LEDGER-OAS-62-001-SDK-GENERATION-PENDING | DOING (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild; SDK Generator Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; SDK Generator Guild / src/Findings/StellaOps.Findings.Ledger | SDK generation pending 61-002. <br><br> Document artefact/deliverable for LEDGER-OAS-62-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/findings-ledger/prep/2025-11-20-ledger-oas-prep.md`. |
| P5 | PREP-LEDGER-OAS-63-001-DEPENDENT-ON-SDK-VALID | DOING (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild; API Governance Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; API Governance Guild / src/Findings/StellaOps.Findings.Ledger | Dependent on SDK validation (62-001). <br><br> Document artefact/deliverable for LEDGER-OAS-63-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/findings-ledger/prep/2025-11-20-ledger-oas-prep.md`. |
| P6 | PREP-LEDGER-OBS-54-001-NO-HTTP-SURFACE-MINIMA | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; Provenance Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; Provenance Guild / src/Findings/StellaOps.Findings.Ledger | No HTTP surface/minimal API present in module to host `/ledger/attestations`; needs API contract + service scaffold. <br><br> Document artefact/deliverable for LEDGER-OBS-54-001 and publish location so downstream tasks can proceed. |
| P7 | PREP-LEDGER-OBS-55-001-DEPENDS-ON-54-001-ATTE | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; DevOps Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; DevOps Guild / src/Findings/StellaOps.Findings.Ledger | Depends on 54-001 attestation API availability. <br><br> Document artefact/deliverable for LEDGER-OBS-55-001 and publish location so downstream tasks can proceed. |
| P7 | PREP-LEDGER-OBS-55-001-DEPENDS-ON-54-001-ATTE | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Findings Ledger Guild; DevOps Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; DevOps Guild / src/Findings/StellaOps.Findings.Ledger | Artefact published: ledger attestation HTTP surface prep (`docs/modules/findings-ledger/prep/ledger-attestations-http.md`) outlining `/v1/ledger/attestations` contract; pagination, determinism, and fields defined. |
| P8 | PREP-LEDGER-PACKS-42-001-SNAPSHOT-TIME-TRAVEL | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Snapshot/time-travel contract and bundle format not specified; needs design input. <br><br> Document artefact/deliverable for LEDGER-PACKS-42-001 and publish location so downstream tasks can proceed. |
| P9 | PREP-LEDGER-RISK-66-001-RISK-ENGINE-SCHEMA-CO | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild; Risk Engine Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild; Risk Engine Guild / src/Findings/StellaOps.Findings.Ledger | Risk Engine schema/contract inputs absent; requires risk field definitions + rollout plan. <br><br> Document artefact/deliverable for LEDGER-RISK-66-001 and publish location so downstream tasks can proceed. |
| P10 | PREP-LEDGER-RISK-66-002-DEPENDS-ON-66-001-MIG | TODO | Due 2025-11-22 · Accountable: Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Depends on 66-001 migration + risk scoring contract. <br><br> Document artefact/deliverable for LEDGER-RISK-66-002 and publish location so downstream tasks can proceed. |
| 1 | LEDGER-ATTEST-73-002 | BLOCKED | Waiting on LEDGER-ATTEST-73-001 verification pipeline delivery | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Enable search/filter in findings projections by verification result and attestation status |
| 2 | LEDGER-EXPORT-35-001 | BLOCKED | PREP-LEDGER-EXPORT-35-001-NO-HTTP-API-SURFACE | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Provide paginated streaming endpoints for advisories, VEX, SBOMs, and findings with deterministic ordering and provenance metadata |
| 2 | LEDGER-EXPORT-35-001 | DOING (2025-11-20) | Findings export endpoint implemented; VEX/advisory/SBOM endpoints stubbed pending schemas | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Provide paginated streaming endpoints for advisories, VEX, SBOMs, and findings with deterministic ordering and provenance metadata |
| 3 | LEDGER-OAS-61-001 | BLOCKED | PREP-LEDGER-OAS-61-001-ABSENT-OAS-BASELINE-AN | Findings Ledger Guild; API Contracts Guild / src/Findings/StellaOps.Findings.Ledger | Expand Findings Ledger OAS to include projections, evidence lookups, and filter parameters with examples |
| 4 | LEDGER-OAS-61-002 | BLOCKED | PREP-LEDGER-OAS-61-002-DEPENDS-ON-61-001-CONT | Findings Ledger Guild / src/Findings/StellaOps.Findings.Ledger | Implement `/.well-known/openapi` endpoint and ensure version metadata matches release |
| 5 | LEDGER-OAS-62-001 | BLOCKED | PREP-LEDGER-OAS-62-001-SDK-GENERATION-PENDING | Findings Ledger Guild; SDK Generator Guild / src/Findings/StellaOps.Findings.Ledger | Provide SDK test cases for findings pagination, filtering, evidence links; ensure typed models expose provenance |
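A deterministic paging contract of the kind the export tasks above require can be sketched as follows. This is an illustrative Python sketch, not the service's actual C# implementation, and the token format and helper names are hypothetical: items are sorted by a stable key, and the opaque cursor carries the last-seen key plus a hash of the active filters so a token replayed against a different query can be rejected.

```python
import base64
import hashlib
import json


def make_page_token(last_key, filters):
    """Opaque cursor: the last-seen sort key plus a hash of the active filters.

    Hashing canonical JSON of the filters lets the server reject a token
    replayed against a different query, keeping page contents deterministic.
    """
    canonical = json.dumps(filters, sort_keys=True, separators=(",", ":"))
    payload = {
        "last": last_key,
        "filters_sha256": hashlib.sha256(canonical.encode()).hexdigest(),
    }
    return base64.urlsafe_b64encode(
        json.dumps(payload, sort_keys=True).encode()
    ).decode()


def paginate(items, key, filters, token=None, limit=2):
    """Return one page in a stable total order, plus the next cursor."""
    ordered = sorted(items, key=key)  # deterministic ordering by stable key
    if token is not None:
        last = json.loads(base64.urlsafe_b64decode(token))["last"]
        ordered = [item for item in ordered if key(item) > last]
    page = ordered[:limit]
    next_token = make_page_token(key(page[-1]), filters) if len(page) == limit else None
    return page, next_token


rows = [{"id": "f3"}, {"id": "f1"}, {"id": "f2"}]
page1, token = paginate(rows, key=lambda r: r["id"], filters={"severity": "high"})
page2, _ = paginate(rows, key=lambda r: r["id"], filters={"severity": "high"}, token=token)
assert [r["id"] for r in page1] == ["f1", "f2"]
assert [r["id"] for r in page2] == ["f3"]
```

Because the cursor encodes only the sort key (not an offset), repeated requests with the same token and filters always yield the same page, which is what deterministic offline bundles need.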
@@ -54,6 +54,13 @@
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-20 | Added authenticated export endpoints for findings/vex/advisories/sboms (stub responses) and paging contracts; awaiting schema/tables to back VEX/advisory/SBOM queries. Export paging unit tests passing via isolated test project. | Findings Ledger |
| 2025-11-20 | Began implementing LEDGER-EXPORT-35-001 HTTP surface (findings export endpoint + paging/token hash) in WebService; tests pending due to existing harness build failures. | Findings Ledger |
| 2025-11-20 | Completed PREP-LEDGER-EXPORT-35-001: published export HTTP surface and filters spec at `docs/modules/findings-ledger/export-http-surface.md`; unblocked LEDGER-EXPORT-35-001 (status → TODO). | Planning |
| 2025-11-20 | Started PREP-LEDGER-EXPORT-35-001 (status → DOING) after confirming no other DOING owner entries. | Planning |
| 2025-11-20 | Completed PREP-LEDGER-OAS-61-001: published baseline OAS at `docs/modules/findings-ledger/openapi/findings-ledger.v1.yaml` with summary `docs/modules/findings-ledger/oas-baseline.md`; downstream OAS/SDK tasks extend this base. | Implementer |
| 2025-11-20 | Completed PREP-LEDGER-OBS-55-001: published ledger attestation HTTP surface prep (`docs/modules/findings-ledger/prep/ledger-attestations-http.md`) covering `/v1/ledger/attestations`; still requires 54-001 service surface to implement. | Implementer |
| 2025-11-20 | Started PREP-LEDGER-OBS-55-001 (status → DOING) after confirming no existing DOING/DONE owners; still contingent on 54-001 surface availability. | Planning |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-08 | Sprint stub created; awaiting template normalisation. | Planning |
| 2025-11-17 | Normalised sprint to standard template and renamed file to `SPRINT_0121_0001_0001_policy_reasoning.md`. | Project Mgmt |
@@ -70,9 +77,10 @@
- Upstream dependency on Sprint 120.B (Findings.I); block start until merged.
- Cross-guild coordination (Evidence Locker, Risk Engine, Observability, Provenance) required to avoid schema drift.
- Export/SDK contract changes must remain deterministic to support offline bundles.
- Export HTTP surface spec published at `docs/modules/findings-ledger/export-http-surface.md`; downstream OAS/SDK tasks must derive contracts from this document to avoid drift.
- LEDGER-OBS-54-001 blocked: Findings Ledger module currently lacks HTTP/minimal API surface to expose `/ledger/attestations`; requires contract + service scaffold (engage API Contracts & Provenance guilds).
- Current state: all tasks blocked; adjacent sprints (0120, 0122) also blocked due to missing risk schema, export contracts, and DB/RLS design inputs.
- Current state: findings export endpoint and paging contracts implemented; VEX/advisory/SBOM endpoints stubbed (auth + shape) but await underlying projection/query schemas. Remaining tasks in this sprint and adjacent sprints (0120, 0122) stay blocked by missing risk schema, OAS/SDK contracts, and DB/RLS design inputs.
## Next Checkpoints
- Schedule cross-guild kickoff for week of 2025-11-24 once dependency clears.
- Add weekly Findings Ledger status review (TBD owner) after staffing.
@@ -22,11 +22,11 @@
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-SCANNER-ANALYZERS-LANG-11-003-DEPENDS-ON | TODO | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; Signals Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; Signals Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-002; blocked until upstream static analyzer available. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-003 and publish location so downstream tasks can proceed. |
| P2 | PREP-SCANNER-ANALYZERS-LANG-11-004-DEPENDS-ON | TODO | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; SBOM Service Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; SBOM Service Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-003; no upstream static/runtime outputs yet. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-004 and publish location so downstream tasks can proceed. |
| P3 | PREP-SCANNER-ANALYZERS-LANG-11-005-DEPENDS-ON | TODO | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; QA Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; QA Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-004; fixtures deferred until analyzer outputs exist. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-005 and publish location so downstream tasks can proceed. |
| P4 | PREP-SCANNER-ANALYZERS-NATIVE-20-002-AWAIT-DE | TODO | Due 2025-11-22 · Accountable: Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Await declared-dependency writer/contract to emit edges. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-NATIVE-20-002 and publish location so downstream tasks can proceed. |
| P5 | PREP-SCANNER-ANALYZERS-NODE-22-001-NEEDS-ISOL | TODO | Due 2025-11-22 · Accountable: Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Needs isolated runner or scoped build graph to execute targeted tests without full-solution fan-out. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-NODE-22-001 and publish location so downstream tasks can proceed. |
| P1 | PREP-SCANNER-ANALYZERS-LANG-11-003-DEPENDS-ON | DONE (2025-11-20) | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; Signals Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; Signals Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-002; blocked until upstream static analyzer available. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-003 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md` (runtime evidence ingest). |
| P2 | PREP-SCANNER-ANALYZERS-LANG-11-004-DEPENDS-ON | DONE (2025-11-20) | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; SBOM Service Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; SBOM Service Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-003; no upstream static/runtime outputs yet. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-004 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md` (normalized export contract). |
| P3 | PREP-SCANNER-ANALYZERS-LANG-11-005-DEPENDS-ON | DONE (2025-11-20) | Due 2025-11-22 · Accountable: StellaOps.Scanner EPDR Guild; QA Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | StellaOps.Scanner EPDR Guild; QA Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Depends on 11-004; fixtures deferred until analyzer outputs exist. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-LANG-11-005 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md` (fixtures/benchmarks expectations). |
| P4 | PREP-SCANNER-ANALYZERS-NATIVE-20-002-AWAIT-DE | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Await declared-dependency writer/contract to emit edges. <br><br> Document artefact/deliverable for SCANNER-ANALYZERS-NATIVE-20-002 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md` (ELF declared-dependency writer payload). |
| P5 | PREP-SCANNER-ANALYZERS-NODE-22-001-NEEDS-ISOL | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Isolated runner plan published at `docs/modules/scanner/prep/2025-11-20-node-isolated-runner.md`; downstream implementation can proceed. Scripts: `src/Scanner/StellaOps.Scanner.Node.slnf`, `src/Scanner/__Tests/node-isolated.runsettings`, `src/Scanner/__Tests/node-tests-isolated.sh`. |
| 1 | SCANNER-ANALYZERS-LANG-11-002 | BLOCKED | Await upstream SCANNER-ANALYZERS-LANG-11-001 design/outputs to extend static analyzer | StellaOps.Scanner EPDR Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Implement static analyzer (IL + reflection heuristics) capturing AssemblyRef, ModuleRef/PInvoke, DynamicDependency, reflection literals, DI patterns, and custom AssemblyLoadContext probing hints; emit dependency edges with reason codes and confidence. |
| 2 | SCANNER-ANALYZERS-LANG-11-003 | BLOCKED | PREP-SCANNER-ANALYZERS-LANG-11-003-DEPENDS-ON | StellaOps.Scanner EPDR Guild; Signals Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Ingest optional runtime evidence (AssemblyLoad, Resolving, P/Invoke) via event listener harness; merge runtime edges with static/declared ones and attach reason codes/confidence. |
| 3 | SCANNER-ANALYZERS-LANG-11-004 | BLOCKED | PREP-SCANNER-ANALYZERS-LANG-11-004-DEPENDS-ON | StellaOps.Scanner EPDR Guild; SBOM Service Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet) | Produce normalized observation export to Scanner writer: entrypoints + dependency edges + environment profiles (AOC compliant); wire to SBOM service entrypoint tagging. |
@@ -49,8 +49,25 @@
| 20 | AGENTS-SCANNER-00-001 | DONE | Create module-level AGENTS.md for `src/Scanner` aligned with scanner architecture docs | Project Management; Scanner Guild | Author/update Scanner AGENTS.md covering roles, required docs, allowed shared directories, determinism/testing rules; ensure implementers can work autonomously. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-20 | Attempted node isolated restore/test; restore failed fetching Microsoft.TestPlatform.TestHost (nuget.org) because offline package path was wrong. Script corrected to use `offline/packages`. Re-run still needed. | Implementer |
| 2025-11-20 | Second isolated restore attempt ran ~48s then cancelled; still needs seeding `Microsoft.TestPlatform.TestHost 17.14.1` into offline/packages to complete. | Implementer |
| 2025-11-20 | Isolated restore retried after seeding TestHost; still failing due to missing packages from offline cache (e.g., MongoDB.Driver.Core 2.12.0). Further seeding needed before tests can run. | Implementer |
| 2025-11-20 | Third restore attempt after seeding MongoDB.Driver.Core also failed (restore canceled ~26s); more dependencies still missing from offline cache. | Implementer |
| 2025-11-20 | Fourth restore attempt (~15s) still canceled; more NuGet packages remain unseeded in offline cache. | Implementer |
| 2025-11-20 | Fifth restore attempt (quiet, ~15s) still cancelled; offline cache still missing transitive NuGet packages; no tests run. | Implementer |
| 2025-11-20 | Sixth restore attempt failed with missing packages (example: NU1101 StellaOps.Policy.AuthSignals) even with parallel restore; offline cache still incomplete. | Implementer |
| 2025-11-20 | Completed PREP-SCANNER-ANALYZERS-NODE-22-001; published isolated runner plan at `docs/modules/scanner/prep/2025-11-20-node-isolated-runner.md`. | Implementer |
| 2025-11-20 | Added isolated runner artefacts: `StellaOps.Scanner.Node.slnf`, `__Tests/node-isolated.runsettings`, `__Tests/node-tests-isolated.sh` to support P5 plan. | Implementer |
| 2025-11-20 | Completed PREP-SCANNER-ANALYZERS-LANG-11-003/004/005 and PREP-SCANNER-ANALYZERS-NATIVE-20-002; prep note published at `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md`. | Implementer |
| 2025-11-20 | Set PREP-SCANNER-ANALYZERS-NODE-22-001 to DOING after confirming no other owners; prep still pending (isolated runner requirement). | Project Mgmt |
| 2025-11-20 | Started PREP-SCANNER-ANALYZERS-LANG-11-003, PREP-SCANNER-ANALYZERS-LANG-11-004, and PREP-SCANNER-ANALYZERS-NATIVE-20-002 (all were TODO; skipped PREP-SCANNER-ANALYZERS-LANG-11-005 because already DOING). | Implementer |
|
||||
| 2025-11-20 | Started PREP-SCANNER-ANALYZERS-NODE-22-001 (was TODO; verified no prior start). | Project Mgmt |
|
||||
| 2025-11-20 | Moved PREP-SCANNER-ANALYZERS-LANG-11-005-DEPENDS-ON to DOING after confirming no prior start; beginning prep to unblock downstream analyzer fixtures. | Project Mgmt |
|
||||
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
|
||||
| 2025-11-16 | Normalised sprint file to standard template; renamed from `SPRINT_132_scanner_surface.md` to `SPRINT_0132_scanner_surface.md`; scope unchanged; added governance task for missing Scanner AGENTS.md. | Planning |
|
||||
| 2025-11-17 | AGENTS-SCANNER-00-001 completed; module AGENTS.md added under src/Scanner. | Implementer |
|
||||
@@ -72,13 +89,12 @@

## Decisions & Risks

- Scanner AGENTS.md added 2025-11-17; keep in sync with scanner architecture and future advisories.
- Sprint execution gated on completion of Sprint 131; monitor for slippage to avoid cascading delays in 130–139 chain.
- Maintain offline-first and deterministic outputs for analyzers; ensure runtime capture adapters include redaction/sandbox guidance before rollout.
- Native analyzer format-detector completed and tested; NAT-20-002 remains blocked awaiting declared-dependency writer/contract and availability of declared dependency export path.
- Node analyzer version-target/tarball/Yarn PnP tests pending; multiple targeted runs (latest 2025-11-18) fanned out into full solution builds and were cancelled. Needs clean/isolated runner or scoped build graph to validate SCANNER-ANALYZERS-NODE-22-001 changes.
- .NET analyzer chain (11-002..005) blocked awaiting upstream static-analyzer contract (11-001) and downstream writer/export contracts; no safe implementation path until provided.
- Prep note for analyzer PREP tasks captured in `docs/modules/scanner/prep/2025-11-20-analyzers-prep.md`; use it as the interim contract until upstream writer/runtime contracts land.
- Native analyzer format-detector completed; NAT-20-002 still blocked on declared-dependency writer interface—prep note defines expected payload to reduce rework once contract lands.
- Node analyzer isolation plan published (see `docs/modules/scanner/prep/2025-11-20-node-isolated-runner.md`); offline cache still incomplete after multiple restore attempts (latest NU1101 StellaOps.Policy.AuthSignals). Need full dependency seed before isolated run and tests can pass.
- .NET analyzer chain (11-002..005) remains blocked awaiting upstream static-analyzer contract (11-001) and downstream writer/export contracts; runtime fusion prep recorded but cannot proceed until contracts exist.
## Next Checkpoints

- 2025-11-19: Sprint kickoff (owner: Scanner PM), contingent on Sprint 131 sign-off.
- 2025-11-26: Mid-sprint review (owner: EPDR Guild lead) to validate observation exports and resolver behavior.

| 2025-11-18 | SCANNER-ANALYZERS-NODE-22-001: Added Yarn PnP cache zip traversal, emitter sets yarnPnp metadata, new fixture/tests (`yarn-pnp`); test run aborted due to long-running solution build—rerun on clean runner. | Node Analyzer Guild |
@@ -1,43 +1,46 @@
# Sprint 0133-0001-0001 · Scanner & Surface (Phase IV)

## Topic & Scope

- Scanner & Surface phase IV: Node bundle/source-map coverage and native/WASM signal extraction.
- Maintain sequential execution across 130–139; work only after Sprint 0132 completes.
- **Working directory:** `src/Scanner`.

## Dependencies & Concurrency

- Upstream: Sprint 0132 (Scanner & Surface phase III) must land first.
- Concurrency: tasks execute in table order; current statuses are tracked in the Delivery Tracker below.

## Documentation Prerequisites

- docs/README.md
- docs/07_HIGH_LEVEL_ARCHITECTURE.md
- docs/modules/platform/architecture-overview.md
- docs/modules/scanner/architecture.md
- src/Scanner/AGENTS.md

## Delivery Tracker

| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-SCANNER-ANALYZERS-NODE-22-006-UPSTREAM-2 | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Node Analyzer Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node`) | Node Analyzer Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node`) | Bundle/source-map baseline documented in `docs/modules/scanner/design/node-bundle-phase22.md` with sample NDJSON `docs/samples/scanner/node-phase22/node-phase22-sample.ndjson`. |
| P2 | PREP-SCANNER-ANALYZERS-NODE-22-007-UPSTREAM-2 | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Node Analyzer Guild | Node Analyzer Guild | Native/WASM/capability detection rules + reason codes documented in `docs/modules/scanner/design/node-bundle-phase22.md` with fixture referenced above. |
| P3 | PREP-SCANNER-ANALYZERS-NODE-22-008-UPSTREAM-2 | DONE (2025-11-20) | Due 2025-11-22 · Accountable: Node Analyzer Guild | Node Analyzer Guild | AOC-compliant observation emission shape + sorting rules documented in `docs/modules/scanner/design/node-bundle-phase22.md`; fixture referenced above. |
| 1 | SCANNER-ANALYZERS-NODE-22-006 | BLOCKED (2025-11-20) | PREP-SCANNER-ANALYZERS-NODE-22-006-UPSTREAM-2 | Node Analyzer Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node`) | Detect bundles + source maps, reconstruct module specifiers, correlate to original paths; support dual CJS/ESM graphs with conditions. |
| 2 | SCANNER-ANALYZERS-NODE-22-007 | BLOCKED (2025-11-20) | PREP-SCANNER-ANALYZERS-NODE-22-007-UPSTREAM-2 | Node Analyzer Guild | Scan for native addons (.node), WASM modules, and core capability signals (child_process, vm, worker_threads); emit hint edges and native metadata. |
| 3 | SCANNER-ANALYZERS-NODE-22-008 | BLOCKED (2025-11-20) | PREP-SCANNER-ANALYZERS-NODE-22-008-UPSTREAM-2 | Node Analyzer Guild | Produce AOC-compliant observations: entrypoints, components (pkg/native/wasm), edges (esm-import, cjs-require, exports, json, native-addon, wasm, worker) with reason codes/confidence and resolver traces. |

## Execution Log

| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-20 | Added Node phase 22 NDJSON loader hook + fixture to analyzer; PREP P1–P3 now have executable baseline for downstream tasks. | Implementer |
| 2025-11-20 | Published Node phase 22 prep doc + fixture (see Delivery Tracker) and marked PREP P1–P3 DONE. | Planning |
| 2025-11-20 | Started PREP-SCANNER-ANALYZERS-NODE-22-006/007/008 (statuses → DOING) after confirming no prior DOING owner entries. | Planning |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-08 | Sprint stub created; awaiting upstream completion of Sprint 0132. | Planning |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_133_scanner_surface.md` to `SPRINT_0133_0001_0001_scanner_surface.md`; content preserved. | Implementer |
| 2025-11-19 | Converted legacy filename `SPRINT_133_scanner_surface.md` to redirect stub pointing here to avoid divergent updates. | Implementer |
| 2025-11-20 | Marked Node phase tasks 22-006/007/008 BLOCKED because upstream 22-005 (Sprint 0132) not delivered; no executable work in this sprint until 0132 unblocks. | Implementer |
## Decisions & Risks

- All tasks depend on 22-005 outputs; remain TODO until prerequisites land, but analyzer contracts are frozen in `docs/modules/scanner/design/node-bundle-phase22.md` and sample NDJSON is wired into analyzer/tests for deterministic baselines.
- Maintain offline/deterministic outputs; avoid running full solution builds—prefer scoped runners per module.

## Next Checkpoints

- Set kickoff once Sprint 0132 completes (date TBD).
@@ -0,0 +1,27 @@

{
  "eventId": "8c5e9d4e-54c0-4fb3-9e0c-7c4cdbf74c6a",
  "tenantId": "urn:tenant:123e4567-e89b-12d3-a456-426614174000",
  "observationId": "6560606df3c5d6ad3b5a1234",
  "advisoryId": "CVE-2024-99999",
  "source": {
    "vendor": "ghsa",
    "stream": "advisories",
    "api": "https://api.github.com/advisories",
    "collectorVersion": "1.12.0"
  },
  "linksetSummary": {
    "aliases": ["CVE-2024-99999", "GHSA-xxxx-yyyy-zzzz"],
    "purls": ["pkg:npm/lodash@4.17.21"],
    "cpes": ["cpe:/a:lodash:lodash:4.17.21"],
    "scopes": ["runtime"],
    "relationships": [
      {"type": "contains", "source": "pkg:npm/lodash@4.17.21", "target": "file://dist/lodash.js", "provenance": "ghsa"}
    ]
  },
  "supersedesId": "65605fdaf3c5d6ad3b5a0fff",
  "documentSha": "2f8f568cc1ed3474f0a4564ddb8c64f4b4d176fbe0a2a98a02b88e822a4f5b6d",
  "observationHash": "10f4fc0b5c1a1d4c266fafd2b4f45618f6a0a4b86087c3e67e4c1a2c8f38e990",
  "ingestedAt": "2025-11-20T14:35:12Z",
  "traceId": "trace-4f29d7f6f1f147da",
  "replayCursor": "cs-0000000172-0001"
}
@@ -0,0 +1,44 @@

# advisory.observation.updated@1 · Event contract

Purpose: unblock CONCELIER-GRAPH-21-002 by freezing the platform event shape for observation changes emitted by Concelier. This is the only supported event for observation churn; downstream consumers subscribe to it for evidence fan-out and replay bundles.

## Envelope & transport

- Subject: `concelier.advisory.observation.updated.v1`
- Type/version: `advisory.observation.updated@1`
- Transport: NATS (primary), Redis Stream `concelier:advisory.observation.updated:v1` (fallback). Both carry the same DSSE envelope.
- DSSE payloadType: `application/vnd.stellaops.advisory.observation.updated.v1+json`.
- Signature: Ed25519 via Platform Events signer; attach Rekor UUID when available. Offline kits treat the envelope as the source of truth.

## Payload (JSON)

| Field | Type | Rules |
| --- | --- | --- |
| `eventId` | string (uuid) | Generated by publisher; idempotency key. |
| `tenantId` | string | `urn:tenant:{uuid}`; required for multi-tenant routing. |
| `observationId` | string (ObjectId) | Mongo `_id` of the observation document. |
| `advisoryId` | string | Upstream advisory identifier (e.g., CVE, GHSA, vendor id). |
| `source` | object | `{ vendor, stream, api, collectorVersion }`; lowercase vendor, non-empty. |
| `supersedesId` | string (ObjectId, optional) | Previous observation `_id` if this is a new revision; omitted otherwise. |
| `linksetSummary` | object | `{ aliases: string[], purls: string[], cpes?: string[], scopes?: string[], relationships?: object[] }`; all arrays pre-sorted ASCII. |
| `documentSha` | string | SHA-256 of the raw upstream document. |
| `observationHash` | string | Stable hash over canonicalized observation JSON (tenant, source, advisoryId, documentSha, fetchedAt). |
| `ingestedAt` | string (ISO-8601 UTC) | Timestamp when appended. |
| `traceId` | string (optional) | Propagated from ingest job/request; aids joining with logs/metrics. |
| `replayCursor` | string | Monotone cursor for offline bundle ordering (tick from change stream resume token). |

### Determinism & ordering

- Arrays sorted ASCII; objects field-sorted when hashing.
- `eventId` + `replayCursor` provide exactly-once consumer handling; duplicates must be ignored when `observationHash` is unchanged.
- No judgments: only raw facts and hash pointers; any derived severity/merge content is forbidden.
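The field-sorted hashing rule above can be sketched as follows. This is an illustrative Python sketch, not the Concelier implementation: the exact field set matches the contract text, and the canonical serialization (sorted keys, compact separators) is an assumption about how the service canonicalizes JSON.

```python
import hashlib
import json

def observation_hash(tenant_id: str, source: dict, advisory_id: str,
                     document_sha: str, fetched_at: str) -> str:
    # Canonical form: field-sorted objects, compact separators, ASCII output,
    # so the serialized bytes (and therefore the hash) are stable regardless
    # of the insertion order of the source dict.
    canonical = json.dumps(
        {
            "advisoryId": advisory_id,
            "documentSha": document_sha,
            "fetchedAt": fetched_at,
            "source": source,
            "tenantId": tenant_id,
        },
        sort_keys=True,
        separators=(",", ":"),
        ensure_ascii=True,
    )
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

Because keys are sorted recursively by `json.dumps(sort_keys=True)`, two publishers that build `source` in different field orders still emit the same hash, which is what makes the duplicate-suppression rule above workable.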

### Error contracts for Scheduler

- Retryable NATS/Redis failures use backoff capped at 30s; after 5 attempts, emit to `concelier.events.dlq` with the same envelope and an `error` field describing the transport failure.
- Consumers must NACK on schema validation failure; the publisher logs `ERR_EVENT_SCHEMA` and quarantines the offending observation id.

## Sample payload

See `advisory.observation.updated@1.sample.json` (canonical field order, ASCII-sorted arrays). Hashes are illustrative; replace with real values in tests.

## Schema

`advisory.observation.updated@1.schema.json` provides a JSON Schema (draft 2020-12) for runtime validation; any additional fields are rejected.
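To illustrate the strict validation posture (required fields, regex patterns, `"additionalProperties": false`), here is a hand-rolled stdlib-only check mirroring the schema's strictest rules. It is a sketch, not a JSON Schema engine; production consumers would use a real draft-2020-12 validator against the schema file.

```python
import re

# Patterns copied from advisory.observation.updated@1.schema.json.
PATTERNS = {
    "tenantId": r"^urn:tenant:[0-9a-fA-F-]{36}$",
    "observationId": r"^[a-f0-9]{24}$",
    "documentSha": r"^[a-f0-9]{64}$",
    "observationHash": r"^[a-f0-9]{64}$",
}
REQUIRED = {"eventId", "tenantId", "observationId", "advisoryId", "source",
            "linksetSummary", "documentSha", "observationHash", "ingestedAt",
            "replayCursor"}
ALLOWED = REQUIRED | {"supersedesId", "traceId"}

def validate_event(event: dict) -> list:
    """Return a sorted list of violation codes; empty list means valid."""
    errors = ["missing:" + f for f in sorted(REQUIRED - event.keys())]
    # additionalProperties: false -> unknown top-level fields are rejected.
    errors += ["unknown:" + f for f in sorted(event.keys() - ALLOWED)]
    errors += ["pattern:" + f for f, p in PATTERNS.items()
               if f in event and not re.fullmatch(p, str(event[f]))]
    return errors
```

A consumer would NACK whenever `validate_event` returns a non-empty list, matching the `ERR_EVENT_SCHEMA` behavior described above.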

## Compatibility note

Sprint tracker referenced `sbom.observation.updated`; this contract standardises on `advisory.observation.updated@1`. If a legacy alias is required for interim consumers, mirror the envelope on subject `surface.sbom.observation.updated.v1` with identical payload.
@@ -0,0 +1,68 @@

{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "$id": "https://stellaops.org/concelier/advisory.observation.updated@1.schema.json",
  "title": "advisory.observation.updated@1",
  "type": "object",
  "required": [
    "eventId",
    "tenantId",
    "observationId",
    "advisoryId",
    "source",
    "linksetSummary",
    "documentSha",
    "observationHash",
    "ingestedAt",
    "replayCursor"
  ],
  "additionalProperties": false,
  "properties": {
    "eventId": { "type": "string", "format": "uuid" },
    "tenantId": { "type": "string", "pattern": "^urn:tenant:[0-9a-fA-F-]{36}$" },
    "observationId": { "type": "string", "pattern": "^[a-f0-9]{24}$" },
    "advisoryId": { "type": "string", "minLength": 1 },
    "source": {
      "type": "object",
      "required": ["vendor", "stream", "api", "collectorVersion"],
      "additionalProperties": false,
      "properties": {
        "vendor": { "type": "string", "minLength": 1 },
        "stream": { "type": "string", "minLength": 1 },
        "api": { "type": "string", "minLength": 1 },
        "collectorVersion": { "type": "string", "minLength": 1 }
      }
    },
    "linksetSummary": {
      "type": "object",
      "required": ["aliases", "purls"],
      "additionalProperties": false,
      "properties": {
        "aliases": { "type": "array", "items": { "type": "string" }, "uniqueItems": true },
        "purls": { "type": "array", "items": { "type": "string" }, "uniqueItems": true },
        "cpes": { "type": "array", "items": { "type": "string" }, "uniqueItems": true },
        "scopes": { "type": "array", "items": { "type": "string" }, "uniqueItems": true },
        "relationships": {
          "type": "array",
          "items": {
            "type": "object",
            "required": ["type", "source", "target"],
            "additionalProperties": false,
            "properties": {
              "type": { "type": "string" },
              "source": { "type": "string" },
              "target": { "type": "string" },
              "provenance": { "type": "string" }
            }
          },
          "uniqueItems": false
        }
      }
    },
    "supersedesId": { "type": "string", "pattern": "^[a-f0-9]{24}$" },
    "documentSha": { "type": "string", "pattern": "^[a-f0-9]{64}$" },
    "observationHash": { "type": "string", "pattern": "^[a-f0-9]{64}$" },
    "ingestedAt": { "type": "string", "format": "date-time" },
    "traceId": { "type": "string" },
    "replayCursor": { "type": "string", "minLength": 1 }
  }
}
docs/modules/concelier/linkset-correlation-21-002.md · new file (56 lines)
@@ -0,0 +1,56 @@

# CONCELIER-LNM-21-002 · Linkset correlation rules (v1)

Purpose: unblock CONCELIER-LNM-21-002 by freezing correlation/precedence rules and providing fixtures so builders and downstream consumers can proceed.

## Scope

- Applies to linksets produced from `advisory_observations` (LNM v1).
- Correlation is aggregation-only: no value synthesis or merge; emit conflicts instead of collapsing fields.
- Output persists in `advisory_linksets` and drives `advisory.linkset.updated@1` events.

## Deterministic confidence calculation (0–1)

```
confidence = clamp(
  0.40 * alias_score +
  0.25 * purl_overlap_score +
  0.15 * cpe_overlap_score +
  0.10 * severity_agreement +
  0.05 * reference_overlap +
  0.05 * freshness_score
)
```

- `alias_score`: 1 if any alias exact-matches across observations; 0.5 if vendor ID prefixes match; else 0.
- `purl_overlap_score`: 1 if same pkg and the version ranges intersect; 0.6 if same pkg family but disjoint ranges; 0 otherwise. Use semver/rpm/deb comparers as in LNM v1.
- `cpe_overlap_score`: 1 if any CPE exact-matches; 0.5 if same vendor/product, any version; else 0.
- `severity_agreement`: 1 if CVSS base score delta ≤ 0.1; 0.5 if ≤ 1.0; else 0. Use the max of available CVSS per observation.
- `reference_overlap`: fraction of shared reference URLs (case-normalized) between the pair with the highest overlap across the set.
- `freshness_score`: 1 when `fetchedAt` spread ≤ 48h; linearly decays to 0 at 14 days.
- Sort observations before scoring by `(source.vendor, advisoryId, fetchedAt)`; reuse that order for hashing and for output arrays.
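The weighted formula and the linear freshness decay above can be expressed directly. This Python sketch assumes the component scores are already computed per the bullet rules; only `freshness_score` is derived here (1.0 up to 48h, linear decay to 0 at 336h = 14 days).

```python
def clamp(x: float, lo: float = 0.0, hi: float = 1.0) -> float:
    return max(lo, min(hi, x))

def freshness_score(spread_hours: float) -> float:
    # 1.0 when the fetchedAt spread is <= 48h; linear decay to 0 at 14 days.
    if spread_hours <= 48:
        return 1.0
    return clamp(1.0 - (spread_hours - 48) / (336 - 48))

def confidence(alias_score: float, purl_overlap_score: float,
               cpe_overlap_score: float, severity_agreement: float,
               reference_overlap: float, spread_hours: float) -> float:
    # Weights sum to 1.0, so clamp only matters for defensive rounding.
    return clamp(
        0.40 * alias_score +
        0.25 * purl_overlap_score +
        0.15 * cpe_overlap_score +
        0.10 * severity_agreement +
        0.05 * reference_overlap +
        0.05 * freshness_score(spread_hours)
    )
```

Full agreement across sources fetched within 48h yields 1.0; complete disagreement on stale data yields 0.0, matching the 0–1 range in the heading.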

## Conflict emission (add-only)

Emit a conflict entry per divergent field group:

- `severity-mismatch`: CVSS base score delta > 1.0 or vector differs.
- `affected-range-divergence`: version ranges do not intersect.
- `reference-clash`: no shared references and source vendors differ.
- `alias-inconsistency`: aliases disjoint across observations.
- `metadata-gap`: required fields missing on any observation.

Each conflict includes `field`, `reason`, and `values` (array of `source: value` strings) and is stable-sorted by `field` then `reason`.
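As a minimal sketch of one detector, the `severity-mismatch` rule (base-score delta > 1.0) could look like this; the function name and the `source -> score` input shape are illustrative, not the builder's actual API, and the vector comparison is omitted.

```python
def severity_conflict(scores: dict):
    # scores: source vendor -> max available CVSS base score per observation.
    # Emit severity-mismatch when the base-score delta exceeds 1.0.
    if not scores:
        return None
    delta = max(scores.values()) - min(scores.values())
    if delta <= 1.0:
        return None
    return {
        "field": "severity",
        "reason": "severity-mismatch",
        # "source: value" strings, ASCII-sorted for deterministic output.
        "values": sorted(f"{src}: {val}" for src, val in scores.items()),
    }
```

The sorted `values` array mirrors the stable-sort requirement above, so two runs over the same observations emit byte-identical conflict records.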

## Linkset output shape additions

- `key.confidence`: populated from the formula above.
- `conflicts[]`: as defined; may be empty but never null.
- `normalized` retains add-only fields from `link-not-merge-schema.md`; do not drop raw ranges even when disjoint.
- `provenance.hashes`: sorted list of `observationHash` values; used by replay bundles.

## Fixtures

- `docs/samples/lnm/linkset-lnm-21-002-sample.json`: two-source agreement (high confidence, no conflicts).
- `docs/samples/lnm/linkset-lnm-21-002-conflict.json`: three-source disagreement showing conflict records and confidence < 0.7.

All fixtures use ASCII ordering and ISO-8601 UTC timestamps and may be used as golden outputs in tests.

## Implementation checklist

- Builder must refuse to overwrite an existing linkset when the incoming hash list is unchanged.
- Correlation job idempotency key: `hash(tenantId|aliasSet|purlSet|fetchedAtBucket)`.
- Telemetry: counter `concelier.linkset.builder.conflict_total{field,reason}` and histogram `concelier.linkset.builder.confidence` (0–1 buckets).
- Event emission: include `confidence` and `conflicts` summary in `advisory.linkset.updated@1`; keep arrays sorted as above.

## Change control

- Add-only. Adjusting weights or conflict codes requires a new version `advisory.linkset.updated@2` and a sprint note.
docs/modules/concelier/operations/observation-events.md · new file (31 lines)
@@ -0,0 +1,31 @@

# Observation Event Transport (advisory.observation.updated@1)

Purpose: document how to emit `advisory.observation.updated@1` events via the Mongo outbox with optional NATS JetStream transport.

## Configuration (appsettings.yaml / config)

```yaml
advisoryObservationEvents:
  enabled: false                 # set true to publish beyond the Mongo outbox
  transport: "mongo"             # "mongo" (no-op publisher) or "nats"
  natsUrl: "nats://127.0.0.1:4222"
  subject: "concelier.advisory.observation.updated.v1"
  deadLetterSubject: "concelier.advisory.observation.updated.dead.v1"
  stream: "CONCELIER_OBS"
```

Defaults: disabled, transport `mongo`; subject/stream as above.

## Flow

1) Observation sink writes the event to `advisory_observation_events` (idempotent on `observationHash`).
2) Background worker dequeues unpublished rows, publishes via the configured transport, then stamps `publishedAt`.
3) If the transport is disabled/unavailable, the outbox accumulates safely; re-enabling resumes publishing.
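The three steps above can be sketched with an in-memory stand-in for the outbox collection. All names here (`InMemoryOutbox`, `drain`) are hypothetical; the real worker (`AdvisoryObservationTransportWorker`) runs against Mongo and NATS, but the idempotency and accumulate-then-resume semantics are the same.

```python
import time

class InMemoryOutbox:
    # Stands in for the advisory_observation_events collection: rows are
    # idempotent on observationHash and carry a publishedAt stamp.
    def __init__(self):
        self.rows = {}  # observationHash -> {"event": ..., "publishedAt": ...}

    def enqueue(self, event: dict) -> None:
        # setdefault = idempotent insert; a duplicate hash is a no-op.
        self.rows.setdefault(event["observationHash"],
                             {"event": event, "publishedAt": None})

    def pending(self) -> list:
        return [r for r in self.rows.values() if r["publishedAt"] is None]

def drain(outbox: InMemoryOutbox, publish, enabled: bool) -> int:
    # Steps 2/3: publish unpublished rows when the transport is enabled;
    # otherwise the outbox accumulates and a later drain resumes publishing.
    published = 0
    if not enabled:
        return published
    for row in outbox.pending():
        publish(row["event"])          # e.g. a NATS JetStream publish call
        row["publishedAt"] = time.time()
        published += 1
    return published
```

Note that disabling the transport never loses events: rows stay pending until a drain runs with the transport enabled.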

## Operational notes

- Ensure NATS JetStream is reachable before enabling `transport: nats` to avoid retry noise.
- The stream is auto-created if missing with the current subject; size capped at 512 KiB per message.
- Dead-letter subject reserved; not yet wired—keep for future schema validation failures.
- Backlog monitoring: count documents in `advisory_observation_events` with `publishedAt: null`.

## Testing

- Without NATS: leave `enabled: false`; the app continues writing to the outbox only.
- With NATS: run a local `nats-server -js`, then set `enabled: true` and `transport: nats`. Verify published messages on the subject via `nats sub concelier.advisory.observation.updated.v1`.
docs/modules/scanner/design/node-bundle-phase22.md · new file (50 lines)
@@ -0,0 +1,50 @@

# Scanner Node Phase 22 (22-006/007/008) · Prep deliverable

Purpose: unblock PREP tasks by freezing analyzer inputs/outputs, resolver traces, and fixtures for Node bundle/source-map coverage, native/WASM detection, and AOC-compliant observation emission.

## Output artefacts

- Sample NDJSON: `docs/samples/scanner/node-phase22/node-phase22-sample.ndjson` (covers 22-006/007/008 in one run).
- Resolver trace spec and reason codes (below) are binding for workers and tests.

## 22-006 · Bundle + source-map reconstruction

- Detect bundles by `sourceMappingURL` trailers and common bundle signatures (webpack runtime, rollup intro, esbuild banners).
- Load the `.map` file (inline/base64 or adjacent file); reject maps >50 MB or with missing `sourcesContent`.
- Resolver trace must include: `["bundle:<path>", "map:<path|inline>", "source:<original-path>"]`.
- Recovered module specifier shape: `{ "type": "component", "componentType": "pkg", "path": "<relative path>", "format": "esm|cjs", "fromBundle": true, "confidence": 0.8+ }`.
- Normalize paths to POSIX, strip inline `webpack://` prefixes, collapse duplicated `..` segments, and dedupe.
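The path normalization rule above can be sketched as a small pure function. This is an illustrative sketch of the stated rules (POSIX separators, `webpack://` prefix stripping, `..` collapsing), not the analyzer's actual implementation; the function name is hypothetical.

```python
def normalize_source_path(raw: str) -> str:
    # POSIX separators first, then strip bundler prefixes like webpack://
    # so recovered module paths are stable repo-relative paths.
    path = raw.replace("\\", "/")
    if path.startswith("webpack://"):
        path = path[len("webpack://"):]
    parts = []
    for seg in path.split("/"):
        if seg in ("", "."):
            continue  # drop empty and "." segments (also handles "//")
        if seg == ".." and parts and parts[-1] != "..":
            parts.pop()  # collapse "a/b/../c" -> "a/c"
        else:
            parts.append(seg)
    return "/".join(parts)
```

Deduplication then becomes a set membership check on the normalized strings, which is why normalization has to happen before the dedupe step.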

## 22-007 · Native addon / WASM / capability signals

- Native addons: detect `.node` files (ELF/PE/Mach-O) and `process.dlopen` calls. Emit `componentType:"native"` with `arch`, `platform`, and optional `soname`.
- WASM: detect `.wasm` files and dynamic imports (`WebAssembly.instantiate*`). Emit `componentType:"wasm"` with `exports` (function names if discoverable).
- Capability signals: AST scan for `child_process`, `vm`, `worker_threads`, `process.binding`, and `fs.promises` `openFileHandle`.
- Reason codes (stable strings):
  - `native-dlopen-string`, `native-dlopen-template`, `native-addon-file`,
  - `wasm-import`, `wasm-file`,
  - `capability-child-process`, `capability-vm`, `capability-worker`, `capability-binding`, `capability-fs-promises`.
- Hint edges: `{ "type":"edge", "edgeType":"native-addon|wasm|capability", "from":"<module>", "to":"<artifact>", "reason":"<code>", "confidence": 0.6–0.9 }`.

## 22-008 · AOC-compliant observation emission

- Emit NDJSON records grouped by `entrypoints`, `components`, `edges`, `resolverTrace` per record.
- Required fields: `type`, `from`, `to` (for edges), `componentType`, `format`, `reason`, `confidence`, `resolverTrace` (array, stable ordered). Optional `scopes` (`runtime|dev|optional`) when derived from package.json sections.
- Determinism:
  - Sort output by `type` then `path/from/to` strings; stable sort within edges by `edgeType`.
  - Timestamps forbidden; no filesystem mtime or random IDs.
  - All paths POSIX; absolute paths stripped to container root.
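The two-pass sort rule above ("sort by `type` then `path/from/to`; stable sort within edges by `edgeType`") can be sketched as follows. The function name and record shapes are illustrative; it relies on Python's stable sort so the path/from/to order survives as the tiebreaker inside each `edgeType` group.

```python
def canonical_order(records: list) -> list:
    # Primary order: type, then the path (components) or from/to pair (edges).
    def locator(r: dict) -> str:
        return r.get("path") or f'{r.get("from", "")}\x00{r.get("to", "")}'
    ordered = sorted(records, key=lambda r: (r["type"], locator(r)))
    # Stable re-sort groups edges by edgeType while preserving locator order;
    # components have no edgeType and keep their position within "component".
    ordered.sort(key=lambda r: (r["type"], r.get("edgeType", "")))
    return ordered
```

Because both passes are stable and keyed on record content only (no timestamps or random IDs), re-running the analyzer over the same inputs yields byte-identical NDJSON, which is what the golden-fixture tests depend on.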

## Validation gates

- Reject payloads if `resolverTrace` is missing, empty, or unsorted.
- Reject entries when `confidence < 0.4` to avoid noise; keep suppressed entries in debug logs only.
- Large maps: emit `ERR_NODE_BUNDLE_MAP_TOO_LARGE` and skip the map (still report bundle presence with `confidence: 0.51`).

## Fixtures

- `docs/samples/scanner/node-phase22/node-phase22-sample.ndjson` contains:
  1) webpack bundle w/ source map mapping to `/src/app.js` (22-006)
  2) native addon load via `process.dlopen('./native/addon.node')` (22-007)
  3) WASM module import via `WebAssembly.instantiateStreaming(fetch('./pkg.wasm'))` (22-007)
  4) capability signal for `child_process.execFile` (22-007)
  5) consolidated edges + components emitted per AOC rules (22-008)

## Implementation notes

- Keep the analyzer pure: no execution of JS; rely on parse + static string matching. Use the cached sourcemap parser in `StellaOps.Scanner.Analyzers.Lang.Node`.
- Resolver trace must be included verbatim in tests; any change requires a fixture update and sprint note.
- Workers must mark Node phase 22 outputs as experimental behind feature flag `scanner:node:phase22=true`, defaulting to enabled for CI only until stabilized.
docs/modules/scanner/prep/2025-11-20-node-isolated-runner.md · new file (32 lines)
@@ -0,0 +1,32 @@

# Scanner PREP — Node Analyzer Isolated Runner (22-001)

Date: 2025-11-20
Owner: Node Analyzer Guild
Scope: Requirements and plan to provide an isolated/scoped runner so targeted Node analyzer tests complete without whole-solution fan-out.

## Goals

- Enable `StellaOps.Scanner.Analyzers.Lang.Node.Tests` to run deterministically without restoring/building the entire solution.
- Reduce CI/local runtime (<5 min) and contention during restores.
- Keep the offline/air-gap posture: rely only on `local-nugets/` + repo fixtures; no external fetches.

## Proposed approach

1) **Scoped solution file**
   - Create `src/Scanner/StellaOps.Scanner.Analyzers.Node.slnf` including only Node analyzer projects + their direct deps (`StellaOps.Scanner.Analyzers.Lang.Node`, tests, shared test utilities).
   - CI job uses `dotnet test --solution St...Node.slnf --no-restore --no-build` after a targeted restore.
2) **Isolated restore cache**
   - Pre-populate `local-nugets/` via the existing offline feed; add MSBuild property `RestorePackagesPath=$(RepoRoot)/offline/packages` to avoid global cache churn.
3) **Test shim**
   - Add runsettings to disable collectors that trigger solution-wide discovery; set `RunConfiguration.DisableAppDomain=true` for determinism.
4) **Tarball/pnpm/Yarn PnP fixtures**
   - Move heavy fixtures under `src/Scanner/__Tests/Fixtures/node/` and reference them via the deterministic VFS layer; no temp extraction outside the repo.
5) **Entry point**
   - New `scripts/scanner/node-tests-isolated.sh` wrapper: restore the scoped solution, then run `dotnet test` with `/m:1` and explicit test filters as needed.
|
||||
|
||||
## Deliverables for implementation task
|
||||
- Add `.slnf`, runsettings, and wrapper script as above.
|
||||
- Update CI pipeline (Scanner) to include isolated target; keep full solution tests separate.
|
||||
- Document usage in `src/Scanner/__Tests/README.md`.
|
||||
|
||||
## Blocking items
|
||||
- None identified; all inputs are local to the repo/offline feeds.
|
||||
|
||||
This note satisfies PREP-SCANNER-ANALYZERS-NODE-22-001-NEEDS-ISOL by defining the isolated runner plan and artefact locations.
|
||||
74  docs/samples/lnm/linkset-lnm-21-002-conflict.json  Normal file
@@ -0,0 +1,74 @@
{
  "_id": "sha256:7b0c471f0b2c4c5f9e19f7bff4c3d9e4e7b2cbf7d5c3e0a58a0cc3314d2c9a10",
  "tenantId": "urn:tenant:123e4567-e89b-12d3-a456-426614174000",
  "advisoryId": "GHSA-aaaa-bbbb-cccc",
  "source": "lnm-correlator",
  "observations": [
    "6560606df3c5d6ad3b5b0001",
    "6560606df3c5d6ad3b5b0002",
    "6560606df3c5d6ad3b5b0003"
  ],
  "key": {
    "vulnerabilityId": "GHSA-aaaa-bbbb-cccc",
    "productKey": "pkg:npm/leftpad",
    "confidence": 0.63
  },
  "normalized": {
    "purls": ["pkg:npm/leftpad"],
    "versions": ["1.3.0", "1.4.0"],
    "ranges": [
      {"type": "semver", "events": [{"introduced": "0"}, {"fixed": "1.3.0"}]},
      {"type": "semver", "events": [{"introduced": "1.3.0"}, {"fixed": "1.5.0"}]}
    ],
    "severities": [
      {"system": "cvssv3", "score": 5.0, "vector": "CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:L/I:L/A:L"},
      {"system": "cvssv4", "score": 4.8, "vector": "CVSS:4.0/AV:P/AC:H/AT:N/PR:L/UI:P/VC:L/VI:L/VA:L/SC:N/SI:N/SA:N"}
    ]
  },
  "conflicts": [
    {
      "field": "severity",
      "reason": "severity-mismatch",
      "values": [
        "vendorA:7.5 CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N",
        "vendorB:5.0 CVSS:3.1/AV:L/AC:H/PR:L/UI:R/S:U/C:L/I:L/A:L"
      ]
    },
    {
      "field": "affected",
      "reason": "affected-range-divergence",
      "values": [
        "vendorA:[0,1.3.0]",
        "vendorB:[1.3.0,1.5.0]",
        "vendorC:1.4.x only"
      ]
    },
    {
      "field": "aliases",
      "reason": "alias-inconsistency",
      "values": [
        "vendorA:GHSA-aaaa-bbbb-cccc",
        "vendorB:CVE-2024-11111"
      ]
    },
    {
      "field": "references",
      "reason": "reference-clash",
      "values": [
        "vendorA:https://blog.example.com/advisory",
        "vendorB:https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2024-11111"
      ]
    }
  ],
  "provenance": {
    "observationHashes": [
      "8f0f9406349e62a7a9c28b24ec77cbb3b2a13f57d8dc2ed594a2c3fe6edbe201",
      "0e3ae50c3b2ab9e0ec2bf531d1a61583d79b4b0abeb8ec59269afeb7b8b5f050",
      "8c87cfcc22ebb7fa6e0c0e9e3d1de0d812e2fd6b05e8c6b0f2c8c7b7f988aaa2"
    ],
    "toolVersion": "lnm-21-002",
    "policyHash": "linkset-correlation-21-002"
  },
  "createdAt": "2025-11-20T15:10:00Z",
  "builtByJobId": "corr-tenant123-ghsa-aaaa-bbbb-cccc"
}
36  docs/samples/lnm/linkset-lnm-21-002-sample.json  Normal file
@@ -0,0 +1,36 @@
{
  "_id": "sha256:1f4b6e7c9d5f4e8f4973c8c3dfe1d1d3b4f0ad8991e7d937c6c1d77a9e4b8a21",
  "tenantId": "urn:tenant:123e4567-e89b-12d3-a456-426614174000",
  "advisoryId": "CVE-2024-99999",
  "source": "lnm-correlator",
  "observations": [
    "6560606df3c5d6ad3b5a1234",
    "6560606df3c5d6ad3b5a5678"
  ],
  "key": {
    "vulnerabilityId": "CVE-2024-99999",
    "productKey": "pkg:npm/lodash",
    "confidence": 0.92
  },
  "normalized": {
    "purls": ["pkg:npm/lodash"],
    "versions": ["4.17.21"],
    "ranges": [
      {"type": "semver", "events": [{"introduced": "0"}, {"fixed": "4.17.22"}]}
    ],
    "severities": [
      {"system": "cvssv3", "score": 7.5, "vector": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N"}
    ]
  },
  "conflicts": [],
  "provenance": {
    "observationHashes": [
      "10f4fc0b5c1a1d4c266fafd2b4f45618f6a0a4b86087c3e67e4c1a2c8f38e990",
      "10f4fc0b5c1a1d4c266fafd2b4f45618f6a0a4b86087c3e67e4c1a2c8f38e991"
    ],
    "toolVersion": "lnm-21-002",
    "policyHash": "linkset-correlation-21-002"
  },
  "createdAt": "2025-11-20T15:05:00Z",
  "builtByJobId": "corr-tenant123-cve-2024-99999"
}
@@ -0,0 +1,7 @@
{"type":"entrypoint","path":"/app/dist/main.js","format":"esm","reason":"bundle-entrypoint","confidence":0.88,"resolverTrace":["bundle:/app/dist/main.js","map:/app/dist/main.js.map","source:/src/app.js"]}
{"type":"component","componentType":"pkg","path":"/src/app.js","format":"esm","fromBundle":true,"reason":"source-map","confidence":0.87,"resolverTrace":["bundle:/app/dist/main.js","map:/app/dist/main.js.map","source:/src/app.js"]}
{"type":"component","componentType":"native","path":"/app/native/addon.node","arch":"x86_64","platform":"linux","reason":"native-addon-file","confidence":0.82,"resolverTrace":["file:/app/native/addon.node","require:/app/dist/native-entry.js"]}
{"type":"component","componentType":"wasm","path":"/app/pkg/pkg.wasm","exports":["init","run"],"reason":"wasm-file","confidence":0.8,"resolverTrace":["file:/app/pkg/pkg.wasm","import:/app/dist/wasm-entry.js"]}
{"type":"edge","edgeType":"native-addon","from":"/src/app.js","to":"/app/native/addon.node","reason":"native-dlopen-string","confidence":0.76,"resolverTrace":["source:/src/app.js","call:process.dlopen('./native/addon.node')"]}
{"type":"edge","edgeType":"wasm","from":"/src/app.js","to":"/app/pkg/pkg.wasm","reason":"wasm-import","confidence":0.74,"resolverTrace":["source:/src/app.js","call:WebAssembly.instantiateStreaming(fetch('./pkg.wasm'))"]}
{"type":"edge","edgeType":"capability","from":"/src/app.js","to":"child_process.execFile","reason":"capability-child-process","confidence":0.7,"resolverTrace":["source:/src/app.js","call:child_process.execFile"]}
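The fixture above is plain NDJSON: one JSON object per line, tagged by a `type` field (`entrypoint`, `component`, `edge`). A minimal loader can be sketched as follows (Python used purely for illustration; `load_ndjson` is a hypothetical helper, not part of the test suite):

```python
import json

def load_ndjson(text: str) -> dict:
    """Parse NDJSON text and group records by their "type" field."""
    records = [json.loads(line) for line in text.splitlines() if line.strip()]
    groups: dict = {}
    for rec in records:
        groups.setdefault(rec["type"], []).append(rec)
    return groups
```

The real `NodePhase22SampleLoaderTests` perform the equivalent grouping in C# before asserting on edge/component counts.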
1  offline/restore_missing_snapshot.txt  Normal file
@@ -0,0 +1 @@
Known offline gaps from latest restore: StellaOps.Policy.AuthSignals (NU1101).
@@ -106,6 +106,15 @@ builder.Services.AddMongoStorage(storageOptions =>
    storageOptions.DatabaseName = concelierOptions.Storage.Database;
    storageOptions.CommandTimeout = TimeSpan.FromSeconds(concelierOptions.Storage.CommandTimeoutSeconds);
});
builder.Services.AddOptions<AdvisoryObservationEventPublisherOptions>()
    .Bind(builder.Configuration.GetSection("advisoryObservationEvents"))
    .PostConfigure(options =>
    {
        options.Subject ??= "concelier.advisory.observation.updated.v1";
        options.Stream ??= "CONCELIER_OBS";
        options.Transport = string.IsNullOrWhiteSpace(options.Transport) ? "mongo" : options.Transport;
    })
    .ValidateOnStart();
builder.Services.AddConcelierAocGuards();
builder.Services.AddConcelierLinksetMappers();
builder.Services.AddAdvisoryRawServices();
@@ -33,7 +33,7 @@ internal static class AdvisoryLinksetNormalization

        var normalized = Build(linkset.PackageUrls);
        var conflicts = ExtractConflicts(linkset);
-       var confidence = ComputeConfidence(providedConfidence, conflicts);
+       var confidence = ComputeConfidence(linkset, providedConfidence, conflicts);

        return (normalized, confidence, conflicts);
    }
@@ -171,28 +171,56 @@ internal static class AdvisoryLinksetNormalization
                continue;
            }

+           // Preserve existing notes but map into stable reason codes where possible.
+           var key = note.Key.Trim();
+           var reason = key switch
+           {
+               "severity" => "severity-mismatch",
+               "ranges" => "affected-range-divergence",
+               "references" => "reference-clash",
+               "aliases" => "alias-inconsistency",
+               _ => "metadata-gap"
+           };
+
            conflicts.Add(new AdvisoryLinksetConflict(
-               note.Key.Trim(),
-               note.Value.Trim(),
-               null));
+               Field: key,
+               Reason: reason,
+               Values: new[] { $"{key}:{note.Value.Trim()}" }));
        }

        return conflicts;
    }

-   private static double? ComputeConfidence(double? providedConfidence, IReadOnlyList<AdvisoryLinksetConflict> conflicts)
+   private static double? ComputeConfidence(RawLinkset linkset, double? providedConfidence, IReadOnlyList<AdvisoryLinksetConflict> conflicts)
    {
        if (providedConfidence.HasValue)
        {
            return CoerceConfidence(providedConfidence);
        }

-       if (conflicts.Count > 0)
+       double aliasScore = linkset.Aliases.IsDefaultOrEmpty ? 0d : 1d;
+       double purlOverlapScore = linkset.PackageUrls.IsDefaultOrEmpty
+           ? 0d
+           : (linkset.PackageUrls.Length > 1 ? 1d : 0.6d);
+       double cpeOverlapScore = linkset.Cpes.IsDefaultOrEmpty
+           ? 0d
+           : (linkset.Cpes.Length > 1 ? 1d : 0.5d);
+       double severityAgreement = conflicts.Any(c => c.Reason == "severity-mismatch") ? 0.2d : 0.5d;
+       double referenceOverlap = linkset.References.IsDefaultOrEmpty ? 0d : 0.5d;
+       double freshnessScore = 0.5d; // until fetchedAt spread is available
+
+       var confidence = (0.40 * aliasScore) +
+                        (0.25 * purlOverlapScore) +
+                        (0.15 * cpeOverlapScore) +
+                        (0.10 * severityAgreement) +
+                        (0.05 * referenceOverlap) +
+                        (0.05 * freshnessScore);
+
+       if (conflicts.Count > 0 && confidence > 0.7d)
        {
-           // Basic heuristic until scoring pipeline is wired: any conflicts => lower confidence.
-           return 0.5;
+           confidence -= 0.1d; // penalize non-empty conflict sets
        }

-       return 1.0;
+       return Math.Clamp(confidence, 0d, 1d);
    }
}
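The new `ComputeConfidence` is a fixed-weight sum over six component scores, with a 0.1 penalty when conflicts exist and the score would otherwise exceed 0.7, clamped to [0, 1]. A Python mirror of that arithmetic (illustrative only; the weights come from the C# above, the function name is hypothetical):

```python
def compute_confidence(alias, purl_overlap, cpe_overlap,
                       severity_agreement, reference_overlap,
                       freshness, has_conflicts):
    # Weighted sum with the same weights as the C# implementation.
    confidence = (0.40 * alias +
                  0.25 * purl_overlap +
                  0.15 * cpe_overlap +
                  0.10 * severity_agreement +
                  0.05 * reference_overlap +
                  0.05 * freshness)
    # Non-empty conflict sets shave 0.1 off scores above 0.7.
    if has_conflicts and confidence > 0.7:
        confidence -= 0.1
    return min(max(confidence, 0.0), 1.0)
```

For example, a linkset with aliases (1.0), a single purl (0.6), no cpes (0), no severity conflict (0.5), references present (0.5), and neutral freshness (0.5) lands at 0.65 — which is why the aggregation test asserts a range rather than a fixed value.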
@@ -0,0 +1,11 @@
namespace StellaOps.Concelier.Core.Observations;

public sealed class AdvisoryObservationEventPublisherOptions
{
    public bool Enabled { get; set; } = false;
    public string Transport { get; set; } = "mongo"; // mongo|nats
    public string? NatsUrl { get; set; }
    public string Subject { get; set; } = "concelier.advisory.observation.updated.v1";
    public string DeadLetterSubject { get; set; } = "concelier.advisory.observation.updated.dead.v1";
    public string Stream { get; set; } = "CONCELIER_OBS";
}
@@ -0,0 +1,105 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Security.Cryptography;
using System.Text;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.RawModels;

namespace StellaOps.Concelier.Core.Observations;

/// <summary>
/// Contract-matching payload for <c>advisory.observation.updated@1</c> events.
/// </summary>
public sealed record AdvisoryObservationUpdatedEvent(
    Guid EventId,
    string TenantId,
    string ObservationId,
    string AdvisoryId,
    AdvisoryObservationSource Source,
    AdvisoryObservationLinksetSummary LinksetSummary,
    string DocumentSha,
    string ObservationHash,
    DateTimeOffset IngestedAt,
    string ReplayCursor,
    string? SupersedesId = null,
    string? TraceId = null)
{
    public static AdvisoryObservationUpdatedEvent FromObservation(
        AdvisoryObservation observation,
        string? supersedesId,
        string? traceId,
        string? replayCursor = null)
    {
        ArgumentNullException.ThrowIfNull(observation);

        var summary = BuildSummary(observation.Linkset, observation.RawLinkset);
        var observationHash = ComputeObservationHash(observation);
        var tenantUrn = observation.Tenant.StartsWith("urn:tenant:", StringComparison.Ordinal)
            ? observation.Tenant
            : $"urn:tenant:{observation.Tenant}";

        return new AdvisoryObservationUpdatedEvent(
            EventId: Guid.NewGuid(),
            TenantId: tenantUrn,
            ObservationId: observation.ObservationId,
            AdvisoryId: observation.Upstream.UpstreamId,
            Source: observation.Source,
            LinksetSummary: summary,
            DocumentSha: observation.Upstream.ContentHash,
            ObservationHash: observationHash,
            IngestedAt: observation.CreatedAt,
            ReplayCursor: replayCursor ?? observation.CreatedAt.ToUniversalTime().Ticks.ToString(),
            SupersedesId: supersedesId,
            TraceId: traceId);
    }

    private static AdvisoryObservationLinksetSummary BuildSummary(
        AdvisoryObservationLinkset linkset,
        RawLinkset rawLinkset)
    {
        ArgumentNullException.ThrowIfNull(linkset);
        ArgumentNullException.ThrowIfNull(rawLinkset);

        static ImmutableArray<string> SortSet(IEnumerable<string> values)
            => values.Where(static v => !string.IsNullOrWhiteSpace(v))
                .Select(static v => v.Trim())
                .OrderBy(static v => v, StringComparer.Ordinal)
                .ToImmutableArray();

        var relationships = rawLinkset.Relationships.Select(static rel => new AdvisoryObservationRelationshipSummary(
            rel.Type,
            rel.Source,
            rel.Target,
            rel.Provenance)).ToImmutableArray();

        return new AdvisoryObservationLinksetSummary(
            Aliases: SortSet(linkset.Aliases),
            Purls: SortSet(linkset.Purls),
            Cpes: SortSet(linkset.Cpes),
            Scopes: SortSet(rawLinkset.Scopes),
            Relationships: relationships);
    }

    private static string ComputeObservationHash(AdvisoryObservation observation)
    {
        var json = CanonicalJsonSerializer.Serialize(observation);
        var bytes = Encoding.UTF8.GetBytes(json);
        var hashBytes = SHA256.HashData(bytes);
        return Convert.ToHexString(hashBytes).ToLowerInvariant();
    }
}

public sealed record AdvisoryObservationLinksetSummary(
    ImmutableArray<string> Aliases,
    ImmutableArray<string> Purls,
    ImmutableArray<string> Cpes,
    ImmutableArray<string> Scopes,
    ImmutableArray<AdvisoryObservationRelationshipSummary> Relationships);

public sealed record AdvisoryObservationRelationshipSummary(
    string Type,
    string Source,
    string Target,
    string? Provenance);
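`ComputeObservationHash` derives a lowercase-hex SHA-256 over the canonical JSON form of the observation, so key order and whitespace never change the hash. The same idea in Python (illustrative only; sorted-key compact JSON stands in for `CanonicalJsonSerializer`, and `observation_hash` is a hypothetical name):

```python
import hashlib
import json

def observation_hash(observation: dict) -> str:
    # Canonicalize: sorted keys, no insignificant whitespace.
    canonical = json.dumps(observation, sort_keys=True, separators=(",", ":"))
    # Lowercase hex SHA-256, mirroring Convert.ToHexString(...).ToLowerInvariant().
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```

This determinism is what allows the unique `observationHash` index on `advisory_observation_events` to de-duplicate replays of the same observation.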
@@ -0,0 +1,12 @@
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

namespace StellaOps.Concelier.Core.Observations;

public interface IAdvisoryObservationEventOutbox
{
    Task<IReadOnlyCollection<AdvisoryObservationUpdatedEvent>> DequeueAsync(int take, CancellationToken cancellationToken);
    Task MarkPublishedAsync(Guid eventId, DateTimeOffset publishedAt, CancellationToken cancellationToken);
}
@@ -0,0 +1,9 @@
using System.Threading;
using System.Threading.Tasks;

namespace StellaOps.Concelier.Core.Observations;

public interface IAdvisoryObservationEventPublisher
{
    Task PublishAsync(AdvisoryObservationUpdatedEvent @event, CancellationToken cancellationToken);
}
@@ -1,9 +1,10 @@
using System.Collections.Generic;
using System.Linq;
using System.Text.Encodings.Web;
using System.Text.Json;
using System.Text.Json.Serialization;
using System.Text.Json.Serialization.Metadata;
+using StellaOps.Concelier.Models.Observations;

namespace StellaOps.Concelier.Models;

@@ -17,11 +18,11 @@ public static class CanonicalJsonSerializer

    private static readonly IReadOnlyDictionary<Type, string[]> PropertyOrderOverrides = new Dictionary<Type, string[]>
    {
        {
            typeof(AdvisoryProvenance),
            new[]
            {
                "source",
                "kind",
                "value",
                "decisionReason",
@@ -69,15 +70,30 @@ public static class CanonicalJsonSerializer
        {
            typeof(AdvisoryWeakness),
            new[]
            {
                "taxonomy",
                "identifier",
                "name",
                "uri",
                "provenance",
            }
        },
+       {
+           typeof(AdvisoryObservation),
+           new[]
+           {
+               "observationId",
+               "tenant",
+               "source",
+               "upstream",
+               "content",
+               "linkset",
+               "rawLinkset",
+               "createdAt",
+               "attributes",
+           }
+       },
    };

    public static string Serialize<T>(T value)
        => JsonSerializer.Serialize(value, CompactOptions);
@@ -27,6 +27,7 @@ This module owns the persistent shape of Concelier's MongoDB database. Upgrades
| `20251028_advisory_supersedes_backfill` | Renames legacy `advisory` collection to a read-only backup view and backfills `supersedes` chains across `advisory_raw`. |
| `20251028_advisory_raw_validator` | Applies Aggregation-Only Contract JSON schema validator to the `advisory_raw` collection with configurable enforcement level. |
| `20251104_advisory_observations_raw_linkset` | Backfills `rawLinkset` on `advisory_observations` using stored `advisory_raw` documents so canonical and raw projections co-exist for downstream policy joins. |
+| `20251120_advisory_observation_events` | Creates `advisory_observation_events` collection with tenant/hash indexes for observation event fan-out (advisory.observation.updated@1). Includes optional `publishedAt` marker for transport outbox. |
| `20251117_advisory_linksets_tenant_lower` | Lowercases `advisory_linksets.tenantId` to align writes with lookup filters. |

## Operator Runbook
@@ -0,0 +1,31 @@
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

namespace StellaOps.Concelier.Storage.Mongo.Migrations;

public sealed class EnsureAdvisoryObservationEventCollectionMigration : IMongoMigration
{
    public string Id => "20251120_advisory_observation_events";

    public string Description => "Ensure advisory_observation_events collection and indexes exist for observation event fan-out.";

    public async Task ApplyAsync(IMongoDatabase database, CancellationToken cancellationToken)
    {
        var collection = database.GetCollection<BsonDocument>(MongoStorageDefaults.Collections.AdvisoryObservationEvents);

        var indexes = new List<CreateIndexModel<BsonDocument>>
        {
            new(
                Builders<BsonDocument>.IndexKeys.Ascending("tenantId").Descending("ingestedAt"),
                new CreateIndexOptions { Name = "advisory_observation_events_tenant_ingested_desc" }),
            new(
                Builders<BsonDocument>.IndexKeys.Ascending("observationHash"),
                new CreateIndexOptions { Name = "advisory_observation_events_hash_unique", Unique = true }),
        };

        await collection.Indexes.CreateManyAsync(indexes, cancellationToken).ConfigureAwait(false);
    }
}
@@ -29,5 +29,6 @@ public static class MongoStorageDefaults
        public const string AdvisoryConflicts = "advisory_conflicts";
        public const string AdvisoryObservations = "advisory_observations";
        public const string AdvisoryLinksets = "advisory_linksets";
+       public const string AdvisoryObservationEvents = "advisory_observation_events";
    }
}
@@ -0,0 +1,97 @@
using System;
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

public sealed class AdvisoryObservationEventDocument
{
    [BsonId]
    public Guid Id { get; set; }

    [BsonElement("tenantId")]
    public string TenantId { get; set; } = string.Empty;

    [BsonElement("observationId")]
    public string ObservationId { get; set; } = string.Empty;

    [BsonElement("advisoryId")]
    public string AdvisoryId { get; set; } = string.Empty;

    [BsonElement("source")]
    public AdvisoryObservationSourceDocument Source { get; set; } = new();

    [BsonElement("linksetSummary")]
    public AdvisoryObservationLinksetSummaryDocument LinksetSummary { get; set; } = new();

    [BsonElement("documentSha")]
    public string DocumentSha { get; set; } = string.Empty;

    [BsonElement("observationHash")]
    public string ObservationHash { get; set; } = string.Empty;

    [BsonElement("ingestedAt")]
    public DateTime IngestedAt { get; set; }
        = DateTime.SpecifyKind(DateTime.UtcNow, DateTimeKind.Utc);

    [BsonElement("replayCursor")]
    public string ReplayCursor { get; set; } = string.Empty;

    [BsonElement("supersedesId")]
    public string? SupersedesId { get; set; }

    [BsonElement("traceId")]
    public string? TraceId { get; set; }

    [BsonElement("publishedAt")]
    public DateTime? PublishedAt { get; set; }
}

public sealed class AdvisoryObservationSourceDocument
{
    [BsonElement("vendor")]
    public string Vendor { get; set; } = string.Empty;

    [BsonElement("stream")]
    public string Stream { get; set; } = string.Empty;

    [BsonElement("api")]
    public string Api { get; set; } = string.Empty;

    [BsonElement("collectorVersion")]
    public string? CollectorVersion { get; set; }
}

public sealed class AdvisoryObservationLinksetSummaryDocument
{
    [BsonElement("aliases")]
    public List<string> Aliases { get; set; } = new();

    [BsonElement("purls")]
    public List<string> Purls { get; set; } = new();

    [BsonElement("cpes")]
    public List<string> Cpes { get; set; } = new();

    [BsonElement("scopes")]
    public List<string> Scopes { get; set; } = new();

    [BsonElement("relationships")]
    public List<AdvisoryObservationRelationshipDocument> Relationships { get; set; } = new();
}

public sealed class AdvisoryObservationRelationshipDocument
{
    [BsonElement("type")]
    public string Type { get; set; } = string.Empty;

    [BsonElement("source")]
    public string Source { get; set; } = string.Empty;

    [BsonElement("target")]
    public string Target { get; set; } = string.Empty;

    [BsonElement("provenance")]
    public string? Provenance { get; set; }
}
@@ -1,4 +1,5 @@
+using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Concelier.Core.Observations;
@@ -9,14 +10,37 @@ namespace StellaOps.Concelier.Storage.Mongo.Observations;
internal sealed class AdvisoryObservationSink : IAdvisoryObservationSink
{
    private readonly IAdvisoryObservationStore _store;
+   private readonly IAdvisoryObservationEventPublisher _publisher;
+   private readonly TimeProvider _timeProvider;

-   public AdvisoryObservationSink(IAdvisoryObservationStore store)
+   public AdvisoryObservationSink(
+       IAdvisoryObservationStore store,
+       IAdvisoryObservationEventPublisher publisher,
+       TimeProvider? timeProvider = null)
    {
        _store = store ?? throw new ArgumentNullException(nameof(store));
+       _publisher = publisher ?? throw new ArgumentNullException(nameof(publisher));
+       _timeProvider = timeProvider ?? TimeProvider.System;
    }

    public Task UpsertAsync(AdvisoryObservation observation, CancellationToken cancellationToken)
    {
-       return _store.UpsertAsync(observation, cancellationToken);
+       ArgumentNullException.ThrowIfNull(observation);
+
+       return UpsertAndPublishAsync(observation, cancellationToken);
    }
+
+   private async Task UpsertAndPublishAsync(AdvisoryObservation observation, CancellationToken cancellationToken)
+   {
+       await _store.UpsertAsync(observation, cancellationToken).ConfigureAwait(false);
+
+       var evt = AdvisoryObservationUpdatedEvent.FromObservation(
+           observation,
+           supersedesId: observation.Attributes.GetValueOrDefault("supersedesId")
+               ?? observation.Attributes.GetValueOrDefault("supersedes"),
+           traceId: observation.Attributes.GetValueOrDefault("traceId"),
+           replayCursor: _timeProvider.GetUtcNow().Ticks.ToString());
+
+       await _publisher.PublishAsync(evt, cancellationToken).ConfigureAwait(false);
+   }
}
@@ -0,0 +1,66 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using StellaOps.Concelier.Core.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal sealed class AdvisoryObservationTransportWorker : BackgroundService
{
    private readonly IAdvisoryObservationEventOutbox _outbox;
    private readonly IAdvisoryObservationEventPublisher _publisher;
    private readonly ILogger<AdvisoryObservationTransportWorker> _logger;
    private readonly AdvisoryObservationEventPublisherOptions _options;

    public AdvisoryObservationTransportWorker(
        IAdvisoryObservationEventOutbox outbox,
        IAdvisoryObservationEventPublisher publisher,
        IOptions<AdvisoryObservationEventPublisherOptions> options,
        ILogger<AdvisoryObservationTransportWorker> logger)
    {
        _outbox = outbox ?? throw new ArgumentNullException(nameof(outbox));
        _publisher = publisher ?? throw new ArgumentNullException(nameof(publisher));
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
        _options = options.Value;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        if (!_options.Enabled)
        {
            _logger.LogInformation("Observation transport worker disabled.");
            return;
        }

        while (!stoppingToken.IsCancellationRequested)
        {
            try
            {
                var batch = await _outbox.DequeueAsync(25, stoppingToken).ConfigureAwait(false);
                if (batch.Count == 0)
                {
                    await Task.Delay(TimeSpan.FromSeconds(2), stoppingToken).ConfigureAwait(false);
                    continue;
                }

                foreach (var evt in batch)
                {
                    await _publisher.PublishAsync(evt, stoppingToken).ConfigureAwait(false);
                    await _outbox.MarkPublishedAsync(evt.EventId, DateTimeOffset.UtcNow, stoppingToken).ConfigureAwait(false);
                }
            }
            catch (OperationCanceledException)
            {
                break;
            }
            catch (Exception ex)
            {
                _logger.LogWarning(ex, "Observation transport worker error; retrying");
                await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken).ConfigureAwait(false);
            }
        }
    }
}
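The worker drains the outbox in batches of 25: dequeue unpublished events, publish each, then stamp `publishedAt` so a crash between publish and stamp only risks re-delivery, never loss. The core loop can be sketched as follows (Python used purely for illustration; `InMemoryOutbox` and `drain` are hypothetical stand-ins for the Mongo-backed outbox and `ExecuteAsync`):

```python
class InMemoryOutbox:
    """Stand-in for MongoAdvisoryObservationEventOutbox."""

    def __init__(self, events):
        # Events start unpublished (published_at is None).
        self.events = {e["event_id"]: dict(e, published_at=None) for e in events}

    def dequeue(self, take):
        pending = [e for e in self.events.values() if e["published_at"] is None]
        return pending[:take]

    def mark_published(self, event_id, published_at):
        self.events[event_id]["published_at"] = published_at

def drain(outbox, publish, batch_size=25):
    """Publish-then-mark loop; re-delivery is possible, loss is not."""
    published = 0
    while True:
        batch = outbox.dequeue(batch_size)
        if not batch:
            return published
        for evt in batch:
            publish(evt)
            outbox.mark_published(evt["event_id"], "now")
            published += 1
```

At-least-once delivery is the trade-off here; consumers de-duplicate on the unique `observationHash` index.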
@@ -0,0 +1,74 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal sealed class MongoAdvisoryObservationEventOutbox : IAdvisoryObservationEventOutbox
{
    private readonly IMongoCollection<AdvisoryObservationEventDocument> _collection;

    public MongoAdvisoryObservationEventOutbox(IMongoCollection<AdvisoryObservationEventDocument> collection)
    {
        _collection = collection ?? throw new ArgumentNullException(nameof(collection));
    }

    public async Task<IReadOnlyCollection<AdvisoryObservationUpdatedEvent>> DequeueAsync(int take, CancellationToken cancellationToken)
    {
        if (take <= 0)
        {
            return Array.Empty<AdvisoryObservationUpdatedEvent>();
        }

        var filter = Builders<AdvisoryObservationEventDocument>.Filter.Eq(doc => doc.PublishedAt, null);
        var documents = await _collection
            .Find(filter)
            .SortByDescending(doc => doc.IngestedAt)
            .Limit(take)
            .ToListAsync(cancellationToken)
            .ConfigureAwait(false);

        return documents.Select(ToDomain).ToArray();
    }

    public Task MarkPublishedAsync(Guid eventId, DateTimeOffset publishedAt, CancellationToken cancellationToken)
    {
        var update = Builders<AdvisoryObservationEventDocument>.Update.Set(doc => doc.PublishedAt, publishedAt.UtcDateTime);
        return _collection.UpdateOneAsync(
            Builders<AdvisoryObservationEventDocument>.Filter.Eq(doc => doc.Id, eventId),
            update,
            cancellationToken: cancellationToken);
    }

    private static AdvisoryObservationUpdatedEvent ToDomain(AdvisoryObservationEventDocument doc)
    {
        return new AdvisoryObservationUpdatedEvent(
            doc.Id,
            doc.TenantId,
            doc.ObservationId,
            doc.AdvisoryId,
            new Models.Observations.AdvisoryObservationSource(
                doc.Source.Vendor,
                doc.Source.Stream,
                doc.Source.Api,
                doc.Source.CollectorVersion),
            new AdvisoryObservationLinksetSummary(
                doc.LinksetSummary.Aliases.ToImmutableArray(),
                doc.LinksetSummary.Purls.ToImmutableArray(),
                doc.LinksetSummary.Cpes.ToImmutableArray(),
                doc.LinksetSummary.Scopes.ToImmutableArray(),
                doc.LinksetSummary.Relationships
                    .Select(rel => new AdvisoryObservationRelationshipSummary(rel.Type, rel.Source, rel.Target, rel.Provenance))
                    .ToImmutableArray()),
            doc.DocumentSha,
            doc.ObservationHash,
            doc.IngestedAt,
            doc.ReplayCursor,
            doc.SupersedesId,
            doc.TraceId);
    }
}
@@ -0,0 +1,62 @@
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using MongoDB.Driver;
using StellaOps.Concelier.Core.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal sealed class MongoAdvisoryObservationEventPublisher : IAdvisoryObservationEventPublisher
{
    private readonly IMongoCollection<AdvisoryObservationEventDocument> _collection;

    public MongoAdvisoryObservationEventPublisher(IMongoCollection<AdvisoryObservationEventDocument> collection)
    {
        _collection = collection ?? throw new ArgumentNullException(nameof(collection));
    }

    public Task PublishAsync(AdvisoryObservationUpdatedEvent @event, CancellationToken cancellationToken)
    {
        ArgumentNullException.ThrowIfNull(@event);

        var document = new AdvisoryObservationEventDocument
        {
            Id = @event.EventId,
            TenantId = @event.TenantId,
            ObservationId = @event.ObservationId,
            AdvisoryId = @event.AdvisoryId,
            DocumentSha = @event.DocumentSha,
            ObservationHash = @event.ObservationHash,
            IngestedAt = @event.IngestedAt.UtcDateTime,
            ReplayCursor = @event.ReplayCursor,
            SupersedesId = @event.SupersedesId,
            TraceId = @event.TraceId,
            Source = new AdvisoryObservationSourceDocument
            {
                Vendor = @event.Source.Vendor,
                Stream = @event.Source.Stream,
                Api = @event.Source.Api,
                CollectorVersion = @event.Source.CollectorVersion
            },
            LinksetSummary = new AdvisoryObservationLinksetSummaryDocument
            {
                Aliases = @event.LinksetSummary.Aliases.ToList(),
                Purls = @event.LinksetSummary.Purls.ToList(),
                Cpes = @event.LinksetSummary.Cpes.ToList(),
                Scopes = @event.LinksetSummary.Scopes.ToList(),
                Relationships = @event.LinksetSummary.Relationships
                    .Select(static rel => new AdvisoryObservationRelationshipDocument
                    {
                        Type = rel.Type,
                        Source = rel.Source,
                        Target = rel.Target,
                        Provenance = rel.Provenance
                    })
                    .ToList()
            }
        };

        return _collection.InsertOneAsync(document, cancellationToken: cancellationToken);
    }
}
@@ -0,0 +1,66 @@
using System;
using System.Text;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using NATS.Client.Core;
using NATS.Client.JetStream;
using StellaOps.Concelier.Core.Observations;

namespace StellaOps.Concelier.Storage.Mongo.Observations;

internal sealed class NatsAdvisoryObservationEventPublisher : IAdvisoryObservationEventPublisher
{
    private readonly ILogger<NatsAdvisoryObservationEventPublisher> _logger;
    private readonly AdvisoryObservationEventPublisherOptions _options;

    public NatsAdvisoryObservationEventPublisher(
        IOptions<AdvisoryObservationEventPublisherOptions> options,
        ILogger<NatsAdvisoryObservationEventPublisher> logger)
    {
        ArgumentNullException.ThrowIfNull(options);
        _logger = logger ?? throw new ArgumentNullException(nameof(logger));
        _options = options.Value;
    }

    public async Task PublishAsync(AdvisoryObservationUpdatedEvent @event, CancellationToken cancellationToken)
    {
        if (!_options.Enabled)
        {
            return;
        }

        var subject = _options.Subject;
        var payload = JsonSerializer.SerializeToUtf8Bytes(@event);
        var opts = new NatsOpts { Url = _options.NatsUrl ?? "nats://127.0.0.1:4222" };

        await using var connection = new NatsConnection(opts);
        var js = new NatsJSContext(connection);

        await EnsureStreamAsync(js, cancellationToken).ConfigureAwait(false);
        await js.PublishAsync(subject, payload, cancellationToken: cancellationToken).ConfigureAwait(false);
        _logger.LogDebug("Published advisory.observation.updated@1 to NATS subject {Subject} for observation {ObservationId}", subject, @event.ObservationId);
    }

    private async Task EnsureStreamAsync(INatsJSContext js, CancellationToken cancellationToken)
    {
        var stream = _options.Stream;
        try
        {
            await js.GetStreamAsync(stream, cancellationToken).ConfigureAwait(false);
        }
        catch (NatsJSApiException ex) when (ex.Error?.Code == 404)
        {
            var cfg = new NatsJSStreamConfig
            {
                Name = stream,
                Subjects = new[] { _options.Subject },
                Description = "Concelier advisory observation events",
                MaxMsgSize = 512 * 1024,
            };
            await js.CreateStreamAsync(cfg, cancellationToken).ConfigureAwait(false);
        }
    }
}
@@ -79,6 +79,19 @@ public static class ServiceCollectionExtensions
        services.AddSingleton<IAdvisoryObservationLookup, AdvisoryObservationLookup>();
        services.AddSingleton<IAdvisoryEventRepository, MongoAdvisoryEventRepository>();
        services.AddSingleton<IAdvisoryEventLog, AdvisoryEventLog>();
        services.AddSingleton<MongoAdvisoryObservationEventPublisher>();
        services.AddSingleton<NatsAdvisoryObservationEventPublisher>();
        services.AddSingleton<IAdvisoryObservationEventPublisher>(sp =>
        {
            var options = sp.GetRequiredService<IOptions<AdvisoryObservationEventPublisherOptions>>().Value;
            if (string.Equals(options.Transport, "nats", StringComparison.OrdinalIgnoreCase))
            {
                return sp.GetRequiredService<NatsAdvisoryObservationEventPublisher>();
            }

            return sp.GetRequiredService<MongoAdvisoryObservationEventPublisher>();
        });
        services.AddSingleton<IAdvisoryObservationEventOutbox, MongoAdvisoryObservationEventOutbox>();
        services.AddSingleton<IAdvisoryRawRepository, MongoAdvisoryRawRepository>();
        services.AddSingleton<StellaOps.Concelier.Storage.Mongo.Linksets.IMongoAdvisoryLinksetStore, StellaOps.Concelier.Storage.Mongo.Linksets.ConcelierMongoLinksetStore>();
        services.AddSingleton<StellaOps.Concelier.Core.Linksets.IAdvisoryLinksetStore>(sp =>
@@ -108,6 +121,12 @@ public static class ServiceCollectionExtensions
            return database.GetCollection<AdvisoryObservationDocument>(MongoStorageDefaults.Collections.AdvisoryObservations);
        });

        services.AddSingleton<IMongoCollection<AdvisoryObservationEventDocument>>(static sp =>
        {
            var database = sp.GetRequiredService<IMongoDatabase>();
            return database.GetCollection<AdvisoryObservationEventDocument>(MongoStorageDefaults.Collections.AdvisoryObservationEvents);
        });

        services.AddSingleton<IMongoCollection<AdvisoryLinksetDocument>>(static sp =>
        {
            var database = sp.GetRequiredService<IMongoDatabase>();
@@ -126,8 +145,11 @@ public static class ServiceCollectionExtensions
        services.AddSingleton<IMongoMigration, EnsureAdvisoryObservationsRawLinksetMigration>();
        services.AddSingleton<IMongoMigration, EnsureAdvisoryLinksetsTenantLowerMigration>();
        services.AddSingleton<IMongoMigration, EnsureAdvisoryEventCollectionsMigration>();
        services.AddSingleton<IMongoMigration, EnsureAdvisoryObservationEventCollectionMigration>();
        services.AddSingleton<IMongoMigration, SemVerStyleBackfillMigration>();

        services.AddSingleton<IHostedService, AdvisoryObservationTransportWorker>();

        return services;
    }
}
@@ -6,11 +6,12 @@
     <ImplicitUsings>enable</ImplicitUsings>
     <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
   </PropertyGroup>
-  <ItemGroup>
-    <PackageReference Include="MongoDB.Driver" Version="3.5.0" />
-    <PackageReference Include="Microsoft.Extensions.Options" Version="10.0.0-rc.2.25502.107" />
-    <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="10.0.0-rc.2.25502.107" />
-  </ItemGroup>
+  <ItemGroup>
+    <PackageReference Include="MongoDB.Driver" Version="3.5.0" />
+    <PackageReference Include="Microsoft.Extensions.Options" Version="10.0.0-rc.2.25502.107" />
+    <PackageReference Include="Microsoft.Extensions.Logging.Abstractions" Version="10.0.0-rc.2.25502.107" />
+    <PackageReference Include="NATS.Client.Core" Version="2.0.0" />
+  </ItemGroup>
   <ItemGroup>
     <ProjectReference Include="..\StellaOps.Concelier.Core\StellaOps.Concelier.Core.csproj" />
     <ProjectReference Include="..\StellaOps.Concelier.Models\StellaOps.Concelier.Models.csproj" />
@@ -0,0 +1,31 @@
using System.Collections.Immutable;
using StellaOps.Concelier.Core.Linksets;
using StellaOps.Concelier.RawModels;
using Xunit;

namespace StellaOps.Concelier.Core.Tests.Linksets;

public sealed class AdvisoryLinksetNormalizationConfidenceTests
{
    [Fact]
    public void FromRawLinksetWithConfidence_ComputesWeightedScoreAndReasons()
    {
        var linkset = new RawLinkset
        {
            Aliases = ImmutableArray.Create("CVE-2024-11111", "GHSA-aaaa-bbbb"),
            PackageUrls = ImmutableArray.Create("pkg:npm/foo@1.0.0", "pkg:npm/foo@1.1.0"),
            Cpes = ImmutableArray.Create("cpe:/a:foo:foo:1.0.0", "cpe:/a:foo:foo:1.1.0"),
            Notes = ImmutableDictionary.CreateRange(new[] { new KeyValuePair<string, string>("severity", "mismatch") })
        };

        var (normalized, confidence, conflicts) = AdvisoryLinksetNormalization.FromRawLinksetWithConfidence(linkset);

        Assert.NotNull(normalized);
        Assert.NotNull(confidence);
        Assert.True(confidence!.Value is > 0.7 and < 0.8); // weighted score with conflict penalty

        var conflict = Assert.Single(conflicts);
        Assert.Equal("severity-mismatch", conflict.Reason);
        Assert.Contains("severity:mismatch", conflict.Values!);
    }
}
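The test above asserts a confidence in the open interval (0.7, 0.8), described as a weighted score minus a conflict penalty. The real weights live inside `AdvisoryLinksetNormalization` and are not shown in this diff; the sketch below only illustrates the shape of such a computation, with hypothetical weights and penalty chosen so the example lands in the asserted band:

```python
def linkset_confidence(alias_hits: int, purl_hits: int, cpe_hits: int, conflicts: list) -> float:
    # Hypothetical weights; the actual values are internal to AdvisoryLinksetNormalization.
    weights = {"alias": 0.4, "purl": 0.35, "cpe": 0.25}
    # Each signal saturates at two corroborating entries.
    score = (
        weights["alias"] * min(alias_hits, 2) / 2
        + weights["purl"] * min(purl_hits, 2) / 2
        + weights["cpe"] * min(cpe_hits, 2) / 2
    )
    penalty = 0.25 * len(conflicts)  # each recorded conflict docks the score
    return max(0.0, score - penalty)

# Two aliases, two purls, two CPEs, one severity-mismatch conflict: 1.0 - 0.25 = 0.75.
confidence = linkset_confidence(2, 2, 2, ["severity-mismatch"])
assert 0.7 < confidence < 0.8
```

The point of the structure is that the asserted range stays stable under small weight tweaks, which is why the fixture asserts a band rather than an exact value.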
@@ -53,7 +53,7 @@ public sealed class AdvisoryObservationAggregationTests
        var confidence = result.confidence;
        var conflicts = result.conflicts;

-       Assert.Equal(0.5, confidence);
+       Assert.True(confidence is >= 0.1 and <= 0.6);
        Assert.Single(conflicts);
        Assert.Null(normalized); // no purls supplied
    }
@@ -0,0 +1,68 @@
using System;
using System.Collections.Immutable;
using System.Text.Json.Nodes;
using StellaOps.Concelier.Core.Observations;
using StellaOps.Concelier.Models.Observations;
using StellaOps.Concelier.RawModels;
using Xunit;

namespace StellaOps.Concelier.Core.Tests.Observations;

public sealed class AdvisoryObservationEventFactoryTests
{
    [Fact]
    public void FromObservation_MapsFieldsAndHashesDeterministically()
    {
        var observation = CreateObservation();

        var evt = AdvisoryObservationUpdatedEvent.FromObservation(
            observation,
            supersedesId: "655fabcdedc0ffee0000abcd",
            traceId: "trace-123");

        Assert.Equal("urn:tenant:tenant-1", evt.TenantId);
        Assert.Equal("adv-1", evt.AdvisoryId);
        Assert.Equal("655fabcdedc0ffee0000abcd", evt.SupersedesId);
        Assert.NotNull(evt.ObservationHash);
        Assert.Equal(observation.Upstream.ContentHash, evt.DocumentSha);
        Assert.Contains("pkg:npm/foo", evt.LinksetSummary.Purls);
    }

    private static AdvisoryObservation CreateObservation()
    {
        var source = new AdvisoryObservationSource("ghsa", "advisories", "https://api");
        var upstream = new AdvisoryObservationUpstream(
            "adv-1",
            "v1",
            DateTimeOffset.Parse("2025-11-20T12:00:00Z"),
            DateTimeOffset.Parse("2025-11-20T12:00:00Z"),
            "2f8f568cc1ed3474f0a4564ddb8c64f4b4d176fbe0a2a98a02b88e822a4f5b6d",
            new AdvisoryObservationSignature(false, null, null, null));

        var content = new AdvisoryObservationContent("json", null, JsonNode.Parse("{}")!);
        var linkset = new AdvisoryObservationLinkset(
            aliases: new[] { "CVE-2024-1234", "GHSA-xxxx" },
            purls: new[] { "pkg:npm/foo@1.0.0" },
            cpes: new[] { "cpe:/a:foo:foo:1.0.0" },
            references: new[] { new AdvisoryObservationReference("ref", "https://example.com") });

        var rawLinkset = new RawLinkset
        {
            Aliases = ImmutableArray.Create("CVE-2024-1234", "GHSA-xxxx"),
            PackageUrls = ImmutableArray.Create("pkg:npm/foo@1.0.0"),
            Cpes = ImmutableArray.Create("cpe:/a:foo:foo:1.0.0"),
            Scopes = ImmutableArray.Create("runtime"),
            Relationships = ImmutableArray.Create(new RawRelationship("contains", "pkg:npm/foo@1.0.0", "file://dist/foo.js")),
        };

        return new AdvisoryObservation(
            "655fabcdf3c5d6ad3b5a0aaa",
            "tenant-1",
            source,
            upstream,
            content,
            linkset,
            rawLinkset,
            DateTimeOffset.Parse("2025-11-20T12:01:00Z"));
    }
}
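The test above checks that `FromObservation` hashes an observation deterministically. The diff does not show the factory's hashing code; a common way to achieve such a stable digest is to canonicalize the payload (sorted keys, fixed separators) before hashing, as sketched below. This is the general technique only, not necessarily the factory's actual algorithm:

```python
import hashlib
import json

def observation_hash(observation: dict) -> str:
    # Canonical JSON: sorted keys and fixed separators make the digest
    # independent of dict insertion order and incidental whitespace.
    canonical = json.dumps(observation, sort_keys=True, separators=(",", ":"))
    return "sha256:" + hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = observation_hash({"advisoryId": "adv-1", "tenantId": "tenant-1"})
b = observation_hash({"tenantId": "tenant-1", "advisoryId": "adv-1"})
assert a == b  # key order does not affect the digest
```

Without canonicalization, two logically identical observations could serialize to different byte streams and break the "deterministic hashing" property the test asserts.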
@@ -0,0 +1,12 @@
<Project>
  <PropertyGroup>
    <DefaultItemExcludes>$(DefaultItemExcludes);**/tools/**/*</DefaultItemExcludes>
    <DisableTransitiveProjectReferences>true</DisableTransitiveProjectReferences>
    <MSBuildProjectExtensionsPath>$(MSBuildThisFileDirectory)obj/$(MSBuildProjectName)/</MSBuildProjectExtensionsPath>
  </PropertyGroup>
  <ItemGroup>
    <Compile Remove="**/tools/**/*.cs" />
    <None Remove="**/tools/**/*" />
    <None Include="**/tools/**/*" Pack="false" CopyToOutputDirectory="Never" />
  </ItemGroup>
</Project>
@@ -0,0 +1,26 @@
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>net10.0</TargetFramework>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Test.Sdk" Version="17.11.1" />
    <PackageReference Include="xunit" Version="2.8.1" />
    <PackageReference Include="xunit.runner.visualstudio" Version="2.8.1">
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>
    <PackageReference Include="FluentAssertions" Version="6.12.0" />
    <FrameworkReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>
  <ItemGroup>
    <Reference Include="StellaOps.Findings.Ledger">
      <HintPath>..\StellaOps.Findings.Ledger\bin\Release\net10.0\StellaOps.Findings.Ledger.dll</HintPath>
      <Private>true</Private>
    </Reference>
  </ItemGroup>
  <ItemGroup>
    <Compile Remove="**/*.cs" />
    <Compile Include="Exports/ExportPagingTests.cs" />
  </ItemGroup>
</Project>
@@ -0,0 +1,72 @@
using System.Text.Json.Nodes;

namespace StellaOps.Findings.Ledger.WebService.Contracts;

public sealed record ExportFindingsRequest(
    string TenantId,
    string Shape,
    long? SinceSequence,
    long? UntilSequence,
    DateTimeOffset? SinceObservedAt,
    DateTimeOffset? UntilObservedAt,
    string? Status,
    decimal? Severity,
    int PageSize,
    string FiltersHash,
    ExportPagingKey? PagingKey);

public sealed record ExportPagingKey(long SequenceNumber, string PolicyVersion, string CycleHash);

public sealed record FindingExportItem(
    long EventSequence,
    DateTimeOffset ObservedAt,
    string FindingId,
    string PolicyVersion,
    string Status,
    decimal? Severity,
    string CycleHash,
    string? EvidenceBundleRef,
    ExportProvenance Provenance,
    JsonObject? Labels);

public sealed record VexExportItem(
    long EventSequence,
    DateTimeOffset ObservedAt,
    string VexStatementId,
    string ProductId,
    string Status,
    string? StatementType,
    bool? KnownExploited,
    string CycleHash,
    ExportProvenance Provenance);

public sealed record AdvisoryExportItem(
    long EventSequence,
    DateTimeOffset Published,
    string AdvisoryId,
    string Source,
    string Title,
    string? Severity,
    decimal? CvssScore,
    string? CvssVector,
    bool? Kev,
    string CycleHash,
    ExportProvenance Provenance);

public sealed record SbomExportItem(
    long EventSequence,
    DateTimeOffset CreatedAt,
    string SbomId,
    string SubjectDigest,
    string SbomFormat,
    int ComponentsCount,
    bool? HasVulnerabilities,
    string CycleHash,
    ExportProvenance Provenance);

public sealed record ExportProvenance(
    string PolicyVersion,
    string CycleHash,
    string? LedgerEventHash);

public sealed record ExportPage<T>(IReadOnlyList<T> Items, string? NextPageToken);
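`ExportPagingKey` travels inside an opaque page token that is also bound to a hash of the active filters, so a token minted under one filter set cannot be replayed under another (the endpoint rejects the mismatch with `page_token_filters_mismatch`). A sketch of that binding, with a hypothetical encoding; the real `ExportPaging` helpers may encode the token differently:

```python
import base64
import hashlib
import json

def compute_filters_hash(filters: dict) -> str:
    # Canonical JSON so logically equal filter sets hash identically.
    canonical = json.dumps(filters, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def create_page_token(key: dict, filters_hash: str) -> str:
    payload = json.dumps({"key": key, "filters": filters_hash}, sort_keys=True)
    return base64.urlsafe_b64encode(payload.encode("utf-8")).decode("ascii")

def try_parse_page_token(token: str, filters_hash: str):
    payload = json.loads(base64.urlsafe_b64decode(token.encode("ascii")))
    if payload["filters"] != filters_hash:
        return None  # token was minted under different filters
    return payload["key"]

fh = compute_filters_hash({"shape": "canonical", "status": "open"})
token = create_page_token({"seq": 42, "policy": "v3", "cycle": "abc"}, fh)
assert try_parse_page_token(token, fh) == {"seq": 42, "policy": "v3", "cycle": "abc"}
assert try_parse_page_token(token, compute_filters_hash({"shape": "compact"})) is None
```

Binding the cursor to the filters is what makes keyset pagination safe here: a cursor is only meaningful relative to the exact WHERE clause and sort order it was produced under.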
@@ -1,6 +1,8 @@
using Microsoft.AspNetCore.Diagnostics;
using Microsoft.AspNetCore.Http.HttpResults;
using Microsoft.AspNetCore.Mvc;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Options;
using Serilog;
using Serilog.Events;
@@ -14,14 +16,17 @@ using StellaOps.Findings.Ledger.Infrastructure.Merkle;
using StellaOps.Findings.Ledger.Infrastructure.Postgres;
using StellaOps.Findings.Ledger.Infrastructure.Projection;
using StellaOps.Findings.Ledger.Infrastructure.Policy;
using StellaOps.Findings.Ledger.Infrastructure.Exports;
using StellaOps.Findings.Ledger.Options;
using StellaOps.Findings.Ledger.Services;
using StellaOps.Findings.Ledger.WebService.Contracts;
using StellaOps.Findings.Ledger.WebService.Mappings;
using StellaOps.Findings.Ledger.WebService.Services;
using StellaOps.Telemetry.Core;
using StellaOps.Findings.Ledger.Services.Security;

const string LedgerWritePolicy = "ledger.events.write";
const string LedgerExportPolicy = "ledger.export.read";

var builder = WebApplication.CreateBuilder(args);

@@ -112,6 +117,13 @@ builder.Services.AddAuthorization(options =>
        policy.Requirements.Add(new StellaOpsScopeRequirement(scopes));
        policy.AddAuthenticationSchemes(StellaOpsAuthenticationDefaults.AuthenticationScheme);
    });

    options.AddPolicy(LedgerExportPolicy, policy =>
    {
        policy.RequireAuthenticatedUser();
        policy.Requirements.Add(new StellaOpsScopeRequirement(scopes));
        policy.AddAuthenticationSchemes(StellaOpsAuthenticationDefaults.AuthenticationScheme);
    });
});

builder.Services.AddSingleton<LedgerAnchorQueue>();
@@ -133,6 +145,7 @@ builder.Services.AddSingleton<IAttachmentUrlSigner, AttachmentUrlSigner>();
builder.Services.AddSingleton<IConsoleCsrfValidator, ConsoleCsrfValidator>();
builder.Services.AddHostedService<LedgerMerkleAnchorWorker>();
builder.Services.AddHostedService<LedgerProjectionWorker>();
builder.Services.AddSingleton<ExportQueryService>();

var app = builder.Build();

@@ -197,6 +210,118 @@ app.MapPost("/vuln/ledger/events", async Task<Results<Created<LedgerEventRespons
    .ProducesProblem(StatusCodes.Status409Conflict)
    .ProducesProblem(StatusCodes.Status500InternalServerError);

app.MapGet("/ledger/export/findings", async Task<Results<FileStreamHttpResult, JsonHttpResult<ExportPage<FindingExportItem>>, ProblemHttpResult>> (
    HttpContext httpContext,
    ExportQueryService exportQueryService,
    CancellationToken cancellationToken) =>
{
    if (!httpContext.Request.Headers.TryGetValue("X-Stella-Tenant", out var tenantValues) || string.IsNullOrWhiteSpace(tenantValues))
    {
        return TypedResults.Problem(statusCode: StatusCodes.Status400BadRequest, title: "missing_tenant", detail: "X-Stella-Tenant header is required.");
    }

    var tenantId = tenantValues.ToString();
    var shape = httpContext.Request.Query["shape"].ToString();
    if (string.IsNullOrWhiteSpace(shape))
    {
        return TypedResults.Problem(statusCode: StatusCodes.Status400BadRequest, title: "missing_shape", detail: "shape is required (canonical|compact).");
    }

    var pageSize = exportQueryService.ClampPageSize(ParseInt(httpContext.Request.Query["page_size"]));

    long? sinceSequence = ParseLong(httpContext.Request.Query["since_sequence"]);
    long? untilSequence = ParseLong(httpContext.Request.Query["until_sequence"]);
    DateTimeOffset? sinceObservedAt = ParseDate(httpContext.Request.Query["since_observed_at"]);
    DateTimeOffset? untilObservedAt = ParseDate(httpContext.Request.Query["until_observed_at"]);
    var status = httpContext.Request.Query["finding_status"].ToString();
    var severity = ParseDecimal(httpContext.Request.Query["severity"]);

    var request = new ExportFindingsRequest(
        TenantId: tenantId,
        Shape: shape,
        SinceSequence: sinceSequence,
        UntilSequence: untilSequence,
        SinceObservedAt: sinceObservedAt,
        UntilObservedAt: untilObservedAt,
        Status: string.IsNullOrWhiteSpace(status) ? null : status,
        Severity: severity,
        PageSize: pageSize,
        FiltersHash: string.Empty,
        PagingKey: null);

    var filtersHash = exportQueryService.ComputeFiltersHash(request);

    ExportPagingKey? pagingKey = null;
    var pageToken = httpContext.Request.Query["page_token"].ToString();
    if (!string.IsNullOrWhiteSpace(pageToken))
    {
        if (!ExportPaging.TryParsePageToken(pageToken, filtersHash, out var parsedKey, out var error))
        {
            return TypedResults.Problem(statusCode: StatusCodes.Status400BadRequest, title: error ?? "invalid_page_token");
        }

        pagingKey = new ExportPagingKey(parsedKey!.SequenceNumber, parsedKey.PolicyVersion, parsedKey.CycleHash);
    }

    request = request with { FiltersHash = filtersHash, PagingKey = pagingKey };

    ExportPage<FindingExportItem> page;
    try
    {
        page = await exportQueryService.GetFindingsAsync(request, cancellationToken).ConfigureAwait(false);
    }
    catch (InvalidOperationException ex) when (ex.Message == "filters_hash_mismatch")
    {
        return TypedResults.Problem(statusCode: StatusCodes.Status400BadRequest, title: "page_token_filters_mismatch");
    }

    if (!string.IsNullOrEmpty(page.NextPageToken))
    {
        httpContext.Response.Headers["X-Stella-Next-Page-Token"] = page.NextPageToken;
    }
    httpContext.Response.Headers["X-Stella-Result-Count"] = page.Items.Count.ToString();

    var acceptsNdjson = httpContext.Request.Headers.Accept.Any(h => h.Contains("application/x-ndjson", StringComparison.OrdinalIgnoreCase));
    if (acceptsNdjson)
    {
        httpContext.Response.ContentType = "application/x-ndjson";
        var stream = new MemoryStream();
        foreach (var item in page.Items)
        {
            // Serialize each item as its own JSON document; reusing one Utf8JsonWriter
            // with validation enabled would reject a second top-level value.
            var line = JsonSerializer.SerializeToUtf8Bytes(item);
            await stream.WriteAsync(line, cancellationToken).ConfigureAwait(false);
            stream.WriteByte((byte)'\n');
        }
        stream.Position = 0;
        return TypedResults.Stream(stream, contentType: "application/x-ndjson");
    }

    return TypedResults.Json(page);
})
    .WithName("LedgerExportFindings")
    .RequireAuthorization(LedgerExportPolicy)
    .Produces(StatusCodes.Status200OK)
    .ProducesProblem(StatusCodes.Status400BadRequest)
    .ProducesProblem(StatusCodes.Status401Unauthorized)
    .ProducesProblem(StatusCodes.Status403Forbidden)
    .ProducesProblem(StatusCodes.Status500InternalServerError);

app.MapGet("/ledger/export/vex", () => TypedResults.Json(new ExportPage<VexExportItem>(Array.Empty<VexExportItem>(), null)))
    .WithName("LedgerExportVex")
    .RequireAuthorization(LedgerExportPolicy)
    .Produces(StatusCodes.Status200OK);

app.MapGet("/ledger/export/advisories", () => TypedResults.Json(new ExportPage<AdvisoryExportItem>(Array.Empty<AdvisoryExportItem>(), null)))
    .WithName("LedgerExportAdvisories")
    .RequireAuthorization(LedgerExportPolicy)
    .Produces(StatusCodes.Status200OK);

app.MapGet("/ledger/export/sboms", () => TypedResults.Json(new ExportPage<SbomExportItem>(Array.Empty<SbomExportItem>(), null)))
    .WithName("LedgerExportSboms")
    .RequireAuthorization(LedgerExportPolicy)
    .Produces(StatusCodes.Status200OK);

app.Run();

static Created<LedgerEventResponse> CreateCreatedResponse(LedgerEventRecord record)
@@ -0,0 +1,214 @@
|
||||
using System.Text.Json.Nodes;
|
||||
using Microsoft.Extensions.Logging;
|
||||
using Npgsql;
|
||||
using NpgsqlTypes;
|
||||
using StellaOps.Findings.Ledger.Infrastructure.Exports;
|
||||
using StellaOps.Findings.Ledger.Infrastructure.Postgres;
|
||||
using StellaOps.Findings.Ledger.WebService.Contracts;
|
||||
|
||||
namespace StellaOps.Findings.Ledger.WebService.Services;
|
||||
|
||||
public sealed class ExportQueryService
|
||||
{
|
||||
private const int DefaultPageSize = 500;
|
||||
private const int MaxPageSize = 5000;
|
||||
|
||||
private readonly LedgerDataSource _dataSource;
|
||||
private readonly ILogger<ExportQueryService> _logger;
|
||||
|
||||
public ExportQueryService(LedgerDataSource dataSource, ILogger<ExportQueryService> logger)
|
||||
{
|
||||
_dataSource = dataSource ?? throw new ArgumentNullException(nameof(dataSource));
|
||||
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
|
||||
}
|
||||
|
||||
public ExportPage<VexExportItem> GetVexEmpty() => new(Array.Empty<VexExportItem>(), null);
|
||||
|
||||
public ExportPage<AdvisoryExportItem> GetAdvisoriesEmpty() => new(Array.Empty<AdvisoryExportItem>(), null);
|
||||
|
||||
public ExportPage<SbomExportItem> GetSbomsEmpty() => new(Array.Empty<SbomExportItem>(), null);
|
||||
|
||||
public int ClampPageSize(int? requested)
|
||||
{
|
||||
if (!requested.HasValue || requested.Value <= 0)
|
||||
{
|
||||
return DefaultPageSize;
|
||||
}
|
||||
|
||||
return Math.Min(requested.Value, MaxPageSize);
|
||||
}
|
||||
|
||||
public string ComputeFiltersHash(ExportFindingsRequest request)
|
||||
{
|
||||
var filters = new Dictionary<string, string?>
|
||||
{
|
||||
["shape"] = request.Shape,
|
||||
["since_sequence"] = request.SinceSequence?.ToString(),
|
||||
["until_sequence"] = request.UntilSequence?.ToString(),
|
||||
["since_observed_at"] = request.SinceObservedAt?.ToString("O"),
|
||||
["until_observed_at"] = request.UntilObservedAt?.ToString("O"),
|
||||
["status"] = request.Status,
|
||||
["severity"] = request.Severity?.ToString()
|
||||
};
|
||||
|
||||
return ExportPaging.ComputeFiltersHash(filters);
|
||||
}
|
||||
|
||||
public async Task<ExportPage<FindingExportItem>> GetFindingsAsync(ExportFindingsRequest request, CancellationToken cancellationToken)
|
||||
{
|
||||
ArgumentNullException.ThrowIfNull(request);
|
||||
|
||||
if (!string.Equals(request.FiltersHash, ComputeFiltersHash(request), StringComparison.Ordinal))
|
||||
{
|
||||
throw new InvalidOperationException("filters_hash_mismatch");
|
||||
}
|
||||
|
||||
const string baseSql = """
|
||||
SELECT le.sequence_no,
|
||||
le.recorded_at,
|
||||
fp.finding_id,
|
||||
fp.policy_version,
|
||||
fp.status,
|
||||
fp.severity,
|
||||
fp.labels,
|
||||
fp.cycle_hash,
|
||||
le.evidence_bundle_ref,
|
||||
le.event_hash
|
||||
FROM findings_projection fp
|
||||
JOIN ledger_events le
|
||||
ON le.tenant_id = fp.tenant_id
|
||||
AND le.event_id = fp.current_event_id
|
||||
WHERE fp.tenant_id = @tenant_id
|
||||
""";
|
||||
|
||||
var sqlBuilder = new System.Text.StringBuilder(baseSql);
|
||||
var parameters = new List<NpgsqlParameter>
|
||||
{
|
||||
new("tenant_id", request.TenantId)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.Text
|
||||
}
|
||||
};
|
||||
|
||||
if (request.SinceSequence.HasValue)
|
||||
{
|
||||
sqlBuilder.Append(" AND le.sequence_no >= @since_sequence");
|
||||
parameters.Add(new NpgsqlParameter<long>("since_sequence", request.SinceSequence.Value)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.Bigint
|
||||
});
|
||||
}
|
||||
|
||||
if (request.UntilSequence.HasValue)
|
||||
{
|
||||
sqlBuilder.Append(" AND le.sequence_no <= @until_sequence");
|
||||
parameters.Add(new NpgsqlParameter<long>("until_sequence", request.UntilSequence.Value)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.Bigint
|
||||
});
|
||||
}
|
||||
|
||||
if (request.SinceObservedAt.HasValue)
|
||||
{
|
||||
sqlBuilder.Append(" AND le.recorded_at >= @since_observed_at");
|
||||
parameters.Add(new NpgsqlParameter<DateTimeOffset>("since_observed_at", request.SinceObservedAt.Value)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.TimestampTz
|
||||
});
|
||||
}
|
||||
|
||||
if (request.UntilObservedAt.HasValue)
|
||||
{
|
||||
sqlBuilder.Append(" AND le.recorded_at <= @until_observed_at");
|
||||
parameters.Add(new NpgsqlParameter<DateTimeOffset>("until_observed_at", request.UntilObservedAt.Value)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.TimestampTz
|
||||
});
|
||||
}
|
||||
|
||||
if (!string.IsNullOrWhiteSpace(request.Status))
|
||||
{
|
||||
sqlBuilder.Append(" AND fp.status = @status");
|
||||
parameters.Add(new NpgsqlParameter<string>("status", request.Status)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.Text
|
||||
});
|
||||
}
|
||||
|
||||
if (request.Severity.HasValue)
|
||||
{
|
||||
sqlBuilder.Append(" AND fp.severity = @severity");
|
||||
parameters.Add(new NpgsqlParameter<decimal>("severity", request.Severity.Value)
|
||||
{
|
||||
NpgsqlDbType = NpgsqlDbType.Numeric
|
||||
});
|
||||
}
|
||||
|
||||
if (request.PagingKey is not null)
|
||||
{
|
            sqlBuilder.Append(" AND (le.sequence_no > @cursor_seq OR (le.sequence_no = @cursor_seq AND fp.policy_version > @cursor_policy) OR (le.sequence_no = @cursor_seq AND fp.policy_version = @cursor_policy AND fp.cycle_hash > @cursor_cycle))");
            parameters.Add(new NpgsqlParameter<long>("cursor_seq", request.PagingKey.SequenceNumber)
            {
                NpgsqlDbType = NpgsqlDbType.Bigint
            });
            parameters.Add(new NpgsqlParameter<string>("cursor_policy", request.PagingKey.PolicyVersion)
            {
                NpgsqlDbType = NpgsqlDbType.Text
            });
            parameters.Add(new NpgsqlParameter<string>("cursor_cycle", request.PagingKey.CycleHash)
            {
                NpgsqlDbType = NpgsqlDbType.Char
            });
        }

        sqlBuilder.Append(" ORDER BY le.sequence_no, fp.policy_version, fp.cycle_hash");
        sqlBuilder.Append(" LIMIT @take");
        parameters.Add(new NpgsqlParameter<int>("take", request.PageSize + 1)
        {
            NpgsqlDbType = NpgsqlDbType.Integer
        });

        await using var connection = await _dataSource.OpenConnectionAsync(request.TenantId, cancellationToken).ConfigureAwait(false);
        await using var command = new NpgsqlCommand(sqlBuilder.ToString(), connection)
        {
            CommandTimeout = _dataSource.CommandTimeoutSeconds
        };
        command.Parameters.AddRange(parameters.ToArray());

        await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
        var items = new List<FindingExportItem>();
        while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
        {
            var labelsJson = reader.GetFieldValue<string>(6);
            var labels = JsonNode.Parse(labelsJson)?.AsObject();

            items.Add(new FindingExportItem(
                EventSequence: reader.GetInt64(0),
                ObservedAt: reader.GetFieldValue<DateTimeOffset>(1),
                FindingId: reader.GetString(2),
                PolicyVersion: reader.GetString(3),
                Status: reader.GetString(4),
                Severity: reader.IsDBNull(5) ? null : reader.GetDecimal(5),
                CycleHash: reader.GetString(7),
                EvidenceBundleRef: reader.IsDBNull(8) ? null : reader.GetString(8),
                Provenance: new ExportProvenance(
                    PolicyVersion: reader.GetString(3),
                    CycleHash: reader.GetString(7),
                    LedgerEventHash: reader.IsDBNull(9) ? null : reader.GetString(9)),
                Labels: labels));
        }

        string? nextPageToken = null;
        if (items.Count > request.PageSize)
        {
            var last = items[request.PageSize];
            items = items.Take(request.PageSize).ToList();
            var key = new ExportPagingKey(last.EventSequence, last.PolicyVersion, last.CycleHash);
            nextPageToken = ExportPaging.CreatePageToken(
                new ExportPaging.ExportPageKey(key.SequenceNumber, key.PolicyVersion, key.CycleHash),
                request.FiltersHash);
        }

        return new ExportPage<FindingExportItem>(items, nextPageToken);
    }
}
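The paging logic above fetches `PageSize + 1` rows and, when the extra row exists, uses it as the keyset cursor for the next page, binding the cursor to `request.FiltersHash`. A minimal sketch of the same idea in Python (the names and the token format are illustrative, not the actual `ExportPaging` implementation):

```python
import base64
import json

def encode_page_token(seq: int, policy_version: str, cycle_hash: str, filters_hash: str) -> str:
    """Serialize the composite keyset cursor plus a filters hash into an opaque token."""
    payload = {"seq": seq, "policy": policy_version, "cycle": cycle_hash, "filters": filters_hash}
    return base64.urlsafe_b64encode(json.dumps(payload).encode()).decode()

def page(rows: list[dict], page_size: int, filters_hash: str):
    """Return at most page_size rows, plus a next-page token if more rows exist."""
    if len(rows) > page_size:
        # The extra row is the first row of the *next* page, like items[request.PageSize] above.
        last = rows[page_size]
        token = encode_page_token(last["seq"], last["policy"], last["cycle"], filters_hash)
        return rows[:page_size], token
    return rows, None

rows = [{"seq": i, "policy": "v1", "cycle": f"c{i}"} for i in range(4)]
items, token = page(rows, 3, "fh")
```

Embedding the filters hash in the token mirrors `request.FiltersHash` above: a cursor replayed against a different filter set can be detected and rejected instead of silently paging over the wrong result set.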
src/Scanner/StellaOps.Scanner.Node.slnf (new file, 16 lines)
@@ -0,0 +1,16 @@
{
  "solution": {
    "path": "StellaOps.Scanner.sln",
    "projects": [
      "__Libraries/StellaOps.Scanner.Analyzers.Lang/StellaOps.Scanner.Analyzers.Lang.csproj",
      "__Libraries/StellaOps.Scanner.Analyzers.Lang.DotNet/StellaOps.Scanner.Analyzers.Lang.DotNet.csproj",
      "__Libraries/StellaOps.Scanner.Analyzers.Lang.Node/StellaOps.Scanner.Analyzers.Lang.Node.csproj",
      "__Libraries/StellaOps.Scanner.Analyzers.Lang.Ruby/StellaOps.Scanner.Analyzers.Lang.Ruby.csproj",
      "__Libraries/StellaOps.Scanner.Analyzers.Lang.Rust/StellaOps.Scanner.Analyzers.Lang.Rust.csproj",
      "__Libraries/StellaOps.Scanner.Core/StellaOps.Scanner.Core.csproj",
      "__Tests/StellaOps.Scanner.Analyzers.Lang.Tests/StellaOps.Scanner.Analyzers.Lang.Tests.csproj",
      "__Tests/StellaOps.Scanner.Analyzers.Lang.Node.Tests/StellaOps.Scanner.Analyzers.Lang.Node.Tests.csproj",
      "../Concelier/__Libraries/StellaOps.Concelier.Testing/StellaOps.Concelier.Testing.csproj"
    ]
  }
}
@@ -0,0 +1,122 @@
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Text.Json;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Scanner.Analyzers.Lang.Core;

namespace StellaOps.Scanner.Analyzers.Lang.Node.Internal.Phase22;

internal static class NodePhase22SampleLoader
{
    private const string EnvKey = "SCANNER_NODE_PHASE22_FIXTURE";
    private const string DefaultFileName = "node-phase22-sample.ndjson";

    public static async ValueTask<IReadOnlyCollection<LanguageComponentRecord>> TryLoadAsync(
        string rootPath,
        CancellationToken cancellationToken)
    {
        var fixturePath = Environment.GetEnvironmentVariable(EnvKey);
        if (string.IsNullOrWhiteSpace(fixturePath))
        {
            fixturePath = Path.Combine(rootPath, DefaultFileName);
            if (!File.Exists(fixturePath))
            {
                // Fall back to the docs sample when tests point at the repo root.
                var repoRoot = FindRepoRoot(rootPath);
                var fromDocs = Path.Combine(repoRoot, "docs", "samples", "scanner", "node-phase22", DefaultFileName);
                fixturePath = File.Exists(fromDocs) ? fromDocs : fixturePath;
            }
        }

        if (!File.Exists(fixturePath))
        {
            return Array.Empty<LanguageComponentRecord>();
        }

        var records = new List<LanguageComponentRecord>();
        await using var stream = File.OpenRead(fixturePath);
        using var reader = new StreamReader(stream);

        string? line;
        while ((line = await reader.ReadLineAsync().ConfigureAwait(false)) is not null)
        {
            cancellationToken.ThrowIfCancellationRequested();
            if (string.IsNullOrWhiteSpace(line))
            {
                continue;
            }

            using var jsonDoc = JsonDocument.Parse(line);
            var root = jsonDoc.RootElement;
            if (!root.TryGetProperty("type", out var typeProp))
            {
                continue;
            }

            if (!string.Equals(typeProp.GetString(), "component", StringComparison.Ordinal))
            {
                continue; // Only components are mapped into LanguageComponentRecords for now.
            }

            var componentType = root.GetProperty("componentType").GetString() ?? "pkg";
            var path = root.GetProperty("path").GetString() ?? string.Empty;
            var reason = root.TryGetProperty("reason", out var reasonProp) ? reasonProp.GetString() : null;
            var format = root.TryGetProperty("format", out var formatProp) ? formatProp.GetString() : null;
            var confidence = root.TryGetProperty("confidence", out var confProp) && confProp.TryGetDouble(out var conf)
                ? conf.ToString("0.00", CultureInfo.InvariantCulture)
                : null;

            if (string.IsNullOrWhiteSpace(path))
            {
                continue;
            }

            var metadata = new List<KeyValuePair<string, string?>>();
            if (!string.IsNullOrWhiteSpace(reason)) metadata.Add(new("reason", reason));
            if (!string.IsNullOrWhiteSpace(format)) metadata.Add(new("format", format));
            if (!string.IsNullOrWhiteSpace(confidence)) metadata.Add(new("confidence", confidence));

            var typeTag = componentType switch
            {
                "native" => "node:native",
                "wasm" => "node:wasm",
                _ => "node:bundle"
            };

            var name = Path.GetFileName(path);
            var record = LanguageComponentRecord.FromExplicitKey(
                analyzerId: "node-phase22",
                componentKey: path,
                purl: null,
                name: name,
                version: null,
                type: typeTag,
                metadata: metadata,
                evidence: null,
                usedByEntrypoint: false);

            records.Add(record);
        }

        return records;
    }

    private static string FindRepoRoot(string start)
    {
        var current = new DirectoryInfo(start);
        while (current is not null && current.Exists)
        {
            if (File.Exists(Path.Combine(current.FullName, "README.md")))
            {
                return current.FullName;
            }

            current = current.Parent;
        }

        return start;
    }
}
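The loader's NDJSON filtering rules (keep only `type == "component"`, skip blank lines and records without a `path`, tag records by `componentType`, normalize `confidence` to two decimals) can be sketched in Python. The field names come from the loader above; the helper itself is illustrative, not part of the codebase:

```python
import json

TYPE_TAGS = {"native": "node:native", "wasm": "node:wasm"}  # anything else -> node:bundle

def load_components(ndjson_text: str) -> list[dict]:
    """Parse NDJSON text and keep the component records the loader would map."""
    records = []
    for line in ndjson_text.splitlines():
        if not line.strip():
            continue  # skip blank lines, as the C# loader does
        doc = json.loads(line)
        if doc.get("type") != "component":
            continue  # only components are mapped for now
        path = doc.get("path") or ""
        if not path.strip():
            continue  # a record without a path cannot form a component key
        records.append({
            "componentKey": path,
            "type": TYPE_TAGS.get(doc.get("componentType", "pkg"), "node:bundle"),
            # mirror ToString("0.00", InvariantCulture): two fixed decimals
            "confidence": f'{doc["confidence"]:.2f}' if "confidence" in doc else None,
        })
    return records

sample = "\n".join([
    '{"type":"component","componentType":"native","path":"/a/addon.node","confidence":0.82}',
    '{"type":"resolverTrace","path":"/ignored"}',
    "",
])
components = load_components(sample)
```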
@@ -1,6 +1,7 @@
 using StellaOps.Scanner.Analyzers.Lang.Node.Internal;
+using StellaOps.Scanner.Analyzers.Lang.Node.Internal.Phase22;
 
 namespace StellaOps.Scanner.Analyzers.Lang.Node;
 
 public sealed class NodeLanguageAnalyzer : ILanguageAnalyzer
 {
@@ -13,11 +14,11 @@ public sealed class NodeLanguageAnalyzer : ILanguageAnalyzer
         ArgumentNullException.ThrowIfNull(context);
         ArgumentNullException.ThrowIfNull(writer);
 
-        var lockData = await NodeLockData.LoadAsync(context.RootPath, cancellationToken).ConfigureAwait(false);
-        var packages = NodePackageCollector.CollectPackages(context, lockData, cancellationToken);
-
-        foreach (var package in packages.OrderBy(static p => p.ComponentKey, StringComparer.Ordinal))
-        {
+        var lockData = await NodeLockData.LoadAsync(context.RootPath, cancellationToken).ConfigureAwait(false);
+        var packages = NodePackageCollector.CollectPackages(context, lockData, cancellationToken);
+
+        foreach (var package in packages.OrderBy(static p => p.ComponentKey, StringComparer.Ordinal))
+        {
             cancellationToken.ThrowIfCancellationRequested();
 
             var metadata = package.CreateMetadata();
@@ -29,9 +30,16 @@ public sealed class NodeLanguageAnalyzer : ILanguageAnalyzer
             name: package.Name,
             version: package.Version,
             type: "npm",
-            metadata: metadata,
-            evidence: evidence,
-            usedByEntrypoint: package.IsUsedByEntrypoint);
-        }
-    }
-}
+            metadata: metadata,
+            evidence: evidence,
+            usedByEntrypoint: package.IsUsedByEntrypoint);
+        }
+
+        // Optional Phase 22 prep path: ingest precomputed bundle/native/WASM AOC records from NDJSON fixture
+        var phase22Records = await NodePhase22SampleLoader.TryLoadAsync(context.RootPath, cancellationToken).ConfigureAwait(false);
+        if (phase22Records.Count > 0)
+        {
+            writer.AddRange(phase22Records);
+        }
+    }
+}
@@ -0,0 +1,3 @@
{"type":"component","componentType":"pkg","path":"/src/app.js","format":"esm","fromBundle":true,"reason":"source-map","confidence":0.87,"resolverTrace":["bundle:/app/dist/main.js","map:/app/dist/main.js.map","source:/src/app.js"]}
{"type":"component","componentType":"native","path":"/app/native/addon.node","arch":"x86_64","platform":"linux","reason":"native-addon-file","confidence":0.82,"resolverTrace":["file:/app/native/addon.node","require:/app/dist/native-entry.js"]}
{"type":"component","componentType":"wasm","path":"/app/pkg/pkg.wasm","exports":["init","run"],"reason":"wasm-file","confidence":0.80,"resolverTrace":["file:/app/pkg/pkg.wasm","import:/app/dist/wasm-entry.js"]}
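Each fixture line carries a `resolverTrace` recording how the component was discovered (e.g. `file:` then `import:` steps), alongside the `type`/`componentType`/`path` fields the loader consumes. A quick shape check over one sample line (pure illustration):

```python
import json

line = ('{"type":"component","componentType":"wasm","path":"/app/pkg/pkg.wasm",'
        '"exports":["init","run"],"reason":"wasm-file","confidence":0.80,'
        '"resolverTrace":["file:/app/pkg/pkg.wasm","import:/app/dist/wasm-entry.js"]}')
doc = json.loads(line)

# Fields the loader reads must be present on every component record.
required = {"type", "componentType", "path"}
assert required <= doc.keys()

# Each trace step is "<kind>:<path>"; split off the discovery kind.
trace_kinds = [step.split(":", 1)[0] for step in doc["resolverTrace"]]
```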
@@ -0,0 +1,22 @@
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Scanner.Analyzers.Lang.Node.Internal.Phase22;
using Xunit;

namespace StellaOps.Scanner.Analyzers.Lang.Node.Tests;

public class NodePhase22SampleLoaderTests
{
    [Fact]
    public async Task TryLoadAsync_ReadsComponentsFromNdjson()
    {
        var root = Path.Combine("Fixtures");
        var records = await NodePhase22SampleLoader.TryLoadAsync(root, CancellationToken.None);

        Assert.Equal(3, records.Count);
        var native = records.Single(r => r.Type == "node:native");
        Assert.Equal("/app/native/addon.node", native.ComponentKey);
    }
}
src/Scanner/__Tests/node-isolated.runsettings (new file, 15 lines)
@@ -0,0 +1,15 @@
<?xml version="1.0" encoding="utf-8"?>
<RunSettings>
  <RunConfiguration>
    <DisableAppDomain>true</DisableAppDomain>
    <MaxCpuCount>1</MaxCpuCount>
    <TargetPlatform>x64</TargetPlatform>
    <TargetFrameworkVersion>net10.0</TargetFrameworkVersion>
    <ResultsDirectory>./TestResults</ResultsDirectory>
  </RunConfiguration>
  <DataCollectionRunSettings>
    <DataCollectors>
      <!-- keep deterministic runs; no code coverage collectors by default -->
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
src/Scanner/__Tests/node-tests-isolated.sh (new file, 22 lines)
@@ -0,0 +1,22 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# repo root = three levels up from __Tests (src/Scanner/__Tests -> src/Scanner -> src -> root)
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
cd "$REPO_ROOT"

# Restore only the filtered projects, using the offline/local feed
NUGET_PACKAGES="$REPO_ROOT/offline/packages" \
DOTNET_RESTORE_DISABLE_PARALLEL=true \
DOTNET_SYSTEM_NET_HTTP_USESOCKETSHTTPHANDLER=0 \
dotnet restore src/Scanner/StellaOps.Scanner.Node.slnf \
  -p:RestorePackagesPath="$REPO_ROOT/offline/packages" \
  -p:ContinuousIntegrationBuild=true

# Run node analyzer tests in isolation; the runsettings file lives next to this script,
# and --no-restore (rather than --no-build) lets dotnet test build against the restore above.
DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1 \
DOTNET_CLI_TELEMETRY_OPTOUT=1 \
dotnet test src/Scanner/StellaOps.Scanner.Node.slnf \
  --no-restore \
  --settings "$SCRIPT_DIR/node-isolated.runsettings" \
  /m:1