feat: Implement advisory event replay API with conflict explainers

- Added `/concelier/advisories/{vulnerabilityKey}/replay` endpoint to return conflict summaries and explainers.
- Introduced `MergeConflictExplainerPayload` to structure conflict details including type, reason, and source rankings.
- Enhanced `MergeConflictSummary` to include structured explainer payloads and hashes for persisted conflicts.
- Updated `MirrorEndpointExtensions` to enforce rate limits and cache headers for mirror distribution endpoints.
- Refactored tests to cover new replay endpoint functionality and validate conflict explainers.
- Documented changes in TASKS.md, noting completion of mirror distribution endpoints and updated operational runbook.
This commit is contained in:
master
2025-10-20 18:59:26 +03:00
parent 03bc021c5e
commit d6cb41dd51
20 changed files with 3966 additions and 3493 deletions

File diff suppressed because it is too large Load Diff

View File

@@ -160,14 +160,14 @@ This file describe implementation of Stella Ops (docs/README.md). Implementation
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Data Science | FEEDCORE-ENGINE-07-002 | Noise prior computation service learn false-positive priors and expose deterministic summaries. | | Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Data Science | FEEDCORE-ENGINE-07-002 | Noise prior computation service learn false-positive priors and expose deterministic summaries. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Storage Analytics | FEEDCORE-ENGINE-07-003 | Unknown state ledger & confidence seeding persist unknown flags, seed confidence bands, expose query surface. | | Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Storage Analytics | FEEDCORE-ENGINE-07-003 | Unknown state ledger & confidence seeding persist unknown flags, seed confidence bands, expose query surface. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | TODO | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-07-001 | Advisory statement & conflict collections provision Mongo schema/indexes for event-sourced merge. | | Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | TODO | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-07-001 | Advisory statement & conflict collections provision Mongo schema/indexes for event-sourced merge. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Merge/TASKS.md | DOING | BE-Merge | FEEDMERGE-ENGINE-07-001 | Conflict sets & explainers persist conflict materialization and replay hashes for merge decisions. | | Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Merge/TASKS.md | DONE (2025-10-20) | BE-Merge | FEEDMERGE-ENGINE-07-001 | Conflict sets & explainers persist conflict materialization and replay hashes for merge decisions. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Normalization & Storage Backbone | FEEDSTORAGE-MONGO-08-001 | Causal-consistent Concelier storage sessions<br>Scoped session facilitator registered, repositories accept optional session handles, and replica-set failover tests verify read-your-write + monotonic reads. | | Sprint 8 | Mongo strengthening | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Normalization & Storage Backbone | FEEDSTORAGE-MONGO-08-001 | Causal-consistent Concelier storage sessions<br>Scoped session facilitator registered, repositories accept optional session handles, and replica-set failover tests verify read-your-write + monotonic reads. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Authority/TASKS.md | DONE (2025-10-19) | Authority Core & Storage Guild | AUTHSTORAGE-MONGO-08-001 | Harden Authority Mongo usage<br>Scoped Mongo sessions with majority read/write concerns wired through stores and GraphQL/HTTP pipelines; replica-set election regression validated. | | Sprint 8 | Mongo strengthening | src/StellaOps.Authority/TASKS.md | DONE (2025-10-19) | Authority Core & Storage Guild | AUTHSTORAGE-MONGO-08-001 | Harden Authority Mongo usage<br>Scoped Mongo sessions with majority read/write concerns wired through stores and GraphQL/HTTP pipelines; replica-set election regression validated. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-MONGO-08-001 | Causal consistency for Excititor repositories<br>Session-scoped repositories shipped with new Mongo records, orchestrators/workers now share scoped sessions, and replica-set failover coverage added via `dotnet test src/StellaOps.Excititor.Storage.Mongo.Tests/StellaOps.Excititor.Storage.Mongo.Tests.csproj`. | | Sprint 8 | Mongo strengthening | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-MONGO-08-001 | Causal consistency for Excititor repositories<br>Session-scoped repositories shipped with new Mongo records, orchestrators/workers now share scoped sessions, and replica-set failover coverage added via `dotnet test src/StellaOps.Excititor.Storage.Mongo.Tests/StellaOps.Excititor.Storage.Mongo.Tests.csproj`. |
| Sprint 8 | Platform Maintenance | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-03-001 | Statement backfill tooling shipped admin backfill endpoint, CLI hook (`stellaops excititor backfill-statements`), integration tests, and operator runbook (`docs/dev/EXCITITOR_STATEMENT_BACKFILL.md`). | | Sprint 8 | Platform Maintenance | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-03-001 | Statement backfill tooling shipped admin backfill endpoint, CLI hook (`stellaops excititor backfill-statements`), integration tests, and operator runbook (`docs/dev/EXCITITOR_STATEMENT_BACKFILL.md`). |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.Json/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-201 | Mirror bundle + domain manifest produce signed JSON aggregates for `*.stella-ops.org` mirrors. | | Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.Json/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-201 | Mirror bundle + domain manifest produce signed JSON aggregates for `*.stella-ops.org` mirrors. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.TrivyDb/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-202 | Mirror-ready Trivy DB bundles mirror options emit per-domain manifests/metadata/db archives with deterministic digests for downstream sync. | | Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.TrivyDb/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-202 | Mirror-ready Trivy DB bundles mirror options emit per-domain manifests/metadata/db archives with deterministic digests for downstream sync. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.WebService/TASKS.md | DOING (2025-10-19) | Concelier WebService Guild | CONCELIER-WEB-08-201 | Mirror distribution endpoints expose domain-scoped index/download APIs with auth/quota. | | Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.WebService/TASKS.md | DONE (2025-10-20) | Concelier WebService Guild | CONCELIER-WEB-08-201 | Mirror distribution endpoints expose domain-scoped index/download APIs with auth/quota. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Connector.StellaOpsMirror/TASKS.md | DOING (2025-10-19) | BE-Conn-Stella | FEEDCONN-STELLA-08-001 | Concelier mirror connector fetch mirror manifest, verify signatures, and hydrate canonical DTOs with resume support. | | Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Connector.StellaOpsMirror/TASKS.md | DOING (2025-10-19) | BE-Conn-Stella | FEEDCONN-STELLA-08-001 | Concelier mirror connector fetch mirror manifest, verify signatures, and hydrate canonical DTOs with resume support. |
| Sprint 8 | Mirror Distribution | ops/devops/TASKS.md | DONE (2025-10-19) | DevOps Guild | DEVOPS-MIRROR-08-001 | Managed mirror deployments for `*.stella-ops.org` Helm/Compose overlays, CDN, runbooks. | | Sprint 8 | Mirror Distribution | ops/devops/TASKS.md | DONE (2025-10-19) | DevOps Guild | DEVOPS-MIRROR-08-001 | Managed mirror deployments for `*.stella-ops.org` Helm/Compose overlays, CDN, runbooks. |
| Sprint 8 | Plugin Infrastructure | src/StellaOps.Plugin/TASKS.md | DOING | Plugin Platform Guild, Authority Core | PLUGIN-DI-08-002.COORD | Authority scoped-service integration handshake<br>Session scheduled for 2025-10-20 15:0016:00UTC; agenda + attendees logged in `docs/dev/authority-plugin-di-coordination.md`. | | Sprint 8 | Plugin Infrastructure | src/StellaOps.Plugin/TASKS.md | DOING | Plugin Platform Guild, Authority Core | PLUGIN-DI-08-002.COORD | Authority scoped-service integration handshake<br>Session scheduled for 2025-10-20 15:0016:00UTC; agenda + attendees logged in `docs/dev/authority-plugin-di-coordination.md`. |
@@ -246,7 +246,7 @@ This file describe implementation of Stella Ops (docs/README.md). Implementation
| Sprint 10 | Scanner Analyzers & SBOM | src/StellaOps.Scanner.Emit/TASKS.md | TODO | Emit Guild | SCANNER-EMIT-10-607 | Embed scoring inputs, confidence band, and quiet provenance in CycloneDX/DSSE artifacts. | | Sprint 10 | Scanner Analyzers & SBOM | src/StellaOps.Scanner.Emit/TASKS.md | TODO | Emit Guild | SCANNER-EMIT-10-607 | Embed scoring inputs, confidence band, and quiet provenance in CycloneDX/DSSE artifacts. |
| Sprint 10 | Benchmarks | bench/TASKS.md | TODO | Bench Guild, Scanner Team | BENCH-SCANNER-10-001 | Analyzer microbench harness + baseline CSV. | | Sprint 10 | Benchmarks | bench/TASKS.md | TODO | Bench Guild, Scanner Team | BENCH-SCANNER-10-001 | Analyzer microbench harness + baseline CSV. |
| Sprint 10 | Samples | samples/TASKS.md | TODO | Samples Guild, Scanner Team | SAMPLES-10-001 | Sample images with SBOM/BOM-Index sidecars. | | Sprint 10 | Samples | samples/TASKS.md | TODO | Samples Guild, Scanner Team | SAMPLES-10-001 | Sample images with SBOM/BOM-Index sidecars. |
| Sprint 10 | DevOps Security | ops/devops/TASKS.md | DOING | DevOps Guild | DEVOPS-SEC-10-301 | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0; Wave0A prerequisites confirmed complete before remediation work. | | Sprint 10 | DevOps Security | ops/devops/TASKS.md | DONE (2025-10-20) | DevOps Guild | DEVOPS-SEC-10-301 | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0; Wave0A prerequisites confirmed complete before remediation work. |
| Sprint 10 | DevOps Perf | ops/devops/TASKS.md | TODO | DevOps Guild | DEVOPS-PERF-10-001 | Perf smoke job ensuring <5s SBOM compose. | | Sprint 10 | DevOps Perf | ops/devops/TASKS.md | TODO | DevOps Guild | DEVOPS-PERF-10-001 | Perf smoke job ensuring <5s SBOM compose. |
| Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-DPOP-11-001 | Implement DPoP proof validation + nonce handling for high-value audiences per architecture. | | Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-DPOP-11-001 | Implement DPoP proof validation + nonce handling for high-value audiences per architecture. |
| Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-MTLS-11-002 | Add OAuth mTLS client credential support with certificate-bound tokens and introspection updates. | | Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-MTLS-11-002 | Add OAuth mTLS client credential support with certificate-bound tokens and introspection updates. |

View File

@@ -1,466 +1,468 @@
# component_architecture_concelier.md — **StellaOps Concelier** (2025Q4) # component_architecture_concelier.md — **StellaOps Concelier** (2025Q4)
> **Scope.** Implementationready architecture for **Concelier**: the vulnerability ingest/normalize/merge/export subsystem that produces deterministic advisory data for the Scanner + Policy + Excititor pipeline. Covers domain model, connectors, merge rules, storage schema, exports, APIs, performance, security, and test matrices. > **Scope.** Implementationready architecture for **Concelier**: the vulnerability ingest/normalize/merge/export subsystem that produces deterministic advisory data for the Scanner + Policy + Excititor pipeline. Covers domain model, connectors, merge rules, storage schema, exports, APIs, performance, security, and test matrices.
--- ---
## 0) Mission & boundaries ## 0) Mission & boundaries
**Mission.** Acquire authoritative **vulnerability advisories** (vendor PSIRTs, distros, OSS ecosystems, CERTs), normalize them into a **canonical model**, reconcile aliases and version ranges, and export **deterministic artifacts** (JSON, Trivy DB) for fast backend joins. **Mission.** Acquire authoritative **vulnerability advisories** (vendor PSIRTs, distros, OSS ecosystems, CERTs), normalize them into a **canonical model**, reconcile aliases and version ranges, and export **deterministic artifacts** (JSON, Trivy DB) for fast backend joins.
**Boundaries.** **Boundaries.**
* Concelier **does not** sign with private keys. When attestation is required, the export artifact is handed to the **Signer**/**Attestor** pipeline (outofprocess). * Concelier **does not** sign with private keys. When attestation is required, the export artifact is handed to the **Signer**/**Attestor** pipeline (outofprocess).
* Concelier **does not** decide PASS/FAIL; it provides data to the **Policy** engine. * Concelier **does not** decide PASS/FAIL; it provides data to the **Policy** engine.
* Online operation is **allowlistonly**; airgapped deployments use the **Offline Kit**. * Online operation is **allowlistonly**; airgapped deployments use the **Offline Kit**.
--- ---
## 1) Topology & processes ## 1) Topology & processes
**Process shape:** single ASP.NET Core service `StellaOps.Concelier.WebService` hosting: **Process shape:** single ASP.NET Core service `StellaOps.Concelier.WebService` hosting:
* **Scheduler** with distributed locks (Mongo backed). * **Scheduler** with distributed locks (Mongo backed).
* **Connectors** (fetch/parse/map). * **Connectors** (fetch/parse/map).
* **Merger** (canonical record assembly + precedence). * **Merger** (canonical record assembly + precedence).
* **Exporters** (JSON, Trivy DB). * **Exporters** (JSON, Trivy DB).
* **Minimal REST** for health/status/trigger/export. * **Minimal REST** for health/status/trigger/export.
**Scale:** HA by running N replicas; **locks** prevent overlapping jobs per source/exporter. **Scale:** HA by running N replicas; **locks** prevent overlapping jobs per source/exporter.
--- ---
## 2) Canonical domain model ## 2) Canonical domain model
> Stored in MongoDB (database `concelier`), serialized with a **canonical JSON** writer (stable order, camelCase, normalized timestamps). > Stored in MongoDB (database `concelier`), serialized with a **canonical JSON** writer (stable order, camelCase, normalized timestamps).
### 2.1 Core entities ### 2.1 Core entities
**Advisory** **Advisory**
``` ```
advisoryId // internal GUID advisoryId // internal GUID
advisoryKey // stable string key (e.g., CVE-2025-12345 or vendor ID) advisoryKey // stable string key (e.g., CVE-2025-12345 or vendor ID)
title // short title (best-of from sources) title // short title (best-of from sources)
summary // normalized summary (English; i18n optional) summary // normalized summary (English; i18n optional)
published // earliest source timestamp published // earliest source timestamp
modified // latest source timestamp modified // latest source timestamp
severity // normalized {none, low, medium, high, critical} severity // normalized {none, low, medium, high, critical}
cvss // {v2?, v3?, v4?} objects (vector, baseScore, severity, source) cvss // {v2?, v3?, v4?} objects (vector, baseScore, severity, source)
exploitKnown // bool (e.g., KEV/active exploitation flags) exploitKnown // bool (e.g., KEV/active exploitation flags)
references[] // typed links (advisory, kb, patch, vendor, exploit, blog) references[] // typed links (advisory, kb, patch, vendor, exploit, blog)
sources[] // provenance for traceability (doc digests, URIs) sources[] // provenance for traceability (doc digests, URIs)
``` ```
**Alias** **Alias**
``` ```
advisoryId advisoryId
scheme // CVE, GHSA, RHSA, DSA, USN, MSRC, etc. scheme // CVE, GHSA, RHSA, DSA, USN, MSRC, etc.
value // e.g., "CVE-2025-12345" value // e.g., "CVE-2025-12345"
``` ```
**Affected** **Affected**
``` ```
advisoryId advisoryId
productKey // canonical product identity (see 2.2) productKey // canonical product identity (see 2.2)
rangeKind // semver | evr | nvra | apk | rpm | deb | generic | exact rangeKind // semver | evr | nvra | apk | rpm | deb | generic | exact
introduced? // string (format depends on rangeKind) introduced? // string (format depends on rangeKind)
fixed? // string (format depends on rangeKind) fixed? // string (format depends on rangeKind)
lastKnownSafe? // optional explicit safe floor lastKnownSafe? // optional explicit safe floor
arch? // arch or platform qualifier if source declares (x86_64, aarch64) arch? // arch or platform qualifier if source declares (x86_64, aarch64)
distro? // distro qualifier when applicable (rhel:9, debian:12, alpine:3.19) distro? // distro qualifier when applicable (rhel:9, debian:12, alpine:3.19)
ecosystem? // npm|pypi|maven|nuget|golang|… ecosystem? // npm|pypi|maven|nuget|golang|…
notes? // normalized notes per source notes? // normalized notes per source
``` ```
**Reference** **Reference**
``` ```
advisoryId advisoryId
url url
kind // advisory | patch | kb | exploit | mitigation | blog | cvrf | csaf kind // advisory | patch | kb | exploit | mitigation | blog | cvrf | csaf
sourceTag // e.g., vendor/redhat, distro/debian, oss/ghsa sourceTag // e.g., vendor/redhat, distro/debian, oss/ghsa
``` ```
**MergeEvent** **MergeEvent**
``` ```
advisoryKey advisoryKey
beforeHash // canonical JSON hash before merge beforeHash // canonical JSON hash before merge
afterHash // canonical JSON hash after merge afterHash // canonical JSON hash after merge
mergedAt mergedAt
inputs[] // source doc digests that contributed inputs[] // source doc digests that contributed
``` ```
**AdvisoryStatement (event log)** **AdvisoryStatement (event log)**
``` ```
statementId // GUID (immutable) statementId // GUID (immutable)
vulnerabilityKey // canonical advisory key (e.g., CVE-2025-12345) vulnerabilityKey // canonical advisory key (e.g., CVE-2025-12345)
advisoryKey // merge snapshot advisory key (may reference variant) advisoryKey // merge snapshot advisory key (may reference variant)
statementHash // canonical hash of advisory payload statementHash // canonical hash of advisory payload
asOf // timestamp of snapshot (UTC) asOf // timestamp of snapshot (UTC)
recordedAt // persistence timestamp (UTC) recordedAt // persistence timestamp (UTC)
inputDocuments[] // document IDs contributing to the snapshot inputDocuments[] // document IDs contributing to the snapshot
payload // canonical advisory document (BSON / canonical JSON) payload // canonical advisory document (BSON / canonical JSON)
``` ```
**AdvisoryConflict** **AdvisoryConflict**
``` ```
conflictId // GUID conflictId // GUID
vulnerabilityKey // canonical advisory key vulnerabilityKey // canonical advisory key
conflictHash // deterministic hash of conflict payload conflictHash // deterministic hash of conflict payload
asOf // timestamp aligned with originating statement set asOf // timestamp aligned with originating statement set
recordedAt // persistence timestamp recordedAt // persistence timestamp
statementIds[] // related advisoryStatement identifiers statementIds[] // related advisoryStatement identifiers
details // structured conflict explanation / merge reasoning details // structured conflict explanation / merge reasoning
``` ```
- `AdvisoryEventLog` (Concelier.Core) provides the public API for appending immutable statements/conflicts and querying replay history. Inputs are normalized by trimming and lower-casing `vulnerabilityKey`, serializing advisories with `CanonicalJsonSerializer`, and computing SHA-256 hashes (`statementHash`, `conflictHash`) over the canonical JSON payloads. Consumers can replay by key with an optional `asOf` filter to obtain deterministic snapshots ordered by `asOf` then `recordedAt`. - `AdvisoryEventLog` (Concelier.Core) provides the public API for appending immutable statements/conflicts and querying replay history. Inputs are normalized by trimming and lower-casing `vulnerabilityKey`, serializing advisories with `CanonicalJsonSerializer`, and computing SHA-256 hashes (`statementHash`, `conflictHash`) over the canonical JSON payloads. Consumers can replay by key with an optional `asOf` filter to obtain deterministic snapshots ordered by `asOf` then `recordedAt`.
- Concelier.WebService exposes the immutable log via `GET /concelier/advisories/{vulnerabilityKey}/replay[?asOf=UTC_ISO8601]`, returning the latest statements (with hex-encoded hashes) and any conflict explanations for downstream exporters and APIs. - Conflict explainers are serialized as deterministic `MergeConflictExplainerPayload` records (type, reason, source ranks, winning values); replay clients can parse the payload to render human-readable rationales without re-computing precedence.
- Concelier.WebService exposes the immutable log via `GET /concelier/advisories/{vulnerabilityKey}/replay[?asOf=UTC_ISO8601]`, returning the latest statements (with hex-encoded hashes) and any conflict explanations for downstream exporters and APIs.
**ExportState**
**ExportState**
```
exportKind // json | trivydb ```
baseExportId? // last full baseline exportKind // json | trivydb
baseDigest? // digest of last full baseline baseExportId? // last full baseline
lastFullDigest? // digest of last full export baseDigest? // digest of last full baseline
lastDeltaDigest? // digest of last delta export lastFullDigest? // digest of last full export
cursor // per-kind incremental cursor lastDeltaDigest? // digest of last delta export
files[] // last manifest snapshot (path → sha256) cursor // per-kind incremental cursor
``` files[] // last manifest snapshot (path → sha256)
```
### 2.2 Product identity (`productKey`)
### 2.2 Product identity (`productKey`)
* **Primary:** `purl` (Package URL).
* **OS packages:** RPM (NEVRA→purl:rpm), DEB (dpkg→purl:deb), APK (apk→purl:alpine), with **EVR/NVRA** preserved. * **Primary:** `purl` (Package URL).
* **Secondary:** `cpe` retained for compatibility; advisory records may carry both. * **OS packages:** RPM (NEVRA→purl:rpm), DEB (dpkg→purl:deb), APK (apk→purl:alpine), with **EVR/NVRA** preserved.
* **Image/platform:** `oci:<registry>/<repo>@<digest>` for imagelevel advisories (rare). * **Secondary:** `cpe` retained for compatibility; advisory records may carry both.
* **Unmappable:** if a source is nondeterministic, keep native string under `productKey="native:<provider>:<id>"` and mark **nonjoinable**. * **Image/platform:** `oci:<registry>/<repo>@<digest>` for imagelevel advisories (rare).
* **Unmappable:** if a source is nondeterministic, keep native string under `productKey="native:<provider>:<id>"` and mark **nonjoinable**.
---
---
## 3) Source families & precedence
## 3) Source families & precedence
### 3.1 Families
### 3.1 Families
* **Vendor PSIRTs**: Microsoft, Oracle, Cisco, Adobe, Apple, VMware, Chromium…
* **Linux distros**: Red Hat, SUSE, Ubuntu, Debian, Alpine * **Vendor PSIRTs**: Microsoft, Oracle, Cisco, Adobe, Apple, VMware, Chromium
* **OSS ecosystems**: OSV, GHSA (GitHub Security Advisories), PyPI, npm, Maven, NuGet, Go. * **Linux distros**: Red Hat, SUSE, Ubuntu, Debian, Alpine…
* **CERTs / national CSIRTs**: CISA (KEV, ICS), JVN, ACSC, CCCS, KISA, CERTFR/BUND, etc. * **OSS ecosystems**: OSV, GHSA (GitHub Security Advisories), PyPI, npm, Maven, NuGet, Go.
* **CERTs / national CSIRTs**: CISA (KEV, ICS), JVN, ACSC, CCCS, KISA, CERTFR/BUND, etc.
### 3.2 Precedence (when claims conflict)
### 3.2 Precedence (when claims conflict)
1. **Vendor PSIRT** (authoritative for their product).
2. **Distro** (authoritative for packages they ship, including backports). 1. **Vendor PSIRT** (authoritative for their product).
3. **Ecosystem** (OSV/GHSA) for library semantics. 2. **Distro** (authoritative for packages they ship, including backports).
4. **CERTs/aggregators** for enrichment (KEV/known exploited). 3. **Ecosystem** (OSV/GHSA) for library semantics.
4. **CERTs/aggregators** for enrichment (KEV/known exploited).
> Precedence affects **Affected** ranges and **fixed** info; **severity** is normalized to the **maximum** credible severity unless policy overrides. Conflicts are retained with **source provenance**.
> Precedence affects **Affected** ranges and **fixed** info; **severity** is normalized to the **maximum** credible severity unless policy overrides. Conflicts are retained with **source provenance**.
---
---
## 4) Connectors & normalization
## 4) Connectors & normalization
### 4.1 Connector contract
### 4.1 Connector contract
```csharp
public interface IFeedConnector { ```csharp
string SourceName { get; } public interface IFeedConnector {
Task FetchAsync(IServiceProvider sp, CancellationToken ct); // -> document collection string SourceName { get; }
Task ParseAsync(IServiceProvider sp, CancellationToken ct); // -> dto collection (validated) Task FetchAsync(IServiceProvider sp, CancellationToken ct); // -> document collection
Task MapAsync(IServiceProvider sp, CancellationToken ct); // -> advisory/alias/affected/reference Task ParseAsync(IServiceProvider sp, CancellationToken ct); // -> dto collection (validated)
} Task MapAsync(IServiceProvider sp, CancellationToken ct); // -> advisory/alias/affected/reference
``` }
```
* **Fetch**: windowed (cursor), conditional GET (ETag/LastModified), retry/backoff, rate limiting.
* **Parse**: schema validation (JSON Schema, XSD/CSAF), content type checks; write **DTO** with normalized casing. * **Fetch**: windowed (cursor), conditional GET (ETag/LastModified), retry/backoff, rate limiting.
* **Map**: build canonical records; all outputs carry **provenance** (doc digest, URI, anchors). * **Parse**: schema validation (JSON Schema, XSD/CSAF), content type checks; write **DTO** with normalized casing.
* **Map**: build canonical records; all outputs carry **provenance** (doc digest, URI, anchors).
### 4.2 Version range normalization
### 4.2 Version range normalization
* **SemVer** ecosystems (npm, pypi, maven, nuget, golang): normalize to `introduced`/`fixed` semver ranges (use `~`, `^`, `<`, `>=` canonicalized to intervals).
* **RPM EVR**: `epoch:version-release` with `rpmvercmp` semantics; store raw EVR strings and also **computed order keys** for query. * **SemVer** ecosystems (npm, pypi, maven, nuget, golang): normalize to `introduced`/`fixed` semver ranges (use `~`, `^`, `<`, `>=` canonicalized to intervals).
* **DEB**: dpkg version comparison semantics mirrored; store computed keys. * **RPM EVR**: `epoch:version-release` with `rpmvercmp` semantics; store raw EVR strings and also **computed order keys** for query.
* **APK**: Alpine version semantics; compute order keys. * **DEB**: dpkg version comparison semantics mirrored; store computed keys.
* **Generic**: if provider uses text, retain raw; do **not** invent ranges. * **APK**: Alpine version semantics; compute order keys.
* **Generic**: if provider uses text, retain raw; do **not** invent ranges.
### 4.3 Severity & CVSS
### 4.3 Severity & CVSS
* Normalize **CVSS v2/v3/v4** where available (vector, baseScore, severity).
* If multiple CVSS sources exist, track them all; **effective severity** defaults to **max** by policy (configurable). * Normalize **CVSS v2/v3/v4** where available (vector, baseScore, severity).
* **ExploitKnown** toggled by KEV and equivalent sources; store **evidence** (source, date). * If multiple CVSS sources exist, track them all; **effective severity** defaults to **max** by policy (configurable).
* **ExploitKnown** toggled by KEV and equivalent sources; store **evidence** (source, date).
---
---
## 5) Merge engine
## 5) Merge engine
### 5.1 Keying & identity
### 5.1 Keying & identity
* Identity graph: **CVE** is primary node; vendor/distro IDs resolved via **Alias** edges (from connectors and Conceliers alias tables).
* `advisoryKey` is the canonical primary key (CVE if present, else vendor/distro key). * Identity graph: **CVE** is primary node; vendor/distro IDs resolved via **Alias** edges (from connectors and Conceliers alias tables).
* `advisoryKey` is the canonical primary key (CVE if present, else vendor/distro key).
### 5.2 Merge algorithm (deterministic)
### 5.2 Merge algorithm (deterministic)
1. **Gather** all rows for `advisoryKey` (across sources).
2. **Select title/summary** by precedence source (vendor>distro>ecosystem>cert). 1. **Gather** all rows for `advisoryKey` (across sources).
3. **Union aliases** (dedupe by scheme+value). 2. **Select title/summary** by precedence source (vendor>distro>ecosystem>cert).
4. **Merge `Affected`** with rules: 3. **Union aliases** (dedupe by scheme+value).
4. **Merge `Affected`** with rules:
* Prefer **vendor** ranges for vendor products; prefer **distro** for **distroshipped** packages.
* If both exist for same `productKey`, keep **both**; mark `sourceTag` and `precedence` so **Policy** can decide. * Prefer **vendor** ranges for vendor products; prefer **distro** for **distroshipped** packages.
* Never collapse range semantics across different families (e.g., rpm EVR vs semver). * If both exist for same `productKey`, keep **both**; mark `sourceTag` and `precedence` so **Policy** can decide.
5. **CVSS/severity**: record all CVSS sets; compute **effectiveSeverity** = max (unless policy override). * Never collapse range semantics across different families (e.g., rpm EVR vs semver).
6. **References**: union with type precedence (advisory > patch > kb > exploit > blog); dedupe by URL; preserve `sourceTag`. 5. **CVSS/severity**: record all CVSS sets; compute **effectiveSeverity** = max (unless policy override).
7. Produce **canonical JSON**; compute **afterHash**; store **MergeEvent** with inputs and hashes. 6. **References**: union with type precedence (advisory > patch > kb > exploit > blog); dedupe by URL; preserve `sourceTag`.
7. Produce **canonical JSON**; compute **afterHash**; store **MergeEvent** with inputs and hashes.
> The merge is **pure** given inputs. Any change in inputs or precedence matrices changes the **hash** predictably.
> The merge is **pure** given inputs. Any change in inputs or precedence matrices changes the **hash** predictably.
---
---
## 6) Storage schema (MongoDB)
## 6) Storage schema (MongoDB)
**Collections & indexes**
**Collections & indexes**
* `source` `{_id, type, baseUrl, enabled, notes}`
* `source_state` `{sourceName(unique), enabled, cursor, lastSuccess, backoffUntil, paceOverrides}` * `source` `{_id, type, baseUrl, enabled, notes}`
* `document` `{_id, sourceName, uri, fetchedAt, sha256, contentType, status, metadata, gridFsId?, etag?, lastModified?}` * `source_state` `{sourceName(unique), enabled, cursor, lastSuccess, backoffUntil, paceOverrides}`
* `document` `{_id, sourceName, uri, fetchedAt, sha256, contentType, status, metadata, gridFsId?, etag?, lastModified?}`
* Index: `{sourceName:1, uri:1}` unique, `{fetchedAt:-1}`
* `dto` `{_id, sourceName, documentId, schemaVer, payload, validatedAt}` * Index: `{sourceName:1, uri:1}` unique, `{fetchedAt:-1}`
* `dto` `{_id, sourceName, documentId, schemaVer, payload, validatedAt}`
* Index: `{sourceName:1, documentId:1}`
* `advisory` `{_id, advisoryKey, title, summary, published, modified, severity, cvss, exploitKnown, sources[]}` * Index: `{sourceName:1, documentId:1}`
* `advisory` `{_id, advisoryKey, title, summary, published, modified, severity, cvss, exploitKnown, sources[]}`
* Index: `{advisoryKey:1}` unique, `{modified:-1}`, `{severity:1}`, text index (title, summary)
* `alias` `{advisoryId, scheme, value}` * Index: `{advisoryKey:1}` unique, `{modified:-1}`, `{severity:1}`, text index (title, summary)
* `alias` `{advisoryId, scheme, value}`
* Index: `{scheme:1,value:1}`, `{advisoryId:1}`
* `affected` `{advisoryId, productKey, rangeKind, introduced?, fixed?, arch?, distro?, ecosystem?}` * Index: `{scheme:1,value:1}`, `{advisoryId:1}`
* `affected` `{advisoryId, productKey, rangeKind, introduced?, fixed?, arch?, distro?, ecosystem?}`
* Index: `{productKey:1}`, `{advisoryId:1}`, `{productKey:1, rangeKind:1}`
* `reference` `{advisoryId, url, kind, sourceTag}` * Index: `{productKey:1}`, `{advisoryId:1}`, `{productKey:1, rangeKind:1}`
* `reference` `{advisoryId, url, kind, sourceTag}`
* Index: `{advisoryId:1}`, `{kind:1}`
* `merge_event` `{advisoryKey, beforeHash, afterHash, mergedAt, inputs[]}` * Index: `{advisoryId:1}`, `{kind:1}`
* `merge_event` `{advisoryKey, beforeHash, afterHash, mergedAt, inputs[]}`
* Index: `{advisoryKey:1, mergedAt:-1}`
* `export_state` `{_id(exportKind), baseExportId?, baseDigest?, lastFullDigest?, lastDeltaDigest?, cursor, files[]}` * Index: `{advisoryKey:1, mergedAt:-1}`
* `locks` `{_id(jobKey), holder, acquiredAt, heartbeatAt, leaseMs, ttlAt}` (TTL cleans dead locks) * `export_state` `{_id(exportKind), baseExportId?, baseDigest?, lastFullDigest?, lastDeltaDigest?, cursor, files[]}`
* `jobs` `{_id, type, args, state, startedAt, heartbeatAt, endedAt, error}` * `locks` `{_id(jobKey), holder, acquiredAt, heartbeatAt, leaseMs, ttlAt}` (TTL cleans dead locks)
* `jobs` `{_id, type, args, state, startedAt, heartbeatAt, endedAt, error}`
**GridFS buckets**: `fs.documents` for raw payloads.
**GridFS buckets**: `fs.documents` for raw payloads.
---
---
## 7) Exporters
## 7) Exporters
### 7.1 Deterministic JSON (vulnlist style)
### 7.1 Deterministic JSON (vulnlist style)
* Folder structure mirroring `/<scheme>/<first-two>/<rest>/…` with one JSON per advisory; deterministic ordering, stable timestamps, normalized whitespace.
* `manifest.json` lists all files with SHA256 and a toplevel **export digest**. * Folder structure mirroring `/<scheme>/<first-two>/<rest>/…` with one JSON per advisory; deterministic ordering, stable timestamps, normalized whitespace.
* `manifest.json` lists all files with SHA256 and a toplevel **export digest**.
### 7.2 Trivy DB exporter
### 7.2 Trivy DB exporter
* Builds Bolt DB archives compatible with Trivy; supports **full** and **delta** modes.
* In delta, unchanged blobs are reused from the base; metadata captures: * Builds Bolt DB archives compatible with Trivy; supports **full** and **delta** modes.
* In delta, unchanged blobs are reused from the base; metadata captures:
```
{ ```
"mode": "delta|full", {
"baseExportId": "...", "mode": "delta|full",
"baseManifestDigest": "sha256:...", "baseExportId": "...",
"changed": ["path1", "path2"], "baseManifestDigest": "sha256:...",
"removed": ["path3"] "changed": ["path1", "path2"],
} "removed": ["path3"]
``` }
* Optional ORAS push (OCI layout) for registries. ```
* Offline kit bundles include Trivy DB + JSON tree + export manifest. * Optional ORAS push (OCI layout) for registries.
* Mirror-ready bundles: when `concelier.trivy.mirror` defines domains, the exporter emits `mirror/index.json` plus per-domain `manifest.json`, `metadata.json`, and `db.tar.gz` files with SHA-256 digests so Concelier mirrors can expose domain-scoped download endpoints. * Offline kit bundles include Trivy DB + JSON tree + export manifest.
* Mirror-ready bundles: when `concelier.trivy.mirror` defines domains, the exporter emits `mirror/index.json` plus per-domain `manifest.json`, `metadata.json`, and `db.tar.gz` files with SHA-256 digests so Concelier mirrors can expose domain-scoped download endpoints.
### 7.3 Handoff to Signer/Attestor (optional) * Concelier.WebService serves `/concelier/exports/index.json` and `/concelier/exports/mirror/{domain}/…` directly from the export tree with hour-long budgets (index: 60s, bundles: 300s, immutable) and per-domain rate limiting; the endpoints honour Stella Ops Authority or CIDR bypass lists depending on mirror topology.
* On export completion, if `attest: true` is set in job args, Concelier **posts** the artifact metadata to **Signer**/**Attestor**; Concelier itself **does not** hold signing keys. ### 7.3 Handoff to Signer/Attestor (optional)
* Export record stores returned `{ uuid, index, url }` from **Rekor v2**.
* On export completion, if `attest: true` is set in job args, Concelier **posts** the artifact metadata to **Signer**/**Attestor**; Concelier itself **does not** hold signing keys.
--- * Export record stores returned `{ uuid, index, url }` from **Rekor v2**.
## 8) REST APIs ---
All under `/api/v1/concelier`. ## 8) REST APIs
**Health & status** All under `/api/v1/concelier`.
``` **Health & status**
GET /healthz | /readyz
GET /status → sources, last runs, export cursors ```
``` GET /healthz | /readyz
GET /status → sources, last runs, export cursors
**Sources & jobs** ```
``` **Sources & jobs**
GET /sources → list of configured sources
POST /sources/{name}/trigger → { jobId } ```
POST /sources/{name}/pause | /resume → toggle GET /sources → list of configured sources
GET /jobs/{id} → job status POST /sources/{name}/trigger { jobId }
``` POST /sources/{name}/pause | /resume → toggle
GET /jobs/{id} → job status
**Exports** ```
``` **Exports**
POST /exports/json { full?:bool, force?:bool, attest?:bool } → { exportId, digest, rekor? }
POST /exports/trivy { full?:bool, force?:bool, publish?:bool, attest?:bool } → { exportId, digest, rekor? } ```
GET /exports/{id} → export metadata (kind, digest, createdAt, rekor?) POST /exports/json { full?:bool, force?:bool, attest?:bool } { exportId, digest, rekor? }
GET /concelier/exports/index.json → mirror index describing available domains/bundles POST /exports/trivy { full?:bool, force?:bool, publish?:bool, attest?:bool } → { exportId, digest, rekor? }
GET /concelier/exports/mirror/{domain}/manifest.json GET /exports/{id} → export metadata (kind, digest, createdAt, rekor?)
GET /concelier/exports/mirror/{domain}/bundle.json GET /concelier/exports/index.json → mirror index describing available domains/bundles
GET /concelier/exports/mirror/{domain}/bundle.json.jws GET /concelier/exports/mirror/{domain}/manifest.json
``` GET /concelier/exports/mirror/{domain}/bundle.json
GET /concelier/exports/mirror/{domain}/bundle.json.jws
**Search (operator debugging)** ```
``` **Search (operator debugging)**
GET /advisories/{key}
GET /advisories?scheme=CVE&value=CVE-2025-12345 ```
GET /affected?productKey=pkg:rpm/openssl&limit=100 GET /advisories/{key}
``` GET /advisories?scheme=CVE&value=CVE-2025-12345
GET /affected?productKey=pkg:rpm/openssl&limit=100
**AuthN/Z:** Authority tokens (OpTok) with roles: `concelier.read`, `concelier.admin`, `concelier.export`. ```
--- **AuthN/Z:** Authority tokens (OpTok) with roles: `concelier.read`, `concelier.admin`, `concelier.export`.
## 9) Configuration (YAML) ---
```yaml ## 9) Configuration (YAML)
concelier:
mongo: { uri: "mongodb://mongo/concelier" } ```yaml
s3: concelier:
endpoint: "http://minio:9000" mongo: { uri: "mongodb://mongo/concelier" }
bucket: "stellaops-concelier" s3:
scheduler: endpoint: "http://minio:9000"
windowSeconds: 30 bucket: "stellaops-concelier"
maxParallelSources: 4 scheduler:
sources: windowSeconds: 30
- name: redhat maxParallelSources: 4
kind: csaf sources:
baseUrl: https://access.redhat.com/security/data/csaf/v2/ - name: redhat
signature: { type: pgp, keys: [ "…redhat PGP…" ] } kind: csaf
enabled: true baseUrl: https://access.redhat.com/security/data/csaf/v2/
windowDays: 7 signature: { type: pgp, keys: [ "…redhat PGP…" ] }
- name: suse enabled: true
kind: csaf windowDays: 7
baseUrl: https://ftp.suse.com/pub/projects/security/csaf/ - name: suse
signature: { type: pgp, keys: [ "…suse PGP…" ] } kind: csaf
- name: ubuntu baseUrl: https://ftp.suse.com/pub/projects/security/csaf/
kind: usn-json signature: { type: pgp, keys: [ "…suse PGP…" ] }
baseUrl: https://ubuntu.com/security/notices.json - name: ubuntu
signature: { type: none } kind: usn-json
- name: osv baseUrl: https://ubuntu.com/security/notices.json
kind: osv signature: { type: none }
baseUrl: https://api.osv.dev/v1/ - name: osv
signature: { type: none } kind: osv
- name: ghsa baseUrl: https://api.osv.dev/v1/
kind: ghsa signature: { type: none }
baseUrl: https://api.github.com/graphql - name: ghsa
auth: { tokenRef: "env:GITHUB_TOKEN" } kind: ghsa
exporters: baseUrl: https://api.github.com/graphql
json: auth: { tokenRef: "env:GITHUB_TOKEN" }
enabled: true exporters:
output: s3://stellaops-concelier/json/ json:
trivy: enabled: true
enabled: true output: s3://stellaops-concelier/json/
mode: full trivy:
output: s3://stellaops-concelier/trivy/ enabled: true
oras: mode: full
enabled: false output: s3://stellaops-concelier/trivy/
repo: ghcr.io/org/concelier oras:
precedence: enabled: false
vendorWinsOverDistro: true repo: ghcr.io/org/concelier
distroWinsOverOsv: true precedence:
severity: vendorWinsOverDistro: true
policy: max # or 'vendorPreferred' / 'distroPreferred' distroWinsOverOsv: true
``` severity:
policy: max # or 'vendorPreferred' / 'distroPreferred'
--- ```
## 10) Security & compliance ---
* **Outbound allowlist** per connector (domains, protocols); proxy support; TLS pinning where possible. ## 10) Security & compliance
* **Signature verification** for raw docs (PGP/cosign/x509) with results stored in `document.metadata.sig`. Docs failing verification may still be ingested but flagged; **merge** can downweight or ignore them by config.
* **No secrets in logs**; auth material via `env:` or mounted files; HTTP redaction of `Authorization` headers. * **Outbound allowlist** per connector (domains, protocols); proxy support; TLS pinning where possible.
* **Multitenant**: pertenant DBs or prefixes; pertenant S3 prefixes; tenantscoped API tokens. * **Signature verification** for raw docs (PGP/cosign/x509) with results stored in `document.metadata.sig`. Docs failing verification may still be ingested but flagged; **merge** can downweight or ignore them by config.
* **Determinism**: canonical JSON writer; export digests stable across runs given same inputs. * **No secrets in logs**; auth material via `env:` or mounted files; HTTP redaction of `Authorization` headers.
* **Multitenant**: pertenant DBs or prefixes; pertenant S3 prefixes; tenantscoped API tokens.
--- * **Determinism**: canonical JSON writer; export digests stable across runs given same inputs.
## 11) Performance targets & scale ---
* **Ingest**: ≥ 5k documents/min on 4 cores (CSAF/OpenVEX/JSON). ## 11) Performance targets & scale
* **Normalize/map**: ≥ 50k `Affected` rows/min on 4 cores.
* **Merge**: ≤ 10ms P95 per advisory at steadystate updates. * **Ingest**: ≥ 5k documents/min on 4 cores (CSAF/OpenVEX/JSON).
* **Export**: 1M advisories JSON in ≤ 90s (streamed, zstd), Trivy DB in ≤ 60s on 8 cores. * **Normalize/map**: ≥ 50k `Affected` rows/min on 4 cores.
* **Memory**: hard cap per job; chunked streaming writers; backpressure to avoid GC spikes. * **Merge**: ≤ 10ms P95 per advisory at steadystate updates.
* **Export**: 1M advisories JSON in ≤ 90s (streamed, zstd), Trivy DB in ≤ 60s on 8 cores.
**Scale pattern**: add Concelier replicas; Mongo scaling via indices and read/write concerns; GridFS only for oversized docs. * **Memory**: hard cap per job; chunked streaming writers; backpressure to avoid GC spikes.
--- **Scale pattern**: add Concelier replicas; Mongo scaling via indices and read/write concerns; GridFS only for oversized docs.
## 12) Observability ---
* **Metrics** ## 12) Observability
* `concelier.fetch.docs_total{source}` * **Metrics**
* `concelier.fetch.bytes_total{source}`
* `concelier.parse.failures_total{source}` * `concelier.fetch.docs_total{source}`
* `concelier.map.affected_total{source}` * `concelier.fetch.bytes_total{source}`
* `concelier.merge.changed_total` * `concelier.parse.failures_total{source}`
* `concelier.export.bytes{kind}` * `concelier.map.affected_total{source}`
* `concelier.export.duration_seconds{kind}` * `concelier.merge.changed_total`
* **Tracing** around fetch/parse/map/merge/export. * `concelier.export.bytes{kind}`
* **Logs**: structured with `source`, `uri`, `docDigest`, `advisoryKey`, `exportId`. * `concelier.export.duration_seconds{kind}`
* **Tracing** around fetch/parse/map/merge/export.
--- * **Logs**: structured with `source`, `uri`, `docDigest`, `advisoryKey`, `exportId`.
## 13) Testing matrix ---
* **Connectors:** fixture suites for each provider/format (happy path; malformed; signature fail). ## 13) Testing matrix
* **Version semantics:** EVR vs dpkg vs semver edge cases (epoch bumps, tilde versions, prereleases).
* **Merge:** conflicting sources (vendor vs distro vs OSV); verify precedence & dual retention. * **Connectors:** fixture suites for each provider/format (happy path; malformed; signature fail).
* **Export determinism:** byteforbyte stable outputs across runs; digest equality. * **Version semantics:** EVR vs dpkg vs semver edge cases (epoch bumps, tilde versions, prereleases).
* **Performance:** soak tests with 1M advisories; cap memory; verify backpressure. * **Merge:** conflicting sources (vendor vs distro vs OSV); verify precedence & dual retention.
* **API:** pagination, filters, RBAC, error envelopes (RFC 7807). * **Export determinism:** byteforbyte stable outputs across runs; digest equality.
* **Offline kit:** bundle build & import correctness. * **Performance:** soak tests with 1M advisories; cap memory; verify backpressure.
* **API:** pagination, filters, RBAC, error envelopes (RFC 7807).
--- * **Offline kit:** bundle build & import correctness.
## 14) Failure modes & recovery ---
* **Source outages:** scheduler backs off with exponential delay; `source_state.backoffUntil`; alerts on staleness. ## 14) Failure modes & recovery
* **Schema drifts:** parse stage marks DTO invalid; job fails with clear diagnostics; connector version flags track supported schema ranges.
* **Partial exports:** exporters write to temp prefix; **manifest commit** is atomic; only then move to final prefix and update `export_state`. * **Source outages:** scheduler backs off with exponential delay; `source_state.backoffUntil`; alerts on staleness.
* **Resume:** all stages idempotent; `source_state.cursor` supports window resume. * **Schema drifts:** parse stage marks DTO invalid; job fails with clear diagnostics; connector version flags track supported schema ranges.
* **Partial exports:** exporters write to temp prefix; **manifest commit** is atomic; only then move to final prefix and update `export_state`.
--- * **Resume:** all stages idempotent; `source_state.cursor` supports window resume.
## 15) Operator runbook (quick) ---
* **Trigger all sources:** `POST /api/v1/concelier/sources/*/trigger` ## 15) Operator runbook (quick)
* **Force full export JSON:** `POST /api/v1/concelier/exports/json { "full": true, "force": true }`
* **Force Trivy DB delta publish:** `POST /api/v1/concelier/exports/trivy { "full": false, "publish": true }` * **Trigger all sources:** `POST /api/v1/concelier/sources/*/trigger`
* **Inspect advisory:** `GET /api/v1/concelier/advisories?scheme=CVE&value=CVE-2025-12345` * **Force full export JSON:** `POST /api/v1/concelier/exports/json { "full": true, "force": true }`
* **Pause noisy source:** `POST /api/v1/concelier/sources/osv/pause` * **Force Trivy DB delta publish:** `POST /api/v1/concelier/exports/trivy { "full": false, "publish": true }`
* **Inspect advisory:** `GET /api/v1/concelier/advisories?scheme=CVE&value=CVE-2025-12345`
--- * **Pause noisy source:** `POST /api/v1/concelier/sources/osv/pause`
## 16) Rollout plan ---
1. **MVP**: Red Hat (CSAF), SUSE (CSAF), Ubuntu (USN JSON), OSV; JSON export. ## 16) Rollout plan
2. **Add**: GHSA GraphQL, Debian (DSA HTML/JSON), Alpine secdb; Trivy DB export.
3. **Attestation handoff**: integrate with **Signer/Attestor** (optional). 1. **MVP**: Red Hat (CSAF), SUSE (CSAF), Ubuntu (USN JSON), OSV; JSON export.
4. **Scale & diagnostics**: provider dashboards, staleness alerts, export cache reuse. 2. **Add**: GHSA GraphQL, Debian (DSA HTML/JSON), Alpine secdb; Trivy DB export.
5. **Offline kit**: endtoend verified bundles for airgap. 3. **Attestation handoff**: integrate with **Signer/Attestor** (optional).
4. **Scale & diagnostics**: provider dashboards, staleness alerts, export cache reuse.
5. **Offline kit**: endtoend verified bundles for airgap.

View File

@@ -337,7 +337,7 @@ Prometheus + OTLP; Grafana dashboards ship in the charts.
* **Vulnerability response**: * **Vulnerability response**:
* Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice. * Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice.
* 2025-10: Pinned `MongoDB.Driver` **3.5.0** and `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1902/NU1903 warnings surfaced during scanner cache/worker test runs; future dependency bumps follow the same central override pattern. * 2025-10: Pinned `MongoDB.Driver` **3.5.0** and `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1902/NU1903 warnings surfaced during scanner cache/worker test runs; repacked the local `Mongo2Go` feed so test fixtures inherit the patched dependencies; future bumps follow the same central override pattern.
* **Backups/DR**: * **Backups/DR**:

View File

@@ -1,196 +1,224 @@
# Concelier & Excititor Mirror Operations # Concelier & Excititor Mirror Operations
This runbook describes how StellaOps operates the managed mirrors under `*.stella-ops.org`. This runbook describes how StellaOps operates the managed mirrors under `*.stella-ops.org`.
It covers Docker Compose and Helm deployment overlays, secret handling for multi-tenant It covers Docker Compose and Helm deployment overlays, secret handling for multi-tenant
authn, CDN fronting, and the recurring sync pipeline that keeps mirror bundles current. authn, CDN fronting, and the recurring sync pipeline that keeps mirror bundles current.
## 1. Prerequisites ## 1. Prerequisites
- **Authority access** client credentials (`client_id` + secret) authorised for - **Authority access** client credentials (`client_id` + secret) authorised for
`concelier.mirror.read` and `excititor.mirror.read` scopes. Secrets live outside git. `concelier.mirror.read` and `excititor.mirror.read` scopes. Secrets live outside git.
- **Signed TLS certificates** wildcard or per-domain (`mirror-primary`, `mirror-community`). - **Signed TLS certificates** wildcard or per-domain (`mirror-primary`, `mirror-community`).
Store them under `deploy/compose/mirror-gateway/tls/` or in Kubernetes secrets. Store them under `deploy/compose/mirror-gateway/tls/` or in Kubernetes secrets.
- **Mirror gateway credentials** Basic Auth htpasswd files per domain. Generate with - **Mirror gateway credentials** Basic Auth htpasswd files per domain. Generate with
`htpasswd -B`. Operators distribute credentials to downstream consumers. `htpasswd -B`. Operators distribute credentials to downstream consumers.
- **Export artifact source** read access to the canonical S3 buckets (or rsync share) - **Export artifact source** read access to the canonical S3 buckets (or rsync share)
that hold `concelier` JSON bundles and `excititor` VEX exports. that hold `concelier` JSON bundles and `excititor` VEX exports.
- **Persistent volumes** storage for Concelier job metadata and mirror export trees. - **Persistent volumes** storage for Concelier job metadata and mirror export trees.
For Helm, provision PVCs (`concelier-mirror-jobs`, `concelier-mirror-exports`, For Helm, provision PVCs (`concelier-mirror-jobs`, `concelier-mirror-exports`,
`excititor-mirror-exports`, `mirror-mongo-data`, `mirror-minio-data`) before rollout. `excititor-mirror-exports`, `mirror-mongo-data`, `mirror-minio-data`) before rollout.
## 2. Secret & certificate layout ### 1.1 Service configuration quick reference
### Docker Compose (`deploy/compose/docker-compose.mirror.yaml`) Concelier.WebService exposes the mirror HTTP endpoints once `CONCELIER__MIRROR__ENABLED=true`.
Key knobs:
- `deploy/compose/env/mirror.env.example` copy to `.env` and adjust quotas or domain IDs.
- `deploy/compose/mirror-secrets/` mount read-only into `/run/secrets`. Place: - `CONCELIER__MIRROR__EXPORTROOT` root folder containing export snapshots (`<exportId>/mirror/*`).
- `concelier-authority-client` Authority client secret. - `CONCELIER__MIRROR__ACTIVEEXPORTID` optional explicit export id; otherwise the service auto-falls back to the `latest/` symlink or newest directory.
- `excititor-authority-client` (optional) reserve for future authn. - `CONCELIER__MIRROR__REQUIREAUTHENTICATION` default auth requirement; override per domain with `CONCELIER__MIRROR__DOMAINS__{n}__REQUIREAUTHENTICATION`.
- `deploy/compose/mirror-gateway/tls/` PEM-encoded cert/key pairs: - `CONCELIER__MIRROR__MAXINDEXREQUESTSPERHOUR` budget for `/concelier/exports/index.json`. Domains inherit this value unless they define `__MAXDOWNLOADREQUESTSPERHOUR`.
- `mirror-primary.crt`, `mirror-primary.key` - `CONCELIER__MIRROR__DOMAINS__{n}__ID` domain identifier matching the exporter manifest; additional keys configure display name and rate budgets.
- `mirror-community.crt`, `mirror-community.key`
- `deploy/compose/mirror-gateway/secrets/` htpasswd files: > The service honours Stella Ops Authority when `CONCELIER__AUTHORITY__ENABLED=true` and `ALLOWANONYMOUSFALLBACK=false`. Use the bypass CIDR list (`CONCELIER__AUTHORITY__BYPASSNETWORKS__*`) for in-cluster ingress gateways that terminate Basic Auth. Unauthorized requests emit `WWW-Authenticate: Bearer` so downstream automation can detect token failures.
- `mirror-primary.htpasswd`
- `mirror-community.htpasswd` Mirror responses carry deterministic cache headers: `/index.json` returns `Cache-Control: public, max-age=60`, while per-domain manifests/bundles include `Cache-Control: public, max-age=300, immutable`. Rate limiting surfaces `Retry-After` when quotas are exceeded.
### Helm (`deploy/helm/stellaops/values-mirror.yaml`) ## 2. Secret & certificate layout
Create secrets in the target namespace: ### Docker Compose (`deploy/compose/docker-compose.mirror.yaml`)
```bash - `deploy/compose/env/mirror.env.example` copy to `.env` and adjust quotas or domain IDs.
kubectl create secret generic concelier-mirror-auth \ - `deploy/compose/mirror-secrets/` mount read-only into `/run/secrets`. Place:
--from-file=concelier-authority-client=concelier-authority-client - `concelier-authority-client` Authority client secret.
- `excititor-authority-client` (optional) reserve for future authn.
kubectl create secret generic excititor-mirror-auth \ - `deploy/compose/mirror-gateway/tls/` PEM-encoded cert/key pairs:
--from-file=excititor-authority-client=excititor-authority-client - `mirror-primary.crt`, `mirror-primary.key`
- `mirror-community.crt`, `mirror-community.key`
kubectl create secret tls mirror-gateway-tls \ - `deploy/compose/mirror-gateway/secrets/` htpasswd files:
--cert=mirror-primary.crt --key=mirror-primary.key - `mirror-primary.htpasswd`
- `mirror-community.htpasswd`
kubectl create secret generic mirror-gateway-htpasswd \
--from-file=mirror-primary.htpasswd --from-file=mirror-community.htpasswd ### Helm (`deploy/helm/stellaops/values-mirror.yaml`)
```
Create secrets in the target namespace:
> Keep Basic Auth lists short-lived (rotate quarterly) and document credential recipients.
```bash
## 3. Deployment kubectl create secret generic concelier-mirror-auth \
--from-file=concelier-authority-client=concelier-authority-client
### 3.1 Docker Compose (edge mirrors, lab validation)
kubectl create secret generic excititor-mirror-auth \
1. `cp deploy/compose/env/mirror.env.example deploy/compose/env/mirror.env` --from-file=excititor-authority-client=excititor-authority-client
2. Populate secrets/tls directories as described above.
3. Sync mirror bundles (see §4) into `deploy/compose/mirror-data/…` and ensure they are mounted kubectl create secret tls mirror-gateway-tls \
on the host path backing the `concelier-exports` and `excititor-exports` volumes. --cert=mirror-primary.crt --key=mirror-primary.key
4. Run the profile validator: `deploy/tools/validate-profiles.sh`.
5. Launch: `docker compose --env-file env/mirror.env -f docker-compose.mirror.yaml up -d`. kubectl create secret generic mirror-gateway-htpasswd \
--from-file=mirror-primary.htpasswd --from-file=mirror-community.htpasswd
### 3.2 Helm (production mirrors) ```
1. Provision PVCs sized for mirror bundles (baseline: 20GiB per domain). > Keep Basic Auth lists short-lived (rotate quarterly) and document credential recipients.
2. Create secrets/tls config maps (§2).
3. `helm upgrade --install mirror deploy/helm/stellaops -f deploy/helm/stellaops/values-mirror.yaml`. ## 3. Deployment
4. Annotate the `stellaops-mirror-gateway` service with ingress/LoadBalancer metadata required by
your CDN (e.g., AWS load balancer scheme internal + NLB idle timeout). ### 3.1 Docker Compose (edge mirrors, lab validation)
## 4. Artifact sync workflow 1. `cp deploy/compose/env/mirror.env.example deploy/compose/env/mirror.env`
2. Populate secrets/tls directories as described above.
Mirrors never generate exports—they ingest signed bundles produced by the Concelier and Excititor 3. Sync mirror bundles (see §4) into `deploy/compose/mirror-data/…` and ensure they are mounted
export jobs. Recommended sync pattern: on the host path backing the `concelier-exports` and `excititor-exports` volumes.
4. Run the profile validator: `deploy/tools/validate-profiles.sh`.
### 4.1 Compose host (systemd timer) 5. Launch: `docker compose --env-file env/mirror.env -f docker-compose.mirror.yaml up -d`.
`/usr/local/bin/mirror-sync.sh`: ### 3.2 Helm (production mirrors)
```bash 1. Provision PVCs sized for mirror bundles (baseline: 20GiB per domain).
#!/usr/bin/env bash 2. Create secrets/tls config maps (§2).
set -euo pipefail 3. `helm upgrade --install mirror deploy/helm/stellaops -f deploy/helm/stellaops/values-mirror.yaml`.
export AWS_ACCESS_KEY_ID= 4. Annotate the `stellaops-mirror-gateway` service with ingress/LoadBalancer metadata required by
export AWS_SECRET_ACCESS_KEY= your CDN (e.g., AWS load balancer scheme internal + NLB idle timeout).
aws s3 sync s3://mirror-stellaops/concelier/latest \ ## 4. Artifact sync workflow
/opt/stellaops/mirror-data/concelier --delete --size-only
Mirrors never generate exports—they ingest signed bundles produced by the Concelier and Excititor
aws s3 sync s3://mirror-stellaops/excititor/latest \ export jobs. Recommended sync pattern:
/opt/stellaops/mirror-data/excititor --delete --size-only
``` ### 4.1 Compose host (systemd timer)
Schedule with a systemd timer every 5minutes. The Compose volumes mount `/opt/stellaops/mirror-data/*` `/usr/local/bin/mirror-sync.sh`:
into the containers read-only, matching `CONCELIER__MIRROR__EXPORTROOT=/exports/json` and
`EXCITITOR__ARTIFACTS__FILESYSTEM__ROOT=/exports`. ```bash
#!/usr/bin/env bash
### 4.2 Kubernetes (CronJob) set -euo pipefail
export AWS_ACCESS_KEY_ID=
Create a CronJob running the AWS CLI (or rclone) in the same namespace, writing into the PVCs: export AWS_SECRET_ACCESS_KEY=
```yaml aws s3 sync s3://mirror-stellaops/concelier/latest \
apiVersion: batch/v1 /opt/stellaops/mirror-data/concelier --delete --size-only
kind: CronJob
metadata: aws s3 sync s3://mirror-stellaops/excititor/latest \
name: mirror-sync /opt/stellaops/mirror-data/excititor --delete --size-only
spec: ```
schedule: "*/5 * * * *"
jobTemplate: Schedule with a systemd timer every 5minutes. The Compose volumes mount `/opt/stellaops/mirror-data/*`
spec: into the containers read-only, matching `CONCELIER__MIRROR__EXPORTROOT=/exports/json` and
template: `EXCITITOR__ARTIFACTS__FILESYSTEM__ROOT=/exports`.
spec:
containers: ### 4.2 Kubernetes (CronJob)
- name: sync
image: public.ecr.aws/aws-cli/aws-cli@sha256:5df5f52c29f5e3ba46d0ad9e0e3afc98701c4a0f879400b4c5f80d943b5fadea Create a CronJob running the AWS CLI (or rclone) in the same namespace, writing into the PVCs:
command:
- /bin/sh ```yaml
- -c apiVersion: batch/v1
- > kind: CronJob
aws s3 sync s3://mirror-stellaops/concelier/latest /exports/concelier --delete --size-only && metadata:
aws s3 sync s3://mirror-stellaops/excititor/latest /exports/excititor --delete --size-only name: mirror-sync
volumeMounts: spec:
- name: concelier-exports schedule: "*/5 * * * *"
mountPath: /exports/concelier jobTemplate:
- name: excititor-exports spec:
mountPath: /exports/excititor template:
envFrom: spec:
- secretRef: containers:
name: mirror-sync-aws - name: sync
restartPolicy: OnFailure image: public.ecr.aws/aws-cli/aws-cli@sha256:5df5f52c29f5e3ba46d0ad9e0e3afc98701c4a0f879400b4c5f80d943b5fadea
volumes: command:
- name: concelier-exports - /bin/sh
persistentVolumeClaim: - -c
claimName: concelier-mirror-exports - >
- name: excititor-exports aws s3 sync s3://mirror-stellaops/concelier/latest /exports/concelier --delete --size-only &&
persistentVolumeClaim: aws s3 sync s3://mirror-stellaops/excititor/latest /exports/excititor --delete --size-only
claimName: excititor-mirror-exports volumeMounts:
``` - name: concelier-exports
mountPath: /exports/concelier
## 5. CDN integration - name: excititor-exports
mountPath: /exports/excititor
1. Point the CDN origin at the mirror gateway (Compose host or Kubernetes LoadBalancer). envFrom:
2. Honour the response headers emitted by the gateway and Concelier/Excititor: - secretRef:
`Cache-Control: public, max-age=300, immutable` for mirror payloads. name: mirror-sync-aws
3. Configure origin shields in the CDN to prevent cache stampedes. Recommended TTLs: restartPolicy: OnFailure
- Index (`/concelier/exports/index.json`, `/excititor/mirror/*/index`) → 60s. volumes:
- Bundle/manifest payloads → 300s. - name: concelier-exports
4. Forward the `Authorization` header—Basic Auth terminates at the gateway. persistentVolumeClaim:
5. Enforce per-domain rate limits at the CDN (matching gateway budgets) and enable logging claimName: concelier-mirror-exports
to SIEM for anomaly detection. - name: excititor-exports
persistentVolumeClaim:
## 6. Smoke tests claimName: excititor-mirror-exports
```
After each deployment or sync cycle:
## 5. CDN integration
```bash
# Index with Basic Auth 1. Point the CDN origin at the mirror gateway (Compose host or Kubernetes LoadBalancer).
curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/index.json | jq 'keys' 2. Honour the response headers emitted by the gateway and Concelier/Excititor:
`Cache-Control: public, max-age=300, immutable` for mirror payloads.
# Mirror manifest signature 3. Configure origin shields in the CDN to prevent cache stampedes. Recommended TTLs:
curl -u $PRIMARY_CREDS -I https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/manifest.json - Index (`/concelier/exports/index.json`, `/excititor/mirror/*/index`) → 60s.
- Bundle/manifest payloads → 300s.
# Excititor consensus bundle metadata 4. Forward the `Authorization` header—Basic Auth terminates at the gateway.
curl -u $COMMUNITY_CREDS https://mirror-community.stella-ops.org/excititor/mirror/community/index \ 5. Enforce per-domain rate limits at the CDN (matching gateway budgets) and enable logging
| jq '.exports[].exportKey' to SIEM for anomaly detection.
# Signed bundle + detached JWS (spot check digests) ## 6. Smoke tests
curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/bundle.json.jws \
-o bundle.json.jws After each deployment or sync cycle (temporarily set low budgets if you need to observe 429 responses):
cosign verify-blob --signature bundle.json.jws --key mirror-key.pub bundle.json
``` ```bash
# Index with Basic Auth
Watch the gateway metrics (`nginx_vts` or access logs) for cache hits. In Kubernetes, `kubectl logs deploy/stellaops-mirror-gateway` curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/index.json | jq 'keys'
should show `X-Cache-Status: HIT/MISS`.
# Mirror manifest signature and cache headers
## 7. Maintenance & rotation curl -u $PRIMARY_CREDS -I https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/manifest.json \
| tee /tmp/manifest-headers.txt
- **Bundle freshness** alert if sync job lag exceeds 15minutes or if `concelier` logs grep -E '^Cache-Control: ' /tmp/manifest-headers.txt # expect public, max-age=300, immutable
`Mirror export root is not configured`.
- **Secret rotation** change Authority client secrets and Basic Auth credentials quarterly. # Excititor consensus bundle metadata
Update the mounted secrets and restart deployments (`docker compose restart concelier` or curl -u $COMMUNITY_CREDS https://mirror-community.stella-ops.org/excititor/mirror/community/index \
`kubectl rollout restart deploy/stellaops-concelier`). | jq '.exports[].exportKey'
- **TLS renewal** reissue certificates, place new files, and reload gateway (`docker compose exec mirror-gateway nginx -s reload`).
- **Quota tuning** adjust per-domain `MAXDOWNLOADREQUESTSPERHOUR` in `.env` or values file. # Signed bundle + detached JWS (spot check digests)
Align CDN rate limits and inform downstreams. curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/bundle.json.jws \
-o bundle.json.jws
## 8. References cosign verify-blob --signature bundle.json.jws --key mirror-key.pub bundle.json
- Deployment profiles: `deploy/compose/docker-compose.mirror.yaml`, # Service-level auth check (inside cluster no gateway credentials)
`deploy/helm/stellaops/values-mirror.yaml` kubectl exec deploy/stellaops-concelier -- curl -si http://localhost:8443/concelier/exports/mirror/primary/manifest.json \
- Mirror architecture dossiers: `docs/ARCHITECTURE_CONCELIER.md`, | head -n 5 # expect HTTP/1.1 401 with WWW-Authenticate: Bearer
`docs/ARCHITECTURE_EXCITITOR_MIRRORS.md`
- Export bundling: `docs/ARCHITECTURE_DEVOPS.md` §3, `docs/ARCHITECTURE_EXCITITOR.md` §7 # Rate limit smoke (repeat quickly; second call should return 429 + Retry-After)
for i in 1 2; do
curl -s -o /dev/null -D - https://mirror-primary.stella-ops.org/concelier/exports/index.json \
-u $PRIMARY_CREDS | grep -E '^(HTTP/|Retry-After:)'
sleep 1
done
```
Watch the gateway metrics (`nginx_vts` or access logs) for cache hits. In Kubernetes, `kubectl logs deploy/stellaops-mirror-gateway`
should show `X-Cache-Status: HIT/MISS`.
## 7. Maintenance & rotation
- **Bundle freshness** alert if sync job lag exceeds 15minutes or if `concelier` logs
`Mirror export root is not configured`.
- **Secret rotation** change Authority client secrets and Basic Auth credentials quarterly.
Update the mounted secrets and restart deployments (`docker compose restart concelier` or
`kubectl rollout restart deploy/stellaops-concelier`).
- **TLS renewal** reissue certificates, place new files, and reload gateway (`docker compose exec mirror-gateway nginx -s reload`).
- **Quota tuning** adjust per-domain `MAXDOWNLOADREQUESTSPERHOUR` in `.env` or values file.
Align CDN rate limits and inform downstreams.
## 8. References
- Deployment profiles: `deploy/compose/docker-compose.mirror.yaml`,
`deploy/helm/stellaops/values-mirror.yaml`
- Mirror architecture dossiers: `docs/ARCHITECTURE_CONCELIER.md`,
`docs/ARCHITECTURE_EXCITITOR_MIRRORS.md`
- Export bundling: `docs/ARCHITECTURE_DEVOPS.md` §3, `docs/ARCHITECTURE_EXCITITOR.md` §7

Binary file not shown.

View File

@@ -10,4 +10,5 @@
| DEVOPS-REL-14-001 | TODO | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. | | DEVOPS-REL-14-001 | TODO | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-17-002 | TODO | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. | | DEVOPS-REL-17-002 | TODO | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. | | DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
| DEVOPS-SEC-10-301 | DOING (2025-10-19) | DevOps Guild | Wave 0A complete | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0 surfaced during scanner cache and worker test runs. | Dependencies bumped to patched releases, audit logs free of NU1902/NU1903 warnings, regression tests green, change log documents upgrade guidance. | | DEVOPS-SEC-10-301 | DONE (2025-10-20) | DevOps Guild | Wave 0A complete | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0 surfaced during scanner cache and worker test runs. | Dependencies bumped to patched releases, audit logs free of NU1902/NU1903 warnings, regression tests green, change log documents upgrade guidance. |
> Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed.

View File

@@ -1,95 +1,143 @@
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using System.Security.Cryptography; using System.Security.Cryptography;
using Microsoft.Extensions.Logging.Abstractions; using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security; using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Cryptography; using StellaOps.Cryptography;
using Xunit; using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests; namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
public sealed class MirrorSignatureVerifierTests public sealed class MirrorSignatureVerifierTests
{ {
[Fact] [Fact]
public async Task VerifyAsync_ValidSignaturePasses() public async Task VerifyAsync_ValidSignaturePasses()
{ {
var provider = new DefaultCryptoProvider(); var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key"); var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key); provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider }); var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance); var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payload = "{\"advisories\":[]}\"u8".ToUtf8Bytes(); var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload); var payload = payloadText.ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
await verifier.VerifyAsync(payload, signature, CancellationToken.None);
} await verifier.VerifyAsync(payload, signature, CancellationToken.None);
}
[Fact]
public async Task VerifyAsync_InvalidSignatureThrows() [Fact]
{ public async Task VerifyAsync_InvalidSignatureThrows()
var provider = new DefaultCryptoProvider(); {
var key = CreateSigningKey("mirror-key"); var provider = new DefaultCryptoProvider();
provider.UpsertSigningKey(key); var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance); var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payload = "{\"advisories\":[]}\"u8".ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload); var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var payload = payloadText.ToUtf8Bytes();
var tampered = signature.Replace("a", "b", StringComparison.Ordinal); var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(payload, tampered, CancellationToken.None)); var tampered = signature.Replace('a', 'b', StringComparison.Ordinal);
}
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(payload, tampered, CancellationToken.None));
private static CryptoSigningKey CreateSigningKey(string keyId) }
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256); [Fact]
var parameters = ecdsa.ExportParameters(includePrivateParameters: true); public async Task VerifyAsync_KeyMismatchThrows()
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow); {
} var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
private static async Task<(string Signature, DateTimeOffset SignedAt)> CreateDetachedJwsAsync( provider.UpsertSigningKey(key);
DefaultCryptoProvider provider,
string keyId, var registry = new CryptoProviderRegistry(new[] { provider });
ReadOnlyMemory<byte> payload) var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
{
var signer = provider.GetSigner(SignatureAlgorithms.Es256, new CryptoKeyReference(keyId)); var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var header = new Dictionary<string, object?> var payload = payloadText.ToUtf8Bytes();
{ var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
["alg"] = SignatureAlgorithms.Es256,
["kid"] = keyId, await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(
["provider"] = provider.Name, payload,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws", signature,
["b64"] = false, expectedKeyId: "unexpected-key",
["crit"] = new[] { "b64" } expectedProvider: null,
}; cancellationToken: CancellationToken.None));
}
var headerJson = System.Text.Json.JsonSerializer.Serialize(header);
var protectedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson); [Fact]
public async Task VerifyAsync_ThrowsWhenProviderMissingKey()
var signingInput = BuildSigningInput(protectedHeader, payload.Span); {
var signatureBytes = await signer.SignAsync(signingInput, CancellationToken.None).ConfigureAwait(false); var provider = new DefaultCryptoProvider();
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes); var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
return (string.Concat(protectedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
} var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{ var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var headerBytes = System.Text.Encoding.ASCII.GetBytes(encodedHeader); var payload = payloadText.ToUtf8Bytes();
var buffer = new byte[headerBytes.Length + 1 + payload.Length]; var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.'; provider.RemoveSigningKey(key.Reference.KeyId);
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer; await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(
} payload,
} signature,
expectedKeyId: key.Reference.KeyId,
file static class Utf8Extensions expectedProvider: provider.Name,
{ cancellationToken: CancellationToken.None));
public static ReadOnlyMemory<byte> ToUtf8Bytes(this string value) }
=> System.Text.Encoding.UTF8.GetBytes(value);
} private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static async Task<(string Signature, DateTimeOffset SignedAt)> CreateDetachedJwsAsync(
DefaultCryptoProvider provider,
string keyId,
ReadOnlyMemory<byte> payload)
{
var signer = provider.GetSigner(SignatureAlgorithms.Es256, new CryptoKeyReference(keyId));
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = keyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = System.Text.Json.JsonSerializer.Serialize(header);
var protectedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var signingInput = BuildSigningInput(protectedHeader, payload.Span);
var signatureBytes = await signer.SignAsync(signingInput, CancellationToken.None).ConfigureAwait(false);
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(protectedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = System.Text.Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}
file static class Utf8Extensions
{
public static ReadOnlyMemory<byte> ToUtf8Bytes(this string value)
=> System.Text.Encoding.UTF8.GetBytes(value);
}

View File

@@ -1,319 +1,359 @@
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using System.Net; using System.Net;
using System.Net.Http; using System.Net.Http;
using System.Security.Cryptography; using System.Security.Cryptography;
using System.Text; using System.Text;
using System.Text.Json; using System.Text.Json;
using Microsoft.Extensions.Configuration; using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http; using Microsoft.Extensions.Http;
using Microsoft.Extensions.Logging; using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions; using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Options; using Microsoft.Extensions.Options;
using MongoDB.Bson; using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common; using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.Common.Testing; using StellaOps.Concelier.Connector.Common.Fetch;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings; using StellaOps.Concelier.Connector.Common.Testing;
using StellaOps.Concelier.Storage.Mongo; using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo.Documents; using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.SourceState; using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Concelier.Testing; using StellaOps.Concelier.Testing;
using StellaOps.Cryptography; using StellaOps.Cryptography;
using Xunit; using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests; namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
[Collection("mongo-fixture")] [Collection("mongo-fixture")]
public sealed class StellaOpsMirrorConnectorTests : IAsyncLifetime public sealed class StellaOpsMirrorConnectorTests : IAsyncLifetime
{ {
private readonly MongoIntegrationFixture _fixture; private readonly MongoIntegrationFixture _fixture;
private readonly CannedHttpMessageHandler _handler; private readonly CannedHttpMessageHandler _handler;
public StellaOpsMirrorConnectorTests(MongoIntegrationFixture fixture) public StellaOpsMirrorConnectorTests(MongoIntegrationFixture fixture)
{ {
_fixture = fixture; _fixture = fixture;
_handler = new CannedHttpMessageHandler(); _handler = new CannedHttpMessageHandler();
} }
[Fact] [Fact]
public async Task FetchAsync_PersistsMirrorArtifacts() public async Task FetchAsync_PersistsMirrorArtifacts()
{ {
var manifestContent = "{\"domain\":\"primary\",\"files\":[]}"; var manifestContent = "{\"domain\":\"primary\",\"files\":[]}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0001\"}]}"; var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0001\"}]}";
var manifestDigest = ComputeDigest(manifestContent); var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent); var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: false); var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: false);
await using var provider = await BuildServiceProviderAsync(); await using var provider = await BuildServiceProviderAsync();
SeedResponses(index, manifestContent, bundleContent, signature: null); SeedResponses(index, manifestContent, bundleContent, signature: null);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>(); var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await connector.FetchAsync(provider, CancellationToken.None); await connector.FetchAsync(provider, CancellationToken.None);
var documentStore = provider.GetRequiredService<IDocumentStore>(); var documentStore = provider.GetRequiredService<IDocumentStore>();
var manifestUri = "https://mirror.test/mirror/primary/manifest.json"; var manifestUri = "https://mirror.test/mirror/primary/manifest.json";
var bundleUri = "https://mirror.test/mirror/primary/bundle.json"; var bundleUri = "https://mirror.test/mirror/primary/bundle.json";
var manifestDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, manifestUri, CancellationToken.None); var manifestDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, manifestUri, CancellationToken.None);
Assert.NotNull(manifestDocument); Assert.NotNull(manifestDocument);
Assert.Equal(DocumentStatuses.Mapped, manifestDocument!.Status); Assert.Equal(DocumentStatuses.Mapped, manifestDocument!.Status);
Assert.Equal(NormalizeDigest(manifestDigest), manifestDocument.Sha256); Assert.Equal(NormalizeDigest(manifestDigest), manifestDocument.Sha256);
var bundleDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, bundleUri, CancellationToken.None); var bundleDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, bundleUri, CancellationToken.None);
Assert.NotNull(bundleDocument); Assert.NotNull(bundleDocument);
Assert.Equal(DocumentStatuses.PendingParse, bundleDocument!.Status); Assert.Equal(DocumentStatuses.PendingParse, bundleDocument!.Status);
Assert.Equal(NormalizeDigest(bundleDigest), bundleDocument.Sha256); Assert.Equal(NormalizeDigest(bundleDigest), bundleDocument.Sha256);
var rawStorage = provider.GetRequiredService<RawDocumentStorage>(); var rawStorage = provider.GetRequiredService<RawDocumentStorage>();
Assert.NotNull(manifestDocument.GridFsId); Assert.NotNull(manifestDocument.GridFsId);
Assert.NotNull(bundleDocument.GridFsId); Assert.NotNull(bundleDocument.GridFsId);
var manifestBytes = await rawStorage.DownloadAsync(manifestDocument.GridFsId!.Value, CancellationToken.None); var manifestBytes = await rawStorage.DownloadAsync(manifestDocument.GridFsId!.Value, CancellationToken.None);
var bundleBytes = await rawStorage.DownloadAsync(bundleDocument.GridFsId!.Value, CancellationToken.None); var bundleBytes = await rawStorage.DownloadAsync(bundleDocument.GridFsId!.Value, CancellationToken.None);
Assert.Equal(manifestContent, Encoding.UTF8.GetString(manifestBytes)); Assert.Equal(manifestContent, Encoding.UTF8.GetString(manifestBytes));
Assert.Equal(bundleContent, Encoding.UTF8.GetString(bundleBytes)); Assert.Equal(bundleContent, Encoding.UTF8.GetString(bundleBytes));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>(); var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None); var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state); Assert.NotNull(state);
var cursorDocument = state!.Cursor ?? new BsonDocument(); var cursorDocument = state!.Cursor ?? new BsonDocument();
var digestValue = cursorDocument.TryGetValue("bundleDigest", out var digestBson) ? digestBson.AsString : string.Empty; var digestValue = cursorDocument.TryGetValue("bundleDigest", out var digestBson) ? digestBson.AsString : string.Empty;
Assert.Equal(NormalizeDigest(bundleDigest), NormalizeDigest(digestValue)); Assert.Equal(NormalizeDigest(bundleDigest), NormalizeDigest(digestValue));
var pendingDocumentsArray = cursorDocument.TryGetValue("pendingDocuments", out var pendingDocsBson) && pendingDocsBson is BsonArray pendingArray var pendingDocumentsArray = cursorDocument.TryGetValue("pendingDocuments", out var pendingDocsBson) && pendingDocsBson is BsonArray pendingArray
? pendingArray ? pendingArray
: new BsonArray(); : new BsonArray();
Assert.Single(pendingDocumentsArray); Assert.Single(pendingDocumentsArray);
var pendingDocumentId = Guid.Parse(pendingDocumentsArray[0].AsString); var pendingDocumentId = Guid.Parse(pendingDocumentsArray[0].AsString);
Assert.Equal(bundleDocument.Id, pendingDocumentId); Assert.Equal(bundleDocument.Id, pendingDocumentId);
var pendingMappingsArray = cursorDocument.TryGetValue("pendingMappings", out var pendingMappingsBson) && pendingMappingsBson is BsonArray mappingsArray var pendingMappingsArray = cursorDocument.TryGetValue("pendingMappings", out var pendingMappingsBson) && pendingMappingsBson is BsonArray mappingsArray
? mappingsArray ? mappingsArray
: new BsonArray(); : new BsonArray();
Assert.Empty(pendingMappingsArray); Assert.Empty(pendingMappingsArray);
} }
[Fact] [Fact]
public async Task FetchAsync_TamperedSignatureThrows() public async Task FetchAsync_TamperedSignatureThrows()
{ {
var manifestContent = "{\"domain\":\"primary\"}"; var manifestContent = "{\"domain\":\"primary\"}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0002\"}]}"; var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0002\"}]}";
var manifestDigest = ComputeDigest(manifestContent); var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent); var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: true); var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: true);
await using var provider = await BuildServiceProviderAsync(options => await using var provider = await BuildServiceProviderAsync(options =>
{ {
options.Signature.Enabled = true; options.Signature.Enabled = true;
options.Signature.KeyId = "mirror-key"; options.Signature.KeyId = "mirror-key";
options.Signature.Provider = "default"; options.Signature.Provider = "default";
}); });
var defaultProvider = provider.GetRequiredService<DefaultCryptoProvider>(); var defaultProvider = provider.GetRequiredService<DefaultCryptoProvider>();
var signingKey = CreateSigningKey("mirror-key"); var signingKey = CreateSigningKey("mirror-key");
defaultProvider.UpsertSigningKey(signingKey); defaultProvider.UpsertSigningKey(signingKey);
var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent); var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent);
// Tamper with signature so verification fails. // Tamper with signature so verification fails.
var tamperedSignature = signatureValue.Replace('a', 'b'); var tamperedSignature = signatureValue.Replace('a', 'b');
SeedResponses(index, manifestContent, bundleContent, tamperedSignature); SeedResponses(index, manifestContent, bundleContent, tamperedSignature);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>(); var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None)); await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>(); var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None); var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state); Assert.NotNull(state);
Assert.True(state!.FailCount >= 1); Assert.True(state!.FailCount >= 1);
Assert.False(state.Cursor.TryGetValue("bundleDigest", out _)); Assert.False(state.Cursor.TryGetValue("bundleDigest", out _));
} }
public Task InitializeAsync() => Task.CompletedTask; [Fact]
public async Task FetchAsync_SignatureKeyMismatchThrows()
public Task DisposeAsync() {
{ var manifestContent = "{\"domain\":\"primary\"}";
_handler.Clear(); var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0003\"}]}";
return Task.CompletedTask;
} var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
private async Task<ServiceProvider> BuildServiceProviderAsync(Action<StellaOpsMirrorConnectorOptions>? configureOptions = null) var index = BuildIndex(
{ manifestDigest,
await _fixture.Client.DropDatabaseAsync(_fixture.Database.DatabaseNamespace.DatabaseName); Encoding.UTF8.GetByteCount(manifestContent),
_handler.Clear(); bundleDigest,
Encoding.UTF8.GetByteCount(bundleContent),
var services = new ServiceCollection(); includeSignature: true,
services.AddLogging(builder => builder.AddProvider(NullLoggerProvider.Instance)); signatureKeyId: "unexpected-key",
services.AddSingleton(_handler); signatureProvider: "default");
services.AddSingleton(TimeProvider.System);
var signingKey = CreateSigningKey("unexpected-key");
services.AddMongoStorage(options => var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent);
{
options.ConnectionString = _fixture.Runner.ConnectionString; await using var provider = await BuildServiceProviderAsync(options =>
options.DatabaseName = _fixture.Database.DatabaseNamespace.DatabaseName; {
options.CommandTimeout = TimeSpan.FromSeconds(5); options.Signature.Enabled = true;
}); options.Signature.KeyId = "mirror-key";
options.Signature.Provider = "default";
services.AddSingleton<DefaultCryptoProvider>(); });
services.AddSingleton<ICryptoProvider>(sp => sp.GetRequiredService<DefaultCryptoProvider>());
services.AddSingleton<ICryptoProviderRegistry>(sp => new CryptoProviderRegistry(sp.GetServices<ICryptoProvider>())); SeedResponses(index, manifestContent, bundleContent, signatureValue);
var configuration = new ConfigurationBuilder() var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
.AddInMemoryCollection(new Dictionary<string, string?> await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None));
{ }
["concelier:sources:stellaopsMirror:baseAddress"] = "https://mirror.test/",
["concelier:sources:stellaopsMirror:domainId"] = "primary", public Task InitializeAsync() => Task.CompletedTask;
["concelier:sources:stellaopsMirror:indexPath"] = "/concelier/exports/index.json",
}) public Task DisposeAsync()
.Build(); {
_handler.Clear();
var routine = new StellaOpsMirrorDependencyInjectionRoutine(); return Task.CompletedTask;
routine.Register(services, configuration); }
if (configureOptions is not null) private async Task<ServiceProvider> BuildServiceProviderAsync(Action<StellaOpsMirrorConnectorOptions>? configureOptions = null)
{ {
services.PostConfigure(configureOptions); await _fixture.Client.DropDatabaseAsync(_fixture.Database.DatabaseNamespace.DatabaseName);
} _handler.Clear();
services.Configure<HttpClientFactoryOptions>("stellaops-mirror", builder => var services = new ServiceCollection();
{ services.AddLogging(builder => builder.AddProvider(NullLoggerProvider.Instance));
builder.HttpMessageHandlerBuilderActions.Add(options => services.AddSingleton(_handler);
{ services.AddSingleton(TimeProvider.System);
options.PrimaryHandler = _handler;
}); services.AddMongoStorage(options =>
}); {
options.ConnectionString = _fixture.Runner.ConnectionString;
var provider = services.BuildServiceProvider(); options.DatabaseName = _fixture.Database.DatabaseNamespace.DatabaseName;
var bootstrapper = provider.GetRequiredService<MongoBootstrapper>(); options.CommandTimeout = TimeSpan.FromSeconds(5);
await bootstrapper.InitializeAsync(CancellationToken.None); });
return provider;
} services.AddSingleton<DefaultCryptoProvider>();
services.AddSingleton<ICryptoProvider>(sp => sp.GetRequiredService<DefaultCryptoProvider>());
private void SeedResponses(string indexJson, string manifestContent, string bundleContent, string? signature) services.AddSingleton<ICryptoProviderRegistry>(sp => new CryptoProviderRegistry(sp.GetServices<ICryptoProvider>()));
{
var baseUri = new Uri("https://mirror.test"); var configuration = new ConfigurationBuilder()
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "/concelier/exports/index.json"), () => CreateJsonResponse(indexJson)); .AddInMemoryCollection(new Dictionary<string, string?>
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/manifest.json"), () => CreateJsonResponse(manifestContent)); {
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json"), () => CreateJsonResponse(bundleContent)); ["concelier:sources:stellaopsMirror:baseAddress"] = "https://mirror.test/",
["concelier:sources:stellaopsMirror:domainId"] = "primary",
if (signature is not null) ["concelier:sources:stellaopsMirror:indexPath"] = "/concelier/exports/index.json",
{ })
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json.jws"), () => new HttpResponseMessage(HttpStatusCode.OK) .Build();
{
Content = new StringContent(signature, Encoding.UTF8, "application/jose+json"), var routine = new StellaOpsMirrorDependencyInjectionRoutine();
}); routine.Register(services, configuration);
}
} if (configureOptions is not null)
{
private static HttpResponseMessage CreateJsonResponse(string content) services.PostConfigure(configureOptions);
=> new(HttpStatusCode.OK) }
{
Content = new StringContent(content, Encoding.UTF8, "application/json"), services.Configure<HttpClientFactoryOptions>("stellaops-mirror", builder =>
}; {
builder.HttpMessageHandlerBuilderActions.Add(options =>
private static string BuildIndex(string manifestDigest, int manifestBytes, string bundleDigest, int bundleBytes, bool includeSignature) {
{ options.PrimaryHandler = _handler;
var index = new });
{ });
schemaVersion = 1,
generatedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero), var provider = services.BuildServiceProvider();
targetRepository = "repo", var bootstrapper = provider.GetRequiredService<MongoBootstrapper>();
domains = new[] await bootstrapper.InitializeAsync(CancellationToken.None);
{ return provider;
new }
{
domainId = "primary", private void SeedResponses(string indexJson, string manifestContent, string bundleContent, string? signature)
displayName = "Primary", {
advisoryCount = 1, var baseUri = new Uri("https://mirror.test");
manifest = new _handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "/concelier/exports/index.json"), () => CreateJsonResponse(indexJson));
{ _handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/manifest.json"), () => CreateJsonResponse(manifestContent));
path = "mirror/primary/manifest.json", _handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json"), () => CreateJsonResponse(bundleContent));
sizeBytes = manifestBytes,
digest = manifestDigest, if (signature is not null)
signature = (object?)null, {
}, _handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json.jws"), () => new HttpResponseMessage(HttpStatusCode.OK)
bundle = new {
{ Content = new StringContent(signature, Encoding.UTF8, "application/jose+json"),
path = "mirror/primary/bundle.json", });
sizeBytes = bundleBytes, }
digest = bundleDigest, }
signature = includeSignature
? new private static HttpResponseMessage CreateJsonResponse(string content)
{ => new(HttpStatusCode.OK)
path = "mirror/primary/bundle.json.jws", {
algorithm = "ES256", Content = new StringContent(content, Encoding.UTF8, "application/json"),
keyId = "mirror-key", };
provider = "default",
signedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero), private static string BuildIndex(
} string manifestDigest,
: null, int manifestBytes,
}, string bundleDigest,
sources = Array.Empty<object>(), int bundleBytes,
} bool includeSignature,
} string signatureKeyId = "mirror-key",
}; string signatureProvider = "default")
{
return JsonSerializer.Serialize(index, new JsonSerializerOptions var index = new
{ {
PropertyNamingPolicy = JsonNamingPolicy.CamelCase, schemaVersion = 1,
WriteIndented = false, generatedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
}); targetRepository = "repo",
} domains = new[]
{
private static string ComputeDigest(string content) new
{ {
var bytes = Encoding.UTF8.GetBytes(content); domainId = "primary",
var hash = SHA256.HashData(bytes); displayName = "Primary",
return "sha256:" + Convert.ToHexString(hash).ToLowerInvariant(); advisoryCount = 1,
} manifest = new
{
private static string NormalizeDigest(string digest) path = "mirror/primary/manifest.json",
=> digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase) ? digest[7..] : digest; sizeBytes = manifestBytes,
digest = manifestDigest,
private static CryptoSigningKey CreateSigningKey(string keyId) signature = (object?)null,
{ },
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256); bundle = new
var parameters = ecdsa.ExportParameters(includePrivateParameters: true); {
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow); path = "mirror/primary/bundle.json",
} sizeBytes = bundleBytes,
digest = bundleDigest,
private static (string Signature, DateTimeOffset SignedAt) CreateDetachedJws(CryptoSigningKey signingKey, string payload) signature = includeSignature
{ ? new
using var provider = new DefaultCryptoProvider(); {
provider.UpsertSigningKey(signingKey); path = "mirror/primary/bundle.json.jws",
var signer = provider.GetSigner(SignatureAlgorithms.Es256, signingKey.Reference); algorithm = "ES256",
var header = new Dictionary<string, object?> keyId = signatureKeyId,
{ provider = signatureProvider,
["alg"] = SignatureAlgorithms.Es256, signedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
["kid"] = signingKey.Reference.KeyId, }
["provider"] = provider.Name, : null,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws", },
["b64"] = false, sources = Array.Empty<object>(),
["crit"] = new[] { "b64" } }
}; }
};
var headerJson = JsonSerializer.Serialize(header);
var encodedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson); return JsonSerializer.Serialize(index, new JsonSerializerOptions
var payloadBytes = Encoding.UTF8.GetBytes(payload); {
var signingInput = BuildSigningInput(encodedHeader, payloadBytes); PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
var signatureBytes = signer.SignAsync(signingInput, CancellationToken.None).GetAwaiter().GetResult(); WriteIndented = false,
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes); });
return (string.Concat(encodedHeader, "..", encodedSignature), DateTimeOffset.UtcNow); }
}
private static string ComputeDigest(string content)
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload) {
{ var bytes = Encoding.UTF8.GetBytes(content);
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader); var hash = SHA256.HashData(bytes);
var buffer = new byte[headerBytes.Length + 1 + payload.Length]; return "sha256:" + Convert.ToHexString(hash).ToLowerInvariant();
headerBytes.CopyTo(buffer, 0); }
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1)); private static string NormalizeDigest(string digest)
return buffer; => digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase) ? digest[7..] : digest;
}
} private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static (string Signature, DateTimeOffset SignedAt) CreateDetachedJws(CryptoSigningKey signingKey, string payload)
{
var provider = new DefaultCryptoProvider();
provider.UpsertSigningKey(signingKey);
var signer = provider.GetSigner(SignatureAlgorithms.Es256, signingKey.Reference);
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = signingKey.Reference.KeyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = JsonSerializer.Serialize(header);
var encodedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var payloadBytes = Encoding.UTF8.GetBytes(payload);
var signingInput = BuildSigningInput(encodedHeader, payloadBytes);
var signatureBytes = signer.SignAsync(signingInput, CancellationToken.None).GetAwaiter().GetResult();
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(encodedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer, 0);
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}

View File

@@ -1,121 +1,150 @@
using System; using System;
using System.Text; using System.Text;
using System.Text.Json; using System.Text.Json;
using System.Text.Json.Serialization; using System.Text.Json.Serialization;
using Microsoft.Extensions.Logging; using Microsoft.Extensions.Logging;
using Microsoft.IdentityModel.Tokens; using Microsoft.IdentityModel.Tokens;
using StellaOps.Cryptography; using StellaOps.Cryptography;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Security; namespace StellaOps.Concelier.Connector.StellaOpsMirror.Security;
/// <summary> /// <summary>
/// Validates detached JWS signatures emitted by mirror bundles. /// Validates detached JWS signatures emitted by mirror bundles.
/// </summary> /// </summary>
public sealed class MirrorSignatureVerifier public sealed class MirrorSignatureVerifier
{ {
private static readonly JsonSerializerOptions HeaderSerializerOptions = new(JsonSerializerDefaults.Web) private static readonly JsonSerializerOptions HeaderSerializerOptions = new(JsonSerializerDefaults.Web)
{ {
PropertyNameCaseInsensitive = true PropertyNameCaseInsensitive = true
}; };
private readonly ICryptoProviderRegistry _providerRegistry; private readonly ICryptoProviderRegistry _providerRegistry;
private readonly ILogger<MirrorSignatureVerifier> _logger; private readonly ILogger<MirrorSignatureVerifier> _logger;
public MirrorSignatureVerifier(ICryptoProviderRegistry providerRegistry, ILogger<MirrorSignatureVerifier> logger) public MirrorSignatureVerifier(ICryptoProviderRegistry providerRegistry, ILogger<MirrorSignatureVerifier> logger)
{ {
_providerRegistry = providerRegistry ?? throw new ArgumentNullException(nameof(providerRegistry)); _providerRegistry = providerRegistry ?? throw new ArgumentNullException(nameof(providerRegistry));
_logger = logger ?? throw new ArgumentNullException(nameof(logger)); _logger = logger ?? throw new ArgumentNullException(nameof(logger));
} }
public async Task VerifyAsync(ReadOnlyMemory<byte> payload, string signatureValue, CancellationToken cancellationToken) public Task VerifyAsync(ReadOnlyMemory<byte> payload, string signatureValue, CancellationToken cancellationToken)
{ => VerifyAsync(payload, signatureValue, expectedKeyId: null, expectedProvider: null, cancellationToken);
if (payload.IsEmpty)
{ public async Task VerifyAsync(
throw new ArgumentException("Payload must not be empty.", nameof(payload)); ReadOnlyMemory<byte> payload,
} string signatureValue,
string? expectedKeyId,
if (string.IsNullOrWhiteSpace(signatureValue)) string? expectedProvider,
{ CancellationToken cancellationToken)
throw new ArgumentException("Signature value must be provided.", nameof(signatureValue)); {
} if (payload.IsEmpty)
{
if (!TryParseDetachedJws(signatureValue, out var encodedHeader, out var encodedSignature)) throw new ArgumentException("Payload must not be empty.", nameof(payload));
{ }
throw new InvalidOperationException("Detached JWS signature is malformed.");
} if (string.IsNullOrWhiteSpace(signatureValue))
{
var headerJson = Encoding.UTF8.GetString(Base64UrlEncoder.DecodeBytes(encodedHeader)); throw new ArgumentException("Signature value must be provided.", nameof(signatureValue));
var header = JsonSerializer.Deserialize<MirrorSignatureHeader>(headerJson, HeaderSerializerOptions) }
?? throw new InvalidOperationException("Detached JWS header could not be parsed.");
if (!TryParseDetachedJws(signatureValue, out var encodedHeader, out var encodedSignature))
if (!header.Critical.Contains("b64", StringComparer.Ordinal)) {
{ throw new InvalidOperationException("Detached JWS signature is malformed.");
throw new InvalidOperationException("Detached JWS header is missing required 'b64' critical parameter."); }
}
var headerJson = Encoding.UTF8.GetString(Base64UrlEncoder.DecodeBytes(encodedHeader));
if (header.Base64Payload) var header = JsonSerializer.Deserialize<MirrorSignatureHeader>(headerJson, HeaderSerializerOptions)
{ ?? throw new InvalidOperationException("Detached JWS header could not be parsed.");
throw new InvalidOperationException("Detached JWS header sets b64=true; expected unencoded payload.");
} if (!header.Critical.Contains("b64", StringComparer.Ordinal))
{
if (string.IsNullOrWhiteSpace(header.KeyId)) throw new InvalidOperationException("Detached JWS header is missing required 'b64' critical parameter.");
{ }
throw new InvalidOperationException("Detached JWS header missing key identifier.");
} if (header.Base64Payload)
{
if (string.IsNullOrWhiteSpace(header.Algorithm)) throw new InvalidOperationException("Detached JWS header sets b64=true; expected unencoded payload.");
{ }
throw new InvalidOperationException("Detached JWS header missing algorithm identifier.");
} if (string.IsNullOrWhiteSpace(header.KeyId))
{
var signingInput = BuildSigningInput(encodedHeader, payload.Span); throw new InvalidOperationException("Detached JWS header missing key identifier.");
var signatureBytes = Base64UrlEncoder.DecodeBytes(encodedSignature); }
var keyReference = new CryptoKeyReference(header.KeyId, header.Provider); if (string.IsNullOrWhiteSpace(header.Algorithm))
var resolution = _providerRegistry.ResolveSigner( {
CryptoCapability.Verification, throw new InvalidOperationException("Detached JWS header missing algorithm identifier.");
header.Algorithm, }
keyReference,
header.Provider); if (!string.IsNullOrWhiteSpace(expectedKeyId) &&
!string.Equals(header.KeyId, expectedKeyId, StringComparison.OrdinalIgnoreCase))
var verified = await resolution.Signer.VerifyAsync(signingInput, signatureBytes, cancellationToken).ConfigureAwait(false); {
if (!verified) throw new InvalidOperationException($"Mirror bundle signature key '{header.KeyId}' did not match expected key '{expectedKeyId}'.");
{ }
_logger.LogWarning("Detached JWS verification failed for key {KeyId} via provider {Provider}.", header.KeyId, resolution.ProviderName);
throw new InvalidOperationException("Detached JWS signature verification failed."); if (!string.IsNullOrWhiteSpace(expectedProvider) &&
} !string.Equals(header.Provider, expectedProvider, StringComparison.OrdinalIgnoreCase))
} {
throw new InvalidOperationException($"Mirror bundle signature provider '{header.Provider ?? "<null>"}' did not match expected provider '{expectedProvider}'.");
private static bool TryParseDetachedJws(string value, out string encodedHeader, out string encodedSignature) }
{
var parts = value.Split("..", StringSplitOptions.None); var signingInput = BuildSigningInput(encodedHeader, payload.Span);
if (parts.Length != 2 || string.IsNullOrEmpty(parts[0]) || string.IsNullOrEmpty(parts[1])) var signatureBytes = Base64UrlEncoder.DecodeBytes(encodedSignature);
{
encodedHeader = string.Empty; var keyReference = new CryptoKeyReference(header.KeyId, header.Provider);
encodedSignature = string.Empty; CryptoSignerResolution resolution;
return false; try
} {
resolution = _providerRegistry.ResolveSigner(
encodedHeader = parts[0]; CryptoCapability.Verification,
encodedSignature = parts[1]; header.Algorithm,
return true; keyReference,
} header.Provider);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload) catch (Exception ex) when (ex is InvalidOperationException or KeyNotFoundException)
{ {
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader); _logger.LogWarning(ex, "Unable to resolve signer for mirror signature key {KeyId} via provider {Provider}.", header.KeyId, header.Provider ?? "<null>");
var buffer = new byte[headerBytes.Length + 1 + payload.Length]; throw new InvalidOperationException("Detached JWS signature verification failed.", ex);
headerBytes.CopyTo(buffer.AsSpan()); }
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1)); var verified = await resolution.Signer.VerifyAsync(signingInput, signatureBytes, cancellationToken).ConfigureAwait(false);
return buffer; if (!verified)
} {
_logger.LogWarning("Detached JWS verification failed for key {KeyId} via provider {Provider}.", header.KeyId, resolution.ProviderName);
private sealed record MirrorSignatureHeader( throw new InvalidOperationException("Detached JWS signature verification failed.");
[property: JsonPropertyName("alg")] string Algorithm, }
[property: JsonPropertyName("kid")] string KeyId, }
[property: JsonPropertyName("provider")] string? Provider,
[property: JsonPropertyName("typ")] string? Type, private static bool TryParseDetachedJws(string value, out string encodedHeader, out string encodedSignature)
[property: JsonPropertyName("b64")] bool Base64Payload, {
[property: JsonPropertyName("crit")] string[] Critical); var parts = value.Split("..", StringSplitOptions.None);
} if (parts.Length != 2 || string.IsNullOrEmpty(parts[0]) || string.IsNullOrEmpty(parts[1]))
{
encodedHeader = string.Empty;
encodedSignature = string.Empty;
return false;
}
encodedHeader = parts[0];
encodedSignature = parts[1];
return true;
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
private sealed record MirrorSignatureHeader(
[property: JsonPropertyName("alg")] string Algorithm,
[property: JsonPropertyName("kid")] string KeyId,
[property: JsonPropertyName("provider")] string? Provider,
[property: JsonPropertyName("typ")] string? Type,
[property: JsonPropertyName("b64")] bool Base64Payload,
[property: JsonPropertyName("crit")] string[] Critical);
}

View File

@@ -1,288 +1,309 @@
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using System.Linq; using System.Linq;
using System.Security.Cryptography; using System.Security.Cryptography;
using System.Text; using System.Text;
using Microsoft.Extensions.Logging; using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options; using Microsoft.Extensions.Options;
using MongoDB.Bson; using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common.Fetch; using StellaOps.Concelier.Connector.Common.Fetch;
using StellaOps.Concelier.Connector.Common; using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.StellaOpsMirror.Client; using StellaOps.Concelier.Connector.StellaOpsMirror.Client;
using StellaOps.Concelier.Connector.StellaOpsMirror.Internal; using StellaOps.Concelier.Connector.StellaOpsMirror.Internal;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security; using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings; using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo; using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Documents; using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Plugin; using StellaOps.Plugin;
namespace StellaOps.Concelier.Connector.StellaOpsMirror; namespace StellaOps.Concelier.Connector.StellaOpsMirror;
public sealed class StellaOpsMirrorConnector : IFeedConnector public sealed class StellaOpsMirrorConnector : IFeedConnector
{ {
public const string Source = "stellaops-mirror"; public const string Source = "stellaops-mirror";
private readonly MirrorManifestClient _client; private readonly MirrorManifestClient _client;
private readonly MirrorSignatureVerifier _signatureVerifier; private readonly MirrorSignatureVerifier _signatureVerifier;
private readonly RawDocumentStorage _rawDocumentStorage; private readonly RawDocumentStorage _rawDocumentStorage;
private readonly IDocumentStore _documentStore; private readonly IDocumentStore _documentStore;
private readonly ISourceStateRepository _stateRepository; private readonly ISourceStateRepository _stateRepository;
private readonly TimeProvider _timeProvider; private readonly TimeProvider _timeProvider;
private readonly ILogger<StellaOpsMirrorConnector> _logger; private readonly ILogger<StellaOpsMirrorConnector> _logger;
private readonly StellaOpsMirrorConnectorOptions _options; private readonly StellaOpsMirrorConnectorOptions _options;
public StellaOpsMirrorConnector( public StellaOpsMirrorConnector(
MirrorManifestClient client, MirrorManifestClient client,
MirrorSignatureVerifier signatureVerifier, MirrorSignatureVerifier signatureVerifier,
RawDocumentStorage rawDocumentStorage, RawDocumentStorage rawDocumentStorage,
IDocumentStore documentStore, IDocumentStore documentStore,
ISourceStateRepository stateRepository, ISourceStateRepository stateRepository,
IOptions<StellaOpsMirrorConnectorOptions> options, IOptions<StellaOpsMirrorConnectorOptions> options,
TimeProvider? timeProvider, TimeProvider? timeProvider,
ILogger<StellaOpsMirrorConnector> logger) ILogger<StellaOpsMirrorConnector> logger)
{ {
_client = client ?? throw new ArgumentNullException(nameof(client)); _client = client ?? throw new ArgumentNullException(nameof(client));
_signatureVerifier = signatureVerifier ?? throw new ArgumentNullException(nameof(signatureVerifier)); _signatureVerifier = signatureVerifier ?? throw new ArgumentNullException(nameof(signatureVerifier));
_rawDocumentStorage = rawDocumentStorage ?? throw new ArgumentNullException(nameof(rawDocumentStorage)); _rawDocumentStorage = rawDocumentStorage ?? throw new ArgumentNullException(nameof(rawDocumentStorage));
_documentStore = documentStore ?? throw new ArgumentNullException(nameof(documentStore)); _documentStore = documentStore ?? throw new ArgumentNullException(nameof(documentStore));
_stateRepository = stateRepository ?? throw new ArgumentNullException(nameof(stateRepository)); _stateRepository = stateRepository ?? throw new ArgumentNullException(nameof(stateRepository));
_logger = logger ?? throw new ArgumentNullException(nameof(logger)); _logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System; _timeProvider = timeProvider ?? TimeProvider.System;
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value ?? throw new ArgumentNullException(nameof(options)); _options = (options ?? throw new ArgumentNullException(nameof(options))).Value ?? throw new ArgumentNullException(nameof(options));
ValidateOptions(_options); ValidateOptions(_options);
} }
public string SourceName => Source; public string SourceName => Source;
public async Task FetchAsync(IServiceProvider services, CancellationToken cancellationToken) public async Task FetchAsync(IServiceProvider services, CancellationToken cancellationToken)
{ {
_ = services ?? throw new ArgumentNullException(nameof(services)); _ = services ?? throw new ArgumentNullException(nameof(services));
var now = _timeProvider.GetUtcNow(); var now = _timeProvider.GetUtcNow();
var cursor = await GetCursorAsync(cancellationToken).ConfigureAwait(false); var cursor = await GetCursorAsync(cancellationToken).ConfigureAwait(false);
var pendingDocuments = cursor.PendingDocuments.ToHashSet(); var pendingDocuments = cursor.PendingDocuments.ToHashSet();
var pendingMappings = cursor.PendingMappings.ToHashSet(); var pendingMappings = cursor.PendingMappings.ToHashSet();
MirrorIndexDocument index; MirrorIndexDocument index;
try try
{ {
index = await _client.GetIndexAsync(_options.IndexPath, cancellationToken).ConfigureAwait(false); index = await _client.GetIndexAsync(_options.IndexPath, cancellationToken).ConfigureAwait(false);
} }
catch (Exception ex) catch (Exception ex)
{ {
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(15), ex.Message, cancellationToken).ConfigureAwait(false); await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(15), ex.Message, cancellationToken).ConfigureAwait(false);
throw; throw;
} }
var domain = index.Domains.FirstOrDefault(entry => var domain = index.Domains.FirstOrDefault(entry =>
string.Equals(entry.DomainId, _options.DomainId, StringComparison.OrdinalIgnoreCase)); string.Equals(entry.DomainId, _options.DomainId, StringComparison.OrdinalIgnoreCase));
if (domain is null) if (domain is null)
{ {
var message = $"Mirror domain '{_options.DomainId}' not present in index."; var message = $"Mirror domain '{_options.DomainId}' not present in index.";
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(30), message, cancellationToken).ConfigureAwait(false); await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(30), message, cancellationToken).ConfigureAwait(false);
throw new InvalidOperationException(message); throw new InvalidOperationException(message);
} }
if (string.Equals(domain.Bundle.Digest, cursor.BundleDigest, StringComparison.OrdinalIgnoreCase)) if (string.Equals(domain.Bundle.Digest, cursor.BundleDigest, StringComparison.OrdinalIgnoreCase))
{ {
_logger.LogInformation("Mirror bundle digest {Digest} unchanged; skipping fetch.", domain.Bundle.Digest); _logger.LogInformation("Mirror bundle digest {Digest} unchanged; skipping fetch.", domain.Bundle.Digest);
return; return;
} }
try try
{ {
await ProcessDomainAsync(index, domain, pendingDocuments, cancellationToken).ConfigureAwait(false); await ProcessDomainAsync(index, domain, pendingDocuments, cancellationToken).ConfigureAwait(false);
} }
catch (Exception ex) catch (Exception ex)
{ {
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(10), ex.Message, cancellationToken).ConfigureAwait(false); await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(10), ex.Message, cancellationToken).ConfigureAwait(false);
throw; throw;
} }
var updatedCursor = cursor var updatedCursor = cursor
.WithPendingDocuments(pendingDocuments) .WithPendingDocuments(pendingDocuments)
.WithPendingMappings(pendingMappings) .WithPendingMappings(pendingMappings)
.WithBundleSnapshot(domain.Bundle.Path, domain.Bundle.Digest, index.GeneratedAt); .WithBundleSnapshot(domain.Bundle.Path, domain.Bundle.Digest, index.GeneratedAt);
await UpdateCursorAsync(updatedCursor, cancellationToken).ConfigureAwait(false); await UpdateCursorAsync(updatedCursor, cancellationToken).ConfigureAwait(false);
} }
public Task ParseAsync(IServiceProvider services, CancellationToken cancellationToken) public Task ParseAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask; => Task.CompletedTask;
public Task MapAsync(IServiceProvider services, CancellationToken cancellationToken) public Task MapAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask; => Task.CompletedTask;
private async Task ProcessDomainAsync( private async Task ProcessDomainAsync(
MirrorIndexDocument index, MirrorIndexDocument index,
MirrorIndexDomainEntry domain, MirrorIndexDomainEntry domain,
HashSet<Guid> pendingDocuments, HashSet<Guid> pendingDocuments,
CancellationToken cancellationToken) CancellationToken cancellationToken)
{ {
var manifestBytes = await _client.DownloadAsync(domain.Manifest.Path, cancellationToken).ConfigureAwait(false); var manifestBytes = await _client.DownloadAsync(domain.Manifest.Path, cancellationToken).ConfigureAwait(false);
var bundleBytes = await _client.DownloadAsync(domain.Bundle.Path, cancellationToken).ConfigureAwait(false); var bundleBytes = await _client.DownloadAsync(domain.Bundle.Path, cancellationToken).ConfigureAwait(false);
VerifyDigest(domain.Manifest.Digest, manifestBytes, domain.Manifest.Path); VerifyDigest(domain.Manifest.Digest, manifestBytes, domain.Manifest.Path);
VerifyDigest(domain.Bundle.Digest, bundleBytes, domain.Bundle.Path); VerifyDigest(domain.Bundle.Digest, bundleBytes, domain.Bundle.Path);
if (_options.Signature.Enabled) if (_options.Signature.Enabled)
{ {
if (domain.Bundle.Signature is null) if (domain.Bundle.Signature is null)
{ {
throw new InvalidOperationException("Mirror bundle did not include a signature descriptor while verification is enabled."); throw new InvalidOperationException("Mirror bundle did not include a signature descriptor while verification is enabled.");
} }
var signatureBytes = await _client.DownloadAsync(domain.Bundle.Signature.Path, cancellationToken).ConfigureAwait(false); if (!string.IsNullOrWhiteSpace(_options.Signature.KeyId) &&
var signatureValue = Encoding.UTF8.GetString(signatureBytes); !string.Equals(domain.Bundle.Signature.KeyId, _options.Signature.KeyId, StringComparison.OrdinalIgnoreCase))
await _signatureVerifier.VerifyAsync(bundleBytes, signatureValue, cancellationToken).ConfigureAwait(false); {
} throw new InvalidOperationException($"Mirror bundle signature key '{domain.Bundle.Signature.KeyId}' did not match expected key '{_options.Signature.KeyId}'.");
}
await StoreAsync(domain, index.GeneratedAt, domain.Manifest, manifestBytes, "application/json", DocumentStatuses.Mapped, addToPending: false, pendingDocuments, cancellationToken).ConfigureAwait(false);
var bundleRecord = await StoreAsync(domain, index.GeneratedAt, domain.Bundle, bundleBytes, "application/json", DocumentStatuses.PendingParse, addToPending: true, pendingDocuments, cancellationToken).ConfigureAwait(false); if (!string.IsNullOrWhiteSpace(_options.Signature.Provider) &&
!string.Equals(domain.Bundle.Signature.Provider, _options.Signature.Provider, StringComparison.OrdinalIgnoreCase))
_logger.LogInformation( {
"Stored mirror bundle {Uri} as document {DocumentId} with digest {Digest}.", throw new InvalidOperationException($"Mirror bundle signature provider '{domain.Bundle.Signature.Provider ?? "<null>"}' did not match expected provider '{_options.Signature.Provider}'.");
bundleRecord.Uri, }
bundleRecord.Id,
bundleRecord.Sha256); var signatureBytes = await _client.DownloadAsync(domain.Bundle.Signature.Path, cancellationToken).ConfigureAwait(false);
} var signatureValue = Encoding.UTF8.GetString(signatureBytes).Trim();
await _signatureVerifier.VerifyAsync(
private async Task<DocumentRecord> StoreAsync( bundleBytes,
MirrorIndexDomainEntry domain, signatureValue,
DateTimeOffset generatedAt, expectedKeyId: _options.Signature.KeyId,
MirrorFileDescriptor descriptor, expectedProvider: _options.Signature.Provider,
byte[] payload, cancellationToken).ConfigureAwait(false);
string contentType, }
string status, else if (domain.Bundle.Signature is not null)
bool addToPending, {
HashSet<Guid> pendingDocuments, _logger.LogInformation("Mirror bundle provided signature descriptor but verification is disabled; skipping verification.");
CancellationToken cancellationToken) }
{
var absolute = ResolveAbsolutePath(descriptor.Path); await StoreAsync(domain, index.GeneratedAt, domain.Manifest, manifestBytes, "application/json", DocumentStatuses.Mapped, addToPending: false, pendingDocuments, cancellationToken).ConfigureAwait(false);
var bundleRecord = await StoreAsync(domain, index.GeneratedAt, domain.Bundle, bundleBytes, "application/json", DocumentStatuses.PendingParse, addToPending: true, pendingDocuments, cancellationToken).ConfigureAwait(false);
var existing = await _documentStore.FindBySourceAndUriAsync(Source, absolute, cancellationToken).ConfigureAwait(false);
if (existing is not null && string.Equals(existing.Sha256, NormalizeDigest(descriptor.Digest), StringComparison.OrdinalIgnoreCase)) _logger.LogInformation(
{ "Stored mirror bundle {Uri} as document {DocumentId} with digest {Digest}.",
if (addToPending) bundleRecord.Uri,
{ bundleRecord.Id,
pendingDocuments.Add(existing.Id); bundleRecord.Sha256);
} }
return existing; private async Task<DocumentRecord> StoreAsync(
} MirrorIndexDomainEntry domain,
DateTimeOffset generatedAt,
var gridFsId = await _rawDocumentStorage.UploadAsync(Source, absolute, payload, contentType, cancellationToken).ConfigureAwait(false); MirrorFileDescriptor descriptor,
var now = _timeProvider.GetUtcNow(); byte[] payload,
var sha = ComputeSha256(payload); string contentType,
string status,
var metadata = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase) bool addToPending,
{ HashSet<Guid> pendingDocuments,
["mirror.domainId"] = domain.DomainId, CancellationToken cancellationToken)
["mirror.displayName"] = domain.DisplayName, {
["mirror.path"] = descriptor.Path, var absolute = ResolveAbsolutePath(descriptor.Path);
["mirror.digest"] = NormalizeDigest(descriptor.Digest),
["mirror.type"] = ReferenceEquals(descriptor, domain.Bundle) ? "bundle" : "manifest", var existing = await _documentStore.FindBySourceAndUriAsync(Source, absolute, cancellationToken).ConfigureAwait(false);
}; if (existing is not null && string.Equals(existing.Sha256, NormalizeDigest(descriptor.Digest), StringComparison.OrdinalIgnoreCase))
{
var record = new DocumentRecord( if (addToPending)
existing?.Id ?? Guid.NewGuid(), {
Source, pendingDocuments.Add(existing.Id);
absolute, }
now,
sha, return existing;
status, }
contentType,
Headers: null, var gridFsId = await _rawDocumentStorage.UploadAsync(Source, absolute, payload, contentType, cancellationToken).ConfigureAwait(false);
Metadata: metadata, var now = _timeProvider.GetUtcNow();
Etag: null, var sha = ComputeSha256(payload);
LastModified: generatedAt,
GridFsId: gridFsId, var metadata = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
ExpiresAt: null); {
["mirror.domainId"] = domain.DomainId,
var upserted = await _documentStore.UpsertAsync(record, cancellationToken).ConfigureAwait(false); ["mirror.displayName"] = domain.DisplayName,
["mirror.path"] = descriptor.Path,
if (addToPending) ["mirror.digest"] = NormalizeDigest(descriptor.Digest),
{ ["mirror.type"] = ReferenceEquals(descriptor, domain.Bundle) ? "bundle" : "manifest",
pendingDocuments.Add(upserted.Id); };
}
var record = new DocumentRecord(
return upserted; existing?.Id ?? Guid.NewGuid(),
} Source,
absolute,
private string ResolveAbsolutePath(string path) now,
{ sha,
var uri = new Uri(_options.BaseAddress, path); status,
return uri.ToString(); contentType,
} Headers: null,
Metadata: metadata,
private async Task<StellaOpsMirrorCursor> GetCursorAsync(CancellationToken cancellationToken) Etag: null,
{ LastModified: generatedAt,
var state = await _stateRepository.TryGetAsync(Source, cancellationToken).ConfigureAwait(false); GridFsId: gridFsId,
return state is null ? StellaOpsMirrorCursor.Empty : StellaOpsMirrorCursor.FromBson(state.Cursor); ExpiresAt: null);
}
var upserted = await _documentStore.UpsertAsync(record, cancellationToken).ConfigureAwait(false);
private async Task UpdateCursorAsync(StellaOpsMirrorCursor cursor, CancellationToken cancellationToken)
{ if (addToPending)
var document = cursor.ToBsonDocument(); {
var now = _timeProvider.GetUtcNow(); pendingDocuments.Add(upserted.Id);
await _stateRepository.UpdateCursorAsync(Source, document, now, cancellationToken).ConfigureAwait(false); }
}
return upserted;
private static void VerifyDigest(string expected, ReadOnlySpan<byte> payload, string path) }
{
if (string.IsNullOrWhiteSpace(expected)) private string ResolveAbsolutePath(string path)
{ {
return; var uri = new Uri(_options.BaseAddress, path);
} return uri.ToString();
}
if (!expected.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase))
{ private async Task<StellaOpsMirrorCursor> GetCursorAsync(CancellationToken cancellationToken)
throw new InvalidOperationException($"Unsupported digest '{expected}' for '{path}'."); {
} var state = await _stateRepository.TryGetAsync(Source, cancellationToken).ConfigureAwait(false);
return state is null ? StellaOpsMirrorCursor.Empty : StellaOpsMirrorCursor.FromBson(state.Cursor);
var actualHash = SHA256.HashData(payload); }
var actual = "sha256:" + Convert.ToHexString(actualHash).ToLowerInvariant();
if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase)) private async Task UpdateCursorAsync(StellaOpsMirrorCursor cursor, CancellationToken cancellationToken)
{ {
throw new InvalidOperationException($"Digest mismatch for '{path}'. Expected {expected}, computed {actual}."); var document = cursor.ToBsonDocument();
} var now = _timeProvider.GetUtcNow();
} await _stateRepository.UpdateCursorAsync(Source, document, now, cancellationToken).ConfigureAwait(false);
}
private static string ComputeSha256(ReadOnlySpan<byte> payload)
{ private static void VerifyDigest(string expected, ReadOnlySpan<byte> payload, string path)
var hash = SHA256.HashData(payload); {
return Convert.ToHexString(hash).ToLowerInvariant(); if (string.IsNullOrWhiteSpace(expected))
} {
return;
private static string NormalizeDigest(string digest) }
{
if (string.IsNullOrWhiteSpace(digest)) if (!expected.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase))
{ {
return string.Empty; throw new InvalidOperationException($"Unsupported digest '{expected}' for '{path}'.");
} }
return digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase) var actualHash = SHA256.HashData(payload);
? digest[7..] var actual = "sha256:" + Convert.ToHexString(actualHash).ToLowerInvariant();
: digest.ToLowerInvariant(); if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase))
} {
throw new InvalidOperationException($"Digest mismatch for '{path}'. Expected {expected}, computed {actual}.");
private static void ValidateOptions(StellaOpsMirrorConnectorOptions options) }
{ }
if (options.BaseAddress is null || !options.BaseAddress.IsAbsoluteUri)
{ private static string ComputeSha256(ReadOnlySpan<byte> payload)
throw new InvalidOperationException("Mirror connector requires an absolute baseAddress."); {
} var hash = SHA256.HashData(payload);
return Convert.ToHexString(hash).ToLowerInvariant();
if (string.IsNullOrWhiteSpace(options.DomainId)) }
{
throw new InvalidOperationException("Mirror connector requires domainId to be specified."); private static string NormalizeDigest(string digest)
} {
} if (string.IsNullOrWhiteSpace(digest))
} {
return string.Empty;
file static class UriExtensions }
{
public static Uri Combine(this Uri baseUri, string relative) return digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase)
=> new(baseUri, relative); ? digest[7..]
} : digest.ToLowerInvariant();
}
private static void ValidateOptions(StellaOpsMirrorConnectorOptions options)
{
if (options.BaseAddress is null || !options.BaseAddress.IsAbsoluteUri)
{
throw new InvalidOperationException("Mirror connector requires an absolute baseAddress.");
}
if (string.IsNullOrWhiteSpace(options.DomainId))
{
throw new InvalidOperationException("Mirror connector requires domainId to be specified.");
}
}
}
file static class UriExtensions
{
public static Uri Combine(this Uri baseUri, string relative)
=> new(baseUri, relative);
}

View File

@@ -1,6 +1,7 @@
using System.Collections.Concurrent; using System.Collections.Concurrent;
using System.Linq; using System.Collections.Immutable;
using System.Threading.Tasks; using System.Linq;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging.Abstractions; using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Time.Testing; using Microsoft.Extensions.Time.Testing;
using MongoDB.Driver; using MongoDB.Driver;
@@ -43,8 +44,9 @@ public sealed class AdvisoryMergeServiceTests
var result = await service.MergeAsync("GHSA-aaaa-bbbb-cccc", CancellationToken.None); var result = await service.MergeAsync("GHSA-aaaa-bbbb-cccc", CancellationToken.None);
Assert.NotNull(result.Merged); Assert.NotNull(result.Merged);
Assert.Equal("OSV summary overrides", result.Merged!.Summary); Assert.Equal("OSV summary overrides", result.Merged!.Summary);
Assert.Empty(result.Conflicts);
var upserted = advisoryStore.LastUpserted; var upserted = advisoryStore.LastUpserted;
Assert.NotNull(upserted); Assert.NotNull(upserted);
@@ -103,25 +105,108 @@ public sealed class AdvisoryMergeServiceTests
provenance: new[] { provenance }); provenance: new[] { provenance });
} }
private static Advisory CreateOsvAdvisory() private static Advisory CreateOsvAdvisory()
{ {
var recorded = DateTimeOffset.Parse("2025-03-05T12:00:00Z"); var recorded = DateTimeOffset.Parse("2025-03-05T12:00:00Z");
var provenance = new AdvisoryProvenance("osv", "map", "OSV-2025-xyz", recorded, new[] { ProvenanceFieldMasks.Advisory }); var provenance = new AdvisoryProvenance("osv", "map", "OSV-2025-xyz", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory( return new Advisory(
"OSV-2025-xyz", "OSV-2025-xyz",
"Container escape", "Container escape",
"OSV summary overrides", "OSV summary overrides",
"en", "en",
recorded, recorded,
recorded, recorded,
"critical", "critical",
exploitKnown: false, exploitKnown: false,
aliases: new[] { "OSV-2025-xyz", "CVE-2025-4242" }, aliases: new[] { "OSV-2025-xyz", "CVE-2025-4242" },
references: Array.Empty<AdvisoryReference>(), references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(), affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(), cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance }); provenance: new[] { provenance });
} }
private static Advisory CreateVendorAdvisory()
{
var recorded = DateTimeOffset.Parse("2025-03-10T00:00:00Z");
var provenance = new AdvisoryProvenance("vendor", "psirt", "VSA-2025-5000", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
"VSA-2025-5000",
"Vendor overrides severity",
"Vendor states critical impact.",
"en",
recorded,
recorded,
"critical",
exploitKnown: false,
aliases: new[] { "VSA-2025-5000", "CVE-2025-5000" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}
private static Advisory CreateConflictingNvdAdvisory()
{
var recorded = DateTimeOffset.Parse("2025-03-09T00:00:00Z");
var provenance = new AdvisoryProvenance("nvd", "map", "CVE-2025-5000", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
"CVE-2025-5000",
"CVE-2025-5000",
"Baseline NVD entry.",
"en",
recorded,
recorded,
"medium",
exploitKnown: false,
aliases: new[] { "CVE-2025-5000" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}
[Fact]
public async Task MergeAsync_PersistsConflictSummariesWithHashes()
{
var aliasStore = new FakeAliasStore();
aliasStore.Register("CVE-2025-5000",
(AliasSchemes.Cve, "CVE-2025-5000"));
aliasStore.Register("VSA-2025-5000",
(AliasSchemes.Cve, "CVE-2025-5000"));
var vendor = CreateVendorAdvisory();
var nvd = CreateConflictingNvdAdvisory();
var advisoryStore = new FakeAdvisoryStore();
advisoryStore.Seed(vendor, nvd);
var mergeEventStore = new InMemoryMergeEventStore();
var timeProvider = new FakeTimeProvider(new DateTimeOffset(2025, 4, 2, 0, 0, 0, TimeSpan.Zero));
var writer = new MergeEventWriter(mergeEventStore, new CanonicalHashCalculator(), timeProvider, NullLogger<MergeEventWriter>.Instance);
var precedenceMerger = new AdvisoryPrecedenceMerger(new AffectedPackagePrecedenceResolver(), timeProvider);
var aliasResolver = new AliasGraphResolver(aliasStore);
var canonicalMerger = new CanonicalMerger(timeProvider);
var eventLog = new RecordingAdvisoryEventLog();
var service = new AdvisoryMergeService(aliasResolver, advisoryStore, precedenceMerger, writer, canonicalMerger, eventLog, timeProvider, NullLogger<AdvisoryMergeService>.Instance);
var result = await service.MergeAsync("CVE-2025-5000", CancellationToken.None);
var conflict = Assert.Single(result.Conflicts);
Assert.Equal("CVE-2025-5000", conflict.VulnerabilityKey);
Assert.Equal("severity", conflict.Explainer.Type);
Assert.Equal("mismatch", conflict.Explainer.Reason);
Assert.Contains("vendor", conflict.Explainer.PrimarySources, StringComparer.OrdinalIgnoreCase);
Assert.Contains("nvd", conflict.Explainer.SuppressedSources, StringComparer.OrdinalIgnoreCase);
Assert.Equal(conflict.Explainer.ComputeHashHex(), conflict.ConflictHash);
Assert.True(conflict.StatementIds.Length >= 2);
Assert.Equal(timeProvider.GetUtcNow(), conflict.RecordedAt);
var appendRequest = eventLog.LastRequest;
Assert.NotNull(appendRequest);
var appendedConflict = Assert.Single(appendRequest!.Conflicts!);
Assert.Equal(conflict.ConflictId, appendedConflict.ConflictId);
Assert.Equal(conflict.StatementIds, appendedConflict.StatementIds.ToImmutableArray());
}
private sealed class RecordingAdvisoryEventLog : IAdvisoryEventLog private sealed class RecordingAdvisoryEventLog : IAdvisoryEventLog

View File

@@ -1,430 +1,456 @@
using System; using System;
using System.Collections.Generic; using System.Collections.Generic;
using System.Collections.Immutable; using System.Collections.Immutable;
using System.Diagnostics.Metrics; using System.Diagnostics.Metrics;
using System.Linq; using System.Linq;
using System.Threading; using System.Threading;
using System.Threading.Tasks; using System.Threading.Tasks;
using Microsoft.Extensions.Logging; using Microsoft.Extensions.Logging;
using StellaOps.Concelier.Core; using StellaOps.Concelier.Core;
using StellaOps.Concelier.Core.Events; using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models; using StellaOps.Concelier.Models;
using StellaOps.Concelier.Storage.Mongo.Advisories; using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Aliases; using StellaOps.Concelier.Storage.Mongo.Aliases;
using StellaOps.Concelier.Storage.Mongo.MergeEvents; using StellaOps.Concelier.Storage.Mongo.MergeEvents;
using System.Text.Json; using System.Text.Json;
namespace StellaOps.Concelier.Merge.Services; namespace StellaOps.Concelier.Merge.Services;
public sealed class AdvisoryMergeService public sealed class AdvisoryMergeService
{ {
private static readonly Meter MergeMeter = new("StellaOps.Concelier.Merge"); private static readonly Meter MergeMeter = new("StellaOps.Concelier.Merge");
private static readonly Counter<long> AliasCollisionCounter = MergeMeter.CreateCounter<long>( private static readonly Counter<long> AliasCollisionCounter = MergeMeter.CreateCounter<long>(
"concelier.merge.identity_conflicts", "concelier.merge.identity_conflicts",
unit: "count", unit: "count",
description: "Number of alias collisions detected during merge."); description: "Number of alias collisions detected during merge.");
private static readonly string[] PreferredAliasSchemes = private static readonly string[] PreferredAliasSchemes =
{ {
AliasSchemes.Cve, AliasSchemes.Cve,
AliasSchemes.Ghsa, AliasSchemes.Ghsa,
AliasSchemes.OsV, AliasSchemes.OsV,
AliasSchemes.Msrc, AliasSchemes.Msrc,
}; };
private readonly AliasGraphResolver _aliasResolver; private readonly AliasGraphResolver _aliasResolver;
private readonly IAdvisoryStore _advisoryStore; private readonly IAdvisoryStore _advisoryStore;
private readonly AdvisoryPrecedenceMerger _precedenceMerger; private readonly AdvisoryPrecedenceMerger _precedenceMerger;
private readonly MergeEventWriter _mergeEventWriter; private readonly MergeEventWriter _mergeEventWriter;
private readonly IAdvisoryEventLog _eventLog; private readonly IAdvisoryEventLog _eventLog;
private readonly TimeProvider _timeProvider; private readonly TimeProvider _timeProvider;
private readonly CanonicalMerger _canonicalMerger; private readonly CanonicalMerger _canonicalMerger;
private readonly ILogger<AdvisoryMergeService> _logger; private readonly ILogger<AdvisoryMergeService> _logger;
public AdvisoryMergeService( public AdvisoryMergeService(
AliasGraphResolver aliasResolver, AliasGraphResolver aliasResolver,
IAdvisoryStore advisoryStore, IAdvisoryStore advisoryStore,
AdvisoryPrecedenceMerger precedenceMerger, AdvisoryPrecedenceMerger precedenceMerger,
MergeEventWriter mergeEventWriter, MergeEventWriter mergeEventWriter,
CanonicalMerger canonicalMerger, CanonicalMerger canonicalMerger,
IAdvisoryEventLog eventLog, IAdvisoryEventLog eventLog,
TimeProvider timeProvider, TimeProvider timeProvider,
ILogger<AdvisoryMergeService> logger) ILogger<AdvisoryMergeService> logger)
{ {
_aliasResolver = aliasResolver ?? throw new ArgumentNullException(nameof(aliasResolver)); _aliasResolver = aliasResolver ?? throw new ArgumentNullException(nameof(aliasResolver));
_advisoryStore = advisoryStore ?? throw new ArgumentNullException(nameof(advisoryStore)); _advisoryStore = advisoryStore ?? throw new ArgumentNullException(nameof(advisoryStore));
_precedenceMerger = precedenceMerger ?? throw new ArgumentNullException(nameof(precedenceMerger)); _precedenceMerger = precedenceMerger ?? throw new ArgumentNullException(nameof(precedenceMerger));
_mergeEventWriter = mergeEventWriter ?? throw new ArgumentNullException(nameof(mergeEventWriter)); _mergeEventWriter = mergeEventWriter ?? throw new ArgumentNullException(nameof(mergeEventWriter));
_canonicalMerger = canonicalMerger ?? throw new ArgumentNullException(nameof(canonicalMerger)); _canonicalMerger = canonicalMerger ?? throw new ArgumentNullException(nameof(canonicalMerger));
_eventLog = eventLog ?? throw new ArgumentNullException(nameof(eventLog)); _eventLog = eventLog ?? throw new ArgumentNullException(nameof(eventLog));
_timeProvider = timeProvider ?? throw new ArgumentNullException(nameof(timeProvider)); _timeProvider = timeProvider ?? throw new ArgumentNullException(nameof(timeProvider));
_logger = logger ?? throw new ArgumentNullException(nameof(logger)); _logger = logger ?? throw new ArgumentNullException(nameof(logger));
} }
public async Task<AdvisoryMergeResult> MergeAsync(string seedAdvisoryKey, CancellationToken cancellationToken) public async Task<AdvisoryMergeResult> MergeAsync(string seedAdvisoryKey, CancellationToken cancellationToken)
{ {
ArgumentException.ThrowIfNullOrWhiteSpace(seedAdvisoryKey); ArgumentException.ThrowIfNullOrWhiteSpace(seedAdvisoryKey);
var component = await _aliasResolver.BuildComponentAsync(seedAdvisoryKey, cancellationToken).ConfigureAwait(false); var component = await _aliasResolver.BuildComponentAsync(seedAdvisoryKey, cancellationToken).ConfigureAwait(false);
var inputs = new List<Advisory>(); var inputs = new List<Advisory>();
foreach (var advisoryKey in component.AdvisoryKeys) foreach (var advisoryKey in component.AdvisoryKeys)
{ {
cancellationToken.ThrowIfCancellationRequested(); cancellationToken.ThrowIfCancellationRequested();
var advisory = await _advisoryStore.FindAsync(advisoryKey, cancellationToken).ConfigureAwait(false); var advisory = await _advisoryStore.FindAsync(advisoryKey, cancellationToken).ConfigureAwait(false);
if (advisory is not null) if (advisory is not null)
{ {
inputs.Add(advisory); inputs.Add(advisory);
} }
} }
if (inputs.Count == 0) if (inputs.Count == 0)
{ {
_logger.LogWarning("Alias component seeded by {Seed} contains no persisted advisories", seedAdvisoryKey); _logger.LogWarning("Alias component seeded by {Seed} contains no persisted advisories", seedAdvisoryKey);
return AdvisoryMergeResult.Empty(seedAdvisoryKey, component); return AdvisoryMergeResult.Empty(seedAdvisoryKey, component);
} }
var canonicalKey = SelectCanonicalKey(component) ?? seedAdvisoryKey; var canonicalKey = SelectCanonicalKey(component) ?? seedAdvisoryKey;
var canonicalMerge = ApplyCanonicalMergeIfNeeded(canonicalKey, inputs); var canonicalMerge = ApplyCanonicalMergeIfNeeded(canonicalKey, inputs);
var before = await _advisoryStore.FindAsync(canonicalKey, cancellationToken).ConfigureAwait(false); var before = await _advisoryStore.FindAsync(canonicalKey, cancellationToken).ConfigureAwait(false);
var normalizedInputs = NormalizeInputs(inputs, canonicalKey).ToList(); var normalizedInputs = NormalizeInputs(inputs, canonicalKey).ToList();
PrecedenceMergeResult precedenceResult; PrecedenceMergeResult precedenceResult;
try try
{ {
precedenceResult = _precedenceMerger.Merge(normalizedInputs); precedenceResult = _precedenceMerger.Merge(normalizedInputs);
} }
catch (Exception ex) catch (Exception ex)
{ {
_logger.LogError(ex, "Failed to merge alias component seeded by {Seed}", seedAdvisoryKey); _logger.LogError(ex, "Failed to merge alias component seeded by {Seed}", seedAdvisoryKey);
throw; throw;
} }
var merged = precedenceResult.Advisory; var merged = precedenceResult.Advisory;
var conflictDetails = precedenceResult.Conflicts; var conflictDetails = precedenceResult.Conflicts;
if (component.Collisions.Count > 0) if (component.Collisions.Count > 0)
{ {
foreach (var collision in component.Collisions) foreach (var collision in component.Collisions)
{ {
var tags = new KeyValuePair<string, object?>[] var tags = new KeyValuePair<string, object?>[]
{ {
new("scheme", collision.Scheme ?? string.Empty), new("scheme", collision.Scheme ?? string.Empty),
new("alias_value", collision.Value ?? string.Empty), new("alias_value", collision.Value ?? string.Empty),
new("advisory_count", collision.AdvisoryKeys.Count), new("advisory_count", collision.AdvisoryKeys.Count),
}; };
AliasCollisionCounter.Add(1, tags); AliasCollisionCounter.Add(1, tags);
_logger.LogInformation( _logger.LogInformation(
"Alias collision {Scheme}:{Value} involves advisories {Advisories}", "Alias collision {Scheme}:{Value} involves advisories {Advisories}",
collision.Scheme, collision.Scheme,
collision.Value, collision.Value,
string.Join(", ", collision.AdvisoryKeys)); string.Join(", ", collision.AdvisoryKeys));
} }
} }
await _advisoryStore.UpsertAsync(merged, cancellationToken).ConfigureAwait(false); await _advisoryStore.UpsertAsync(merged, cancellationToken).ConfigureAwait(false);
await _mergeEventWriter.AppendAsync( await _mergeEventWriter.AppendAsync(
canonicalKey, canonicalKey,
before, before,
merged, merged,
Array.Empty<Guid>(), Array.Empty<Guid>(),
ConvertFieldDecisions(canonicalMerge?.Decisions), ConvertFieldDecisions(canonicalMerge?.Decisions),
cancellationToken).ConfigureAwait(false); cancellationToken).ConfigureAwait(false);
await AppendEventLogAsync(canonicalKey, normalizedInputs, merged, conflictDetails, cancellationToken).ConfigureAwait(false); var conflictSummaries = await AppendEventLogAsync(canonicalKey, normalizedInputs, merged, conflictDetails, cancellationToken).ConfigureAwait(false);
return new AdvisoryMergeResult(seedAdvisoryKey, canonicalKey, component, inputs, before, merged); return new AdvisoryMergeResult(seedAdvisoryKey, canonicalKey, component, inputs, before, merged, conflictSummaries);
} }
private async Task AppendEventLogAsync( private async Task<IReadOnlyList<MergeConflictSummary>> AppendEventLogAsync(
string vulnerabilityKey, string vulnerabilityKey,
IReadOnlyList<Advisory> inputs, IReadOnlyList<Advisory> inputs,
Advisory merged, Advisory merged,
IReadOnlyList<MergeConflictDetail> conflicts, IReadOnlyList<MergeConflictDetail> conflicts,
CancellationToken cancellationToken) CancellationToken cancellationToken)
{ {
var recordedAt = _timeProvider.GetUtcNow(); var recordedAt = _timeProvider.GetUtcNow();
var statements = new List<AdvisoryStatementInput>(inputs.Count + 1); var statements = new List<AdvisoryStatementInput>(inputs.Count + 1);
var statementIds = new Dictionary<Advisory, Guid>(ReferenceEqualityComparer.Instance); var statementIds = new Dictionary<Advisory, Guid>(ReferenceEqualityComparer.Instance);
foreach (var advisory in inputs) foreach (var advisory in inputs)
{ {
var statementId = Guid.NewGuid(); var statementId = Guid.NewGuid();
statementIds[advisory] = statementId; statementIds[advisory] = statementId;
statements.Add(new AdvisoryStatementInput( statements.Add(new AdvisoryStatementInput(
vulnerabilityKey, vulnerabilityKey,
advisory, advisory,
DetermineAsOf(advisory, recordedAt), DetermineAsOf(advisory, recordedAt),
InputDocumentIds: Array.Empty<Guid>(), InputDocumentIds: Array.Empty<Guid>(),
StatementId: statementId, StatementId: statementId,
AdvisoryKey: advisory.AdvisoryKey)); AdvisoryKey: advisory.AdvisoryKey));
} }
var canonicalStatementId = Guid.NewGuid(); var canonicalStatementId = Guid.NewGuid();
statementIds[merged] = canonicalStatementId; statementIds[merged] = canonicalStatementId;
statements.Add(new AdvisoryStatementInput( statements.Add(new AdvisoryStatementInput(
vulnerabilityKey, vulnerabilityKey,
merged, merged,
recordedAt, recordedAt,
InputDocumentIds: Array.Empty<Guid>(), InputDocumentIds: Array.Empty<Guid>(),
StatementId: canonicalStatementId, StatementId: canonicalStatementId,
AdvisoryKey: merged.AdvisoryKey)); AdvisoryKey: merged.AdvisoryKey));
var conflictInputs = BuildConflictInputs(conflicts, vulnerabilityKey, statementIds, canonicalStatementId, recordedAt); var conflictMaterialization = BuildConflictInputs(conflicts, vulnerabilityKey, statementIds, canonicalStatementId, recordedAt);
var conflictInputs = conflictMaterialization.Inputs;
if (statements.Count == 0 && conflictInputs.Count == 0) var conflictSummaries = conflictMaterialization.Summaries;
{
return; if (statements.Count == 0 && conflictInputs.Count == 0)
} {
return conflictSummaries.Count == 0
var request = new AdvisoryEventAppendRequest(statements, conflictInputs.Count > 0 ? conflictInputs : null); ? Array.Empty<MergeConflictSummary>()
: conflictSummaries.ToArray();
try }
{
await _eventLog.AppendAsync(request, cancellationToken).ConfigureAwait(false); var request = new AdvisoryEventAppendRequest(statements, conflictInputs.Count > 0 ? conflictInputs : null);
}
finally try
{ {
foreach (var conflict in conflictInputs) await _eventLog.AppendAsync(request, cancellationToken).ConfigureAwait(false);
{ }
conflict.Details.Dispose(); finally
} {
} foreach (var conflict in conflictInputs)
} {
conflict.Details.Dispose();
private static DateTimeOffset DetermineAsOf(Advisory advisory, DateTimeOffset fallback) }
{ }
return (advisory.Modified ?? advisory.Published ?? fallback).ToUniversalTime();
} return conflictSummaries.Count == 0
? Array.Empty<MergeConflictSummary>()
private static List<AdvisoryConflictInput> BuildConflictInputs( : conflictSummaries.ToArray();
IReadOnlyList<MergeConflictDetail> conflicts, }
string vulnerabilityKey,
IReadOnlyDictionary<Advisory, Guid> statementIds, private static DateTimeOffset DetermineAsOf(Advisory advisory, DateTimeOffset fallback)
Guid canonicalStatementId, {
DateTimeOffset recordedAt) return (advisory.Modified ?? advisory.Published ?? fallback).ToUniversalTime();
{ }
if (conflicts.Count == 0)
{ private static ConflictMaterialization BuildConflictInputs(
return new List<AdvisoryConflictInput>(0); IReadOnlyList<MergeConflictDetail> conflicts,
} string vulnerabilityKey,
IReadOnlyDictionary<Advisory, Guid> statementIds,
var inputs = new List<AdvisoryConflictInput>(conflicts.Count); Guid canonicalStatementId,
DateTimeOffset recordedAt)
foreach (var detail in conflicts) {
{ if (conflicts.Count == 0)
if (!statementIds.TryGetValue(detail.Suppressed, out var suppressedId)) {
{ return new ConflictMaterialization(new List<AdvisoryConflictInput>(0), new List<MergeConflictSummary>(0));
continue; }
}
var inputs = new List<AdvisoryConflictInput>(conflicts.Count);
var related = new List<Guid> { canonicalStatementId, suppressedId }; var summaries = new List<MergeConflictSummary>(conflicts.Count);
if (statementIds.TryGetValue(detail.Primary, out var primaryId))
{ foreach (var detail in conflicts)
if (!related.Contains(primaryId)) {
{ if (!statementIds.TryGetValue(detail.Suppressed, out var suppressedId))
related.Add(primaryId); {
} continue;
} }
var payload = new ConflictDetailPayload( var related = new List<Guid> { canonicalStatementId, suppressedId };
detail.ConflictType, if (statementIds.TryGetValue(detail.Primary, out var primaryId))
detail.Reason, {
detail.PrimarySources, if (!related.Contains(primaryId))
detail.PrimaryRank, {
detail.SuppressedSources, related.Add(primaryId);
detail.SuppressedRank, }
detail.PrimaryValue, }
detail.SuppressedValue);
var payload = new ConflictDetailPayload(
var json = CanonicalJsonSerializer.Serialize(payload); detail.ConflictType,
var document = JsonDocument.Parse(json); detail.Reason,
var asOf = (detail.Primary.Modified ?? detail.Suppressed.Modified ?? recordedAt).ToUniversalTime(); detail.PrimarySources,
detail.PrimaryRank,
inputs.Add(new AdvisoryConflictInput( detail.SuppressedSources,
vulnerabilityKey, detail.SuppressedRank,
document, detail.PrimaryValue,
asOf, detail.SuppressedValue);
related,
ConflictId: null)); var explainer = new MergeConflictExplainerPayload(
} payload.Type,
payload.Reason,
return inputs; payload.PrimarySources,
} payload.PrimaryRank,
payload.SuppressedSources,
private sealed record ConflictDetailPayload( payload.SuppressedRank,
string Type, payload.PrimaryValue,
string Reason, payload.SuppressedValue);
IReadOnlyList<string> PrimarySources,
int PrimaryRank, var canonicalJson = explainer.ToCanonicalJson();
IReadOnlyList<string> SuppressedSources, var document = JsonDocument.Parse(canonicalJson);
int SuppressedRank, var asOf = (detail.Primary.Modified ?? detail.Suppressed.Modified ?? recordedAt).ToUniversalTime();
string? PrimaryValue, var conflictId = Guid.NewGuid();
string? SuppressedValue); var statementIdArray = ImmutableArray.CreateRange(related);
var conflictHash = explainer.ComputeHashHex(canonicalJson);
private static IEnumerable<Advisory> NormalizeInputs(IEnumerable<Advisory> advisories, string canonicalKey)
{ inputs.Add(new AdvisoryConflictInput(
foreach (var advisory in advisories) vulnerabilityKey,
{ document,
yield return CloneWithKey(advisory, canonicalKey); asOf,
} related,
} ConflictId: conflictId));
private static Advisory CloneWithKey(Advisory source, string advisoryKey) summaries.Add(new MergeConflictSummary(
=> new( conflictId,
advisoryKey, vulnerabilityKey,
source.Title, statementIdArray,
source.Summary, conflictHash,
source.Language, asOf,
source.Published, recordedAt,
source.Modified, explainer));
source.Severity, }
source.ExploitKnown,
source.Aliases, return new ConflictMaterialization(inputs, summaries);
source.Credits, }
source.References,
source.AffectedPackages, private static IEnumerable<Advisory> NormalizeInputs(IEnumerable<Advisory> advisories, string canonicalKey)
source.CvssMetrics, {
source.Provenance, foreach (var advisory in advisories)
source.Description, {
source.Cwes, yield return CloneWithKey(advisory, canonicalKey);
source.CanonicalMetricId); }
}
private CanonicalMergeResult? ApplyCanonicalMergeIfNeeded(string canonicalKey, List<Advisory> inputs)
{ private static Advisory CloneWithKey(Advisory source, string advisoryKey)
if (inputs.Count == 0) => new(
{ advisoryKey,
return null; source.Title,
} source.Summary,
source.Language,
var ghsa = FindBySource(inputs, CanonicalSources.Ghsa); source.Published,
var nvd = FindBySource(inputs, CanonicalSources.Nvd); source.Modified,
var osv = FindBySource(inputs, CanonicalSources.Osv); source.Severity,
source.ExploitKnown,
var participatingSources = 0; source.Aliases,
if (ghsa is not null) source.Credits,
{ source.References,
participatingSources++; source.AffectedPackages,
} source.CvssMetrics,
source.Provenance,
if (nvd is not null) source.Description,
{ source.Cwes,
participatingSources++; source.CanonicalMetricId);
}
private CanonicalMergeResult? ApplyCanonicalMergeIfNeeded(string canonicalKey, List<Advisory> inputs)
if (osv is not null) {
{ if (inputs.Count == 0)
participatingSources++; {
} return null;
}
if (participatingSources < 2)
{ var ghsa = FindBySource(inputs, CanonicalSources.Ghsa);
return null; var nvd = FindBySource(inputs, CanonicalSources.Nvd);
} var osv = FindBySource(inputs, CanonicalSources.Osv);
var result = _canonicalMerger.Merge(canonicalKey, ghsa, nvd, osv); var participatingSources = 0;
if (ghsa is not null)
inputs.RemoveAll(advisory => MatchesCanonicalSource(advisory)); {
inputs.Add(result.Advisory); participatingSources++;
}
return result;
} if (nvd is not null)
{
private static Advisory? FindBySource(IEnumerable<Advisory> advisories, string source) participatingSources++;
=> advisories.FirstOrDefault(advisory => advisory.Provenance.Any(provenance => }
!string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase) &&
string.Equals(provenance.Source, source, StringComparison.OrdinalIgnoreCase))); if (osv is not null)
{
private static bool MatchesCanonicalSource(Advisory advisory) participatingSources++;
{ }
foreach (var provenance in advisory.Provenance)
{ if (participatingSources < 2)
if (string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase)) {
{ return null;
continue; }
}
var result = _canonicalMerger.Merge(canonicalKey, ghsa, nvd, osv);
if (string.Equals(provenance.Source, CanonicalSources.Ghsa, StringComparison.OrdinalIgnoreCase) ||
string.Equals(provenance.Source, CanonicalSources.Nvd, StringComparison.OrdinalIgnoreCase) || inputs.RemoveAll(advisory => MatchesCanonicalSource(advisory));
string.Equals(provenance.Source, CanonicalSources.Osv, StringComparison.OrdinalIgnoreCase)) inputs.Add(result.Advisory);
{
return true; return result;
} }
}
private static Advisory? FindBySource(IEnumerable<Advisory> advisories, string source)
return false; => advisories.FirstOrDefault(advisory => advisory.Provenance.Any(provenance =>
} !string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase) &&
string.Equals(provenance.Source, source, StringComparison.OrdinalIgnoreCase)));
private static IReadOnlyList<MergeFieldDecision> ConvertFieldDecisions(ImmutableArray<FieldDecision>? decisions)
{ private static bool MatchesCanonicalSource(Advisory advisory)
if (decisions is null || decisions.Value.IsDefaultOrEmpty) {
{ foreach (var provenance in advisory.Provenance)
return Array.Empty<MergeFieldDecision>(); {
} if (string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase))
{
var builder = ImmutableArray.CreateBuilder<MergeFieldDecision>(decisions.Value.Length); continue;
foreach (var decision in decisions.Value) }
{
builder.Add(new MergeFieldDecision( if (string.Equals(provenance.Source, CanonicalSources.Ghsa, StringComparison.OrdinalIgnoreCase) ||
decision.Field, string.Equals(provenance.Source, CanonicalSources.Nvd, StringComparison.OrdinalIgnoreCase) ||
decision.SelectedSource, string.Equals(provenance.Source, CanonicalSources.Osv, StringComparison.OrdinalIgnoreCase))
decision.DecisionReason, {
decision.SelectedModified, return true;
decision.ConsideredSources.ToArray())); }
} }
return builder.ToImmutable(); return false;
} }
private static class CanonicalSources private static IReadOnlyList<MergeFieldDecision> ConvertFieldDecisions(ImmutableArray<FieldDecision>? decisions)
{ {
public const string Ghsa = "ghsa"; if (decisions is null || decisions.Value.IsDefaultOrEmpty)
public const string Nvd = "nvd"; {
public const string Osv = "osv"; return Array.Empty<MergeFieldDecision>();
} }
private static string? SelectCanonicalKey(AliasComponent component) var builder = ImmutableArray.CreateBuilder<MergeFieldDecision>(decisions.Value.Length);
{ foreach (var decision in decisions.Value)
foreach (var scheme in PreferredAliasSchemes) {
{ builder.Add(new MergeFieldDecision(
var alias = component.AliasMap.Values decision.Field,
.SelectMany(static aliases => aliases) decision.SelectedSource,
.FirstOrDefault(record => string.Equals(record.Scheme, scheme, StringComparison.OrdinalIgnoreCase)); decision.DecisionReason,
if (!string.IsNullOrWhiteSpace(alias?.Value)) decision.SelectedModified,
{ decision.ConsideredSources.ToArray()));
return alias.Value; }
}
} return builder.ToImmutable();
}
if (component.AliasMap.TryGetValue(component.SeedAdvisoryKey, out var seedAliases))
{ private static class CanonicalSources
var primary = seedAliases.FirstOrDefault(record => string.Equals(record.Scheme, AliasStoreConstants.PrimaryScheme, StringComparison.OrdinalIgnoreCase)); {
if (!string.IsNullOrWhiteSpace(primary?.Value)) public const string Ghsa = "ghsa";
{ public const string Nvd = "nvd";
return primary.Value; public const string Osv = "osv";
} }
}
private sealed record ConflictMaterialization(
var firstAlias = component.AliasMap.Values.SelectMany(static aliases => aliases).FirstOrDefault(); List<AdvisoryConflictInput> Inputs,
if (!string.IsNullOrWhiteSpace(firstAlias?.Value)) List<MergeConflictSummary> Summaries);
{
return firstAlias.Value; private static string? SelectCanonicalKey(AliasComponent component)
} {
foreach (var scheme in PreferredAliasSchemes)
return component.SeedAdvisoryKey; {
} var alias = component.AliasMap.Values
} .SelectMany(static aliases => aliases)
.FirstOrDefault(record => string.Equals(record.Scheme, scheme, StringComparison.OrdinalIgnoreCase));
public sealed record AdvisoryMergeResult( if (!string.IsNullOrWhiteSpace(alias?.Value))
string SeedAdvisoryKey, {
string CanonicalAdvisoryKey, return alias.Value;
AliasComponent Component, }
IReadOnlyList<Advisory> Inputs, }
Advisory? Previous,
Advisory? Merged) if (component.AliasMap.TryGetValue(component.SeedAdvisoryKey, out var seedAliases))
{ {
public static AdvisoryMergeResult Empty(string seed, AliasComponent component) var primary = seedAliases.FirstOrDefault(record => string.Equals(record.Scheme, AliasStoreConstants.PrimaryScheme, StringComparison.OrdinalIgnoreCase));
=> new(seed, seed, component, Array.Empty<Advisory>(), null, null); if (!string.IsNullOrWhiteSpace(primary?.Value))
} {
return primary.Value;
}
}
var firstAlias = component.AliasMap.Values.SelectMany(static aliases => aliases).FirstOrDefault();
if (!string.IsNullOrWhiteSpace(firstAlias?.Value))
{
return firstAlias.Value;
}
return component.SeedAdvisoryKey;
}
}
public sealed record AdvisoryMergeResult(
string SeedAdvisoryKey,
string CanonicalAdvisoryKey,
AliasComponent Component,
IReadOnlyList<Advisory> Inputs,
Advisory? Previous,
Advisory? Merged,
IReadOnlyList<MergeConflictSummary> Conflicts)
{
public static AdvisoryMergeResult Empty(string seed, AliasComponent component)
=> new(seed, seed, component, Array.Empty<Advisory>(), null, null, Array.Empty<MergeConflictSummary>());
}

View File

@@ -0,0 +1,34 @@
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using StellaOps.Concelier.Models;
namespace StellaOps.Concelier.Merge.Services;
/// <summary>
/// Structured payload describing a precedence conflict between advisory sources.
/// </summary>
public sealed record MergeConflictExplainerPayload(
string Type,
string Reason,
IReadOnlyList<string> PrimarySources,
int PrimaryRank,
IReadOnlyList<string> SuppressedSources,
int SuppressedRank,
string? PrimaryValue,
string? SuppressedValue)
{
public string ToCanonicalJson() => CanonicalJsonSerializer.Serialize(this);
public string ComputeHashHex(string? canonicalJson = null)
{
var json = canonicalJson ?? ToCanonicalJson();
var bytes = Encoding.UTF8.GetBytes(json);
var hash = SHA256.HashData(bytes);
return Convert.ToHexString(hash);
}
public static MergeConflictExplainerPayload FromCanonicalJson(string canonicalJson)
=> CanonicalJsonSerializer.Deserialize<MergeConflictExplainerPayload>(canonicalJson);
}

View File

@@ -0,0 +1,16 @@
using System;
using System.Collections.Immutable;
namespace StellaOps.Concelier.Merge.Services;
/// <summary>
/// Summary of a persisted advisory conflict including hashes and structured explainer payload.
/// </summary>
public sealed record MergeConflictSummary(
Guid ConflictId,
string VulnerabilityKey,
ImmutableArray<Guid> StatementIds,
string ConflictHash,
DateTimeOffset AsOf,
DateTimeOffset RecordedAt,
MergeConflictExplainerPayload Explainer);

View File

@@ -18,4 +18,5 @@
|Range primitives backlog|BE-Merge|Connector WGs|**DOING** Coordinate remaining connectors (`Acsc`, `Cccs`, `CertBund`, `CertCc`, `Cve`, `Ghsa`, `Ics.Cisa`, `Kisa`, `Ru.Bdu`, `Ru.Nkcki`, `Vndr.Apple`, `Vndr.Cisco`, `Vndr.Msrc`) to emit canonical RangePrimitives with provenance tags; track progress/fixtures here.<br>2025-10-11: Storage alignment notes + sample normalized rule JSON now captured in `RANGE_PRIMITIVES_COORDINATION.md` (see “Storage alignment quick reference”).<br>2025-10-11 18:45Z: GHSA normalized rules landed; OSV connector picked up next for rollout.<br>2025-10-11 21:10Z: `docs/dev/merge_semver_playbook.md` Section 8 now documents the persisted Mongo projection (SemVer + NEVRA) for connector reviewers.<br>2025-10-11 21:30Z: Added `docs/dev/normalized_versions_rollout.md` dashboard to centralize connector status and upcoming milestones.<br>2025-10-11 21:55Z: Merge now emits `concelier.merge.normalized_rules*` counters and unions connector-provided normalized arrays; see new test coverage in `AdvisoryPrecedenceMergerTests.Merge_RecordsNormalizedRuleMetrics`.<br>2025-10-12 17:05Z: CVE + KEV normalized rule verification complete; OSV parity fixtures revalidated—downstream parity/monitoring tasks may proceed.<br>2025-10-19 14:35Z: Prerequisites reviewed (none outstanding); FEEDMERGE-COORD-02-900 remains in DOING with connector follow-ups unchanged.<br>2025-10-19 15:25Z: Refreshed `RANGE_PRIMITIVES_COORDINATION.md` matrix + added targeted follow-ups (Cccs, CertBund, ICS-CISA, Kisa, Vndr.Cisco) with delivery dates 2025-10-21 → 2025-10-25; monitoring merge counters for regression.|
|Merge pipeline parity for new advisory fields|BE-Merge|Models, Core|DONE (2025-10-15) merge service now surfaces description/CWE/canonical metric decisions with updated metrics/tests.|
|Connector coordination for new advisory fields|Connector Leads, BE-Merge|Models, Core|**DONE (2025-10-15)** GHSA, NVD, and OSV connectors now emit advisory descriptions, CWE weaknesses, and canonical metric ids. Fixtures refreshed (GHSA connector regression suite, `conflict-nvd.canonical.json`, OSV parity snapshots) and completion recorded in coordination log.|
-|FEEDMERGE-ENGINE-07-001 Conflict sets & explainers|BE-Merge|FEEDSTORAGE-DATA-07-001|**DOING (2025-10-19)** Merge now captures canonical advisory statements + prepares conflict payload scaffolding (statement hashes, deterministic JSON, tests). Next: surface conflict explainers and replay APIs for Core/WebService before marking DONE.|
+|FEEDMERGE-ENGINE-07-001 Conflict sets & explainers|BE-Merge|FEEDSTORAGE-DATA-07-001|**DONE (2025-10-20)** Merge surfaces conflict explainers with replay hashes via `MergeConflictSummary`; API exposes structured payloads and integration tests cover deterministic `asOf` hashes.|
+> Remark (2025-10-20): `AdvisoryMergeService` now returns conflict summaries with deterministic hashes; WebService replay endpoint emits typed explainers verified by new tests.

View File

@@ -1,10 +1,12 @@
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http.Json;
using System.Net.Http.Headers;
using System.Text.Json;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
@@ -17,75 +19,76 @@ using Mongo2Go;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Core.Jobs;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Merge.Services;
using StellaOps.Concelier.WebService.Jobs;
using StellaOps.Concelier.WebService.Options;
using Xunit.Sdk;
using StellaOps.Auth.Abstractions;
using StellaOps.Auth.Client;

namespace StellaOps.Concelier.WebService.Tests;

public sealed class WebServiceEndpointsTests : IAsyncLifetime
{
    private MongoDbRunner _runner = null!;
    private ConcelierApplicationFactory _factory = null!;

    public Task InitializeAsync()
    {
        _runner = MongoDbRunner.Start(singleNodeReplSet: true);
        _factory = new ConcelierApplicationFactory(_runner.ConnectionString);
        return Task.CompletedTask;
    }

    public Task DisposeAsync()
    {
        _factory.Dispose();
        _runner.Dispose();
        return Task.CompletedTask;
    }

    [Fact]
    public async Task HealthAndReadyEndpointsRespond()
    {
        using var client = _factory.CreateClient();

        var healthResponse = await client.GetAsync("/health");
        if (!healthResponse.IsSuccessStatusCode)
        {
            var body = await healthResponse.Content.ReadAsStringAsync();
            throw new Xunit.Sdk.XunitException($"/health failed: {(int)healthResponse.StatusCode} {body}");
        }

        var readyResponse = await client.GetAsync("/ready");
        if (!readyResponse.IsSuccessStatusCode)
        {
            var body = await readyResponse.Content.ReadAsStringAsync();
            throw new Xunit.Sdk.XunitException($"/ready failed: {(int)readyResponse.StatusCode} {body}");
        }

        var healthPayload = await healthResponse.Content.ReadFromJsonAsync<HealthPayload>();
        Assert.NotNull(healthPayload);
        Assert.Equal("healthy", healthPayload!.Status);
        Assert.Equal("mongo", healthPayload.Storage.Driver);

        var readyPayload = await readyResponse.Content.ReadFromJsonAsync<ReadyPayload>();
        Assert.NotNull(readyPayload);
        Assert.Equal("ready", readyPayload!.Status);
        Assert.Equal("ready", readyPayload.Mongo.Status);
    }

    [Fact]
    public async Task JobsEndpointsReturnExpectedStatuses()
    {
        using var client = _factory.CreateClient();

        var definitions = await client.GetAsync("/jobs/definitions");
        if (!definitions.IsSuccessStatusCode)
        {
            var body = await definitions.Content.ReadAsStringAsync();
            throw new Xunit.Sdk.XunitException($"/jobs/definitions failed: {(int)definitions.StatusCode} {body}");
        }

        var trigger = await client.PostAsync("/jobs/unknown", new StringContent("{}", System.Text.Encoding.UTF8, "application/json"));
        if (trigger.StatusCode != HttpStatusCode.NotFound)
        {
@@ -96,12 +99,12 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        Assert.NotNull(problem);
        Assert.Equal("https://stellaops.org/problems/not-found", problem!.Type);
        Assert.Equal(404, problem.Status);
    }

    [Fact]
    public async Task JobRunEndpointReturnsProblemWhenNotFound()
    {
        using var client = _factory.CreateClient();
        var response = await client.GetAsync($"/jobs/{Guid.NewGuid()}");
        if (response.StatusCode != HttpStatusCode.NotFound)
        {
@@ -111,14 +114,14 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        var problem = await response.Content.ReadFromJsonAsync<ProblemDocument>();
        Assert.NotNull(problem);
        Assert.Equal("https://stellaops.org/problems/not-found", problem!.Type);
    }

    [Fact]
    public async Task JobTriggerMapsCoordinatorOutcomes()
    {
        var handler = _factory.Services.GetRequiredService<StubJobCoordinator>();
        using var client = _factory.CreateClient();

        handler.NextResult = JobTriggerResult.AlreadyRunning("busy");
        var conflict = await client.PostAsync("/jobs/test", JsonContent.Create(new JobTriggerRequest()));
        if (conflict.StatusCode != HttpStatusCode.Conflict)
@@ -151,72 +154,72 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        var failureProblem = await failed.Content.ReadFromJsonAsync<ProblemDocument>();
        Assert.NotNull(failureProblem);
        Assert.Equal("https://stellaops.org/problems/job-failure", failureProblem!.Type);
    }

    [Fact]
    public async Task JobsEndpointsExposeJobData()
    {
        var handler = _factory.Services.GetRequiredService<StubJobCoordinator>();
        var now = DateTimeOffset.UtcNow;
        var run = new JobRunSnapshot(
            Guid.NewGuid(),
            "demo",
            JobRunStatus.Succeeded,
            now,
            now,
            now.AddSeconds(2),
            "api",
            "hash",
            null,
            TimeSpan.FromMinutes(5),
            TimeSpan.FromMinutes(1),
            new Dictionary<string, object?> { ["key"] = "value" });

        handler.Definitions = new[]
        {
            new JobDefinition("demo", typeof(DemoJob), TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(1), "*/5 * * * *", true)
        };
        handler.LastRuns["demo"] = run;
        handler.RecentRuns = new[] { run };
        handler.ActiveRuns = Array.Empty<JobRunSnapshot>();
        handler.Runs[run.RunId] = run;

        try
        {
            using var client = _factory.CreateClient();

            var definitions = await client.GetFromJsonAsync<List<JobDefinitionPayload>>("/jobs/definitions");
            Assert.NotNull(definitions);
            Assert.Single(definitions!);
            Assert.Equal("demo", definitions![0].Kind);
            Assert.NotNull(definitions[0].LastRun);
            Assert.Equal(run.RunId, definitions[0].LastRun!.RunId);

            var runPayload = await client.GetFromJsonAsync<JobRunPayload>($"/jobs/{run.RunId}");
            Assert.NotNull(runPayload);
            Assert.Equal(run.RunId, runPayload!.RunId);
            Assert.Equal("Succeeded", runPayload.Status);

            var runs = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs?kind=demo&limit=5");
            Assert.NotNull(runs);
            Assert.Single(runs!);
            Assert.Equal(run.RunId, runs![0].RunId);

            var runsByDefinition = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs/definitions/demo/runs");
            Assert.NotNull(runsByDefinition);
            Assert.Single(runsByDefinition!);

            var active = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs/active");
            Assert.NotNull(active);
            Assert.Empty(active!);
        }
        finally
        {
            handler.Definitions = Array.Empty<JobDefinition>();
            handler.RecentRuns = Array.Empty<JobRunSnapshot>();
            handler.ActiveRuns = Array.Empty<JobRunSnapshot>();
            handler.Runs.Clear();
            handler.LastRuns.Clear();
        }
    }
@@ -271,6 +274,77 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        Assert.True(payload.Conflicts is null || payload.Conflicts!.Count == 0);
    }
    [Fact]
    public async Task AdvisoryReplayEndpointReturnsConflictExplainer()
    {
        var vulnerabilityKey = "CVE-2025-9100";
        var statementId = Guid.NewGuid();
        var conflictId = Guid.NewGuid();
        var recordedAt = DateTimeOffset.Parse("2025-02-01T00:00:00Z", CultureInfo.InvariantCulture);

        using (var scope = _factory.Services.CreateScope())
        {
            var eventLog = scope.ServiceProvider.GetRequiredService<IAdvisoryEventLog>();
            var advisory = new Advisory(
                advisoryKey: vulnerabilityKey,
                title: "Base advisory",
                summary: "Baseline summary",
                language: "en",
                published: recordedAt.AddDays(-1),
                modified: recordedAt,
                severity: "critical",
                exploitKnown: false,
                aliases: new[] { vulnerabilityKey },
                references: Array.Empty<AdvisoryReference>(),
                affectedPackages: Array.Empty<AffectedPackage>(),
                cvssMetrics: Array.Empty<CvssMetric>(),
                provenance: Array.Empty<AdvisoryProvenance>());

            var statementInput = new AdvisoryStatementInput(
                vulnerabilityKey,
                advisory,
                recordedAt,
                Array.Empty<Guid>(),
                StatementId: statementId,
                AdvisoryKey: advisory.AdvisoryKey);

            await eventLog.AppendAsync(new AdvisoryEventAppendRequest(new[] { statementInput }), CancellationToken.None);

            var explainer = new MergeConflictExplainerPayload(
                Type: "severity",
                Reason: "mismatch",
                PrimarySources: new[] { "vendor" },
                PrimaryRank: 1,
                SuppressedSources: new[] { "nvd" },
                SuppressedRank: 5,
                PrimaryValue: "CRITICAL",
                SuppressedValue: "MEDIUM");

            using var conflictDoc = JsonDocument.Parse(explainer.ToCanonicalJson());
            var conflictInput = new AdvisoryConflictInput(
                vulnerabilityKey,
                conflictDoc,
                recordedAt,
                new[] { statementId },
                ConflictId: conflictId);

            await eventLog.AppendAsync(new AdvisoryEventAppendRequest(Array.Empty<AdvisoryStatementInput>(), new[] { conflictInput }), CancellationToken.None);
        }

        using var client = _factory.CreateClient();
        var response = await client.GetAsync($"/concelier/advisories/{vulnerabilityKey}/replay");
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        var payload = await response.Content.ReadFromJsonAsync<ReplayResponse>();
        Assert.NotNull(payload);
        var conflict = Assert.Single(payload!.Conflicts);
        Assert.Equal(conflictId, conflict.ConflictId);
        Assert.Equal("severity", conflict.Explainer.Type);
        Assert.Equal("mismatch", conflict.Explainer.Reason);
        Assert.Equal("CRITICAL", conflict.Explainer.PrimaryValue);
        Assert.Equal("MEDIUM", conflict.Explainer.SuppressedValue);
        Assert.Equal(conflict.Explainer.ComputeHashHex(), conflict.ConflictHash);
    }
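
Reviewer note: the WebService task table records that the replay endpoint also accepts an optional `asOf` query parameter for point-in-time replays. A hedged usage sketch — the ISO-8601 format and query-string name are assumptions inferred from the task notes, and the fragment reuses `_factory` and `ReplayResponse` from this test class:

```csharp
// Point-in-time replay sketch: conflicts recorded after `asOf` should not
// appear in the response. Query parameter shape is an assumption.
using var client = _factory.CreateClient();
var asOf = Uri.EscapeDataString("2025-01-15T00:00:00Z");
var response = await client.GetAsync(
    $"/concelier/advisories/CVE-2025-9100/replay?asOf={asOf}");
response.EnsureSuccessStatusCode();

var replay = await response.Content.ReadFromJsonAsync<ReplayResponse>();
// Replaying twice with the same `asOf` must yield identical conflict hashes.
```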
    [Fact]
    public async Task MirrorEndpointsServeConfiguredArtifacts()
    {
@@ -379,8 +453,49 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        using var client = factory.CreateClient();
        var response = await client.GetAsync("/concelier/exports/mirror/secure/manifest.json");
        Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);

        var authHeader = Assert.Single(response.Headers.WwwAuthenticate);
        Assert.Equal("Bearer", authHeader.Scheme);
    }
    [Fact]
    public async Task MirrorEndpointsRespectRateLimits()
    {
        using var temp = new TempDirectory();
        var exportId = "20251019T130000Z";
        var exportRoot = Path.Combine(temp.Path, exportId);
        var mirrorRoot = Path.Combine(exportRoot, "mirror");
        Directory.CreateDirectory(mirrorRoot);
        await File.WriteAllTextAsync(
            Path.Combine(mirrorRoot, "index.json"),
            """{"schemaVersion":1,"domains":[]}""");

        var environment = new Dictionary<string, string?>
        {
            ["CONCELIER_MIRROR__ENABLED"] = "true",
            ["CONCELIER_MIRROR__EXPORTROOT"] = temp.Path,
            ["CONCELIER_MIRROR__ACTIVEEXPORTID"] = exportId,
            ["CONCELIER_MIRROR__MAXINDEXREQUESTSPERHOUR"] = "1",
            ["CONCELIER_MIRROR__DOMAINS__0__ID"] = "primary",
            ["CONCELIER_MIRROR__DOMAINS__0__REQUIREAUTHENTICATION"] = "false",
            ["CONCELIER_MIRROR__DOMAINS__0__MAXDOWNLOADREQUESTSPERHOUR"] = "1"
        };

        using var factory = new ConcelierApplicationFactory(_runner.ConnectionString, environmentOverrides: environment);
        using var client = factory.CreateClient();

        var okResponse = await client.GetAsync("/concelier/exports/index.json");
        Assert.Equal(HttpStatusCode.OK, okResponse.StatusCode);

        var limitedResponse = await client.GetAsync("/concelier/exports/index.json");
        Assert.Equal((HttpStatusCode)429, limitedResponse.StatusCode);
        Assert.NotNull(limitedResponse.Headers.RetryAfter);
        Assert.True(limitedResponse.Headers.RetryAfter!.Delta.HasValue);
        Assert.True(limitedResponse.Headers.RetryAfter!.Delta!.Value.TotalSeconds > 0);
    }
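
Reviewer note: the production limiter lives in the WebService and is configured per domain; purely to illustrate the semantics this test pins down (first request in a window passes, later requests get a positive Retry-After), a minimal fixed-window counter might look like the sketch below. All names are hypothetical.

```csharp
using System;
using System.Collections.Concurrent;

// Hypothetical fixed-window rate limiter: not the shipped implementation.
public sealed class FixedWindowLimiter
{
    private readonly int _limit;
    private readonly TimeSpan _window;
    private readonly ConcurrentDictionary<string, (DateTimeOffset WindowStart, int Count)> _state = new();

    public FixedWindowLimiter(int limit, TimeSpan window)
    {
        _limit = limit;
        _window = window;
    }

    public bool TryAcquire(string key, DateTimeOffset now, out TimeSpan retryAfter)
    {
        // Start a fresh window when the previous one has elapsed; otherwise count up.
        var entry = _state.AddOrUpdate(
            key,
            _ => (now, 1),
            (_, existing) => now - existing.WindowStart >= _window
                ? (now, 1)
                : (existing.WindowStart, existing.Count + 1));

        if (entry.Count <= _limit)
        {
            retryAfter = TimeSpan.Zero;
            return true;
        }

        // Remaining time in the window, surfaced as the Retry-After header.
        retryAfter = entry.WindowStart + _window - now;
        return false;
    }
}
```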
[Fact] [Fact]
public async Task JobsEndpointsAllowBypassWhenAuthorityEnabled() public async Task JobsEndpointsAllowBypassWhenAuthorityEnabled()
{ {
@@ -553,7 +668,8 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
        string ConflictHash,
        DateTimeOffset AsOf,
        DateTimeOffset RecordedAt,
-        string Details);
+        string Details,
+        MergeConflictExplainerPayload Explainer);
    private sealed class ConcelierApplicationFactory : WebApplicationFactory<Program>
    {
@@ -832,85 +948,85 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
            }
        }
    }

    private sealed record HealthPayload(string Status, DateTimeOffset StartedAt, double UptimeSeconds, StoragePayload Storage, TelemetryPayload Telemetry);

    private sealed record StoragePayload(string Driver, bool Completed, DateTimeOffset? CompletedAt, double? DurationMs);

    private sealed record TelemetryPayload(bool Enabled, bool Tracing, bool Metrics, bool Logging);

    private sealed record ReadyPayload(string Status, DateTimeOffset StartedAt, double UptimeSeconds, ReadyMongoPayload Mongo);

    private sealed record ReadyMongoPayload(string Status, double? LatencyMs, DateTimeOffset? CheckedAt, string? Error);

    private sealed record JobDefinitionPayload(string Kind, bool Enabled, string? CronExpression, TimeSpan Timeout, TimeSpan LeaseDuration, JobRunPayload? LastRun);

    private sealed record JobRunPayload(Guid RunId, string Kind, string Status, string Trigger, DateTimeOffset CreatedAt, DateTimeOffset? StartedAt, DateTimeOffset? CompletedAt, string? Error, TimeSpan? Duration, Dictionary<string, object?> Parameters);

    private sealed record ProblemDocument(string? Type, string? Title, int? Status, string? Detail, string? Instance);

    private sealed class DemoJob : IJob
    {
        public Task ExecuteAsync(JobExecutionContext context, CancellationToken cancellationToken) => Task.CompletedTask;
    }

    private sealed class StubJobCoordinator : IJobCoordinator
    {
        public JobTriggerResult NextResult { get; set; } = JobTriggerResult.NotFound("not set");
        public IReadOnlyList<JobDefinition> Definitions { get; set; } = Array.Empty<JobDefinition>();
        public IReadOnlyList<JobRunSnapshot> RecentRuns { get; set; } = Array.Empty<JobRunSnapshot>();
        public IReadOnlyList<JobRunSnapshot> ActiveRuns { get; set; } = Array.Empty<JobRunSnapshot>();
        public Dictionary<Guid, JobRunSnapshot> Runs { get; } = new();
        public Dictionary<string, JobRunSnapshot?> LastRuns { get; } = new(StringComparer.Ordinal);

        public Task<JobTriggerResult> TriggerAsync(string kind, IReadOnlyDictionary<string, object?>? parameters, string trigger, CancellationToken cancellationToken)
            => Task.FromResult(NextResult);

        public Task<IReadOnlyList<JobDefinition>> GetDefinitionsAsync(CancellationToken cancellationToken)
            => Task.FromResult(Definitions);

        public Task<IReadOnlyList<JobRunSnapshot>> GetRecentRunsAsync(string? kind, int limit, CancellationToken cancellationToken)
        {
            IEnumerable<JobRunSnapshot> query = RecentRuns;
            if (!string.IsNullOrWhiteSpace(kind))
            {
                query = query.Where(run => string.Equals(run.Kind, kind, StringComparison.Ordinal));
            }

            return Task.FromResult<IReadOnlyList<JobRunSnapshot>>(query.Take(limit).ToArray());
        }

        public Task<IReadOnlyList<JobRunSnapshot>> GetActiveRunsAsync(CancellationToken cancellationToken)
            => Task.FromResult(ActiveRuns);

        public Task<JobRunSnapshot?> GetRunAsync(Guid runId, CancellationToken cancellationToken)
            => Task.FromResult(Runs.TryGetValue(runId, out var run) ? run : null);

        public Task<JobRunSnapshot?> GetLastRunAsync(string kind, CancellationToken cancellationToken)
            => Task.FromResult(LastRuns.TryGetValue(kind, out var run) ? run : null);

        public Task<IReadOnlyDictionary<string, JobRunSnapshot>> GetLastRunsAsync(IEnumerable<string> kinds, CancellationToken cancellationToken)
        {
            var map = new Dictionary<string, JobRunSnapshot>(StringComparer.Ordinal);
            foreach (var kind in kinds)
            {
                if (kind is null)
                {
                    continue;
                }

                if (LastRuns.TryGetValue(kind, out var run) && run is not null)
                {
                    map[kind] = run;
                }
            }

            return Task.FromResult<IReadOnlyDictionary<string, JobRunSnapshot>>(map);
        }
    }
}

View File

@@ -1,8 +1,9 @@
using System.Globalization;
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Options;
using StellaOps.Concelier.WebService.Options;
using StellaOps.Concelier.WebService.Services;

namespace StellaOps.Concelier.WebService.Extensions;
@@ -42,7 +43,7 @@ internal static class MirrorEndpointExtensions
                return Results.NotFound();
            }

            return await WriteFileAsync(path, context.Response, "application/json").ConfigureAwait(false);
        });

        app.MapGet("/concelier/exports/{**relativePath}", async (
@@ -84,7 +85,7 @@ internal static class MirrorEndpointExtensions
            }

            var contentType = ResolveContentType(path);
            return await WriteFileAsync(path, context.Response, contentType).ConfigureAwait(false);
        });
    }
@@ -111,12 +112,12 @@ internal static class MirrorEndpointExtensions
        return null;
    }

    private static bool TryAuthorize(bool requireAuthentication, bool enforceAuthority, HttpContext context, bool authorityConfigured, out IResult result)
    {
        result = Results.Empty;

        if (!requireAuthentication)
        {
            return true;
        }

        if (!enforceAuthority || !authorityConfigured)
@@ -127,14 +128,15 @@ internal static class MirrorEndpointExtensions
        if (context.User?.Identity?.IsAuthenticated == true)
        {
            return true;
        }

        context.Response.Headers.WWWAuthenticate = "Bearer realm=\"StellaOps Concelier Mirror\"";
        result = Results.StatusCode(StatusCodes.Status401Unauthorized);
        return false;
    }

    private static Task<IResult> WriteFileAsync(string path, HttpResponse response, string contentType)
    {
        var fileInfo = new FileInfo(path);
        if (!fileInfo.Exists)
        {
@@ -147,12 +149,12 @@ internal static class MirrorEndpointExtensions
            FileAccess.Read,
            FileShare.Read | FileShare.Delete);

-        response.Headers.CacheControl = "public, max-age=60";
+        response.Headers.CacheControl = BuildCacheControlHeader(path);
        response.Headers.LastModified = fileInfo.LastWriteTimeUtc.ToString("R", CultureInfo.InvariantCulture);
        response.ContentLength = fileInfo.Length;
        return Task.FromResult(Results.Stream(stream, contentType));
    }

    private static string ResolveContentType(string path)
    {
        if (path.EndsWith(".json", StringComparison.OrdinalIgnoreCase))
@@ -176,6 +178,28 @@ internal static class MirrorEndpointExtensions
        }

        var seconds = Math.Max((int)Math.Ceiling(retryAfter.Value.TotalSeconds), 1);
        response.Headers.RetryAfter = seconds.ToString(CultureInfo.InvariantCulture);
    }

    private static string BuildCacheControlHeader(string path)
    {
        var fileName = Path.GetFileName(path);
        if (fileName is null)
        {
            return "public, max-age=60";
        }

        if (string.Equals(fileName, "index.json", StringComparison.OrdinalIgnoreCase))
        {
            return "public, max-age=60";
        }

        if (fileName.EndsWith(".json", StringComparison.OrdinalIgnoreCase) ||
            fileName.EndsWith(".jws", StringComparison.OrdinalIgnoreCase))
        {
            return "public, max-age=300, immutable";
        }

        return "public, max-age=300";
    }
}
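
Reviewer note: to make the cache policy above easy to eyeball, here is a hedged xUnit-style sketch of its decision table. It assumes the private helper is reachable from tests (e.g. via `InternalsVisibleTo`); the paths are illustrative and this is not part of the real suite.

```csharp
// Mutable indexes revalidate quickly; immutable export artifacts cache longer.
[Theory]
[InlineData("exports/index.json", "public, max-age=60")]
[InlineData("exports/mirror/primary/manifest.json", "public, max-age=300, immutable")]
[InlineData("exports/mirror/primary/bundle.json.jws", "public, max-age=300, immutable")]
[InlineData("exports/mirror/primary/bundle.tar.gz", "public, max-age=300")]
public void CachePolicyMatchesFileKind(string path, string expected)
{
    // Mirrors the branches in BuildCacheControlHeader above.
    Assert.Equal(expected, MirrorEndpointExtensions.BuildCacheControlHeader(path));
}
```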

View File

@@ -227,7 +227,8 @@ app.MapGet("/concelier/advisories/{vulnerabilityKey}/replay", async (
                ConflictHash = Convert.ToHexString(conflict.ConflictHash.ToArray()),
                conflict.AsOf,
                conflict.RecordedAt,
-                Details = conflict.CanonicalJson
+                Details = conflict.CanonicalJson,
+                Explainer = MergeConflictExplainerPayload.FromCanonicalJson(conflict.CanonicalJson)
            }).ToArray()
        };
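
Reviewer note: the replay endpoint now rehydrates a typed explainer from the persisted canonical JSON. A hedged sketch of what `FromCanonicalJson` could look like — the payload shape is inferred from the test, and the camelCase/`JsonSerializerDefaults.Web` choice is an assumption; the shipped implementation may parse the `JsonDocument` directly:

```csharp
// Hypothetical deserializer for the persisted conflict payload.
public static MergeConflictExplainerPayload FromCanonicalJson(string canonicalJson)
{
    var options = new JsonSerializerOptions(JsonSerializerDefaults.Web);
    return JsonSerializer.Deserialize<MergeConflictExplainerPayload>(canonicalJson, options)
        ?? throw new InvalidOperationException("Conflict payload was empty.");
}
```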

View File

@@ -1,27 +1,28 @@
# TASKS
| Task | Owner(s) | Depends on | Notes |
|---|---|---|---|
|FEEDWEB-EVENTS-07-001 Advisory event replay API|Concelier WebService Guild|FEEDCORE-ENGINE-07-001|**DONE (2025-10-19)** Added `/concelier/advisories/{vulnerabilityKey}/replay` endpoint with optional `asOf`, hex hashes, and conflict payloads; integration covered via `dotnet test src/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj`.|
|Bind & validate ConcelierOptions|BE-Base|WebService|DONE options bound/validated with failure logging.|
|Mongo service wiring|BE-Base|Storage.Mongo|DONE wiring delegated to `AddMongoStorage`.|
|Bootstrapper execution on start|BE-Base|Storage.Mongo|DONE startup calls `MongoBootstrapper.InitializeAsync`.|
|Plugin host options finalization|BE-Base|Plugins|DONE default plugin directories/search patterns configured.|
|Jobs API contract tests|QA|Core|DONE WebServiceEndpointsTests now cover success payloads, filtering, and trigger outcome mapping.|
|Health/Ready probes|DevOps|Ops|DONE `/health` and `/ready` endpoints implemented.|
|Serilog + OTEL integration hooks|BE-Base|Observability|DONE `TelemetryExtensions` wires Serilog + OTEL with configurable exporters.|
|Register built-in jobs (sources/exporters)|BE-Base|Core|DONE AddBuiltInConcelierJobs adds fallback scheduler definitions for core connectors and exporters via reflection.|
|HTTP problem details consistency|BE-Base|WebService|DONE API errors now emit RFC7807 responses with trace identifiers and typed problem categories.|
|Request logging and metrics|BE-Base|Observability|DONE Serilog request logging enabled with enriched context and web.jobs counters published via OpenTelemetry.|
|Endpoint smoke tests (health/ready/jobs error paths)|QA|WebService|DONE WebServiceEndpointsTests assert success and problem responses for health, ready, and job trigger error paths.|
|Batch job definition last-run lookup|BE-Base|Core|DONE definitions endpoint now precomputes kinds array and reuses batched last-run dictionary; manual smoke verified via local GET `/jobs/definitions`.|
|Add no-cache headers to health/readiness/jobs APIs|BE-Base|WebService|DONE helper applies Cache-Control/Pragma/Expires on all health/ready/jobs endpoints; awaiting automated probe tests once connector fixtures stabilize.|
|Authority configuration parity (FSR1)|DevEx/Concelier|Authority options schema|**DONE (2025-10-10)** Options post-config loads clientSecretFile fallback, validators normalize scopes/audiences, and sample config documents issuer/credential/bypass settings.|
|Document authority toggle & scope requirements|Docs/Concelier|Authority integration|**DOING (2025-10-10)** Quickstart updated with staging flag, client credentials, env overrides; operator guide refresh pending Docs guild review.|
|Plumb Authority client resilience options|BE-Base|Auth libraries LIB5|**DONE (2025-10-12)** `Program.cs` wires `authority.resilience.*` + client scopes into `AddStellaOpsAuthClient`; new integration test asserts binding and retries.|
|Author ops guidance for resilience tuning|Docs/Concelier|Plumb Authority client resilience options|**DONE (2025-10-12)** `docs/21_INSTALL_GUIDE.md` + `docs/ops/concelier-authority-audit-runbook.md` document resilience profiles for connected vs air-gapped installs and reference monitoring cues.|
|Document authority bypass logging patterns|Docs/Concelier|FSR3 logging|**DONE (2025-10-12)** Updated operator guides clarify `Concelier.Authorization.Audit` fields (route/status/subject/clientId/scopes/bypass/remote) and SIEM triggers.|
|Update Concelier operator guide for enforcement cutoff|Docs/Concelier|FSR1 rollout|**DONE (2025-10-12)** Installation guide emphasises disabling `allowAnonymousFallback` before 2025-12-31 UTC and connects audit signals to the rollout checklist.|
|Rename plugin drop directory to namespaced path|BE-Base|Plugins|**DONE (2025-10-19)** Build outputs now target `StellaOps.Concelier.PluginBinaries`/`StellaOps.Authority.PluginBinaries`, plugin host defaults updated, config/docs refreshed, and `dotnet test src/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj --no-restore` covers the change.|
|Authority resilience adoption|Concelier WebService, Docs|Plumb Authority client resilience options|**BLOCKED (2025-10-10)** Roll out retry/offline knobs to deployment docs and confirm CLI parity once LIB5 lands; unblock after resilience options wired and tested.|
-|CONCELIER-WEB-08-201 Mirror distribution endpoints|Concelier WebService Guild|CONCELIER-EXPORT-08-201, DEVOPS-MIRROR-08-001|DOING (2025-10-19) HTTP endpoints wired (`/concelier/exports/index.json`, `/concelier/exports/mirror/*`), mirror options bound/validated, and integration tests added; pending auth docs + smoke in ops handbook.|
+|CONCELIER-WEB-08-201 Mirror distribution endpoints|Concelier WebService Guild|CONCELIER-EXPORT-08-201, DEVOPS-MIRROR-08-001|**DONE (2025-10-20)** Mirror endpoints now enforce per-domain rate limits, emit cache headers, honour Authority/WWW-Authenticate, and docs cover auth + smoke workflows.|
+> Remark (2025-10-20): Updated ops runbook with token/rate-limit checks and added API tests for Retry-After + unauthorized flows.
|Wave 0B readiness checkpoint|Team WebService & Authority|Wave0A completion|BLOCKED (2025-10-19) FEEDSTORAGE-MONGO-08-001 closed, but remaining Wave0A items (AUTH-DPOP-11-001, AUTH-MTLS-11-002, PLUGIN-DI-08-001) still open; maintain current DOING workstreams only.|