feat: Implement advisory event replay API with conflict explainers

- Added `/concelier/advisories/{vulnerabilityKey}/replay` endpoint to return conflict summaries and explainers.
- Introduced `MergeConflictExplainerPayload` to structure conflict details including type, reason, and source rankings.
- Enhanced `MergeConflictSummary` to include structured explainer payloads and hashes for persisted conflicts.
- Updated `MirrorEndpointExtensions` to enforce rate limits and cache headers for mirror distribution endpoints.
- Refactored tests to cover new replay endpoint functionality and validate conflict explainers.
- Documented changes in TASKS.md, noting completion of mirror distribution endpoints and updated operational runbook.
Vladimir Moushkov
2025-10-20 18:59:26 +03:00
parent 44ad31591c
commit 2b6304c9c3
20 changed files with 3966 additions and 3493 deletions

File diff suppressed because it is too large.


@@ -160,14 +160,14 @@ This file describes implementation of Stella Ops (docs/README.md). Implementation
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Data Science | FEEDCORE-ENGINE-07-002 | Noise prior computation service: learn false-positive priors and expose deterministic summaries. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Core/TASKS.md | TODO | Team Core Engine & Storage Analytics | FEEDCORE-ENGINE-07-003 | Unknown state ledger & confidence seeding: persist unknown flags, seed confidence bands, expose query surface. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | TODO | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-07-001 | Advisory statement & conflict collections: provision Mongo schema/indexes for event-sourced merge. |
| Sprint 7 | Contextual Truth Foundations | src/StellaOps.Concelier.Merge/TASKS.md | DONE (2025-10-20) | BE-Merge | FEEDMERGE-ENGINE-07-001 | Conflict sets & explainers: persist conflict materialization and replay hashes for merge decisions. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Concelier.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Normalization & Storage Backbone | FEEDSTORAGE-MONGO-08-001 | Causal-consistent Concelier storage sessions<br>Scoped session facilitator registered, repositories accept optional session handles, and replica-set failover tests verify read-your-write + monotonic reads. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Authority/TASKS.md | DONE (2025-10-19) | Authority Core & Storage Guild | AUTHSTORAGE-MONGO-08-001 | Harden Authority Mongo usage<br>Scoped Mongo sessions with majority read/write concerns wired through stores and GraphQL/HTTP pipelines; replica-set election regression validated. |
| Sprint 8 | Mongo strengthening | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-MONGO-08-001 | Causal consistency for Excititor repositories<br>Session-scoped repositories shipped with new Mongo records, orchestrators/workers now share scoped sessions, and replica-set failover coverage added via `dotnet test src/StellaOps.Excititor.Storage.Mongo.Tests/StellaOps.Excititor.Storage.Mongo.Tests.csproj`. |
| Sprint 8 | Platform Maintenance | src/StellaOps.Excititor.Storage.Mongo/TASKS.md | DONE (2025-10-19) | Team Excititor Storage | EXCITITOR-STORAGE-03-001 | Statement backfill tooling: shipped admin backfill endpoint, CLI hook (`stellaops excititor backfill-statements`), integration tests, and operator runbook (`docs/dev/EXCITITOR_STATEMENT_BACKFILL.md`). |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.Json/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-201 | Mirror bundle + domain manifest produce signed JSON aggregates for `*.stella-ops.org` mirrors. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Exporter.TrivyDb/TASKS.md | DONE (2025-10-19) | Concelier Export Guild | CONCELIER-EXPORT-08-202 | Mirror-ready Trivy DB bundles: mirror options emit per-domain manifests/metadata/db archives with deterministic digests for downstream sync. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.WebService/TASKS.md | DONE (2025-10-20) | Concelier WebService Guild | CONCELIER-WEB-08-201 | Mirror distribution endpoints expose domain-scoped index/download APIs with auth/quota. |
| Sprint 8 | Mirror Distribution | src/StellaOps.Concelier.Connector.StellaOpsMirror/TASKS.md | DOING (2025-10-19) | BE-Conn-Stella | FEEDCONN-STELLA-08-001 | Concelier mirror connector: fetch mirror manifest, verify signatures, and hydrate canonical DTOs with resume support. |
| Sprint 8 | Mirror Distribution | ops/devops/TASKS.md | DONE (2025-10-19) | DevOps Guild | DEVOPS-MIRROR-08-001 | Managed mirror deployments for `*.stella-ops.org`: Helm/Compose overlays, CDN, runbooks. |
| Sprint 8 | Plugin Infrastructure | src/StellaOps.Plugin/TASKS.md | DOING | Plugin Platform Guild, Authority Core | PLUGIN-DI-08-002.COORD | Authority scoped-service integration handshake<br>Session scheduled for 2025-10-20 15:00-16:00 UTC; agenda + attendees logged in `docs/dev/authority-plugin-di-coordination.md`. |
@@ -246,7 +246,7 @@ This file describes implementation of Stella Ops (docs/README.md). Implementation
| Sprint 10 | Scanner Analyzers & SBOM | src/StellaOps.Scanner.Emit/TASKS.md | TODO | Emit Guild | SCANNER-EMIT-10-607 | Embed scoring inputs, confidence band, and quiet provenance in CycloneDX/DSSE artifacts. |
| Sprint 10 | Benchmarks | bench/TASKS.md | TODO | Bench Guild, Scanner Team | BENCH-SCANNER-10-001 | Analyzer microbench harness + baseline CSV. |
| Sprint 10 | Samples | samples/TASKS.md | TODO | Samples Guild, Scanner Team | SAMPLES-10-001 | Sample images with SBOM/BOM-Index sidecars. |
| Sprint 10 | DevOps Security | ops/devops/TASKS.md | DONE (2025-10-20) | DevOps Guild | DEVOPS-SEC-10-301 | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0; Wave 0A prerequisites confirmed complete before remediation work. |
| Sprint 10 | DevOps Perf | ops/devops/TASKS.md | TODO | DevOps Guild | DEVOPS-PERF-10-001 | Perf smoke job ensuring <5s SBOM compose. |
| Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-DPOP-11-001 | Implement DPoP proof validation + nonce handling for high-value audiences per architecture. |
| Sprint 11 | Signing Chain Bring-up | src/StellaOps.Authority/TASKS.md | DOING (2025-10-19) | Authority Core & Security Guild | AUTH-MTLS-11-002 | Add OAuth mTLS client credential support with certificate-bound tokens and introspection updates. |


@@ -1,466 +1,468 @@
# component_architecture_concelier.md — **StellaOps Concelier** (2025Q4)
> **Scope.** Implementation-ready architecture for **Concelier**: the vulnerability ingest/normalize/merge/export subsystem that produces deterministic advisory data for the Scanner + Policy + Excititor pipeline. Covers domain model, connectors, merge rules, storage schema, exports, APIs, performance, security, and test matrices.
---
## 0) Mission & boundaries
**Mission.** Acquire authoritative **vulnerability advisories** (vendor PSIRTs, distros, OSS ecosystems, CERTs), normalize them into a **canonical model**, reconcile aliases and version ranges, and export **deterministic artifacts** (JSON, Trivy DB) for fast backend joins.
**Boundaries.**
* Concelier **does not** sign with private keys. When attestation is required, the export artifact is handed to the **Signer**/**Attestor** pipeline (out-of-process).
* Concelier **does not** decide PASS/FAIL; it provides data to the **Policy** engine.
* Online operation is **allowlist-only**; air-gapped deployments use the **Offline Kit**.
---
## 1) Topology & processes
**Process shape:** single ASP.NET Core service `StellaOps.Concelier.WebService` hosting:
* **Scheduler** with distributed locks (Mongo backed).
* **Connectors** (fetch/parse/map).
* **Merger** (canonical record assembly + precedence).
* **Exporters** (JSON, Trivy DB).
* **Minimal REST** for health/status/trigger/export.
**Scale:** HA by running N replicas; **locks** prevent overlapping jobs per source/exporter.
---
## 2) Canonical domain model
> Stored in MongoDB (database `concelier`), serialized with a **canonical JSON** writer (stable order, camelCase, normalized timestamps).
### 2.1 Core entities
**Advisory**
```
advisoryId // internal GUID
advisoryKey // stable string key (e.g., CVE-2025-12345 or vendor ID)
title // short title (best-of from sources)
summary // normalized summary (English; i18n optional)
published // earliest source timestamp
modified // latest source timestamp
severity // normalized {none, low, medium, high, critical}
cvss // {v2?, v3?, v4?} objects (vector, baseScore, severity, source)
exploitKnown // bool (e.g., KEV/active exploitation flags)
references[] // typed links (advisory, kb, patch, vendor, exploit, blog)
sources[] // provenance for traceability (doc digests, URIs)
```
**Alias**
```
advisoryId
scheme // CVE, GHSA, RHSA, DSA, USN, MSRC, etc.
value // e.g., "CVE-2025-12345"
```
**Affected**
```
advisoryId
productKey // canonical product identity (see 2.2)
rangeKind // semver | evr | nvra | apk | rpm | deb | generic | exact
introduced? // string (format depends on rangeKind)
fixed? // string (format depends on rangeKind)
lastKnownSafe? // optional explicit safe floor
arch? // arch or platform qualifier if source declares (x86_64, aarch64)
distro? // distro qualifier when applicable (rhel:9, debian:12, alpine:3.19)
ecosystem? // npm|pypi|maven|nuget|golang|…
notes? // normalized notes per source
```
**Reference**
```
advisoryId
url
kind // advisory | patch | kb | exploit | mitigation | blog | cvrf | csaf
sourceTag // e.g., vendor/redhat, distro/debian, oss/ghsa
```
**MergeEvent**
```
advisoryKey
beforeHash // canonical JSON hash before merge
afterHash // canonical JSON hash after merge
mergedAt
inputs[] // source doc digests that contributed
```
**AdvisoryStatement (event log)**
```
statementId // GUID (immutable)
vulnerabilityKey // canonical advisory key (e.g., CVE-2025-12345)
advisoryKey // merge snapshot advisory key (may reference variant)
statementHash // canonical hash of advisory payload
asOf // timestamp of snapshot (UTC)
recordedAt // persistence timestamp (UTC)
inputDocuments[] // document IDs contributing to the snapshot
payload // canonical advisory document (BSON / canonical JSON)
```
**AdvisoryConflict**
```
conflictId // GUID
vulnerabilityKey // canonical advisory key
conflictHash // deterministic hash of conflict payload
asOf // timestamp aligned with originating statement set
recordedAt // persistence timestamp
statementIds[] // related advisoryStatement identifiers
details // structured conflict explanation / merge reasoning
```
- `AdvisoryEventLog` (Concelier.Core) provides the public API for appending immutable statements/conflicts and querying replay history. Inputs are normalized by trimming and lower-casing `vulnerabilityKey`, serializing advisories with `CanonicalJsonSerializer`, and computing SHA-256 hashes (`statementHash`, `conflictHash`) over the canonical JSON payloads. Consumers can replay by key with an optional `asOf` filter to obtain deterministic snapshots ordered by `asOf` then `recordedAt`.
- Conflict explainers are serialized as deterministic `MergeConflictExplainerPayload` records (type, reason, source ranks, winning values); replay clients can parse the payload to render human-readable rationales without re-computing precedence.
- Concelier.WebService exposes the immutable log via `GET /concelier/advisories/{vulnerabilityKey}/replay[?asOf=UTC_ISO8601]`, returning the latest statements (with hex-encoded hashes) and any conflict explanations for downstream exporters and APIs.
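For orientation, a minimal sketch of those hashing rules, assuming `CanonicalJsonSerializer` exposes a `Serialize` helper (the exact API surface is not shown in this dossier):
```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class ReplayHashing
{
    // Keys are normalized before append/query: trim, then lower-case.
    public static string NormalizeVulnerabilityKey(string key) => key.Trim().ToLowerInvariant();

    // statementHash / conflictHash: SHA-256 over the canonical JSON payload,
    // hex-encoded as returned by the replay endpoint.
    public static string ComputeHash(object payload)
    {
        string canonicalJson = CanonicalJsonSerializer.Serialize(payload); // assumed helper
        byte[] digest = SHA256.HashData(Encoding.UTF8.GetBytes(canonicalJson));
        return Convert.ToHexString(digest).ToLowerInvariant();
    }
}
```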
**ExportState**
```
exportKind // json | trivydb
baseExportId? // last full baseline
baseDigest? // digest of last full baseline
lastFullDigest? // digest of last full export
lastDeltaDigest? // digest of last delta export
cursor // per-kind incremental cursor
files[] // last manifest snapshot (path → sha256)
```
### 2.2 Product identity (`productKey`)
* **Primary:** `purl` (Package URL).
* **OS packages:** RPM (NEVRA→purl:rpm), DEB (dpkg→purl:deb), APK (apk→purl:alpine), with **EVR/NVRA** preserved.
* **Secondary:** `cpe` retained for compatibility; advisory records may carry both.
* **Image/platform:** `oci:<registry>/<repo>@<digest>` for image-level advisories (rare).
* **Unmappable:** if a source is non-deterministic, keep native string under `productKey="native:<provider>:<id>"` and mark **non-joinable**.
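As a worked example, a simplified `productKey` builder under these rules (real purl generation carries additional escaping and qualifier rules, so treat this as a sketch rather than the canonical implementation):
```csharp
static class ProductKeys
{
    // OS packages map to purl while preserving the native version scheme (EVR/NVRA).
    public static string Rpm(string distro, string name, string evr, string arch)
        => $"pkg:rpm/{distro}/{name}@{evr}?arch={arch}";

    public static string Deb(string distro, string name, string version)
        => $"pkg:deb/{distro}/{name}@{version}";

    // Non-deterministic sources keep their native identifier and are marked non-joinable.
    public static string Native(string provider, string id)
        => $"native:{provider}:{id}";
}
```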
---
## 3) Source families & precedence
### 3.1 Families
* **Vendor PSIRTs**: Microsoft, Oracle, Cisco, Adobe, Apple, VMware, Chromium
* **Linux distros**: Red Hat, SUSE, Ubuntu, Debian, Alpine…
* **OSS ecosystems**: OSV, GHSA (GitHub Security Advisories), PyPI, npm, Maven, NuGet, Go.
* **CERTs / national CSIRTs**: CISA (KEV, ICS), JVN, ACSC, CCCS, KISA, CERT-FR/BUND, etc.
### 3.2 Precedence (when claims conflict)
1. **Vendor PSIRT** (authoritative for their product).
2. **Distro** (authoritative for packages they ship, including backports).
3. **Ecosystem** (OSV/GHSA) for library semantics.
4. **CERTs/aggregators** for enrichment (KEV/known exploited).
> Precedence affects **Affected** ranges and **fixed** info; **severity** is normalized to the **maximum** credible severity unless policy overrides. Conflicts are retained with **source provenance**.
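One way to apply this ladder when a single claim must be selected (the enum ranks mirror the list above; everything else is an illustrative assumption):
```csharp
using System.Collections.Generic;
using System.Linq;

enum SourceFamily { VendorPsirt = 0, Distro = 1, Ecosystem = 2, Cert = 3 }

static class Precedence
{
    // Lowest rank wins; stable ordering keeps the merge deterministic.
    public static T? Pick<T>(IEnumerable<(SourceFamily Family, T Claim)> claims) where T : class
        => claims.OrderBy(c => (int)c.Family).Select(c => c.Claim).FirstOrDefault();
}
```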
---
## 4) Connectors & normalization
### 4.1 Connector contract
```csharp
public interface IFeedConnector {
string SourceName { get; }
Task FetchAsync(IServiceProvider sp, CancellationToken ct); // -> document collection
Task ParseAsync(IServiceProvider sp, CancellationToken ct); // -> dto collection (validated)
Task MapAsync(IServiceProvider sp, CancellationToken ct); // -> advisory/alias/affected/reference
}
```
* **Fetch**: windowed (cursor), conditional GET (ETag/LastModified), retry/backoff, rate limiting.
* **Parse**: schema validation (JSON Schema, XSD/CSAF), content type checks; write **DTO** with normalized casing.
* **Map**: build canonical records; all outputs carry **provenance** (doc digest, URI, anchors).
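A skeletal connector against this contract illustrates where each stage plugs in; apart from `IFeedConnector` itself, the names here (source id, stage comments) are hypothetical:
```csharp
public sealed class ExampleCsafConnector : IFeedConnector
{
    public string SourceName => "example-csaf"; // hypothetical source id

    public async Task FetchAsync(IServiceProvider sp, CancellationToken ct)
    {
        // Windowed fetch: read the cursor from source_state, issue conditional GETs
        // (If-None-Match / If-Modified-Since), persist raw documents with sha256 + etag.
        await Task.CompletedTask;
    }

    public async Task ParseAsync(IServiceProvider sp, CancellationToken ct)
    {
        // Validate fetched documents against the provider schema (CSAF here) and write DTOs.
        await Task.CompletedTask;
    }

    public async Task MapAsync(IServiceProvider sp, CancellationToken ct)
    {
        // Project validated DTOs into advisory/alias/affected/reference rows,
        // attaching provenance (doc digest, URI) to every output.
        await Task.CompletedTask;
    }
}
```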
### 4.2 Version range normalization
* **SemVer** ecosystems (npm, pypi, maven, nuget, golang): normalize to `introduced`/`fixed` semver ranges (use `~`, `^`, `<`, `>=` canonicalized to intervals).
* **RPM EVR**: `epoch:version-release` with `rpmvercmp` semantics; store raw EVR strings and also **computed order keys** for query.
* **DEB**: dpkg version comparison semantics mirrored; store computed keys.
* **APK**: Alpine version semantics; compute order keys.
* **Generic**: if provider uses text, retain raw; do **not** invent ranges.
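To make the "computed order key" idea concrete, a deliberately simplified EVR comparer is sketched below; real `rpmvercmp` tokenizes alpha/numeric runs and has tilde/caret rules, so this is illustrative only:
```csharp
using System;

static class Evr
{
    public static int Compare(string left, string right)
    {
        static (int Epoch, string Rest) Split(string evr)
        {
            int colon = evr.IndexOf(':');
            return colon < 0 ? (0, evr) : (int.Parse(evr[..colon]), evr[(colon + 1)..]);
        }

        var (le, lrest) = Split(left);
        var (re, rrest) = Split(right);
        if (le != re) return le.CompareTo(re); // epoch dominates

        string[] ls = lrest.Split('.', '-');
        string[] rs = rrest.Split('.', '-');
        for (int i = 0; i < Math.Max(ls.Length, rs.Length); i++)
        {
            string a = i < ls.Length ? ls[i] : "0";
            string b = i < rs.Length ? rs[i] : "0";
            bool na = int.TryParse(a, out int ai), nb = int.TryParse(b, out int bi);
            int cmp = na && nb ? ai.CompareTo(bi) : string.CompareOrdinal(a, b);
            if (cmp != 0) return cmp;
        }
        return 0;
    }
}
```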
### 4.3 Severity & CVSS
* Normalize **CVSS v2/v3/v4** where available (vector, baseScore, severity).
* If multiple CVSS sources exist, track them all; **effective severity** defaults to **max** by policy (configurable).
* **ExploitKnown** toggled by KEV and equivalent sources; store **evidence** (source, date).
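The default policy reduces to a one-liner (severity ordering from §2.1):
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Severity
{
    private static readonly string[] Order = { "none", "low", "medium", "high", "critical" };

    // Effective severity = maximum credible severity across all sources.
    public static string Effective(IEnumerable<string> fromAllSources)
        => fromAllSources.DefaultIfEmpty("none").MaxBy(s => Array.IndexOf(Order, s))!;
}
```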
---
## 5) Merge engine
### 5.1 Keying & identity
* Identity graph: **CVE** is primary node; vendor/distro IDs resolved via **Alias** edges (from connectors and Concelier's alias tables).
* `advisoryKey` is the canonical primary key (CVE if present, else vendor/distro key).
### 5.2 Merge algorithm (deterministic)
1. **Gather** all rows for `advisoryKey` (across sources).
2. **Select title/summary** by precedence source (vendor>distro>ecosystem>cert).
3. **Union aliases** (dedupe by scheme+value).
4. **Merge `Affected`** with rules:
* Prefer **vendor** ranges for vendor products; prefer **distro** for **distro-shipped** packages.
* If both exist for same `productKey`, keep **both**; mark `sourceTag` and `precedence` so **Policy** can decide.
* Never collapse range semantics across different families (e.g., rpm EVR vs semver).
5. **CVSS/severity**: record all CVSS sets; compute **effectiveSeverity** = max (unless policy override).
6. **References**: union with type precedence (advisory > patch > kb > exploit > blog); dedupe by URL; preserve `sourceTag`.
7. Produce **canonical JSON**; compute **afterHash**; store **MergeEvent** with inputs and hashes.
> The merge is **pure** given inputs. Any change in inputs or precedence matrices changes the **hash** predictably.
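Step 6, for instance, can be expressed as a small deterministic fold (record shape and kind table are illustrative):
```csharp
using System;
using System.Collections.Generic;
using System.Linq;

sealed record Ref(string Url, string Kind, string SourceTag);

static class ReferenceMerge
{
    private static readonly string[] KindOrder = { "advisory", "patch", "kb", "exploit", "blog" };

    private static int Rank(string kind)
    {
        int i = Array.IndexOf(KindOrder, kind);
        return i < 0 ? int.MaxValue : i; // unknown kinds sort last
    }

    // Union by URL, keep the highest-precedence kind, emit in stable URL order
    // so the canonical JSON (and therefore afterHash) stays reproducible.
    public static IReadOnlyList<Ref> Merge(IEnumerable<Ref> refs)
        => refs.GroupBy(r => r.Url, StringComparer.OrdinalIgnoreCase)
               .Select(g => g.OrderBy(r => Rank(r.Kind)).First())
               .OrderBy(r => r.Url, StringComparer.Ordinal)
               .ToList();
}
```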
---
## 6) Storage schema (MongoDB)
**Collections & indexes**
* `source` `{_id, type, baseUrl, enabled, notes}`
* `source_state` `{sourceName(unique), enabled, cursor, lastSuccess, backoffUntil, paceOverrides}`
* `document` `{_id, sourceName, uri, fetchedAt, sha256, contentType, status, metadata, gridFsId?, etag?, lastModified?}`
* Index: `{sourceName:1, uri:1}` unique, `{fetchedAt:-1}`
* `dto` `{_id, sourceName, documentId, schemaVer, payload, validatedAt}`
* Index: `{sourceName:1, documentId:1}`
* `advisory` `{_id, advisoryKey, title, summary, published, modified, severity, cvss, exploitKnown, sources[]}`
* Index: `{advisoryKey:1}` unique, `{modified:-1}`, `{severity:1}`, text index (title, summary)
* `alias` `{advisoryId, scheme, value}`
* Index: `{scheme:1,value:1}`, `{advisoryId:1}`
* `affected` `{advisoryId, productKey, rangeKind, introduced?, fixed?, arch?, distro?, ecosystem?}`
* Index: `{productKey:1}`, `{advisoryId:1}`, `{productKey:1, rangeKind:1}`
* `reference` `{advisoryId, url, kind, sourceTag}`
* Index: `{advisoryId:1}`, `{kind:1}`
* `merge_event` `{advisoryKey, beforeHash, afterHash, mergedAt, inputs[]}`
* Index: `{advisoryKey:1, mergedAt:-1}`
* `export_state` `{_id(exportKind), baseExportId?, baseDigest?, lastFullDigest?, lastDeltaDigest?, cursor, files[]}`
* `locks` `{_id(jobKey), holder, acquiredAt, heartbeatAt, leaseMs, ttlAt}` (TTL cleans dead locks)
* `jobs` `{_id, type, args, state, startedAt, heartbeatAt, endedAt, error}`
**GridFS buckets**: `fs.documents` for raw payloads.
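As an example, the `advisory` indexes can be provisioned with the official C# driver; the names follow the schema above, the rest is ordinary driver boilerplate:
```csharp
using MongoDB.Bson;
using MongoDB.Driver;

var db = new MongoClient("mongodb://mongo/concelier").GetDatabase("concelier");
var advisories = db.GetCollection<BsonDocument>("advisory");

await advisories.Indexes.CreateManyAsync(new[]
{
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Ascending("advisoryKey"),
        new CreateIndexOptions { Unique = true }),
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Descending("modified")),
    new CreateIndexModel<BsonDocument>(
        Builders<BsonDocument>.IndexKeys.Text("title").Text("summary")),
});
```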
---
## 7) Exporters
### 7.1 Deterministic JSON (vuln-list style)
* Folder structure mirroring `/<scheme>/<first-two>/<rest>/…` with one JSON per advisory; deterministic ordering, stable timestamps, normalized whitespace.
* `manifest.json` lists all files with SHA-256 and a top-level **export digest**.
### 7.2 Trivy DB exporter
* Builds Bolt DB archives compatible with Trivy; supports **full** and **delta** modes.
* In delta, unchanged blobs are reused from the base; metadata captures:
```
{
"mode": "delta|full",
"baseExportId": "...",
"baseManifestDigest": "sha256:...",
"changed": ["path1", "path2"],
"removed": ["path3"]
}
```
* Optional ORAS push (OCI layout) for registries.
* Offline kit bundles include Trivy DB + JSON tree + export manifest.
* Mirror-ready bundles: when `concelier.trivy.mirror` defines domains, the exporter emits `mirror/index.json` plus per-domain `manifest.json`, `metadata.json`, and `db.tar.gz` files with SHA-256 digests so Concelier mirrors can expose domain-scoped download endpoints.
* Concelier.WebService serves `/concelier/exports/index.json` and `/concelier/exports/mirror/{domain}/…` directly from the export tree with deterministic cache headers (index: `max-age=60`; bundles: `max-age=300, immutable`) and per-domain hourly rate budgets; the endpoints honour Stella Ops Authority or CIDR bypass lists depending on mirror topology.
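A minimal-API sketch of one mirror route with those cache semantics; auth, quota enforcement, and path validation are elided, and the on-disk layout under the export root is an assumption based on the exporter output described above:
```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Mirrors CONCELIER__MIRROR__EXPORTROOT via the standard env-var to config-key mapping.
string exportRoot = app.Configuration["Concelier:Mirror:ExportRoot"] ?? "/exports/json";

app.MapGet("/concelier/exports/mirror/{domain}/manifest.json", (string domain, HttpContext ctx) =>
{
    var path = Path.Combine(exportRoot, "mirror", domain, "manifest.json");
    if (!File.Exists(path)) return Results.NotFound();

    // Deterministic cache header for per-domain payloads.
    ctx.Response.Headers.CacheControl = "public, max-age=300, immutable";
    return Results.File(path, "application/json");
});

app.Run();
```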
### 7.3 Handoff to Signer/Attestor (optional)
* On export completion, if `attest: true` is set in job args, Concelier **posts** the artifact metadata to **Signer**/**Attestor**; Concelier itself **does not** hold signing keys.
* Export record stores returned `{ uuid, index, url }` from **Rekor v2**.
---
## 8) REST APIs
All under `/api/v1/concelier`.
**Health & status**
```
GET /healthz | /readyz
GET /status → sources, last runs, export cursors
```
**Sources & jobs**
```
GET /sources → list of configured sources
POST /sources/{name}/trigger → { jobId }
POST /sources/{name}/pause | /resume → toggle
GET /jobs/{id} → job status
```
**Exports**
```
POST /exports/json { full?:bool, force?:bool, attest?:bool } → { exportId, digest, rekor? }
POST /exports/trivy { full?:bool, force?:bool, publish?:bool, attest?:bool } → { exportId, digest, rekor? }
GET /exports/{id} → export metadata (kind, digest, createdAt, rekor?)
GET /concelier/exports/index.json → mirror index describing available domains/bundles
GET /concelier/exports/mirror/{domain}/manifest.json
GET /concelier/exports/mirror/{domain}/bundle.json
GET /concelier/exports/mirror/{domain}/bundle.json.jws
```
**Search (operator debugging)**
```
GET /advisories/{key}
GET /advisories?scheme=CVE&value=CVE-2025-12345
GET /affected?productKey=pkg:rpm/openssl&limit=100
```
**AuthN/Z:** Authority tokens (OpTok) with roles: `concelier.read`, `concelier.admin`, `concelier.export`.
---
## 9) Configuration (YAML)
```yaml
concelier:
mongo: { uri: "mongodb://mongo/concelier" }
s3:
endpoint: "http://minio:9000"
bucket: "stellaops-concelier"
scheduler:
windowSeconds: 30
maxParallelSources: 4
sources:
- name: redhat
kind: csaf
baseUrl: https://access.redhat.com/security/data/csaf/v2/
signature: { type: pgp, keys: [ "…redhat PGP…" ] }
enabled: true
windowDays: 7
- name: suse
kind: csaf
baseUrl: https://ftp.suse.com/pub/projects/security/csaf/
signature: { type: pgp, keys: [ "…suse PGP…" ] }
- name: ubuntu
kind: usn-json
baseUrl: https://ubuntu.com/security/notices.json
signature: { type: none }
- name: osv
kind: osv
baseUrl: https://api.osv.dev/v1/
signature: { type: none }
- name: ghsa
kind: ghsa
baseUrl: https://api.github.com/graphql
auth: { tokenRef: "env:GITHUB_TOKEN" }
exporters:
json:
enabled: true
output: s3://stellaops-concelier/json/
trivy:
enabled: true
mode: full
output: s3://stellaops-concelier/trivy/
oras:
enabled: false
repo: ghcr.io/org/concelier
precedence:
vendorWinsOverDistro: true
distroWinsOverOsv: true
severity:
policy: max # or 'vendorPreferred' / 'distroPreferred'
```
---
## 10) Security & compliance
* **Outbound allowlist** per connector (domains, protocols); proxy support; TLS pinning where possible.
* **Signature verification** for raw docs (PGP/cosign/x509) with results stored in `document.metadata.sig`. Docs failing verification may still be ingested but flagged; **merge** can downweight or ignore them by config.
* **No secrets in logs**; auth material via `env:` or mounted files; HTTP redaction of `Authorization` headers.
* **Multi-tenant**: per-tenant DBs or prefixes; per-tenant S3 prefixes; tenant-scoped API tokens.
* **Determinism**: canonical JSON writer; export digests stable across runs given same inputs.
---
## 11) Performance targets & scale
* **Ingest**: ≥ 5k documents/min on 4 cores (CSAF/OpenVEX/JSON).
* **Normalize/map**: ≥ 50k `Affected` rows/min on 4 cores.
* **Merge**: ≤ 10ms P95 per advisory at steady-state updates.
* **Export**: 1M advisories JSON in ≤ 90s (streamed, zstd), Trivy DB in ≤ 60s on 8 cores.
* **Memory**: hard cap per job; chunked streaming writers; backpressure to avoid GC spikes.
**Scale pattern**: add Concelier replicas; Mongo scaling via indices and read/write concerns; GridFS only for oversized docs.
---
## 12) Observability
* **Metrics**
* `concelier.fetch.docs_total{source}`
* `concelier.fetch.bytes_total{source}`
* `concelier.parse.failures_total{source}`
* `concelier.map.affected_total{source}`
* `concelier.merge.changed_total`
* `concelier.export.bytes{kind}`
* `concelier.export.duration_seconds{kind}`
* **Tracing** around fetch/parse/map/merge/export.
* **Logs**: structured with `source`, `uri`, `docDigest`, `advisoryKey`, `exportId`.
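These instruments map directly onto `System.Diagnostics.Metrics`; the meter name below is an assumption:
```csharp
using System.Collections.Generic;
using System.Diagnostics.Metrics;

static class ConcelierMetrics
{
    private static readonly Meter Meter = new("StellaOps.Concelier"); // assumed meter name

    public static readonly Counter<long> FetchDocs =
        Meter.CreateCounter<long>("concelier.fetch.docs_total");

    public static readonly Histogram<double> ExportDuration =
        Meter.CreateHistogram<double>("concelier.export.duration_seconds");

    public static void RecordFetch(string source, long docs = 1)
        => FetchDocs.Add(docs, new KeyValuePair<string, object?>("source", source));
}
```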
---
## 13) Testing matrix
* **Connectors:** fixture suites for each provider/format (happy path; malformed; signature fail).
* **Version semantics:** EVR vs dpkg vs semver edge cases (epoch bumps, tilde versions, prereleases).
* **Merge:** conflicting sources (vendor vs distro vs OSV); verify precedence & dual retention.
* **Export determinism:** byte-for-byte stable outputs across runs; digest equality.
* **Performance:** soak tests with 1M advisories; cap memory; verify backpressure.
* **API:** pagination, filters, RBAC, error envelopes (RFC 7807).
* **Offline kit:** bundle build & import correctness.
---
## 14) Failure modes & recovery
* **Source outages:** scheduler backs off with exponential delay; `source_state.backoffUntil`; alerts on staleness.
* **Schema drifts:** parse stage marks DTO invalid; job fails with clear diagnostics; connector version flags track supported schema ranges.
* **Partial exports:** exporters write to temp prefix; **manifest commit** is atomic; only then move to final prefix and update `export_state`.
* **Resume:** all stages idempotent; `source_state.cursor` supports window resume.
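For filesystem targets the commit step reduces to "write everything under a temp prefix, then publish atomically"; a sketch (S3 targets would instead upload under a temp prefix and write the manifest object last):
```csharp
using System.IO;

static class ExportCommit
{
    // tempDir already contains all artifact files plus manifest.json; the move
    // publishes the whole tree at once, so readers never observe a partial export.
    public static void Commit(string tempDir, string finalDir)
    {
        if (Directory.Exists(finalDir))
            throw new IOException($"export already committed: {finalDir}");
        Directory.Move(tempDir, finalDir); // atomic on the same filesystem
    }
}
```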
---
## 15) Operator runbook (quick)
* **Trigger all sources:** `POST /api/v1/concelier/sources/*/trigger`
* **Force full export JSON:** `POST /api/v1/concelier/exports/json { "full": true, "force": true }`
* **Force Trivy DB delta publish:** `POST /api/v1/concelier/exports/trivy { "full": false, "publish": true }`
* **Inspect advisory:** `GET /api/v1/concelier/advisories?scheme=CVE&value=CVE-2025-12345`
* **Pause noisy source:** `POST /api/v1/concelier/sources/osv/pause`
---
## 16) Rollout plan
1. **MVP**: Red Hat (CSAF), SUSE (CSAF), Ubuntu (USN JSON), OSV; JSON export.
2. **Add**: GHSA GraphQL, Debian (DSA HTML/JSON), Alpine secdb; Trivy DB export.
3. **Attestation handoff**: integrate with **Signer/Attestor** (optional).
4. **Scale & diagnostics**: provider dashboards, staleness alerts, export cache reuse.
5. **Offline kit**: end-to-end verified bundles for air-gap.


@@ -337,7 +337,7 @@ Prometheus + OTLP; Grafana dashboards ship in the charts.
* **Vulnerability response**:
* Concelier red-flag advisories trigger accelerated **stable** patch rollout; UI/CLI “security patch available” notice.
* 2025-10: Pinned `MongoDB.Driver` **3.5.0** and `SharpCompress` **0.41.0** across services (DEVOPS-SEC-10-301) to eliminate NU1902/NU1903 warnings surfaced during scanner cache/worker test runs; repacked the local `Mongo2Go` feed so test fixtures inherit the patched dependencies; future bumps follow the same central override pattern.
* **Backups/DR**:


@@ -1,196 +1,224 @@
# Concelier & Excititor Mirror Operations
This runbook describes how StellaOps operates the managed mirrors under `*.stella-ops.org`.
It covers Docker Compose and Helm deployment overlays, secret handling for multi-tenant
authn, CDN fronting, and the recurring sync pipeline that keeps mirror bundles current.
## 1. Prerequisites
- **Authority access**: client credentials (`client_id` + secret) authorised for
`concelier.mirror.read` and `excititor.mirror.read` scopes. Secrets live outside git.
- **Signed TLS certificates**: wildcard or per-domain (`mirror-primary`, `mirror-community`).
Store them under `deploy/compose/mirror-gateway/tls/` or in Kubernetes secrets.
- **Mirror gateway credentials**: Basic Auth htpasswd files per domain. Generate with
`htpasswd -B`. Operators distribute credentials to downstream consumers.
- **Export artifact source**: read access to the canonical S3 buckets (or rsync share)
that hold `concelier` JSON bundles and `excititor` VEX exports.
- **Persistent volumes**: storage for Concelier job metadata and mirror export trees.
For Helm, provision PVCs (`concelier-mirror-jobs`, `concelier-mirror-exports`,
`excititor-mirror-exports`, `mirror-mongo-data`, `mirror-minio-data`) before rollout.
### 1.1 Service configuration quick reference
Concelier.WebService exposes the mirror HTTP endpoints once `CONCELIER__MIRROR__ENABLED=true`.
Key knobs:
- `CONCELIER__MIRROR__EXPORTROOT`: root folder containing export snapshots (`<exportId>/mirror/*`).
- `CONCELIER__MIRROR__ACTIVEEXPORTID`: optional explicit export id; otherwise the service auto-falls back to the `latest/` symlink or newest directory.
- `CONCELIER__MIRROR__REQUIREAUTHENTICATION`: default auth requirement; override per domain with `CONCELIER__MIRROR__DOMAINS__{n}__REQUIREAUTHENTICATION`.
- `CONCELIER__MIRROR__MAXINDEXREQUESTSPERHOUR`: budget for `/concelier/exports/index.json`. Domains inherit this value unless they define `__MAXDOWNLOADREQUESTSPERHOUR`.
- `CONCELIER__MIRROR__DOMAINS__{n}__ID`: domain identifier matching the exporter manifest; additional keys configure display name and rate budgets.
> The service honours Stella Ops Authority when `CONCELIER__AUTHORITY__ENABLED=true` and `ALLOWANONYMOUSFALLBACK=false`. Use the bypass CIDR list (`CONCELIER__AUTHORITY__BYPASSNETWORKS__*`) for in-cluster ingress gateways that terminate Basic Auth. Unauthorized requests emit `WWW-Authenticate: Bearer` so downstream automation can detect token failures.
Mirror responses carry deterministic cache headers: `/index.json` returns `Cache-Control: public, max-age=60`, while per-domain manifests/bundles include `Cache-Control: public, max-age=300, immutable`. Rate limiting surfaces `Retry-After` when quotas are exceeded.
## 2. Secret & certificate layout
### Docker Compose (`deploy/compose/docker-compose.mirror.yaml`)
- `deploy/compose/env/mirror.env.example`: copy to `.env` and adjust quotas or domain IDs.
- `deploy/compose/mirror-secrets/`: mount read-only into `/run/secrets`. Place:
- `concelier-authority-client`: Authority client secret.
- `excititor-authority-client` (optional): reserved for future authn.
- `deploy/compose/mirror-gateway/tls/`: PEM-encoded cert/key pairs:
- `mirror-primary.crt`, `mirror-primary.key`
- `mirror-community.crt`, `mirror-community.key`
- `deploy/compose/mirror-gateway/secrets/`: htpasswd files:
- `mirror-primary.htpasswd`
- `mirror-community.htpasswd`
### Helm (`deploy/helm/stellaops/values-mirror.yaml`)
Create secrets in the target namespace:
```bash
kubectl create secret generic concelier-mirror-auth \
--from-file=concelier-authority-client=concelier-authority-client
kubectl create secret generic excititor-mirror-auth \
--from-file=excititor-authority-client=excititor-authority-client
kubectl create secret tls mirror-gateway-tls \
--cert=mirror-primary.crt --key=mirror-primary.key
kubectl create secret generic mirror-gateway-htpasswd \
--from-file=mirror-primary.htpasswd --from-file=mirror-community.htpasswd
```
> Keep Basic Auth lists short-lived (rotate quarterly) and document credential recipients.
## 3. Deployment
### 3.1 Docker Compose (edge mirrors, lab validation)
1. `cp deploy/compose/env/mirror.env.example deploy/compose/env/mirror.env`
2. Populate secrets/tls directories as described above.
3. Sync mirror bundles (see §4) into `deploy/compose/mirror-data/…` and ensure they are mounted
on the host path backing the `concelier-exports` and `excititor-exports` volumes.
4. Run the profile validator: `deploy/tools/validate-profiles.sh`.
5. Launch: `docker compose --env-file env/mirror.env -f docker-compose.mirror.yaml up -d`.
### 3.2 Helm (production mirrors)
1. Provision PVCs sized for mirror bundles (baseline: 20 GiB per domain).
2. Create secrets/tls config maps (§2).
3. `helm upgrade --install mirror deploy/helm/stellaops -f deploy/helm/stellaops/values-mirror.yaml`.
4. Annotate the `stellaops-mirror-gateway` service with ingress/LoadBalancer metadata required by
your CDN (e.g., AWS load balancer scheme internal + NLB idle timeout).
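On AWS, for instance, an internal NLB can be requested with annotations along these lines (annotation keys depend on your load balancer controller, so verify them against its documentation before relying on this sketch):

```bash
kubectl annotate service stellaops-mirror-gateway \
  service.beta.kubernetes.io/aws-load-balancer-scheme=internal \
  service.beta.kubernetes.io/aws-load-balancer-type=nlb
```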
## 4. Artifact sync workflow
Mirrors never generate exports—they ingest signed bundles produced by the Concelier and Excititor
export jobs. Recommended sync pattern:
### 4.1 Compose host (systemd timer)
`/usr/local/bin/mirror-sync.sh`:
```bash
#!/usr/bin/env bash
set -euo pipefail
export AWS_ACCESS_KEY_ID=
export AWS_SECRET_ACCESS_KEY=
aws s3 sync s3://mirror-stellaops/concelier/latest \
/opt/stellaops/mirror-data/concelier --delete --size-only
aws s3 sync s3://mirror-stellaops/excititor/latest \
/opt/stellaops/mirror-data/excititor --delete --size-only
```
Schedule with a systemd timer every 5 minutes (a minimal unit sketch follows below). The Compose volumes mount `/opt/stellaops/mirror-data/*`
into the containers read-only, matching `CONCELIER__MIRROR__EXPORTROOT=/exports/json` and
`EXCITITOR__ARTIFACTS__FILESYSTEM__ROOT=/exports`.
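A minimal service/timer pair for that schedule, assuming the script above sits at `/usr/local/bin/mirror-sync.sh` and AWS credentials come from an operator-managed environment file (unit and file names are illustrative):

```ini
# /etc/systemd/system/mirror-sync.service
[Unit]
Description=Sync StellaOps mirror bundles

[Service]
Type=oneshot
# Supplies the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY values the script leaves blank.
EnvironmentFile=/etc/stellaops/mirror-sync.env
ExecStart=/usr/local/bin/mirror-sync.sh

# /etc/systemd/system/mirror-sync.timer
[Unit]
Description=Run mirror-sync every 5 minutes

[Timer]
OnBootSec=2min
OnUnitActiveSec=5min

[Install]
WantedBy=timers.target
```

Activate with `systemctl daemon-reload && systemctl enable --now mirror-sync.timer`.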
### 4.2 Kubernetes (CronJob)
Create a CronJob running the AWS CLI (or rclone) in the same namespace, writing into the PVCs:
```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: mirror-sync
spec:
  schedule: "*/5 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: sync
              image: public.ecr.aws/aws-cli/aws-cli@sha256:5df5f52c29f5e3ba46d0ad9e0e3afc98701c4a0f879400b4c5f80d943b5fadea
              command:
                - /bin/sh
                - -c
                - >
                  aws s3 sync s3://mirror-stellaops/concelier/latest /exports/concelier --delete --size-only &&
                  aws s3 sync s3://mirror-stellaops/excititor/latest /exports/excititor --delete --size-only
              volumeMounts:
                - name: concelier-exports
                  mountPath: /exports/concelier
                - name: excititor-exports
                  mountPath: /exports/excititor
              envFrom:
                - secretRef:
                    name: mirror-sync-aws
          restartPolicy: OnFailure
          volumes:
            - name: concelier-exports
              persistentVolumeClaim:
                claimName: concelier-mirror-exports
            - name: excititor-exports
              persistentVolumeClaim:
                claimName: excititor-mirror-exports
```
## 5. CDN integration
1. Point the CDN origin at the mirror gateway (Compose host or Kubernetes LoadBalancer).
2. Honour the response headers emitted by the gateway and Concelier/Excititor:
`Cache-Control: public, max-age=300, immutable` for mirror payloads.
3. Configure origin shields in the CDN to prevent cache stampedes. Recommended TTLs:
- Index (`/concelier/exports/index.json`, `/excititor/mirror/*/index`) → 60s.
- Bundle/manifest payloads → 300s.
4. Forward the `Authorization` header—Basic Auth terminates at the gateway.
5. Enforce per-domain rate limits at the CDN (matching gateway budgets) and enable logging
to SIEM for anomaly detection.
## 6. Smoke tests
After each deployment or sync cycle (temporarily set low budgets if you need to observe 429 responses):
```bash
# Index with Basic Auth
curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/index.json | jq 'keys'
# Mirror manifest signature and cache headers
curl -u $PRIMARY_CREDS -I https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/manifest.json \
| tee /tmp/manifest-headers.txt
grep -E '^Cache-Control: ' /tmp/manifest-headers.txt # expect public, max-age=300, immutable
# Excititor consensus bundle metadata
curl -u $COMMUNITY_CREDS https://mirror-community.stella-ops.org/excititor/mirror/community/index \
| jq '.exports[].exportKey'
# Signed bundle + detached JWS (spot check digests)
curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/bundle.json \
  -o bundle.json
curl -u $PRIMARY_CREDS https://mirror-primary.stella-ops.org/concelier/exports/mirror/primary/bundle.json.jws \
  -o bundle.json.jws
cosign verify-blob --signature bundle.json.jws --key mirror-key.pub bundle.json
# Service-level auth check (inside cluster; no gateway credentials)
kubectl exec deploy/stellaops-concelier -- curl -si http://localhost:8443/concelier/exports/mirror/primary/manifest.json \
| head -n 5 # expect HTTP/1.1 401 with WWW-Authenticate: Bearer
# Rate limit smoke (repeat quickly; second call should return 429 + Retry-After)
for i in 1 2; do
curl -s -o /dev/null -D - https://mirror-primary.stella-ops.org/concelier/exports/index.json \
-u $PRIMARY_CREDS | grep -E '^(HTTP/|Retry-After:)'
sleep 1
done
```
Watch the gateway metrics (`nginx_vts` or access logs) for cache hits. In Kubernetes, `kubectl logs deploy/stellaops-mirror-gateway`
should show `X-Cache-Status: HIT/MISS`.
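To tally cache behaviour from the gateway logs (assuming the `X-Cache-Status` header is logged verbatim):

```bash
kubectl logs deploy/stellaops-mirror-gateway --since=1h \
  | grep -o 'X-Cache-Status: [A-Z]*' | sort | uniq -c
```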
## 7. Maintenance & rotation
- **Bundle freshness**: alert if sync job lag exceeds 15 minutes or if `concelier` logs
  `Mirror export root is not configured` (a check sketch follows this list).
- **Secret rotation**: change Authority client secrets and Basic Auth credentials quarterly.
  Update the mounted secrets and restart deployments (`docker compose restart concelier` or
  `kubectl rollout restart deploy/stellaops-concelier`).
- **TLS renewal**: reissue certificates, place new files, and reload the gateway (`docker compose exec mirror-gateway nginx -s reload`).
- **Quota tuning**: adjust per-domain `MAXDOWNLOADREQUESTSPERHOUR` in `.env` or the values file.
  Align CDN rate limits and inform downstreams.
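For the freshness alert, a cron-friendly sketch that flags stale bundles (paths and the 15-minute threshold mirror the guidance above; GNU `find` is assumed, so treat this as illustrative):

```bash
#!/usr/bin/env bash
set -euo pipefail
# Newest file in the Concelier mirror tree, as a Unix timestamp.
newest=$(find /opt/stellaops/mirror-data/concelier -type f -printf '%T@\n' | sort -n | tail -1)
age=$(( $(date +%s) - ${newest%.*} ))
if [ "$age" -gt 900 ]; then
  echo "mirror sync lag ${age}s exceeds 15 minutes" >&2
  exit 1
fi
```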
## 8. References
- Deployment profiles: `deploy/compose/docker-compose.mirror.yaml`,
`deploy/helm/stellaops/values-mirror.yaml`
- Mirror architecture dossiers: `docs/ARCHITECTURE_CONCELIER.md`,
`docs/ARCHITECTURE_EXCITITOR_MIRRORS.md`
- Export bundling: `docs/ARCHITECTURE_DEVOPS.md` §3, `docs/ARCHITECTURE_EXCITITOR.md` §7

Binary file not shown.


@@ -10,4 +10,5 @@
| DEVOPS-REL-14-001 | TODO | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-17-002 | TODO | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
| DEVOPS-SEC-10-301 | DOING (2025-10-19) | DevOps Guild | Wave 0A complete | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0 surfaced during scanner cache and worker test runs. | Dependencies bumped to patched releases, audit logs free of NU1902/NU1903 warnings, regression tests green, change log documents upgrade guidance. |
| DEVOPS-SEC-10-301 | DONE (2025-10-20) | DevOps Guild | Wave 0A complete | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0 surfaced during scanner cache and worker test runs. | Dependencies bumped to patched releases, audit logs free of NU1902/NU1903 warnings, regression tests green, change log documents upgrade guidance. |
> Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed.


@@ -1,95 +1,143 @@
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Cryptography;
using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
public sealed class MirrorSignatureVerifierTests
{
[Fact]
public async Task VerifyAsync_ValidSignaturePasses()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payload = "{\"advisories\":[]}\"u8".ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
await verifier.VerifyAsync(payload, signature, CancellationToken.None);
}
[Fact]
public async Task VerifyAsync_InvalidSignatureThrows()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payload = "{\"advisories\":[]}\"u8".ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
var tampered = signature.Replace("a", "b", StringComparison.Ordinal);
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(payload, tampered, CancellationToken.None));
}
private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static async Task<(string Signature, DateTimeOffset SignedAt)> CreateDetachedJwsAsync(
DefaultCryptoProvider provider,
string keyId,
ReadOnlyMemory<byte> payload)
{
var signer = provider.GetSigner(SignatureAlgorithms.Es256, new CryptoKeyReference(keyId));
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = keyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = System.Text.Json.JsonSerializer.Serialize(header);
var protectedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var signingInput = BuildSigningInput(protectedHeader, payload.Span);
var signatureBytes = await signer.SignAsync(signingInput, CancellationToken.None).ConfigureAwait(false);
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(protectedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = System.Text.Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}
file static class Utf8Extensions
{
public static ReadOnlyMemory<byte> ToUtf8Bytes(this string value)
=> System.Text.Encoding.UTF8.GetBytes(value);
}
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Cryptography;
using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
public sealed class MirrorSignatureVerifierTests
{
[Fact]
public async Task VerifyAsync_ValidSignaturePasses()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var payload = payloadText.ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
await verifier.VerifyAsync(payload, signature, CancellationToken.None);
}
[Fact]
public async Task VerifyAsync_InvalidSignatureThrows()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var payload = payloadText.ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
var tampered = signature.Replace('a', 'b');
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(payload, tampered, CancellationToken.None));
}
[Fact]
public async Task VerifyAsync_KeyMismatchThrows()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var payload = payloadText.ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(
payload,
signature,
expectedKeyId: "unexpected-key",
expectedProvider: null,
cancellationToken: CancellationToken.None));
}
[Fact]
public async Task VerifyAsync_ThrowsWhenProviderMissingKey()
{
var provider = new DefaultCryptoProvider();
var key = CreateSigningKey("mirror-key");
provider.UpsertSigningKey(key);
var registry = new CryptoProviderRegistry(new[] { provider });
var verifier = new MirrorSignatureVerifier(registry, NullLogger<MirrorSignatureVerifier>.Instance);
var payloadText = System.Text.Json.JsonSerializer.Serialize(new { advisories = Array.Empty<string>() });
var payload = payloadText.ToUtf8Bytes();
var (signature, _) = await CreateDetachedJwsAsync(provider, key.Reference.KeyId, payload);
provider.RemoveSigningKey(key.Reference.KeyId);
await Assert.ThrowsAsync<InvalidOperationException>(() => verifier.VerifyAsync(
payload,
signature,
expectedKeyId: key.Reference.KeyId,
expectedProvider: provider.Name,
cancellationToken: CancellationToken.None));
}
private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static async Task<(string Signature, DateTimeOffset SignedAt)> CreateDetachedJwsAsync(
DefaultCryptoProvider provider,
string keyId,
ReadOnlyMemory<byte> payload)
{
var signer = provider.GetSigner(SignatureAlgorithms.Es256, new CryptoKeyReference(keyId));
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = keyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = System.Text.Json.JsonSerializer.Serialize(header);
var protectedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var signingInput = BuildSigningInput(protectedHeader, payload.Span);
var signatureBytes = await signer.SignAsync(signingInput, CancellationToken.None).ConfigureAwait(false);
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(protectedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = System.Text.Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}
file static class Utf8Extensions
{
public static ReadOnlyMemory<byte> ToUtf8Bytes(this string value)
=> System.Text.Encoding.UTF8.GetBytes(value);
}


@@ -1,319 +1,359 @@
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Options;
using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.Common.Testing;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Concelier.Storage.Mongo.SourceState;
using StellaOps.Concelier.Testing;
using StellaOps.Cryptography;
using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
[Collection("mongo-fixture")]
public sealed class StellaOpsMirrorConnectorTests : IAsyncLifetime
{
private readonly MongoIntegrationFixture _fixture;
private readonly CannedHttpMessageHandler _handler;
public StellaOpsMirrorConnectorTests(MongoIntegrationFixture fixture)
{
_fixture = fixture;
_handler = new CannedHttpMessageHandler();
}
[Fact]
public async Task FetchAsync_PersistsMirrorArtifacts()
{
var manifestContent = "{\"domain\":\"primary\",\"files\":[]}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0001\"}]}";
var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: false);
await using var provider = await BuildServiceProviderAsync();
SeedResponses(index, manifestContent, bundleContent, signature: null);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await connector.FetchAsync(provider, CancellationToken.None);
var documentStore = provider.GetRequiredService<IDocumentStore>();
var manifestUri = "https://mirror.test/mirror/primary/manifest.json";
var bundleUri = "https://mirror.test/mirror/primary/bundle.json";
var manifestDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, manifestUri, CancellationToken.None);
Assert.NotNull(manifestDocument);
Assert.Equal(DocumentStatuses.Mapped, manifestDocument!.Status);
Assert.Equal(NormalizeDigest(manifestDigest), manifestDocument.Sha256);
var bundleDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, bundleUri, CancellationToken.None);
Assert.NotNull(bundleDocument);
Assert.Equal(DocumentStatuses.PendingParse, bundleDocument!.Status);
Assert.Equal(NormalizeDigest(bundleDigest), bundleDocument.Sha256);
var rawStorage = provider.GetRequiredService<RawDocumentStorage>();
Assert.NotNull(manifestDocument.GridFsId);
Assert.NotNull(bundleDocument.GridFsId);
var manifestBytes = await rawStorage.DownloadAsync(manifestDocument.GridFsId!.Value, CancellationToken.None);
var bundleBytes = await rawStorage.DownloadAsync(bundleDocument.GridFsId!.Value, CancellationToken.None);
Assert.Equal(manifestContent, Encoding.UTF8.GetString(manifestBytes));
Assert.Equal(bundleContent, Encoding.UTF8.GetString(bundleBytes));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state);
var cursorDocument = state!.Cursor ?? new BsonDocument();
var digestValue = cursorDocument.TryGetValue("bundleDigest", out var digestBson) ? digestBson.AsString : string.Empty;
Assert.Equal(NormalizeDigest(bundleDigest), NormalizeDigest(digestValue));
var pendingDocumentsArray = cursorDocument.TryGetValue("pendingDocuments", out var pendingDocsBson) && pendingDocsBson is BsonArray pendingArray
? pendingArray
: new BsonArray();
Assert.Single(pendingDocumentsArray);
var pendingDocumentId = Guid.Parse(pendingDocumentsArray[0].AsString);
Assert.Equal(bundleDocument.Id, pendingDocumentId);
var pendingMappingsArray = cursorDocument.TryGetValue("pendingMappings", out var pendingMappingsBson) && pendingMappingsBson is BsonArray mappingsArray
? mappingsArray
: new BsonArray();
Assert.Empty(pendingMappingsArray);
}
[Fact]
public async Task FetchAsync_TamperedSignatureThrows()
{
var manifestContent = "{\"domain\":\"primary\"}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0002\"}]}";
var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: true);
await using var provider = await BuildServiceProviderAsync(options =>
{
options.Signature.Enabled = true;
options.Signature.KeyId = "mirror-key";
options.Signature.Provider = "default";
});
var defaultProvider = provider.GetRequiredService<DefaultCryptoProvider>();
var signingKey = CreateSigningKey("mirror-key");
defaultProvider.UpsertSigningKey(signingKey);
var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent);
// Tamper with signature so verification fails.
var tamperedSignature = signatureValue.Replace('a', 'b');
SeedResponses(index, manifestContent, bundleContent, tamperedSignature);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state);
Assert.True(state!.FailCount >= 1);
Assert.False(state.Cursor.TryGetValue("bundleDigest", out _));
}
public Task InitializeAsync() => Task.CompletedTask;
public Task DisposeAsync()
{
_handler.Clear();
return Task.CompletedTask;
}
private async Task<ServiceProvider> BuildServiceProviderAsync(Action<StellaOpsMirrorConnectorOptions>? configureOptions = null)
{
await _fixture.Client.DropDatabaseAsync(_fixture.Database.DatabaseNamespace.DatabaseName);
_handler.Clear();
var services = new ServiceCollection();
services.AddLogging(builder => builder.AddProvider(NullLoggerProvider.Instance));
services.AddSingleton(_handler);
services.AddSingleton(TimeProvider.System);
services.AddMongoStorage(options =>
{
options.ConnectionString = _fixture.Runner.ConnectionString;
options.DatabaseName = _fixture.Database.DatabaseNamespace.DatabaseName;
options.CommandTimeout = TimeSpan.FromSeconds(5);
});
services.AddSingleton<DefaultCryptoProvider>();
services.AddSingleton<ICryptoProvider>(sp => sp.GetRequiredService<DefaultCryptoProvider>());
services.AddSingleton<ICryptoProviderRegistry>(sp => new CryptoProviderRegistry(sp.GetServices<ICryptoProvider>()));
var configuration = new ConfigurationBuilder()
.AddInMemoryCollection(new Dictionary<string, string?>
{
["concelier:sources:stellaopsMirror:baseAddress"] = "https://mirror.test/",
["concelier:sources:stellaopsMirror:domainId"] = "primary",
["concelier:sources:stellaopsMirror:indexPath"] = "/concelier/exports/index.json",
})
.Build();
var routine = new StellaOpsMirrorDependencyInjectionRoutine();
routine.Register(services, configuration);
if (configureOptions is not null)
{
services.PostConfigure(configureOptions);
}
services.Configure<HttpClientFactoryOptions>("stellaops-mirror", builder =>
{
builder.HttpMessageHandlerBuilderActions.Add(options =>
{
options.PrimaryHandler = _handler;
});
});
var provider = services.BuildServiceProvider();
var bootstrapper = provider.GetRequiredService<MongoBootstrapper>();
await bootstrapper.InitializeAsync(CancellationToken.None);
return provider;
}
private void SeedResponses(string indexJson, string manifestContent, string bundleContent, string? signature)
{
var baseUri = new Uri("https://mirror.test");
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "/concelier/exports/index.json"), () => CreateJsonResponse(indexJson));
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/manifest.json"), () => CreateJsonResponse(manifestContent));
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json"), () => CreateJsonResponse(bundleContent));
if (signature is not null)
{
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json.jws"), () => new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new StringContent(signature, Encoding.UTF8, "application/jose+json"),
});
}
}
private static HttpResponseMessage CreateJsonResponse(string content)
=> new(HttpStatusCode.OK)
{
Content = new StringContent(content, Encoding.UTF8, "application/json"),
};
private static string BuildIndex(string manifestDigest, int manifestBytes, string bundleDigest, int bundleBytes, bool includeSignature)
{
var index = new
{
schemaVersion = 1,
generatedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
targetRepository = "repo",
domains = new[]
{
new
{
domainId = "primary",
displayName = "Primary",
advisoryCount = 1,
manifest = new
{
path = "mirror/primary/manifest.json",
sizeBytes = manifestBytes,
digest = manifestDigest,
signature = (object?)null,
},
bundle = new
{
path = "mirror/primary/bundle.json",
sizeBytes = bundleBytes,
digest = bundleDigest,
signature = includeSignature
? new
{
path = "mirror/primary/bundle.json.jws",
algorithm = "ES256",
keyId = "mirror-key",
provider = "default",
signedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
}
: null,
},
sources = Array.Empty<object>(),
}
}
};
return JsonSerializer.Serialize(index, new JsonSerializerOptions
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
WriteIndented = false,
});
}
private static string ComputeDigest(string content)
{
var bytes = Encoding.UTF8.GetBytes(content);
var hash = SHA256.HashData(bytes);
return "sha256:" + Convert.ToHexString(hash).ToLowerInvariant();
}
private static string NormalizeDigest(string digest)
=> digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase) ? digest[7..] : digest;
private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static (string Signature, DateTimeOffset SignedAt) CreateDetachedJws(CryptoSigningKey signingKey, string payload)
{
using var provider = new DefaultCryptoProvider();
provider.UpsertSigningKey(signingKey);
var signer = provider.GetSigner(SignatureAlgorithms.Es256, signingKey.Reference);
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = signingKey.Reference.KeyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = JsonSerializer.Serialize(header);
var encodedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var payloadBytes = Encoding.UTF8.GetBytes(payload);
var signingInput = BuildSigningInput(encodedHeader, payloadBytes);
var signatureBytes = signer.SignAsync(signingInput, CancellationToken.None).GetAwaiter().GetResult();
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(encodedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer, 0);
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}
using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Http;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Options;
using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.Common.Fetch;
using StellaOps.Concelier.Connector.Common.Testing;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Concelier.Testing;
using StellaOps.Cryptography;
using Xunit;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Tests;
[Collection("mongo-fixture")]
public sealed class StellaOpsMirrorConnectorTests : IAsyncLifetime
{
private readonly MongoIntegrationFixture _fixture;
private readonly CannedHttpMessageHandler _handler;
public StellaOpsMirrorConnectorTests(MongoIntegrationFixture fixture)
{
_fixture = fixture;
_handler = new CannedHttpMessageHandler();
}
[Fact]
public async Task FetchAsync_PersistsMirrorArtifacts()
{
var manifestContent = "{\"domain\":\"primary\",\"files\":[]}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0001\"}]}";
var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: false);
await using var provider = await BuildServiceProviderAsync();
SeedResponses(index, manifestContent, bundleContent, signature: null);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await connector.FetchAsync(provider, CancellationToken.None);
var documentStore = provider.GetRequiredService<IDocumentStore>();
var manifestUri = "https://mirror.test/mirror/primary/manifest.json";
var bundleUri = "https://mirror.test/mirror/primary/bundle.json";
var manifestDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, manifestUri, CancellationToken.None);
Assert.NotNull(manifestDocument);
Assert.Equal(DocumentStatuses.Mapped, manifestDocument!.Status);
Assert.Equal(NormalizeDigest(manifestDigest), manifestDocument.Sha256);
var bundleDocument = await documentStore.FindBySourceAndUriAsync(StellaOpsMirrorConnector.Source, bundleUri, CancellationToken.None);
Assert.NotNull(bundleDocument);
Assert.Equal(DocumentStatuses.PendingParse, bundleDocument!.Status);
Assert.Equal(NormalizeDigest(bundleDigest), bundleDocument.Sha256);
var rawStorage = provider.GetRequiredService<RawDocumentStorage>();
Assert.NotNull(manifestDocument.GridFsId);
Assert.NotNull(bundleDocument.GridFsId);
var manifestBytes = await rawStorage.DownloadAsync(manifestDocument.GridFsId!.Value, CancellationToken.None);
var bundleBytes = await rawStorage.DownloadAsync(bundleDocument.GridFsId!.Value, CancellationToken.None);
Assert.Equal(manifestContent, Encoding.UTF8.GetString(manifestBytes));
Assert.Equal(bundleContent, Encoding.UTF8.GetString(bundleBytes));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state);
var cursorDocument = state!.Cursor ?? new BsonDocument();
var digestValue = cursorDocument.TryGetValue("bundleDigest", out var digestBson) ? digestBson.AsString : string.Empty;
Assert.Equal(NormalizeDigest(bundleDigest), NormalizeDigest(digestValue));
var pendingDocumentsArray = cursorDocument.TryGetValue("pendingDocuments", out var pendingDocsBson) && pendingDocsBson is BsonArray pendingArray
? pendingArray
: new BsonArray();
Assert.Single(pendingDocumentsArray);
var pendingDocumentId = Guid.Parse(pendingDocumentsArray[0].AsString);
Assert.Equal(bundleDocument.Id, pendingDocumentId);
var pendingMappingsArray = cursorDocument.TryGetValue("pendingMappings", out var pendingMappingsBson) && pendingMappingsBson is BsonArray mappingsArray
? mappingsArray
: new BsonArray();
Assert.Empty(pendingMappingsArray);
}
[Fact]
public async Task FetchAsync_TamperedSignatureThrows()
{
var manifestContent = "{\"domain\":\"primary\"}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0002\"}]}";
var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(manifestDigest, Encoding.UTF8.GetByteCount(manifestContent), bundleDigest, Encoding.UTF8.GetByteCount(bundleContent), includeSignature: true);
await using var provider = await BuildServiceProviderAsync(options =>
{
options.Signature.Enabled = true;
options.Signature.KeyId = "mirror-key";
options.Signature.Provider = "default";
});
var defaultProvider = provider.GetRequiredService<DefaultCryptoProvider>();
var signingKey = CreateSigningKey("mirror-key");
defaultProvider.UpsertSigningKey(signingKey);
var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent);
// Tamper with signature so verification fails.
var tamperedSignature = signatureValue.Replace('a', 'b');
SeedResponses(index, manifestContent, bundleContent, tamperedSignature);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None));
var stateRepository = provider.GetRequiredService<ISourceStateRepository>();
var state = await stateRepository.TryGetAsync(StellaOpsMirrorConnector.Source, CancellationToken.None);
Assert.NotNull(state);
Assert.True(state!.FailCount >= 1);
Assert.False(state.Cursor.TryGetValue("bundleDigest", out _));
}
[Fact]
public async Task FetchAsync_SignatureKeyMismatchThrows()
{
var manifestContent = "{\"domain\":\"primary\"}";
var bundleContent = "{\"advisories\":[{\"id\":\"CVE-2025-0003\"}]}";
var manifestDigest = ComputeDigest(manifestContent);
var bundleDigest = ComputeDigest(bundleContent);
var index = BuildIndex(
manifestDigest,
Encoding.UTF8.GetByteCount(manifestContent),
bundleDigest,
Encoding.UTF8.GetByteCount(bundleContent),
includeSignature: true,
signatureKeyId: "unexpected-key",
signatureProvider: "default");
var signingKey = CreateSigningKey("unexpected-key");
var (signatureValue, _) = CreateDetachedJws(signingKey, bundleContent);
await using var provider = await BuildServiceProviderAsync(options =>
{
options.Signature.Enabled = true;
options.Signature.KeyId = "mirror-key";
options.Signature.Provider = "default";
});
SeedResponses(index, manifestContent, bundleContent, signatureValue);
var connector = provider.GetRequiredService<StellaOpsMirrorConnector>();
await Assert.ThrowsAsync<InvalidOperationException>(() => connector.FetchAsync(provider, CancellationToken.None));
}
public Task InitializeAsync() => Task.CompletedTask;
public Task DisposeAsync()
{
_handler.Clear();
return Task.CompletedTask;
}
private async Task<ServiceProvider> BuildServiceProviderAsync(Action<StellaOpsMirrorConnectorOptions>? configureOptions = null)
{
await _fixture.Client.DropDatabaseAsync(_fixture.Database.DatabaseNamespace.DatabaseName);
_handler.Clear();
var services = new ServiceCollection();
services.AddLogging(builder => builder.AddProvider(NullLoggerProvider.Instance));
services.AddSingleton(_handler);
services.AddSingleton(TimeProvider.System);
services.AddMongoStorage(options =>
{
options.ConnectionString = _fixture.Runner.ConnectionString;
options.DatabaseName = _fixture.Database.DatabaseNamespace.DatabaseName;
options.CommandTimeout = TimeSpan.FromSeconds(5);
});
services.AddSingleton<DefaultCryptoProvider>();
services.AddSingleton<ICryptoProvider>(sp => sp.GetRequiredService<DefaultCryptoProvider>());
services.AddSingleton<ICryptoProviderRegistry>(sp => new CryptoProviderRegistry(sp.GetServices<ICryptoProvider>()));
var configuration = new ConfigurationBuilder()
.AddInMemoryCollection(new Dictionary<string, string?>
{
["concelier:sources:stellaopsMirror:baseAddress"] = "https://mirror.test/",
["concelier:sources:stellaopsMirror:domainId"] = "primary",
["concelier:sources:stellaopsMirror:indexPath"] = "/concelier/exports/index.json",
})
.Build();
var routine = new StellaOpsMirrorDependencyInjectionRoutine();
routine.Register(services, configuration);
if (configureOptions is not null)
{
services.PostConfigure(configureOptions);
}
services.Configure<HttpClientFactoryOptions>("stellaops-mirror", builder =>
{
builder.HttpMessageHandlerBuilderActions.Add(options =>
{
options.PrimaryHandler = _handler;
});
});
var provider = services.BuildServiceProvider();
var bootstrapper = provider.GetRequiredService<MongoBootstrapper>();
await bootstrapper.InitializeAsync(CancellationToken.None);
return provider;
}
private void SeedResponses(string indexJson, string manifestContent, string bundleContent, string? signature)
{
var baseUri = new Uri("https://mirror.test");
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "/concelier/exports/index.json"), () => CreateJsonResponse(indexJson));
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/manifest.json"), () => CreateJsonResponse(manifestContent));
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json"), () => CreateJsonResponse(bundleContent));
if (signature is not null)
{
_handler.AddResponse(HttpMethod.Get, new Uri(baseUri, "mirror/primary/bundle.json.jws"), () => new HttpResponseMessage(HttpStatusCode.OK)
{
Content = new StringContent(signature, Encoding.UTF8, "application/jose+json"),
});
}
}
private static HttpResponseMessage CreateJsonResponse(string content)
=> new(HttpStatusCode.OK)
{
Content = new StringContent(content, Encoding.UTF8, "application/json"),
};
private static string BuildIndex(
string manifestDigest,
int manifestBytes,
string bundleDigest,
int bundleBytes,
bool includeSignature,
string signatureKeyId = "mirror-key",
string signatureProvider = "default")
{
var index = new
{
schemaVersion = 1,
generatedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
targetRepository = "repo",
domains = new[]
{
new
{
domainId = "primary",
displayName = "Primary",
advisoryCount = 1,
manifest = new
{
path = "mirror/primary/manifest.json",
sizeBytes = manifestBytes,
digest = manifestDigest,
signature = (object?)null,
},
bundle = new
{
path = "mirror/primary/bundle.json",
sizeBytes = bundleBytes,
digest = bundleDigest,
signature = includeSignature
? new
{
path = "mirror/primary/bundle.json.jws",
algorithm = "ES256",
keyId = signatureKeyId,
provider = signatureProvider,
signedAt = new DateTimeOffset(2025, 10, 19, 12, 0, 0, TimeSpan.Zero),
}
: null,
},
sources = Array.Empty<object>(),
}
}
};
return JsonSerializer.Serialize(index, new JsonSerializerOptions
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
WriteIndented = false,
});
}
private static string ComputeDigest(string content)
{
var bytes = Encoding.UTF8.GetBytes(content);
var hash = SHA256.HashData(bytes);
return "sha256:" + Convert.ToHexString(hash).ToLowerInvariant();
}
private static string NormalizeDigest(string digest)
=> digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase) ? digest[7..] : digest;
private static CryptoSigningKey CreateSigningKey(string keyId)
{
using var ecdsa = ECDsa.Create(ECCurve.NamedCurves.nistP256);
var parameters = ecdsa.ExportParameters(includePrivateParameters: true);
return new CryptoSigningKey(new CryptoKeyReference(keyId), SignatureAlgorithms.Es256, in parameters, DateTimeOffset.UtcNow);
}
private static (string Signature, DateTimeOffset SignedAt) CreateDetachedJws(CryptoSigningKey signingKey, string payload)
{
var provider = new DefaultCryptoProvider();
provider.UpsertSigningKey(signingKey);
var signer = provider.GetSigner(SignatureAlgorithms.Es256, signingKey.Reference);
var header = new Dictionary<string, object?>
{
["alg"] = SignatureAlgorithms.Es256,
["kid"] = signingKey.Reference.KeyId,
["provider"] = provider.Name,
["typ"] = "application/vnd.stellaops.concelier.mirror-bundle+jws",
["b64"] = false,
["crit"] = new[] { "b64" }
};
var headerJson = JsonSerializer.Serialize(header);
var encodedHeader = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(headerJson);
var payloadBytes = Encoding.UTF8.GetBytes(payload);
var signingInput = BuildSigningInput(encodedHeader, payloadBytes);
var signatureBytes = signer.SignAsync(signingInput, CancellationToken.None).GetAwaiter().GetResult();
var encodedSignature = Microsoft.IdentityModel.Tokens.Base64UrlEncoder.Encode(signatureBytes);
return (string.Concat(encodedHeader, "..", encodedSignature), DateTimeOffset.UtcNow);
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer, 0);
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
}


@@ -1,121 +1,150 @@
using System;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Logging;
using Microsoft.IdentityModel.Tokens;
using StellaOps.Cryptography;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Security;
/// <summary>
/// Validates detached JWS signatures emitted by mirror bundles.
/// </summary>
public sealed class MirrorSignatureVerifier
{
private static readonly JsonSerializerOptions HeaderSerializerOptions = new(JsonSerializerDefaults.Web)
{
PropertyNameCaseInsensitive = true
};
private readonly ICryptoProviderRegistry _providerRegistry;
private readonly ILogger<MirrorSignatureVerifier> _logger;
public MirrorSignatureVerifier(ICryptoProviderRegistry providerRegistry, ILogger<MirrorSignatureVerifier> logger)
{
_providerRegistry = providerRegistry ?? throw new ArgumentNullException(nameof(providerRegistry));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task VerifyAsync(ReadOnlyMemory<byte> payload, string signatureValue, CancellationToken cancellationToken)
{
if (payload.IsEmpty)
{
throw new ArgumentException("Payload must not be empty.", nameof(payload));
}
if (string.IsNullOrWhiteSpace(signatureValue))
{
throw new ArgumentException("Signature value must be provided.", nameof(signatureValue));
}
if (!TryParseDetachedJws(signatureValue, out var encodedHeader, out var encodedSignature))
{
throw new InvalidOperationException("Detached JWS signature is malformed.");
}
var headerJson = Encoding.UTF8.GetString(Base64UrlEncoder.DecodeBytes(encodedHeader));
var header = JsonSerializer.Deserialize<MirrorSignatureHeader>(headerJson, HeaderSerializerOptions)
?? throw new InvalidOperationException("Detached JWS header could not be parsed.");
if (!header.Critical.Contains("b64", StringComparer.Ordinal))
{
throw new InvalidOperationException("Detached JWS header is missing required 'b64' critical parameter.");
}
if (header.Base64Payload)
{
throw new InvalidOperationException("Detached JWS header sets b64=true; expected unencoded payload.");
}
if (string.IsNullOrWhiteSpace(header.KeyId))
{
throw new InvalidOperationException("Detached JWS header missing key identifier.");
}
if (string.IsNullOrWhiteSpace(header.Algorithm))
{
throw new InvalidOperationException("Detached JWS header missing algorithm identifier.");
}
var signingInput = BuildSigningInput(encodedHeader, payload.Span);
var signatureBytes = Base64UrlEncoder.DecodeBytes(encodedSignature);
var keyReference = new CryptoKeyReference(header.KeyId, header.Provider);
var resolution = _providerRegistry.ResolveSigner(
CryptoCapability.Verification,
header.Algorithm,
keyReference,
header.Provider);
var verified = await resolution.Signer.VerifyAsync(signingInput, signatureBytes, cancellationToken).ConfigureAwait(false);
if (!verified)
{
_logger.LogWarning("Detached JWS verification failed for key {KeyId} via provider {Provider}.", header.KeyId, resolution.ProviderName);
throw new InvalidOperationException("Detached JWS signature verification failed.");
}
}
private static bool TryParseDetachedJws(string value, out string encodedHeader, out string encodedSignature)
{
var parts = value.Split("..", StringSplitOptions.None);
if (parts.Length != 2 || string.IsNullOrEmpty(parts[0]) || string.IsNullOrEmpty(parts[1]))
{
encodedHeader = string.Empty;
encodedSignature = string.Empty;
return false;
}
encodedHeader = parts[0];
encodedSignature = parts[1];
return true;
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
private sealed record MirrorSignatureHeader(
[property: JsonPropertyName("alg")] string Algorithm,
[property: JsonPropertyName("kid")] string KeyId,
[property: JsonPropertyName("provider")] string? Provider,
[property: JsonPropertyName("typ")] string? Type,
[property: JsonPropertyName("b64")] bool Base64Payload,
[property: JsonPropertyName("crit")] string[] Critical);
}
using System;
using System.Text;
using System.Text.Json;
using System.Text.Json.Serialization;
using Microsoft.Extensions.Logging;
using Microsoft.IdentityModel.Tokens;
using StellaOps.Cryptography;
namespace StellaOps.Concelier.Connector.StellaOpsMirror.Security;
/// <summary>
/// Validates detached JWS signatures emitted by mirror bundles.
/// </summary>
public sealed class MirrorSignatureVerifier
{
private static readonly JsonSerializerOptions HeaderSerializerOptions = new(JsonSerializerDefaults.Web)
{
PropertyNameCaseInsensitive = true
};
private readonly ICryptoProviderRegistry _providerRegistry;
private readonly ILogger<MirrorSignatureVerifier> _logger;
public MirrorSignatureVerifier(ICryptoProviderRegistry providerRegistry, ILogger<MirrorSignatureVerifier> logger)
{
_providerRegistry = providerRegistry ?? throw new ArgumentNullException(nameof(providerRegistry));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public Task VerifyAsync(ReadOnlyMemory<byte> payload, string signatureValue, CancellationToken cancellationToken)
=> VerifyAsync(payload, signatureValue, expectedKeyId: null, expectedProvider: null, cancellationToken);
public async Task VerifyAsync(
ReadOnlyMemory<byte> payload,
string signatureValue,
string? expectedKeyId,
string? expectedProvider,
CancellationToken cancellationToken)
{
if (payload.IsEmpty)
{
throw new ArgumentException("Payload must not be empty.", nameof(payload));
}
if (string.IsNullOrWhiteSpace(signatureValue))
{
throw new ArgumentException("Signature value must be provided.", nameof(signatureValue));
}
if (!TryParseDetachedJws(signatureValue, out var encodedHeader, out var encodedSignature))
{
throw new InvalidOperationException("Detached JWS signature is malformed.");
}
var headerJson = Encoding.UTF8.GetString(Base64UrlEncoder.DecodeBytes(encodedHeader));
var header = JsonSerializer.Deserialize<MirrorSignatureHeader>(headerJson, HeaderSerializerOptions)
?? throw new InvalidOperationException("Detached JWS header could not be parsed.");
if (!header.Critical.Contains("b64", StringComparer.Ordinal))
{
throw new InvalidOperationException("Detached JWS header is missing required 'b64' critical parameter.");
}
if (header.Base64Payload)
{
throw new InvalidOperationException("Detached JWS header sets b64=true; expected unencoded payload.");
}
if (string.IsNullOrWhiteSpace(header.KeyId))
{
throw new InvalidOperationException("Detached JWS header missing key identifier.");
}
if (string.IsNullOrWhiteSpace(header.Algorithm))
{
throw new InvalidOperationException("Detached JWS header missing algorithm identifier.");
}
if (!string.IsNullOrWhiteSpace(expectedKeyId) &&
!string.Equals(header.KeyId, expectedKeyId, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Mirror bundle signature key '{header.KeyId}' did not match expected key '{expectedKeyId}'.");
}
if (!string.IsNullOrWhiteSpace(expectedProvider) &&
!string.Equals(header.Provider, expectedProvider, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Mirror bundle signature provider '{header.Provider ?? "<null>"}' did not match expected provider '{expectedProvider}'.");
}
var signingInput = BuildSigningInput(encodedHeader, payload.Span);
var signatureBytes = Base64UrlEncoder.DecodeBytes(encodedSignature);
var keyReference = new CryptoKeyReference(header.KeyId, header.Provider);
CryptoSignerResolution resolution;
try
{
resolution = _providerRegistry.ResolveSigner(
CryptoCapability.Verification,
header.Algorithm,
keyReference,
header.Provider);
}
catch (Exception ex) when (ex is InvalidOperationException or KeyNotFoundException)
{
_logger.LogWarning(ex, "Unable to resolve signer for mirror signature key {KeyId} via provider {Provider}.", header.KeyId, header.Provider ?? "<null>");
throw new InvalidOperationException("Detached JWS signature verification failed.", ex);
}
var verified = await resolution.Signer.VerifyAsync(signingInput, signatureBytes, cancellationToken).ConfigureAwait(false);
if (!verified)
{
_logger.LogWarning("Detached JWS verification failed for key {KeyId} via provider {Provider}.", header.KeyId, resolution.ProviderName);
throw new InvalidOperationException("Detached JWS signature verification failed.");
}
}
private static bool TryParseDetachedJws(string value, out string encodedHeader, out string encodedSignature)
{
var parts = value.Split("..", StringSplitOptions.None);
if (parts.Length != 2 || string.IsNullOrEmpty(parts[0]) || string.IsNullOrEmpty(parts[1]))
{
encodedHeader = string.Empty;
encodedSignature = string.Empty;
return false;
}
encodedHeader = parts[0];
encodedSignature = parts[1];
return true;
}
private static ReadOnlyMemory<byte> BuildSigningInput(string encodedHeader, ReadOnlySpan<byte> payload)
{
var headerBytes = Encoding.ASCII.GetBytes(encodedHeader);
var buffer = new byte[headerBytes.Length + 1 + payload.Length];
headerBytes.CopyTo(buffer.AsSpan());
buffer[headerBytes.Length] = (byte)'.';
payload.CopyTo(buffer.AsSpan(headerBytes.Length + 1));
return buffer;
}
private sealed record MirrorSignatureHeader(
[property: JsonPropertyName("alg")] string Algorithm,
[property: JsonPropertyName("kid")] string KeyId,
[property: JsonPropertyName("provider")] string? Provider,
[property: JsonPropertyName("typ")] string? Type,
[property: JsonPropertyName("b64")] bool Base64Payload,
[property: JsonPropertyName("crit")] string[] Critical);
}


@@ -1,288 +1,309 @@
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common.Fetch;
using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.StellaOpsMirror.Client;
using StellaOps.Concelier.Connector.StellaOpsMirror.Internal;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Plugin;
namespace StellaOps.Concelier.Connector.StellaOpsMirror;
public sealed class StellaOpsMirrorConnector : IFeedConnector
{
public const string Source = "stellaops-mirror";
private readonly MirrorManifestClient _client;
private readonly MirrorSignatureVerifier _signatureVerifier;
private readonly RawDocumentStorage _rawDocumentStorage;
private readonly IDocumentStore _documentStore;
private readonly ISourceStateRepository _stateRepository;
private readonly TimeProvider _timeProvider;
private readonly ILogger<StellaOpsMirrorConnector> _logger;
private readonly StellaOpsMirrorConnectorOptions _options;
public StellaOpsMirrorConnector(
MirrorManifestClient client,
MirrorSignatureVerifier signatureVerifier,
RawDocumentStorage rawDocumentStorage,
IDocumentStore documentStore,
ISourceStateRepository stateRepository,
IOptions<StellaOpsMirrorConnectorOptions> options,
TimeProvider? timeProvider,
ILogger<StellaOpsMirrorConnector> logger)
{
_client = client ?? throw new ArgumentNullException(nameof(client));
_signatureVerifier = signatureVerifier ?? throw new ArgumentNullException(nameof(signatureVerifier));
_rawDocumentStorage = rawDocumentStorage ?? throw new ArgumentNullException(nameof(rawDocumentStorage));
_documentStore = documentStore ?? throw new ArgumentNullException(nameof(documentStore));
_stateRepository = stateRepository ?? throw new ArgumentNullException(nameof(stateRepository));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System;
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value ?? throw new ArgumentNullException(nameof(options));
ValidateOptions(_options);
}
public string SourceName => Source;
public async Task FetchAsync(IServiceProvider services, CancellationToken cancellationToken)
{
_ = services ?? throw new ArgumentNullException(nameof(services));
var now = _timeProvider.GetUtcNow();
var cursor = await GetCursorAsync(cancellationToken).ConfigureAwait(false);
var pendingDocuments = cursor.PendingDocuments.ToHashSet();
var pendingMappings = cursor.PendingMappings.ToHashSet();
MirrorIndexDocument index;
try
{
index = await _client.GetIndexAsync(_options.IndexPath, cancellationToken).ConfigureAwait(false);
}
catch (Exception ex)
{
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(15), ex.Message, cancellationToken).ConfigureAwait(false);
throw;
}
var domain = index.Domains.FirstOrDefault(entry =>
string.Equals(entry.DomainId, _options.DomainId, StringComparison.OrdinalIgnoreCase));
if (domain is null)
{
var message = $"Mirror domain '{_options.DomainId}' not present in index.";
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(30), message, cancellationToken).ConfigureAwait(false);
throw new InvalidOperationException(message);
}
if (string.Equals(domain.Bundle.Digest, cursor.BundleDigest, StringComparison.OrdinalIgnoreCase))
{
_logger.LogInformation("Mirror bundle digest {Digest} unchanged; skipping fetch.", domain.Bundle.Digest);
return;
}
try
{
await ProcessDomainAsync(index, domain, pendingDocuments, cancellationToken).ConfigureAwait(false);
}
catch (Exception ex)
{
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(10), ex.Message, cancellationToken).ConfigureAwait(false);
throw;
}
var updatedCursor = cursor
.WithPendingDocuments(pendingDocuments)
.WithPendingMappings(pendingMappings)
.WithBundleSnapshot(domain.Bundle.Path, domain.Bundle.Digest, index.GeneratedAt);
await UpdateCursorAsync(updatedCursor, cancellationToken).ConfigureAwait(false);
}
public Task ParseAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask;
public Task MapAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask;
private async Task ProcessDomainAsync(
MirrorIndexDocument index,
MirrorIndexDomainEntry domain,
HashSet<Guid> pendingDocuments,
CancellationToken cancellationToken)
{
var manifestBytes = await _client.DownloadAsync(domain.Manifest.Path, cancellationToken).ConfigureAwait(false);
var bundleBytes = await _client.DownloadAsync(domain.Bundle.Path, cancellationToken).ConfigureAwait(false);
VerifyDigest(domain.Manifest.Digest, manifestBytes, domain.Manifest.Path);
VerifyDigest(domain.Bundle.Digest, bundleBytes, domain.Bundle.Path);
if (_options.Signature.Enabled)
{
if (domain.Bundle.Signature is null)
{
throw new InvalidOperationException("Mirror bundle did not include a signature descriptor while verification is enabled.");
}
var signatureBytes = await _client.DownloadAsync(domain.Bundle.Signature.Path, cancellationToken).ConfigureAwait(false);
var signatureValue = Encoding.UTF8.GetString(signatureBytes);
await _signatureVerifier.VerifyAsync(bundleBytes, signatureValue, cancellationToken).ConfigureAwait(false);
}
await StoreAsync(domain, index.GeneratedAt, domain.Manifest, manifestBytes, "application/json", DocumentStatuses.Mapped, addToPending: false, pendingDocuments, cancellationToken).ConfigureAwait(false);
var bundleRecord = await StoreAsync(domain, index.GeneratedAt, domain.Bundle, bundleBytes, "application/json", DocumentStatuses.PendingParse, addToPending: true, pendingDocuments, cancellationToken).ConfigureAwait(false);
_logger.LogInformation(
"Stored mirror bundle {Uri} as document {DocumentId} with digest {Digest}.",
bundleRecord.Uri,
bundleRecord.Id,
bundleRecord.Sha256);
}
private async Task<DocumentRecord> StoreAsync(
MirrorIndexDomainEntry domain,
DateTimeOffset generatedAt,
MirrorFileDescriptor descriptor,
byte[] payload,
string contentType,
string status,
bool addToPending,
HashSet<Guid> pendingDocuments,
CancellationToken cancellationToken)
{
var absolute = ResolveAbsolutePath(descriptor.Path);
var existing = await _documentStore.FindBySourceAndUriAsync(Source, absolute, cancellationToken).ConfigureAwait(false);
if (existing is not null && string.Equals(existing.Sha256, NormalizeDigest(descriptor.Digest), StringComparison.OrdinalIgnoreCase))
{
if (addToPending)
{
pendingDocuments.Add(existing.Id);
}
return existing;
}
var gridFsId = await _rawDocumentStorage.UploadAsync(Source, absolute, payload, contentType, cancellationToken).ConfigureAwait(false);
var now = _timeProvider.GetUtcNow();
var sha = ComputeSha256(payload);
var metadata = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
["mirror.domainId"] = domain.DomainId,
["mirror.displayName"] = domain.DisplayName,
["mirror.path"] = descriptor.Path,
["mirror.digest"] = NormalizeDigest(descriptor.Digest),
["mirror.type"] = ReferenceEquals(descriptor, domain.Bundle) ? "bundle" : "manifest",
};
var record = new DocumentRecord(
existing?.Id ?? Guid.NewGuid(),
Source,
absolute,
now,
sha,
status,
contentType,
Headers: null,
Metadata: metadata,
Etag: null,
LastModified: generatedAt,
GridFsId: gridFsId,
ExpiresAt: null);
var upserted = await _documentStore.UpsertAsync(record, cancellationToken).ConfigureAwait(false);
if (addToPending)
{
pendingDocuments.Add(upserted.Id);
}
return upserted;
}
private string ResolveAbsolutePath(string path)
{
var uri = new Uri(_options.BaseAddress, path);
return uri.ToString();
}
private async Task<StellaOpsMirrorCursor> GetCursorAsync(CancellationToken cancellationToken)
{
var state = await _stateRepository.TryGetAsync(Source, cancellationToken).ConfigureAwait(false);
return state is null ? StellaOpsMirrorCursor.Empty : StellaOpsMirrorCursor.FromBson(state.Cursor);
}
private async Task UpdateCursorAsync(StellaOpsMirrorCursor cursor, CancellationToken cancellationToken)
{
var document = cursor.ToBsonDocument();
var now = _timeProvider.GetUtcNow();
await _stateRepository.UpdateCursorAsync(Source, document, now, cancellationToken).ConfigureAwait(false);
}
private static void VerifyDigest(string expected, ReadOnlySpan<byte> payload, string path)
{
if (string.IsNullOrWhiteSpace(expected))
{
return;
}
if (!expected.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Unsupported digest '{expected}' for '{path}'.");
}
var actualHash = SHA256.HashData(payload);
var actual = "sha256:" + Convert.ToHexString(actualHash).ToLowerInvariant();
if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Digest mismatch for '{path}'. Expected {expected}, computed {actual}.");
}
}
private static string ComputeSha256(ReadOnlySpan<byte> payload)
{
var hash = SHA256.HashData(payload);
return Convert.ToHexString(hash).ToLowerInvariant();
}
private static string NormalizeDigest(string digest)
{
if (string.IsNullOrWhiteSpace(digest))
{
return string.Empty;
}
return digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase)
? digest[7..]
: digest.ToLowerInvariant();
}
private static void ValidateOptions(StellaOpsMirrorConnectorOptions options)
{
if (options.BaseAddress is null || !options.BaseAddress.IsAbsoluteUri)
{
throw new InvalidOperationException("Mirror connector requires an absolute baseAddress.");
}
if (string.IsNullOrWhiteSpace(options.DomainId))
{
throw new InvalidOperationException("Mirror connector requires domainId to be specified.");
}
}
}
file static class UriExtensions
{
public static Uri Combine(this Uri baseUri, string relative)
=> new(baseUri, relative);
}
using System;
using System.Collections.Generic;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using MongoDB.Bson;
using StellaOps.Concelier.Connector.Common.Fetch;
using StellaOps.Concelier.Connector.Common;
using StellaOps.Concelier.Connector.StellaOpsMirror.Client;
using StellaOps.Concelier.Connector.StellaOpsMirror.Internal;
using StellaOps.Concelier.Connector.StellaOpsMirror.Security;
using StellaOps.Concelier.Connector.StellaOpsMirror.Settings;
using StellaOps.Concelier.Storage.Mongo;
using StellaOps.Concelier.Storage.Mongo.Documents;
using StellaOps.Plugin;
namespace StellaOps.Concelier.Connector.StellaOpsMirror;
public sealed class StellaOpsMirrorConnector : IFeedConnector
{
public const string Source = "stellaops-mirror";
private readonly MirrorManifestClient _client;
private readonly MirrorSignatureVerifier _signatureVerifier;
private readonly RawDocumentStorage _rawDocumentStorage;
private readonly IDocumentStore _documentStore;
private readonly ISourceStateRepository _stateRepository;
private readonly TimeProvider _timeProvider;
private readonly ILogger<StellaOpsMirrorConnector> _logger;
private readonly StellaOpsMirrorConnectorOptions _options;
public StellaOpsMirrorConnector(
MirrorManifestClient client,
MirrorSignatureVerifier signatureVerifier,
RawDocumentStorage rawDocumentStorage,
IDocumentStore documentStore,
ISourceStateRepository stateRepository,
IOptions<StellaOpsMirrorConnectorOptions> options,
TimeProvider? timeProvider,
ILogger<StellaOpsMirrorConnector> logger)
{
_client = client ?? throw new ArgumentNullException(nameof(client));
_signatureVerifier = signatureVerifier ?? throw new ArgumentNullException(nameof(signatureVerifier));
_rawDocumentStorage = rawDocumentStorage ?? throw new ArgumentNullException(nameof(rawDocumentStorage));
_documentStore = documentStore ?? throw new ArgumentNullException(nameof(documentStore));
_stateRepository = stateRepository ?? throw new ArgumentNullException(nameof(stateRepository));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_timeProvider = timeProvider ?? TimeProvider.System;
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value ?? throw new ArgumentNullException(nameof(options));
ValidateOptions(_options);
}
public string SourceName => Source;
public async Task FetchAsync(IServiceProvider services, CancellationToken cancellationToken)
{
_ = services ?? throw new ArgumentNullException(nameof(services));
var now = _timeProvider.GetUtcNow();
var cursor = await GetCursorAsync(cancellationToken).ConfigureAwait(false);
var pendingDocuments = cursor.PendingDocuments.ToHashSet();
var pendingMappings = cursor.PendingMappings.ToHashSet();
MirrorIndexDocument index;
try
{
index = await _client.GetIndexAsync(_options.IndexPath, cancellationToken).ConfigureAwait(false);
}
catch (Exception ex)
{
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(15), ex.Message, cancellationToken).ConfigureAwait(false);
throw;
}
var domain = index.Domains.FirstOrDefault(entry =>
string.Equals(entry.DomainId, _options.DomainId, StringComparison.OrdinalIgnoreCase));
if (domain is null)
{
var message = $"Mirror domain '{_options.DomainId}' not present in index.";
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(30), message, cancellationToken).ConfigureAwait(false);
throw new InvalidOperationException(message);
}
if (string.Equals(domain.Bundle.Digest, cursor.BundleDigest, StringComparison.OrdinalIgnoreCase))
{
_logger.LogInformation("Mirror bundle digest {Digest} unchanged; skipping fetch.", domain.Bundle.Digest);
return;
}
try
{
await ProcessDomainAsync(index, domain, pendingDocuments, cancellationToken).ConfigureAwait(false);
}
catch (Exception ex)
{
await _stateRepository.MarkFailureAsync(Source, now, TimeSpan.FromMinutes(10), ex.Message, cancellationToken).ConfigureAwait(false);
throw;
}
var updatedCursor = cursor
.WithPendingDocuments(pendingDocuments)
.WithPendingMappings(pendingMappings)
.WithBundleSnapshot(domain.Bundle.Path, domain.Bundle.Digest, index.GeneratedAt);
await UpdateCursorAsync(updatedCursor, cancellationToken).ConfigureAwait(false);
}
public Task ParseAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask;
public Task MapAsync(IServiceProvider services, CancellationToken cancellationToken)
=> Task.CompletedTask;
private async Task ProcessDomainAsync(
MirrorIndexDocument index,
MirrorIndexDomainEntry domain,
HashSet<Guid> pendingDocuments,
CancellationToken cancellationToken)
{
var manifestBytes = await _client.DownloadAsync(domain.Manifest.Path, cancellationToken).ConfigureAwait(false);
var bundleBytes = await _client.DownloadAsync(domain.Bundle.Path, cancellationToken).ConfigureAwait(false);
VerifyDigest(domain.Manifest.Digest, manifestBytes, domain.Manifest.Path);
VerifyDigest(domain.Bundle.Digest, bundleBytes, domain.Bundle.Path);
if (_options.Signature.Enabled)
{
if (domain.Bundle.Signature is null)
{
throw new InvalidOperationException("Mirror bundle did not include a signature descriptor while verification is enabled.");
}
if (!string.IsNullOrWhiteSpace(_options.Signature.KeyId) &&
!string.Equals(domain.Bundle.Signature.KeyId, _options.Signature.KeyId, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Mirror bundle signature key '{domain.Bundle.Signature.KeyId}' did not match expected key '{_options.Signature.KeyId}'.");
}
if (!string.IsNullOrWhiteSpace(_options.Signature.Provider) &&
!string.Equals(domain.Bundle.Signature.Provider, _options.Signature.Provider, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Mirror bundle signature provider '{domain.Bundle.Signature.Provider ?? "<null>"}' did not match expected provider '{_options.Signature.Provider}'.");
}
var signatureBytes = await _client.DownloadAsync(domain.Bundle.Signature.Path, cancellationToken).ConfigureAwait(false);
var signatureValue = Encoding.UTF8.GetString(signatureBytes).Trim();
await _signatureVerifier.VerifyAsync(
bundleBytes,
signatureValue,
expectedKeyId: _options.Signature.KeyId,
expectedProvider: _options.Signature.Provider,
cancellationToken).ConfigureAwait(false);
}
else if (domain.Bundle.Signature is not null)
{
_logger.LogInformation("Mirror bundle provided signature descriptor but verification is disabled; skipping verification.");
}
await StoreAsync(domain, index.GeneratedAt, domain.Manifest, manifestBytes, "application/json", DocumentStatuses.Mapped, addToPending: false, pendingDocuments, cancellationToken).ConfigureAwait(false);
var bundleRecord = await StoreAsync(domain, index.GeneratedAt, domain.Bundle, bundleBytes, "application/json", DocumentStatuses.PendingParse, addToPending: true, pendingDocuments, cancellationToken).ConfigureAwait(false);
_logger.LogInformation(
"Stored mirror bundle {Uri} as document {DocumentId} with digest {Digest}.",
bundleRecord.Uri,
bundleRecord.Id,
bundleRecord.Sha256);
}
private async Task<DocumentRecord> StoreAsync(
MirrorIndexDomainEntry domain,
DateTimeOffset generatedAt,
MirrorFileDescriptor descriptor,
byte[] payload,
string contentType,
string status,
bool addToPending,
HashSet<Guid> pendingDocuments,
CancellationToken cancellationToken)
{
var absolute = ResolveAbsolutePath(descriptor.Path);
var existing = await _documentStore.FindBySourceAndUriAsync(Source, absolute, cancellationToken).ConfigureAwait(false);
if (existing is not null && string.Equals(existing.Sha256, NormalizeDigest(descriptor.Digest), StringComparison.OrdinalIgnoreCase))
{
if (addToPending)
{
pendingDocuments.Add(existing.Id);
}
return existing;
}
var gridFsId = await _rawDocumentStorage.UploadAsync(Source, absolute, payload, contentType, cancellationToken).ConfigureAwait(false);
var now = _timeProvider.GetUtcNow();
var sha = ComputeSha256(payload);
var metadata = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase)
{
["mirror.domainId"] = domain.DomainId,
["mirror.displayName"] = domain.DisplayName,
["mirror.path"] = descriptor.Path,
["mirror.digest"] = NormalizeDigest(descriptor.Digest),
["mirror.type"] = ReferenceEquals(descriptor, domain.Bundle) ? "bundle" : "manifest",
};
var record = new DocumentRecord(
existing?.Id ?? Guid.NewGuid(),
Source,
absolute,
now,
sha,
status,
contentType,
Headers: null,
Metadata: metadata,
Etag: null,
LastModified: generatedAt,
GridFsId: gridFsId,
ExpiresAt: null);
var upserted = await _documentStore.UpsertAsync(record, cancellationToken).ConfigureAwait(false);
if (addToPending)
{
pendingDocuments.Add(upserted.Id);
}
return upserted;
}
private string ResolveAbsolutePath(string path)
{
var uri = new Uri(_options.BaseAddress, path);
return uri.ToString();
}
private async Task<StellaOpsMirrorCursor> GetCursorAsync(CancellationToken cancellationToken)
{
var state = await _stateRepository.TryGetAsync(Source, cancellationToken).ConfigureAwait(false);
return state is null ? StellaOpsMirrorCursor.Empty : StellaOpsMirrorCursor.FromBson(state.Cursor);
}
private async Task UpdateCursorAsync(StellaOpsMirrorCursor cursor, CancellationToken cancellationToken)
{
var document = cursor.ToBsonDocument();
var now = _timeProvider.GetUtcNow();
await _stateRepository.UpdateCursorAsync(Source, document, now, cancellationToken).ConfigureAwait(false);
}
private static void VerifyDigest(string expected, ReadOnlySpan<byte> payload, string path)
{
if (string.IsNullOrWhiteSpace(expected))
{
return;
}
if (!expected.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Unsupported digest '{expected}' for '{path}'.");
}
var actualHash = SHA256.HashData(payload);
var actual = "sha256:" + Convert.ToHexString(actualHash).ToLowerInvariant();
if (!string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase))
{
throw new InvalidOperationException($"Digest mismatch for '{path}'. Expected {expected}, computed {actual}.");
}
}
private static string ComputeSha256(ReadOnlySpan<byte> payload)
{
var hash = SHA256.HashData(payload);
return Convert.ToHexString(hash).ToLowerInvariant();
}
private static string NormalizeDigest(string digest)
{
if (string.IsNullOrWhiteSpace(digest))
{
return string.Empty;
}
return digest.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase)
? digest[7..]
: digest.ToLowerInvariant();
}
private static void ValidateOptions(StellaOpsMirrorConnectorOptions options)
{
if (options.BaseAddress is null || !options.BaseAddress.IsAbsoluteUri)
{
throw new InvalidOperationException("Mirror connector requires an absolute baseAddress.");
}
if (string.IsNullOrWhiteSpace(options.DomainId))
{
throw new InvalidOperationException("Mirror connector requires domainId to be specified.");
}
}
}
file static class UriExtensions
{
public static Uri Combine(this Uri baseUri, string relative)
=> new(baseUri, relative);
}
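Aside for reviewers — a minimal sketch of the digest contract enforced above, assuming only that descriptors carry `sha256:`-prefixed lowercase hex; the helper restates the private `VerifyDigest` logic for illustration rather than calling the shipped API:
using System;
using System.Security.Cryptography;
using System.Text;
static class DigestContractSketch
{
    // Illustrative restatement of the connector's digest check (not the shipped API).
    public static bool Matches(string expected, byte[] payload)
    {
        if (!expected.StartsWith("sha256:", StringComparison.OrdinalIgnoreCase))
        {
            return false; // the connector throws InvalidOperationException here instead
        }
        var actual = "sha256:" + Convert.ToHexString(SHA256.HashData(payload)).ToLowerInvariant();
        return string.Equals(actual, expected, StringComparison.OrdinalIgnoreCase);
    }
    public static void Main()
    {
        var payload = Encoding.UTF8.GetBytes("{\"domainId\":\"primary\"}");
        var digest = "sha256:" + Convert.ToHexString(SHA256.HashData(payload)).ToLowerInvariant();
        Console.WriteLine(Matches(digest, payload)); // True
    }
}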

View File

@@ -1,6 +1,7 @@
using System.Collections.Concurrent;
using System.Collections.Immutable;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Time.Testing;
using MongoDB.Driver;
@@ -43,8 +44,9 @@ public sealed class AdvisoryMergeServiceTests
var result = await service.MergeAsync("GHSA-aaaa-bbbb-cccc", CancellationToken.None);
Assert.NotNull(result.Merged);
Assert.Equal("OSV summary overrides", result.Merged!.Summary);
Assert.Empty(result.Conflicts);
var upserted = advisoryStore.LastUpserted;
Assert.NotNull(upserted);
@@ -103,25 +105,108 @@ public sealed class AdvisoryMergeServiceTests
provenance: new[] { provenance });
}
private static Advisory CreateOsvAdvisory()
{
var recorded = DateTimeOffset.Parse("2025-03-05T12:00:00Z");
var provenance = new AdvisoryProvenance("osv", "map", "OSV-2025-xyz", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
"OSV-2025-xyz",
"Container escape",
"OSV summary overrides",
"en",
recorded,
recorded,
"critical",
exploitKnown: false,
aliases: new[] { "OSV-2025-xyz", "CVE-2025-4242" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}
private static Advisory CreateVendorAdvisory()
{
var recorded = DateTimeOffset.Parse("2025-03-10T00:00:00Z");
var provenance = new AdvisoryProvenance("vendor", "psirt", "VSA-2025-5000", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
"VSA-2025-5000",
"Vendor overrides severity",
"Vendor states critical impact.",
"en",
recorded,
recorded,
"critical",
exploitKnown: false,
aliases: new[] { "VSA-2025-5000", "CVE-2025-5000" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}
private static Advisory CreateConflictingNvdAdvisory()
{
var recorded = DateTimeOffset.Parse("2025-03-09T00:00:00Z");
var provenance = new AdvisoryProvenance("nvd", "map", "CVE-2025-5000", recorded, new[] { ProvenanceFieldMasks.Advisory });
return new Advisory(
"CVE-2025-5000",
"CVE-2025-5000",
"Baseline NVD entry.",
"en",
recorded,
recorded,
"medium",
exploitKnown: false,
aliases: new[] { "CVE-2025-5000" },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: new[] { provenance });
}
[Fact]
public async Task MergeAsync_PersistsConflictSummariesWithHashes()
{
var aliasStore = new FakeAliasStore();
aliasStore.Register("CVE-2025-5000",
(AliasSchemes.Cve, "CVE-2025-5000"));
aliasStore.Register("VSA-2025-5000",
(AliasSchemes.Cve, "CVE-2025-5000"));
var vendor = CreateVendorAdvisory();
var nvd = CreateConflictingNvdAdvisory();
var advisoryStore = new FakeAdvisoryStore();
advisoryStore.Seed(vendor, nvd);
var mergeEventStore = new InMemoryMergeEventStore();
var timeProvider = new FakeTimeProvider(new DateTimeOffset(2025, 4, 2, 0, 0, 0, TimeSpan.Zero));
var writer = new MergeEventWriter(mergeEventStore, new CanonicalHashCalculator(), timeProvider, NullLogger<MergeEventWriter>.Instance);
var precedenceMerger = new AdvisoryPrecedenceMerger(new AffectedPackagePrecedenceResolver(), timeProvider);
var aliasResolver = new AliasGraphResolver(aliasStore);
var canonicalMerger = new CanonicalMerger(timeProvider);
var eventLog = new RecordingAdvisoryEventLog();
var service = new AdvisoryMergeService(aliasResolver, advisoryStore, precedenceMerger, writer, canonicalMerger, eventLog, timeProvider, NullLogger<AdvisoryMergeService>.Instance);
var result = await service.MergeAsync("CVE-2025-5000", CancellationToken.None);
var conflict = Assert.Single(result.Conflicts);
Assert.Equal("CVE-2025-5000", conflict.VulnerabilityKey);
Assert.Equal("severity", conflict.Explainer.Type);
Assert.Equal("mismatch", conflict.Explainer.Reason);
Assert.Contains("vendor", conflict.Explainer.PrimarySources, StringComparer.OrdinalIgnoreCase);
Assert.Contains("nvd", conflict.Explainer.SuppressedSources, StringComparer.OrdinalIgnoreCase);
Assert.Equal(conflict.Explainer.ComputeHashHex(), conflict.ConflictHash);
Assert.True(conflict.StatementIds.Length >= 2);
Assert.Equal(timeProvider.GetUtcNow(), conflict.RecordedAt);
var appendRequest = eventLog.LastRequest;
Assert.NotNull(appendRequest);
var appendedConflict = Assert.Single(appendRequest!.Conflicts!);
Assert.Equal(conflict.ConflictId, appendedConflict.ConflictId);
Assert.Equal(conflict.StatementIds, appendedConflict.StatementIds.ToImmutableArray());
}
private sealed class RecordingAdvisoryEventLog : IAdvisoryEventLog

View File

@@ -1,430 +1,456 @@
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Diagnostics.Metrics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using StellaOps.Concelier.Core;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Aliases;
using StellaOps.Concelier.Storage.Mongo.MergeEvents;
using System.Text.Json;
namespace StellaOps.Concelier.Merge.Services;
public sealed class AdvisoryMergeService
{
private static readonly Meter MergeMeter = new("StellaOps.Concelier.Merge");
private static readonly Counter<long> AliasCollisionCounter = MergeMeter.CreateCounter<long>(
"concelier.merge.identity_conflicts",
unit: "count",
description: "Number of alias collisions detected during merge.");
private static readonly string[] PreferredAliasSchemes =
{
AliasSchemes.Cve,
AliasSchemes.Ghsa,
AliasSchemes.OsV,
AliasSchemes.Msrc,
};
private readonly AliasGraphResolver _aliasResolver;
private readonly IAdvisoryStore _advisoryStore;
private readonly AdvisoryPrecedenceMerger _precedenceMerger;
private readonly MergeEventWriter _mergeEventWriter;
private readonly IAdvisoryEventLog _eventLog;
private readonly TimeProvider _timeProvider;
private readonly CanonicalMerger _canonicalMerger;
private readonly ILogger<AdvisoryMergeService> _logger;
public AdvisoryMergeService(
AliasGraphResolver aliasResolver,
IAdvisoryStore advisoryStore,
AdvisoryPrecedenceMerger precedenceMerger,
MergeEventWriter mergeEventWriter,
CanonicalMerger canonicalMerger,
IAdvisoryEventLog eventLog,
TimeProvider timeProvider,
ILogger<AdvisoryMergeService> logger)
{
_aliasResolver = aliasResolver ?? throw new ArgumentNullException(nameof(aliasResolver));
_advisoryStore = advisoryStore ?? throw new ArgumentNullException(nameof(advisoryStore));
_precedenceMerger = precedenceMerger ?? throw new ArgumentNullException(nameof(precedenceMerger));
_mergeEventWriter = mergeEventWriter ?? throw new ArgumentNullException(nameof(mergeEventWriter));
_canonicalMerger = canonicalMerger ?? throw new ArgumentNullException(nameof(canonicalMerger));
_eventLog = eventLog ?? throw new ArgumentNullException(nameof(eventLog));
_timeProvider = timeProvider ?? throw new ArgumentNullException(nameof(timeProvider));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task<AdvisoryMergeResult> MergeAsync(string seedAdvisoryKey, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(seedAdvisoryKey);
var component = await _aliasResolver.BuildComponentAsync(seedAdvisoryKey, cancellationToken).ConfigureAwait(false);
var inputs = new List<Advisory>();
foreach (var advisoryKey in component.AdvisoryKeys)
{
cancellationToken.ThrowIfCancellationRequested();
var advisory = await _advisoryStore.FindAsync(advisoryKey, cancellationToken).ConfigureAwait(false);
if (advisory is not null)
{
inputs.Add(advisory);
}
}
if (inputs.Count == 0)
{
_logger.LogWarning("Alias component seeded by {Seed} contains no persisted advisories", seedAdvisoryKey);
return AdvisoryMergeResult.Empty(seedAdvisoryKey, component);
}
var canonicalKey = SelectCanonicalKey(component) ?? seedAdvisoryKey;
var canonicalMerge = ApplyCanonicalMergeIfNeeded(canonicalKey, inputs);
var before = await _advisoryStore.FindAsync(canonicalKey, cancellationToken).ConfigureAwait(false);
var normalizedInputs = NormalizeInputs(inputs, canonicalKey).ToList();
PrecedenceMergeResult precedenceResult;
try
{
precedenceResult = _precedenceMerger.Merge(normalizedInputs);
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to merge alias component seeded by {Seed}", seedAdvisoryKey);
throw;
}
var merged = precedenceResult.Advisory;
var conflictDetails = precedenceResult.Conflicts;
if (component.Collisions.Count > 0)
{
foreach (var collision in component.Collisions)
{
var tags = new KeyValuePair<string, object?>[]
{
new("scheme", collision.Scheme ?? string.Empty),
new("alias_value", collision.Value ?? string.Empty),
new("advisory_count", collision.AdvisoryKeys.Count),
};
AliasCollisionCounter.Add(1, tags);
_logger.LogInformation(
"Alias collision {Scheme}:{Value} involves advisories {Advisories}",
collision.Scheme,
collision.Value,
string.Join(", ", collision.AdvisoryKeys));
}
}
await _advisoryStore.UpsertAsync(merged, cancellationToken).ConfigureAwait(false);
await _mergeEventWriter.AppendAsync(
canonicalKey,
before,
merged,
Array.Empty<Guid>(),
ConvertFieldDecisions(canonicalMerge?.Decisions),
cancellationToken).ConfigureAwait(false);
await AppendEventLogAsync(canonicalKey, normalizedInputs, merged, conflictDetails, cancellationToken).ConfigureAwait(false);
return new AdvisoryMergeResult(seedAdvisoryKey, canonicalKey, component, inputs, before, merged);
}
private async Task AppendEventLogAsync(
string vulnerabilityKey,
IReadOnlyList<Advisory> inputs,
Advisory merged,
IReadOnlyList<MergeConflictDetail> conflicts,
CancellationToken cancellationToken)
{
var recordedAt = _timeProvider.GetUtcNow();
var statements = new List<AdvisoryStatementInput>(inputs.Count + 1);
var statementIds = new Dictionary<Advisory, Guid>(ReferenceEqualityComparer.Instance);
foreach (var advisory in inputs)
{
var statementId = Guid.NewGuid();
statementIds[advisory] = statementId;
statements.Add(new AdvisoryStatementInput(
vulnerabilityKey,
advisory,
DetermineAsOf(advisory, recordedAt),
InputDocumentIds: Array.Empty<Guid>(),
StatementId: statementId,
AdvisoryKey: advisory.AdvisoryKey));
}
var canonicalStatementId = Guid.NewGuid();
statementIds[merged] = canonicalStatementId;
statements.Add(new AdvisoryStatementInput(
vulnerabilityKey,
merged,
recordedAt,
InputDocumentIds: Array.Empty<Guid>(),
StatementId: canonicalStatementId,
AdvisoryKey: merged.AdvisoryKey));
var conflictInputs = BuildConflictInputs(conflicts, vulnerabilityKey, statementIds, canonicalStatementId, recordedAt);
if (statements.Count == 0 && conflictInputs.Count == 0)
{
return;
}
var request = new AdvisoryEventAppendRequest(statements, conflictInputs.Count > 0 ? conflictInputs : null);
try
{
await _eventLog.AppendAsync(request, cancellationToken).ConfigureAwait(false);
}
finally
{
foreach (var conflict in conflictInputs)
{
conflict.Details.Dispose();
}
}
}
private static DateTimeOffset DetermineAsOf(Advisory advisory, DateTimeOffset fallback)
{
return (advisory.Modified ?? advisory.Published ?? fallback).ToUniversalTime();
}
private static List<AdvisoryConflictInput> BuildConflictInputs(
IReadOnlyList<MergeConflictDetail> conflicts,
string vulnerabilityKey,
IReadOnlyDictionary<Advisory, Guid> statementIds,
Guid canonicalStatementId,
DateTimeOffset recordedAt)
{
if (conflicts.Count == 0)
{
return new List<AdvisoryConflictInput>(0);
}
var inputs = new List<AdvisoryConflictInput>(conflicts.Count);
foreach (var detail in conflicts)
{
if (!statementIds.TryGetValue(detail.Suppressed, out var suppressedId))
{
continue;
}
var related = new List<Guid> { canonicalStatementId, suppressedId };
if (statementIds.TryGetValue(detail.Primary, out var primaryId))
{
if (!related.Contains(primaryId))
{
related.Add(primaryId);
}
}
var payload = new ConflictDetailPayload(
detail.ConflictType,
detail.Reason,
detail.PrimarySources,
detail.PrimaryRank,
detail.SuppressedSources,
detail.SuppressedRank,
detail.PrimaryValue,
detail.SuppressedValue);
var json = CanonicalJsonSerializer.Serialize(payload);
var document = JsonDocument.Parse(json);
var asOf = (detail.Primary.Modified ?? detail.Suppressed.Modified ?? recordedAt).ToUniversalTime();
inputs.Add(new AdvisoryConflictInput(
vulnerabilityKey,
document,
asOf,
related,
ConflictId: null));
}
return inputs;
}
private sealed record ConflictDetailPayload(
string Type,
string Reason,
IReadOnlyList<string> PrimarySources,
int PrimaryRank,
IReadOnlyList<string> SuppressedSources,
int SuppressedRank,
string? PrimaryValue,
string? SuppressedValue);
private static IEnumerable<Advisory> NormalizeInputs(IEnumerable<Advisory> advisories, string canonicalKey)
{
foreach (var advisory in advisories)
{
yield return CloneWithKey(advisory, canonicalKey);
}
}
private static Advisory CloneWithKey(Advisory source, string advisoryKey)
=> new(
advisoryKey,
source.Title,
source.Summary,
source.Language,
source.Published,
source.Modified,
source.Severity,
source.ExploitKnown,
source.Aliases,
source.Credits,
source.References,
source.AffectedPackages,
source.CvssMetrics,
source.Provenance,
source.Description,
source.Cwes,
source.CanonicalMetricId);
private CanonicalMergeResult? ApplyCanonicalMergeIfNeeded(string canonicalKey, List<Advisory> inputs)
{
if (inputs.Count == 0)
{
return null;
}
var ghsa = FindBySource(inputs, CanonicalSources.Ghsa);
var nvd = FindBySource(inputs, CanonicalSources.Nvd);
var osv = FindBySource(inputs, CanonicalSources.Osv);
var participatingSources = 0;
if (ghsa is not null)
{
participatingSources++;
}
if (nvd is not null)
{
participatingSources++;
}
if (osv is not null)
{
participatingSources++;
}
if (participatingSources < 2)
{
return null;
}
var result = _canonicalMerger.Merge(canonicalKey, ghsa, nvd, osv);
inputs.RemoveAll(advisory => MatchesCanonicalSource(advisory));
inputs.Add(result.Advisory);
return result;
}
private static Advisory? FindBySource(IEnumerable<Advisory> advisories, string source)
=> advisories.FirstOrDefault(advisory => advisory.Provenance.Any(provenance =>
!string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase) &&
string.Equals(provenance.Source, source, StringComparison.OrdinalIgnoreCase)));
private static bool MatchesCanonicalSource(Advisory advisory)
{
foreach (var provenance in advisory.Provenance)
{
if (string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase))
{
continue;
}
if (string.Equals(provenance.Source, CanonicalSources.Ghsa, StringComparison.OrdinalIgnoreCase) ||
string.Equals(provenance.Source, CanonicalSources.Nvd, StringComparison.OrdinalIgnoreCase) ||
string.Equals(provenance.Source, CanonicalSources.Osv, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}
return false;
}
private static IReadOnlyList<MergeFieldDecision> ConvertFieldDecisions(ImmutableArray<FieldDecision>? decisions)
{
if (decisions is null || decisions.Value.IsDefaultOrEmpty)
{
return Array.Empty<MergeFieldDecision>();
}
var builder = ImmutableArray.CreateBuilder<MergeFieldDecision>(decisions.Value.Length);
foreach (var decision in decisions.Value)
{
builder.Add(new MergeFieldDecision(
decision.Field,
decision.SelectedSource,
decision.DecisionReason,
decision.SelectedModified,
decision.ConsideredSources.ToArray()));
}
return builder.ToImmutable();
}
private static class CanonicalSources
{
public const string Ghsa = "ghsa";
public const string Nvd = "nvd";
public const string Osv = "osv";
}
private static string? SelectCanonicalKey(AliasComponent component)
{
foreach (var scheme in PreferredAliasSchemes)
{
var alias = component.AliasMap.Values
.SelectMany(static aliases => aliases)
.FirstOrDefault(record => string.Equals(record.Scheme, scheme, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(alias?.Value))
{
return alias.Value;
}
}
if (component.AliasMap.TryGetValue(component.SeedAdvisoryKey, out var seedAliases))
{
var primary = seedAliases.FirstOrDefault(record => string.Equals(record.Scheme, AliasStoreConstants.PrimaryScheme, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(primary?.Value))
{
return primary.Value;
}
}
var firstAlias = component.AliasMap.Values.SelectMany(static aliases => aliases).FirstOrDefault();
if (!string.IsNullOrWhiteSpace(firstAlias?.Value))
{
return firstAlias.Value;
}
return component.SeedAdvisoryKey;
}
}
public sealed record AdvisoryMergeResult(
string SeedAdvisoryKey,
string CanonicalAdvisoryKey,
AliasComponent Component,
IReadOnlyList<Advisory> Inputs,
Advisory? Previous,
Advisory? Merged)
{
public static AdvisoryMergeResult Empty(string seed, AliasComponent component)
=> new(seed, seed, component, Array.Empty<Advisory>(), null, null);
}
using System;
using System.Collections.Generic;
using System.Collections.Immutable;
using System.Diagnostics.Metrics;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using StellaOps.Concelier.Core;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Storage.Mongo.Advisories;
using StellaOps.Concelier.Storage.Mongo.Aliases;
using StellaOps.Concelier.Storage.Mongo.MergeEvents;
using System.Text.Json;
namespace StellaOps.Concelier.Merge.Services;
public sealed class AdvisoryMergeService
{
private static readonly Meter MergeMeter = new("StellaOps.Concelier.Merge");
private static readonly Counter<long> AliasCollisionCounter = MergeMeter.CreateCounter<long>(
"concelier.merge.identity_conflicts",
unit: "count",
description: "Number of alias collisions detected during merge.");
private static readonly string[] PreferredAliasSchemes =
{
AliasSchemes.Cve,
AliasSchemes.Ghsa,
AliasSchemes.OsV,
AliasSchemes.Msrc,
};
private readonly AliasGraphResolver _aliasResolver;
private readonly IAdvisoryStore _advisoryStore;
private readonly AdvisoryPrecedenceMerger _precedenceMerger;
private readonly MergeEventWriter _mergeEventWriter;
private readonly IAdvisoryEventLog _eventLog;
private readonly TimeProvider _timeProvider;
private readonly CanonicalMerger _canonicalMerger;
private readonly ILogger<AdvisoryMergeService> _logger;
public AdvisoryMergeService(
AliasGraphResolver aliasResolver,
IAdvisoryStore advisoryStore,
AdvisoryPrecedenceMerger precedenceMerger,
MergeEventWriter mergeEventWriter,
CanonicalMerger canonicalMerger,
IAdvisoryEventLog eventLog,
TimeProvider timeProvider,
ILogger<AdvisoryMergeService> logger)
{
_aliasResolver = aliasResolver ?? throw new ArgumentNullException(nameof(aliasResolver));
_advisoryStore = advisoryStore ?? throw new ArgumentNullException(nameof(advisoryStore));
_precedenceMerger = precedenceMerger ?? throw new ArgumentNullException(nameof(precedenceMerger));
_mergeEventWriter = mergeEventWriter ?? throw new ArgumentNullException(nameof(mergeEventWriter));
_canonicalMerger = canonicalMerger ?? throw new ArgumentNullException(nameof(canonicalMerger));
_eventLog = eventLog ?? throw new ArgumentNullException(nameof(eventLog));
_timeProvider = timeProvider ?? throw new ArgumentNullException(nameof(timeProvider));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task<AdvisoryMergeResult> MergeAsync(string seedAdvisoryKey, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(seedAdvisoryKey);
var component = await _aliasResolver.BuildComponentAsync(seedAdvisoryKey, cancellationToken).ConfigureAwait(false);
var inputs = new List<Advisory>();
foreach (var advisoryKey in component.AdvisoryKeys)
{
cancellationToken.ThrowIfCancellationRequested();
var advisory = await _advisoryStore.FindAsync(advisoryKey, cancellationToken).ConfigureAwait(false);
if (advisory is not null)
{
inputs.Add(advisory);
}
}
if (inputs.Count == 0)
{
_logger.LogWarning("Alias component seeded by {Seed} contains no persisted advisories", seedAdvisoryKey);
return AdvisoryMergeResult.Empty(seedAdvisoryKey, component);
}
var canonicalKey = SelectCanonicalKey(component) ?? seedAdvisoryKey;
var canonicalMerge = ApplyCanonicalMergeIfNeeded(canonicalKey, inputs);
var before = await _advisoryStore.FindAsync(canonicalKey, cancellationToken).ConfigureAwait(false);
var normalizedInputs = NormalizeInputs(inputs, canonicalKey).ToList();
PrecedenceMergeResult precedenceResult;
try
{
precedenceResult = _precedenceMerger.Merge(normalizedInputs);
}
catch (Exception ex)
{
_logger.LogError(ex, "Failed to merge alias component seeded by {Seed}", seedAdvisoryKey);
throw;
}
var merged = precedenceResult.Advisory;
var conflictDetails = precedenceResult.Conflicts;
if (component.Collisions.Count > 0)
{
foreach (var collision in component.Collisions)
{
var tags = new KeyValuePair<string, object?>[]
{
new("scheme", collision.Scheme ?? string.Empty),
new("alias_value", collision.Value ?? string.Empty),
new("advisory_count", collision.AdvisoryKeys.Count),
};
AliasCollisionCounter.Add(1, tags);
_logger.LogInformation(
"Alias collision {Scheme}:{Value} involves advisories {Advisories}",
collision.Scheme,
collision.Value,
string.Join(", ", collision.AdvisoryKeys));
}
}
await _advisoryStore.UpsertAsync(merged, cancellationToken).ConfigureAwait(false);
await _mergeEventWriter.AppendAsync(
canonicalKey,
before,
merged,
Array.Empty<Guid>(),
ConvertFieldDecisions(canonicalMerge?.Decisions),
cancellationToken).ConfigureAwait(false);
var conflictSummaries = await AppendEventLogAsync(canonicalKey, normalizedInputs, merged, conflictDetails, cancellationToken).ConfigureAwait(false);
return new AdvisoryMergeResult(seedAdvisoryKey, canonicalKey, component, inputs, before, merged, conflictSummaries);
}
private async Task<IReadOnlyList<MergeConflictSummary>> AppendEventLogAsync(
string vulnerabilityKey,
IReadOnlyList<Advisory> inputs,
Advisory merged,
IReadOnlyList<MergeConflictDetail> conflicts,
CancellationToken cancellationToken)
{
var recordedAt = _timeProvider.GetUtcNow();
var statements = new List<AdvisoryStatementInput>(inputs.Count + 1);
var statementIds = new Dictionary<Advisory, Guid>(ReferenceEqualityComparer.Instance);
foreach (var advisory in inputs)
{
var statementId = Guid.NewGuid();
statementIds[advisory] = statementId;
statements.Add(new AdvisoryStatementInput(
vulnerabilityKey,
advisory,
DetermineAsOf(advisory, recordedAt),
InputDocumentIds: Array.Empty<Guid>(),
StatementId: statementId,
AdvisoryKey: advisory.AdvisoryKey));
}
var canonicalStatementId = Guid.NewGuid();
statementIds[merged] = canonicalStatementId;
statements.Add(new AdvisoryStatementInput(
vulnerabilityKey,
merged,
recordedAt,
InputDocumentIds: Array.Empty<Guid>(),
StatementId: canonicalStatementId,
AdvisoryKey: merged.AdvisoryKey));
var conflictMaterialization = BuildConflictInputs(conflicts, vulnerabilityKey, statementIds, canonicalStatementId, recordedAt);
var conflictInputs = conflictMaterialization.Inputs;
var conflictSummaries = conflictMaterialization.Summaries;
if (statements.Count == 0 && conflictInputs.Count == 0)
{
return conflictSummaries.Count == 0
? Array.Empty<MergeConflictSummary>()
: conflictSummaries.ToArray();
}
var request = new AdvisoryEventAppendRequest(statements, conflictInputs.Count > 0 ? conflictInputs : null);
try
{
await _eventLog.AppendAsync(request, cancellationToken).ConfigureAwait(false);
}
finally
{
foreach (var conflict in conflictInputs)
{
conflict.Details.Dispose();
}
}
return conflictSummaries.Count == 0
? Array.Empty<MergeConflictSummary>()
: conflictSummaries.ToArray();
}
private static DateTimeOffset DetermineAsOf(Advisory advisory, DateTimeOffset fallback)
{
return (advisory.Modified ?? advisory.Published ?? fallback).ToUniversalTime();
}
private static ConflictMaterialization BuildConflictInputs(
IReadOnlyList<MergeConflictDetail> conflicts,
string vulnerabilityKey,
IReadOnlyDictionary<Advisory, Guid> statementIds,
Guid canonicalStatementId,
DateTimeOffset recordedAt)
{
if (conflicts.Count == 0)
{
return new ConflictMaterialization(new List<AdvisoryConflictInput>(0), new List<MergeConflictSummary>(0));
}
var inputs = new List<AdvisoryConflictInput>(conflicts.Count);
var summaries = new List<MergeConflictSummary>(conflicts.Count);
foreach (var detail in conflicts)
{
if (!statementIds.TryGetValue(detail.Suppressed, out var suppressedId))
{
continue;
}
var related = new List<Guid> { canonicalStatementId, suppressedId };
if (statementIds.TryGetValue(detail.Primary, out var primaryId))
{
if (!related.Contains(primaryId))
{
related.Add(primaryId);
}
}
var payload = new ConflictDetailPayload(
detail.ConflictType,
detail.Reason,
detail.PrimarySources,
detail.PrimaryRank,
detail.SuppressedSources,
detail.SuppressedRank,
detail.PrimaryValue,
detail.SuppressedValue);
var explainer = new MergeConflictExplainerPayload(
payload.Type,
payload.Reason,
payload.PrimarySources,
payload.PrimaryRank,
payload.SuppressedSources,
payload.SuppressedRank,
payload.PrimaryValue,
payload.SuppressedValue);
var canonicalJson = explainer.ToCanonicalJson();
var document = JsonDocument.Parse(canonicalJson);
var asOf = (detail.Primary.Modified ?? detail.Suppressed.Modified ?? recordedAt).ToUniversalTime();
var conflictId = Guid.NewGuid();
var statementIdArray = ImmutableArray.CreateRange(related);
var conflictHash = explainer.ComputeHashHex(canonicalJson);
inputs.Add(new AdvisoryConflictInput(
vulnerabilityKey,
document,
asOf,
related,
ConflictId: conflictId));
summaries.Add(new MergeConflictSummary(
conflictId,
vulnerabilityKey,
statementIdArray,
conflictHash,
asOf,
recordedAt,
explainer));
}
return new ConflictMaterialization(inputs, summaries);
}
private static IEnumerable<Advisory> NormalizeInputs(IEnumerable<Advisory> advisories, string canonicalKey)
{
foreach (var advisory in advisories)
{
yield return CloneWithKey(advisory, canonicalKey);
}
}
private static Advisory CloneWithKey(Advisory source, string advisoryKey)
=> new(
advisoryKey,
source.Title,
source.Summary,
source.Language,
source.Published,
source.Modified,
source.Severity,
source.ExploitKnown,
source.Aliases,
source.Credits,
source.References,
source.AffectedPackages,
source.CvssMetrics,
source.Provenance,
source.Description,
source.Cwes,
source.CanonicalMetricId);
private CanonicalMergeResult? ApplyCanonicalMergeIfNeeded(string canonicalKey, List<Advisory> inputs)
{
if (inputs.Count == 0)
{
return null;
}
var ghsa = FindBySource(inputs, CanonicalSources.Ghsa);
var nvd = FindBySource(inputs, CanonicalSources.Nvd);
var osv = FindBySource(inputs, CanonicalSources.Osv);
var participatingSources = 0;
if (ghsa is not null)
{
participatingSources++;
}
if (nvd is not null)
{
participatingSources++;
}
if (osv is not null)
{
participatingSources++;
}
if (participatingSources < 2)
{
return null;
}
var result = _canonicalMerger.Merge(canonicalKey, ghsa, nvd, osv);
inputs.RemoveAll(advisory => MatchesCanonicalSource(advisory));
inputs.Add(result.Advisory);
return result;
}
private static Advisory? FindBySource(IEnumerable<Advisory> advisories, string source)
=> advisories.FirstOrDefault(advisory => advisory.Provenance.Any(provenance =>
!string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase) &&
string.Equals(provenance.Source, source, StringComparison.OrdinalIgnoreCase)));
private static bool MatchesCanonicalSource(Advisory advisory)
{
foreach (var provenance in advisory.Provenance)
{
if (string.Equals(provenance.Kind, "merge", StringComparison.OrdinalIgnoreCase))
{
continue;
}
if (string.Equals(provenance.Source, CanonicalSources.Ghsa, StringComparison.OrdinalIgnoreCase) ||
string.Equals(provenance.Source, CanonicalSources.Nvd, StringComparison.OrdinalIgnoreCase) ||
string.Equals(provenance.Source, CanonicalSources.Osv, StringComparison.OrdinalIgnoreCase))
{
return true;
}
}
return false;
}
private static IReadOnlyList<MergeFieldDecision> ConvertFieldDecisions(ImmutableArray<FieldDecision>? decisions)
{
if (decisions is null || decisions.Value.IsDefaultOrEmpty)
{
return Array.Empty<MergeFieldDecision>();
}
var builder = ImmutableArray.CreateBuilder<MergeFieldDecision>(decisions.Value.Length);
foreach (var decision in decisions.Value)
{
builder.Add(new MergeFieldDecision(
decision.Field,
decision.SelectedSource,
decision.DecisionReason,
decision.SelectedModified,
decision.ConsideredSources.ToArray()));
}
return builder.ToImmutable();
}
private static class CanonicalSources
{
public const string Ghsa = "ghsa";
public const string Nvd = "nvd";
public const string Osv = "osv";
}
private sealed record ConflictMaterialization(
List<AdvisoryConflictInput> Inputs,
List<MergeConflictSummary> Summaries);
private static string? SelectCanonicalKey(AliasComponent component)
{
foreach (var scheme in PreferredAliasSchemes)
{
var alias = component.AliasMap.Values
.SelectMany(static aliases => aliases)
.FirstOrDefault(record => string.Equals(record.Scheme, scheme, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(alias?.Value))
{
return alias.Value;
}
}
if (component.AliasMap.TryGetValue(component.SeedAdvisoryKey, out var seedAliases))
{
var primary = seedAliases.FirstOrDefault(record => string.Equals(record.Scheme, AliasStoreConstants.PrimaryScheme, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(primary?.Value))
{
return primary.Value;
}
}
var firstAlias = component.AliasMap.Values.SelectMany(static aliases => aliases).FirstOrDefault();
if (!string.IsNullOrWhiteSpace(firstAlias?.Value))
{
return firstAlias.Value;
}
return component.SeedAdvisoryKey;
}
}
public sealed record AdvisoryMergeResult(
string SeedAdvisoryKey,
string CanonicalAdvisoryKey,
AliasComponent Component,
IReadOnlyList<Advisory> Inputs,
Advisory? Previous,
Advisory? Merged,
IReadOnlyList<MergeConflictSummary> Conflicts)
{
public static AdvisoryMergeResult Empty(string seed, AliasComponent component)
=> new(seed, seed, component, Array.Empty<Advisory>(), null, null, Array.Empty<MergeConflictSummary>());
}
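Usage sketch for the widened result type — hedged, since the wiring here mirrors the test harness rather than production composition; `service` is assumed to be an `AdvisoryMergeService` built as in `AdvisoryMergeServiceTests`:
// Sketch: inspect persisted conflict summaries after a merge (service wiring assumed).
var result = await service.MergeAsync("CVE-2025-5000", CancellationToken.None);
foreach (var conflict in result.Conflicts)
{
    // ConflictHash is the SHA-256 of the canonical explainer JSON, so replay checks
    // can recompute it from the structured payload alone.
    Console.WriteLine(
        $"{conflict.VulnerabilityKey}: {conflict.Explainer.Type}/{conflict.Explainer.Reason} -> {conflict.ConflictHash}");
}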

View File

@@ -0,0 +1,34 @@
using System;
using System.Collections.Generic;
using System.Security.Cryptography;
using System.Text;
using StellaOps.Concelier.Models;
namespace StellaOps.Concelier.Merge.Services;
/// <summary>
/// Structured payload describing a precedence conflict between advisory sources.
/// </summary>
public sealed record MergeConflictExplainerPayload(
string Type,
string Reason,
IReadOnlyList<string> PrimarySources,
int PrimaryRank,
IReadOnlyList<string> SuppressedSources,
int SuppressedRank,
string? PrimaryValue,
string? SuppressedValue)
{
public string ToCanonicalJson() => CanonicalJsonSerializer.Serialize(this);
public string ComputeHashHex(string? canonicalJson = null)
{
var json = canonicalJson ?? ToCanonicalJson();
var bytes = Encoding.UTF8.GetBytes(json);
var hash = SHA256.HashData(bytes);
return Convert.ToHexString(hash);
}
public static MergeConflictExplainerPayload FromCanonicalJson(string canonicalJson)
=> CanonicalJsonSerializer.Deserialize<MergeConflictExplainerPayload>(canonicalJson);
}
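A small determinism check, assuming `CanonicalJsonSerializer` emits stable output for equal payloads (the property the replay hashes depend on); the field values below are hypothetical:
var explainer = new MergeConflictExplainerPayload(
    Type: "severity",
    Reason: "mismatch",
    PrimarySources: new[] { "vendor" },
    PrimaryRank: 0,
    SuppressedSources: new[] { "nvd" },
    SuppressedRank: 1,
    PrimaryValue: "critical",
    SuppressedValue: "medium");
var canonicalJson = explainer.ToCanonicalJson();
var roundTripped = MergeConflictExplainerPayload.FromCanonicalJson(canonicalJson);
// Both paths must agree, otherwise persisted ConflictHash values could not be replayed.
Console.WriteLine(explainer.ComputeHashHex(canonicalJson) == roundTripped.ComputeHashHex());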

View File

@@ -0,0 +1,16 @@
using System;
using System.Collections.Immutable;
namespace StellaOps.Concelier.Merge.Services;
/// <summary>
/// Summary of a persisted advisory conflict including hashes and structured explainer payload.
/// </summary>
public sealed record MergeConflictSummary(
Guid ConflictId,
string VulnerabilityKey,
ImmutableArray<Guid> StatementIds,
string ConflictHash,
DateTimeOffset AsOf,
DateTimeOffset RecordedAt,
MergeConflictExplainerPayload Explainer);

View File

@@ -18,4 +18,5 @@
|Range primitives backlog|BE-Merge|Connector WGs|**DOING** Coordinate remaining connectors (`Acsc`, `Cccs`, `CertBund`, `CertCc`, `Cve`, `Ghsa`, `Ics.Cisa`, `Kisa`, `Ru.Bdu`, `Ru.Nkcki`, `Vndr.Apple`, `Vndr.Cisco`, `Vndr.Msrc`) to emit canonical RangePrimitives with provenance tags; track progress/fixtures here.<br>2025-10-11: Storage alignment notes + sample normalized rule JSON now captured in `RANGE_PRIMITIVES_COORDINATION.md` (see “Storage alignment quick reference”).<br>2025-10-11 18:45Z: GHSA normalized rules landed; OSV connector picked up next for rollout.<br>2025-10-11 21:10Z: `docs/dev/merge_semver_playbook.md` Section 8 now documents the persisted Mongo projection (SemVer + NEVRA) for connector reviewers.<br>2025-10-11 21:30Z: Added `docs/dev/normalized_versions_rollout.md` dashboard to centralize connector status and upcoming milestones.<br>2025-10-11 21:55Z: Merge now emits `concelier.merge.normalized_rules*` counters and unions connector-provided normalized arrays; see new test coverage in `AdvisoryPrecedenceMergerTests.Merge_RecordsNormalizedRuleMetrics`.<br>2025-10-12 17:05Z: CVE + KEV normalized rule verification complete; OSV parity fixtures revalidated—downstream parity/monitoring tasks may proceed.<br>2025-10-19 14:35Z: Prerequisites reviewed (none outstanding); FEEDMERGE-COORD-02-900 remains in DOING with connector follow-ups unchanged.<br>2025-10-19 15:25Z: Refreshed `RANGE_PRIMITIVES_COORDINATION.md` matrix + added targeted follow-ups (Cccs, CertBund, ICS-CISA, Kisa, Vndr.Cisco) with delivery dates 2025-10-21 → 2025-10-25; monitoring merge counters for regression.|
|Merge pipeline parity for new advisory fields|BE-Merge|Models, Core|DONE (2025-10-15) merge service now surfaces description/CWE/canonical metric decisions with updated metrics/tests.|
|Connector coordination for new advisory fields|Connector Leads, BE-Merge|Models, Core|**DONE (2025-10-15)** GHSA, NVD, and OSV connectors now emit advisory descriptions, CWE weaknesses, and canonical metric ids. Fixtures refreshed (GHSA connector regression suite, `conflict-nvd.canonical.json`, OSV parity snapshots) and completion recorded in coordination log.|
|FEEDMERGE-ENGINE-07-001 Conflict sets & explainers|BE-Merge|FEEDSTORAGE-DATA-07-001|**DOING (2025-10-19)** Merge now captures canonical advisory statements + prepares conflict payload scaffolding (statement hashes, deterministic JSON, tests). Next: surface conflict explainers and replay APIs for Core/WebService before marking DONE.|
|FEEDMERGE-ENGINE-07-001 Conflict sets & explainers|BE-Merge|FEEDSTORAGE-DATA-07-001|**DONE (2025-10-20)** Merge surfaces conflict explainers with replay hashes via `MergeConflictSummary`; API exposes structured payloads and integration tests cover deterministic `asOf` hashes.|
> Remark (2025-10-20): `AdvisoryMergeService` now returns conflict summaries with deterministic hashes; WebService replay endpoint emits typed explainers verified by new tests.
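> Sketch (illustrative, not normative): client-side replay verification; the route and response shape below are assumptions about the new WebService endpoint (requires `System.Net.Http.Json`), with the hash re-derived locally from the structured explainer.
// Assumption: the replay endpoint returns the persisted MergeConflictSummary records.
using var http = new HttpClient { BaseAddress = new Uri("https://concelier.internal.example") };
var summaries = await http.GetFromJsonAsync<List<MergeConflictSummary>>(
    "/concelier/advisories/CVE-2025-5000/replay") ?? new List<MergeConflictSummary>();
foreach (var summary in summaries)
{
    // Deterministic canonical JSON means the hash can be recomputed offline.
    var verified = string.Equals(
        summary.Explainer.ComputeHashHex(),
        summary.ConflictHash,
        StringComparison.OrdinalIgnoreCase);
    Console.WriteLine($"{summary.ConflictId}: verified={verified}");
}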

View File

@@ -1,10 +1,12 @@
using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Net;
using System.Net.Http.Json;
using System.Net.Http.Headers;
using System.Text.Json;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc.Testing;
@@ -17,75 +19,76 @@ using Mongo2Go;
using StellaOps.Concelier.Core.Events;
using StellaOps.Concelier.Core.Jobs;
using StellaOps.Concelier.Models;
using StellaOps.Concelier.Merge.Services;
using StellaOps.Concelier.WebService.Jobs;
using StellaOps.Concelier.WebService.Options;
using Xunit.Sdk;
using StellaOps.Auth.Abstractions;
using StellaOps.Auth.Client;
namespace StellaOps.Concelier.WebService.Tests;
public sealed class WebServiceEndpointsTests : IAsyncLifetime
{
private MongoDbRunner _runner = null!;
private ConcelierApplicationFactory _factory = null!;
public Task InitializeAsync()
{
_runner = MongoDbRunner.Start(singleNodeReplSet: true);
_factory = new ConcelierApplicationFactory(_runner.ConnectionString);
return Task.CompletedTask;
}
public Task DisposeAsync()
{
_factory.Dispose();
_runner.Dispose();
return Task.CompletedTask;
}
[Fact]
public async Task HealthAndReadyEndpointsRespond()
{
using var client = _factory.CreateClient();
var healthResponse = await client.GetAsync("/health");
if (!healthResponse.IsSuccessStatusCode)
{
var body = await healthResponse.Content.ReadAsStringAsync();
throw new Xunit.Sdk.XunitException($"/health failed: {(int)healthResponse.StatusCode} {body}");
}
var readyResponse = await client.GetAsync("/ready");
if (!readyResponse.IsSuccessStatusCode)
{
var body = await readyResponse.Content.ReadAsStringAsync();
throw new Xunit.Sdk.XunitException($"/ready failed: {(int)readyResponse.StatusCode} {body}");
}
var healthPayload = await healthResponse.Content.ReadFromJsonAsync<HealthPayload>();
Assert.NotNull(healthPayload);
Assert.Equal("healthy", healthPayload!.Status);
Assert.Equal("mongo", healthPayload.Storage.Driver);
var readyPayload = await readyResponse.Content.ReadFromJsonAsync<ReadyPayload>();
Assert.NotNull(readyPayload);
Assert.Equal("ready", readyPayload!.Status);
Assert.Equal("ready", readyPayload.Mongo.Status);
}
[Fact]
public async Task JobsEndpointsReturnExpectedStatuses()
{
using var client = _factory.CreateClient();
var definitions = await client.GetAsync("/jobs/definitions");
if (!definitions.IsSuccessStatusCode)
{
var body = await definitions.Content.ReadAsStringAsync();
throw new Xunit.Sdk.XunitException($"/jobs/definitions failed: {(int)definitions.StatusCode} {body}");
}
var definitions = await client.GetAsync("/jobs/definitions");
if (!definitions.IsSuccessStatusCode)
{
var body = await definitions.Content.ReadAsStringAsync();
throw new Xunit.Sdk.XunitException($"/jobs/definitions failed: {(int)definitions.StatusCode} {body}");
}
var trigger = await client.PostAsync("/jobs/unknown", new StringContent("{}", System.Text.Encoding.UTF8, "application/json"));
if (trigger.StatusCode != HttpStatusCode.NotFound)
{
@@ -96,12 +99,12 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
Assert.NotNull(problem);
Assert.Equal("https://stellaops.org/problems/not-found", problem!.Type);
Assert.Equal(404, problem.Status);
}
[Fact]
public async Task JobRunEndpointReturnsProblemWhenNotFound()
{
using var client = _factory.CreateClient();
var response = await client.GetAsync($"/jobs/{Guid.NewGuid()}");
if (response.StatusCode != HttpStatusCode.NotFound)
{
@@ -111,14 +114,14 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
var problem = await response.Content.ReadFromJsonAsync<ProblemDocument>();
Assert.NotNull(problem);
Assert.Equal("https://stellaops.org/problems/not-found", problem!.Type);
}
[Fact]
public async Task JobTriggerMapsCoordinatorOutcomes()
{
var handler = _factory.Services.GetRequiredService<StubJobCoordinator>();
using var client = _factory.CreateClient();
handler.NextResult = JobTriggerResult.AlreadyRunning("busy");
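// The stubbed AlreadyRunning outcome should map to HTTP 409 Conflict.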
var conflict = await client.PostAsync("/jobs/test", JsonContent.Create(new JobTriggerRequest()));
if (conflict.StatusCode != HttpStatusCode.Conflict)
@@ -151,72 +154,72 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
var failureProblem = await failed.Content.ReadFromJsonAsync<ProblemDocument>();
Assert.NotNull(failureProblem);
Assert.Equal("https://stellaops.org/problems/job-failure", failureProblem!.Type);
}
[Fact]
public async Task JobsEndpointsExposeJobData()
{
var handler = _factory.Services.GetRequiredService<StubJobCoordinator>();
var now = DateTimeOffset.UtcNow;
var run = new JobRunSnapshot(
Guid.NewGuid(),
"demo",
JobRunStatus.Succeeded,
now,
now,
now.AddSeconds(2),
"api",
"hash",
null,
TimeSpan.FromMinutes(5),
TimeSpan.FromMinutes(1),
new Dictionary<string, object?> { ["key"] = "value" });
handler.Definitions = new[]
{
new JobDefinition("demo", typeof(DemoJob), TimeSpan.FromMinutes(5), TimeSpan.FromMinutes(1), "*/5 * * * *", true)
};
handler.LastRuns["demo"] = run;
handler.RecentRuns = new[] { run };
handler.ActiveRuns = Array.Empty<JobRunSnapshot>();
handler.Runs[run.RunId] = run;
try
{
using var client = _factory.CreateClient();
var definitions = await client.GetFromJsonAsync<List<JobDefinitionPayload>>("/jobs/definitions");
Assert.NotNull(definitions);
Assert.Single(definitions!);
Assert.Equal("demo", definitions![0].Kind);
Assert.NotNull(definitions[0].LastRun);
Assert.Equal(run.RunId, definitions[0].LastRun!.RunId);
var runPayload = await client.GetFromJsonAsync<JobRunPayload>($"/jobs/{run.RunId}");
Assert.NotNull(runPayload);
Assert.Equal(run.RunId, runPayload!.RunId);
Assert.Equal("Succeeded", runPayload.Status);
var runs = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs?kind=demo&limit=5");
Assert.NotNull(runs);
Assert.Single(runs!);
Assert.Equal(run.RunId, runs![0].RunId);
var runsByDefinition = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs/definitions/demo/runs");
Assert.NotNull(runsByDefinition);
Assert.Single(runsByDefinition!);
var active = await client.GetFromJsonAsync<List<JobRunPayload>>("/jobs/active");
Assert.NotNull(active);
Assert.Empty(active!);
}
finally
{
handler.Definitions = Array.Empty<JobDefinition>();
handler.RecentRuns = Array.Empty<JobRunSnapshot>();
handler.ActiveRuns = Array.Empty<JobRunSnapshot>();
handler.Runs.Clear();
handler.LastRuns.Clear();
}
}
@@ -271,6 +274,77 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
Assert.True(payload.Conflicts is null || payload.Conflicts!.Count == 0);
}
[Fact]
public async Task AdvisoryReplayEndpointReturnsConflictExplainer()
{
var vulnerabilityKey = "CVE-2025-9100";
var statementId = Guid.NewGuid();
var conflictId = Guid.NewGuid();
var recordedAt = DateTimeOffset.Parse("2025-02-01T00:00:00Z", CultureInfo.InvariantCulture);
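// Arrange: seed one advisory statement and one structured conflict into the event log before replaying.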
using (var scope = _factory.Services.CreateScope())
{
var eventLog = scope.ServiceProvider.GetRequiredService<IAdvisoryEventLog>();
var advisory = new Advisory(
advisoryKey: vulnerabilityKey,
title: "Base advisory",
summary: "Baseline summary",
language: "en",
published: recordedAt.AddDays(-1),
modified: recordedAt,
severity: "critical",
exploitKnown: false,
aliases: new[] { vulnerabilityKey },
references: Array.Empty<AdvisoryReference>(),
affectedPackages: Array.Empty<AffectedPackage>(),
cvssMetrics: Array.Empty<CvssMetric>(),
provenance: Array.Empty<AdvisoryProvenance>());
var statementInput = new AdvisoryStatementInput(
vulnerabilityKey,
advisory,
recordedAt,
Array.Empty<Guid>(),
StatementId: statementId,
AdvisoryKey: advisory.AdvisoryKey);
await eventLog.AppendAsync(new AdvisoryEventAppendRequest(new[] { statementInput }), CancellationToken.None);
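// The explainer's canonical JSON becomes the persisted conflict payload; its ComputeHashHex value is the replay hash asserted below.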
var explainer = new MergeConflictExplainerPayload(
Type: "severity",
Reason: "mismatch",
PrimarySources: new[] { "vendor" },
PrimaryRank: 1,
SuppressedSources: new[] { "nvd" },
SuppressedRank: 5,
PrimaryValue: "CRITICAL",
SuppressedValue: "MEDIUM");
using var conflictDoc = JsonDocument.Parse(explainer.ToCanonicalJson());
var conflictInput = new AdvisoryConflictInput(
vulnerabilityKey,
conflictDoc,
recordedAt,
new[] { statementId },
ConflictId: conflictId);
await eventLog.AppendAsync(new AdvisoryEventAppendRequest(Array.Empty<AdvisoryStatementInput>(), new[] { conflictInput }), CancellationToken.None);
}
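// Act/assert: replay the advisory and expect the conflict to surface with its structured explainer and matching hash.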
using var client = _factory.CreateClient();
var response = await client.GetAsync($"/concelier/advisories/{vulnerabilityKey}/replay");
Assert.Equal(HttpStatusCode.OK, response.StatusCode);
var payload = await response.Content.ReadFromJsonAsync<ReplayResponse>();
Assert.NotNull(payload);
var conflict = Assert.Single(payload!.Conflicts);
Assert.Equal(conflictId, conflict.ConflictId);
Assert.Equal("severity", conflict.Explainer.Type);
Assert.Equal("mismatch", conflict.Explainer.Reason);
Assert.Equal("CRITICAL", conflict.Explainer.PrimaryValue);
Assert.Equal("MEDIUM", conflict.Explainer.SuppressedValue);
Assert.Equal(conflict.Explainer.ComputeHashHex(), conflict.ConflictHash);
}
[Fact]
public async Task MirrorEndpointsServeConfiguredArtifacts()
{
@@ -379,8 +453,49 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
using var client = factory.CreateClient();
var response = await client.GetAsync("/concelier/exports/mirror/secure/manifest.json");
Assert.Equal(HttpStatusCode.Unauthorized, response.StatusCode);
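// The 401 must carry a Bearer challenge so mirror clients can discover the Authority requirement.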
var authHeader = Assert.Single(response.Headers.WwwAuthenticate);
Assert.Equal("Bearer", authHeader.Scheme);
}
[Fact]
public async Task MirrorEndpointsRespectRateLimits()
{
using var temp = new TempDirectory();
var exportId = "20251019T130000Z";
var exportRoot = Path.Combine(temp.Path, exportId);
var mirrorRoot = Path.Combine(exportRoot, "mirror");
Directory.CreateDirectory(mirrorRoot);
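// Minimal mirror index fixture so the endpoint has a real artifact to serve.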
await File.WriteAllTextAsync(
Path.Combine(mirrorRoot, "index.json"),
"""{\"schemaVersion\":1,\"domains\":[]}"""
);
var environment = new Dictionary<string, string?>
{
["CONCELIER_MIRROR__ENABLED"] = "true",
["CONCELIER_MIRROR__EXPORTROOT"] = temp.Path,
["CONCELIER_MIRROR__ACTIVEEXPORTID"] = exportId,
["CONCELIER_MIRROR__MAXINDEXREQUESTSPERHOUR"] = "1",
["CONCELIER_MIRROR__DOMAINS__0__ID"] = "primary",
["CONCELIER_MIRROR__DOMAINS__0__REQUIREAUTHENTICATION"] = "false",
["CONCELIER_MIRROR__DOMAINS__0__MAXDOWNLOADREQUESTSPERHOUR"] = "1"
};
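// Quotas of one request per hour guarantee the second call below is throttled with 429 plus Retry-After.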
using var factory = new ConcelierApplicationFactory(_runner.ConnectionString, environmentOverrides: environment);
using var client = factory.CreateClient();
var okResponse = await client.GetAsync("/concelier/exports/index.json");
Assert.Equal(HttpStatusCode.OK, okResponse.StatusCode);
var limitedResponse = await client.GetAsync("/concelier/exports/index.json");
Assert.Equal((HttpStatusCode)429, limitedResponse.StatusCode);
Assert.NotNull(limitedResponse.Headers.RetryAfter);
Assert.True(limitedResponse.Headers.RetryAfter!.Delta.HasValue);
Assert.True(limitedResponse.Headers.RetryAfter!.Delta!.Value.TotalSeconds > 0);
}
[Fact]
public async Task JobsEndpointsAllowBypassWhenAuthorityEnabled()
{
@@ -553,7 +668,8 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
string ConflictHash,
DateTimeOffset AsOf,
DateTimeOffset RecordedAt,
string Details,
MergeConflictExplainerPayload Explainer);
private sealed class ConcelierApplicationFactory : WebApplicationFactory<Program>
{
@@ -832,85 +948,85 @@ public sealed class WebServiceEndpointsTests : IAsyncLifetime
}
}
}
private sealed record HealthPayload(string Status, DateTimeOffset StartedAt, double UptimeSeconds, StoragePayload Storage, TelemetryPayload Telemetry);
private sealed record StoragePayload(string Driver, bool Completed, DateTimeOffset? CompletedAt, double? DurationMs);
private sealed record TelemetryPayload(bool Enabled, bool Tracing, bool Metrics, bool Logging);
private sealed record ReadyPayload(string Status, DateTimeOffset StartedAt, double UptimeSeconds, ReadyMongoPayload Mongo);
private sealed record ReadyMongoPayload(string Status, double? LatencyMs, DateTimeOffset? CheckedAt, string? Error);
private sealed record JobDefinitionPayload(string Kind, bool Enabled, string? CronExpression, TimeSpan Timeout, TimeSpan LeaseDuration, JobRunPayload? LastRun);
private sealed record JobRunPayload(Guid RunId, string Kind, string Status, string Trigger, DateTimeOffset CreatedAt, DateTimeOffset? StartedAt, DateTimeOffset? CompletedAt, string? Error, TimeSpan? Duration, Dictionary<string, object?> Parameters);
private sealed record ProblemDocument(string? Type, string? Title, int? Status, string? Detail, string? Instance);
private sealed class DemoJob : IJob
{
public Task ExecuteAsync(JobExecutionContext context, CancellationToken cancellationToken) => Task.CompletedTask;
}
private sealed class StubJobCoordinator : IJobCoordinator
{
public JobTriggerResult NextResult { get; set; } = JobTriggerResult.NotFound("not set");
public IReadOnlyList<JobDefinition> Definitions { get; set; } = Array.Empty<JobDefinition>();
public IReadOnlyList<JobRunSnapshot> RecentRuns { get; set; } = Array.Empty<JobRunSnapshot>();
public IReadOnlyList<JobRunSnapshot> ActiveRuns { get; set; } = Array.Empty<JobRunSnapshot>();
public Dictionary<Guid, JobRunSnapshot> Runs { get; } = new();
public Dictionary<string, JobRunSnapshot?> LastRuns { get; } = new(StringComparer.Ordinal);
public Task<JobTriggerResult> TriggerAsync(string kind, IReadOnlyDictionary<string, object?>? parameters, string trigger, CancellationToken cancellationToken)
=> Task.FromResult(NextResult);
public Task<IReadOnlyList<JobDefinition>> GetDefinitionsAsync(CancellationToken cancellationToken)
=> Task.FromResult(Definitions);
public Task<IReadOnlyList<JobRunSnapshot>> GetRecentRunsAsync(string? kind, int limit, CancellationToken cancellationToken)
{
IEnumerable<JobRunSnapshot> query = RecentRuns;
if (!string.IsNullOrWhiteSpace(kind))
{
query = query.Where(run => string.Equals(run.Kind, kind, StringComparison.Ordinal));
}
return Task.FromResult<IReadOnlyList<JobRunSnapshot>>(query.Take(limit).ToArray());
}
public Task<IReadOnlyList<JobRunSnapshot>> GetActiveRunsAsync(CancellationToken cancellationToken)
=> Task.FromResult(ActiveRuns);
public Task<JobRunSnapshot?> GetRunAsync(Guid runId, CancellationToken cancellationToken)
=> Task.FromResult(Runs.TryGetValue(runId, out var run) ? run : null);
public Task<JobRunSnapshot?> GetLastRunAsync(string kind, CancellationToken cancellationToken)
=> Task.FromResult(LastRuns.TryGetValue(kind, out var run) ? run : null);
public Task<IReadOnlyDictionary<string, JobRunSnapshot>> GetLastRunsAsync(IEnumerable<string> kinds, CancellationToken cancellationToken)
{
var map = new Dictionary<string, JobRunSnapshot>(StringComparer.Ordinal);
foreach (var kind in kinds)
{
if (kind is null)
{
continue;
}
if (LastRuns.TryGetValue(kind, out var run) && run is not null)
{
map[kind] = run;
}
}
return Task.FromResult<IReadOnlyDictionary<string, JobRunSnapshot>>(map);
}
}
}

View File

@@ -1,8 +1,9 @@
using System.Globalization;
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Options;
using StellaOps.Concelier.WebService.Options;
using StellaOps.Concelier.WebService.Services;
namespace StellaOps.Concelier.WebService.Extensions;
@@ -42,7 +43,7 @@ internal static class MirrorEndpointExtensions
return Results.NotFound();
}
return await WriteFileAsync(path, context.Response, "application/json").ConfigureAwait(false);
});
app.MapGet("/concelier/exports/{**relativePath}", async (
@@ -84,7 +85,7 @@ internal static class MirrorEndpointExtensions
}
var contentType = ResolveContentType(path);
return await WriteFileAsync(path, context.Response, contentType).ConfigureAwait(false);
});
}
@@ -111,12 +112,12 @@ internal static class MirrorEndpointExtensions
return null;
}
private static bool TryAuthorize(bool requireAuthentication, bool enforceAuthority, HttpContext context, bool authorityConfigured, out IResult result)
{
result = Results.Empty;
if (!requireAuthentication)
{
return true;
}
if (!enforceAuthority || !authorityConfigured)
@@ -127,14 +128,15 @@ internal static class MirrorEndpointExtensions
if (context.User?.Identity?.IsAuthenticated == true)
{
return true;
}
context.Response.Headers.WWWAuthenticate = "Bearer realm=\"StellaOps Concelier Mirror\"";
result = Results.StatusCode(StatusCodes.Status401Unauthorized);
return false;
}
private static Task<IResult> WriteFileAsync(string path, HttpResponse response, string contentType)
{
var fileInfo = new FileInfo(path);
if (!fileInfo.Exists)
{
@@ -147,12 +149,12 @@ internal static class MirrorEndpointExtensions
FileAccess.Read,
FileShare.Read | FileShare.Delete);
response.Headers.CacheControl = "public, max-age=60";
response.Headers.LastModified = fileInfo.LastWriteTimeUtc.ToString("R", CultureInfo.InvariantCulture);
response.ContentLength = fileInfo.Length;
return Task.FromResult(Results.Stream(stream, contentType));
}
response.Headers.CacheControl = BuildCacheControlHeader(path);
response.Headers.LastModified = fileInfo.LastWriteTimeUtc.ToString("R", CultureInfo.InvariantCulture);
response.ContentLength = fileInfo.Length;
return Task.FromResult(Results.Stream(stream, contentType));
}
private static string ResolveContentType(string path)
{
if (path.EndsWith(".json", StringComparison.OrdinalIgnoreCase))
@@ -176,6 +178,28 @@ internal static class MirrorEndpointExtensions
}
var seconds = Math.Max((int)Math.Ceiling(retryAfter.Value.TotalSeconds), 1);
response.Headers.RetryAfter = seconds.ToString(CultureInfo.InvariantCulture);
}
private static string BuildCacheControlHeader(string path)
{
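// index.json is expected to change between exports, so keep a short TTL; other export artifacts are immutable per export and may cache longer.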
var fileName = Path.GetFileName(path);
if (fileName is null)
{
return "public, max-age=60";
}
if (string.Equals(fileName, "index.json", StringComparison.OrdinalIgnoreCase))
{
return "public, max-age=60";
}
if (fileName.EndsWith(".json", StringComparison.OrdinalIgnoreCase) ||
fileName.EndsWith(".jws", StringComparison.OrdinalIgnoreCase))
{
return "public, max-age=300, immutable";
}
return "public, max-age=300";
}
}

View File

@@ -227,7 +227,8 @@ app.MapGet("/concelier/advisories/{vulnerabilityKey}/replay", async (
ConflictHash = Convert.ToHexString(conflict.ConflictHash.ToArray()),
conflict.AsOf,
conflict.RecordedAt,
Details = conflict.CanonicalJson,
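// Rehydrate the structured explainer from the canonical JSON so API consumers do not have to parse Details themselves.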
Explainer = MergeConflictExplainerPayload.FromCanonicalJson(conflict.CanonicalJson)
}).ToArray()
};

View File

@@ -1,27 +1,28 @@
# TASKS
| Task | Owner(s) | Depends on | Notes |
|---|---|---|---|
|FEEDWEB-EVENTS-07-001 Advisory event replay API|Concelier WebService Guild|FEEDCORE-ENGINE-07-001|**DONE (2025-10-19)** Added `/concelier/advisories/{vulnerabilityKey}/replay` endpoint with optional `asOf`, hex hashes, and conflict payloads; integration covered via `dotnet test src/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj`.|
|Bind & validate ConcelierOptions|BE-Base|WebService|DONE options bound/validated with failure logging.|
|Mongo service wiring|BE-Base|Storage.Mongo|DONE wiring delegated to `AddMongoStorage`.|
|Bootstrapper execution on start|BE-Base|Storage.Mongo|DONE startup calls `MongoBootstrapper.InitializeAsync`.|
|Plugin host options finalization|BE-Base|Plugins|DONE default plugin directories/search patterns configured.|
|Jobs API contract tests|QA|Core|DONE WebServiceEndpointsTests now cover success payloads, filtering, and trigger outcome mapping.|
|Health/Ready probes|DevOps|Ops|DONE `/health` and `/ready` endpoints implemented.|
|Serilog + OTEL integration hooks|BE-Base|Observability|DONE `TelemetryExtensions` wires Serilog + OTEL with configurable exporters.|
|Register built-in jobs (sources/exporters)|BE-Base|Core|DONE AddBuiltInConcelierJobs adds fallback scheduler definitions for core connectors and exporters via reflection.|
|HTTP problem details consistency|BE-Base|WebService|DONE API errors now emit RFC7807 responses with trace identifiers and typed problem categories.|
|Request logging and metrics|BE-Base|Observability|DONE Serilog request logging enabled with enriched context and web.jobs counters published via OpenTelemetry.|
|Endpoint smoke tests (health/ready/jobs error paths)|QA|WebService|DONE WebServiceEndpointsTests assert success and problem responses for health, ready, and job trigger error paths.|
|Batch job definition last-run lookup|BE-Base|Core|DONE definitions endpoint now precomputes kinds array and reuses batched last-run dictionary; manual smoke verified via local GET `/jobs/definitions`.|
|Add no-cache headers to health/readiness/jobs APIs|BE-Base|WebService|DONE helper applies Cache-Control/Pragma/Expires on all health/ready/jobs endpoints; awaiting automated probe tests once connector fixtures stabilize.|
|Authority configuration parity (FSR1)|DevEx/Concelier|Authority options schema|**DONE (2025-10-10)** Options post-config loads clientSecretFile fallback, validators normalize scopes/audiences, and sample config documents issuer/credential/bypass settings.|
|Document authority toggle & scope requirements|Docs/Concelier|Authority integration|**DOING (2025-10-10)** Quickstart updated with staging flag, client credentials, env overrides; operator guide refresh pending Docs guild review.|
|Plumb Authority client resilience options|BE-Base|Auth libraries LIB5|**DONE (2025-10-12)** `Program.cs` wires `authority.resilience.*` + client scopes into `AddStellaOpsAuthClient`; new integration test asserts binding and retries.|
|Author ops guidance for resilience tuning|Docs/Concelier|Plumb Authority client resilience options|**DONE (2025-10-12)** `docs/21_INSTALL_GUIDE.md` + `docs/ops/concelier-authority-audit-runbook.md` document resilience profiles for connected vs air-gapped installs and reference monitoring cues.|
|Document authority bypass logging patterns|Docs/Concelier|FSR3 logging|**DONE (2025-10-12)** Updated operator guides clarify `Concelier.Authorization.Audit` fields (route/status/subject/clientId/scopes/bypass/remote) and SIEM triggers.|
|Update Concelier operator guide for enforcement cutoff|Docs/Concelier|FSR1 rollout|**DONE (2025-10-12)** Installation guide emphasises disabling `allowAnonymousFallback` before 2025-12-31 UTC and connects audit signals to the rollout checklist.|
|Rename plugin drop directory to namespaced path|BE-Base|Plugins|**DONE (2025-10-19)** Build outputs now target `StellaOps.Concelier.PluginBinaries`/`StellaOps.Authority.PluginBinaries`, plugin host defaults updated, config/docs refreshed, and `dotnet test src/StellaOps.Concelier.WebService.Tests/StellaOps.Concelier.WebService.Tests.csproj --no-restore` covers the change.|
|Authority resilience adoption|Concelier WebService, Docs|Plumb Authority client resilience options|**BLOCKED (2025-10-10)** Roll out retry/offline knobs to deployment docs and confirm CLI parity once LIB5 lands; unblock after resilience options wired and tested.|
|CONCELIER-WEB-08-201 Mirror distribution endpoints|Concelier WebService Guild|CONCELIER-EXPORT-08-201, DEVOPS-MIRROR-08-001|**DONE (2025-10-20)** Mirror endpoints now enforce per-domain rate limits, emit cache headers, honour Authority/WWW-Authenticate, and docs cover auth + smoke workflows.|
> Remark (2025-10-20): Updated ops runbook with token/rate-limit checks and added API tests for Retry-After + unauthorized flows.
|Wave 0B readiness checkpoint|Team WebService & Authority|Wave0A completion|BLOCKED (2025-10-19) FEEDSTORAGE-MONGO-08-001 closed, but remaining Wave0A items (AUTH-DPOP-11-001, AUTH-MTLS-11-002, PLUGIN-DI-08-001) still open; maintain current DOING workstreams only.|