From ea1106ce7ccaca83151cd018bad43a9a9d3b7cc6 Mon Sep 17 00:00:00 2001
From: Vladimir Moushkov
Install guide reiterates the 2025-12-31 cutoff and links audit signals to the rollout checklist. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/TASKS.md | DONE (2025-10-11) | Team WebService & Authority | SEC3.HOST | Rate limiter policy binding
Authority host now applies configuration-driven fixed windows to `/token`, `/authorize`, and `/internal/*`; integration tests assert 429 + `Retry-After` headers; docs/config samples refreshed for Docs guild diagrams. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/TASKS.md | DONE (2025-10-11) | Team WebService & Authority | SEC3.BUILD | Authority rate-limiter follow-through
`Security.RateLimiting` now fronts token/authorize/internal limiters; Authority + Configuration matrices (`dotnet test src/StellaOps.Authority/StellaOps.Authority.sln`, `dotnet test src/StellaOps.Configuration.Tests/StellaOps.Configuration.Tests.csproj`) passed on 2025-10-11; awaiting #authority-core broadcast. |
+| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/TASKS.md | DONE (2025-10-14) | Team Authority Platform & Security Guild | AUTHCORE-BUILD-OPENIDDICT / AUTHCORE-STORAGE-DEVICE-TOKENS / AUTHCORE-BOOTSTRAP-INVITES | Address remaining Authority compile blockers (OpenIddict transaction shim, token device document, bootstrap invite cleanup) so `dotnet build src/StellaOps.Authority.sln` returns success. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md | DONE (2025-10-11) | Team WebService & Authority | PLG6.DOC | Plugin developer guide polish
Section 9 now documents rate limiter metadata, config keys, and lockout interplay; YAML samples updated alongside Authority config templates. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-11) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-001 | Fetch pipeline & state tracking
Summary planner now drives monthly/yearly VINCE fetches, persists pending summaries/notes, and hydrates VINCE detail queue with telemetry.
Team instructions: Read ./AGENTS.md and src/StellaOps.Feedser.Source.CertCc/AGENTS.md. Coordinate daily with Models/Merge leads so new normalizedVersions output and provenance tags stay aligned with ./src/FASTER_MODELING_AND_NORMALIZATION.md. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-11) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-002 | VINCE note detail fetcher
Summary planner queues VINCE note detail endpoints, persists raw JSON with SHA/ETag metadata, and records retry/backoff metrics. |
@@ -23,7 +24,7 @@
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-12) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-005 | Deterministic fixtures/tests
Snapshot harness refreshed 2025-10-12; `certcc-*.snapshot.json` regenerated and regression suite green without UPDATE flag drift. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-12) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-006 | Telemetry & documentation
`CertCcDiagnostics` publishes summary/detail/parse/map metrics (meter `StellaOps.Feedser.Source.CertCc`), README documents instruments, and log guidance captured for Ops on 2025-10-12. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-12) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-007 | Connector test harness remediation
Harness now wires `AddSourceCommon`, resets `FakeTimeProvider`, and passes canned-response regression run dated 2025-10-12. |
-| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | BLOCKED (2025-10-11) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-008 | Snapshot coverage handoff
Upstream repo version lacks SemVer primitives + provenance decision reason fields, so snapshot regeneration fails; resume once Models/Storage sprint lands those changes. |
+| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-11) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-008 | Snapshot coverage handoff
Fixtures regenerated with normalized ranges + provenance fields on 2025-10-11; QA handoff notes published and merge backfill unblocked. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-12) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-012 | Schema sync & snapshot regen follow-up
Fixtures regenerated with normalizedVersions + provenance decision reasons; handoff notes updated for Merge backfill 2025-10-12. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-11) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-009 | Detail/map reintegration plan
Staged reintegration plan published in `src/StellaOps.Feedser.Source.CertCc/FEEDCONN-CERTCC-02-009_PLAN.md`; coordinates enablement with FEEDCONN-CERTCC-02-004. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.Source.CertCc/TASKS.md | DONE (2025-10-12) | Team Connector Resumption – CERT/RedHat | FEEDCONN-CERTCC-02-010 | Partial-detail graceful degradation
Detail fetch now tolerates 404/403/410 responses and regression tests cover mixed endpoint availability. |
@@ -48,16 +49,20 @@
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Feedser.WebService/TASKS.md | DONE (2025-10-11) | Team WebService & Authority | FEEDWEB-OPS-01-007 | Authority resilience adoption
Deployment docs and CLI notes explain the LIB5 resilience knobs for rollout.
Instructions to work:
DONE Read ./AGENTS.md and src/StellaOps.Feedser.WebService/AGENTS.md. These items were mid-flight; resume implementation ensuring docs/operators receive timely updates. |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/TASKS.md | DONE (2025-10-11) | Team Authority Platform & Security Guild | AUTHCORE-ENGINE-01-001 | CORE8.RL — Rate limiter plumbing validated; integration tests green and docs handoff recorded for middleware ordering + Retry-After headers (see `docs/dev/authority-rate-limit-tuning-outline.md` for continuing guidance). |
| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Cryptography/TASKS.md | DONE (2025-10-11) | Team Authority Platform & Security Guild | AUTHCRYPTO-ENGINE-01-001 | SEC3.A — Shared metadata resolver confirmed via host test run; SEC3.B now unblocked for tuning guidance (outline captured in `docs/dev/authority-rate-limit-tuning-outline.md`). |
-| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md | DOING (2025-10-11) | Team Authority Platform & Security Guild | AUTHPLUG-DOCS-01-001 | PLG6.DOC — Docs guild resuming diagram/copy updates using the captured limiter context + configuration notes (reference `docs/dev/authority-rate-limit-tuning-outline.md` for tuning matrix + observability copy).
Instructions to work:
Read ./AGENTS.md plus module-specific AGENTS. Restart the blocked rate-limiter workstream (Authority host + cryptography) so the plugin docs team can finish diagrams. Coordinate daily; use ./src/DEDUP_CONFLICTS_RESOLUTION_ALGO.md where rate limiting interacts with conflict policy. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Normalization/TASKS.md | — | Team Normalization & Storage Backbone | FEEDNORM-NORM-02-001 | SemVer normalized rule emitter
Instructions to work:
Read ./AGENTS.md and module AGENTS. Use ./src/FASTER_MODELING_AND_NORMALIZATION.md to build the shared rule generator; sync daily with storage and connector owners. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | — | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-001 | Normalized range dual-write + backfill |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | — | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-002 | Provenance decision reason persistence |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | — | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-003 | Normalized versions indexing
Instructions to work:
Read ./AGENTS.md and storage AGENTS. Implement dual-write/backfill and index creation using the shapes from ./src/FASTER_MODELING_AND_NORMALIZATION.md; coordinate with connectors entering the sprint. |
+| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Cryptography/TASKS.md | DONE (2025-10-13) | Team Authority Platform & Security Guild | AUTHSEC-DOCS-01-002 | SEC3.B — Published `docs/security/rate-limits.md` with tuning matrix, alert thresholds, and lockout interplay guidance; Docs guild can lift copy into plugin guide. |
+| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Cryptography/TASKS.md | DONE (2025-10-14) | Team Authority Platform & Security Guild | AUTHSEC-CRYPTO-02-001 | SEC5.B1 — Introduced the libsodium signing provider and parity tests, unblocking CLI verification enhancements. |
+| Sprint 1 | Bootstrap & Replay Hardening | src/StellaOps.Cryptography/TASKS.md | DONE (2025-10-14) | Security Guild | AUTHSEC-CRYPTO-02-004 | SEC5.D/E — Bootstrap invite lifecycle (API/store/cleanup) and token device heuristics finished; the build break from pending handler integration cleared with the AUTHCORE compile fixes on 2025-10-14. |
+| Sprint 1 | Developer Tooling | src/StellaOps.Cli/TASKS.md | TODO | DevEx/CLI | AUTHCLI-DIAG-01-001 | Surface password policy diagnostics in CLI startup/output so operators see weakened overrides immediately. |
+| Sprint 1 | Stabilize In-Progress Foundations | src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md | DONE (2025-10-11) | Team Authority Platform & Security Guild | AUTHPLUG-DOCS-01-001 | PLG6.DOC — Developer guide copy + diagrams merged 2025-10-11; limiter guidance incorporated and handed to Docs guild for asset export. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Normalization/TASKS.md | DONE (2025-10-12) | Team Normalization & Storage Backbone | FEEDNORM-NORM-02-001 | SemVer normalized rule emitter
`SemVerRangeRuleBuilder` shipped 2025-10-12 with comparator/`||` support and fixtures aligning to `FASTER_MODELING_AND_NORMALIZATION.md`. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | DONE (2025-10-11) | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-001 | Normalized range dual-write + backfill |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | DONE (2025-10-11) | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-002 | Provenance decision reason persistence |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Storage.Mongo/TASKS.md | DONE (2025-10-11) | Team Normalization & Storage Backbone | FEEDSTORAGE-DATA-02-003 | Normalized versions indexing
Indexes seeded + docs updated 2025-10-11 to cover flattened normalized rules for connector adoption. |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Merge/TASKS.md | DONE (2025-10-11) | Team Normalization & Storage Backbone | FEEDMERGE-ENGINE-02-002 | Normalized versions union & dedupe
Affected package resolver unions/dedupes normalized rules, stamps merge provenance with `decisionReason`, and tests cover the rollout. |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-001 | GHSA normalized versions & provenance |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-004 | GHSA credits & ecosystem severity mapping |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | TODO | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-005 | GitHub quota monitoring & retries |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | TODO | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-006 | Production credential & scheduler rollout |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | DONE (2025-10-12) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-005 | GitHub quota monitoring & retries |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | DONE (2025-10-12) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-006 | Production credential & scheduler rollout |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ghsa/TASKS.md | DONE (2025-10-12) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-GHSA-02-007 | Credit parity regression fixtures |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Nvd/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-NVD-02-002 | NVD normalized versions & timestamps |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Nvd/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-NVD-02-004 | NVD CVSS & CWE precedence payloads |
@@ -65,17 +70,17 @@
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Osv/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-OSV-02-003 | OSV normalized versions & freshness |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Osv/TASKS.md | DONE (2025-10-11) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-OSV-02-004 | OSV references & credits alignment |
| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Osv/TASKS.md | DONE (2025-10-12) | Team Connector Expansion – GHSA/NVD/OSV | FEEDCONN-OSV-02-005 | Fixture updater workflow
Resolved 2025-10-12: OSV mapper now derives canonical PURLs for Go + scoped npm packages when raw payloads omit `purl`; conflict fixtures unchanged for invalid npm names. Verified via `dotnet test src/StellaOps.Feedser.Source.Osv.Tests`, `src/StellaOps.Feedser.Source.Ghsa.Tests`, `src/StellaOps.Feedser.Source.Nvd.Tests`, and backbone normalization/storage suites. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Acsc/TASKS.md | Implementation DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-ACSC-02-001 … 02-008 | Fetch pipeline, DTO parser, canonical mapper, fixtures, and README shipped 2025-10-12; downstream export integration still pending future tasks. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Cccs/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CCCS-02-001 … 02-007 | Atom feed verified 2025-10-11, history/caching review and FR locale enumeration pending. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.CertBund/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CERTBUND-02-001 … 02-007 | BSI RSS directory confirmed CERT-Bund feed 2025-10-11, history assessment pending. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Kisa/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-KISA-02-001 … 02-007 | KNVD RSS endpoint identified 2025-10-11, access headers/session strategy outstanding. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ru.Bdu/TASKS.md | Build DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-RUBDU-02-001 … 02-008 | TLS bundle + connectors landed 2025-10-12; fetch/parse/map flow emits advisories, fixtures & telemetry follow-up pending. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ru.Nkcki/TASKS.md | Build DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-NKCKI-02-001 … 02-008 | JSON bulletin fetch + canonical mapping live 2025-10-12; regression fixtures added but blocked on Mongo2Go libcrypto dependency for test execution. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ics.Cisa/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-ICSCISA-02-001 … 02-008 | new ICS RSS endpoint logged 2025-10-11 but Akamai blocks direct pulls, fallback strategy task opened. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Vndr.Cisco/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CISCO-02-001 … 02-007 | openVuln API + RSS reviewed 2025-10-11, auth/pagination memo pending. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Vndr.Msrc/TASKS.md | Research DOING | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-MSRC-02-001 … 02-007 | MSRC API docs reviewed 2025-10-11, auth/throttling comparison memo pending.
Instructions to work:
Read ./AGENTS.md plus each module's AGENTS file. Parallelize research, ingestion, mapping, fixtures, and docs using the normalized rule shape from ./src/FASTER_MODELING_AND_NORMALIZATION.md. Coordinate daily with the merge coordination task from Sprint 1. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Cve/TASKS.md | — | Team Connector Support & Monitoring | FEEDCONN-CVE-02-001 … 02-002 | Instructions to work:
Read ./AGENTS.md and module AGENTS. Deliver operator docs and monitoring instrumentation required for broader feed rollout. |
-| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Kev/TASKS.md | — | Team Connector Support & Monitoring | FEEDCONN-KEV-02-001 … 02-002 | Instructions to work:
Read ./AGENTS.md and module AGENTS. Deliver operator docs and monitoring instrumentation required for broader feed rollout. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Acsc/TASKS.md | Implementation DONE (2025-10-12) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-ACSC-02-001 … 02-008 | Fetch→parse→map pipeline, fixtures, diagnostics, and README finished 2025-10-12; awaiting downstream export follow-ups tracked separately. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Cccs/TASKS.md | DONE (2025-10-16) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CCCS-02-001 … 02-008 | Observability meter, historical harvest plan, and DOM sanitizer refinements wrapped; ops notes live under `docs/ops/feedser-cccs-operations.md` with fixtures validating EN/FR list handling. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.CertBund/TASKS.md | DONE (2025-10-15) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CERTBUND-02-001 … 02-008 | Telemetry/docs (02-006) and history/locale sweep (02-007) completed alongside pipeline; runbook `docs/ops/feedser-certbund-operations.md` captures locale guidance and offline packaging. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Kisa/TASKS.md | DONE (2025-10-14) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-KISA-02-001 … 02-007 | Connector, tests, and telemetry/docs (02-006) finalized; localisation notes in `docs/dev/kisa_connector_notes.md` complete rollout. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ru.Bdu/TASKS.md | DONE (2025-10-14) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-RUBDU-02-001 … 02-008 | Fetch/parser/mapper refinements, regression fixtures, telemetry/docs, access options, and trusted root packaging all landed; README documents offline access strategy. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ru.Nkcki/TASKS.md | DONE (2025-10-13) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-NKCKI-02-001 … 02-008 | Listing fetch, parser, mapper, fixtures, telemetry/docs, and archive plan finished; Mongo2Go/libcrypto dependency resolved via bundled OpenSSL noted in ops guide. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Ics.Cisa/TASKS.md | DONE (2025-10-16) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-ICSCISA-02-001 … 02-011 | Feed parser attachment fixes, SemVer exact values, regression suites, telemetry/docs updates, and handover complete; ops runbook now details attachment verification + proxy usage. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Vndr.Cisco/TASKS.md | Implementation DONE (2025-10-14) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-CISCO-02-001 … 02-007 | OAuth fetch pipeline, DTO/mapping, tests, and telemetry/docs shipped; monitoring enablement now tracked via follow-up ops tasks (02-006+). |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Vndr.Msrc/TASKS.md | DONE (2025-10-15) | Team Connector Expansion – Regional & Vendor Feeds | FEEDCONN-MSRC-02-001 … 02-008 | Azure AD onboarding (02-008) unblocked fetch/parse/map pipeline; fixtures, telemetry/docs, and Offline Kit guidance published in `docs/ops/feedser-msrc-operations.md`. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Cve/TASKS.md | DONE (2025-10-15) | Team Connector Support & Monitoring | FEEDCONN-CVE-02-001 … 02-002 | CVE data-source selection, fetch pipeline, and docs landed 2025-10-10. 2025-10-15: smoke verified using the seeded mirror fallback; connector now logs a warning and pulls from `seed-data/cve/` until live CVE Services credentials arrive. |
+| Sprint 2 | Connector & Data Implementation Wave | src/StellaOps.Feedser.Source.Kev/TASKS.md | DONE (2025-10-12) | Team Connector Support & Monitoring | FEEDCONN-KEV-02-001 … 02-002 | KEV catalog ingestion, fixtures, telemetry, and schema validation completed 2025-10-12; ops dashboard published. |
| Sprint 2 | Connector & Data Implementation Wave | docs/TASKS.md | DONE (2025-10-11) | Team Docs & Knowledge Base | FEEDDOCS-DOCS-01-001 | Canonical schema docs refresh
Updated canonical schema + provenance guides with SemVer style, normalized version rules, decision reason change log, and migration notes. |
| Sprint 2 | Connector & Data Implementation Wave | docs/TASKS.md | DONE (2025-10-11) | Team Docs & Knowledge Base | FEEDDOCS-DOCS-02-001 | Feedser-SemVer Playbook
Published merge playbook covering mapper patterns, dedupe flow, indexes, and rollout checklist. |
| Sprint 2 | Connector & Data Implementation Wave | docs/TASKS.md | DONE (2025-10-11) | Team Docs & Knowledge Base | FEEDDOCS-DOCS-02-002 | Normalized versions query guide
Delivered Mongo index/query addendum with `$unwind` recipes, dedupe checks, and operational checklist.
Instructions to work:
DONE Read ./AGENTS.md and docs/AGENTS.md. Document every schema/index/query change produced in Sprint 1-2 leveraging ./src/FASTER_MODELING_AND_NORMALIZATION.md. |
@@ -92,4 +97,4 @@
| Sprint 3 | Conflict Resolution Integration & Communications | src/StellaOps.Feedser.Source.Nvd/TASKS.md | DONE (2025-10-12) | Team Connector Regression Fixtures | FEEDCONN-NVD-04-002 | NVD conflict regression fixtures |
| Sprint 3 | Conflict Resolution Integration & Communications | src/StellaOps.Feedser.Source.Osv/TASKS.md | DONE (2025-10-12) | Team Connector Regression Fixtures | FEEDCONN-OSV-04-002 | OSV conflict regression fixtures
Instructions to work:
Read ./AGENTS.md and module AGENTS. Produce fixture triples supporting the precedence/tie-breaker paths defined in ./src/DEDUP_CONFLICTS_RESOLUTION_ALGO.md and hand them to Merge QA. |
| Sprint 3 | Conflict Resolution Integration & Communications | docs/TASKS.md | DONE (2025-10-11) | Team Documentation Guild – Conflict Guidance | FEEDDOCS-DOCS-05-001 | Feedser Conflict Rules
Runbook published at `docs/ops/feedser-conflict-resolution.md`; metrics/log guidance aligned with Sprint 3 merge counters. |
-| Sprint 3 | Conflict Resolution Integration & Communications | docs/TASKS.md | TODO | Team Documentation Guild – Conflict Guidance | FEEDDOCS-DOCS-05-002 | Conflict runbook ops rollout
Instructions to work:
Read ./AGENTS.md and docs/AGENTS.md. Once GHSA/NVD/OSV regression fixtures (FEEDCONN-GHSA-04-002, FEEDCONN-NVD-04-002, FEEDCONN-OSV-04-002) are delivered, schedule the Ops review, apply the alert thresholds captured in `docs/ops/feedser-authority-audit-runbook.md`, and record change-log linkage after sign-off. Use ./src/DEDUP_CONFLICTS_RESOLUTION_ALGO.md for ongoing rule references. |
+| Sprint 3 | Conflict Resolution Integration & Communications | docs/TASKS.md | DONE (2025-10-16) | Team Documentation Guild – Conflict Guidance | FEEDDOCS-DOCS-05-002 | Conflict runbook ops rollout
Ops review completed, alert thresholds applied, and change log appended in `docs/ops/feedser-conflict-resolution.md`; task closed after connector signals verified. |
diff --git a/docs/11_AUTHORITY.md b/docs/11_AUTHORITY.md
index faa6d946..3fad4471 100644
--- a/docs/11_AUTHORITY.md
+++ b/docs/11_AUTHORITY.md
@@ -67,8 +67,9 @@ Authority centralises revocation in `authority_revocations` with deterministic c
**Export surfaces** (deterministic output, suitable for Offline Kit):
- CLI: `stella auth revoke export --output ./out` writes `revocation-bundle.json`, `.jws`, `.sha256`.
+- Verification: `stella auth revoke verify --bundle revocation-bundle.json --signature revocation-bundle.json.jws --key authority-public.pem` validates the digest and detached JWS before distribution (see `docs/security/revocation-bundle.md`).
diff --git a/docs/ops/feedser-icscisa-operations.md b/docs/ops/feedser-icscisa-operations.md
new file mode 100644
--- /dev/null
+++ b/docs/ops/feedser-icscisa-operations.md
+# Feedser ICS-CISA Connector Operations
+
+Operational guidance for the CISA ICS advisories connector (`source:ics-cisa:*`): GovDelivery onboarding, configuration, seeding, validation, and monitoring.
+
+## 2. Validate the Personalised Code
+
+Probe the personalised RSS endpoint, appending the personalised value after `code=` in the URL below:
+
+```bash
+curl -H "User-Agent: StellaOpsFeedser/ics-cisa" \
+ "https://content.govdelivery.com/accounts/USDHSCISA/topics/ICS-CERT/feed.rss?format=xml&code="
+```
+
+If the endpoint returns HTTP 200 and an RSS payload, record the sample response under `docs/artifacts/icscisa/` (see Task `FEEDCONN-ICSCISA-02-007`). HTTP 403 or 406 usually means the subscription was not confirmed or the code was mistyped.
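+
+To archive that evidence for `FEEDCONN-ICSCISA-02-007`, a minimal sketch (the sample file name and layout under `docs/artifacts/icscisa/` are suggestions, not a mandated convention):
+
+```bash
+# Save the first successful RSS pull plus its SHA-256 digest for later reference.
+mkdir -p docs/artifacts/icscisa
+curl -H "User-Agent: StellaOpsFeedser/ics-cisa" \
+  "https://content.govdelivery.com/accounts/USDHSCISA/topics/ICS-CERT/feed.rss?format=xml&code=<personalised-code>" \
+  -o docs/artifacts/icscisa/feed-sample.xml
+sha256sum docs/artifacts/icscisa/feed-sample.xml > docs/artifacts/icscisa/feed-sample.xml.sha256
+```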
+
+## 3. Configuration Snippet
+
+Add the connector configuration to `feedser.yaml` (or equivalent environment variables):
+
+```yaml
+feedser:
+ sources:
+ icscisa:
+ govDelivery:
+ code: "${FEEDSER_ICS_CISA_GOVDELIVERY_CODE}"
+ topics:
+ - "USDHSCISA_16"
+ - "USDHSCISA_19"
+ - "USDHSCISA_17"
+ rssBaseUri: "https://content.govdelivery.com/accounts/USDHSCISA"
+ requestDelay: "00:00:01"
+ failureBackoff: "00:05:00"
+```
+
+Environment variable example:
+
+```bash
+export FEEDSER_SOURCES_ICSCISA_GOVDELIVERY_CODE="AB12CD34EF"
+```
+
+Feedser automatically registers the host with the Source.Common HTTP allow-list when the connector assembly is loaded.
+
+Optional tuning keys (set only when needed):
+
+- `proxyUri` — HTTP/HTTPS proxy URL used when Akamai blocks direct pulls.
+- `requestVersion` / `requestVersionPolicy` — override HTTP negotiation when the proxy requires HTTP/1.1.
+- `enableDetailScrape` — toggle HTML detail fallback (defaults to true).
+- `captureAttachments` — collect PDF attachments from detail pages (defaults to true).
+- `detailBaseUri` — alternate host for detail enrichment if CISA changes their layout.
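+
+Environment-variable form for the most common overrides (the `ENABLEDETAILSCRAPE` name is used in the validation section below; the proxy and attachment names are inferred from the same `FEEDSER_SOURCES_ICSCISA_*` convention, so confirm them against the options binder):
+
+```bash
+# Inferred from the FEEDSER_SOURCES_ICSCISA_* naming convention shown above.
+export FEEDSER_SOURCES_ICSCISA_PROXYURI="http://egress-proxy.internal:3128"   # assumed variable name
+export FEEDSER_SOURCES_ICSCISA_ENABLEDETAILSCRAPE=1
+export FEEDSER_SOURCES_ICSCISA_CAPTUREATTACHMENTS=1                           # assumed variable name
+```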
+
+## 4. Seeding Without GovDelivery
+
+If credentials are still pending, populate the connector with the community CSV dataset before enabling the live fetch:
+
+1. Run `./scripts/fetch-ics-cisa-seed.sh` (or `.ps1`) to download the latest `CISA_ICS_ADV_*.csv` files into `seed-data/ics-cisa/`.
+2. Copy the CSVs (and the generated `.sha256` files) into your Offline Kit staging area so they ship alongside the other feeds.
+3. Import the kit as usual. The connector can parse the seed data for historical context, but **live GovDelivery credentials are still required** for fresh advisories.
+4. Once credentials arrive, update `feedser:sources:icscisa:govDelivery:code` and re-trigger `source:ics-cisa:fetch` so the connector switches to the authorised feed.
+
+> The CSVs are licensed under ODbL 1.0 by the ICS Advisory Project. Preserve the attribution when redistributing them.
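+
+A typical seeding pass from the repository root, assuming a conventional Offline Kit staging path (adjust the destination to your kit layout):
+
+```bash
+# Download the community CSV seed set; the script also writes .sha256 digests.
+./scripts/fetch-ics-cisa-seed.sh
+sha256sum -c seed-data/ics-cisa/*.sha256
+# Stage the verified files for the Offline Kit (destination path is an example).
+mkdir -p /srv/offline-kit/staging/ics-cisa
+cp seed-data/ics-cisa/CISA_ICS_ADV_*.csv seed-data/ics-cisa/*.sha256 /srv/offline-kit/staging/ics-cisa/
+```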
+
+## 5. Integration Validation
+
+1. Ensure secrets are in place and restart the Feedser workers.
+2. Run a dry-run fetch/parse/map chain against an Akamai-protected topic:
+ ```bash
+ FEEDSER_SOURCES_ICSCISA_GOVDELIVERY_CODE=... \
+ FEEDSER_SOURCES_ICSCISA_ENABLEDETAILSCRAPE=1 \
+ stella db jobs run source:ics-cisa:fetch --and-then source:ics-cisa:parse --and-then source:ics-cisa:map
+ ```
+3. Confirm logs contain `ics-cisa detail fetch` entries and that new documents/DTOs include attachments (see `docs/artifacts/icscisa`). Canonical advisories should expose PDF links as `references.kind == "attachment"` and affected packages should surface `primitives.semVer.exactValue` for single-version hits.
+4. If Akamai blocks direct fetches, set `feedser:sources:icscisa:proxyUri` to your allow-listed egress proxy and rerun the dry-run.
+
+## 6. Rotation & Incident Response
+
+- Review GovDelivery access quarterly. Rotate the personalised code whenever Ops changes the service mailbox password or membership.
+- Revoking the subscription in GovDelivery invalidates the code immediately; update the vault and configuration in the same change.
+- If the code leaks, remove the subscription (`https://public.govdelivery.com/accounts/USDHSCISA/subscriber/manage_preferences?code=`), resubscribe, and distribute the new value via the vault.
+
+## 7. Offline Kit Handling
+
+Include the personalised code in `offline-kit/secrets/feedser/icscisa.env`:
+
+```
+FEEDSER_SOURCES_ICSCISA_GOVDELIVERY_CODE=AB12CD34EF
+```
+
+The Offline Kit deployment script copies this file into the container secret directory mounted at `/run/secrets/feedser`. Ensure permissions are `600` and ownership matches the Feedser runtime user.
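+
+A sketch of staging the secret with the expected permissions (the `feedser` account name is an assumption; use whatever user runs the Feedser containers in your deployment):
+
+```bash
+chmod 600 offline-kit/secrets/feedser/icscisa.env
+chown feedser:feedser offline-kit/secrets/feedser/icscisa.env   # runtime account name assumed
+```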
+
+## 8. Telemetry & Monitoring
+
+The connector emits metrics under the meter `StellaOps.Feedser.Source.Ics.Cisa`. They allow operators to track Akamai fallbacks, detail enrichment health, and advisory fan-out.
+
+- `icscisa.fetch.*` – counters for `attempts`, `success`, `failures`, `not_modified`, and `fallbacks`, plus histogram `icscisa.fetch.documents` showing documents added per topic pull (tags: `feedser.source`, `icscisa.topic`).
+- `icscisa.parse.*` – counters for `success`/`failures` and histograms `icscisa.parse.advisories`, `icscisa.parse.attachments`, `icscisa.parse.detail_fetches` to monitor enrichment workload per feed document.
+- `icscisa.detail.*` – counters `success` / `failures` per advisory (tagged with `icscisa.advisory`) to alert when Akamai blocks detail pages.
+- `icscisa.map.*` – counters for `success`/`failures` and histograms `icscisa.map.references`, `icscisa.map.packages`, `icscisa.map.aliases` capturing canonical fan-out.
+
+Suggested alerts:
+
+- `increase(icscisa.fetch.failures_total[15m]) > 0` or `increase(icscisa.fetch.fallbacks_total[15m]) > 5` — sustained Akamai or proxy issues.
+- `increase(icscisa.detail.failures_total[30m]) > 0` — detail enrichment breaking (potential HTML layout change).
+- `histogram_quantile(0.95, rate(icscisa.map.references_bucket[1h]))` trending sharply higher — sudden advisory reference explosion worth investigating.
+- Keep an eye on shared HTTP metrics (`feedser.source.http.*{feedser.source="ics-cisa"}`) for request latency and retry patterns.
+
+## 9. Related Tasks
+
+- `FEEDCONN-ICSCISA-02-009` (GovDelivery credential onboarding) — completed once this runbook is followed and secrets are placed in the vault.
+- `FEEDCONN-ICSCISA-02-007` (document inventory) — archive the first successful RSS response and any attachment URL schema under `docs/artifacts/icscisa/`.
diff --git a/docs/ops/feedser-kisa-operations.md b/docs/ops/feedser-kisa-operations.md
new file mode 100644
index 00000000..d2d25caf
--- /dev/null
+++ b/docs/ops/feedser-kisa-operations.md
@@ -0,0 +1,74 @@
+# Feedser KISA Connector Operations
+
+Operational guidance for the Korea Internet & Security Agency (KISA / KNVD) connector (`source:kisa:*`). Pair this with the engineering brief in `docs/dev/kisa_connector_notes.md`.
+
+## 1. Prerequisites
+
+- Outbound HTTPS (or mirrored cache) for `https://knvd.krcert.or.kr/`.
+- Connector options defined under `feedser:sources:kisa`:
+
+```yaml
+feedser:
+ sources:
+ kisa:
+ feedUri: "https://knvd.krcert.or.kr/rss/securityInfo.do"
+ detailApiUri: "https://knvd.krcert.or.kr/rssDetailData.do"
+ detailPageUri: "https://knvd.krcert.or.kr/detailDos.do"
+ maxAdvisoriesPerFetch: 10
+ requestDelay: "00:00:01"
+ failureBackoff: "00:05:00"
+```
+
+> Ensure the URIs stay absolute—Feedser adds the `feedUri`/`detailApiUri` hosts to the HttpClient allow-list automatically.
+
+## 2. Staging Smoke Test
+
+1. Restart the Feedser workers so the KISA options bind.
+2. Run a full connector cycle:
+ - CLI: `stella db jobs run source:kisa:fetch --and-then source:kisa:parse --and-then source:kisa:map`
+   - REST: `POST /jobs/run { "kind": "source:kisa:fetch", "chain": ["source:kisa:parse", "source:kisa:map"] }` (see the curl sketch after this list)
+3. Confirm telemetry (Meter `StellaOps.Feedser.Source.Kisa`):
+ - `kisa.feed.success`, `kisa.feed.items`
+ - `kisa.detail.success` / `.failures`
+ - `kisa.parse.success` / `.failures`
+ - `kisa.map.success` / `.failures`
+ - `kisa.cursor.updates`
+4. Inspect logs for structured entries:
+ - `KISA feed returned {ItemCount}`
+ - `KISA fetched detail for {Idx} … category={Category}`
+ - `KISA mapped advisory {AdvisoryId} (severity={Severity})`
+ - Absence of warnings such as `document missing GridFS payload`.
+5. Validate MongoDB state:
+ - `raw_documents.metadata` has `kisa.idx`, `kisa.category`, `kisa.title`.
+ - DTO store contains `schemaVersion="kisa.detail.v1"`.
+ - Advisories include aliases (`IDX`, CVE) and `language="ko"`.
+ - `source_states` entry for `kisa` shows recent `cursor.lastFetchAt`.
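+
+The REST trigger from step 2, expressed as a curl sketch (host, port, and any authentication header are deployment-specific placeholders):
+
+```bash
+# Fire the fetch/parse/map chain through the jobs API; adjust the base URL to your Feedser instance.
+curl -X POST "https://feedser.example.internal/jobs/run" \
+  -H "Content-Type: application/json" \
+  -d '{ "kind": "source:kisa:fetch", "chain": ["source:kisa:parse", "source:kisa:map"] }'
+```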
+
+## 3. Production Monitoring
+
+- **Dashboards** – Add the following Prometheus/OTEL expressions:
+ - `rate(kisa_feed_items_total[15m])` versus `rate(feedser_source_http_requests_total{feedser_source="kisa"}[15m])`
+ - `increase(kisa_detail_failures_total{reason!="empty-document"}[1h])` alert at `>0`
+ - `increase(kisa_parse_failures_total[1h])` for storage/JSON issues
+ - `increase(kisa_map_failures_total[1h])` to flag schema drift
+ - `increase(kisa_cursor_updates_total[6h]) == 0` during active windows → warn
+- **Alerts** – Page when `rate(kisa_feed_success_total[2h]) == 0` while other connectors are active; back off for maintenance windows announced on `https://knvd.krcert.or.kr/`.
+- **Logs** – Watch for repeated warnings (`document missing`, `DTO missing`) or errors with reason tags `HttpRequestException`, `download`, `parse`, `map`.
+
+## 4. Localisation Handling
+
+- Hangul categories (for example `취약점정보`, "vulnerability information") flow into telemetry tags (`category=…`) and logs. Dashboards must render UTF‑8 and avoid transliteration.
+- HTML content is sanitised before storage; translation teams can consume the `ContentHtml` field safely.
+- Advisory severity remains as provided by KISA (`High`, `Medium`, etc.). Map-level failures include the severity tag for filtering.
+
+## 5. Fixture & Regression Maintenance
+
+- Regression fixtures: `src/StellaOps.Feedser.Source.Kisa.Tests/Fixtures/kisa-feed.xml` and `kisa-detail.json`.
+- Refresh via `UPDATE_KISA_FIXTURES=1 dotnet test src/StellaOps.Feedser.Source.Kisa.Tests/StellaOps.Feedser.Source.Kisa.Tests.csproj`.
+- The telemetry regression (`KisaConnectorTests.Telemetry_RecordsMetrics`) will fail if counters/log wiring drifts—treat failures as gating.
+
+## 6. Known Issues
+
+- RSS feeds only expose the latest 10 advisories; long outages require replay via archived feeds or manual IDX seeds.
+- Detail endpoint occasionally throttles; the connector honours `requestDelay` and reports failures with reason `HttpRequestException`. Consider increasing delay for weekend backfills.
+- If `kisa.category` tags suddenly appear as `unknown`, verify KISA has not renamed RSS elements; update the parser fixtures before production rollout.
diff --git a/docs/ops/feedser-msrc-operations.md b/docs/ops/feedser-msrc-operations.md
new file mode 100644
index 00000000..828b5a9c
--- /dev/null
+++ b/docs/ops/feedser-msrc-operations.md
@@ -0,0 +1,86 @@
+# Feedser MSRC Connector – Azure AD Onboarding Brief
+
+_Drafted: 2025-10-15_
+
+## 1. App registration requirements
+
+- **Tenant**: shared StellaOps production Azure AD.
+- **Application type**: confidential client (web/API) issuing client credentials.
+- **API permissions**: `api://api.msrc.microsoft.com/.default` (Application). Admin consent required once.
+- **Token audience**: `https://api.msrc.microsoft.com/`.
+- **Grant type**: client credentials. Feedser will request tokens via `POST https://login.microsoftonline.com/{tenantId}/oauth2/v2.0/token`.
+
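+For reference, the client-credentials exchange Feedser performs looks roughly like this (tenant and client values are placeholders; Feedser handles the request internally, so this sketch is only useful for manually validating the app registration and admin consent):
+
+```bash
+# Manual token probe to confirm the client secret and consented scope are valid.
+curl -X POST "https://login.microsoftonline.com/<tenantId>/oauth2/v2.0/token" \
+  -d "grant_type=client_credentials" \
+  -d "client_id=<clientId>" \
+  -d "client_secret=<clientSecret>" \
+  -d "scope=api://api.msrc.microsoft.com/.default"
+```
+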
+## 2. Secret/credential policy
+
+- Maintain two client secrets (primary + standby) rotating every 90 days.
+- Store secrets in the Feedser secrets vault; Offline Kit deployments must mirror the secret payloads in their encrypted store.
+- Record the rotation cadence in the Ops runbook and update the Feedser configuration (`FEEDSER__SOURCES__VNDR__MSRC__CLIENTSECRET`) ahead of expiry.
+
+## 3. Feedser configuration sample
+
+```yaml
+feedser:
+ sources:
+ vndr.msrc:
+ tenantId: ""
+ clientId: ""
+ clientSecret: ""
+ apiVersion: "2024-08-01"
+ locale: "en-US"
+ requestDelay: "00:00:00.250"
+ failureBackoff: "00:05:00"
+ cursorOverlapMinutes: 10
+ downloadCvrf: false # set true to persist CVRF ZIP alongside JSON detail
+```
+
+## 4. CVRF artefacts
+
+- The MSRC REST payload exposes `cvrfUrl` per advisory. Current connector persists the link as advisory metadata and reference; it does **not** download the ZIP by default.
+- Ops should mirror CVRF ZIPs when preparing Offline Kits so air-gapped deployments can reconcile advisories without direct internet access.
+- Once Offline Kit storage guidelines are finalised, extend the connector configuration with `downloadCvrf: true` to enable automatic attachment retrieval.
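+
+A hedged mirroring sketch for Offline Kit preparation (the URL matches the illustrative seed example below; substitute the `cvrfUrl` values reported by the connector and whatever staging path your kit uses):
+
+```bash
+# Mirror one CVRF ZIP next to the Offline Kit artefacts and record its digest.
+mkdir -p offline-kit/msrc/cvrf
+curl -fL "https://download.microsoft.com/msrc/2024/ADV2024-0001.cvrf.zip" \
+  -o offline-kit/msrc/cvrf/ADV2024-0001.cvrf.zip
+sha256sum offline-kit/msrc/cvrf/ADV2024-0001.cvrf.zip > offline-kit/msrc/cvrf/ADV2024-0001.cvrf.zip.sha256
+```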
+
+### 4.1 State seeding helper
+
+Use `tools/SourceStateSeeder` to queue historical advisories (detail JSON + optional CVRF artefacts) for replay without manual Mongo edits. Example seed file:
+
+```json
+{
+ "source": "vndr.msrc",
+ "cursor": {
+ "lastModifiedCursor": "2024-01-01T00:00:00Z"
+ },
+ "documents": [
+ {
+ "uri": "https://api.msrc.microsoft.com/sug/v2.0/vulnerability/ADV2024-0001",
+ "contentFile": "./seeds/adv2024-0001.json",
+ "contentType": "application/json",
+ "metadata": { "msrc.vulnerabilityId": "ADV2024-0001" },
+ "addToPendingDocuments": true
+ },
+ {
+ "uri": "https://download.microsoft.com/msrc/2024/ADV2024-0001.cvrf.zip",
+ "contentFile": "./seeds/adv2024-0001.cvrf.zip",
+ "contentType": "application/zip",
+ "status": "mapped",
+ "addToPendingDocuments": false
+ }
+ ]
+}
+```
+
+Run the helper:
+
+```bash
+dotnet run --project tools/SourceStateSeeder -- \
+ --connection-string "mongodb://localhost:27017" \
+ --database feedser \
+ --input seeds/msrc-backfill.json
+```
+
+Any documents marked `addToPendingDocuments` will appear in the connector cursor; `DownloadCvrf` can remain disabled if the ZIP artefact is pre-seeded.
+
+## 5. Outstanding items
+
+- Ops to confirm tenant/app names and provide client credentials through the secure channel.
+- Connector team monitors token cache health (already implemented); validate instrumentation once Ops supplies credentials.
+- Offline Kit packaging: add encrypted blob containing client credentials with rotation instructions.
diff --git a/docs/ops/feedser-nkcki-operations.md b/docs/ops/feedser-nkcki-operations.md
new file mode 100644
index 00000000..4424c9ee
--- /dev/null
+++ b/docs/ops/feedser-nkcki-operations.md
@@ -0,0 +1,48 @@
+# NKCKI Connector Operations Guide
+
+## Overview
+
+The NKCKI connector ingests JSON bulletin archives from cert.gov.ru, expanding each `*.json.zip` attachment into per-vulnerability DTOs before canonical mapping. The fetch pipeline now supports cache-backed recovery, deterministic pagination, and telemetry suitable for production monitoring.
+
+## Configuration
+
+Key options exposed through `feedser:sources:ru-nkcki:http`:
+
+- `maxBulletinsPerFetch` – limits new bulletin downloads in a single run (default `5`).
+- `maxListingPagesPerFetch` – maximum listing pages visited during pagination (default `3`).
+- `listingCacheDuration` – minimum interval between listing fetches before falling back to cached artefacts (default `00:10:00`).
+- `cacheDirectory` – optional path for persisted bulletin archives used during offline or failure scenarios.
+- `requestDelay` – delay inserted between bulletin downloads to respect upstream politeness.
+
+When operating in offline-first mode, set `cacheDirectory` to a writable path (e.g. `/var/lib/feedser/cache/ru-nkcki`) and pre-populate bulletin archives via the offline kit.
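+
+For example, a minimal preparation sketch (the offline-kit source path and the `feedser` runtime account are assumptions):
+
+```bash
+# Create the cache directory and pre-populate it from the offline kit bundle.
+mkdir -p /var/lib/feedser/cache/ru-nkcki
+cp offline-kit/ru-nkcki/*.json.zip /var/lib/feedser/cache/ru-nkcki/
+chown -R feedser:feedser /var/lib/feedser/cache/ru-nkcki   # runtime account name assumed
+```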
+
+## Telemetry
+
+`RuNkckiDiagnostics` emits the following metrics under meter `StellaOps.Feedser.Source.Ru.Nkcki`:
+
+- `nkcki.listing.fetch.attempts` / `nkcki.listing.fetch.success` / `nkcki.listing.fetch.failures`
+- `nkcki.listing.pages.visited` (histogram, `pages`)
+- `nkcki.listing.attachments.discovered` / `nkcki.listing.attachments.new`
+- `nkcki.bulletin.fetch.success` / `nkcki.bulletin.fetch.cached` / `nkcki.bulletin.fetch.failures`
+- `nkcki.entries.processed` (histogram, `entries`)
+
+Integrate these counters into standard Feedser observability dashboards to track crawl coverage and cache hit rates.
+
+## Archive Backfill Strategy
+
+Bitrix pagination surfaces archives via `?PAGEN_1=n`. The connector now walks up to `maxListingPagesPerFetch` pages, deduplicating bulletin IDs and maintaining a rolling `knownBulletins` window. Backfill strategy:
+
+1. Enumerate pages from newest to oldest, respecting `maxListingPagesPerFetch` and `listingCacheDuration` to avoid refetch storms.
+2. Persist every `*.json.zip` attachment to the configured cache directory. This enables replay when listing access is temporarily blocked.
+3. During archive replay, `ProcessCachedBulletinsAsync` enqueues missing documents while respecting `maxVulnerabilitiesPerFetch`.
+4. For historical HTML-only advisories, collect page URLs and metadata while offline (future work: HTML and PDF extraction pipeline documented in `docs/feedser-connector-research-20251011.md`).
+
+For large migrations, seed caches with archived zip bundles, then run fetch/parse/map cycles in chronological order to maintain deterministic outputs.
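+
+A chronological replay sketch using the standard job chain (the `source:ru-nkcki:*` job names are assumed to follow the same pattern as the other connectors; verify the exact kinds against your deployment before scripting):
+
+```bash
+# Run the chain repeatedly until the cached backlog is drained, oldest bulletins first.
+stella db jobs run source:ru-nkcki:fetch --and-then source:ru-nkcki:parse --and-then source:ru-nkcki:map
+```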
+
+## Failure Handling
+
+- Listing failures mark the source state with exponential backoff while attempting cache replay.
+- Bulletin fetches fall back to cached copies before surfacing an error.
+- Mongo integration tests rely on bundled OpenSSL 1.1 libraries (`tools/openssl/linux-x64`) to keep `Mongo2Go` operational on modern distros.
+
+Refer to `ru-nkcki` entries in `src/StellaOps.Feedser.Source.Ru.Nkcki/TASKS.md` for outstanding items.
diff --git a/docs/security/audit-events.md b/docs/security/audit-events.md
index 5fcb16a9..468c7ef5 100644
--- a/docs/security/audit-events.md
+++ b/docs/security/audit-events.md
@@ -15,7 +15,7 @@ Audit events share the `StellaOps.Cryptography.Audit.AuthEventRecord` contract.
- `Client` — `AuthEventClient` with client identifier, display name, and originating provider/plugin.
- `Scopes` — granted or requested OAuth scopes (sorted before emission).
- `Network` — `AuthEventNetwork` with remote address, forwarded headers, and user agent string (all treated as PII).
-- `Properties` — additional `AuthEventProperty` entries for context-specific details (lockout durations, policy decisions, retries, etc.).
+- `Properties` — additional `AuthEventProperty` entries for context-specific details (lockout durations, policy decisions, retries, `request.tampered`/`request.unexpected_parameter`, `bootstrap.invite_token`, etc.).
## Data Classifications
@@ -33,7 +33,13 @@ Event names follow dotted notation:
- `authority.password.grant` — password grant handled by OpenIddict.
- `authority.client_credentials.grant` — client credential grant handling.
+- `authority.token.tamper` — suspicious `/token` request detected (unexpected parameters or manipulated payload).
- `authority.bootstrap.user` and `authority.bootstrap.client` — bootstrap API operations.
+- `authority.bootstrap.invite.created` — operator created a bootstrap invite.
+- `authority.bootstrap.invite.consumed` — invite consumed during user/client provisioning.
+- `authority.bootstrap.invite.expired` — invite expired without being used.
+- `authority.bootstrap.invite.rejected` — invite was rejected (invalid, mismatched provider/target, or already consumed).
+- `authority.token.replay.suspected` — replay heuristics detected a token being used from a new device fingerprint.
- Future additions should preserve the `authority..` pattern to keep filtering deterministic.
## Persistence
diff --git a/docs/security/authority-threat-model.md b/docs/security/authority-threat-model.md
index 9cbba654..f9ad5c99 100644
--- a/docs/security/authority-threat-model.md
+++ b/docs/security/authority-threat-model.md
@@ -82,9 +82,9 @@ flowchart LR
| Threat | STRIDE Vector | Surface | Risk (L×I) | Existing Controls | Gaps / Actions | Owner |
|--------|---------------|---------|------------|-------------------|----------------|-------|
| Spoofed revocation bundle | Spoofing | TB5 — Authority ↔️ Agents | Med×High | Detached JWS signature (planned), offline kit checksums | Finalise signing key registry & verification script (SEC4.B/SEC4.HOST); add bundle freshness requirement | Security Guild (follow-up: **SEC5.B**) |
-| Parameter tampering on `/token` | Tampering | TB1 — Public ingress | Med×High | ASP.NET model validation, OpenIddict, rate limiter (CORE8.RL) | Add audit coverage for tampered inputs, align correlation IDs with SOC (SEC2.A/SEC2.B) | Security Guild + Authority Core (follow-up: **SEC5.C**) |
-| Bootstrap invite replay | Repudiation | TB4 — Operator CLI ↔️ Authority | Low×High | One-time bootstrap tokens, Argon2id hashing on creation | Enforce invite expiration + audit trail for unused invites | Security Guild (follow-up: **SEC5.D**) |
-| Token replay by stolen agent | Information Disclosure | TB5 | Med×High | Planned revocation bundles, optional mTLS | Require agent binding (device fingerprint) and enforce revocation grace window alerts | Security Guild + Zastava (follow-up: **SEC5.E**) |
+| Parameter tampering on `/token` | Tampering | TB1 — Public ingress | Med×High | ASP.NET model validation, OpenIddict, rate limiter (CORE8.RL) | Tampered requests emit `authority.token.tamper` audit events (`request.tampered`, unexpected parameter names) correlating with `/token` outcomes (SEC5.C) | Security Guild + Authority Core (follow-up: **SEC5.C**) |
+| Bootstrap invite replay | Repudiation | TB4 — Operator CLI ↔️ Authority | Low×High | One-time bootstrap tokens, Argon2id hashing on creation | Invites expire automatically and emit audit events on consumption/expiration (SEC5.D) | Security Guild |
+| Token replay by stolen agent | Information Disclosure | TB5 | Med×High | Signed revocation bundles, device fingerprint heuristics, optional mTLS | Monitor revocation acknowledgement latency via Zastava and tune replay alerting thresholds | Security Guild + Zastava (follow-up: **SEC5.E**) |
| Privilege escalation via plug-in override | Elevation of Privilege | TB3 — Plug-in sandbox | Med×High | Signed plug-ins, restart-only loading, configuration validation | Add static analysis on manifest overrides + runtime warning when policy weaker than host | Security Guild + DevOps (follow-up: **SEC5.F**) |
| Offline bundle tampering | Tampering | Distribution | Low×High | SHA256 manifest, signed bundles (planned) | Add supply-chain attestation for Offline Kit, publish verification CLI in docs | Security Guild + Ops (follow-up: **SEC5.G**) |
| Failure to log denied tokens | Repudiation | TB2 — Authority ↔️ Mongo | Med×Med | Serilog structured events (partial), Mongo persistence path (planned) | Finalise audit schema (SEC2.A) and ensure `/token` denies include subject/client/IP fields | Security Guild + Authority Core (follow-up: **SEC5.H**) |
@@ -98,7 +98,7 @@ Risk scoring uses qualitative scale (Low/Med/High) for likelihood × impact; mit
| SEC5.B | Spoofed revocation bundle | Complete libsodium/Core signing integration and ship revocation verification script. | Security Guild + Authority Core |
| SEC5.C | Parameter tampering on `/token` | Finalise audit contract (`SEC2.A`) and add request tamper logging. | Security Guild + Authority Core |
| SEC5.D | Bootstrap invite replay | Implement expiry enforcement + audit coverage for unused bootstrap invites. | Security Guild |
-| SEC5.E | Token replay by stolen agent | Document device binding requirements and create detector for stale revocation acknowledgements. | Security Guild + Zastava |
+| SEC5.E | Token replay by stolen agent | Coordinate Zastava alerting with the new device fingerprint heuristics and surface stale revocation acknowledgements. | Security Guild + Zastava |
| SEC5.F | Plug-in override escalation | Static analysis of plug-in manifests; warn on weaker password policy overrides. | Security Guild + DevOps |
| SEC5.G | Offline bundle tampering | Extend Offline Kit build to include attested manifest + verification CLI sample. | Security Guild + Ops |
| SEC5.H | Failure to log denied tokens | Ensure audit persistence for all `/token` denials with correlation IDs. | Security Guild + Authority Core |
diff --git a/docs/security/rate-limits.md b/docs/security/rate-limits.md
new file mode 100644
index 00000000..a6f561dc
--- /dev/null
+++ b/docs/security/rate-limits.md
@@ -0,0 +1,76 @@
+# StellaOps Authority Rate Limit Guidance
+
+StellaOps Authority applies fixed-window rate limiting to critical endpoints so that brute-force and burst traffic are throttled before they can exhaust downstream resources. This guide complements the lockout policy documentation and captures the recommended defaults, override scenarios, and monitoring practices for `/token`, `/authorize`, and `/internal/*` routes.
+
+## Configuration Overview
+
+Rate limits live under `security.rateLimiting` in `authority.yaml` (and map to the same hierarchy for environment variables). Each endpoint exposes:
+
+- `enabled` — toggles the limiter.
+- `permitLimit` — maximum requests per fixed window.
+- `window` — window duration expressed as an ISO-8601 timespan (e.g., `00:01:00`).
+- `queueLimit` — number of requests allowed to queue when the window is exhausted.
+
+```yaml
+security:
+ rateLimiting:
+ token:
+ enabled: true
+ permitLimit: 30
+ window: 00:01:00
+ queueLimit: 0
+ authorize:
+ enabled: true
+ permitLimit: 60
+ window: 00:01:00
+ queueLimit: 10
+ internal:
+ enabled: false
+ permitLimit: 5
+ window: 00:01:00
+ queueLimit: 0
+```
+
+When limits trigger, middleware decorates responses with `Retry-After` headers and log tags (`authority.endpoint`, `authority.client_id`, `authority.remote_ip`) so operators can correlate events with clients and source IPs.
+
+Environment overrides follow the same hierarchy. For example:
+
+```
+STELLAOPS_AUTHORITY__SECURITY__RATELIMITING__TOKEN__PERMITLIMIT=60
+STELLAOPS_AUTHORITY__SECURITY__RATELIMITING__TOKEN__WINDOW=00:01:00
+```
+
+## Recommended Profiles
+
+| Scenario | permitLimit | window | queueLimit | Notes |
+|----------|-------------|--------|------------|-------|
+| Default production | 30 | 60s | 0 | Balances anonymous quota (33 scans/day) with headroom for tenant bursts. |
+| High-trust clustered IPs | 60 | 60s | 5 | Requires WAF allowlist + alert `aspnetcore_rate_limiting_rejections_total{limiter="authority-token"} <= 1%` sustained. |
+| Air-gapped lab | 10 | 120s | 0 | Lower concurrency reduces noise when running from shared bastion hosts. |
+| Incident lockdown | 5 | 300s | 0 | Pair with credential lockout limit of 3 attempts and SOC paging for each denial. |
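+
+For example, the incident-lockdown row can be applied through environment overrides without editing `authority.yaml` (the `QUEUELIMIT` variable name is inferred from the same key hierarchy as the documented `PERMITLIMIT`/`WINDOW` overrides):
+
+```bash
+export STELLAOPS_AUTHORITY__SECURITY__RATELIMITING__TOKEN__PERMITLIMIT=5
+export STELLAOPS_AUTHORITY__SECURITY__RATELIMITING__TOKEN__WINDOW=00:05:00
+export STELLAOPS_AUTHORITY__SECURITY__RATELIMITING__TOKEN__QUEUELIMIT=0   # name inferred from the key hierarchy
+```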
+
+### Lockout Interplay
+
+- Rate limiting throttles by IP/client; lockout policies apply per subject. Keep both enabled.
+- During lockdown scenarios, reduce `security.lockout.maxFailures` alongside the rate limits above so that subjects face quicker escalation.
+- Map support playbooks to the observed `Retry-After` value: anything above 120 seconds should trigger manual investigation before re-enabling clients.
+
+## Monitoring and Alerts
+
+1. **Metrics**
+ - `aspnetcore_rate_limiting_rejections_total{limiter="authority-token"}` for `/token`.
+ - `aspnetcore_rate_limiting_rejections_total{limiter="authority-authorize"}` for `/authorize`.
+ - Custom counters derived from the structured log tags (`authority.remote_ip`, `authority.client_id`).
+2. **Dashboards**
+ - Requests vs. rejections per endpoint.
+ - Top offending clients/IP ranges in the current window.
+ - Heatmap of retry-after durations to spot persistent throttling.
+3. **Alerts**
+ - Notify SOC when 429 rates exceed 25 % for five consecutive minutes on any limiter.
+ - Trigger client-specific alerts when a single client_id produces >100 throttle events/hour.
+
+## Operational Checklist
+
+- Validate updated limits in staging before production rollout; smoke-test with representative workload.
+- When raising limits, confirm audit events continue to capture `authority.client_id`, `authority.remote_ip`, and correlation IDs for throttle responses.
+- Document any overrides in the change log, including justification and expiry review date.
diff --git a/docs/security/revocation-bundle.md b/docs/security/revocation-bundle.md
index 417a0d8d..657c10e0 100644
--- a/docs/security/revocation-bundle.md
+++ b/docs/security/revocation-bundle.md
@@ -43,6 +43,7 @@ Consumers MUST treat the combination of `schemaVersion` and `sequence` as a mono
{
"alg": "ES256",
"kid": "{signingKeyId}",
+ "provider": "{providerName}",
"typ": "application/vnd.stellaops.revocation-bundle+jws",
"b64": false,
"crit": ["b64"]
@@ -54,8 +55,28 @@ Verification steps:
1. Validate `revocation-bundle.json` against the schema.
2. Re-compute SHA-256 and compare with `.sha256` (if present).
-3. Resolve the signing key from JWKS (`/.well-known/jwks.json`) or the offline key bundle.
-4. Verify the detached JWS using the stored signing key (example tooling coming with `stella auth revoke verify`).
+3. Resolve the signing key from JWKS (`/.well-known/jwks.json`) or the offline key bundle, preferring the provider declared in the JWS header (`provider` falls back to `default`).
+4. Verify the detached JWS using the resolved provider. The CLI mirrors Authority resolution, so builds compiled with `StellaOpsCryptoSodium=true` automatically use the libsodium provider when advertised; otherwise verification downgrades to the managed fallback.
+
+### CLI verification workflow
+
+Use the bundled CLI command before distributing a bundle:
+
+```bash
+stellaops auth revoke verify \
+ --bundle artifacts/revocation-bundle.json \
+ --signature artifacts/revocation-bundle.json.jws \
+ --key etc/authority/signing/authority-public.pem \
+ --verbose
+```
+
+The verifier performs three checks:
+
+1. Prints the computed digest in `sha256:` format. Compare it with the exported `.sha256` artefact.
+2. Confirms the detached JWS header advertises `b64: false`, captures the provider hint, and that the algorithm matches the Authority configuration (ES256 unless overridden).
+3. Registers the supplied PEM key with the crypto provider registry and validates the signature (falling back to the managed provider when the hinted provider is unavailable).
+
+A zero exit code means the bundle is ready for mirroring/import. Non-zero codes signal missing arguments, malformed JWS payloads, or signature mismatches; regenerate or re-sign the bundle before distribution.
## Example
@@ -64,7 +85,7 @@ The repository contains an [example bundle](revocation-bundle-example.json) demo
## Operations Quick Reference
- `stella auth revoke export` emits a canonical JSON bundle, `.sha256` digest, and detached JWS signature in one command. Use `--output` to write into your mirror staging directory.
-- `stella auth revoke verify` validates a bundle using cached JWKS or an offline PEM key and reports digest mismatches before distribution.
+- `stella auth revoke verify` validates a bundle using cached JWKS or an offline PEM key, honours the `provider` metadata embedded in the signature, and reports digest mismatches before distribution.
- `POST /internal/revocations/export` provides the same payload for orchestrators that already talk to the bootstrap API.
- `POST /internal/signing/rotate` rotates JWKS material without downtime; always export a fresh bundle afterward so downstream mirrors receive signatures from the new `kid`.
- Offline Kit automation should mirror `revocation-bundle.json*` alongside Feedser exports so agents ingest revocations during the same sync pass.
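+
+Putting the pieces together, a typical pre-distribution pass looks like the sketch below. Output paths and the mirror directory are illustrative; the command names follow this quick reference and the verify flags mirror the CLI workflow shown earlier.
+
+```bash
+# Export, verify, then stage the bundle for mirroring (paths are placeholders).
+stella auth revoke export --output artifacts/
+stella auth revoke verify \
+  --bundle artifacts/revocation-bundle.json \
+  --signature artifacts/revocation-bundle.json.jws \
+  --key etc/authority/signing/authority-public.pem
+cp artifacts/revocation-bundle.json* /srv/mirror/staging/
+```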
diff --git a/etc/feedser.yaml.sample b/etc/feedser.yaml.sample
index 25038d57..a36cdd1f 100644
--- a/etc/feedser.yaml.sample
+++ b/etc/feedser.yaml.sample
@@ -83,3 +83,15 @@ sources:
failureBackoff: "00:05:00"
rateLimitWarningThreshold: 500
secondaryRateLimitBackoff: "00:02:00"
+ cve:
+ baseEndpoint: "https://cveawg.mitre.org/api/"
+ apiOrg: ""
+ apiUser: ""
+ apiKey: ""
+ # Optional mirror used when credentials are unavailable.
+ seedDirectory: "./seed-data/cve"
+ pageSize: 200
+ maxPagesPerFetch: 5
+ initialBackfill: "30.00:00:00"
+ requestDelay: "00:00:00.250"
+ failureBackoff: "00:10:00"
diff --git a/scripts/fetch-ics-cisa-seed.ps1 b/scripts/fetch-ics-cisa-seed.ps1
new file mode 100644
index 00000000..1f9e7acb
--- /dev/null
+++ b/scripts/fetch-ics-cisa-seed.ps1
@@ -0,0 +1,38 @@
+param(
+ [string]$Destination = "$(Join-Path (Split-Path -Parent $PSCommandPath) '..' | Resolve-Path)/seed-data/ics-cisa"
+)
+
+$ErrorActionPreference = 'Stop'
+New-Item -Path $Destination -ItemType Directory -Force | Out-Null
+
+Function Write-Info($Message) { Write-Host "[ics-seed] $Message" }
+Function Write-ErrorLine($Message) { Write-Host "[ics-seed][error] $Message" -ForegroundColor Red }
+
+Function Download-File($Url, $Path) {
+ Write-Info "Downloading $(Split-Path $Path -Leaf)"
+ Invoke-WebRequest -Uri $Url -OutFile $Path -UseBasicParsing
+ $hash = Get-FileHash -Path $Path -Algorithm SHA256
+ $hash.Hash | Out-File -FilePath "$Path.sha256" -Encoding ascii
+}
+
+$base = 'https://raw.githubusercontent.com/icsadvprj/ICS-Advisory-Project/main/ICS-CERT_ADV'
+$master = 'CISA_ICS_ADV_Master.csv'
+$snapshot = 'CISA_ICS_ADV_2025_10_09.csv'
+
+Write-Info 'Fetching ICS advisories seed data (ODbL v1.0)'
+Download-File "$base/$master" (Join-Path $Destination $master)
+Download-File "$base/$snapshot" (Join-Path $Destination $snapshot)
+
+$medicalUrl = 'https://raw.githubusercontent.com/batarr22/ICSMA_CSV/main/ICSMA_CSV_4-20-2023.xlsx'
+$medicalFile = 'ICSMA_CSV_4-20-2023.xlsx'
+Write-Info 'Fetching community ICSMA snapshot'
+try {
+ Download-File $medicalUrl (Join-Path $Destination $medicalFile)
+}
+catch {
+ Write-ErrorLine "Unable to download $medicalFile (optional): $_"
+ Remove-Item (Join-Path $Destination $medicalFile) -ErrorAction SilentlyContinue
+}
+
+Write-Info "Seed data ready in $Destination"
+Write-Info 'Remember: data is licensed under ODbL v1.0 (see seed README).'
diff --git a/scripts/fetch-ics-cisa-seed.sh b/scripts/fetch-ics-cisa-seed.sh
new file mode 100644
index 00000000..7cf7150f
--- /dev/null
+++ b/scripts/fetch-ics-cisa-seed.sh
@@ -0,0 +1,38 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
+DEST_DIR="${1:-$ROOT_DIR/seed-data/ics-cisa}"
+mkdir -p "$DEST_DIR"
+
+info() { printf "[ics-seed] %s\n" "$*"; }
+error() { printf "[ics-seed][error] %s\n" "$*" >&2; }
+
+download() {
+ local url="$1"
+ local target="$2"
+ info "Downloading $(basename "$target")"
+ curl -fL "$url" -o "$target"
+ sha256sum "$target" > "$target.sha256"
+}
+
+BASE="https://raw.githubusercontent.com/icsadvprj/ICS-Advisory-Project/main/ICS-CERT_ADV"
+MASTER_FILE="CISA_ICS_ADV_Master.csv"
+SNAPSHOT_2025="CISA_ICS_ADV_2025_10_09.csv"
+
+info "Fetching ICS advisories seed data (ODbL v1.0)"
+download "$BASE/$MASTER_FILE" "$DEST_DIR/$MASTER_FILE"
+download "$BASE/$SNAPSHOT_2025" "$DEST_DIR/$SNAPSHOT_2025"
+
+MEDICAL_URL="https://raw.githubusercontent.com/batarr22/ICSMA_CSV/main/ICSMA_CSV_4-20-2023.xlsx"
+MEDICAL_FILE="ICSMA_CSV_4-20-2023.xlsx"
+info "Fetching community ICSMA snapshot"
+if curl -fL "$MEDICAL_URL" -o "$DEST_DIR/$MEDICAL_FILE"; then
+ sha256sum "$DEST_DIR/$MEDICAL_FILE" > "$DEST_DIR/$MEDICAL_FILE.sha256"
+else
+ error "Unable to download $MEDICAL_FILE (optional)."
+ rm -f "$DEST_DIR/$MEDICAL_FILE"
+fi
+
+info "Seed data ready in $DEST_DIR"
+info "Remember: data is licensed under ODbL v1.0 (see seed README)."
diff --git a/seed-data/cve/2025-10-15/CVE-2024-0001.json b/seed-data/cve/2025-10-15/CVE-2024-0001.json
new file mode 100644
index 00000000..b9b89bfc
--- /dev/null
+++ b/seed-data/cve/2025-10-15/CVE-2024-0001.json
@@ -0,0 +1,72 @@
+{
+ "dataType": "CVE_RECORD",
+ "dataVersion": "5.0",
+ "cveMetadata": {
+ "cveId": "CVE-2024-0001",
+ "assignerShortName": "ExampleOrg",
+ "state": "PUBLISHED",
+ "dateReserved": "2024-01-01T00:00:00Z",
+ "datePublished": "2024-09-10T12:00:00Z",
+ "dateUpdated": "2024-09-15T12:00:00Z"
+ },
+ "containers": {
+ "cna": {
+ "title": "Example Product Remote Code Execution",
+ "descriptions": [
+ {
+ "lang": "en",
+ "value": "An example vulnerability allowing remote attackers to execute arbitrary code."
+ }
+ ],
+ "affected": [
+ {
+ "vendor": "ExampleVendor",
+ "product": "ExampleProduct",
+ "platform": "linux",
+ "defaultStatus": "affected",
+ "versions": [
+ {
+ "status": "affected",
+ "version": "1.0.0",
+ "lessThan": "1.2.0",
+ "versionType": "semver"
+ },
+ {
+ "status": "unaffected",
+ "version": "1.2.0",
+ "versionType": "semver"
+ }
+ ]
+ }
+ ],
+ "references": [
+ {
+ "url": "https://example.com/security/advisory",
+ "name": "Vendor Advisory",
+ "tags": [
+ "vendor-advisory"
+ ]
+ },
+ {
+ "url": "https://cve.example.com/CVE-2024-0001",
+ "tags": [
+ "third-party-advisory"
+ ]
+ }
+ ],
+ "metrics": [
+ {
+ "cvssV3_1": {
+ "version": "3.1",
+ "vectorString": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H",
+ "baseScore": 9.8,
+ "baseSeverity": "CRITICAL"
+ }
+ }
+ ],
+ "aliases": [
+ "GHSA-xxxx-yyyy-zzzz"
+ ]
+ }
+ }
+}
diff --git a/seed-data/cve/2025-10-15/CVE-2024-4567.json b/seed-data/cve/2025-10-15/CVE-2024-4567.json
new file mode 100644
index 00000000..65805afa
--- /dev/null
+++ b/seed-data/cve/2025-10-15/CVE-2024-4567.json
@@ -0,0 +1,147 @@
+{
+ "dataType": "CVE_RECORD",
+ "dataVersion": "5.1",
+ "cveMetadata": {
+ "cveId": "CVE-2024-4567",
+ "assignerOrgId": "b15e7b5b-3da4-40ae-a43c-f7aa60e62599",
+ "state": "PUBLISHED",
+ "assignerShortName": "Wordfence",
+ "dateReserved": "2024-05-06T19:34:14.071Z",
+ "datePublished": "2024-05-09T20:03:38.213Z",
+ "dateUpdated": "2024-08-01T20:47:40.724Z"
+ },
+ "containers": {
+ "cna": {
+ "providerMetadata": {
+ "orgId": "b15e7b5b-3da4-40ae-a43c-f7aa60e62599",
+ "shortName": "Wordfence",
+ "dateUpdated": "2024-05-09T20:03:38.213Z"
+ },
+ "affected": [
+ {
+ "vendor": "themifyme",
+ "product": "Themify Shortcodes",
+ "versions": [
+ {
+ "version": "*",
+ "status": "affected",
+ "lessThanOrEqual": "2.0.9",
+ "versionType": "semver"
+ }
+ ],
+ "defaultStatus": "unaffected"
+ }
+ ],
+ "descriptions": [
+ {
+ "lang": "en",
+ "value": "The Themify Shortcodes plugin for WordPress is vulnerable to Stored Cross-Site Scripting via the plugin's themify_button shortcode in all versions up to, and including, 2.0.9 due to insufficient input sanitization and output escaping on user supplied attributes. This makes it possible for authenticated attackers, with contributor-level access and above, to inject arbitrary web scripts in pages that will execute whenever a user accesses an injected page."
+ }
+ ],
+ "title": "Themify Shortcodes <= 2.0.9 - Authenticated (Contributor+) Stored Cross-Site Scripting via themify_button Shortcode",
+ "references": [
+ {
+ "url": "https://www.wordfence.com/threat-intel/vulnerabilities/id/c63ff9d7-6a14-4186-8550-4e5c50855e7f?source=cve"
+ },
+ {
+ "url": "https://plugins.trac.wordpress.org/changeset/3082885/themify-shortcodes"
+ }
+ ],
+ "problemTypes": [
+ {
+ "descriptions": [
+ {
+ "lang": "en",
+ "description": "CWE-79 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting')"
+ }
+ ]
+ }
+ ],
+ "metrics": [
+ {
+ "cvssV3_1": {
+ "version": "3.1",
+ "vectorString": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:L/I:L/A:N",
+ "baseScore": 6.4,
+ "baseSeverity": "MEDIUM"
+ }
+ }
+ ],
+ "credits": [
+ {
+ "lang": "en",
+ "type": "finder",
+ "value": "Francesco Carlucci"
+ }
+ ],
+ "timeline": [
+ {
+ "time": "2024-05-06T00:00:00.000+00:00",
+ "lang": "en",
+ "value": "Vendor Notified"
+ },
+ {
+ "time": "2024-05-08T00:00:00.000+00:00",
+ "lang": "en",
+ "value": "Disclosed"
+ }
+ ]
+ },
+ "adp": [
+ {
+ "title": "CISA ADP Vulnrichment",
+ "metrics": [
+ {
+ "other": {
+ "type": "ssvc",
+ "content": {
+ "id": "CVE-2024-4567",
+ "role": "CISA Coordinator",
+ "options": [
+ {
+ "Exploitation": "none"
+ },
+ {
+ "Automatable": "no"
+ },
+ {
+ "Technical Impact": "partial"
+ }
+ ],
+ "version": "2.0.3",
+ "timestamp": "2024-05-11T16:56:12.695905Z"
+ }
+ }
+ }
+ ],
+ "providerMetadata": {
+ "orgId": "134c704f-9b21-4f2e-91b3-4a467353bcc0",
+ "shortName": "CISA-ADP",
+ "dateUpdated": "2024-06-04T17:54:44.162Z"
+ }
+ },
+ {
+ "providerMetadata": {
+ "orgId": "af854a3a-2127-422b-91ae-364da2661108",
+ "shortName": "CVE",
+ "dateUpdated": "2024-08-01T20:47:40.724Z"
+ },
+ "title": "CVE Program Container",
+ "references": [
+ {
+ "url": "https://www.wordfence.com/threat-intel/vulnerabilities/id/c63ff9d7-6a14-4186-8550-4e5c50855e7f?source=cve",
+ "tags": [
+ "x_transferred"
+ ]
+ },
+ {
+ "url": "https://plugins.trac.wordpress.org/changeset/3082885/themify-shortcodes",
+ "tags": [
+ "x_transferred"
+ ]
+ }
+ ]
+ }
+ ]
+ }
+}
diff --git a/seed-data/ics-cisa/README.md b/seed-data/ics-cisa/README.md
new file mode 100644
index 00000000..1c314a11
--- /dev/null
+++ b/seed-data/ics-cisa/README.md
@@ -0,0 +1,19 @@
+# CISA ICS Advisory Seed Data
+
+This directory is reserved for **seed data** sourced from the community-maintained [ICS Advisory Project](https://github.com/icsadvprj/ICS-Advisory-Project). The project republishes CISA ICS advisories under the **Open Database License (ODbL) v1.0**. StellaOps uses these CSV snapshots to bootstrap offline environments before the official GovDelivery credentials arrive.
+
+> ⚠️ **Licence notice** – By downloading and using the CSV files you agree to the ODbL requirements (attribution, share-alike, and notice preservation). See the project's [licence file](https://github.com/icsadvprj/ICS-Advisory-Project/blob/main/LICENSE.md) for the full text.
+
+## Usage
+
+1. Run `scripts/fetch-ics-cisa-seed.sh` (or the PowerShell variant) to download the latest snapshots into this directory.
+2. The files are ignored by Git to avoid committing third-party data; include them explicitly when building an Offline Update Kit.
+3. When you later switch to live GovDelivery ingestion, keep the CSVs around as historical fixtures—do **not** treat them as an authoritative source once the live connector is enabled.
+
+### Suggested Artefacts
+
+- `CISA_ICS_ADV_Master.csv` – cumulative advisory dataset (2010 → present)
+- `CISA_ICS_ADV_<YYYY_MM_DD>.csv` – point-in-time snapshots
+- `ICSMA_CSV_<M-D-YYYY>.xlsx` – medical device advisories (optional, sourced from the community mirror)
+
+Keep the generated SHA-256 files alongside the CSVs so Offline Kit packaging can verify integrity.
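+
+A quick integrity pass before packaging can reuse those digests. The loop below is a sketch that only compares the leading hash field, so it works with digests written by either fetch script; hashes are lower-cased before comparison because the PowerShell variant records uppercase hex.
+
+```bash
+# Verify every downloaded artefact against its recorded digest.
+cd seed-data/ics-cisa
+for f in *.sha256; do
+  recorded=$(awk '{print tolower($1)}' "$f")
+  actual=$(sha256sum "${f%.sha256}" | awk '{print $1}')
+  [ "$recorded" = "$actual" ] && echo "OK  ${f%.sha256}" || echo "MISMATCH  ${f%.sha256}"
+done
+```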
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardClientProvisioningStoreTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardClientProvisioningStoreTests.cs
index 125d2c17..a0fb6f8d 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardClientProvisioningStoreTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardClientProvisioningStoreTests.cs
@@ -15,7 +15,8 @@ public class StandardClientProvisioningStoreTests
public async Task CreateOrUpdateAsync_HashesSecretAndPersistsDocument()
{
var store = new TrackingClientStore();
- var provisioning = new StandardClientProvisioningStore("standard", store);
+ var revocations = new TrackingRevocationStore();
+ var provisioning = new StandardClientProvisioningStore("standard", store, revocations, TimeProvider.System);
var registration = new AuthorityClientRegistration(
clientId: "bootstrap-client",
@@ -63,4 +64,21 @@ public class StandardClientProvisioningStoreTests
return ValueTask.FromResult(removed);
}
}
+
+ private sealed class TrackingRevocationStore : IAuthorityRevocationStore
+ {
+ public List<AuthorityRevocationDocument> Upserts { get; } = new();
+
+ public ValueTask UpsertAsync(AuthorityRevocationDocument document, CancellationToken cancellationToken)
+ {
+ Upserts.Add(document);
+ return ValueTask.CompletedTask;
+ }
+
+ public ValueTask<bool> RemoveAsync(string category, string revocationId, CancellationToken cancellationToken)
+ => ValueTask.FromResult(true);
+
+ public ValueTask<IReadOnlyList<AuthorityRevocationDocument>> GetActiveAsync(DateTimeOffset asOf, CancellationToken cancellationToken)
+ => ValueTask.FromResult<IReadOnlyList<AuthorityRevocationDocument>>(Array.Empty<AuthorityRevocationDocument>());
+ }
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardPluginRegistrarTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardPluginRegistrarTests.cs
index ef7c0f9e..5c95104c 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardPluginRegistrarTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard.Tests/StandardPluginRegistrarTests.cs
@@ -5,6 +5,7 @@ using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
+using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;
using Mongo2Go;
@@ -58,6 +59,21 @@ public class StandardPluginRegistrarTests
services.AddLogging();
services.AddSingleton(database);
services.AddSingleton(new InMemoryClientStore());
+ services.AddSingleton<IAuthorityRevocationStore>(new StubRevocationStore());
+ services.AddSingleton(TimeProvider.System);
var registrar = new StandardPluginRegistrar();
registrar.Register(new AuthorityPluginRegistrationContext(services, pluginContext, configuration));
@@ -83,6 +99,53 @@ public class StandardPluginRegistrarTests
Assert.True(verification.User?.RequiresPasswordReset);
}
+ [Fact]
+ public void Register_LogsWarning_WhenPasswordPolicyWeaker()
+ {
+ using var runner = MongoDbRunner.Start(singleNodeReplSet: true);
+ var client = new MongoClient(runner.ConnectionString);
+ var database = client.GetDatabase("registrar-password-policy");
+
+ var configuration = new ConfigurationBuilder()
+ .AddInMemoryCollection(new Dictionary<string, string?>
+ {
+ ["passwordPolicy:minimumLength"] = "6",
+ ["passwordPolicy:requireUppercase"] = "false",
+ ["passwordPolicy:requireLowercase"] = "false",
+ ["passwordPolicy:requireDigit"] = "false",
+ ["passwordPolicy:requireSymbol"] = "false"
+ })
+ .Build();
+
+ var manifest = new AuthorityPluginManifest(
+ "standard",
+ "standard",
+ true,
+ typeof(StandardPluginRegistrar).Assembly.GetName().Name,
+ typeof(StandardPluginRegistrar).Assembly.Location,
+ new[] { AuthorityPluginCapabilities.Password },
+ new Dictionary(),
+ "standard.yaml");
+
+ var pluginContext = new AuthorityPluginContext(manifest, configuration);
+ var services = new ServiceCollection();
+ var loggerProvider = new CapturingLoggerProvider();
+ services.AddLogging(builder => builder.AddProvider(loggerProvider));
+ services.AddSingleton(database);
+ services.AddSingleton(new InMemoryClientStore());
+
+ var registrar = new StandardPluginRegistrar();
+ registrar.Register(new AuthorityPluginRegistrationContext(services, pluginContext, configuration));
+
+ using var provider = services.BuildServiceProvider();
+ _ = provider.GetRequiredService();
+
+ Assert.Contains(loggerProvider.Entries, entry =>
+ entry.Level == LogLevel.Warning &&
+ entry.Category.Contains(typeof(StandardPluginRegistrar).FullName!, StringComparison.Ordinal) &&
+ entry.Message.Contains("weaker password policy", StringComparison.OrdinalIgnoreCase));
+ }
+
[Fact]
public void Register_ForcesPasswordCapability_WhenManifestMissing()
{
@@ -106,6 +169,8 @@ public class StandardPluginRegistrarTests
services.AddLogging();
services.AddSingleton(database);
services.AddSingleton(new InMemoryClientStore());
+ services.AddSingleton<IAuthorityRevocationStore>(new StubRevocationStore());
+ services.AddSingleton(TimeProvider.System);
var registrar = new StandardPluginRegistrar();
registrar.Register(new AuthorityPluginRegistrationContext(services, pluginContext, configuration));
@@ -209,6 +274,61 @@ public class StandardPluginRegistrarTests
}
}
+internal sealed record CapturedLogEntry(string Category, LogLevel Level, string Message);
+
+internal sealed class CapturingLoggerProvider : ILoggerProvider
+{
+ public List<CapturedLogEntry> Entries { get; } = new();
+
+ public ILogger CreateLogger(string categoryName) => new CapturingLogger(categoryName, Entries);
+
+ public void Dispose()
+ {
+ }
+
+ private sealed class CapturingLogger : ILogger
+ {
+ private readonly string category;
+ private readonly List<CapturedLogEntry> entries;
+
+ public CapturingLogger(string category, List<CapturedLogEntry> entries)
+ {
+ this.category = category;
+ this.entries = entries;
+ }
+
+ public IDisposable BeginScope<TState>(TState state) where TState : notnull => NullScope.Instance;
+
+ public bool IsEnabled(LogLevel logLevel) => true;
+
+ public void Log<TState>(LogLevel logLevel, EventId eventId, TState state, Exception? exception, Func<TState, Exception?, string> formatter)
+ {
+ entries.Add(new CapturedLogEntry(category, logLevel, formatter(state, exception)));
+ }
+
+ private sealed class NullScope : IDisposable
+ {
+ public static readonly NullScope Instance = new();
+
+ public void Dispose()
+ {
+ }
+ }
+ }
+}
+
+internal sealed class StubRevocationStore : IAuthorityRevocationStore
+{
+ public ValueTask UpsertAsync(AuthorityRevocationDocument document, CancellationToken cancellationToken)
+ => ValueTask.CompletedTask;
+
+ public ValueTask<bool> RemoveAsync(string category, string revocationId, CancellationToken cancellationToken)
+ => ValueTask.FromResult(false);
+
+ public ValueTask<IReadOnlyList<AuthorityRevocationDocument>> GetActiveAsync(DateTimeOffset asOf, CancellationToken cancellationToken)
+ => ValueTask.FromResult<IReadOnlyList<AuthorityRevocationDocument>>(Array.Empty<AuthorityRevocationDocument>());
+}
+
internal sealed class InMemoryClientStore : IAuthorityClientStore
{
private readonly Dictionary<string, AuthorityClientDocument> clients = new(StringComparer.OrdinalIgnoreCase);
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginOptions.cs b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginOptions.cs
index 46122d7c..86cec8dd 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginOptions.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginOptions.cs
@@ -71,6 +71,41 @@ internal sealed class PasswordPolicyOptions
throw new InvalidOperationException($"Standard plugin '{pluginName}' requires passwordPolicy.minimumLength to be greater than zero.");
}
}
+
+ public bool IsWeakerThan(PasswordPolicyOptions other)
+ {
+ if (other is null)
+ {
+ return false;
+ }
+
+ if (MinimumLength < other.MinimumLength)
+ {
+ return true;
+ }
+
+ if (!RequireUppercase && other.RequireUppercase)
+ {
+ return true;
+ }
+
+ if (!RequireLowercase && other.RequireLowercase)
+ {
+ return true;
+ }
+
+ if (!RequireDigit && other.RequireDigit)
+ {
+ return true;
+ }
+
+ if (!RequireSymbol && other.RequireSymbol)
+ {
+ return true;
+ }
+
+ return false;
+ }
}
internal sealed class LockoutOptions
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginRegistrar.cs b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginRegistrar.cs
index f0595a3c..d7857413 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginRegistrar.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/StandardPluginRegistrar.cs
@@ -51,6 +51,25 @@ internal sealed class StandardPluginRegistrar : IAuthorityPluginRegistrar
var cryptoProvider = sp.GetRequiredService();
var passwordHasher = new CryptoPasswordHasher(pluginOptions, cryptoProvider);
var loggerFactory = sp.GetRequiredService<ILoggerFactory>();
+ var registrarLogger = loggerFactory.CreateLogger<StandardPluginRegistrar>();
+
+ var baselinePolicy = new PasswordPolicyOptions();
+ if (pluginOptions.PasswordPolicy.IsWeakerThan(baselinePolicy))
+ {
+ registrarLogger.LogWarning(
+ "Standard plugin '{Plugin}' configured a weaker password policy (minLength={Length}, requireUpper={Upper}, requireLower={Lower}, requireDigit={Digit}, requireSymbol={Symbol}) than the baseline (minLength={BaseLength}, requireUpper={BaseUpper}, requireLower={BaseLower}, requireDigit={BaseDigit}, requireSymbol={BaseSymbol}).",
+ pluginName,
+ pluginOptions.PasswordPolicy.MinimumLength,
+ pluginOptions.PasswordPolicy.RequireUppercase,
+ pluginOptions.PasswordPolicy.RequireLowercase,
+ pluginOptions.PasswordPolicy.RequireDigit,
+ pluginOptions.PasswordPolicy.RequireSymbol,
+ baselinePolicy.MinimumLength,
+ baselinePolicy.RequireUppercase,
+ baselinePolicy.RequireLowercase,
+ baselinePolicy.RequireDigit,
+ baselinePolicy.RequireSymbol);
+ }
return new StandardUserCredentialStore(
pluginName,
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md
index 44f0cb79..6f7190fa 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md
+++ b/src/StellaOps.Authority/StellaOps.Authority.Plugin.Standard/TASKS.md
@@ -5,12 +5,14 @@
| PLG6.DOC | DONE (2025-10-11) | BE-Auth Plugin, Docs Guild | PLG1–PLG5 | Final polish + diagrams for plugin developer guide (AUTHPLUG-DOCS-01-001). | Docs team delivers copy-edit + exported diagrams; PR merged. |
| SEC1.PLG | DONE (2025-10-11) | Security Guild, BE-Auth Plugin | SEC1.A (StellaOps.Cryptography) | Swap Standard plugin hashing to Argon2id via `StellaOps.Cryptography` abstractions; keep PBKDF2 verification for legacy. | ✅ `StandardUserCredentialStore` uses `ICryptoProvider` to hash/check; ✅ Transparent rehash on success; ✅ Unit tests cover tamper + legacy rehash. |
| SEC1.OPT | DONE (2025-10-11) | Security Guild | SEC1.PLG | Expose password hashing knobs in `StandardPluginOptions` (`memoryKiB`, `iterations`, `parallelism`, `algorithm`) with validation. | ✅ Options bound from YAML; ✅ Invalid configs throw; ✅ Docs include tuning guidance. |
-| SEC2.PLG | TODO | Security Guild, Storage Guild | SEC2.A (audit contract) | Emit audit events from password verification outcomes and persist via `IAuthorityLoginAttemptStore`. | ✅ Serilog events enriched with subject/client/IP/outcome; ✅ Mongo records written per attempt; ✅ Tests assert success/lockout/failure cases. |
-| SEC3.PLG | TODO | Security Guild, BE-Auth Plugin | CORE8, SEC3.A (rate limiter) | Ensure lockout responses and rate-limit metadata flow through plugin logs/events (include retry-after). | ✅ Audit record includes retry-after; ✅ Tests confirm lockout + limiter interplay. |
+| SEC2.PLG | DOING (2025-10-14) | Security Guild, Storage Guild | SEC2.A (audit contract) | Emit audit events from password verification outcomes and persist via `IAuthorityLoginAttemptStore`. | ✅ Serilog events enriched with subject/client/IP/outcome; ✅ Mongo records written per attempt; ✅ Tests assert success/lockout/failure cases. |
+| SEC3.PLG | DOING (2025-10-14) | Security Guild, BE-Auth Plugin | CORE8, SEC3.A (rate limiter) | Ensure lockout responses and rate-limit metadata flow through plugin logs/events (include retry-after). | ✅ Audit record includes retry-after; ✅ Tests confirm lockout + limiter interplay. |
| SEC4.PLG | DONE (2025-10-12) | Security Guild | SEC4.A (revocation schema) | Provide plugin hooks so revoked users/clients write reasons for revocation bundle export. | ✅ Revocation exporter consumes plugin data; ✅ Tests cover revoked user/client output. |
-| SEC5.PLG | TODO | Security Guild | SEC5.A (threat model) | Address plugin-specific mitigations (bootstrap user handling, password policy docs) in threat model backlog. | ✅ Threat model lists plugin attack surfaces; ✅ Mitigation items filed. |
+| SEC5.PLG | DOING (2025-10-14) | Security Guild | SEC5.A (threat model) | Address plugin-specific mitigations (bootstrap user handling, password policy docs) in threat model backlog. | ✅ Threat model lists plugin attack surfaces; ✅ Mitigation items filed. |
| PLG4-6.CAPABILITIES | BLOCKED (2025-10-12) | BE-Auth Plugin, Docs Guild | PLG1–PLG3 | Finalise capability metadata exposure, config validation, and developer guide updates; remaining action is Docs polish/diagram export. | ✅ Capability metadata + validation merged; ✅ Plugin guide updated with final copy & diagrams; ✅ Release notes mention new toggles.
⛔ Blocked awaiting Authority rate-limiter stream (CORE8/SEC3) to resume so doc updates reflect final limiter behaviour. |
| PLG7.RFC | REVIEW | BE-Auth Plugin, Security Guild | PLG4 | Socialize LDAP plugin RFC (`docs/rfcs/authority-plugin-ldap.md`) and capture guild feedback. | ✅ Guild review sign-off recorded; ✅ Follow-up issues filed in module boards. |
| PLG6.DIAGRAM | TODO | Docs Guild | PLG6.DOC | Export final sequence/component diagrams for the developer guide and add offline-friendly assets under `docs/assets/authority`. | ✅ Mermaid sources committed; ✅ Rendered SVG/PNG linked from Section 2 + Section 9; ✅ Docs build preview shared with Plugin + Docs guilds. |
> Update statuses to DOING/DONE/BLOCKED as you make progress. Always run `dotnet test` for touched projects before marking DONE.
+
+> Remark (2025-10-13, PLG6.DOC/PLG6.DIAGRAM): Security Guild delivered `docs/security/rate-limits.md`; Docs team can lift Section 3 (tuning table + alerts) into the developer guide diagrams when rendering assets.
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/AuthorityMongoDefaults.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/AuthorityMongoDefaults.cs
index c43a0792..311e52c2 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/AuthorityMongoDefaults.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/AuthorityMongoDefaults.cs
@@ -22,5 +22,6 @@ public static class AuthorityMongoDefaults
public const string LoginAttempts = "authority_login_attempts";
public const string Revocations = "authority_revocations";
public const string RevocationState = "authority_revocation_state";
+ public const string Invites = "authority_bootstrap_invites";
}
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityBootstrapInviteDocument.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityBootstrapInviteDocument.cs
new file mode 100644
index 00000000..2d9974f4
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityBootstrapInviteDocument.cs
@@ -0,0 +1,72 @@
+using System;
+using System.Collections.Generic;
+using MongoDB.Bson;
+using MongoDB.Bson.Serialization.Attributes;
+
+namespace StellaOps.Authority.Storage.Mongo.Documents;
+
+/// <summary>
+/// Represents a bootstrap invitation token for provisioning users or clients.
+/// </summary>
+[BsonIgnoreExtraElements]
+public sealed class AuthorityBootstrapInviteDocument
+{
+ [BsonId]
+ [BsonRepresentation(BsonType.ObjectId)]
+ public string Id { get; set; } = ObjectId.GenerateNewId().ToString();
+
+ [BsonElement("token")]
+ public string Token { get; set; } = Guid.NewGuid().ToString("N");
+
+ [BsonElement("type")]
+ public string Type { get; set; } = "user";
+
+ [BsonElement("provider")]
+ [BsonIgnoreIfNull]
+ public string? Provider { get; set; }
+
+ [BsonElement("target")]
+ [BsonIgnoreIfNull]
+ public string? Target { get; set; }
+
+ [BsonElement("issuedAt")]
+ public DateTimeOffset IssuedAt { get; set; } = DateTimeOffset.UtcNow;
+
+ [BsonElement("issuedBy")]
+ [BsonIgnoreIfNull]
+ public string? IssuedBy { get; set; }
+
+ [BsonElement("expiresAt")]
+ public DateTimeOffset ExpiresAt { get; set; } = DateTimeOffset.UtcNow.AddDays(2);
+
+ [BsonElement("status")]
+ public string Status { get; set; } = AuthorityBootstrapInviteStatuses.Pending;
+
+ [BsonElement("reservedAt")]
+ [BsonIgnoreIfNull]
+ public DateTimeOffset? ReservedAt { get; set; }
+
+ [BsonElement("reservedBy")]
+ [BsonIgnoreIfNull]
+ public string? ReservedBy { get; set; }
+
+ [BsonElement("consumedAt")]
+ [BsonIgnoreIfNull]
+ public DateTimeOffset? ConsumedAt { get; set; }
+
+ [BsonElement("consumedBy")]
+ [BsonIgnoreIfNull]
+ public string? ConsumedBy { get; set; }
+
+ [BsonElement("metadata")]
+ [BsonIgnoreIfNull]
+ public Dictionary? Metadata { get; set; }
+}
+
+public static class AuthorityBootstrapInviteStatuses
+{
+ public const string Pending = "pending";
+ public const string Reserved = "reserved";
+ public const string Consumed = "consumed";
+ public const string Expired = "expired";
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityTokenDocument.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityTokenDocument.cs
index de05127f..fd5156b5 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityTokenDocument.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Documents/AuthorityTokenDocument.cs
@@ -1,3 +1,4 @@
+using System;
using System.Collections.Generic;
using MongoDB.Bson;
using MongoDB.Bson.Serialization.Attributes;
@@ -61,6 +62,11 @@ public sealed class AuthorityTokenDocument
[BsonIgnoreIfNull]
public string? RevokedReasonDescription { get; set; }
+
+ [BsonElement("devices")]
+ [BsonIgnoreIfNull]
+ public List<BsonDocument>? Devices { get; set; }
+
[BsonElement("revokedMetadata")]
[BsonIgnoreIfNull]
public Dictionary? RevokedMetadata { get; set; }
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Extensions/ServiceCollectionExtensions.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Extensions/ServiceCollectionExtensions.cs
index 8856e5e9..9b48024d 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Extensions/ServiceCollectionExtensions.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Extensions/ServiceCollectionExtensions.cs
@@ -98,12 +98,19 @@ public static class ServiceCollectionExtensions
return database.GetCollection(AuthorityMongoDefaults.Collections.RevocationState);
});
+ services.AddSingleton(static sp =>
+ {
+ var database = sp.GetRequiredService<IMongoDatabase>();
+ return database.GetCollection<AuthorityBootstrapInviteDocument>(AuthorityMongoDefaults.Collections.Invites);
+ });
+
services.TryAddSingleton();
services.TryAddSingleton();
services.TryAddSingleton();
services.TryAddSingleton();
services.TryAddSingleton();
services.TryAddSingleton();
+ services.TryAddSingleton<IAuthorityBootstrapInviteStore, AuthorityBootstrapInviteStore>();
services.TryAddSingleton();
services.TryAddSingleton();
@@ -112,6 +119,7 @@ public static class ServiceCollectionExtensions
services.TryAddSingleton();
services.TryAddSingleton();
services.TryAddSingleton();
+ services.TryAddSingleton();
return services;
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Initialization/AuthorityBootstrapInviteCollectionInitializer.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Initialization/AuthorityBootstrapInviteCollectionInitializer.cs
new file mode 100644
index 00000000..4aea6696
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Initialization/AuthorityBootstrapInviteCollectionInitializer.cs
@@ -0,0 +1,25 @@
+using MongoDB.Driver;
+using StellaOps.Authority.Storage.Mongo.Documents;
+
+namespace StellaOps.Authority.Storage.Mongo.Initialization;
+
+internal sealed class AuthorityBootstrapInviteCollectionInitializer : IAuthorityCollectionInitializer
+{
+ private static readonly CreateIndexModel<AuthorityBootstrapInviteDocument>[] Indexes =
+ {
+ new CreateIndexModel<AuthorityBootstrapInviteDocument>(
+ Builders<AuthorityBootstrapInviteDocument>.IndexKeys.Ascending(i => i.Token),
+ new CreateIndexOptions { Unique = true, Name = "idx_invite_token" }),
+ new CreateIndexModel<AuthorityBootstrapInviteDocument>(
+ Builders<AuthorityBootstrapInviteDocument>.IndexKeys.Ascending(i => i.Status).Ascending(i => i.ExpiresAt),
+ new CreateIndexOptions { Name = "idx_invite_status_expires" })
+ };
+
+ public async ValueTask EnsureIndexesAsync(IMongoDatabase database, CancellationToken cancellationToken)
+ {
+ ArgumentNullException.ThrowIfNull(database);
+
+ var collection = database.GetCollection<AuthorityBootstrapInviteDocument>(AuthorityMongoDefaults.Collections.Invites);
+ await collection.Indexes.CreateManyAsync(Indexes, cancellationToken).ConfigureAwait(false);
+ }
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityBootstrapInviteStore.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityBootstrapInviteStore.cs
new file mode 100644
index 00000000..48c0629f
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityBootstrapInviteStore.cs
@@ -0,0 +1,166 @@
+using System;
+using System.Collections.Generic;
+using MongoDB.Driver;
+using StellaOps.Authority.Storage.Mongo.Documents;
+
+namespace StellaOps.Authority.Storage.Mongo.Stores;
+
+internal sealed class AuthorityBootstrapInviteStore : IAuthorityBootstrapInviteStore
+{
+ private readonly IMongoCollection<AuthorityBootstrapInviteDocument> collection;
+
+ public AuthorityBootstrapInviteStore(IMongoCollection<AuthorityBootstrapInviteDocument> collection)
+ => this.collection = collection ?? throw new ArgumentNullException(nameof(collection));
+
+ public async ValueTask<AuthorityBootstrapInviteDocument> CreateAsync(AuthorityBootstrapInviteDocument document, CancellationToken cancellationToken)
+ {
+ ArgumentNullException.ThrowIfNull(document);
+
+ await collection.InsertOneAsync(document, cancellationToken: cancellationToken).ConfigureAwait(false);
+ return document;
+ }
+
+ public async ValueTask<BootstrapInviteReservationResult> TryReserveAsync(
+ string token,
+ string expectedType,
+ DateTimeOffset now,
+ string? reservedBy,
+ CancellationToken cancellationToken)
+ {
+ if (string.IsNullOrWhiteSpace(token))
+ {
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.NotFound, null);
+ }
+
+ var normalizedToken = token.Trim();
+ var filter = Builders<AuthorityBootstrapInviteDocument>.Filter.And(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Token, normalizedToken),
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Status, AuthorityBootstrapInviteStatuses.Pending));
+
+ var update = Builders<AuthorityBootstrapInviteDocument>.Update
+ .Set(i => i.Status, AuthorityBootstrapInviteStatuses.Reserved)
+ .Set(i => i.ReservedAt, now)
+ .Set(i => i.ReservedBy, reservedBy);
+
+ var options = new FindOneAndUpdateOptions<AuthorityBootstrapInviteDocument>
+ {
+ ReturnDocument = ReturnDocument.After
+ };
+
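+ // Atomically flip a pending invite to reserved; callers that lose the race fall through to the status triage below.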
+ var invite = await collection.FindOneAndUpdateAsync(filter, update, options, cancellationToken).ConfigureAwait(false);
+
+ if (invite is null)
+ {
+ var existing = await collection
+ .Find(i => i.Token == normalizedToken)
+ .FirstOrDefaultAsync(cancellationToken)
+ .ConfigureAwait(false);
+
+ if (existing is null)
+ {
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.NotFound, null);
+ }
+
+ if (existing.Status is AuthorityBootstrapInviteStatuses.Consumed or AuthorityBootstrapInviteStatuses.Reserved)
+ {
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.AlreadyUsed, existing);
+ }
+
+ if (existing.Status == AuthorityBootstrapInviteStatuses.Expired || existing.ExpiresAt <= now)
+ {
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.Expired, existing);
+ }
+
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.NotFound, existing);
+ }
+
+ if (!string.Equals(invite.Type, expectedType, StringComparison.OrdinalIgnoreCase))
+ {
+ await ReleaseAsync(normalizedToken, cancellationToken).ConfigureAwait(false);
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.NotFound, invite);
+ }
+
+ if (invite.ExpiresAt <= now)
+ {
+ await MarkExpiredAsync(normalizedToken, cancellationToken).ConfigureAwait(false);
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.Expired, invite);
+ }
+
+ return new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.Reserved, invite);
+ }
+
+ public async ValueTask<bool> ReleaseAsync(string token, CancellationToken cancellationToken)
+ {
+ if (string.IsNullOrWhiteSpace(token))
+ {
+ return false;
+ }
+
+ var result = await collection.UpdateOneAsync(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.And(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Token, token.Trim()),
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Status, AuthorityBootstrapInviteStatuses.Reserved)),
+ Builders<AuthorityBootstrapInviteDocument>.Update
+ .Set(i => i.Status, AuthorityBootstrapInviteStatuses.Pending)
+ .Set(i => i.ReservedAt, null)
+ .Set(i => i.ReservedBy, null),
+ cancellationToken: cancellationToken).ConfigureAwait(false);
+
+ return result.ModifiedCount > 0;
+ }
+
+ public async ValueTask<bool> MarkConsumedAsync(string token, string? consumedBy, DateTimeOffset consumedAt, CancellationToken cancellationToken)
+ {
+ if (string.IsNullOrWhiteSpace(token))
+ {
+ return false;
+ }
+
+ var result = await collection.UpdateOneAsync(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.And(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Token, token.Trim()),
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Status, AuthorityBootstrapInviteStatuses.Reserved)),
+ Builders<AuthorityBootstrapInviteDocument>.Update
+ .Set(i => i.Status, AuthorityBootstrapInviteStatuses.Consumed)
+ .Set(i => i.ConsumedAt, consumedAt)
+ .Set(i => i.ConsumedBy, consumedBy),
+ cancellationToken: cancellationToken).ConfigureAwait(false);
+
+ return result.ModifiedCount > 0;
+ }
+
+ public async ValueTask<IReadOnlyList<AuthorityBootstrapInviteDocument>> ExpireAsync(DateTimeOffset now, CancellationToken cancellationToken)
+ {
+ var filter = Builders<AuthorityBootstrapInviteDocument>.Filter.And(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Lte(i => i.ExpiresAt, now),
+ Builders<AuthorityBootstrapInviteDocument>.Filter.In(
+ i => i.Status,
+ new[] { AuthorityBootstrapInviteStatuses.Pending, AuthorityBootstrapInviteStatuses.Reserved }));
+
+ var update = Builders<AuthorityBootstrapInviteDocument>.Update
+ .Set(i => i.Status, AuthorityBootstrapInviteStatuses.Expired)
+ .Set(i => i.ReservedAt, null)
+ .Set(i => i.ReservedBy, null);
+
+ var expired = await collection.Find(filter)
+ .ToListAsync(cancellationToken)
+ .ConfigureAwait(false);
+
+ if (expired.Count == 0)
+ {
+ return Array.Empty<AuthorityBootstrapInviteDocument>();
+ }
+
+ await collection.UpdateManyAsync(filter, update, cancellationToken: cancellationToken).ConfigureAwait(false);
+
+ return expired;
+ }
+
+ private async Task MarkExpiredAsync(string token, CancellationToken cancellationToken)
+ {
+ await collection.UpdateOneAsync(
+ Builders<AuthorityBootstrapInviteDocument>.Filter.Eq(i => i.Token, token),
+ Builders<AuthorityBootstrapInviteDocument>.Update.Set(i => i.Status, AuthorityBootstrapInviteStatuses.Expired),
+ cancellationToken: cancellationToken).ConfigureAwait(false);
+ }
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityTokenStore.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityTokenStore.cs
index c74a1ea6..da2c4477 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityTokenStore.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/AuthorityTokenStore.cs
@@ -1,6 +1,10 @@
+using System;
using System.Collections.Generic;
using Microsoft.Extensions.Logging;
+using MongoDB.Bson;
using MongoDB.Driver;
+using System.Linq;
+using System.Globalization;
using StellaOps.Authority.Storage.Mongo.Documents;
namespace StellaOps.Authority.Storage.Mongo.Stores;
@@ -86,6 +90,86 @@ internal sealed class AuthorityTokenStore : IAuthorityTokenStore
logger.LogDebug("Updated token {TokenId} status to {Status} (matched {Matched}).", tokenId, status, result.MatchedCount);
}
+
+ public async ValueTask<TokenUsageUpdateResult> RecordUsageAsync(string tokenId, string? remoteAddress, string? userAgent, DateTimeOffset observedAt, CancellationToken cancellationToken)
+ {
+ if (string.IsNullOrWhiteSpace(tokenId))
+ {
+ return new TokenUsageUpdateResult(TokenUsageUpdateStatus.NotFound, null, null);
+ }
+
+ if (string.IsNullOrWhiteSpace(remoteAddress) && string.IsNullOrWhiteSpace(userAgent))
+ {
+ return new TokenUsageUpdateResult(TokenUsageUpdateStatus.MissingMetadata, remoteAddress, userAgent);
+ }
+
+ var id = tokenId.Trim();
+ var token = await collection
+ .Find(t => t.TokenId == id)
+ .FirstOrDefaultAsync(cancellationToken)
+ .ConfigureAwait(false);
+
+ if (token is null)
+ {
+ return new TokenUsageUpdateResult(TokenUsageUpdateStatus.NotFound, remoteAddress, userAgent);
+ }
+
+ token.Devices ??= new List<BsonDocument>();
+
+ string? normalizedAddress = string.IsNullOrWhiteSpace(remoteAddress) ? null : remoteAddress.Trim();
+ string? normalizedAgent = string.IsNullOrWhiteSpace(userAgent) ? null : userAgent.Trim();
+
+ var device = token.Devices.FirstOrDefault(d =>
+ string.Equals(GetString(d, "remoteAddress"), normalizedAddress, StringComparison.OrdinalIgnoreCase) &&
+ string.Equals(GetString(d, "userAgent"), normalizedAgent, StringComparison.Ordinal));
+ var suspicious = false;
+
+ if (device is null)
+ {
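+ // A new device fingerprint on a token that already has recorded devices is flagged as suspected replay rather than rejected outright.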
+ suspicious = token.Devices.Count > 0;
+ var document = new BsonDocument
+ {
+ { "remoteAddress", normalizedAddress },
+ { "userAgent", normalizedAgent },
+ { "firstSeen", BsonDateTime.Create(observedAt.UtcDateTime) },
+ { "lastSeen", BsonDateTime.Create(observedAt.UtcDateTime) },
+ { "useCount", 1 }
+ };
+
+ token.Devices.Add(document);
+ }
+ else
+ {
+ device["lastSeen"] = BsonDateTime.Create(observedAt.UtcDateTime);
+ device["useCount"] = device.TryGetValue("useCount", out var existingCount) && existingCount.IsInt32
+ ? existingCount.AsInt32 + 1
+ : 1;
+ }
+
+ var update = Builders<AuthorityTokenDocument>.Update.Set(t => t.Devices, token.Devices);
+ await collection.UpdateOneAsync(
+ Builders<AuthorityTokenDocument>.Filter.Eq(t => t.TokenId, id),
+ update,
+ cancellationToken: cancellationToken).ConfigureAwait(false);
+
+ return new TokenUsageUpdateResult(suspicious ? TokenUsageUpdateStatus.SuspectedReplay : TokenUsageUpdateStatus.Recorded, normalizedAddress, normalizedAgent);
+ }
+
+ private static string? GetString(BsonDocument document, string name)
+ {
+ if (!document.TryGetValue(name, out var value))
+ {
+ return null;
+ }
+
+ return value switch
+ {
+ { IsString: true } => value.AsString,
+ { IsBsonNull: true } => null,
+ _ => value.ToString()
+ };
+ }
+
public async ValueTask DeleteExpiredAsync(DateTimeOffset threshold, CancellationToken cancellationToken)
{
var filter = Builders<AuthorityTokenDocument>.Filter.And(
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityBootstrapInviteStore.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityBootstrapInviteStore.cs
new file mode 100644
index 00000000..c0a51bc5
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityBootstrapInviteStore.cs
@@ -0,0 +1,26 @@
+using StellaOps.Authority.Storage.Mongo.Documents;
+
+namespace StellaOps.Authority.Storage.Mongo.Stores;
+
+public interface IAuthorityBootstrapInviteStore
+{
+ ValueTask<AuthorityBootstrapInviteDocument> CreateAsync(AuthorityBootstrapInviteDocument document, CancellationToken cancellationToken);
+
+ ValueTask<BootstrapInviteReservationResult> TryReserveAsync(string token, string expectedType, DateTimeOffset now, string? reservedBy, CancellationToken cancellationToken);
+
+ ValueTask<bool> ReleaseAsync(string token, CancellationToken cancellationToken);
+
+ ValueTask<bool> MarkConsumedAsync(string token, string? consumedBy, DateTimeOffset consumedAt, CancellationToken cancellationToken);
+
+ ValueTask<IReadOnlyList<AuthorityBootstrapInviteDocument>> ExpireAsync(DateTimeOffset now, CancellationToken cancellationToken);
+}
+
+public enum BootstrapInviteReservationStatus
+{
+ Reserved,
+ NotFound,
+ Expired,
+ AlreadyUsed
+}
+
+public sealed record BootstrapInviteReservationResult(BootstrapInviteReservationStatus Status, AuthorityBootstrapInviteDocument? Invite);
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityTokenStore.cs b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityTokenStore.cs
index fc576bd9..f4bb918a 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityTokenStore.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Storage.Mongo/Stores/IAuthorityTokenStore.cs
@@ -1,3 +1,5 @@
+using System;
+using System.Collections.Generic;
using StellaOps.Authority.Storage.Mongo.Documents;
namespace StellaOps.Authority.Storage.Mongo.Stores;
@@ -21,5 +23,17 @@ public interface IAuthorityTokenStore
ValueTask<long> DeleteExpiredAsync(DateTimeOffset threshold, CancellationToken cancellationToken);
+ ValueTask<TokenUsageUpdateResult> RecordUsageAsync(string tokenId, string? remoteAddress, string? userAgent, DateTimeOffset observedAt, CancellationToken cancellationToken);
+
ValueTask<IReadOnlyList<AuthorityTokenDocument>> ListRevokedAsync(DateTimeOffset? issuedAfter, CancellationToken cancellationToken);
}
+
+public enum TokenUsageUpdateStatus
+{
+ Recorded,
+ SuspectedReplay,
+ MissingMetadata,
+ NotFound
+}
+
+public sealed record TokenUsageUpdateResult(TokenUsageUpdateStatus Status, string? RemoteAddress, string? UserAgent);
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Tests/Bootstrap/BootstrapInviteCleanupServiceTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Tests/Bootstrap/BootstrapInviteCleanupServiceTests.cs
new file mode 100644
index 00000000..b707b251
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority.Tests/Bootstrap/BootstrapInviteCleanupServiceTests.cs
@@ -0,0 +1,97 @@
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Threading;
+using System.Threading.Tasks;
+using Microsoft.Extensions.Logging.Abstractions;
+using Microsoft.Extensions.Time.Testing;
+using StellaOps.Authority.Bootstrap;
+using StellaOps.Authority.Storage.Mongo.Documents;
+using StellaOps.Authority.Storage.Mongo.Stores;
+using StellaOps.Cryptography.Audit;
+using Xunit;
+
+namespace StellaOps.Authority.Tests.Bootstrap;
+
+public sealed class BootstrapInviteCleanupServiceTests
+{
+ [Fact]
+ public async Task SweepExpiredInvitesAsync_ExpiresInvitesAndEmitsAuditRecords()
+ {
+ var now = new DateTimeOffset(2025, 10, 14, 12, 0, 0, TimeSpan.Zero);
+ var timeProvider = new FakeTimeProvider(now);
+
+ var invites = new List<AuthorityBootstrapInviteDocument>
+ {
+ new()
+ {
+ Token = "token-1",
+ Type = BootstrapInviteTypes.User,
+ ExpiresAt = now.AddMinutes(-5),
+ Provider = "standard",
+ Target = "alice@example.com",
+ Status = AuthorityBootstrapInviteStatuses.Pending
+ },
+ new()
+ {
+ Token = "token-2",
+ Type = BootstrapInviteTypes.Client,
+ ExpiresAt = now.AddMinutes(-1),
+ Provider = "standard",
+ Target = "client-1",
+ Status = AuthorityBootstrapInviteStatuses.Reserved
+ }
+ };
+
+ var store = new FakeInviteStore(invites);
+ var sink = new CapturingAuthEventSink();
+ var service = new BootstrapInviteCleanupService(store, sink, timeProvider, NullLogger.Instance);
+
+ await service.SweepExpiredInvitesAsync(CancellationToken.None);
+
+ Assert.True(store.ExpireCalled);
+ Assert.Equal(2, sink.Events.Count);
+ Assert.All(sink.Events, record => Assert.Equal("authority.bootstrap.invite.expired", record.EventType));
+ Assert.Contains(sink.Events, record => record.Properties.Any(property => property.Name == "invite.token" && property.Value.Value == "token-1"));
+ Assert.Contains(sink.Events, record => record.Properties.Any(property => property.Name == "invite.token" && property.Value.Value == "token-2"));
+ }
+
+ private sealed class FakeInviteStore : IAuthorityBootstrapInviteStore
+ {
+ private readonly IReadOnlyList<AuthorityBootstrapInviteDocument> invites;
+
+ public FakeInviteStore(IReadOnlyList<AuthorityBootstrapInviteDocument> invites)
+ => this.invites = invites;
+
+ public bool ExpireCalled { get; private set; }
+
+ public ValueTask<AuthorityBootstrapInviteDocument> CreateAsync(AuthorityBootstrapInviteDocument document, CancellationToken cancellationToken)
+ => throw new NotImplementedException();
+
+ public ValueTask<BootstrapInviteReservationResult> TryReserveAsync(string token, string expectedType, DateTimeOffset now, string? reservedBy, CancellationToken cancellationToken)
+ => ValueTask.FromResult(new BootstrapInviteReservationResult(BootstrapInviteReservationStatus.NotFound, null));
+
+ public ValueTask<bool> ReleaseAsync(string token, CancellationToken cancellationToken)
+ => ValueTask.FromResult(false);
+
+ public ValueTask<bool> MarkConsumedAsync(string token, string? consumedBy, DateTimeOffset consumedAt, CancellationToken cancellationToken)
+ => ValueTask.FromResult(false);
+
+ public ValueTask<IReadOnlyList<AuthorityBootstrapInviteDocument>> ExpireAsync(DateTimeOffset now, CancellationToken cancellationToken)
+ {
+ ExpireCalled = true;
+ return ValueTask.FromResult(invites);
+ }
+ }
+
+ private sealed class CapturingAuthEventSink : IAuthEventSink
+ {
+ public List<AuthEventRecord> Events { get; } = new();
+
+ public ValueTask WriteAsync(AuthEventRecord record, CancellationToken cancellationToken)
+ {
+ Events.Add(record);
+ return ValueTask.CompletedTask;
+ }
+ }
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/ClientCredentialsAndTokenHandlersTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/ClientCredentialsAndTokenHandlersTests.cs
index 086c5b54..25b3e059 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/ClientCredentialsAndTokenHandlersTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/ClientCredentialsAndTokenHandlersTests.cs
@@ -17,6 +17,7 @@ using StellaOps.Authority.Storage.Mongo.Stores;
using StellaOps.Authority.RateLimiting;
using StellaOps.Cryptography.Audit;
using Xunit;
+using MongoDB.Bson;
using static StellaOps.Authority.Tests.OpenIddict.TestHelpers;
namespace StellaOps.Authority.Tests.OpenIddict;
@@ -76,7 +77,7 @@ public class ClientCredentialsHandlersTests
await handler.HandleAsync(context);
- Assert.False(context.IsRejected);
+ Assert.False(context.IsRejected, $"Rejected: {context.Error} - {context.ErrorDescription}");
Assert.Same(clientDocument, context.Transaction.Properties[AuthorityOpenIddictConstants.ClientTransactionProperty]);
var grantedScopes = Assert.IsType(context.Transaction.Properties[AuthorityOpenIddictConstants.ClientGrantedScopesProperty]);
@@ -84,6 +85,36 @@ public class ClientCredentialsHandlersTests
Assert.Equal(clientDocument.Plugin, context.Transaction.Properties[AuthorityOpenIddictConstants.ClientProviderTransactionProperty]);
}
+ [Fact]
+ public async Task ValidateClientCredentials_EmitsTamperAuditEvent_WhenUnexpectedParametersPresent()
+ {
+ var clientDocument = CreateClient(
+ secret: "s3cr3t!",
+ allowedGrantTypes: "client_credentials",
+ allowedScopes: "jobs:read");
+
+ var registry = CreateRegistry(withClientProvisioning: true, clientDescriptor: CreateDescriptor(clientDocument));
+ var sink = new TestAuthEventSink();
+ var handler = new ValidateClientCredentialsHandler(
+ new TestClientStore(clientDocument),
+ registry,
+ TestActivitySource,
+ sink,
+ new TestRateLimiterMetadataAccessor(),
+ TimeProvider.System,
+ NullLogger.Instance);
+
+ var transaction = CreateTokenTransaction(clientDocument.ClientId, "s3cr3t!", scope: "jobs:read");
+ transaction.Request?.SetParameter("unexpected_param", "value");
+
+ await handler.HandleAsync(new OpenIddictServerEvents.ValidateTokenRequestContext(transaction));
+
+ var tamperEvent = Assert.Single(sink.Events, record => record.EventType == "authority.token.tamper");
+ Assert.Contains(tamperEvent.Properties, property =>
+ string.Equals(property.Name, "request.unexpected_parameter", StringComparison.OrdinalIgnoreCase) &&
+ string.Equals(property.Value.Value, "unexpected_param", StringComparison.OrdinalIgnoreCase));
+ }
+
[Fact]
public async Task HandleClientCredentials_PersistsTokenAndEnrichesClaims()
{
@@ -98,22 +129,30 @@ public class ClientCredentialsHandlersTests
var tokenStore = new TestTokenStore();
var authSink = new TestAuthEventSink();
var metadataAccessor = new TestRateLimiterMetadataAccessor();
+ var validateHandler = new ValidateClientCredentialsHandler(
+ new TestClientStore(clientDocument),
+ registry,
+ TestActivitySource,
+ authSink,
+ metadataAccessor,
+ TimeProvider.System,
+ NullLogger.Instance);
+
+ var transaction = CreateTokenTransaction(clientDocument.ClientId, secret: null, scope: "jobs:trigger");
+ transaction.Options.AccessTokenLifetime = TimeSpan.FromMinutes(30);
+
+ var validateContext = new OpenIddictServerEvents.ValidateTokenRequestContext(transaction);
+ await validateHandler.HandleAsync(validateContext);
+ Assert.False(validateContext.IsRejected);
+
var handler = new HandleClientCredentialsHandler(
registry,
tokenStore,
TimeProvider.System,
TestActivitySource,
- authSink,
- metadataAccessor,
NullLogger.Instance);
var persistHandler = new PersistTokensHandler(tokenStore, TimeProvider.System, TestActivitySource, NullLogger.Instance);
- var transaction = CreateTokenTransaction(clientDocument.ClientId, secret: null, scope: "jobs:trigger");
- transaction.Options.AccessTokenLifetime = TimeSpan.FromMinutes(30);
- transaction.Properties[AuthorityOpenIddictConstants.ClientTransactionProperty] = clientDocument;
- transaction.Properties[AuthorityOpenIddictConstants.ClientProviderTransactionProperty] = clientDocument.Plugin!;
- transaction.Properties[AuthorityOpenIddictConstants.ClientGrantedScopesProperty] = new[] { "jobs:trigger" };
-
var context = new OpenIddictServerEvents.HandleTokenRequestContext(transaction);
await handler.HandleAsync(context);
@@ -161,10 +200,14 @@ public class TokenValidationHandlersTests
ClientId = "feedser"
};
+ var metadataAccessor = new TestRateLimiterMetadataAccessor();
+ var auditSink = new TestAuthEventSink();
var handler = new ValidateAccessTokenHandler(
tokenStore,
new TestClientStore(CreateClient()),
CreateRegistry(withClientProvisioning: true, clientDescriptor: CreateDescriptor(CreateClient())),
+ metadataAccessor,
+ auditSink,
TimeProvider.System,
TestActivitySource,
NullLogger.Instance);
@@ -203,10 +246,14 @@ public class TokenValidationHandlersTests
var registry = new AuthorityIdentityProviderRegistry(new[] { plugin }, NullLogger.Instance);
+ var metadataAccessorSuccess = new TestRateLimiterMetadataAccessor();
+ var auditSinkSuccess = new TestAuthEventSink();
var handler = new ValidateAccessTokenHandler(
new TestTokenStore(),
new TestClientStore(clientDocument),
registry,
+ metadataAccessorSuccess,
+ auditSinkSuccess,
TimeProvider.System,
TestActivitySource,
NullLogger.Instance);
@@ -229,6 +276,76 @@ public class TokenValidationHandlersTests
Assert.False(context.IsRejected);
Assert.Contains(principal.Claims, claim => claim.Type == "enriched" && claim.Value == "true");
}
+
+ [Fact]
+ public async Task ValidateAccessTokenHandler_EmitsReplayAudit_WhenStoreDetectsSuspectedReplay()
+ {
+ var tokenStore = new TestTokenStore();
+ tokenStore.Inserted = new AuthorityTokenDocument
+ {
+ TokenId = "token-replay",
+ Status = "valid",
+ ClientId = "agent",
+ Devices = new List<BsonDocument>
+ {
+ new BsonDocument
+ {
+ { "remoteAddress", "10.0.0.1" },
+ { "userAgent", "agent/1.0" },
+ { "firstSeen", BsonDateTime.Create(DateTimeOffset.UtcNow.AddMinutes(-15)) },
+ { "lastSeen", BsonDateTime.Create(DateTimeOffset.UtcNow.AddMinutes(-5)) },
+ { "useCount", 2 }
+ }
+ }
+ };
+
+ tokenStore.UsageCallback = (remote, agent) => new TokenUsageUpdateResult(TokenUsageUpdateStatus.SuspectedReplay, remote, agent);
+
+ var metadataAccessor = new TestRateLimiterMetadataAccessor();
+ var metadata = metadataAccessor.GetMetadata();
+ if (metadata is not null)
+ {
+ metadata.RemoteIp = "203.0.113.7";
+ metadata.UserAgent = "agent/2.0";
+ }
+
+ var clientDocument = CreateClient();
+ clientDocument.ClientId = "agent";
+ var auditSink = new TestAuthEventSink();
+ var registry = CreateRegistry(withClientProvisioning: false, clientDescriptor: null);
+ var handler = new ValidateAccessTokenHandler(
+ tokenStore,
+ new TestClientStore(clientDocument),
+ registry,
+ metadataAccessor,
+ auditSink,
+ TimeProvider.System,
+ TestActivitySource,
+ NullLogger.Instance);
+
+ var transaction = new OpenIddictServerTransaction
+ {
+ Options = new OpenIddictServerOptions(),
+ EndpointType = OpenIddictServerEndpointType.Introspection,
+ Request = new OpenIddictRequest()
+ };
+
+ var principal = CreatePrincipal("agent", "token-replay", "standard");
+ var context = new OpenIddictServerEvents.ValidateTokenContext(transaction)
+ {
+ Principal = principal,
+ TokenId = "token-replay"
+ };
+
+ await handler.HandleAsync(context);
+
+ Assert.False(context.IsRejected);
+ var replayEvent = Assert.Single(auditSink.Events, record => record.EventType == "authority.token.replay.suspected");
+ Assert.Equal(AuthEventOutcome.Error, replayEvent.Outcome);
+ Assert.NotNull(replayEvent.Network);
+ Assert.Equal("203.0.113.7", replayEvent.Network?.RemoteAddress.Value);
+ Assert.Contains(replayEvent.Properties, property => property.Name == "token.devices.total");
+ }
}
internal sealed class TestClientStore : IAuthorityClientStore
@@ -263,6 +380,8 @@ internal sealed class TestTokenStore : IAuthorityTokenStore
{
public AuthorityTokenDocument? Inserted { get; set; }
+ public Func<string?, string?, TokenUsageUpdateResult>? UsageCallback { get; set; }
+
public ValueTask InsertAsync(AuthorityTokenDocument document, CancellationToken cancellationToken)
{
Inserted = document;
@@ -281,6 +400,9 @@ internal sealed class TestTokenStore : IAuthorityTokenStore
public ValueTask<long> DeleteExpiredAsync(DateTimeOffset threshold, CancellationToken cancellationToken)
=> ValueTask.FromResult(0L);
+ public ValueTask<TokenUsageUpdateResult> RecordUsageAsync(string tokenId, string? remoteAddress, string? userAgent, DateTimeOffset observedAt, CancellationToken cancellationToken)
+ => ValueTask.FromResult(UsageCallback?.Invoke(remoteAddress, userAgent) ?? new TokenUsageUpdateResult(TokenUsageUpdateStatus.Recorded, remoteAddress, userAgent));
+
public ValueTask<IReadOnlyList<AuthorityTokenDocument>> ListRevokedAsync(DateTimeOffset? issuedAfter, CancellationToken cancellationToken)
=> ValueTask.FromResult<IReadOnlyList<AuthorityTokenDocument>>(Array.Empty<AuthorityTokenDocument>());
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/PasswordGrantHandlersTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/PasswordGrantHandlersTests.cs
index cca234a0..b0f502c2 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/PasswordGrantHandlersTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/PasswordGrantHandlersTests.cs
@@ -74,6 +74,26 @@ public class PasswordGrantHandlersTests
Assert.Contains(sink.Events, record => record.EventType == "authority.password.grant" && record.Outcome == AuthEventOutcome.LockedOut);
}
+ [Fact]
+ public async Task ValidatePasswordGrant_EmitsTamperAuditEvent_WhenUnexpectedParametersPresent()
+ {
+ var sink = new TestAuthEventSink();
+ var metadataAccessor = new TestRateLimiterMetadataAccessor();
+ var registry = CreateRegistry(new SuccessCredentialStore());
+ var validate = new ValidatePasswordGrantHandler(registry, TestActivitySource, sink, metadataAccessor, TimeProvider.System, NullLogger.Instance);
+
+ var transaction = CreatePasswordTransaction("alice", "Password1!");
+ transaction.Request?.SetParameter("unexpected_param", "value");
+
+ await validate.HandleAsync(new OpenIddictServerEvents.ValidateTokenRequestContext(transaction));
+
+ var tamperEvent = Assert.Single(sink.Events, record => record.EventType == "authority.token.tamper");
+ Assert.Equal(AuthEventOutcome.Failure, tamperEvent.Outcome);
+ Assert.Contains(tamperEvent.Properties, property =>
+ string.Equals(property.Name, "request.unexpected_parameter", StringComparison.OrdinalIgnoreCase) &&
+ string.Equals(property.Value.Value, "unexpected_param", StringComparison.OrdinalIgnoreCase));
+ }
+
private static AuthorityIdentityProviderRegistry CreateRegistry(IUserCredentialStore store)
{
var plugin = new StubIdentityProviderPlugin("stub", store);
@@ -104,14 +124,14 @@ public class PasswordGrantHandlersTests
Name = name;
Type = "stub";
var manifest = new AuthorityPluginManifest(
- name,
- "stub",
- enabled: true,
- version: null,
- description: null,
- capabilities: new[] { AuthorityPluginCapabilities.Password },
- configuration: new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase),
- configPath: $"{name}.yaml");
+ Name: name,
+ Type: "stub",
+ Enabled: true,
+ AssemblyName: null,
+ AssemblyPath: null,
+ Capabilities: new[] { AuthorityPluginCapabilities.Password },
+ Metadata: new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase),
+ ConfigPath: $"{name}.yaml");
Context = new AuthorityPluginContext(manifest, new ConfigurationBuilder().Build());
Credentials = store;
ClaimsEnricher = new NoopClaimsEnricher();
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/TokenPersistenceIntegrationTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/TokenPersistenceIntegrationTests.cs
index 76b15cde..bf7e8dc5 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/TokenPersistenceIntegrationTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Tests/OpenIddict/TokenPersistenceIntegrationTests.cs
@@ -5,6 +5,7 @@ using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Extensions.Time.Testing;
using MongoDB.Driver;
+using MongoDB.Bson;
using OpenIddict.Abstractions;
using OpenIddict.Server;
using StellaOps.Authority;
@@ -56,10 +57,10 @@ public sealed class TokenPersistenceIntegrationTests
withClientProvisioning: true,
clientDescriptor: TestHelpers.CreateDescriptor(clientDocument));
- var validateHandler = new ValidateClientCredentialsHandler(clientStore, registry, TestActivitySource, NullLogger.Instance);
var authSink = new TestAuthEventSink();
var metadataAccessor = new TestRateLimiterMetadataAccessor();
- var handleHandler = new HandleClientCredentialsHandler(registry, TestActivitySource, authSink, metadataAccessor, clock, NullLogger.Instance);
+ var validateHandler = new ValidateClientCredentialsHandler(clientStore, registry, TestActivitySource, authSink, metadataAccessor, clock, NullLogger.Instance);
+ var handleHandler = new HandleClientCredentialsHandler(registry, tokenStore, clock, TestActivitySource, NullLogger.Instance);
var persistHandler = new PersistTokensHandler(tokenStore, clock, TestActivitySource, NullLogger.Instance);
var transaction = TestHelpers.CreateTokenTransaction(clientDocument.ClientId, "s3cr3t!", scope: "jobs:trigger");
@@ -148,10 +149,14 @@ public sealed class TokenPersistenceIntegrationTests
var revokedAt = now.AddMinutes(1);
await tokenStore.UpdateStatusAsync(revokedTokenId, "revoked", revokedAt, "manual", null, null, CancellationToken.None);
+ var metadataAccessor = new TestRateLimiterMetadataAccessor();
+ var auditSink = new TestAuthEventSink();
var handler = new ValidateAccessTokenHandler(
tokenStore,
clientStore,
registry,
+ metadataAccessor,
+ auditSink,
clock,
TestActivitySource,
NullLogger.Instance);
@@ -190,6 +195,60 @@ public sealed class TokenPersistenceIntegrationTests
Assert.Equal("manual", stored.RevokedReason);
}
+ [Fact]
+ public async Task RecordUsageAsync_FlagsSuspectedReplay_OnNewDeviceFingerprint()
+ {
+ await ResetCollectionsAsync();
+
+ var issuedAt = new DateTimeOffset(2025, 10, 14, 8, 0, 0, TimeSpan.Zero);
+ var clock = new FakeTimeProvider(issuedAt);
+
+ await using var provider = await BuildMongoProviderAsync(clock);
+
+ var tokenStore = provider.GetRequiredService<IAuthorityTokenStore>();
+
+ var tokenDocument = new AuthorityTokenDocument
+ {
+ TokenId = "token-replay",
+ Type = OpenIddictConstants.TokenTypeHints.AccessToken,
+ ClientId = "client-1",
+ Status = "valid",
+ CreatedAt = issuedAt,
+ Devices = new List<BsonDocument>
+ {
+ new BsonDocument
+ {
+ { "remoteAddress", "10.0.0.1" },
+ { "userAgent", "agent/1.0" },
+ { "firstSeen", BsonDateTime.Create(issuedAt.AddMinutes(-10).UtcDateTime) },
+ { "lastSeen", BsonDateTime.Create(issuedAt.AddMinutes(-5).UtcDateTime) },
+ { "useCount", 2 }
+ }
+ }
+ };
+
+ await tokenStore.InsertAsync(tokenDocument, CancellationToken.None);
+
+ var result = await tokenStore.RecordUsageAsync(
+ "token-replay",
+ remoteAddress: "10.0.0.2",
+ userAgent: "agent/2.0",
+ observedAt: clock.GetUtcNow(),
+ CancellationToken.None);
+
+ Assert.Equal(TokenUsageUpdateStatus.SuspectedReplay, result.Status);
+
+ var stored = await tokenStore.FindByTokenIdAsync("token-replay", CancellationToken.None);
+ Assert.NotNull(stored);
+ Assert.Equal(2, stored!.Devices?.Count);
+ Assert.Contains(stored.Devices!, doc =>
+ {
+ var remote = doc.TryGetValue("remoteAddress", out var ra) && ra.IsString ? ra.AsString : null;
+ var agentValue = doc.TryGetValue("userAgent", out var ua) && ua.IsString ? ua.AsString : null;
+ return remote == "10.0.0.2" && agentValue == "agent/2.0";
+ });
+ }
+
private async Task ResetCollectionsAsync()
{
var tokens = fixture.Database.GetCollection(AuthorityMongoDefaults.Collections.Tokens);
@@ -220,27 +279,3 @@ public sealed class TokenPersistenceIntegrationTests
return provider;
}
}
-
-internal sealed class TestAuthEventSink : IAuthEventSink
-{
- public List<AuthEventRecord> Records { get; } = new();
-
- public ValueTask WriteAsync(AuthEventRecord record, CancellationToken cancellationToken)
- {
- Records.Add(record);
- return ValueTask.CompletedTask;
- }
-}
-
-internal sealed class TestRateLimiterMetadataAccessor : IAuthorityRateLimiterMetadataAccessor
-{
- private readonly AuthorityRateLimiterMetadata metadata = new();
-
- public AuthorityRateLimiterMetadata? GetMetadata() => metadata;
-
- public void SetClientId(string? clientId) => metadata.ClientId = string.IsNullOrWhiteSpace(clientId) ? null : clientId;
-
- public void SetSubjectId(string? subjectId) => metadata.SubjectId = string.IsNullOrWhiteSpace(subjectId) ? null : subjectId;
-
- public void SetTag(string name, string? value) => metadata.SetTag(name, value);
-}
diff --git a/src/StellaOps.Authority/StellaOps.Authority.Tests/RateLimiting/AuthorityRateLimiterMetadataMiddlewareTests.cs b/src/StellaOps.Authority/StellaOps.Authority.Tests/RateLimiting/AuthorityRateLimiterMetadataMiddlewareTests.cs
index 0a072a06..df687867 100644
--- a/src/StellaOps.Authority/StellaOps.Authority.Tests/RateLimiting/AuthorityRateLimiterMetadataMiddlewareTests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority.Tests/RateLimiting/AuthorityRateLimiterMetadataMiddlewareTests.cs
@@ -76,6 +76,7 @@ public class AuthorityRateLimiterMetadataMiddlewareTests
context.Request.Path = "/token";
context.Request.Method = HttpMethods.Post;
context.Request.Headers["X-Forwarded-For"] = "203.0.113.99";
+ context.Request.Headers.UserAgent = "StellaOps-Client/1.2";
var middleware = CreateMiddleware();
await middleware.InvokeAsync(context);
@@ -84,6 +85,9 @@ public class AuthorityRateLimiterMetadataMiddlewareTests
Assert.NotNull(metadata);
Assert.Equal("203.0.113.99", metadata!.RemoteIp);
Assert.Equal("203.0.113.99", metadata.ForwardedFor);
+ Assert.Equal("StellaOps-Client/1.2", metadata.UserAgent);
+ Assert.True(metadata.Tags.TryGetValue("authority.user_agent", out var tagValue));
+ Assert.Equal("StellaOps-Client/1.2", tagValue);
}
private static AuthorityRateLimiterMetadataMiddleware CreateMiddleware()
diff --git a/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapInviteCleanupService.cs b/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapInviteCleanupService.cs
new file mode 100644
index 00000000..927aca24
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapInviteCleanupService.cs
@@ -0,0 +1,106 @@
+using System;
+using System.Collections.Generic;
+using System.Globalization;
+using Microsoft.Extensions.Hosting;
+using Microsoft.Extensions.Logging;
+using StellaOps.Authority.Storage.Mongo.Stores;
+using StellaOps.Authority.Storage.Mongo.Documents;
+using StellaOps.Cryptography.Audit;
+
+namespace StellaOps.Authority.Bootstrap;
+
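+/// <summary>
+/// Background sweeper: every five minutes it expires unconsumed bootstrap invites and writes an
+/// "authority.bootstrap.invite.expired" audit record for each one.
+/// </summary>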
+internal sealed class BootstrapInviteCleanupService : BackgroundService
+{
+ private readonly IAuthorityBootstrapInviteStore inviteStore;
+ private readonly IAuthEventSink auditSink;
+ private readonly TimeProvider timeProvider;
+ private readonly ILogger logger;
+ private readonly TimeSpan interval;
+
+ public BootstrapInviteCleanupService(
+ IAuthorityBootstrapInviteStore inviteStore,
+ IAuthEventSink auditSink,
+ TimeProvider timeProvider,
+ ILogger logger)
+ {
+ this.inviteStore = inviteStore ?? throw new ArgumentNullException(nameof(inviteStore));
+ this.auditSink = auditSink ?? throw new ArgumentNullException(nameof(auditSink));
+ this.timeProvider = timeProvider ?? throw new ArgumentNullException(nameof(timeProvider));
+ this.logger = logger ?? throw new ArgumentNullException(nameof(logger));
+ interval = TimeSpan.FromMinutes(5);
+ }
+
+ protected override async Task ExecuteAsync(CancellationToken stoppingToken)
+ {
+ var timer = new PeriodicTimer(interval);
+
+ try
+ {
+ while (await timer.WaitForNextTickAsync(stoppingToken).ConfigureAwait(false))
+ {
+ await SweepExpiredInvitesAsync(stoppingToken).ConfigureAwait(false);
+ }
+ }
+ catch (OperationCanceledException)
+ {
+ // Shutdown requested.
+ }
+ finally
+ {
+ timer.Dispose();
+ }
+ }
+
+ internal async Task SweepExpiredInvitesAsync(CancellationToken cancellationToken)
+ {
+ var now = timeProvider.GetUtcNow();
+ var expired = await inviteStore.ExpireAsync(now, cancellationToken).ConfigureAwait(false);
+ if (expired.Count == 0)
+ {
+ return;
+ }
+
+ logger.LogInformation("Expired {Count} bootstrap invite(s).", expired.Count);
+
+ foreach (var invite in expired)
+ {
+ var record = new AuthEventRecord
+ {
+ EventType = "authority.bootstrap.invite.expired",
+ OccurredAt = now,
+ CorrelationId = Guid.NewGuid().ToString("N"),
+ Outcome = AuthEventOutcome.Success,
+ Reason = "Invite expired before consumption.",
+ Subject = null,
+ Client = null,
+ Scopes = Array.Empty<string>(),
+ Network = null,
+ Properties = BuildInviteProperties(invite)
+ };
+
+ await auditSink.WriteAsync(record, cancellationToken).ConfigureAwait(false);
+ }
+ }
+
+ private static AuthEventProperty[] BuildInviteProperties(AuthorityBootstrapInviteDocument invite)
+ {
+ var properties = new List<AuthEventProperty>
+ {
+ new() { Name = "invite.token", Value = ClassifiedString.Public(invite.Token) },
+ new() { Name = "invite.type", Value = ClassifiedString.Public(invite.Type) },
+ new() { Name = "invite.expires_at", Value = ClassifiedString.Public(invite.ExpiresAt.ToString("O", CultureInfo.InvariantCulture)) }
+ };
+
+ if (!string.IsNullOrWhiteSpace(invite.Provider))
+ {
+ properties.Add(new AuthEventProperty { Name = "invite.provider", Value = ClassifiedString.Public(invite.Provider) });
+ }
+
+ if (!string.IsNullOrWhiteSpace(invite.Target))
+ {
+ properties.Add(new AuthEventProperty { Name = "invite.target", Value = ClassifiedString.Public(invite.Target) });
+ }
+
+ return properties.ToArray();
+ }
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapRequests.cs b/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapRequests.cs
index f5c31955..a6524532 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapRequests.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/Bootstrap/BootstrapRequests.cs
@@ -6,6 +6,8 @@ internal sealed record BootstrapUserRequest
{
public string? Provider { get; init; }
+ public string? InviteToken { get; init; }
+
[Required]
public string Username { get; init; } = string.Empty;
@@ -27,6 +29,8 @@ internal sealed record BootstrapClientRequest
{
public string? Provider { get; init; }
+ public string? InviteToken { get; init; }
+
[Required]
public string ClientId { get; init; } = string.Empty;
@@ -46,3 +50,26 @@ internal sealed record BootstrapClientRequest
public IReadOnlyDictionary<string, string?>? Properties { get; init; }
}
+
+internal sealed record BootstrapInviteRequest
+{
+ public string Type { get; init; } = BootstrapInviteTypes.User;
+
+ public string? Token { get; init; }
+
+ public string? Provider { get; init; }
+
+ public string? Target { get; init; }
+
+ public DateTimeOffset? ExpiresAt { get; init; }
+
+ public string? IssuedBy { get; init; }
+
+ public IReadOnlyDictionary<string, string?>? Metadata { get; init; }
+}
+
+internal static class BootstrapInviteTypes
+{
+ public const string User = "user";
+ public const string Client = "client";
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsAuditHelper.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsAuditHelper.cs
new file mode 100644
index 00000000..e2e01f70
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsAuditHelper.cs
@@ -0,0 +1,252 @@
+using System;
+using System.Collections.Generic;
+using System.Diagnostics;
+using System.Globalization;
+using System.Linq;
+using OpenIddict.Abstractions;
+using OpenIddict.Server;
+using StellaOps.Authority.RateLimiting;
+using StellaOps.Cryptography.Audit;
+
+namespace StellaOps.Authority.OpenIddict.Handlers;
+
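+/// <summary>
+/// Shapes client-credentials audit records: reuses one correlation id per transaction, classifies
+/// network and client identifiers as personal data, and emits deduplicated, ordinally sorted scopes.
+/// </summary>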
+internal static class ClientCredentialsAuditHelper
+{
+ internal static string EnsureCorrelationId(OpenIddictServerTransaction transaction)
+ {
+ ArgumentNullException.ThrowIfNull(transaction);
+
+ if (transaction.Properties.TryGetValue(AuthorityOpenIddictConstants.AuditCorrelationProperty, out var value) &&
+ value is string existing &&
+ !string.IsNullOrWhiteSpace(existing))
+ {
+ return existing;
+ }
+
+ var correlation = Activity.Current?.TraceId.ToString() ??
+ Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture);
+
+ transaction.Properties[AuthorityOpenIddictConstants.AuditCorrelationProperty] = correlation;
+ return correlation;
+ }
+
+ internal static AuthEventRecord CreateRecord(
+ TimeProvider timeProvider,
+ OpenIddictServerTransaction transaction,
+ AuthorityRateLimiterMetadata? metadata,
+ string? clientSecret,
+ AuthEventOutcome outcome,
+ string? reason,
+ string? clientId,
+ string? providerName,
+ bool? confidential,
+ IReadOnlyList<string> requestedScopes,
+ IReadOnlyList<string> grantedScopes,
+ string? invalidScope,
+ IEnumerable<AuthEventProperty>? extraProperties = null,
+ string? eventType = null)
+ {
+ ArgumentNullException.ThrowIfNull(timeProvider);
+ ArgumentNullException.ThrowIfNull(transaction);
+
+ var correlationId = EnsureCorrelationId(transaction);
+ var client = BuildClient(clientId, providerName);
+ var network = BuildNetwork(metadata);
+ var normalizedGranted = NormalizeScopes(grantedScopes);
+ var properties = BuildProperties(confidential, requestedScopes, invalidScope, extraProperties);
+
+ return new AuthEventRecord
+ {
+ EventType = string.IsNullOrWhiteSpace(eventType) ? "authority.client_credentials.grant" : eventType,
+ OccurredAt = timeProvider.GetUtcNow(),
+ CorrelationId = correlationId,
+ Outcome = outcome,
+ Reason = Normalize(reason),
+ Subject = null,
+ Client = client,
+ Scopes = normalizedGranted,
+ Network = network,
+ Properties = properties
+ };
+ }
+
+ internal static AuthEventRecord CreateTamperRecord(
+ TimeProvider timeProvider,
+ OpenIddictServerTransaction transaction,
+ AuthorityRateLimiterMetadata? metadata,
+ string? clientId,
+ string? providerName,
+ bool? confidential,
+ IEnumerable<string> unexpectedParameters)
+ {
+ var properties = new List<AuthEventProperty>
+ {
+ new()
+ {
+ Name = "request.tampered",
+ Value = ClassifiedString.Public("true")
+ }
+ };
+
+ if (confidential.HasValue)
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "client.confidential",
+ Value = ClassifiedString.Public(confidential.Value ? "true" : "false")
+ });
+ }
+
+ if (unexpectedParameters is not null)
+ {
+ foreach (var parameter in unexpectedParameters)
+ {
+ if (string.IsNullOrWhiteSpace(parameter))
+ {
+ continue;
+ }
+
+ properties.Add(new AuthEventProperty
+ {
+ Name = "request.unexpected_parameter",
+ Value = ClassifiedString.Public(parameter)
+ });
+ }
+ }
+
+ var reason = unexpectedParameters is null
+ ? "Unexpected parameters supplied to client credentials request."
+ : $"Unexpected parameters supplied to client credentials request: {string.Join(", ", unexpectedParameters)}.";
+
+ return CreateRecord(
+ timeProvider,
+ transaction,
+ metadata,
+ clientSecret: null,
+ outcome: AuthEventOutcome.Failure,
+ reason: reason,
+ clientId: clientId,
+ providerName: providerName,
+ confidential: confidential,
+ requestedScopes: Array.Empty<string>(),
+ grantedScopes: Array.Empty<string>(),
+ invalidScope: null,
+ extraProperties: properties,
+ eventType: "authority.token.tamper");
+ }
+
+ private static AuthEventClient? BuildClient(string? clientId, string? providerName)
+ {
+ if (string.IsNullOrWhiteSpace(clientId) && string.IsNullOrWhiteSpace(providerName))
+ {
+ return null;
+ }
+
+ return new AuthEventClient
+ {
+ ClientId = ClassifiedString.Personal(Normalize(clientId)),
+ Name = ClassifiedString.Empty,
+ Provider = ClassifiedString.Public(Normalize(providerName))
+ };
+ }
+
+ private static AuthEventNetwork? BuildNetwork(AuthorityRateLimiterMetadata? metadata)
+ {
+ var remote = Normalize(metadata?.RemoteIp);
+ var forwarded = Normalize(metadata?.ForwardedFor);
+ var userAgent = Normalize(metadata?.UserAgent);
+
+ if (string.IsNullOrWhiteSpace(remote) && string.IsNullOrWhiteSpace(forwarded) && string.IsNullOrWhiteSpace(userAgent))
+ {
+ return null;
+ }
+
+ return new AuthEventNetwork
+ {
+ RemoteAddress = ClassifiedString.Personal(remote),
+ ForwardedFor = ClassifiedString.Personal(forwarded),
+ UserAgent = ClassifiedString.Personal(userAgent)
+ };
+ }
+
+ private static IReadOnlyList<AuthEventProperty> BuildProperties(
+ bool? confidential,
+ IReadOnlyList<string> requestedScopes,
+ string? invalidScope,
+ IEnumerable<AuthEventProperty>? extraProperties)
+ {
+ var properties = new List<AuthEventProperty>();
+
+ if (confidential.HasValue)
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "client.confidential",
+ Value = ClassifiedString.Public(confidential.Value ? "true" : "false")
+ });
+ }
+
+ var normalizedRequested = NormalizeScopes(requestedScopes);
+ if (normalizedRequested is { Count: > 0 })
+ {
+ foreach (var scope in normalizedRequested)
+ {
+ if (string.IsNullOrWhiteSpace(scope))
+ {
+ continue;
+ }
+
+ properties.Add(new AuthEventProperty
+ {
+ Name = "scope.requested",
+ Value = ClassifiedString.Public(scope)
+ });
+ }
+ }
+
+ if (!string.IsNullOrWhiteSpace(invalidScope))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "scope.invalid",
+ Value = ClassifiedString.Public(invalidScope)
+ });
+ }
+
+ if (extraProperties is not null)
+ {
+ foreach (var property in extraProperties)
+ {
+ if (property is null || string.IsNullOrWhiteSpace(property.Name))
+ {
+ continue;
+ }
+
+ properties.Add(property);
+ }
+ }
+
+ return properties.Count == 0 ? Array.Empty<AuthEventProperty>() : properties;
+ }
+
+ private static IReadOnlyList<string> NormalizeScopes(IReadOnlyList<string>? scopes)
+ {
+ if (scopes is null || scopes.Count == 0)
+ {
+ return Array.Empty<string>();
+ }
+
+ var normalized = scopes
+ .Where(static scope => !string.IsNullOrWhiteSpace(scope))
+ .Select(static scope => scope.Trim())
+ .Where(static scope => scope.Length > 0)
+ .Distinct(StringComparer.Ordinal)
+ .OrderBy(static scope => scope, StringComparer.Ordinal)
+ .ToArray();
+
+ return normalized.Length == 0 ? Array.Empty<string>() : normalized;
+ }
+
+ private static string? Normalize(string? value)
+ => string.IsNullOrWhiteSpace(value) ? null : value.Trim();
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsHandlers.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsHandlers.cs
index b404f6e9..8ee2ac5c 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsHandlers.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/ClientCredentialsHandlers.cs
@@ -76,6 +76,22 @@ internal sealed class ValidateClientCredentialsHandler : IOpenIddictServerHandle
var requestedScopes = requestedScopeInput.IsDefaultOrEmpty ? Array.Empty<string>() : requestedScopeInput.ToArray();
context.Transaction.Properties[AuthorityOpenIddictConstants.AuditRequestedScopesProperty] = requestedScopes;
+ var unexpectedParameters = TokenRequestTamperInspector.GetUnexpectedClientCredentialsParameters(context.Request);
+ if (unexpectedParameters.Count > 0)
+ {
+ var providerHint = context.Request.GetParameter(AuthorityOpenIddictConstants.ProviderParameterName)?.Value?.ToString();
+ var tamperRecord = ClientCredentialsAuditHelper.CreateTamperRecord(
+ timeProvider,
+ context.Transaction,
+ metadata,
+ clientId,
+ providerHint,
+ confidential: null,
+ unexpectedParameters);
+
+ await auditSink.WriteAsync(tamperRecord, context.CancellationToken).ConfigureAwait(false);
+ }
+
try
{
if (string.IsNullOrWhiteSpace(context.ClientId))
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/PasswordGrantHandlers.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/PasswordGrantHandlers.cs
index 2edeb153..85b5e5f3 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/PasswordGrantHandlers.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/PasswordGrantHandlers.cs
@@ -68,6 +68,23 @@ internal sealed class ValidatePasswordGrantHandler : IOpenIddictServerHandler<OpenIddictServerEvents.ValidateTokenRequestContext>
var requestedScopes = requestedScopesInput.IsDefaultOrEmpty ? Array.Empty<string>() : requestedScopesInput.ToArray();
+ var unexpectedParameters = TokenRequestTamperInspector.GetUnexpectedPasswordGrantParameters(context.Request);
+ if (unexpectedParameters.Count > 0)
+ {
+ var providerHint = context.Request.GetParameter(AuthorityOpenIddictConstants.ProviderParameterName)?.Value?.ToString();
+ var tamperRecord = PasswordGrantAuditHelper.CreateTamperRecord(
+ timeProvider,
+ context.Transaction,
+ metadata,
+ clientId,
+ providerHint,
+ context.Request.Username,
+ requestedScopes,
+ unexpectedParameters);
+
+ await auditSink.WriteAsync(tamperRecord, context.CancellationToken).ConfigureAwait(false);
+ }
+
var selection = AuthorityIdentityProviderSelector.ResolvePasswordProvider(context.Request, registry);
if (!selection.Succeeded)
{
@@ -75,7 +92,6 @@ internal sealed class ValidatePasswordGrantHandler : IOpenIddictServerHandler<OpenIddictServerEvents.ValidateTokenRequestContext>
IEnumerable<string>? scopes,
TimeSpan? retryAfter,
AuthorityCredentialFailureCode? failureCode,
- IEnumerable<AuthEventProperty>? extraProperties)
+ IEnumerable<AuthEventProperty>? extraProperties,
+ string? eventType = null)
{
ArgumentNullException.ThrowIfNull(timeProvider);
ArgumentNullException.ThrowIfNull(transaction);
@@ -409,7 +424,7 @@ internal static class PasswordGrantAuditHelper
return new AuthEventRecord
{
- EventType = "authority.password.grant",
+ EventType = string.IsNullOrWhiteSpace(eventType) ? "authority.password.grant" : eventType,
OccurredAt = timeProvider.GetUtcNow(),
CorrelationId = correlationId,
Outcome = outcome,
@@ -581,4 +596,61 @@ internal static class PasswordGrantAuditHelper
private static string? Normalize(string? value)
=> string.IsNullOrWhiteSpace(value) ? null : value.Trim();
+
+ internal static AuthEventRecord CreateTamperRecord(
+ TimeProvider timeProvider,
+ OpenIddictServerTransaction transaction,
+ AuthorityRateLimiterMetadata? metadata,
+ string? clientId,
+ string? providerName,
+ string? username,
+ IEnumerable<string>? scopes,
+ IEnumerable<string> unexpectedParameters)
+ {
+ var properties = new List<AuthEventProperty>
+ {
+ new()
+ {
+ Name = "request.tampered",
+ Value = ClassifiedString.Public("true")
+ }
+ };
+
+ if (unexpectedParameters is not null)
+ {
+ foreach (var parameter in unexpectedParameters)
+ {
+ if (string.IsNullOrWhiteSpace(parameter))
+ {
+ continue;
+ }
+
+ properties.Add(new AuthEventProperty
+ {
+ Name = "request.unexpected_parameter",
+ Value = ClassifiedString.Public(parameter)
+ });
+ }
+ }
+
+ var reason = unexpectedParameters is null
+ ? "Unexpected parameters supplied to password grant request."
+ : $"Unexpected parameters supplied to password grant request: {string.Join(", ", unexpectedParameters)}.";
+
+ return CreatePasswordGrantRecord(
+ timeProvider,
+ transaction,
+ metadata,
+ AuthEventOutcome.Failure,
+ reason,
+ clientId,
+ providerName,
+ user: null,
+ username,
+ scopes,
+ retryAfter: null,
+ failureCode: null,
+ extraProperties: properties,
+ eventType: "authority.token.tamper");
+ }
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/RevocationHandlers.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/RevocationHandlers.cs
index 3da96ac3..95efe539 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/RevocationHandlers.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/RevocationHandlers.cs
@@ -111,14 +111,26 @@ internal sealed class HandleRevocationRequestHandler : IOpenIddictServerHandler<
private static byte[] Base64UrlDecode(string value)
{
- var padded = value.Length % 4 switch
+ if (string.IsNullOrWhiteSpace(value))
{
- 2 => value + "==",
- 3 => value + "=",
- _ => value
- };
+ return Array.Empty<byte>();
+ }
- padded = padded.Replace('-', '+').Replace('_', '/');
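+ // Base64url strips '=' padding; restore it from length % 4 before handing off to Convert.FromBase64String.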
+ var remainder = value.Length % 4;
+ if (remainder == 2)
+ {
+ value += "==";
+ }
+ else if (remainder == 3)
+ {
+ value += "=";
+ }
+ else if (remainder != 0)
+ {
+ value += new string('=', 4 - remainder);
+ }
+
+ var padded = value.Replace('-', '+').Replace('_', '/');
return Convert.FromBase64String(padded);
}
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/TokenPersistenceHandlers.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/TokenPersistenceHandlers.cs
index 7f126ca4..ee161b56 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/TokenPersistenceHandlers.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/Handlers/TokenPersistenceHandlers.cs
@@ -119,7 +119,7 @@ internal sealed class PersistTokensHandler : IOpenIddictServerHandler logger;
@@ -25,6 +32,8 @@ internal sealed class ValidateAccessTokenHandler : IOpenIddictServerHandler logger)
@@ -32,6 +41,8 @@ internal sealed class ValidateAccessTokenHandler : IOpenIddictServerHandler<OpenIddictServerEvents.ValidateTokenContext>
+ var properties = new List<AuthEventProperty>
+ {
+ new() { Name = "token.id", Value = ClassifiedString.Sensitive(tokenDocument.TokenId) },
+ new() { Name = "token.type", Value = ClassifiedString.Public(tokenDocument.Type) },
+ new() { Name = "token.devices.total", Value = ClassifiedString.Public((previousCount + 1).ToString(CultureInfo.InvariantCulture)) }
+ };
+
+ if (!string.IsNullOrWhiteSpace(tokenDocument.ClientId))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "token.client_id",
+ Value = ClassifiedString.Personal(tokenDocument.ClientId)
+ });
+ }
+
+ logger.LogWarning("Detected suspected token replay for token {TokenId} (client {ClientId}).", tokenDocument.TokenId, clientId ?? "");
+
+ var record = new AuthEventRecord
+ {
+ EventType = "authority.token.replay.suspected",
+ OccurredAt = observedAt,
+ CorrelationId = Activity.Current?.TraceId.ToString() ?? Guid.NewGuid().ToString("N"),
+ Outcome = AuthEventOutcome.Error,
+ Reason = "Token observed from a new device fingerprint.",
+ Subject = subject,
+ Client = client,
+ Scopes = Array.Empty<string>(),
+ Network = network,
+ Properties = properties
+ };
+
+ await auditSink.WriteAsync(record, cancellationToken).ConfigureAwait(false);
+ }
}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/TokenRequestTamperInspector.cs b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/TokenRequestTamperInspector.cs
new file mode 100644
index 00000000..d17e23ef
--- /dev/null
+++ b/src/StellaOps.Authority/StellaOps.Authority/OpenIddict/TokenRequestTamperInspector.cs
@@ -0,0 +1,112 @@
+using System.Collections.Generic;
+using System.Linq;
+using OpenIddict.Abstractions;
+
+namespace StellaOps.Authority.OpenIddict;
+
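+/// <summary>
+/// Detects token-request parameters outside the known OAuth/OpenIddict set. Names prefixed with
+/// "ext_", "x-", or "custom_" (or containing ':') are treated as deliberate extensions, not tampering.
+/// </summary>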
+internal static class TokenRequestTamperInspector
+{
+ private static readonly HashSet<string> CommonParameters = new(StringComparer.OrdinalIgnoreCase)
+ {
+ OpenIddictConstants.Parameters.GrantType,
+ OpenIddictConstants.Parameters.Scope,
+ OpenIddictConstants.Parameters.Resource,
+ OpenIddictConstants.Parameters.ClientId,
+ OpenIddictConstants.Parameters.ClientSecret,
+ OpenIddictConstants.Parameters.ClientAssertion,
+ OpenIddictConstants.Parameters.ClientAssertionType,
+ OpenIddictConstants.Parameters.RefreshToken,
+ OpenIddictConstants.Parameters.DeviceCode,
+ OpenIddictConstants.Parameters.Code,
+ OpenIddictConstants.Parameters.CodeVerifier,
+ OpenIddictConstants.Parameters.CodeChallenge,
+ OpenIddictConstants.Parameters.CodeChallengeMethod,
+ OpenIddictConstants.Parameters.RedirectUri,
+ OpenIddictConstants.Parameters.Assertion,
+ OpenIddictConstants.Parameters.Nonce,
+ OpenIddictConstants.Parameters.Prompt,
+ OpenIddictConstants.Parameters.MaxAge,
+ OpenIddictConstants.Parameters.UiLocales,
+ OpenIddictConstants.Parameters.AcrValues,
+ OpenIddictConstants.Parameters.LoginHint,
+ OpenIddictConstants.Parameters.Claims,
+ OpenIddictConstants.Parameters.Token,
+ OpenIddictConstants.Parameters.TokenTypeHint,
+ OpenIddictConstants.Parameters.AccessToken,
+ OpenIddictConstants.Parameters.IdToken
+ };
+
+ private static readonly HashSet<string> PasswordGrantParameters = new(StringComparer.OrdinalIgnoreCase)
+ {
+ OpenIddictConstants.Parameters.Username,
+ OpenIddictConstants.Parameters.Password,
+ AuthorityOpenIddictConstants.ProviderParameterName
+ };
+
+ private static readonly HashSet<string> ClientCredentialsParameters = new(StringComparer.OrdinalIgnoreCase)
+ {
+ AuthorityOpenIddictConstants.ProviderParameterName
+ };
+
+ internal static IReadOnlyList<string> GetUnexpectedPasswordGrantParameters(OpenIddictRequest request)
+ => DetectUnexpectedParameters(request, PasswordGrantParameters);
+
+ internal static IReadOnlyList<string> GetUnexpectedClientCredentialsParameters(OpenIddictRequest request)
+ => DetectUnexpectedParameters(request, ClientCredentialsParameters);
+
+ private static IReadOnlyList<string> DetectUnexpectedParameters(
+ OpenIddictRequest request,
+ HashSet<string> grantSpecific)
+ {
+ if (request is null)
+ {
+ return Array.Empty<string>();
+ }
+
+ var unexpected = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
+
+ foreach (var pair in request.GetParameters())
+ {
+ var name = pair.Key;
+ if (string.IsNullOrWhiteSpace(name))
+ {
+ continue;
+ }
+
+ if (IsAllowed(name, grantSpecific))
+ {
+ continue;
+ }
+
+ unexpected.Add(name);
+ }
+
+ return unexpected.Count == 0
+ ? Array.Empty<string>()
+ : unexpected
+ .OrderBy(static value => value, StringComparer.OrdinalIgnoreCase)
+ .ToArray();
+ }
+
+ private static bool IsAllowed(string parameterName, HashSet<string> grantSpecific)
+ {
+ if (CommonParameters.Contains(parameterName) || grantSpecific.Contains(parameterName))
+ {
+ return true;
+ }
+
+ if (parameterName.StartsWith("ext_", StringComparison.OrdinalIgnoreCase) ||
+ parameterName.StartsWith("x-", StringComparison.OrdinalIgnoreCase) ||
+ parameterName.StartsWith("custom_", StringComparison.OrdinalIgnoreCase))
+ {
+ return true;
+ }
+
+ if (parameterName.Contains(':', StringComparison.Ordinal))
+ {
+ return true;
+ }
+
+ return false;
+ }
+}
diff --git a/src/StellaOps.Authority/StellaOps.Authority/Program.cs b/src/StellaOps.Authority/StellaOps.Authority/Program.cs
index 050ec056..8eb43fbd 100644
--- a/src/StellaOps.Authority/StellaOps.Authority/Program.cs
+++ b/src/StellaOps.Authority/StellaOps.Authority/Program.cs
@@ -24,6 +24,7 @@ using StellaOps.Authority.Plugins;
using StellaOps.Authority.Bootstrap;
using StellaOps.Authority.Storage.Mongo.Extensions;
using StellaOps.Authority.Storage.Mongo.Initialization;
+using StellaOps.Authority.Storage.Mongo.Stores;
using StellaOps.Authority.RateLimiting;
using StellaOps.Configuration;
using StellaOps.Plugin.DependencyInjection;
@@ -35,6 +36,7 @@ using StellaOps.Cryptography.DependencyInjection;
using StellaOps.Authority.Revocation;
using StellaOps.Authority.Signing;
using StellaOps.Cryptography;
+using StellaOps.Authority.Storage.Mongo.Documents;
var builder = WebApplication.CreateBuilder(args);
@@ -124,6 +126,7 @@ builder.Services.AddSingleton();
builder.Services.AddSingleton();
builder.Services.AddSingleton();
builder.Services.AddSingleton();
+builder.Services.AddHostedService<BootstrapInviteCleanupService>();
var pluginRegistrationSummary = AuthorityPluginLoader.RegisterPlugins(
builder.Services,
@@ -281,38 +284,98 @@ if (authorityOptions.Bootstrap.Enabled)
HttpContext httpContext,
BootstrapUserRequest request,
IAuthorityIdentityProviderRegistry registry,
+ IAuthorityBootstrapInviteStore inviteStore,
IAuthEventSink auditSink,
TimeProvider timeProvider,
CancellationToken cancellationToken) =>
{
if (request is null)
{
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Request payload is required.", null, null, null, Array.Empty<string>()).ConfigureAwait(false);
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Request payload is required.", null, null, null, Array.Empty<string>(), null).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_request", message = "Request payload is required." });
}
+ var now = timeProvider.GetUtcNow();
+ var inviteToken = string.IsNullOrWhiteSpace(request.InviteToken) ? null : request.InviteToken.Trim();
+ AuthorityBootstrapInviteDocument? invite = null;
+ var inviteReserved = false;
+
+ async Task ReleaseInviteAsync(string reason)
+ {
+ if (inviteToken is null)
+ {
+ return;
+ }
+
+ if (inviteReserved)
+ {
+ await inviteStore.ReleaseAsync(inviteToken, cancellationToken).ConfigureAwait(false);
+ }
+
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, reason, invite, inviteToken).ConfigureAwait(false);
+ }
+
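+ // Invite lifecycle: reserve before provisioning, mark consumed on success, release and audit on failure.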
+ if (inviteToken is not null)
+ {
+ var reservation = await inviteStore.TryReserveAsync(inviteToken, BootstrapInviteTypes.User, now, request.Username, cancellationToken).ConfigureAwait(false);
+
+ switch (reservation.Status)
+ {
+ case BootstrapInviteReservationStatus.Reserved:
+ inviteReserved = true;
+ invite = reservation.Invite;
+ break;
+ case BootstrapInviteReservationStatus.Expired:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.expired", AuthEventOutcome.Failure, "Invite expired before use.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invite_expired", message = "Invite has expired." });
+ case BootstrapInviteReservationStatus.AlreadyUsed:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, "Invite token already consumed.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invite_used", message = "Invite token has already been used." });
+ default:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, "Invite token not found.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invalid_invite", message = "Invite token is invalid." });
+ }
+ }
+
var providerName = string.IsNullOrWhiteSpace(request.Provider)
- ? authorityOptions.Bootstrap.DefaultIdentityProvider
+ ? invite?.Provider ?? authorityOptions.Bootstrap.DefaultIdentityProvider
: request.Provider;
+ if (invite is not null && !string.IsNullOrWhiteSpace(invite.Provider) &&
+ !string.Equals(invite.Provider, providerName, StringComparison.OrdinalIgnoreCase))
+ {
+ await ReleaseInviteAsync("Invite provider does not match requested provider.");
+ return Results.BadRequest(new { error = "invite_provider_mismatch", message = "Invite is limited to a different identity provider." });
+ }
+
if (string.IsNullOrWhiteSpace(providerName) || !registry.TryGet(providerName!, out var provider))
{
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Specified identity provider was not found.", null, request.Username, providerName, request.Roles ?? Array.Empty<string>()).ConfigureAwait(false);
+ await ReleaseInviteAsync("Specified identity provider was not found.");
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Specified identity provider was not found.", null, request.Username, providerName, request.Roles ?? Array.Empty<string>(), inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_provider", message = "Specified identity provider was not found." });
}
if (!provider.Capabilities.SupportsPassword)
{
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Selected provider does not support password provisioning.", null, request.Username, provider.Name, request.Roles ?? Array.Empty<string>()).ConfigureAwait(false);
+ await ReleaseInviteAsync("Selected provider does not support password provisioning.");
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Selected provider does not support password provisioning.", null, request.Username, provider.Name, request.Roles ?? Array.Empty<string>(), inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "unsupported_provider", message = "Selected provider does not support password provisioning." });
}
if (string.IsNullOrWhiteSpace(request.Username) || string.IsNullOrEmpty(request.Password))
{
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Username and password are required.", null, request.Username, provider.Name, request.Roles ?? Array.Empty<string>()).ConfigureAwait(false);
+ await ReleaseInviteAsync("Username and password are required.");
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, "Username and password are required.", null, request.Username, provider.Name, request.Roles ?? Array.Empty<string>(), inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_request", message = "Username and password are required." });
}
+ if (invite is not null && !string.IsNullOrWhiteSpace(invite.Target) &&
+ !string.Equals(invite.Target, request.Username, StringComparison.OrdinalIgnoreCase))
+ {
+ await ReleaseInviteAsync("Invite target does not match requested username.");
+ return Results.BadRequest(new { error = "invite_target_mismatch", message = "Invite target does not match username." });
+ }
+
var roles = request.Roles is null ? Array.Empty<string>() : request.Roles.ToArray();
var attributes = request.Attributes is null
? new Dictionary<string, string?>(StringComparer.OrdinalIgnoreCase)
@@ -327,24 +390,47 @@ if (authorityOptions.Bootstrap.Enabled)
roles,
attributes);
- var result = await provider.Credentials.UpsertUserAsync(registration, cancellationToken).ConfigureAwait(false);
-
- if (!result.Succeeded || result.Value is null)
+ try
{
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, result.Message ?? "User provisioning failed.", null, request.Username, provider.Name, roles).ConfigureAwait(false);
- return Results.BadRequest(new { error = result.ErrorCode ?? "bootstrap_failed", message = result.Message ?? "User provisioning failed." });
+ var result = await provider.Credentials.UpsertUserAsync(registration, cancellationToken).ConfigureAwait(false);
+
+ if (!result.Succeeded || result.Value is null)
+ {
+ await ReleaseInviteAsync(result.Message ?? "User provisioning failed.");
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Failure, result.Message ?? "User provisioning failed.", null, request.Username, provider.Name, roles, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = result.ErrorCode ?? "bootstrap_failed", message = result.Message ?? "User provisioning failed." });
+ }
+
+ if (inviteReserved && inviteToken is not null)
+ {
+ var consumed = await inviteStore.MarkConsumedAsync(inviteToken, result.Value.SubjectId ?? result.Value.Username, now, cancellationToken).ConfigureAwait(false);
+ if (consumed)
+ {
+ await WriteInviteAuditAsync("authority.bootstrap.invite.consumed", AuthEventOutcome.Success, null, invite, inviteToken).ConfigureAwait(false);
+ }
+ }
+
+ await WriteBootstrapUserAuditAsync(AuthEventOutcome.Success, null, result.Value.SubjectId, result.Value.Username, provider.Name, roles, inviteToken).ConfigureAwait(false);
+
+ return Results.Ok(new
+ {
+ provider = provider.Name,
+ subjectId = result.Value.SubjectId,
+ username = result.Value.Username
+ });
+ }
+ catch
+ {
+ if (inviteReserved && inviteToken is not null)
+ {
+ await inviteStore.ReleaseAsync(inviteToken, cancellationToken).ConfigureAwait(false);
+ await WriteInviteAuditAsync("authority.bootstrap.invite.released", AuthEventOutcome.Error, "Invite released due to provisioning failure.", invite, inviteToken).ConfigureAwait(false);
+ }
+
+ throw;
}
- await WriteBootstrapUserAuditAsync(AuthEventOutcome.Success, null, result.Value.SubjectId, result.Value.Username, provider.Name, roles).ConfigureAwait(false);
-
- return Results.Ok(new
- {
- provider = provider.Name,
- subjectId = result.Value.SubjectId,
- username = result.Value.Username
- });
-
- async Task WriteBootstrapUserAuditAsync(AuthEventOutcome outcome, string? reason, string? subjectId, string? usernameValue, string? providerValue, IReadOnlyCollection<string> rolesValue)
+ async Task WriteBootstrapUserAuditAsync(AuthEventOutcome outcome, string? reason, string? subjectId, string? usernameValue, string? providerValue, IReadOnlyCollection<string> rolesValue, string? inviteValue)
{
var correlationId = Activity.Current?.TraceId.ToString() ?? httpContext.TraceIdentifier ?? Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture);
AuthEventNetwork? network = null;
@@ -369,16 +455,24 @@ if (authorityOptions.Bootstrap.Enabled)
Realm = ClassifiedString.Public(providerValue)
};
- var properties = string.IsNullOrWhiteSpace(providerValue)
- ? Array.Empty<AuthEventProperty>()
- : new[]
+ var properties = new List<AuthEventProperty>();
+ if (!string.IsNullOrWhiteSpace(providerValue))
+ {
+ properties.Add(new AuthEventProperty
{
- new AuthEventProperty
- {
- Name = "bootstrap.provider",
- Value = ClassifiedString.Public(providerValue)
- }
- };
+ Name = "bootstrap.provider",
+ Value = ClassifiedString.Public(providerValue)
+ });
+ }
+
+ if (!string.IsNullOrWhiteSpace(inviteValue))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "bootstrap.invite_token",
+ Value = ClassifiedString.Public(inviteValue)
+ });
+ }
var scopes = rolesValue is { Count: > 0 }
? rolesValue.ToArray()
@@ -395,65 +489,199 @@ if (authorityOptions.Bootstrap.Enabled)
Client = null,
Scopes = scopes,
Network = network,
- Properties = properties
+ Properties = properties.Count == 0 ? Array.Empty<AuthEventProperty>() : properties
};
await auditSink.WriteAsync(record, httpContext.RequestAborted).ConfigureAwait(false);
}
+
+ async Task WriteInviteAuditAsync(string eventType, AuthEventOutcome outcome, string? reason, AuthorityBootstrapInviteDocument? document, string? tokenValue)
+ {
+ var record = new AuthEventRecord
+ {
+ EventType = eventType,
+ OccurredAt = timeProvider.GetUtcNow(),
+ CorrelationId = Activity.Current?.TraceId.ToString() ?? httpContext.TraceIdentifier ?? Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture),
+ Outcome = outcome,
+ Reason = reason,
+ Subject = null,
+ Client = null,
+ Scopes = Array.Empty<string>(),
+ Network = null,
+ Properties = BuildInviteProperties(document, tokenValue)
+ };
+
+ await auditSink.WriteAsync(record, httpContext.RequestAborted).ConfigureAwait(false);
+ }
+
+ static AuthEventProperty[] BuildInviteProperties(AuthorityBootstrapInviteDocument? document, string? token)
+ {
+ var properties = new List<AuthEventProperty>();
+ if (!string.IsNullOrWhiteSpace(token))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "invite.token",
+ Value = ClassifiedString.Public(token)
+ });
+ }
+
+ if (document is not null)
+ {
+ if (!string.IsNullOrWhiteSpace(document.Type))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "invite.type",
+ Value = ClassifiedString.Public(document.Type)
+ });
+ }
+
+ if (!string.IsNullOrWhiteSpace(document.Provider))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "invite.provider",
+ Value = ClassifiedString.Public(document.Provider)
+ });
+ }
+
+ if (!string.IsNullOrWhiteSpace(document.Target))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "invite.target",
+ Value = ClassifiedString.Public(document.Target)
+ });
+ }
+
+ properties.Add(new AuthEventProperty
+ {
+ Name = "invite.expires_at",
+ Value = ClassifiedString.Public(document.ExpiresAt.ToString("O", CultureInfo.InvariantCulture))
+ });
+ }
+
+ return properties.Count == 0 ? Array.Empty<AuthEventProperty>() : properties.ToArray();
+ }
});
bootstrapGroup.MapPost("/clients", async (
HttpContext httpContext,
BootstrapClientRequest request,
IAuthorityIdentityProviderRegistry registry,
+ IAuthorityBootstrapInviteStore inviteStore,
IAuthEventSink auditSink,
TimeProvider timeProvider,
CancellationToken cancellationToken) =>
{
if (request is null)
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Request payload is required.", null, null, null, Array.Empty<string>(), null).ConfigureAwait(false);
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Request payload is required.", null, null, null, Array.Empty<string>(), null, null).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_request", message = "Request payload is required." });
}
+ var now = timeProvider.GetUtcNow();
+ var inviteToken = string.IsNullOrWhiteSpace(request.InviteToken) ? null : request.InviteToken.Trim();
+ AuthorityBootstrapInviteDocument? invite = null;
+ var inviteReserved = false;
+
+ async Task ReleaseInviteAsync(string reason)
+ {
+ if (inviteToken is null)
+ {
+ return;
+ }
+
+ if (inviteReserved)
+ {
+ await inviteStore.ReleaseAsync(inviteToken, cancellationToken).ConfigureAwait(false);
+ }
+
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, reason, invite, inviteToken).ConfigureAwait(false);
+ }
+
+ if (inviteToken is not null)
+ {
+ var reservation = await inviteStore.TryReserveAsync(inviteToken, BootstrapInviteTypes.Client, now, request.ClientId, cancellationToken).ConfigureAwait(false);
+ switch (reservation.Status)
+ {
+ case BootstrapInviteReservationStatus.Reserved:
+ inviteReserved = true;
+ invite = reservation.Invite;
+ break;
+ case BootstrapInviteReservationStatus.Expired:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.expired", AuthEventOutcome.Failure, "Invite expired before use.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invite_expired", message = "Invite has expired." });
+ case BootstrapInviteReservationStatus.AlreadyUsed:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, "Invite token already consumed.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invite_used", message = "Invite token has already been used." });
+ default:
+ await WriteInviteAuditAsync("authority.bootstrap.invite.rejected", AuthEventOutcome.Failure, "Invite token is invalid.", reservation.Invite, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invalid_invite", message = "Invite token is invalid." });
+ }
+ }
+
var providerName = string.IsNullOrWhiteSpace(request.Provider)
- ? authorityOptions.Bootstrap.DefaultIdentityProvider
+ ? invite?.Provider ?? authorityOptions.Bootstrap.DefaultIdentityProvider
: request.Provider;
+ if (invite is not null && !string.IsNullOrWhiteSpace(invite.Provider) &&
+ !string.Equals(invite.Provider, providerName, StringComparison.OrdinalIgnoreCase))
+ {
+ await ReleaseInviteAsync("Invite provider does not match requested provider.");
+ return Results.BadRequest(new { error = "invite_provider_mismatch", message = "Invite is limited to a different identity provider." });
+ }
+
if (string.IsNullOrWhiteSpace(providerName) || !registry.TryGet(providerName!, out var provider))
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Specified identity provider was not found.", request.ClientId, null, providerName, request.AllowedScopes ?? Array.Empty<string>(), request?.Confidential).ConfigureAwait(false);
+ await ReleaseInviteAsync("Specified identity provider was not found.");
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Specified identity provider was not found.", request.ClientId, null, providerName, request.AllowedScopes ?? Array.Empty<string>(), request?.Confidential, inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_provider", message = "Specified identity provider was not found." });
}
if (!provider.Capabilities.SupportsClientProvisioning || provider.ClientProvisioning is null)
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Selected provider does not support client provisioning.", request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
+ await ReleaseInviteAsync("Selected provider does not support client provisioning.");
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Selected provider does not support client provisioning.", request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "unsupported_provider", message = "Selected provider does not support client provisioning." });
}
if (string.IsNullOrWhiteSpace(request.ClientId))
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "ClientId is required.", null, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
+ await ReleaseInviteAsync("ClientId is required.");
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "ClientId is required.", null, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_request", message = "ClientId is required." });
}
+ if (invite is not null && !string.IsNullOrWhiteSpace(invite.Target) &&
+ !string.Equals(invite.Target, request.ClientId, StringComparison.OrdinalIgnoreCase))
+ {
+ await ReleaseInviteAsync("Invite target does not match requested client id.");
+ return Results.BadRequest(new { error = "invite_target_mismatch", message = "Invite target does not match client id." });
+ }
+
if (request.Confidential && string.IsNullOrWhiteSpace(request.ClientSecret))
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Confidential clients require a client secret.", request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
+ await ReleaseInviteAsync("Confidential clients require a client secret.");
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, "Confidential clients require a client secret.", request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = "invalid_request", message = "Confidential clients require a client secret." });
}
if (!TryParseUris(request.RedirectUris, out var redirectUris, out var redirectError))
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, redirectError, request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
- return Results.BadRequest(new { error = "invalid_request", message = redirectError });
+ var errorMessage = redirectError ?? "Redirect URI validation failed.";
+ await ReleaseInviteAsync(errorMessage);
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, errorMessage, request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invalid_request", message = errorMessage });
}
if (!TryParseUris(request.PostLogoutRedirectUris, out var postLogoutUris, out var postLogoutError))
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, postLogoutError, request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
- return Results.BadRequest(new { error = "invalid_request", message = postLogoutError });
+ var errorMessage = postLogoutError ?? "Post-logout redirect URI validation failed.";
+ await ReleaseInviteAsync(errorMessage);
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, errorMessage, request.ClientId, null, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
+ return Results.BadRequest(new { error = "invalid_request", message = errorMessage });
}
var properties = request.Properties is null
@@ -475,11 +703,21 @@ if (authorityOptions.Bootstrap.Enabled)
if (!result.Succeeded || result.Value is null)
{
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, result.Message ?? "Client provisioning failed.", request.ClientId, result.Value?.ClientId, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
+ await ReleaseInviteAsync(result.Message ?? "Client provisioning failed.");
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Failure, result.Message ?? "Client provisioning failed.", request.ClientId, result.Value?.ClientId, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
return Results.BadRequest(new { error = result.ErrorCode ?? "bootstrap_failed", message = result.Message ?? "Client provisioning failed." });
}
- await WriteBootstrapClientAuditAsync(AuthEventOutcome.Success, null, request.ClientId, result.Value.ClientId, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential).ConfigureAwait(false);
+ if (inviteReserved && inviteToken is not null)
+ {
+ var consumed = await inviteStore.MarkConsumedAsync(inviteToken, result.Value.ClientId, now, cancellationToken).ConfigureAwait(false);
+ if (consumed)
+ {
+ await WriteInviteAuditAsync("authority.bootstrap.invite.consumed", AuthEventOutcome.Success, null, invite, inviteToken).ConfigureAwait(false);
+ }
+ }
+
+ await WriteBootstrapClientAuditAsync(AuthEventOutcome.Success, null, request.ClientId, result.Value.ClientId, provider.Name, request.AllowedScopes ?? Array.Empty<string>(), request.Confidential, inviteToken).ConfigureAwait(false);
return Results.Ok(new
{
@@ -488,7 +726,7 @@ if (authorityOptions.Bootstrap.Enabled)
confidential = result.Value.Confidential
});
- async Task WriteBootstrapClientAuditAsync(AuthEventOutcome outcome, string? reason, string? requestedClientId, string? assignedClientId, string? providerValue, IReadOnlyCollection<string> scopes, bool? confidentialFlag)
+ async Task WriteBootstrapClientAuditAsync(AuthEventOutcome outcome, string? reason, string? requestedClientId, string? assignedClientId, string? providerValue, IReadOnlyCollection<string> scopes, bool? confidentialFlag, string? inviteValue)
{
var correlationId = Activity.Current?.TraceId.ToString() ?? httpContext.TraceIdentifier ?? Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture);
AuthEventNetwork? network = null;
@@ -533,6 +771,15 @@ if (authorityOptions.Bootstrap.Enabled)
});
}
+ if (!string.IsNullOrWhiteSpace(inviteValue))
+ {
+ properties.Add(new AuthEventProperty
+ {
+ Name = "bootstrap.invite_token",
+ Value = ClassifiedString.Public(inviteValue)
+ });
+ }
+
var record = new AuthEventRecord
{
EventType = "authority.bootstrap.client",
@@ -549,6 +796,175 @@ if (authorityOptions.Bootstrap.Enabled)
await auditSink.WriteAsync(record, httpContext.RequestAborted).ConfigureAwait(false);
}
+
+ async Task WriteInviteAuditAsync(string eventType, AuthEventOutcome outcome, string? reason, AuthorityBootstrapInviteDocument? document, string? tokenValue)
+ {
+ var record = new AuthEventRecord
+ {
+ EventType = eventType,
+ OccurredAt = timeProvider.GetUtcNow(),
+ CorrelationId = Activity.Current?.TraceId.ToString() ?? httpContext.TraceIdentifier ?? Guid.NewGuid().ToString("N", CultureInfo.InvariantCulture),
+ Outcome = outcome,
+ Reason = reason,
+ Subject = null,
+ Client = null,
+ Scopes = Array.Empty