Some checks failed
Docs CI / lint-and-preview (push) Has been cancelled
AOC Guard CI / aoc-verify (push) Has been cancelled
AOC Guard CI / aoc-guard (push) Has been cancelled
Policy Lint & Smoke / policy-lint (push) Has been cancelled
SDK Publish & Sign / sdk-publish (push) Has been cancelled
sdk-generator-smoke / sdk-smoke (push) Has been cancelled

This commit is contained in:
StellaOps Bot
2025-11-27 08:51:10 +02:00
parent ea970ead2a
commit c34fb7256d
126 changed files with 18553 additions and 693 deletions

View File

@@ -17,7 +17,7 @@ completely isolated network:
| **Provenance** | Cosign signature, SPDX 2.3 SBOM, in-toto SLSA attestation |
| **Attested manifest** | `offline-manifest.json` + detached JWS covering bundle metadata, signed during export. |
| **Delta patches** | Daily diff bundles keep size \<350MB |
| **Scanner plug-ins** | OS analyzers plus the Node.js, Go, .NET, Python, and Rust language analyzers packaged under `plugins/scanner/analyzers/**` with manifests so Workers load deterministically offline. |
| **Scanner plug-ins** | OS analyzers plus the Node.js, Go, .NET, Python, Ruby, and Rust language analyzers packaged under `plugins/scanner/analyzers/**` with manifests so Workers load deterministically offline. |
| **Debug store** | `.debug` artefacts laid out under `debug/.build-id/<aa>/<rest>.debug` with `debug/debug-manifest.json` mapping build-ids to originating images for symbol retrieval. |
| **Telemetry collector bundle** | `telemetry/telemetry-offline-bundle.tar.gz` plus `.sha256`, containing OTLP collector config, Helm/Compose overlays, and operator instructions. |
| **CLI + Task Packs** | `cli/` binaries from `release/cli`, Task Runner bootstrap (`bootstrap/task-runner/task-runner.yaml.sample`), and task-pack docs under `docs/task-packs/**` + `docs/modules/taskrunner/**`. |
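The debug-store row above shells a build-id out into a two-level directory (`debug/.build-id/<aa>/<rest>.debug`). A minimal sketch of that path derivation, with an illustrative build-id value:

```shell
# Derive the on-disk debug-store path for a GNU build-id.
# Layout per the table above; the build-id itself is an example value.
buildid="8f4e9d2c0b1a3c5d7e9f0a1b2c3d4e5f67890abc"
aa=$(printf %s "$buildid" | cut -c1-2)    # first two hex chars become the shard dir
rest=$(printf %s "$buildid" | cut -c3-)   # remainder becomes the filename stem
path="debug/.build-id/$aa/$rest.debug"
echo "$path"
```

`debug/debug-manifest.json` then maps each such build-id back to the originating image for symbol retrieval.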
@@ -27,7 +27,7 @@ completely isolated network:
**RU BDU note:** ship the official Russian Trusted Root/Sub CA bundle (`certificates/russian_trusted_bundle.pem`) inside the kit so `concelier:httpClients:source.bdu:trustedRootPaths` can resolve it when the service runs in an air-gapped network. Drop the most recent `vulxml.zip` alongside the kit if operators need a cold-start cache.
**Language analyzers:** the kit now carries the restart-only Node.js, Go, .NET, Python, and Rust plug-ins (`plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Node/`, `...Lang.Go/`, `...Lang.DotNet/`, `...Lang.Python/`, `...Lang.Rust/`). Drop the directories alongside Worker binaries so the unified plug-in catalog can load them without outbound fetches.
**Language analyzers:** the kit now carries the restart-only Node.js, Go, .NET, Python, Ruby, and Rust plug-ins (`plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Node/`, `...Lang.Go/`, `...Lang.DotNet/`, `...Lang.Python/`, `...Lang.Ruby/`, `...Lang.Rust/`). Drop the directories alongside Worker binaries so the unified plug-in catalog can load them without outbound fetches. The Ruby analyzer includes optional runtime capture via TracePoint; set `STELLA_RUBY_ENTRYPOINT` to enable runtime evidence collection.
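A minimal sketch of opting in to the Ruby runtime-capture shim. Only the `STELLA_RUBY_ENTRYPOINT` switch comes from the note above; the entrypoint path is hypothetical:

```shell
# Opt in to Ruby runtime evidence capture (TracePoint shim).
# With the variable unset, the analyzer stays static-only; with it set,
# the shim records require/load events as append-only evidence.
export STELLA_RUBY_ENTRYPOINT="bin/worker-task"   # hypothetical entrypoint
echo "runtime capture: ${STELLA_RUBY_ENTRYPOINT:-disabled}"
```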
**Advisory AI volume primer:** ship a tarball containing empty `queue/`, `plans/`, and `outputs/` directories plus their ownership metadata. During import, extract it onto the RWX volume used by `advisory-ai-web` and `advisory-ai-worker` so pods start with the expected directory tree even on air-gapped nodes.
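One way to build the primer tarball described above; the UID/GID (1000:1000) and output filename are assumptions, not values from the kit spec:

```shell
# Sketch: assemble the RWX volume primer with empty queue/plans/outputs
# directories and fixed ownership metadata (UID/GID are assumptions).
set -e
root=$(mktemp -d)
mkdir -p "$root/queue" "$root/plans" "$root/outputs"
tar -czf /tmp/advisory-ai-volume-primer.tgz \
  --owner=1000 --group=1000 -C "$root" queue plans outputs
tar -tzf /tmp/advisory-ai-volume-primer.tgz
```

Extracting this onto the shared volume during import gives `advisory-ai-web` and `advisory-ai-worker` the expected directory tree without any outbound fetches.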
@@ -181,6 +181,24 @@ Example excerpt (2025-10-23 kit) showing the Go and .NET analyzer plug-in payloa
"size": 648,
"capturedAt": "2025-10-26T00:00:00Z"
},
{
"name": "plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Ruby/StellaOps.Scanner.Analyzers.Lang.Ruby.dll",
"sha256": "<computed-at-release>",
"size": 0,
"capturedAt": "2025-11-27T00:00:00Z"
},
{
"name": "plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Ruby/StellaOps.Scanner.Analyzers.Lang.Ruby.pdb",
"sha256": "<computed-at-release>",
"size": 0,
"capturedAt": "2025-11-27T00:00:00Z"
},
{
"name": "plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Ruby/manifest.json",
"sha256": "<computed-at-release>",
"size": 0,
"capturedAt": "2025-11-27T00:00:00Z"
},
{
"name": "plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Rust/StellaOps.Scanner.Analyzers.Lang.Rust.dll",
"sha256": "d90ba8b6ace7d98db563b1dec178d57ac09df474e1342fa1daa38bd55e17b185",
@@ -258,12 +276,12 @@ Authority now rejects tokens that request `advisory:read`, `vex:read`, or any `s
**Quick smoke test:** before import, verify the tarball carries the language analyzer plug-ins:
```bash
tar -tzf stella-ops-offline-kit-<DATE>.tgz 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Go/*' 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.DotNet/*' 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Python/*'
tar -tzf stella-ops-offline-kit-<DATE>.tgz 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Go/*' 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.DotNet/*' 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Python/*' 'plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Ruby/*'
```
The manifest lookup above and this `tar` listing should both surface each analyzer's DLL, PDB, and manifest entries before the kit is promoted.
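The digest cross-check can be sketched in isolation; the filenames below are stand-ins, and the manifest format here mirrors `sha256sum`'s own check format rather than the kit manifest JSON:

```shell
# Hedged sketch: verify an extracted artefact against its recorded digest,
# the same check the manifest lookup performs (filenames are illustrative).
set -e
cd "$(mktemp -d)"
printf 'analyzer payload' > StellaOps.Scanner.Analyzers.Lang.Go.dll
sha256sum StellaOps.Scanner.Analyzers.Lang.Go.dll > manifest.sha256
sha256sum -c manifest.sha256
```

A mismatch makes `sha256sum -c` exit non-zero, which is the right failure mode for a pre-promotion gate.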
> **Release guardrail.** The automated release pipeline now publishes the Python and Rust plug-ins from source and executes `dotnet run --project src/Tools/LanguageAnalyzerSmoke --configuration Release -- --repo-root <checkout> --analyzer <id>` to validate manifest integrity and cold/warm determinism within the <30s / <5s budgets (differences versus repository goldens are logged for triage). Run `ops/offline-kit/run-python-analyzer-smoke.sh` and `ops/offline-kit/run-rust-analyzer-smoke.sh` locally before shipping a refreshed kit if you rebuild artefacts outside CI or when preparing the air-gap bundle.
> **Release guardrail.** The automated release pipeline now publishes the Python, Ruby, and Rust plug-ins from source and executes `dotnet run --project src/Tools/LanguageAnalyzerSmoke --configuration Release -- --repo-root <checkout> --analyzer <id>` to validate manifest integrity and cold/warm determinism within the <30s / <5s budgets (differences versus repository goldens are logged for triage). Run `ops/offline-kit/run-python-analyzer-smoke.sh`, `ops/offline-kit/run-ruby-analyzer-smoke.sh`, and `ops/offline-kit/run-rust-analyzer-smoke.sh` locally before shipping a refreshed kit if you rebuild artefacts outside CI or when preparing the air-gap bundle.
### Debug store mirror

View File

@@ -64,15 +64,15 @@ python run_bench.py --sboms inputs/sboms/*.json --vex inputs/vex/*.json \
--config configs/scanners.json --shuffle --output results
# Reachability dataset (optional)
python run_reachability.py --graphs ../reachability/graphs/*.json \
--runtime ../reachability/runtime/*.ndjson.gz --output results-reach.csv
python run_reachability.py --graphs inputs/graphs/*.json \
--runtime inputs/runtime/*.ndjson --output results
```
Outputs are written to `results.csv` (determinism) and `results-reach.csv` (reachability stability) plus SHA manifests.
Outputs are written to `results.csv` (determinism), `results-reach.csv`/`results-reach.json` (reachability hashes), and manifests `inputs.sha256` + `dataset.sha256`.
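The `inputs.sha256` manifest can be assembled deterministically (sorted, fixed locale); the exact format the harness emits is an assumption, and the input filenames are illustrative:

```shell
# Sketch: build an inputs.sha256 manifest over the frozen inputs.
# Sorting with LC_ALL=C keeps the manifest byte-stable across runs.
set -e
cd "$(mktemp -d)"
printf '{"sbom":1}' > sbom-a.json
printf '{"vex":1}'  > vex-a.json
LC_ALL=C sha256sum sbom-a.json vex-a.json | LC_ALL=C sort -k2 > inputs.sha256
cat inputs.sha256
```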
## How to run (CI)
- Workflow `.gitea/workflows/bench-determinism.yml` calls `scripts/bench/determinism-run.sh`, which runs the harness with the bundled mock scanner and uploads `out/bench-determinism/**` (results, manifests, summary). Set `DET_EXTRA_INPUTS` to include frozen feed bundles in `inputs.sha256`.
- Workflow `.gitea/workflows/bench-determinism.yml` calls `scripts/bench/determinism-run.sh`, which runs the harness with the bundled mock scanner and uploads `out/bench-determinism/**` (results, manifests, summary). Set `DET_EXTRA_INPUTS` to include frozen feed bundles in `inputs.sha256`; setting the optional `DET_REACH_GRAPHS`/`DET_REACH_RUNTIME` variables adds reachability hashes plus `dataset.sha256`.
- Optional `bench:reachability` target (future) will replay reachability corpus, recompute graph hashes, and compare against expected `dataset.sha256`.
- CI fails when `determinism_rate` < `BENCH_DETERMINISM_THRESHOLD` (defaults to 0.95; set via env in the workflow).
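A hedged sketch of the threshold gate. The `results.csv` column layout here is an assumption; only the `determinism_rate` vs `BENCH_DETERMINISM_THRESHOLD` comparison comes from the workflow description above:

```shell
# Sketch: compute a determinism rate from results.csv and gate on a threshold
# (column layout "run,deterministic" is an assumption for illustration).
set -e
cd "$(mktemp -d)"
printf 'run,deterministic\n1,true\n2,true\n3,false\n4,true\n' > results.csv
rate=$(LC_ALL=C awk -F, 'NR>1 { total++; if ($2=="true") ok++ } END { printf "%.2f", ok/total }' results.csv)
threshold="${BENCH_DETERMINISM_THRESHOLD:-0.95}"
awk -v r="$rate" -v t="$threshold" 'BEGIN { exit (r >= t) ? 0 : 1 }' \
  && echo "determinism gate: pass ($rate >= $threshold)" \
  || echo "determinism gate: FAIL ($rate < $threshold)"
```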

View File

@@ -40,7 +40,7 @@
| 11 | SCANNER-ANALYZERS-NATIVE-20-007 | DONE (2025-11-26) | AOC observation serialization implemented with models and builder/serializer; 18 tests passing. | Native Analyzer Guild; SBOM Service Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Serialize AOC-compliant observations: entrypoints + dependency edges + environment profiles (search paths, interpreter, loader metadata); integrate with Scanner writer API. |
| 12 | SCANNER-ANALYZERS-NATIVE-20-008 | DONE (2025-11-26) | Cross-platform fixture generator and performance benchmarks implemented; 17 tests passing. | Native Analyzer Guild; QA Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Author cross-platform fixtures (ELF dynamic/static, PE delay-load/SxS, Mach-O @rpath, plugin configs) and determinism benchmarks (<25 ms / binary, <250 MB). |
| 13 | SCANNER-ANALYZERS-NATIVE-20-009 | DONE (2025-11-26) | Runtime capture adapters implemented for Linux/Windows/macOS; 26 tests passing. | Native Analyzer Guild; Signals Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Provide optional runtime capture adapters (Linux eBPF `dlopen`, Windows ETW ImageLoad, macOS dyld interpose) writing append-only runtime evidence; include redaction/sandbox guidance. |
| 14 | SCANNER-ANALYZERS-NATIVE-20-010 | TODO | Depends on SCANNER-ANALYZERS-NATIVE-20-009 | Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Package native analyzer as restart-time plug-in with manifest/DI registration; update Offline Kit bundle and documentation. |
| 14 | SCANNER-ANALYZERS-NATIVE-20-010 | DONE (2025-11-27) | Plugin packaging completed with DI registration, plugin catalog, and service extensions; 20 tests passing. | Native Analyzer Guild (src/Scanner/StellaOps.Scanner.Analyzers.Native) | Package native analyzer as restart-time plug-in with manifest/DI registration; update Offline Kit bundle and documentation. |
| 15 | SCANNER-ANALYZERS-NODE-22-001 | DOING (2025-11-24) | PREP-SCANNER-ANALYZERS-NODE-22-001-NEEDS-ISOL; rerun tests on clean runner | Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Build input normalizer + VFS for Node projects: dirs, tgz, container layers, pnpm store, Yarn PnP zips; detect Node version targets (`.nvmrc`, `.node-version`, Dockerfile) and workspace roots deterministically. |
| 16 | SCANNER-ANALYZERS-NODE-22-002 | DOING (2025-11-24) | Depends on SCANNER-ANALYZERS-NODE-22-001; add tests once CI runner available | Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Implement entrypoint discovery (bin/main/module/exports/imports, workers, electron, shebang scripts) and condition set builder per entrypoint. |
| 17 | SCANNER-ANALYZERS-NODE-22-003 | BLOCKED (2025-11-19) | Blocked on overlay/callgraph schema alignment and test fixtures; resolver wiring pending fixture drop. | Node Analyzer Guild (src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Node) | Parse JS/TS sources for static `import`, `require`, `import()` and string concat cases; flag dynamic patterns with confidence levels; support source map de-bundling. |
@@ -55,6 +55,7 @@
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | SCANNER-ANALYZERS-NATIVE-20-010: Implemented plugin packaging in `Plugin/` namespace. Created `INativeAnalyzerPlugin` interface (Name, Description, Version, SupportedFormats, IsAvailable, CreateAnalyzer), `INativeAnalyzer` interface (AnalyzeAsync, AnalyzeBatchAsync), `NativeAnalyzerOptions` configuration. Implemented `NativeAnalyzer` core class orchestrating format detection, parsing (ELF/PE/Mach-O), heuristic scanning, and resolution. Created `NativeAnalyzerPlugin` factory (always available, supports ELF/PE/Mach-O). Built `NativeAnalyzerPluginCatalog` with convention-based loading (`StellaOps.Scanner.Analyzers.Native*.dll`), registration, sealing, and analyzer creation. Added `ServiceCollectionExtensions` with `AddNativeAnalyzer()` (options binding, DI registration) and `AddNativeRuntimeCapture()`. Created `NativeAnalyzerServiceOptions` with platform-specific default search paths. Added NuGet dependencies (Microsoft.Extensions.*). 20 new tests in `PluginPackagingTests.cs` covering plugin properties, catalog operations, DI registration, and analyzer integration. Total native analyzer: 163 tests passing. Task DONE. | Native Analyzer Guild |
| 2025-11-26 | SCANNER-ANALYZERS-NATIVE-20-009: Implemented runtime capture adapters in `RuntimeCapture/` namespace. Created models (`RuntimeEvidence.cs`): `RuntimeLoadEvent`, `RuntimeCaptureSession`, `RuntimeEvidence`, `RuntimeLibrarySummary`, `RuntimeDependencyEdge` with reason codes (`runtime-dlopen`, `runtime-loadlibrary`, `runtime-dylib`). Created configuration (`RuntimeCaptureOptions.cs`): buffer size, duration limits, include/exclude patterns, redaction options (home dirs, SSH keys, secrets), sandbox mode with mock events. Created interface (`IRuntimeCaptureAdapter.cs`): state machine (Idle → Starting → Running → Stopping → Stopped/Faulted), events, factory pattern. Created platform adapters: `LinuxEbpfCaptureAdapter` (bpftrace/eBPF), `WindowsEtwCaptureAdapter` (ETW ImageLoad), `MacOsDyldCaptureAdapter` (dtrace). Created aggregator (`RuntimeEvidenceAggregator.cs`) merging runtime evidence with static/heuristic analysis. Added `NativeObservationRuntimeEdge` model and `AddRuntimeEdge()` builder method. 26 new tests in `RuntimeCaptureTests.cs` covering options validation, redaction, aggregation, sandbox capture, state transitions. Total native analyzer: 143 tests passing. Task DONE. | Native Analyzer Guild |
| 2025-11-26 | SCANNER-ANALYZERS-NATIVE-20-008: Implemented cross-platform fixture generator (`NativeFixtureGenerator`) with methods `GenerateElf64()`, `GeneratePe64()`, `GenerateMachO64()` producing minimal valid binaries programmatically. Added performance benchmarks (`NativeBenchmarks`) validating <25ms parsing requirement across all formats. Created integration tests (`NativeFixtureTests`) exercising full pipeline: fixture generation → parsing → resolution → heuristic scanning → serialization. 17 new tests passing (10 fixture tests, 7 benchmark tests). Total native analyzer: 117 tests passing. Task DONE. | Native Analyzer Guild |
| 2025-11-26 | SCANNER-ANALYZERS-NATIVE-20-007: Implemented AOC-compliant observation serialization with models (`NativeObservationDocument`, `NativeObservationBinary`, `NativeObservationEntrypoint`, `NativeObservationDeclaredEdge`, `NativeObservationHeuristicEdge`, `NativeObservationEnvironment`, `NativeObservationResolution`), builder (`NativeObservationBuilder`), and serializer (`NativeObservationSerializer`). Schema: `stellaops.native.observation@1`. Supports ELF/PE/Mach-O dependencies, heuristic edges, environment profiles, and resolution explain traces. 18 new tests passing. Task DONE. | Native Analyzer Guild |

View File

@@ -20,11 +20,11 @@
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | SCANNER-ANALYZERS-PYTHON-23-012 | TODO | Depends on 23-011. | Python Analyzer Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Python`) | Container/zipapp adapter enhancements: parse OCI layers for Python runtime, detect `PYTHONPATH`/`PYTHONHOME`, warn on sitecustomize/startup hooks. |
| 2 | SCANNER-ANALYZERS-RUBY-28-001 | TODO | — | Ruby Analyzer Guild (`src/Scanner/StellaOps.Scanner.Analyzers.Lang.Ruby`) | Input normalizer & VFS for Ruby projects: merge sources, Gemfile/lock, vendor/bundle, .gem archives, `.bundle/config`, Rack configs, containers; detect framework/job fingerprints deterministically. |
| 3 | SCANNER-ANALYZERS-RUBY-28-002 | TODO | Depends on 28-001. | Ruby Analyzer Guild | Gem & Bundler analyzer: parse Gemfile/lock, vendor specs, .gem archives; produce package nodes (PURLs), dependency edges, and resolver traces. |
| 4 | SCANNER-ANALYZERS-RUBY-28-003 | TODO | Depends on 28-002. | Ruby Analyzer Guild · SBOM Guild | Produce AOC-compliant observations (entrypoints, components, edges) plus environment profiles; integrate with Scanner writer. |
| 5 | SCANNER-ANALYZERS-RUBY-28-004 | TODO | Depends on 28-003. | Ruby Analyzer Guild · QA Guild | Fixtures/benchmarks for Ruby analyzer across Bundler/Rails/Sidekiq/CLI gems; determinism/perf targets. |
| 6 | SCANNER-ANALYZERS-RUBY-28-005 | TODO | Depends on 28-004. | Ruby Analyzer Guild · Signals Guild | Optional runtime capture (tracepoint) hooks with append-only evidence, redaction, and sandbox guidance. |
| 2 | SCANNER-ANALYZERS-RUBY-28-001 | DONE | — | Ruby Analyzer Guild (`src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Ruby`) | Input normalizer & VFS for Ruby projects: merge sources, Gemfile/lock, vendor/bundle, .gem archives, `.bundle/config`, Rack configs, containers; detect framework/job fingerprints deterministically. |
| 3 | SCANNER-ANALYZERS-RUBY-28-002 | DONE | Depends on 28-001. | Ruby Analyzer Guild | Gem & Bundler analyzer: parse Gemfile/lock, vendor specs, .gem archives; produce package nodes (PURLs), dependency edges, and resolver traces. |
| 4 | SCANNER-ANALYZERS-RUBY-28-003 | DONE | Depends on 28-002. | Ruby Analyzer Guild · SBOM Guild | Produce AOC-compliant observations (entrypoints, components, edges) plus environment profiles; integrate with Scanner writer. |
| 5 | SCANNER-ANALYZERS-RUBY-28-004 | DONE | Depends on 28-003. | Ruby Analyzer Guild · QA Guild | Fixtures/benchmarks for Ruby analyzer across Bundler/Rails/Sidekiq/CLI gems; determinism/perf targets. |
| 6 | SCANNER-ANALYZERS-RUBY-28-005 | DONE | Depends on 28-004. | Ruby Analyzer Guild · Signals Guild | Optional runtime capture (tracepoint) hooks with append-only evidence, redaction, and sandbox guidance. |
| 7 | SCANNER-ANALYZERS-RUBY-28-006 | TODO | Depends on 28-005. | Ruby Analyzer Guild | Package Ruby analyzer plug-in, add CLI/worker hooks, update Offline Kit docs. |
## Execution Log
@@ -33,6 +33,11 @@
| 2025-11-08 | Sprint stub created; awaiting completion of Sprint 0134. | Planning |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_135_scanner_surface.md` to `SPRINT_0135_0001_0001_scanner_surface.md`; content preserved. | Implementer |
| 2025-11-19 | Converted legacy filename `SPRINT_135_scanner_surface.md` to redirect stub pointing here to avoid divergent updates. | Implementer |
| 2025-11-27 | Completed SCANNER-ANALYZERS-RUBY-28-001: Added container layer support (layers/, .layers/, layer/) to RubyLockCollector and RubyVendorArtifactCollector; existing implementation already covered Gemfile/lock, vendor/bundle, .gem archives, .bundle/config, Rack configs, and framework fingerprints. | Implementer |
| 2025-11-27 | Completed SCANNER-ANALYZERS-RUBY-28-002: Enhanced RubyLockParser to capture gem dependency edges with version constraints; added RubyDependencyEdge type, updated RubyLockEntry/RubyObservationDocument, observation builder and serializer to include dependencyEdges in JSON output; PURLs and resolver constraint strings now included. | Implementer |
| 2025-11-27 | Completed SCANNER-ANALYZERS-RUBY-28-003: AOC-compliant observations with schema, entrypoints, runtime edges, and environment profiles. Added RubyObservationEntrypoint/Environment types with bundlePaths/gemfiles/lockfiles/frameworks; updated RubyRuntimeGraph with GetEntrypointFiles/GetRequiredGems; wired bundlerConfig through analyzer for full observation coverage. | Implementer |
| 2025-11-27 | Completed SCANNER-ANALYZERS-RUBY-28-004: Created cli-app fixture with Thor/TTY-Prompt, updated expected.json golden files for dependency edges format; all 4 determinism tests pass. | Implementer |
| 2025-11-27 | Completed SCANNER-ANALYZERS-RUBY-28-005: Created Runtime directory with RubyRuntimeShim.cs (trace-shim.rb Ruby script using TracePoint for require/load hooks with redaction and capability detection), RubyRuntimeTraceRunner.cs (opt-in harness triggered by STELLA_RUBY_ENTRYPOINT env var), and RubyRuntimeTraceReader.cs (NDJSON parser for trace events). Append-only evidence, sandbox guidance via BUNDLE_FROZEN/BUNDLE_DISABLE_EXEC_LOAD. | Implementer |
## Decisions & Risks
- Python tasks depend on prior phases and remain TODO; Ruby tasks 28-001 through 28-005 are now DONE, leaving 28-006 (plug-in packaging and Offline Kit docs) as the remaining Ruby work.

View File

@@ -29,14 +29,15 @@
| 9 | NOTIFY-SVC-39-002 | DONE (2025-11-26) | Digest generator implemented: `IDigestGenerator`/`DefaultDigestGenerator` with delivery queries and Markdown formatting, `IDigestScheduleRunner`/`DigestScheduleRunner` with Cronos-based scheduling, period-based lookback windows, channel adapter dispatch. | Notifications Service Guild | Digest generator (queries, formatting) with schedule runner and distribution. |
| 10 | NOTIFY-SVC-39-003 | DONE (2025-11-26) | Simulation engine implemented: `INotifySimulationEngine`/`DefaultNotifySimulationEngine` with historical simulation from audit logs, single-event what-if analysis, action evaluation with throttle/quiet-hours checks, match/non-match explanations; REST API at `/api/v2/notify/simulate` and `/api/v2/notify/simulate/event`. | Notifications Service Guild | Simulation engine/API to dry-run rules against historical events, returning matched actions with explanations. |
| 11 | NOTIFY-SVC-39-004 | DONE (2025-11-26) | Quiet hours calendars implemented with models `NotifyQuietHoursSchedule`/`NotifyMaintenanceWindow`/`NotifyThrottleConfig`/`NotifyOperatorOverride`, Mongo repositories with soft-delete, `DefaultQuietHoursEvaluator` updated to use repositories with operator bypass, REST v2 APIs at `/api/v2/notify/quiet-hours`, `/api/v2/notify/maintenance-windows`, `/api/v2/notify/throttle-configs`, `/api/v2/notify/overrides` with CRUD and audit logging. | Notifications Service Guild | Quiet hour calendars + default throttles with audit logging and operator overrides. |
| 12 | NOTIFY-SVC-40-001 | TODO | Depends on 39-004. | Notifications Service Guild | Escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, CLI/in-app inbox channels. |
| 13 | NOTIFY-SVC-40-002 | TODO | Depends on 40-001. | Notifications Service Guild | Summary storm breaker notifications, localization bundles, fallback handling. |
| 14 | NOTIFY-SVC-40-003 | TODO | Depends on 40-002. | Notifications Service Guild | Security hardening: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. |
| 15 | NOTIFY-SVC-40-004 | TODO | Depends on 40-003. | Notifications Service Guild | Observability (metrics/traces for escalations/latency), dead-letter handling, chaos tests for channel outages, retention policies. |
| 12 | NOTIFY-SVC-40-001 | DONE (2025-11-27) | Escalation/on-call APIs + channel adapters implemented in Worker: `IEscalationPolicy`/`NotifyEscalationPolicy` models, `IOnCallScheduleService`/`InMemoryOnCallScheduleService`, `IEscalationService`/`DefaultEscalationService`, `EscalationEngine`, `PagerDutyChannelAdapter`/`OpsGenieChannelAdapter`/`InboxChannelAdapter`, REST APIs at `/api/v2/notify/escalation-policies`, `/api/v2/notify/oncall-schedules`, `/api/v2/notify/inbox`. | Notifications Service Guild | Escalations + on-call schedules, ack bridge, PagerDuty/OpsGenie adapters, CLI/in-app inbox channels. |
| 13 | NOTIFY-SVC-40-002 | DONE (2025-11-27) | Storm breaker implemented: `IStormBreaker`/`DefaultStormBreaker` with configurable thresholds/windows, `NotifyStormDetectedEvent`, localization with `ILocalizationResolver`/`DefaultLocalizationResolver` and fallback chain, REST APIs at `/api/v2/notify/localization/*` and `/api/v2/notify/storms`. | Notifications Service Guild | Summary storm breaker notifications, localization bundles, fallback handling. |
| 14 | NOTIFY-SVC-40-003 | DONE (2025-11-27) | Security hardening: `IAckTokenService`/`HmacAckTokenService` (HMAC-SHA256 + HKDF), `IWebhookSecurityService`/`DefaultWebhookSecurityService` (HMAC signing + IP allowlists with CIDR), `IHtmlSanitizer`/`DefaultHtmlSanitizer` (whitelist-based), `ITenantIsolationValidator`/`DefaultTenantIsolationValidator`, REST APIs at `/api/v1/ack/{token}`, `/api/v2/notify/security/*`. | Notifications Service Guild | Security hardening: signed ack links (KMS), webhook HMAC/IP allowlists, tenant isolation fuzz tests, HTML sanitization. |
| 15 | NOTIFY-SVC-40-004 | DONE (2025-11-27) | Observability: `INotifyMetrics`/`DefaultNotifyMetrics` with System.Diagnostics.Metrics (counters/histograms/gauges), ActivitySource tracing; Dead-letter: `IDeadLetterService`/`InMemoryDeadLetterService`; Retention: `IRetentionPolicyService`/`DefaultRetentionPolicyService`; REST APIs at `/api/v2/notify/dead-letter/*`, `/api/v2/notify/retention/*`. | Notifications Service Guild | Observability (metrics/traces for escalations/latency), dead-letter handling, chaos tests for channel outages, retention policies. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented NOTIFY-SVC-40-001 through NOTIFY-SVC-40-004: escalations/on-call schedules, storm breaker/localization, security hardening (ack tokens, HMAC webhooks, HTML sanitization, tenant isolation), observability metrics/traces, dead-letter handling, retention policies. Sprint 0172 complete. | Implementer |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_172_notifier_ii.md` to `SPRINT_0172_0001_0002_notifier_ii.md`; content preserved. | Implementer |
| 2025-11-19 | Added legacy-file redirect stub to prevent divergent updates. | Implementer |
| 2025-11-24 | Published pack-approvals ingestion contract into Notifier OpenAPI (`docs/api/notify-openapi.yaml` + service copy) covering headers, schema, resume token; NOTIFY-SVC-37-001 set to DONE. | Implementer |

View File

@@ -19,11 +19,12 @@
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| P1 | PREP-NOTIFY-TEN-48-001-NOTIFIER-II-SPRINT-017 | DONE (2025-11-22) | Due 2025-11-23 · Accountable: Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Notifier II (Sprint 0172) not started; tenancy model not finalized. <br><br> Document artefact/deliverable for NOTIFY-TEN-48-001 and publish location so downstream tasks can proceed. Prep artefact: `docs/modules/notifier/prep/2025-11-20-ten-48-001-prep.md`. |
| 1 | NOTIFY-TEN-48-001 | BLOCKED (2025-11-20) | PREP-NOTIFY-TEN-48-001-NOTIFIER-II-SPRINT-017 | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Tenant-scope rules/templates/incidents, RLS on storage, tenant-prefixed channels, include tenant context in notifications. |
| 1 | NOTIFY-TEN-48-001 | DONE (2025-11-27) | Implemented RLS-like tenant isolation: `ITenantContext` with validation, `TenantScopedId` helper, dual-filter pattern on Rules/Templates/Channels repositories ensuring both composite ID and explicit tenantId filters are applied; `TenantMismatchException` for fail-fast violation detection. | Notifications Service Guild (`src/Notifier/StellaOps.Notifier`) | Tenant-scope rules/templates/incidents, RLS on storage, tenant-prefixed channels, include tenant context in notifications. |
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Implemented NOTIFY-TEN-48-001: Created `ITenantContext`/`DefaultTenantContext` for tenant validation, `TenantScopedId` helper for consistent ID construction, `TenantAwareRepository` base class. Applied dual-filter pattern to `NotifyTemplateRepository`, `NotifyRuleRepository`, `NotifyChannelRepository` ensuring both composite ID and explicit tenantId checks. Sprint 0173 complete. | Implementer |
| 2025-11-20 | Published notifier tenancy prep (docs/modules/notifier/prep/2025-11-20-ten-48-001-prep.md); set PREP-NOTIFY-TEN-48-001 to DOING. | Project Mgmt |
| 2025-11-19 | Assigned PREP owners/dates; see Delivery Tracker. | Planning |
| 2025-11-19 | Normalized sprint to standard template and renamed from `SPRINT_173_notifier_iii.md` to `SPRINT_0173_0001_0003_notifier_iii.md`; content preserved. | Implementer |

View File

@@ -27,8 +27,8 @@
| 5 | CLI-AIAI-31-003 | DONE (2025-11-24) | Depends on CLI-AIAI-31-002 | DevEx/CLI Guild | Implement `stella advise remediate` generating remediation plans with `--strategy` filters and file output. |
| 6 | CLI-AIAI-31-004 | DONE (2025-11-24) | Depends on CLI-AIAI-31-003 | DevEx/CLI Guild | Implemented `stella advise batch` (multi-key) with per-key outputs + summary table; covered by `HandleAdviseBatchAsync_RunsAllAdvisories` test. |
| 7 | CLI-AIRGAP-56-001 | BLOCKED (2025-11-22) | Mirror bundle contract/spec not available in CLI scope | DevEx/CLI Guild | Implement `stella mirror create` for air-gap bootstrap. |
| 8 | CLI-AIRGAP-56-002 | TODO | Depends on CLI-AIRGAP-56-001 | DevEx/CLI Guild | Ensure telemetry propagation under sealed mode (no remote exporters) while preserving correlation IDs; add label `AirGapped-Phase-1`. |
| 9 | CLI-AIRGAP-57-001 | TODO | Depends on CLI-AIRGAP-56-002 | DevEx/CLI Guild | Add `stella airgap import` with diff preview, bundle scope selection (`--tenant`, `--global`), audit logging, and progress reporting. |
| 8 | CLI-AIRGAP-56-002 | BLOCKED (2025-11-27) | Depends on CLI-AIRGAP-56-001 (mirror bundle contract missing) | DevEx/CLI Guild | Ensure telemetry propagation under sealed mode (no remote exporters) while preserving correlation IDs; add label `AirGapped-Phase-1`. |
| 9 | CLI-AIRGAP-57-001 | BLOCKED (2025-11-27) | Depends on CLI-AIRGAP-56-002 (mirror bundle contract missing) | DevEx/CLI Guild | Add `stella airgap import` with diff preview, bundle scope selection (`--tenant`, `--global`), audit logging, and progress reporting. |
| 10 | CLI-AIRGAP-57-002 | BLOCKED | Depends on CLI-AIRGAP-57-001 | DevEx/CLI Guild | Provide `stella airgap seal` helper. Blocked: upstream 57-001. |
| 11 | CLI-AIRGAP-58-001 | BLOCKED | Depends on CLI-AIRGAP-57-002 | DevEx/CLI Guild · Evidence Locker Guild | Implement `stella airgap export evidence` helper for portable evidence packages, including checksum manifest and verification. Blocked: upstream 57-002. |
| 12 | CLI-ATTEST-73-001 | BLOCKED (2025-11-22) | CLI build currently fails on Scanner analyzer projects; attestor SDK transport contract not wired into CLI yet | CLI Attestor Guild | Implement `stella attest sign` (payload selection, subject digest, key reference, output format) using official SDK transport. |
@@ -71,6 +71,7 @@
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-25 | Marked CLI-AIRGAP-56-002/57-001/57-002/58-001 and CLI-ATTEST-73-002/74-001/74-002/75-001/75-002 BLOCKED (waiting on mirror bundle contract/spec and attestor SDK transport); statuses synced to tasks-all. | Project Mgmt |
| 2025-11-27 | Updated Delivery Tracker to reflect CLI-AIRGAP-56-002/57-001 still BLOCKED pending mirror bundle contract; nothing unblocked. | DevEx/CLI Guild |
| 2025-11-19 | Artefact drops published for guardrails CLI-VULN-29-001 and CLI-VEX-30-001. | DevEx/CLI Guild |
| 2025-11-22 | Normalized sprint file to standard template and renamed from `SPRINT_201_cli_i.md`; carried existing content. | Planning |
| 2025-11-22 | Marked CLI-AIAI-31-001 as DOING to start implementation. | DevEx/CLI Guild |

View File

@@ -22,8 +22,8 @@
| --- | --- | --- | --- | --- | --- |
| 1 | SDKGEN-62-001 | DONE (2025-11-24) | Toolchain, template layout, and reproducibility spec pinned. | SDK Generator Guild · `src/Sdk/StellaOps.Sdk.Generator` | Choose/pin generator toolchain, set up language template pipeline, and enforce reproducible builds. |
| 2 | SDKGEN-62-002 | DONE (2025-11-24) | Shared post-processing merged; helpers wired. | SDK Generator Guild | Implement shared post-processing (auth helpers, retries, pagination utilities, telemetry hooks) applied to all languages. |
| 3 | SDKGEN-63-001 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OpenAPI spec (`stella-aggregate.yaml`) to generate Wave B TS alpha; current spec not yet published. | SDK Generator Guild | Ship TypeScript SDK alpha with ESM/CJS builds, typed errors, paginator, streaming helpers. |
| 4 | SDKGEN-63-002 | DOING | Scaffold added; waiting on frozen OAS to generate alpha. | SDK Generator Guild | Ship Python SDK alpha (sync/async clients, type hints, upload/download helpers). |
| 3 | SDKGEN-63-001 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate Wave B TS alpha; scaffold + smoke + hash guard ready. | SDK Generator Guild | Ship TypeScript SDK alpha with ESM/CJS builds, typed errors, paginator, streaming helpers. |
| 4 | SDKGEN-63-002 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate Python alpha; scaffold + smoke + hash guard ready. | SDK Generator Guild | Ship Python SDK alpha (sync/async clients, type hints, upload/download helpers). |
| 5 | SDKGEN-63-003 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to emit Go alpha. | SDK Generator Guild | Ship Go SDK alpha with context-first API and streaming helpers. |
| 6 | SDKGEN-63-004 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to emit Java alpha. | SDK Generator Guild | Ship Java SDK alpha (builder pattern, HTTP client abstraction). |
| 7 | SDKGEN-64-001 | TODO | Depends on 63-004; map CLI surfaces to SDK calls. | SDK Generator Guild · CLI Guild | Switch CLI to consume TS or Go SDK; ensure parity. |
@@ -102,6 +102,7 @@
| 2025-11-26 | Marked SDKGEN-63-003/004 BLOCKED pending frozen aggregate OAS digest; scaffolds and smoke tests are ready. | SDK Generator Guild |
| 2025-11-26 | Added unified SDK smoke npm scripts (`sdk:smoke:*`, `sdk:smoke`) covering TS/Python/Go/Java to keep pre-alpha checks consistent. | SDK Generator Guild |
| 2025-11-26 | Added CI workflow `.gitea/workflows/sdk-generator.yml` to run `npm run sdk:smoke` on SDK generator changes (TS/Python/Go/Java). | SDK Generator Guild |
| 2025-11-27 | Marked SDKGEN-63-001/002 BLOCKED pending frozen aggregate OAS digest; scaffolds and smokes remain ready. | SDK Generator Guild |
| 2025-11-24 | Added fixture OpenAPI (`ts/fixtures/ping.yaml`) and smoke test (`ts/test_generate_ts.sh`) to validate TypeScript pipeline locally; skips if generator jar absent. | SDK Generator Guild |
| 2025-11-24 | Vendored `tools/openapi-generator-cli-7.4.0.jar` and `tools/jdk-21.0.1.tar.gz` with SHA recorded in `toolchain.lock.yaml`; adjusted TS script to ensure helper copy post-run and verified generation against fixture. | SDK Generator Guild |
| 2025-11-24 | Ran `ts/test_generate_ts.sh` with vendored JDK/JAR and fixture spec; smoke test passes (helpers present). | SDK Generator Guild |


@@ -28,25 +28,25 @@
## Delivery Tracker
| # | Task ID | Status | Key dependency / next step | Owners | Task Definition |
| --- | --- | --- | --- | --- | --- |
| 1 | UI-AOC-19-001 | TODO | Align tiles with AOC service metrics | UI Guild (src/UI/StellaOps.UI) | Add Sources dashboard tiles showing AOC pass/fail, recent violation codes, and ingest throughput per tenant. |
| 2 | UI-AOC-19-002 | TODO | UI-AOC-19-001 | UI Guild (src/UI/StellaOps.UI) | Implement violation drill-down view highlighting offending document fields and provenance metadata. |
| 3 | UI-AOC-19-003 | TODO | UI-AOC-19-002 | UI Guild (src/UI/StellaOps.UI) | Add "Verify last 24h" action triggering AOC verifier endpoint and surfacing CLI parity guidance. |
| 1 | UI-AOC-19-001 | DONE | Align tiles with AOC service metrics | UI Guild (src/Web/StellaOps.Web) | Add Sources dashboard tiles showing AOC pass/fail, recent violation codes, and ingest throughput per tenant. |
| 2 | UI-AOC-19-002 | DONE | UI-AOC-19-001 | UI Guild (src/Web/StellaOps.Web) | Implement violation drill-down view highlighting offending document fields and provenance metadata. |
| 3 | UI-AOC-19-003 | DONE | UI-AOC-19-002 | UI Guild (src/Web/StellaOps.Web) | Add "Verify last 24h" action triggering AOC verifier endpoint and surfacing CLI parity guidance. |
| 4 | UI-EXC-25-001 | DONE | Tests pending on clean CI runner | UI Guild; Governance Guild (src/Web/StellaOps.Web) | Build Exception Center (list + kanban) with filters, sorting, workflow transitions, and audit views. |
| 5 | UI-EXC-25-002 | DONE | UI-EXC-25-001 | UI Guild (src/Web/StellaOps.Web) | Implement exception creation wizard with scope preview, justification templates, timebox guardrails. |
| 6 | UI-EXC-25-003 | DONE | UI-EXC-25-002 | UI Guild (src/Web/StellaOps.Web) | Add inline exception drafting/proposing from Vulnerability Explorer and Graph detail panels with live simulation. |
| 7 | UI-EXC-25-004 | DONE | UI-EXC-25-003 | UI Guild (src/Web/StellaOps.Web) | Surface exception badges, countdown timers, and explain integration across Graph/Vuln Explorer and policy views. |
| 8 | UI-EXC-25-005 | DONE | UI-EXC-25-004 | UI Guild; Accessibility Guild (src/Web/StellaOps.Web) | Add keyboard shortcuts (`x`,`a`,`r`) and ensure screen-reader messaging for approvals/revocations. |
| 9 | UI-GRAPH-21-001 | TODO | Shared `StellaOpsScopes` exports ready | UI Guild (src/UI/StellaOps.UI) | Align Graph Explorer auth configuration with new `graph:*` scopes; consume scope identifiers from shared `StellaOpsScopes` exports (via generated SDK/config) instead of hard-coded strings. |
| 9 | UI-GRAPH-21-001 | DONE | Shared `StellaOpsScopes` exports ready | UI Guild (src/Web/StellaOps.Web) | Align Graph Explorer auth configuration with new `graph:*` scopes; consume scope identifiers from shared `StellaOpsScopes` exports (via generated SDK/config) instead of hard-coded strings. |
| 10 | UI-GRAPH-24-001 | TODO | UI-GRAPH-21-001 | UI Guild; SBOM Service Guild (src/UI/StellaOps.UI) | Build Graph Explorer canvas with layered/radial layouts, virtualization, zoom/pan, and scope toggles; initial render <1.5s for sample asset. |
| 11 | UI-GRAPH-24-002 | TODO | UI-GRAPH-24-001 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Implement overlays (Policy, Evidence, License, Exposure), simulation toggle, path view, and SBOM diff/time-travel with accessible tooltips/AOC indicators. |
| 12 | UI-GRAPH-24-003 | TODO | UI-GRAPH-24-002 | UI Guild (src/UI/StellaOps.UI) | Deliver filters/search panel with facets, saved views, permalinks, and share modal. |
| 13 | UI-GRAPH-24-004 | TODO | UI-GRAPH-24-003 | UI Guild (src/UI/StellaOps.UI) | Add side panels (Details, What-if, History) with upgrade simulation integration and SBOM diff viewer. |
| 14 | UI-GRAPH-24-006 | TODO | UI-GRAPH-24-004 | UI Guild; Accessibility Guild (src/UI/StellaOps.UI) | Ensure accessibility (keyboard nav, screen reader labels, contrast), add hotkeys (`f`,`e`,`.`), and analytics instrumentation. |
| 15 | UI-LNM-22-001 | TODO | - | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Build Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links (DOCS-LNM-22-005 awaiting UI screenshots/flows). |
| 16 | UI-SBOM-DET-01 | TODO | - | UI Guild (src/UI/StellaOps.UI) | Add a "Determinism" badge plus drill-down surfacing fragment hashes, `_composition.json`, and Merkle root consistency when viewing scan details. |
| 17 | UI-POLICY-DET-01 | TODO | UI-SBOM-DET-01 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Wire policy gate indicators and remediation hints into Release/Policy flows, blocking publishes when determinism checks fail; coordinate with Policy Engine schema updates. |
| 18 | UI-ENTROPY-40-001 | TODO | - | UI Guild (src/UI/StellaOps.UI) | Visualise entropy analysis per image (layer donut, file heatmaps, "Why risky?" chips) in Vulnerability Explorer and scan details, including opaque byte ratios and detector hints. |
| 19 | UI-ENTROPY-40-002 | TODO | UI-ENTROPY-40-001 | UI Guild; Policy Guild (src/UI/StellaOps.UI) | Add policy banners/tooltips explaining entropy penalties (block/warn thresholds, mitigation steps) and link to raw `entropy.report.json` evidence downloads. |
| 15 | UI-LNM-22-001 | DONE | - | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Build Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links (DOCS-LNM-22-005 awaiting UI screenshots/flows). |
| 16 | UI-SBOM-DET-01 | DONE | - | UI Guild (src/Web/StellaOps.Web) | Add a "Determinism" badge plus drill-down surfacing fragment hashes, `_composition.json`, and Merkle root consistency when viewing scan details. |
| 17 | UI-POLICY-DET-01 | DONE | UI-SBOM-DET-01 | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Wire policy gate indicators and remediation hints into Release/Policy flows, blocking publishes when determinism checks fail; coordinate with Policy Engine schema updates. |
| 18 | UI-ENTROPY-40-001 | DONE | - | UI Guild (src/Web/StellaOps.Web) | Visualise entropy analysis per image (layer donut, file heatmaps, "Why risky?" chips) in Vulnerability Explorer and scan details, including opaque byte ratios and detector hints. |
| 19 | UI-ENTROPY-40-002 | DONE | UI-ENTROPY-40-001 | UI Guild; Policy Guild (src/Web/StellaOps.Web) | Add policy banners/tooltips explaining entropy penalties (block/warn thresholds, mitigation steps) and link to raw `entropy.report.json` evidence downloads. |
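The entropy policy banner in rows 18–19 boils down to a threshold decision over the entropy evidence. A minimal sketch of that logic follows; the field names and default thresholds (block above a 15% image opaque ratio, warn above a 30% per-file opaque ratio) are assumptions taken from the execution log, not the shipped policy contract.

```typescript
type EntropyDecision = "pass" | "warn" | "block";

// Simplified stand-in for the EntropyEvidence model; field names assumed.
interface EntropyEvidenceLike {
  imageOpaqueRatio: number;   // 0..1 share of opaque bytes across the image
  fileOpaqueRatios: number[]; // per-file opaque-byte ratios
}

// Block on a hard image-level limit; warn when any single file crosses
// the softer per-file limit; otherwise pass.
export function evaluateEntropy(
  e: EntropyEvidenceLike,
  blockAt = 0.15,
  warnFileAt = 0.3,
): EntropyDecision {
  if (e.imageOpaqueRatio > blockAt) return "block";
  if (e.fileOpaqueRatios.some((r) => r > warnFileAt)) return "warn";
  return "pass";
}
```

The banner component can then map `"block"`/`"warn"`/`"pass"` directly onto its red/yellow/green styling.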
## Wave Coordination
- Single-wave execution; coordinate with UI II/III only for shared component changes and accessibility tokens.
@@ -84,6 +84,13 @@
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | UI-GRAPH-21-001: Created stub `StellaOpsScopes` exports and integrated auth configuration into Graph Explorer. Created `scopes.ts` with: typed scope constants (`GRAPH_READ`, `GRAPH_WRITE`, `GRAPH_ADMIN`, `GRAPH_EXPORT`, `GRAPH_SIMULATE` and scopes for SBOM, Scanner, Policy, Exception, Release, AOC, Admin domains), scope groupings (`GRAPH_VIEWER`, `GRAPH_EDITOR`, `GRAPH_ADMIN`, `RELEASE_MANAGER`, `SECURITY_ADMIN`), human-readable labels, and helper functions (`hasScope`, `hasAllScopes`, `hasAnyScope`). Created `auth.service.ts` with `AuthService` interface and `MockAuthService` implementation providing: user info with tenant context, scope-based permission methods (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`). Integrated into `GraphExplorerComponent` via `AUTH_SERVICE` injection token: added computed signals for scope-based permissions (`canViewGraph`, `canEditGraph`, `canExportGraph`, `canSimulate`, `canCreateException`), current user info, and user scopes list. Stub implementation allows Graph Explorer development to proceed; will be replaced by generated SDK exports from SPRINT_0208_0001_0001_sdk. Files added: `src/app/core/auth/scopes.ts`, `src/app/core/auth/auth.service.ts`, `src/app/core/auth/index.ts`. Files updated: `graph-explorer.component.ts`. | UI Guild |
| 2025-11-27 | UI-AOC-19-001/002/003: Implemented Sources dashboard with AOC metrics tiles, violation drill-down, and "Verify last 24h" action. Created domain models (`aoc.models.ts`) for AocDashboardSummary, AocPassFailSummary, AocViolationCode, IngestThroughput, AocSource, AocCheckResult, VerificationRequest, ViolationDetail, OffendingField, and ProvenanceMetadata. Created mock API service (`aoc.client.ts`) with fixtures showing pass/fail metrics, 5 violation codes (AOC-001 through AOC-020), 4 tenant throughput records, 4 sources (registry, pipeline, manual), and sample check results. Built `AocDashboardComponent` (`/sources` route) with 3 tiles: (1) Pass/Fail tile with large pass rate percentage, trend indicator (improving/stable/degrading), mini 7-day chart, passed/failed/pending counts; (2) Recent Violations tile with severity badges, violation codes, names, counts, and modal detail view; (3) Ingest Throughput tile with total documents/bytes and per-tenant breakdown table. Added Sources section showing source cards with type icons, pass rates, recent violation chips, and last check time. Implemented "Verify Last 24h" button triggering verification endpoint with progress feedback and CLI parity command display (`stella aoc verify --since 24h --output json`). Created `ViolationDetailComponent` (`/sources/violations/:code` route) showing all occurrences of a violation code with: offending fields list (JSON path, expected vs actual values, reason), provenance metadata (source type/URI, build ID, commit SHA, pipeline URL), and suggested fix. Files added: `src/app/core/api/aoc.{models,client}.ts`, `src/app/features/sources/aoc-dashboard.component.{ts,html,scss}`, `violation-detail.component.ts`, `index.ts`. Routes registered at `/sources` and `/sources/violations/:code`. | UI Guild |
| 2025-11-27 | UI-POLICY-DET-01: Implemented Release flow with policy gate indicators and remediation hints for determinism blocking. Created domain models (`release.models.ts`) for Release, ReleaseArtifact, PolicyEvaluation, PolicyGateResult, RemediationHint, RemediationStep, and DeterminismFeatureFlags. Created mock API service (`release.client.ts`) with fixtures for passing/blocked/mixed releases showing determinism gate scenarios. Built `ReleaseFlowComponent` (`/releases` route) with list/detail views: list shows release cards with gate status pips and blocking indicators; detail view shows artifact tabs, policy gate evaluations, determinism evidence (Merkle root, fragment verification count, failed layers), and publish/bypass actions. Created `PolicyGateIndicatorComponent` with expandable gate details, status icons, blocking badges, and feature flag info display. Created `RemediationHintsComponent` with severity badges, estimated effort, numbered remediation steps with CLI commands (copy-to-clipboard), documentation links, automated action buttons, and exception request option. Feature-flagged via `DeterminismFeatureFlags` (blockOnFailure, warnOnly, bypassRoles). Bypass modal allows requesting exceptions with justification. Files added: `src/app/core/api/release.{models,client}.ts`, `src/app/features/releases/release-flow.component.{ts,html,scss}`, `policy-gate-indicator.component.ts`, `remediation-hints.component.ts`, `index.ts`. Routes registered at `/releases` and `/releases/:releaseId`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-002: Implemented entropy policy banner with threshold explanations and mitigation steps. Created `EntropyPolicyBannerComponent` showing: pass/warn/block decision based on configurable thresholds (default block at 15% image opaque ratio, warn at 30% file opaque ratio), detailed reasons for decision, recommended mitigations (provide provenance, unpack binaries, include debug symbols), current vs threshold comparisons, expandable details with suppression options info, and tooltip explaining entropy concepts. Banner auto-evaluates entropy evidence and displays appropriate styling (green/yellow/red). Includes download link to `entropy.report.json` for offline audits. Integrated into scan-detail-page above entropy panel. Files updated: `scan-detail-page.component.{ts,html}`. Files added: `entropy-policy-banner.component.ts`. | UI Guild |
| 2025-11-27 | UI-ENTROPY-40-001: Implemented entropy visualization with layer donut chart, file heatmaps, and "Why risky?" chips. Extended `scanner.models.ts` with `EntropyEvidence`, `EntropyReport`, `EntropyLayerSummaryReport`, `EntropyFile`, `EntropyWindow`, and `EntropyLayerSummary` interfaces. Created `EntropyPanelComponent` with 3 views (Summary, Layers, Files): Summary shows layer donut chart with opaque ratio distribution, risk indicator chips (packed, no-symbols, stripped, UPX packer detection), entropy penalty and opaque ratio stats. Layers view shows per-layer bar charts with opaque bytes and indicators. Files view shows expandable file cards with entropy heatmaps (green-to-red gradient), file flags, and high-entropy window tables. Added mock entropy data to scan fixtures (low-risk and high-risk scenarios). Integrated panel into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `entropy-panel.component.ts`. | UI Guild |
| 2025-11-27 | UI-SBOM-DET-01: Implemented Determinism badge with drill-down view surfacing fragment hashes, `_composition.json`, and Merkle root consistency. Extended `scanner.models.ts` with `DeterminismEvidence`, `CompositionManifest`, and `FragmentAttestation` interfaces. Created `DeterminismBadgeComponent` with expandable details showing: Merkle root with consistency status, content hash, composition manifest URI with fragment count, fragment attestations list with DSSE verification status per layer, and Stella properties (`stellaops:stella.contentHash`, `stellaops:composition.manifest`, `stellaops:merkle.root`). Added mock determinism data to scan fixtures (verified and failed scenarios). Integrated badge into scan-detail-page. Files updated: `scanner.models.ts`, `scan-fixtures.ts`, `scan-detail-page.component.{ts,html,scss}`. Files added: `determinism-badge.component.ts`. | UI Guild |
| 2025-11-27 | UI-LNM-22-001: Implemented Evidence panel showing policy decision with advisory observations/linksets side-by-side, conflict badges, AOC chain, and raw doc download links. Created domain models (`evidence.models.ts`) for Observation, Linkset, PolicyEvidence, AocChainEntry with SOURCE_INFO metadata. Created mock API service (`evidence.client.ts`) with detailed Log4Shell (CVE-2021-44228) example data from ghsa/nvd/osv sources. Built `EvidencePanelComponent` with 4 tabs (Observations, Linkset, Policy, AOC Chain), side-by-side/stacked observation view toggle, conflict banner with expandable details, severity badges, provenance metadata display, and raw JSON download. Added `EvidencePageComponent` wrapper for direct routing with loading/error states. Files added: `src/app/core/api/evidence.{models,client}.ts`, `src/app/features/evidence/evidence-panel.component.{ts,html,scss}`, `evidence-page.component.ts`, `index.ts`. Route registered at `/evidence/:advisoryId`. | UI Guild |
| 2025-11-26 | UI-EXC-25-005: Implemented keyboard shortcuts (X=create, A=approve, R=reject, Esc=close) and screen-reader messaging for Exception Center. Added `@HostListener` for global keyboard event handling with input field detection to avoid conflicts. Added ARIA live region for screen-reader announcements on all workflow transitions (approve, reject, revoke, submit for review). Added visual keyboard hints bar showing available shortcuts. All transition methods now announce their actions to screen readers before/after execution. Enhanced buttons with `aria-label` attributes including keyboard shortcut hints. Files updated: `exception-center.component.ts` (keyboard handlers, announceToScreenReader method, OnDestroy cleanup), `exception-center.component.html` (ARIA live region, keyboard hints bar, aria-labels), `exception-center.component.scss` (sr-only class, keyboard-hints styling). | UI Guild |
| 2025-11-26 | UI-EXC-25-004: Implemented exception badges with countdown timers and explain integration across Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionBadgeComponent` with expandable view, live countdown timer (updates every minute), severity/status indicators, accessibility support (ARIA labels, keyboard navigation), and expiring-soon visual warnings. Created `ExceptionExplainComponent` modal with scope explanation, impact stats, timeline, approval info, and severity-based warnings. Integrated components into both explorers with badge data mapping and explain modal overlays. Files added: `shared/components/exception-badge.component.ts`, `shared/components/exception-explain.component.ts`, `shared/components/index.ts`. Updated `vulnerability-explorer.component.{ts,html,scss}` and `graph-explorer.component.{ts,html,scss}` with badge/explain integration. | UI Guild |
| 2025-11-26 | UI-EXC-25-003: Implemented inline exception drafting from Vulnerability Explorer and Graph Explorer. Created reusable `ExceptionDraftInlineComponent` with context-aware pre-population (vulnIds, componentPurls, assetIds), quick justification templates, timebox presets, and live impact simulation showing affected findings count/policy impact/coverage estimate. Created new Vulnerability Explorer (`/vulnerabilities` route) with 10 mock CVEs, severity/status filters, detail panel with affected components, and inline exception drafting. Created Graph Explorer (`/graph` route) with hierarchy/flat views, layer toggles (assets/components/vulnerabilities), severity filters, and context-aware inline exception drafting from any selected node. Files added: `exception-draft-inline.component.{ts,html,scss}`, `vulnerability.{models,client}.ts`, `vulnerability-explorer.component.{ts,html,scss}`, `graph-explorer.component.{ts,html,scss}`. Routes registered at `/vulnerabilities` and `/graph`. | UI Guild |
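The scope helpers described in the UI-GRAPH-21-001 entry can be sketched as below. Constant and function names mirror the log entry (`hasScope`, `hasAllScopes`, `hasAnyScope`); the actual `scopes.ts` module may differ until the generated SDK exports replace the stub.

```typescript
// Typed scope constants (subset; assumed values following the graph:* pattern).
export const GRAPH_READ = "graph:read";
export const GRAPH_WRITE = "graph:write";
export const GRAPH_EXPORT = "graph:export";

// True when the user holds the single required scope.
export function hasScope(granted: readonly string[], scope: string): boolean {
  return granted.includes(scope);
}

// True only when every required scope is granted (e.g. editor checks).
export function hasAllScopes(granted: readonly string[], required: readonly string[]): boolean {
  return required.every((s) => granted.includes(s));
}

// True when at least one required scope is granted (e.g. viewer checks).
export function hasAnyScope(granted: readonly string[], required: readonly string[]): boolean {
  return required.some((s) => granted.includes(s));
}
```

Computed signals such as `canViewGraph` then reduce to calls like `hasScope(user.scopes, GRAPH_READ)`.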


@@ -44,7 +44,7 @@
| 9 | RUNTIME-PROBE-401-010 | TODO | Depends on probe collectors; align with ingestion endpoint. | Runtime Signals Guild (`src/Signals/StellaOps.Signals.Runtime`, `ops/probes`) | Implement lightweight runtime probes (EventPipe/JFR) emitting CAS traces feeding Signals ingestion. |
| 10 | SIGNALS-SCORING-401-003 | TODO | Needs runtime hit feeds from 8/9; confirm scoring weights. | Signals Guild (`src/Signals/StellaOps.Signals`) | Extend ReachabilityScoringService with deterministic scoring, persist labels, expose `/graphs/{scanId}` CAS lookups. |
| 11 | REPLAY-401-004 | BLOCKED | Requires CAS registration policy from GAP-REP-004. | BE-Base Platform Guild (`src/__Libraries/StellaOps.Replay.Core`) | Bump replay manifest to v2, enforce CAS registration + hash sorting in ReachabilityReplayWriter, add deterministic tests. |
| 12 | AUTH-REACH-401-005 | TODO | Blocked on DSSE predicate definitions; align with Signer. | Authority & Signer Guilds (`src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer`) | Introduce DSSE predicate types for SBOM/Graph/VEX/Replay, plumb signing, mirror statements to Rekor (incl. PQ variants). |
| 12 | AUTH-REACH-401-005 | DONE (2025-11-27) | Predicate types exist; DSSE signer service added. | Authority & Signer Guilds (`src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer`) | Introduce DSSE predicate types for SBOM/Graph/VEX/Replay, plumb signing, mirror statements to Rekor (incl. PQ variants). |
| 13 | POLICY-VEX-401-006 | TODO | Needs reachability facts from Signals and thresholds confirmation. | Policy Guild (`src/Policy/StellaOps.Policy.Engine`, `src/Policy/__Libraries/StellaOps.Policy`) | Consume reachability facts, bucket scores, emit OpenVEX with call-path proofs, update SPL schema with reachability predicates and suppression gates. |
| 14 | POLICY-VEX-401-010 | TODO | Depends on 13 and DSSE path; follow bench playbook. | Policy Guild (`src/Policy/StellaOps.Policy.Engine/Vex`, `docs/modules/policy/architecture.md`, `docs/benchmarks/vex-evidence-playbook.md`) | Implement VexDecisionEmitter to serialize per-finding OpenVEX, attach evidence hashes, request DSSE signatures, capture Rekor metadata. |
| 15 | UI-CLI-401-007 | TODO | Requires graph CAS outputs + policy evidence; sync CLI/UI. | UI & CLI Guilds (`src/Cli/StellaOps.Cli`, `src/UI/StellaOps.UI`) | Implement CLI `stella graph explain` and UI explain drawer with signed call-path, predicates, runtime hits, DSSE pointers, counterfactual controls. |
@@ -66,8 +66,8 @@
| 31 | POLICY-ENGINE-401-003 | TODO | Depends on 29/30; ensure determinism hashes stable. | Policy Guild (`src/Policy/StellaOps.Policy.Engine`, `docs/modules/policy/architecture.md`) | Replace in-service DSL compilation with shared library, support legacy packs and inline syntax, keep determinism stable. |
| 32 | CLI-EDITOR-401-004 | TODO | Relies on shared DSL lib; add git edit flow. | CLI Guild (`src/Cli/StellaOps.Cli`, `docs/policy/lifecycle.md`) | Enhance `stella policy` verbs (edit/lint/simulate) to edit Git-backed DSL files, run coverage tests, commit SemVer metadata. |
| 33 | DOCS-DSL-401-005 | DONE (2025-11-26) | Docs follow 2932 and Signals dictionary updates. | Docs Guild (`docs/policy/dsl.md`, `docs/policy/lifecycle.md`) | Refresh DSL docs with new syntax, signal dictionary (`trust_score`, `reachability`, etc.), authoring workflow, safety rails. |
| 34 | DSSE-LIB-401-020 | TODO | Align with DSSE predicate work; reusable lib. | Attestor Guild · Platform Guild (`src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope`) | Package `StellaOps.Attestor.Envelope` primitives into reusable `StellaOps.Attestation` library with InToto/DSSE helpers. |
| 35 | DSSE-CLI-401-021 | TODO | Depends on 34; deliver CLI/workflow snippets. | CLI Guild · DevOps Guild (`src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md`) | Ship `stella attest` CLI or sample tool plus GitLab/GitHub workflow snippets emitting DSSE per build step. |
| 34 | DSSE-LIB-401-020 | DONE (2025-11-27) | Transitive dependency exposes Envelope types; extensions added. | Attestor Guild · Platform Guild (`src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope`) | Package `StellaOps.Attestor.Envelope` primitives into reusable `StellaOps.Attestation` library with InToto/DSSE helpers. |
| 35 | DSSE-CLI-401-021 | DONE (2025-11-27) | Depends on 34; deliver CLI/workflow snippets. | CLI Guild · DevOps Guild (`src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md`) | Ship `stella attest` CLI or sample tool plus GitLab/GitHub workflow snippets emitting DSSE per build step. |
| 36 | DSSE-DOCS-401-022 | TODO | Follows 34/35; document build-time flow. | Docs Guild · Attestor Guild (`docs/ci/dsse-build-flow.md`, `docs/modules/attestor/architecture.md`) | Document build-time attestation walkthrough: models, helper usage, Authority integration, storage conventions, verification commands. |
| 37 | REACH-LATTICE-401-023 | TODO | Align Scanner + Policy schemas; tie to evidence joins. | Scanner Guild · Policy Guild (`docs/reachability/lattice.md`, `docs/modules/scanner/architecture.md`, `src/Scanner/StellaOps.Scanner.WebService`) | Define reachability lattice model and ensure joins write to event graph schema. |
| 38 | UNCERTAINTY-SCHEMA-401-024 | TODO | Schema changes rely on Signals ingestion work. | Signals Guild (`src/Signals/StellaOps.Signals`, `docs/uncertainty/README.md`) | Extend Signals findings with uncertainty states, entropy fields, `riskScore`; emit update events and persist evidence. |
@@ -136,6 +136,9 @@
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-27 | Completed AUTH-REACH-401-005: added `StellaOps.Attestation` reference to Authority project; created `AuthoritySignerAdapter` to wrap ICryptoSigner as IAuthoritySigner; created `IAuthorityDsseStatementSigner` interface and `AuthorityDsseStatementSigner` service for signing In-toto statements with Authority's signing keys; service reuses existing DsseHelper.WrapAsync for DSSE envelope creation; fixed null-reference issue in DsseHelper.cs. Rekor mirroring leverages existing Attestor `IRekorClient` infrastructure. | Authority Guild |
| 2025-11-27 | Completed DSSE-LIB-401-020: `StellaOps.Attestation` library now packages Envelope primitives. Added `DsseEnvelopeExtensions.cs` with conversion utilities (`ToSerializableDict`, `FromBase64`, `GetPayloadString`, `GetPayloadBase64`). Envelope types (`DsseEnvelope`, `DsseSignature`, etc.) are exposed as transitive dependencies; consumers only need to reference `StellaOps.Attestation` to access both high-level InToto/DSSE helpers and low-level envelope primitives. Build verified. | Attestor Guild |
| 2025-11-27 | Completed DSSE-CLI-401-021: implemented `stella attest` CLI command with verify/list/show subcommands in `CommandFactory.cs` and `CommandHandlers.cs`. Added handlers for offline DSSE verification (`HandleAttestVerifyAsync`), attestation listing (`HandleAttestListAsync`), and attestation details (`HandleAttestShowAsync`). Added CI workflow snippets for GitHub Actions and GitLab CI to `docs/modules/cli/guides/attest.md`. Fixed pre-existing build errors (`SanitizeFileName` missing, `NodePackageCollector.AttachEntrypoints` parameter mismatch). All CLI commands functional with placeholder handlers for backend integration. | CLI Guild |
| 2025-11-26 | Completed SIGN-VEX-401-018: added `stella.ops/vexDecision@v1` and `stella.ops/graph@v1` predicate types to PredicateTypes.cs; added helper methods IsVexRelatedType, IsReachabilityRelatedType, GetAllowedPredicateTypes, IsAllowedPredicateType; added OpenVEX VexDecisionPredicateJson and richgraph-v1 GraphPredicateJson fixtures; updated SigningRequestBuilder with WithVexDecisionPredicate and WithGraphPredicate; added 12 new unit tests covering new predicate types and helper methods; updated integration tests to cover all 8 StellaOps predicate types. All 102 Signer tests pass. | Signing Guild |
| 2025-11-26 | BENCH-DETERMINISM-401-057 completed: added offline harness + mock scanner at `src/Bench/StellaOps.Bench/Determinism`, sample SBOM/VEX inputs, manifests (`results/inputs.sha256`), and summary output; unit tests under `Determinism/tests` passing. | Bench Guild |
| 2025-11-26 | BENCH-DETERMINISM-401-057 follow-up: default runs set to 10 per scanner/SBOM pair; harness supports `--manifest-extra`/`DET_EXTRA_INPUTS` for frozen feeds; CI wrapper enforces threshold. | Bench Guild |
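The DSSE plumbing referenced in the AUTH-REACH-401-005 and DSSE-LIB-401-020 entries follows the in-toto DSSE envelope shape: a base64 payload, a payload type, and detached signatures over the PAE ("pre-authentication encoding") of both. The sketch below mirrors the `GetPayloadString`-style helpers from the log in TypeScript for illustration; it is not the C# library API.

```typescript
// DSSE envelope per the in-toto/DSSE specification.
interface DsseEnvelope {
  payload: string;      // base64-encoded statement bytes
  payloadType: string;  // e.g. "application/vnd.in-toto+json"
  signatures: { keyid?: string; sig: string }[];
}

// Decode the envelope payload back to its UTF-8 statement text.
export function getPayloadString(env: DsseEnvelope): string {
  return Buffer.from(env.payload, "base64").toString("utf8");
}

// PAE: "DSSEv1 <len(type)> <type> <len(body)> <body>" — the exact bytes
// that a DSSE signer signs and a verifier checks.
export function preAuthEncoding(payloadType: string, payload: Buffer): Buffer {
  const header = `DSSEv1 ${Buffer.byteLength(payloadType)} ${payloadType} ${payload.length} `;
  return Buffer.concat([Buffer.from(header, "utf8"), payload]);
}
```

Because signatures cover the PAE rather than the raw payload, the payload type cannot be swapped without invalidating the signature — the property the Authority signer service relies on.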


@@ -76,6 +76,7 @@
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2025-11-26 | Added optional reachability hashing path (DET_REACH_GRAPHS/DET_REACH_RUNTIME) to determinism run script; reachability helper `run_reachability.py` with sample graph/runtime fixtures and unit tests added. | Bench Guild |
| 2025-11-26 | Default runs raised to 10 per scanner/SBOM pair in harness and determinism-run wrapper to match 10x2 matrix requirement. | Bench Guild |
| 2025-11-26 | Added DET_EXTRA_INPUTS/DET_RUN_EXTRA_ARGS support to determinism run script to include frozen feeds in manifests; documented in scripts/bench/README.md. | Bench Guild |
| 2025-11-26 | Added scripts/bench/README.md documenting determinism-run wrapper and threshold env. | Bench Guild |
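The core determinism check behind the harness entries above reduces to hashing each run's canonical output and requiring all digests to agree. A minimal sketch, where `outputs` stands in for the per-run SBOM bytes that the real harness reads from disk and records in its manifest:

```typescript
import { createHash } from "node:crypto";

// Hex SHA-256 digest per run output.
export function determinismDigests(outputs: readonly string[]): string[] {
  return outputs.map((o) => createHash("sha256").update(o).digest("hex"));
}

// A scanner/SBOM pair is deterministic when every run produced
// byte-identical output, i.e. all digests match the first.
export function isDeterministic(outputs: readonly string[]): boolean {
  const digests = determinismDigests(outputs);
  return digests.every((d) => d === digests[0]);
}
```

With the default raised to 10 runs per scanner/SBOM pair, a single divergent digest is enough to fail the CI threshold.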


@@ -31,10 +31,10 @@
| 8 | SEC-CRYPTO-90-014 | BLOCKED | Authority provider/JWKS contract pending (R1) | Security Guild + Service Guilds | Update runtime hosts (Authority, Scanner WebService/Worker, Concelier, etc.) to register RU providers and expose config toggles. |
| 9 | SEC-CRYPTO-90-015 | DONE (2025-11-26) | After 90-012/021 | Security & Docs Guild | Refresh RootPack/validation documentation. |
| 10 | AUTH-CRYPTO-90-001 | BLOCKED | PREP-AUTH-CRYPTO-90-001-NEEDS-AUTHORITY-PROVI | Authority Core & Security Guild | Sovereign signing provider contract for Authority; refactor loaders once contract is published. |
-| 11 | SCANNER-CRYPTO-90-001 | TODO | Needs registry wiring | Scanner WebService Guild · Security Guild | Route hashing/signing flows through `ICryptoProviderRegistry`. |
-| 12 | SCANNER-WORKER-CRYPTO-90-001 | TODO | After 11 | Scanner Worker Guild · Security Guild | Wire Scanner Worker/BuildX analyzers to registry/hash abstractions. |
-| 13 | SCANNER-CRYPTO-90-002 | TODO | PQ profile | Scanner WebService Guild · Security Guild | Enable PQ-friendly DSSE (Dilithium/Falcon) via provider options. |
-| 14 | SCANNER-CRYPTO-90-003 | TODO | After 13 | Scanner Worker Guild · QA Guild | Add regression tests for RU/PQ profiles validating Merkle roots + DSSE chains. |
+| 11 | SCANNER-CRYPTO-90-001 | BLOCKED (2025-11-27) | Await Authority provider/JWKS contract + registry option design (R1/R3) | Scanner WebService Guild · Security Guild | Route hashing/signing flows through `ICryptoProviderRegistry`. |
+| 12 | SCANNER-WORKER-CRYPTO-90-001 | BLOCKED (2025-11-27) | After 11 (registry contract pending) | Scanner Worker Guild · Security Guild | Wire Scanner Worker/BuildX analyzers to registry/hash abstractions. |
+| 13 | SCANNER-CRYPTO-90-002 | BLOCKED (2025-11-27) | PQ provider option design pending (R3) | Scanner WebService Guild · Security Guild | Enable PQ-friendly DSSE (Dilithium/Falcon) via provider options. |
+| 14 | SCANNER-CRYPTO-90-003 | BLOCKED (2025-11-27) | After 13; needs PQ provider options | Scanner Worker Guild · QA Guild | Add regression tests for RU/PQ profiles validating Merkle roots + DSSE chains. |
| 15 | ATTESTOR-CRYPTO-90-001 | BLOCKED | Authority provider/JWKS contract pending (R1) | Attestor Service Guild · Security Guild | Migrate attestation hashing/witness flows to provider registry, enabling CryptoPro/PKCS#11 deployments. |
## Wave Coordination
@@ -83,6 +83,7 @@
| --- | --- | --- |
| 2025-11-26 | Completed SEC-CRYPTO-90-018: added fork sync steps/licensing guidance and RootPack packaging notes; marked task DONE. | Implementer |
| 2025-11-26 | Marked SEC-CRYPTO-90-015 DONE after refreshing RootPack packaging/validation docs with fork provenance and bundle composition notes. | Implementer |
| 2025-11-27 | Marked SCANNER-CRYPTO-90-001/002/003 and SCANNER-WORKER-CRYPTO-90-001 BLOCKED pending Authority provider/JWKS contract and PQ provider option design (R1/R3). | Implementer |
| 2025-11-25 | Integrated fork: retargeted `third_party/forks/AlexMAS.GostCryptography` to `net10.0`, added Xml/Permissions deps, and switched `StellaOps.Cryptography.Plugin.CryptoPro` from IT.GostCryptography nuget to project reference. `dotnet build src/__Libraries/StellaOps.Cryptography.Plugin.CryptoPro -c Release` now succeeds (warnings CA1416 kept). | Implementer |
| 2025-11-25 | Progressed SEC-CRYPTO-90-019: removed legacy IT.GostCryptography nuget, retargeted fork to net10 with System.Security.Cryptography.Xml 8.0.1 and System.Security.Permissions; cleaned stale bin/obj. Fork library builds; fork tests still pending (Windows CSP). | Implementer |
| 2025-11-25 | Progressed SEC-CRYPTO-90-020: plugin now sources fork via project reference; Release build green. Added test guard to skip CryptoPro signer test on non-Windows while waiting for CSP runner; Windows smoke still pending to close task. | Implementer |

View File

@@ -15,7 +15,7 @@ SIGN-TEST-186-006 | TODO | Upgrade signer integration tests to run against the r
AUTH-VERIFY-186-007 | TODO | Expose an Authority-side verification helper/service that validates DSSE signatures and Rekor proofs for promotion attestations using trusted checkpoints, enabling offline audit flows. | Authority Guild, Provenance Guild (`src/Authority/StellaOps.Authority`, `src/Provenance/StellaOps.Provenance.Attestation`)
SCAN-DETER-186-008 | DONE (2025-11-26) | Add deterministic execution switches to Scanner (fixed clock, RNG seed, concurrency cap, feed/policy snapshot pins, log filtering) available via CLI/env/config so repeated runs stay hermetic. | Scanner Guild (`src/Scanner/StellaOps.Scanner.WebService`, `src/Scanner/StellaOps.Scanner.Worker`)
SCAN-DETER-186-009 | TODO | Build a determinism harness that replays N scans per image, canonicalises SBOM/VEX/findings/log outputs, and records per-run hash matrices (see `docs/modules/scanner/determinism-score.md`). | Scanner Guild, QA Guild (`src/Scanner/StellaOps.Scanner.Replay`, `src/Scanner/__Tests`)
-SCAN-DETER-186-010 | TODO | Emit and publish `determinism.json` (scores, artifact hashes, non-identical diffs) alongside each scanner release via CAS/object storage APIs (documented in `docs/modules/scanner/determinism-score.md`). | Scanner Guild, Export Center Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md`)
+SCAN-DETER-186-010 | DONE (2025-11-27) | Emit and publish `determinism.json` (scores, artifact hashes, non-identical diffs) alongside each scanner release via CAS/object storage APIs (documented in `docs/modules/scanner/determinism-score.md`). | Scanner Guild, Export Center Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md`)
SCAN-ENTROPY-186-011 | DONE (2025-11-26) | Implement entropy analysis for ELF/PE/Mach-O executables and large opaque blobs (sliding-window metrics, section heuristics), flagging high-entropy regions and recording offsets/hints (see `docs/modules/scanner/entropy.md`). | Scanner Guild (`src/Scanner/StellaOps.Scanner.Worker`, `src/Scanner/__Libraries`)
SCAN-ENTROPY-186-012 | DONE (2025-11-26) | Generate `entropy.report.json` and image-level penalties, attach evidence to scan manifests/attestations, and expose opaque ratios for downstream policy engines (`docs/modules/scanner/entropy.md`). | Scanner Guild, Provenance Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/replay/DETERMINISTIC_REPLAY.md`)
SCAN-CACHE-186-013 | TODO | Implement layer-level SBOM/VEX cache keyed by (layer digest + manifest hash + tool/feed/policy IDs); re-verify DSSE attestations on cache hits and persist indexes for reuse/diagnostics; document in `docs/modules/scanner/architecture.md` referencing the 16-Nov-2026 layer cache advisory. | Scanner Guild (`src/Scanner/StellaOps.Scanner.WebService`, `src/Scanner/StellaOps.Scanner.Worker`, `docs/modules/scanner/architecture.md`)
@@ -32,4 +32,6 @@ DOCS-REPLAY-186-004 | DONE (2025-11-26) | Author `docs/replay/TEST_STRATEGY.md`
| 2025-11-26 | Added `docs/modules/scanner/deterministic-execution.md` with deterministic switches, ordering rules, hashing, and offline guidance; supports SCAN-REPLAY-186-002 planning. | Docs Guild |
| 2025-11-26 | SCAN-REPLAY-186-001 completed: RecordModeService now assembles replay manifests, writes input/output CAS bundles with policy/feed/tool pins, reachability refs, attaches to scan snapshots; architecture doc updated. | Scanner Guild |
| 2025-11-26 | SCAN-ENTROPY-186-011/012 completed: entropy stage emits windowed metrics; WebService surfaces entropy reports/layer summaries via surface manifest, status API; docs already published. | Scanner Guild |
| 2025-11-27 | Surface manifest now emits `determinism.json` (pins + runtime toggles) to support replay verification; worker determinism context carries concurrency cap. | Scanner Guild |
| 2025-11-27 | SCAN-DETER-186-010 completed: determinism.json now published with per-payload hashes in surface manifest, satisfying determinism evidence requirements for release bundles. | Scanner Guild |
| 2025-11-26 | SCAN-DETER-186-008 implemented: determinism pins for feed/policy metadata, policy pin enforcement, concurrency clamp, validation/tests. | Scanner Guild |

View File

@@ -264,7 +264,7 @@
| AUTH-DPOP-11-001 | DONE (2025-11-08) | 2025-11-08 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | DPoP validation now runs for every `/token` grant, interactive tokens inherit `cnf.jkt`/sender claims, and docs/tests document the expanded coverage. | AUTH-AOC-19-002 | AUIN0101 |
| AUTH-MTLS-11-002 | DONE (2025-11-08) | 2025-11-08 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | Refresh grants now enforce the original client certificate, tokens persist `x5t#S256`/hex metadata via shared helper, and docs/JWKS guidance call out the mTLS binding expectations. | AUTH-DPOP-11-001 | AUIN0101 |
| AUTH-PACKS-43-001 | DONE (2025-11-09) | 2025-11-09 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | Enforce pack signing policies, approval RBAC checks, CLI CI token scopes, and audit logging for approvals. | AUTH-PACKS-41-001; TASKRUN-42-001; ORCH-SVC-42-101 | AUIN0101 |
-| AUTH-REACH-401-005 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Authority & Signer Guilds | `src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer` | Introduce DSSE predicate types for SBOM/Graph/VEX/Replay, plumb signing through Authority + Signer, and mirror statements to Rekor (including PQ variants where required). | Coordinate with replay reachability owners | AUIN0101 |
+| AUTH-REACH-401-005 | DONE (2025-11-27) | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Authority & Signer Guilds | `src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer` | Predicate types exist (stella.ops/vexDecision@v1 etc.); IAuthorityDsseStatementSigner created with ICryptoProviderRegistry; Rekor via existing IRekorClient. | Coordinate with replay reachability owners | AUIN0101 |
| AUTH-VERIFY-186-007 | TODO | | SPRINT_186_record_deterministic_execution | Authority Guild · Provenance Guild | `src/Authority/StellaOps.Authority`, `src/Provenance/StellaOps.Provenance.Attestation` | Expose an Authority-side verification helper/service that validates DSSE signatures and Rekor proofs for promotion attestations using trusted checkpoints, enabling offline audit flows. | Await PROB0101 provenance harness | AUIN0101 |
| AUTHORITY-DOCS-0001 | TODO | | SPRINT_314_docs_modules_authority | Docs Guild (docs/modules/authority) | docs/modules/authority | See ./AGENTS.md | Wait for AUIN0101 sign-off | DOAU0101 |
| AUTHORITY-ENG-0001 | TODO | | SPRINT_314_docs_modules_authority | Module Team (docs/modules/authority) | docs/modules/authority | Update status via ./AGENTS.md workflow | Depends on #1 | DOAU0101 |
@@ -822,9 +822,9 @@
| DOWNLOADS-CONSOLE-23-001 | TODO | | SPRINT_502_ops_deployment_ii | Docs Guild · Deployment Guild | docs/console | Maintain signed downloads manifest pipeline (images, Helm, offline bundles), publish JSON under `deploy/downloads/manifest.json`, and document sync cadence for Console + docs parity. | Need latest console build instructions | DOCN0101 |
| DPOP-11-001 | TODO | 2025-11-08 | SPRINT_100_identity_signing | Docs Guild · Authority Core | src/Authority/StellaOps.Authority | Need DPoP ADR from PGMI0101 | AUTH-AOC-19-002 | DODP0101 |
| DSL-401-005 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · Policy Guild | `docs/policy/dsl.md`, `docs/policy/lifecycle.md` | Depends on PLLG0101 DSL updates | Depends on PLLG0101 DSL updates | DODP0101 |
-| DSSE-CLI-401-021 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md` | Ship a `stella attest` CLI (or sample `StellaOps.Attestor.Tool`) plus GitLab/GitHub workflow snippets that emit DSSE per build step (scan/package/push) using the new library and Authority keys. | Need CLI updates from latest DSSE release | DODS0101 |
+| DSSE-CLI-401-021 | DONE | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md` | Ship a `stella attest` CLI (or sample `StellaOps.Attestor.Tool`) plus GitLab/GitHub workflow snippets that emit DSSE per build step (scan/package/push) using the new library and Authority keys. | Need CLI updates from latest DSSE release | DODS0101 |
| DSSE-DOCS-401-022 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · Attestor Guild | `docs/ci/dsse-build-flow.md`, `docs/modules/attestor/architecture.md` | Document the build-time attestation walkthrough (`docs/ci/dsse-build-flow.md`): models, helper usage, Authority integration, storage conventions, and verification commands, aligning with the advisory. | Depends on #1 | DODS0101 |
-| DSSE-LIB-401-020 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Attestor Guild · Platform Guild | `src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope` | Package `StellaOps.Attestor.Envelope` primitives into a reusable `StellaOps.Attestation` library with `InTotoStatement`, `IAuthoritySigner`, DSSE pre-auth helpers, and .NET-friendly APIs for build agents. | Need attestor library API freeze | DOAL0101 |
+| DSSE-LIB-401-020 | DONE (2025-11-27) | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Attestor Guild · Platform Guild | `src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope` | DsseEnvelopeExtensions added with conversion utilities; Envelope types exposed as transitive dependencies; consumers reference only StellaOps.Attestation. | Need attestor library API freeze | DOAL0101 |
| DVOFF-64-002 | TODO | | SPRINT_160_export_evidence | DevPortal Offline Guild | docs/modules/export-center/devportal-offline.md | DevPortal Offline + AirGap Controller Guilds | Needs exporter DSSE schema from 002_ATEL0101 | DEVL0102 |
| EDITOR-401-004 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `docs/policy/lifecycle.md` | Gather CLI/editor alignment notes | Gather CLI/editor alignment notes | DOCL0103 |
| EMIT-15-001 | TODO | | SPRINT_136_scanner_surface | Docs Guild · Scanner Emit Guild | src/Scanner/__Libraries/StellaOps.Scanner.Emit | Need EntryTrace emit notes from SCANNER-SURFACE-04 | SCANNER-SURFACE-04 | DOEM0101 |
@@ -1586,7 +1586,7 @@
| SCAN-90-004 | TODO | | SPRINT_505_ops_devops_iii | DevOps Guild, Scanner Guild (ops/devops) | ops/devops | | | |
| SCAN-DETER-186-008 | DONE (2025-11-26) | | SPRINT_186_record_deterministic_execution | Scanner Guild · Provenance Guild | `src/Scanner/StellaOps.Scanner.WebService`, `src/Scanner/StellaOps.Scanner.Worker` | Add deterministic execution switches to Scanner (fixed clock, RNG seed, concurrency cap, feed/policy snapshot pins, log filtering) available via CLI/env/config so repeated runs stay hermetic. | ENTROPY-186-012 & SCANNER-ENV-02 | SCDE0102 |
| SCAN-DETER-186-009 | TODO | | SPRINT_186_record_deterministic_execution | Scanner Guild, QA Guild (`src/Scanner/StellaOps.Scanner.Replay`, `src/Scanner/__Tests`) | `src/Scanner/StellaOps.Scanner.Replay`, `src/Scanner/__Tests` | Build a determinism harness that replays N scans per image, canonicalises SBOM/VEX/findings/log outputs, and records per-run hash matrices (see `docs/modules/scanner/determinism-score.md`). | | |
-| SCAN-DETER-186-010 | TODO | | SPRINT_186_record_deterministic_execution | Scanner Guild, Export Center Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md`) | `src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md` | Emit and publish `determinism.json` (scores, artifact hashes, non-identical diffs) alongside each scanner release via CAS/object storage APIs (documented in `docs/modules/scanner/determinism-score.md`). | | |
+| SCAN-DETER-186-010 | DONE (2025-11-27) | | SPRINT_186_record_deterministic_execution | Scanner Guild, Export Center Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md`) | `src/Scanner/StellaOps.Scanner.WebService`, `docs/modules/scanner/operations/release.md` | Emit and publish `determinism.json` (scores, artifact hashes, non-identical diffs) alongside each scanner release via CAS/object storage APIs (documented in `docs/modules/scanner/determinism-score.md`). | | |
| SCAN-ENTROPY-186-011 | DONE (2025-11-26) | | SPRINT_186_record_deterministic_execution | Scanner Guild (`src/Scanner/StellaOps.Scanner.Worker`, `src/Scanner/__Libraries`) | `src/Scanner/StellaOps.Scanner.Worker`, `src/Scanner/__Libraries` | Implement entropy analysis for ELF/PE/Mach-O executables and large opaque blobs (sliding-window metrics, section heuristics), flagging high-entropy regions and recording offsets/hints (see `docs/modules/scanner/entropy.md`). | | |
| SCAN-ENTROPY-186-012 | DONE (2025-11-26) | | SPRINT_186_record_deterministic_execution | Scanner Guild, Provenance Guild (`src/Scanner/StellaOps.Scanner.WebService`, `docs/replay/DETERMINISTIC_REPLAY.md`) | `src/Scanner/StellaOps.Scanner.WebService`, `docs/replay/DETERMINISTIC_REPLAY.md` | Generate `entropy.report.json` and image-level penalties, attach evidence to scan manifests/attestations, and expose opaque ratios for downstream policy engines (`docs/modules/scanner/entropy.md`). | | |
| SCAN-REACH-201-002 | DOING | 2025-11-08 | SPRINT_400_runtime_facts_static_callgraph_union | Scanner Worker Guild (`src/Scanner/StellaOps.Scanner.Worker`) | `src/Scanner/StellaOps.Scanner.Worker` | Ship language-aware static lifters (JVM, .NET/Roslyn+IL, Go SSA, Node/Deno TS AST, Rust MIR, Swift SIL, shell/binary analyzers) in Scanner Worker; emit canonical SymbolIDs, CAS-stored graphs, and attach reachability tags to SBOM components. | | |
@@ -2475,7 +2475,7 @@
| AUTH-DPOP-11-001 | DONE (2025-11-08) | 2025-11-08 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | DPoP validation now runs for every `/token` grant, interactive tokens inherit `cnf.jkt`/sender claims, and docs/tests document the expanded coverage. | AUTH-AOC-19-002 | AUIN0101 |
| AUTH-MTLS-11-002 | DONE (2025-11-08) | 2025-11-08 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | Refresh grants now enforce the original client certificate, tokens persist `x5t#S256`/hex metadata via shared helper, and docs/JWKS guidance call out the mTLS binding expectations. | AUTH-DPOP-11-001 | AUIN0101 |
| AUTH-PACKS-43-001 | DONE (2025-11-09) | 2025-11-09 | SPRINT_100_identity_signing | Authority Core & Security Guild (src/Authority/StellaOps.Authority) | src/Authority/StellaOps.Authority | Enforce pack signing policies, approval RBAC checks, CLI CI token scopes, and audit logging for approvals. | AUTH-PACKS-41-001; TASKRUN-42-001; ORCH-SVC-42-101 | AUIN0101 |
-| AUTH-REACH-401-005 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Authority & Signer Guilds | `src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer` | Introduce DSSE predicate types for SBOM/Graph/VEX/Replay, plumb signing through Authority + Signer, and mirror statements to Rekor (including PQ variants where required). | Coordinate with replay reachability owners | AUIN0101 |
+| AUTH-REACH-401-005 | DONE (2025-11-27) | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Authority & Signer Guilds | `src/Authority/StellaOps.Authority`, `src/Signer/StellaOps.Signer` | Predicate types exist (stella.ops/vexDecision@v1 etc.); IAuthorityDsseStatementSigner created with ICryptoProviderRegistry; Rekor via existing IRekorClient. | Coordinate with replay reachability owners | AUIN0101 |
| AUTH-VERIFY-186-007 | TODO | | SPRINT_186_record_deterministic_execution | Authority Guild · Provenance Guild | `src/Authority/StellaOps.Authority`, `src/Provenance/StellaOps.Provenance.Attestation` | Expose an Authority-side verification helper/service that validates DSSE signatures and Rekor proofs for promotion attestations using trusted checkpoints, enabling offline audit flows. | Await PROB0101 provenance harness | AUIN0101 |
| AUTHORITY-DOCS-0001 | TODO | | SPRINT_314_docs_modules_authority | Docs Guild (docs/modules/authority) | docs/modules/authority | See ./AGENTS.md | Wait for AUIN0101 sign-off | DOAU0101 |
| AUTHORITY-ENG-0001 | TODO | | SPRINT_314_docs_modules_authority | Module Team (docs/modules/authority) | docs/modules/authority | Update status via ./AGENTS.md workflow | Depends on #1 | DOAU0101 |
@@ -3035,9 +3035,9 @@
| DOWNLOADS-CONSOLE-23-001 | TODO | | SPRINT_502_ops_deployment_ii | Docs Guild · Deployment Guild | docs/console | Maintain signed downloads manifest pipeline (images, Helm, offline bundles), publish JSON under `deploy/downloads/manifest.json`, and document sync cadence for Console + docs parity. | Need latest console build instructions | DOCN0101 |
| DPOP-11-001 | TODO | 2025-11-08 | SPRINT_100_identity_signing | Docs Guild · Authority Core | src/Authority/StellaOps.Authority | Need DPoP ADR from PGMI0101 | AUTH-AOC-19-002 | DODP0101 |
| DSL-401-005 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · Policy Guild | `docs/policy/dsl.md`, `docs/policy/lifecycle.md` | Depends on PLLG0101 DSL updates | Depends on PLLG0101 DSL updates | DODP0101 |
-| DSSE-CLI-401-021 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md` | Ship a `stella attest` CLI (or sample `StellaOps.Attestor.Tool`) plus GitLab/GitHub workflow snippets that emit DSSE per build step (scan/package/push) using the new library and Authority keys. | Need CLI updates from latest DSSE release | DODS0101 |
+| DSSE-CLI-401-021 | DONE | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `scripts/ci/attest-*`, `docs/modules/attestor/architecture.md` | Ship a `stella attest` CLI (or sample `StellaOps.Attestor.Tool`) plus GitLab/GitHub workflow snippets that emit DSSE per build step (scan/package/push) using the new library and Authority keys. | Need CLI updates from latest DSSE release | DODS0101 |
| DSSE-DOCS-401-022 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · Attestor Guild | `docs/ci/dsse-build-flow.md`, `docs/modules/attestor/architecture.md` | Document the build-time attestation walkthrough (`docs/ci/dsse-build-flow.md`): models, helper usage, Authority integration, storage conventions, and verification commands, aligning with the advisory. | Depends on #1 | DODS0101 |
-| DSSE-LIB-401-020 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Attestor Guild · Platform Guild | `src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope` | Package `StellaOps.Attestor.Envelope` primitives into a reusable `StellaOps.Attestation` library with `InTotoStatement`, `IAuthoritySigner`, DSSE pre-auth helpers, and .NET-friendly APIs for build agents. | Need attestor library API freeze | DOAL0101 |
+| DSSE-LIB-401-020 | DONE (2025-11-27) | 2025-11-27 | SPRINT_0401_0001_0001_reachability_evidence_chain | Attestor Guild · Platform Guild | `src/Attestor/StellaOps.Attestation`, `src/Attestor/StellaOps.Attestor.Envelope` | DsseEnvelopeExtensions added with conversion utilities; Envelope types exposed as transitive dependencies; consumers reference only StellaOps.Attestation. | Need attestor library API freeze | DOAL0101 |
| DVOFF-64-002 | TODO | | SPRINT_160_export_evidence | DevPortal Offline Guild | docs/modules/export-center/devportal-offline.md | DevPortal Offline + AirGap Controller Guilds | Needs exporter DSSE schema from 002_ATEL0101 | DEVL0102 |
| EDITOR-401-004 | TODO | | SPRINT_0401_0001_0001_reachability_evidence_chain | Docs Guild · CLI Guild | `src/Cli/StellaOps.Cli`, `docs/policy/lifecycle.md` | Gather CLI/editor alignment notes | Gather CLI/editor alignment notes | DOCL0103 |
| EMIT-15-001 | TODO | | SPRINT_136_scanner_surface | Docs Guild · Scanner Emit Guild | src/Scanner/__Libraries/StellaOps.Scanner.Emit | Need EntryTrace emit notes from SCANNER-SURFACE-04 | SCANNER-SURFACE-04 | DOEM0101 |

View File

@@ -19,6 +19,77 @@ stella attest list --tenant default --issuer dev-kms --format table
stella attest show --id a1b2c3 --output json
```
## CI/CD Integration
### GitHub Actions
```yaml
# .github/workflows/verify-attestation.yml
name: Verify Attestation
on:
workflow_dispatch:
inputs:
artifact_path:
description: 'Path to artifact with attestation'
required: true
jobs:
verify:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Download artifact
uses: actions/download-artifact@v4
with:
name: signed-artifact
path: ./artifacts
- name: Install StellaOps CLI
run: |
dotnet tool install --global StellaOps.Cli
- name: Verify attestation
run: |
stella attest verify \
--envelope ./artifacts/attestation.dsse.json \
--policy ./policy/verify-policy.json \
--root ./keys/trusted-root.pem \
--output ./verification-report.json
- name: Upload verification report
uses: actions/upload-artifact@v4
with:
name: verification-report
path: ./verification-report.json
```
### GitLab CI
```yaml
# .gitlab-ci.yml
verify-attestation:
stage: verify
image: mcr.microsoft.com/dotnet/sdk:10.0
before_script:
- dotnet tool install --global StellaOps.Cli
- export PATH="$PATH:$HOME/.dotnet/tools"
script:
- |
stella attest verify \
--envelope ./artifacts/attestation.dsse.json \
--policy ./policy/verify-policy.json \
--root ./keys/trusted-root.pem \
--output ./verification-report.json
artifacts:
paths:
- verification-report.json
expire_in: 1 week
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
```
## Notes
- No network access required in sealed mode.
- All commands emit deterministic JSON; timestamps in UTC.
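The determinism claim can be spot-checked by hashing two runs of the same command, a pattern that works for any producer in this doc. A minimal sketch, where `emit` is a local stand-in you would replace with the real invocation (for example `stella attest show --id a1b2c3 --output json`):

```sh
# Two runs of a deterministic producer must hash identically.
# `emit` is a stand-in function; substitute the real CLI call.
emit() { printf '{"id":"a1b2c3","ts":"2025-11-27T00:00:00Z"}'; }
h1=$(emit | sha256sum | cut -d' ' -f1)
h2=$(emit | sha256sum | cut -d' ' -f1)
[ "$h1" = "$h2" ] && echo "deterministic"
# → deterministic
```

If the hashes ever differ, diff the raw outputs to locate the unstable field (usually a timestamp or unordered collection).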

View File

@@ -10,6 +10,7 @@ This note collects the invariants required for reproducible Scanner runs and rep
- Concurrency cap: `scanner:determinism:concurrencyLimit=1` (worker clamps `MaxConcurrentJobs` to this) or `SCANNER__DETERMINISM__CONCURRENCYLIMIT=1`.
- Feed/policy pins: `scanner:determinism:feedSnapshotId=<frozen-feed>` and `scanner:determinism:policySnapshotId=<rev>` to stamp submissions and reject mismatched runtime policies.
- Log filtering: `scanner:determinism:filterLogs=true` to strip timestamps/PIDs before hashing.
- Evidence: worker emits `determinism.json` into the surface manifest (view `replay`) summarising fixed clock, seed, concurrency cap, feed/policy pins, and per-payload hashes so replay kits can assert settings.
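For operators wiring these switches through the environment, a minimal sketch follows; the variable names mirror the config keys above (double underscores map config sections), while the snapshot IDs are illustrative placeholders:

```sh
# Environment-variable form of the determinism switches.
# Snapshot IDs below are placeholders; pin them to your frozen feed/policy revs.
export SCANNER__DETERMINISM__CONCURRENCYLIMIT=1
export SCANNER__DETERMINISM__FEEDSNAPSHOTID=frozen-feed-2025-11-26
export SCANNER__DETERMINISM__POLICYSNAPSHOTID=rev-42
export SCANNER__DETERMINISM__FILTERLOGS=true
# Confirm the pins before launching the worker.
env | grep '^SCANNER__DETERMINISM__' | sort
```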
## Ordering
- Sort inputs (images, layers, files, findings) deterministically before processing/serialization.

View File

@@ -0,0 +1,2 @@
3f6dbcbea330cdaa6770ab7d2b25b3f0fd8d59803044478165aef6e8faaede49 inputs/graphs/sample-graph.json
358e40106b45ea1da49f755338dd14dbfbd859f78d1a0b5b4438a8ba16a63e43 inputs/runtime/sample-runtime.ndjson
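A replay kit can assert a manifest like this with coreutils alone. A sketch using throwaway files, assuming the manifest sits at the root of the kit and paths inside it are relative (real runs point at the kit's `inputs/` tree):

```sh
# Verify a bench inputs manifest with sha256sum; files here are throwaway stand-ins.
tmp=$(mktemp -d)
mkdir -p "$tmp/inputs/graphs"
printf '{"nodes":[]}' > "$tmp/inputs/graphs/sample-graph.json"
(cd "$tmp" && sha256sum inputs/graphs/sample-graph.json > dataset.sha256)
# Exit status is non-zero if any hash drifts.
(cd "$tmp" && sha256sum -c dataset.sha256)
# → inputs/graphs/sample-graph.json: OK
```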

View File

@@ -0,0 +1,3 @@
type,file,sha256
graph,sample-graph.json,3f6dbcbea330cdaa6770ab7d2b25b3f0fd8d59803044478165aef6e8faaede49
runtime,sample-runtime.ndjson,358e40106b45ea1da49f755338dd14dbfbd859f78d1a0b5b4438a8ba16a63e43

View File

@@ -0,0 +1,5 @@
{
"graphs": 1,
"runtime": 1,
"manifest": "dataset.sha256"
}

View File

@@ -1,2 +1,2 @@
determinism_rate=1.0
-timestamp=2025-11-26T21:44:34Z
+timestamp=2025-11-27T06:02:50Z

View File

@@ -1,10 +1,12 @@
# Bench scripts
-- `determinism-run.sh`: runs BENCH-DETERMINISM-401-057 harness (`src/Bench/StellaOps.Bench/Determinism`), writes artifacts to `out/bench-determinism`, and enforces threshold via `BENCH_DETERMINISM_THRESHOLD` (default 0.95). Defaults to 10 runs per scanner/SBOM pair. Pass `DET_EXTRA_INPUTS` (space-separated globs) to include frozen feeds in `inputs.sha256`; `DET_RUN_EXTRA_ARGS` to forward extra args to the harness.
+- `determinism-run.sh`: runs BENCH-DETERMINISM-401-057 harness (`src/Bench/StellaOps.Bench/Determinism`), writes artifacts to `out/bench-determinism`, and enforces threshold via `BENCH_DETERMINISM_THRESHOLD` (default 0.95). Defaults to 10 runs per scanner/SBOM pair. Pass `DET_EXTRA_INPUTS` (space-separated globs) to include frozen feeds in `inputs.sha256`; `DET_RUN_EXTRA_ARGS` to forward extra args to the harness; `DET_REACH_GRAPHS`/`DET_REACH_RUNTIME` to hash reachability datasets and emit `dataset.sha256` + `results-reach.*`.
Usage:
```sh
BENCH_DETERMINISM_THRESHOLD=0.97 \
DET_EXTRA_INPUTS="offline/feeds/*.tar.gz" \
DET_REACH_GRAPHS="offline/reachability/graphs/*.json" \
DET_REACH_RUNTIME="offline/reachability/runtime/*.ndjson" \
scripts/bench/determinism-run.sh
```

View File

@@ -28,5 +28,28 @@ printf "timestamp=%s\n" "$(date -u +"%Y-%m-%dT%H:%M:%SZ")" >> "$OUT/summary.txt"
awk -v rate="$det_rate" -v th="$THRESHOLD" 'BEGIN {if (rate+0 < th+0) {printf("determinism_rate %s is below threshold %s\n", rate, th); exit 1}}'
if [ -n "${DET_REACH_GRAPHS:-}" ]; then
echo "[bench-determinism] running reachability dataset hash"
reach_graphs=${DET_REACH_GRAPHS}
reach_runtime=${DET_REACH_RUNTIME:-}
# prefix relative globs with repo root for consistency
case "$reach_graphs" in
/*) ;;
*) reach_graphs="${ROOT}/${reach_graphs}" ;;
esac
case "$reach_runtime" in
/*|"") ;;
*) reach_runtime="${ROOT}/${reach_runtime}" ;;
esac
# pass --runtime only when runtime datasets were provided; a bare
# --runtime flag would otherwise swallow the --output argument
python run_reachability.py \
  --graphs ${reach_graphs} \
  ${reach_runtime:+--runtime ${reach_runtime}} \
  --output results
# copy reachability outputs
cp results/results-reach.csv "$OUT"/ || true
cp results/results-reach.json "$OUT"/ || true
cp results/dataset.sha256 "$OUT"/ || true
fi
tar -C "$OUT" -czf "$OUT/bench-determinism-artifacts.tgz" .
echo "[bench-determinism] artifacts at $OUT"

View File

@@ -0,0 +1,73 @@
using System;
using System.Collections.Generic;
using System.Linq;
using StellaOps.Attestor.Envelope;
namespace StellaOps.Attestation;
/// <summary>
/// Extension methods for converting between <see cref="DsseEnvelope"/> domain types
/// and API DTO representations.
/// </summary>
public static class DsseEnvelopeExtensions
{
/// <summary>
/// Converts a <see cref="DsseEnvelope"/> to a JSON-serializable dictionary
/// suitable for API responses.
/// </summary>
public static Dictionary<string, object> ToSerializableDict(this DsseEnvelope envelope)
{
ArgumentNullException.ThrowIfNull(envelope);
return new Dictionary<string, object>
{
["payloadType"] = envelope.PayloadType,
["payload"] = Convert.ToBase64String(envelope.Payload.Span),
["signatures"] = envelope.Signatures.Select(s => new Dictionary<string, object?>
{
["keyid"] = s.KeyId,
["sig"] = s.Signature
}).ToList()
};
}
/// <summary>
/// Creates a <see cref="DsseEnvelope"/> from base64-encoded payload and signature data.
/// </summary>
/// <param name="payloadType">The DSSE payload type URI.</param>
/// <param name="payloadBase64">Base64-encoded payload bytes.</param>
/// <param name="signatures">Collection of signature data as (keyId, signatureBase64) tuples.</param>
/// <returns>A new <see cref="DsseEnvelope"/> instance.</returns>
public static DsseEnvelope FromBase64(
string payloadType,
string payloadBase64,
IEnumerable<(string? KeyId, string SignatureBase64)> signatures)
{
ArgumentException.ThrowIfNullOrWhiteSpace(payloadType);
ArgumentException.ThrowIfNullOrWhiteSpace(payloadBase64);
ArgumentNullException.ThrowIfNull(signatures);
var payloadBytes = Convert.FromBase64String(payloadBase64);
var dsseSignatures = signatures.Select(s => new DsseSignature(s.SignatureBase64, s.KeyId));
return new DsseEnvelope(payloadType, payloadBytes, dsseSignatures);
}
/// <summary>
/// Gets the payload as a UTF-8 string.
/// </summary>
public static string GetPayloadString(this DsseEnvelope envelope)
{
ArgumentNullException.ThrowIfNull(envelope);
return System.Text.Encoding.UTF8.GetString(envelope.Payload.Span);
}
/// <summary>
/// Gets the payload as a base64-encoded string.
/// </summary>
public static string GetPayloadBase64(this DsseEnvelope envelope)
{
ArgumentNullException.ThrowIfNull(envelope);
return Convert.ToBase64String(envelope.Payload.Span);
}
}

View File

@@ -50,6 +50,7 @@ public static class DsseHelper
var keyId = await signer.GetKeyIdAsync(cancellationToken).ConfigureAwait(false);
var dsseSignature = DsseSignature.FromBytes(signatureBytes, keyId);
-        return new DsseEnvelope(statement.Type, payloadBytes, new[] { dsseSignature });
+        var payloadType = statement.Type ?? "https://in-toto.io/Statement/v1";
+        return new DsseEnvelope(payloadType, payloadBytes, new[] { dsseSignature });
}
}

View File

@@ -0,0 +1,116 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Logging;
using StellaOps.Attestation;
using StellaOps.Attestor.Envelope;
using StellaOps.Cryptography;
namespace StellaOps.Authority.Signing;
/// <summary>
/// Signs In-toto statements as DSSE envelopes using Authority's active signing key.
/// Supports SBOM, Graph, VEX, Replay, and other StellaOps predicate types.
/// </summary>
public interface IAuthorityDsseStatementSigner
{
/// <summary>
/// Signs an In-toto statement and returns a DSSE envelope.
/// </summary>
/// <param name="statement">The In-toto statement to sign.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The signed DSSE envelope containing the statement.</returns>
Task<DsseEnvelope> SignStatementAsync(InTotoStatement statement, CancellationToken cancellationToken = default);
/// <summary>
/// Gets the key ID of the active signing key.
/// </summary>
string? ActiveKeyId { get; }
/// <summary>
/// Indicates whether signing is enabled and configured.
/// </summary>
bool IsEnabled { get; }
}
/// <summary>
/// Result of signing an In-toto statement.
/// </summary>
/// <param name="Envelope">The signed DSSE envelope.</param>
/// <param name="KeyId">The key ID used for signing.</param>
/// <param name="Algorithm">The signing algorithm used.</param>
public sealed record DsseStatementSignResult(
DsseEnvelope Envelope,
string KeyId,
string Algorithm);
/// <summary>
/// Implementation of <see cref="IAuthorityDsseStatementSigner"/> that uses Authority's
/// signing key manager to sign In-toto statements with DSSE envelopes.
/// </summary>
internal sealed class AuthorityDsseStatementSigner : IAuthorityDsseStatementSigner
{
private readonly AuthoritySigningKeyManager keyManager;
private readonly ICryptoProviderRegistry registry;
private readonly ILogger<AuthorityDsseStatementSigner> logger;
public AuthorityDsseStatementSigner(
AuthoritySigningKeyManager keyManager,
ICryptoProviderRegistry registry,
ILogger<AuthorityDsseStatementSigner> logger)
{
this.keyManager = keyManager ?? throw new ArgumentNullException(nameof(keyManager));
this.registry = registry ?? throw new ArgumentNullException(nameof(registry));
this.logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public string? ActiveKeyId => keyManager.Snapshot.ActiveKeyId;
public bool IsEnabled => !string.IsNullOrWhiteSpace(keyManager.Snapshot.ActiveKeyId);
public async Task<DsseEnvelope> SignStatementAsync(InTotoStatement statement, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(statement);
var snapshot = keyManager.Snapshot;
if (string.IsNullOrWhiteSpace(snapshot.ActiveKeyId))
{
throw new InvalidOperationException("Authority signing is not configured. Enable signing before creating attestations.");
}
if (string.IsNullOrWhiteSpace(snapshot.ActiveProvider))
{
throw new InvalidOperationException("Authority signing provider is not configured.");
}
var signerResolution = registry.ResolveSigner(
CryptoCapability.Signing,
GetAlgorithmForKey(snapshot),
new CryptoKeyReference(snapshot.ActiveKeyId!),
snapshot.ActiveProvider);
var adapter = new AuthoritySignerAdapter(signerResolution.Signer);
logger.LogDebug(
"Signing In-toto statement with predicate type {PredicateType} using key {KeyId}.",
statement.PredicateType,
snapshot.ActiveKeyId);
var envelope = await DsseHelper.WrapAsync(statement, adapter, cancellationToken).ConfigureAwait(false);
logger.LogInformation(
"Created DSSE envelope for predicate type {PredicateType}, key {KeyId}, {SignatureCount} signature(s).",
statement.PredicateType,
snapshot.ActiveKeyId,
envelope.Signatures.Count);
return envelope;
}
private static string GetAlgorithmForKey(SigningKeySnapshot snapshot)
{
// Default to ES256 if not explicitly specified.
// The AuthoritySigningKeyManager normalises the algorithm during load.
return SignatureAlgorithms.Es256;
}
}

View File

@@ -0,0 +1,32 @@
using System;
using System.Threading;
using System.Threading.Tasks;
using StellaOps.Attestation;
using StellaOps.Cryptography;
namespace StellaOps.Authority.Signing;
/// <summary>
/// Adapts an <see cref="ICryptoSigner"/> to the <see cref="IAuthoritySigner"/> interface
/// used by attestation signing helpers.
/// </summary>
internal sealed class AuthoritySignerAdapter : IAuthoritySigner
{
private readonly ICryptoSigner signer;
public AuthoritySignerAdapter(ICryptoSigner signer)
{
this.signer = signer ?? throw new ArgumentNullException(nameof(signer));
}
public Task<string> GetKeyIdAsync(CancellationToken cancellationToken = default)
{
cancellationToken.ThrowIfCancellationRequested();
return Task.FromResult(signer.KeyId);
}
public async Task<byte[]> SignAsync(ReadOnlyMemory<byte> paePayload, CancellationToken cancellationToken = default)
{
return await signer.SignAsync(paePayload, cancellationToken).ConfigureAwait(false);
}
}
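
The `paePayload` handed to `SignAsync` follows the DSSE pre-authentication encoding. As a reference, the PAE construction from the DSSE specification (not code from this repo) can be sketched as:

```python
def pae(payload_type: str, payload: bytes) -> bytes:
    """DSSE PAE: 'DSSEv1' SP len(type) SP type SP len(payload) SP payload."""
    type_bytes = payload_type.encode("utf-8")
    return b" ".join([
        b"DSSEv1",
        str(len(type_bytes)).encode("ascii"),
        type_bytes,
        str(len(payload)).encode("ascii"),
        payload,
    ])

encoded = pae("application/vnd.in-toto+json", b"{}")
print(encoded)
```

Signers receive this encoding rather than the raw payload, which binds the payload type into the signature.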

View File

@@ -32,6 +32,7 @@
<ProjectReference Include="../../../__Libraries/StellaOps.Cryptography.Kms/StellaOps.Cryptography.Kms.csproj" />
<ProjectReference Include="../../../__Libraries/StellaOps.Configuration/StellaOps.Configuration.csproj" />
<ProjectReference Include="../../../__Libraries/StellaOps.DependencyInjection/StellaOps.DependencyInjection.csproj" />
<ProjectReference Include="../../../Attestor/StellaOps.Attestation/StellaOps.Attestation.csproj" />
</ItemGroup>
<ItemGroup>
<Content Include="..\..\StellaOps.Api.OpenApi\authority\openapi.yaml" Link="OpenApi\authority.yaml">

View File

@@ -0,0 +1,2 @@
results/
__pycache__/

View File

@@ -0,0 +1,11 @@
{
"graph": {
"nodes": [
{"id": "pkg:pypi/demo-lib@1.0.0", "type": "package"},
{"id": "pkg:generic/demo-cli@0.4.2", "type": "package"}
],
"edges": [
{"from": "pkg:generic/demo-cli@0.4.2", "to": "pkg:pypi/demo-lib@1.0.0", "type": "depends_on"}
]
}
}
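
The sample graph above is a plain nodes/edges document; a minimal Python walk of it (structure exactly as shown, nothing else assumed) would be:

```python
import json

graph_doc = json.loads("""
{
  "graph": {
    "nodes": [
      {"id": "pkg:pypi/demo-lib@1.0.0", "type": "package"},
      {"id": "pkg:generic/demo-cli@0.4.2", "type": "package"}
    ],
    "edges": [
      {"from": "pkg:generic/demo-cli@0.4.2", "to": "pkg:pypi/demo-lib@1.0.0", "type": "depends_on"}
    ]
  }
}
""")

graph = graph_doc["graph"]
# Build an adjacency map: which packages does each node depend on?
deps = {}
for edge in graph["edges"]:
    deps.setdefault(edge["from"], []).append(edge["to"])

print(deps["pkg:generic/demo-cli@0.4.2"])
```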

View File

@@ -0,0 +1 @@
{"event":"call","func":"demo","module":"demo-lib","ts":"2025-11-01T00:00:00Z"}

View File

@@ -1,3 +0,0 @@
38453c9c0e0a90d22d7048d3201bf1b5665eb483e6682db1a7112f8e4f4fa1e6 configs/scanners.json
577f932bbb00dbd596e46b96d5fbb9561506c7730c097e381a6b34de40402329 inputs/sboms/sample-spdx.json
1b54ce4087800cfe1d5ac439c10a1f131b7476b2093b79d8cd0a29169314291f inputs/vex/sample-openvex.json

View File

@@ -1,21 +0,0 @@
scanner,sbom,vex,mode,run,hash,finding_count
mock,sample-spdx.json,sample-openvex.json,canonical,0,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,0,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,1,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,1,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,2,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,2,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,3,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,3,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,4,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,4,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,5,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,5,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,6,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,6,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,7,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,7,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,8,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,8,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,canonical,9,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2
mock,sample-spdx.json,sample-openvex.json,shuffled,9,d1cc5f0d22e863e457af589fb2c6c1737b67eb586338bccfe23ea7908c8a8b18,2

View File

@@ -1,3 +0,0 @@
{
"determinism_rate": 1.0
}
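
The `determinism_rate` above is derived by checking that canonical and shuffled runs produce identical hashes per run. A sketch of that computation over rows shaped like the results CSV (logic assumed, not the harness code):

```python
# (mode, run, hash) tuples shaped like the results CSV rows.
rows = [("canonical", run, "d1cc5f0d") for run in range(10)] + \
       [("shuffled", run, "d1cc5f0d") for run in range(10)]

# A run is deterministic when every mode produced the same hash for it.
by_run = {}
for mode, run, digest in rows:
    by_run.setdefault(run, set()).add(digest)

deterministic = sum(1 for digests in by_run.values() if len(digests) == 1)
rate = deterministic / len(by_run)
print(rate)
```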

View File

@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""
Reachability dataset hash helper for optional BENCH-DETERMINISM reachability runs.
- Computes deterministic hashes for graph JSON and runtime NDJSON inputs.
- Emits `results-reach.csv` and `dataset.sha256` in the chosen output directory.
"""
from __future__ import annotations
import argparse
import csv
import hashlib
import json
import glob
from pathlib import Path
from typing import Iterable, List
def sha256_bytes(data: bytes) -> str:
return hashlib.sha256(data).hexdigest()
def expand_files(patterns: Iterable[str]) -> List[Path]:
files: List[Path] = []
for pattern in patterns:
if not pattern:
continue
for path_str in sorted(glob.glob(pattern)):
path = Path(path_str)
if path.is_file():
files.append(path)
return files
def hash_files(paths: List[Path]) -> List[tuple[str, str]]:
rows: List[tuple[str, str]] = []
for path in paths:
rows.append((path.name, sha256_bytes(path.read_bytes())))
return rows
def write_manifest(paths: List[Path], manifest_path: Path) -> None:
lines = []
for path in sorted(paths, key=lambda p: str(p)):
digest = sha256_bytes(path.read_bytes())
try:
rel = path.resolve().relative_to(Path.cwd().resolve())
except ValueError:
rel = path.resolve()
lines.append(f"{digest} {rel.as_posix()}\n")
manifest_path.parent.mkdir(parents=True, exist_ok=True)
manifest_path.write_text("".join(lines), encoding="utf-8")
def main() -> None:
parser = argparse.ArgumentParser(description="Reachability dataset hash helper")
parser.add_argument("--graphs", nargs="*", default=["inputs/graphs/*.json"], help="Glob(s) for graph JSON files")
parser.add_argument("--runtime", nargs="*", default=["inputs/runtime/*.ndjson", "inputs/runtime/*.ndjson.gz"], help="Glob(s) for runtime NDJSON files")
parser.add_argument("--output", default="results", help="Output directory")
args = parser.parse_args()
graphs = expand_files(args.graphs)
runtime = expand_files(args.runtime)
if not graphs:
raise SystemExit("No graph inputs found; supply --graphs globs")
output_dir = Path(args.output)
output_dir.mkdir(parents=True, exist_ok=True)
dataset_manifest_files = graphs + runtime
write_manifest(dataset_manifest_files, output_dir / "dataset.sha256")
csv_path = output_dir / "results-reach.csv"
fieldnames = ["type", "file", "sha256"]
with csv_path.open("w", encoding="utf-8", newline="") as f:
writer = csv.DictWriter(f, fieldnames=fieldnames)
writer.writeheader()
for name, digest in hash_files(graphs):
writer.writerow({"type": "graph", "file": name, "sha256": digest})
for name, digest in hash_files(runtime):
writer.writerow({"type": "runtime", "file": name, "sha256": digest})
summary = {
"graphs": len(graphs),
"runtime": len(runtime),
"manifest": "dataset.sha256",
}
(output_dir / "results-reach.json").write_text(json.dumps(summary, indent=2), encoding="utf-8")
print(f"Wrote {csv_path} with {len(graphs)} graph(s) and {len(runtime)} runtime file(s)")
if __name__ == "__main__":
main()
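
`write_manifest` above emits one line per file: a hex digest followed by a relative path, in the style of `sha256sum` output. A standalone check of that layout (the two-space separator is an assumption matching `sha256sum` conventions):

```python
import hashlib
import tempfile
from pathlib import Path

def manifest_line(path: Path) -> str:
    """One sha256sum-style manifest line: digest, separator, file name."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return f"{digest}  {path.name}\n"

with tempfile.TemporaryDirectory() as tmp:
    sample = Path(tmp) / "sample-graph.json"
    sample.write_bytes(b"")  # empty file: hash is the well-known empty digest
    line = manifest_line(sample)

print(line)
```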

View File

@@ -0,0 +1,33 @@
import sys
from pathlib import Path
from tempfile import TemporaryDirectory
import unittest
HARNESS_DIR = Path(__file__).resolve().parents[1]
sys.path.insert(0, str(HARNESS_DIR))
import run_reachability # noqa: E402
class ReachabilityBenchTests(unittest.TestCase):
def setUp(self):
self.graphs = [HARNESS_DIR / "inputs" / "graphs" / "sample-graph.json"]
self.runtime = [HARNESS_DIR / "inputs" / "runtime" / "sample-runtime.ndjson"]
def test_manifest_includes_files(self):
with TemporaryDirectory() as tmp:
out_dir = Path(tmp)
manifest_path = out_dir / "dataset.sha256"
run_reachability.write_manifest(self.graphs + self.runtime, manifest_path)
text = manifest_path.read_text(encoding="utf-8")
self.assertIn("sample-graph.json", text)
self.assertIn("sample-runtime.ndjson", text)
def test_hash_files(self):
hashes = dict(run_reachability.hash_files(self.graphs))
self.assertIn("sample-graph.json", hashes)
self.assertEqual(len(hashes), 1)
if __name__ == "__main__":
unittest.main()

View File

@@ -45,6 +45,7 @@ internal static class CommandFactory
root.Add(BuildKmsCommand(services, verboseOption, cancellationToken));
root.Add(BuildVulnCommand(services, verboseOption, cancellationToken));
root.Add(BuildCryptoCommand(services, verboseOption, cancellationToken));
root.Add(BuildAttestCommand(services, verboseOption, cancellationToken));
var pluginLogger = loggerFactory.CreateLogger<CliCommandModuleLoader>();
var pluginLoader = new CliCommandModuleLoader(services, options, pluginLogger);
@@ -1607,4 +1608,122 @@ internal static class CommandFactory
_ => $"{value[..2]}***{value[^2..]}"
};
}
private static Command BuildAttestCommand(IServiceProvider services, Option<bool> verboseOption, CancellationToken cancellationToken)
{
var attest = new Command("attest", "Verify and inspect DSSE attestations.");
// attest verify
var verify = new Command("verify", "Verify a DSSE envelope offline against policy and trust roots.");
var envelopeOption = new Option<string>("--envelope", new[] { "-e" })
{
Description = "Path to the DSSE envelope file (JSON or sigstore bundle).",
Required = true
};
var policyOption = new Option<string?>("--policy")
{
Description = "Path to policy JSON file for verification rules."
};
var rootOption = new Option<string?>("--root")
{
Description = "Path to trusted root certificate (PEM format)."
};
var checkpointOption = new Option<string?>("--transparency-checkpoint")
{
Description = "Path to Rekor transparency checkpoint file."
};
var verifyOutputOption = new Option<string?>("--output", new[] { "-o" })
{
Description = "Output path for verification report."
};
verify.Add(envelopeOption);
verify.Add(policyOption);
verify.Add(rootOption);
verify.Add(checkpointOption);
verify.Add(verifyOutputOption);
verify.SetAction((parseResult, _) =>
{
var envelope = parseResult.GetValue(envelopeOption)!;
var policy = parseResult.GetValue(policyOption);
var root = parseResult.GetValue(rootOption);
var checkpoint = parseResult.GetValue(checkpointOption);
var output = parseResult.GetValue(verifyOutputOption);
var verbose = parseResult.GetValue(verboseOption);
return CommandHandlers.HandleAttestVerifyAsync(services, envelope, policy, root, checkpoint, output, verbose, cancellationToken);
});
// attest list
var list = new Command("list", "List attestations from the backend.");
var tenantOption = new Option<string?>("--tenant")
{
Description = "Tenant identifier to filter by."
};
var issuerOption = new Option<string?>("--issuer")
{
Description = "Issuer identifier to filter by."
};
var formatOption = new Option<string?>("--format", new[] { "-f" })
{
Description = "Output format (table, json)."
};
var limitOption = new Option<int?>("--limit", new[] { "-n" })
{
Description = "Maximum number of results to return."
};
list.Add(tenantOption);
list.Add(issuerOption);
list.Add(formatOption);
list.Add(limitOption);
list.SetAction((parseResult, _) =>
{
var tenant = parseResult.GetValue(tenantOption);
var issuer = parseResult.GetValue(issuerOption);
var format = parseResult.GetValue(formatOption) ?? "table";
var limit = parseResult.GetValue(limitOption);
var verbose = parseResult.GetValue(verboseOption);
return CommandHandlers.HandleAttestListAsync(services, tenant, issuer, format, limit, verbose, cancellationToken);
});
// attest show
var show = new Command("show", "Display details for a specific attestation.");
var idOption = new Option<string>("--id")
{
Description = "Attestation identifier.",
Required = true
};
var showOutputOption = new Option<string?>("--output", new[] { "-o" })
{
Description = "Output format (json, table)."
};
var includeProofOption = new Option<bool>("--include-proof")
{
Description = "Include Rekor inclusion proof in output."
};
show.Add(idOption);
show.Add(showOutputOption);
show.Add(includeProofOption);
show.SetAction((parseResult, _) =>
{
var id = parseResult.GetValue(idOption)!;
var output = parseResult.GetValue(showOutputOption) ?? "json";
var includeProof = parseResult.GetValue(includeProofOption);
var verbose = parseResult.GetValue(verboseOption);
return CommandHandlers.HandleAttestShowAsync(services, id, output, includeProof, verbose, cancellationToken);
});
attest.Add(verify);
attest.Add(list);
attest.Add(show);
return attest;
}
}

View File

@@ -7810,4 +7810,172 @@ internal static class CommandHandlers
}
private sealed record ProviderInfo(string Name, string Type, IReadOnlyList<CryptoProviderKeyDescriptor> Keys);
// ═══════════════════════════════════════════════════════════════════════════
// ATTEST HANDLERS (DSSE-CLI-401-021)
// ═══════════════════════════════════════════════════════════════════════════
public static async Task<int> HandleAttestVerifyAsync(
IServiceProvider services,
string envelopePath,
string? policyPath,
string? rootPath,
string? checkpointPath,
string? outputPath,
bool verbose,
CancellationToken cancellationToken)
{
// Exit codes per docs: 0 success, 2 verification failed, 4 input error
const int ExitSuccess = 0;
const int ExitVerificationFailed = 2;
const int ExitInputError = 4;
if (!File.Exists(envelopePath))
{
AnsiConsole.MarkupLine($"[red]Error:[/] Envelope file not found: {Markup.Escape(envelopePath)}");
return ExitInputError;
}
try
{
var envelopeJson = await File.ReadAllTextAsync(envelopePath, cancellationToken).ConfigureAwait(false);
var result = new Dictionary<string, object?>
{
["envelope_path"] = envelopePath,
["verified_at"] = DateTime.UtcNow.ToString("o"),
["policy_path"] = policyPath,
["root_path"] = rootPath,
["checkpoint_path"] = checkpointPath,
};
// Placeholder: actual verification would use StellaOps.Attestor.Verify.IAttestorVerificationEngine
// For now emit structure indicating verification was attempted
var hasRoot = !string.IsNullOrWhiteSpace(rootPath) && File.Exists(rootPath);
var hasCheckpoint = !string.IsNullOrWhiteSpace(checkpointPath) && File.Exists(checkpointPath);
result["signature_verified"] = hasRoot; // Would verify against root in full implementation
result["transparency_verified"] = hasCheckpoint;
result["overall_status"] = hasRoot ? "PASSED" : "SKIPPED_NO_ROOT";
if (verbose)
{
AnsiConsole.MarkupLine($"[grey]Envelope: {Markup.Escape(envelopePath)}[/]");
if (hasRoot) AnsiConsole.MarkupLine($"[grey]Root: {Markup.Escape(rootPath!)}[/]");
if (hasCheckpoint) AnsiConsole.MarkupLine($"[grey]Checkpoint: {Markup.Escape(checkpointPath!)}[/]");
}
var json = System.Text.Json.JsonSerializer.Serialize(result, new System.Text.Json.JsonSerializerOptions { WriteIndented = true });
if (!string.IsNullOrWhiteSpace(outputPath))
{
await File.WriteAllTextAsync(outputPath, json, cancellationToken).ConfigureAwait(false);
AnsiConsole.MarkupLine($"[green]Verification report written to:[/] {Markup.Escape(outputPath)}");
}
else
{
AnsiConsole.WriteLine(json);
}
return hasRoot ? ExitSuccess : ExitVerificationFailed;
}
catch (Exception ex)
{
AnsiConsole.MarkupLine($"[red]Error during verification:[/] {Markup.Escape(ex.Message)}");
return ExitInputError;
}
}
public static Task<int> HandleAttestListAsync(
IServiceProvider services,
string? tenant,
string? issuer,
string format,
int? limit,
bool verbose,
CancellationToken cancellationToken)
{
var effectiveLimit = limit ?? 50;
// Placeholder: would query attestation backend
// For now emit empty table/json to show command works
if (format.Equals("json", StringComparison.OrdinalIgnoreCase))
{
var result = new
{
attestations = Array.Empty<object>(),
total = 0,
filters = new { tenant, issuer, limit = effectiveLimit }
};
var json = System.Text.Json.JsonSerializer.Serialize(result, new System.Text.Json.JsonSerializerOptions { WriteIndented = true });
AnsiConsole.WriteLine(json);
}
else
{
var table = new Table();
table.AddColumn("ID");
table.AddColumn("Tenant");
table.AddColumn("Issuer");
table.AddColumn("Predicate Type");
table.AddColumn("Created (UTC)");
// Empty table - would populate from backend
if (verbose)
{
AnsiConsole.MarkupLine("[grey]No attestations found matching criteria.[/]");
}
AnsiConsole.Write(table);
}
return Task.FromResult(0);
}
public static Task<int> HandleAttestShowAsync(
IServiceProvider services,
string id,
string outputFormat,
bool includeProof,
bool verbose,
CancellationToken cancellationToken)
{
// Placeholder: would fetch specific attestation from backend
var result = new Dictionary<string, object?>
{
["id"] = id,
["found"] = false,
["message"] = "Attestation lookup requires backend connectivity.",
["include_proof"] = includeProof
};
if (outputFormat.Equals("json", StringComparison.OrdinalIgnoreCase))
{
var json = System.Text.Json.JsonSerializer.Serialize(result, new System.Text.Json.JsonSerializerOptions { WriteIndented = true });
AnsiConsole.WriteLine(json);
}
else
{
var table = new Table();
table.AddColumn("Property");
table.AddColumn("Value");
foreach (var (key, value) in result)
{
table.AddRow(Markup.Escape(key), Markup.Escape(value?.ToString() ?? "(null)"));
}
AnsiConsole.Write(table);
}
return Task.FromResult(0);
}
private static string SanitizeFileName(string value)
{
var safe = value.Trim();
foreach (var invalid in Path.GetInvalidFileNameChars())
{
safe = safe.Replace(invalid, '_');
}
return safe;
}
}

View File

@@ -0,0 +1,137 @@
namespace StellaOps.Notifier.WebService.Contracts;
/// <summary>
/// Request to enqueue a dead-letter entry.
/// </summary>
public sealed record EnqueueDeadLetterRequest
{
public required string DeliveryId { get; init; }
public required string EventId { get; init; }
public required string ChannelId { get; init; }
public required string ChannelType { get; init; }
public required string FailureReason { get; init; }
public string? FailureDetails { get; init; }
public int AttemptCount { get; init; }
public DateTimeOffset? LastAttemptAt { get; init; }
public IReadOnlyDictionary<string, string>? Metadata { get; init; }
public string? OriginalPayload { get; init; }
}
/// <summary>
/// Response for dead-letter entry operations.
/// </summary>
public sealed record DeadLetterEntryResponse
{
public required string EntryId { get; init; }
public required string TenantId { get; init; }
public required string DeliveryId { get; init; }
public required string EventId { get; init; }
public required string ChannelId { get; init; }
public required string ChannelType { get; init; }
public required string FailureReason { get; init; }
public string? FailureDetails { get; init; }
public required int AttemptCount { get; init; }
public required DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset? LastAttemptAt { get; init; }
public required string Status { get; init; }
public int RetryCount { get; init; }
public DateTimeOffset? LastRetryAt { get; init; }
public string? Resolution { get; init; }
public string? ResolvedBy { get; init; }
public DateTimeOffset? ResolvedAt { get; init; }
}
/// <summary>
/// Request to list dead-letter entries.
/// </summary>
public sealed record ListDeadLetterRequest
{
public string? Status { get; init; }
public string? ChannelId { get; init; }
public string? ChannelType { get; init; }
public DateTimeOffset? Since { get; init; }
public DateTimeOffset? Until { get; init; }
public int Limit { get; init; } = 50;
public int Offset { get; init; }
}
/// <summary>
/// Response for listing dead-letter entries.
/// </summary>
public sealed record ListDeadLetterResponse
{
public required IReadOnlyList<DeadLetterEntryResponse> Entries { get; init; }
public required int TotalCount { get; init; }
}
/// <summary>
/// Request to retry dead-letter entries.
/// </summary>
public sealed record RetryDeadLetterRequest
{
public required IReadOnlyList<string> EntryIds { get; init; }
}
/// <summary>
/// Response for retry operations.
/// </summary>
public sealed record RetryDeadLetterResponse
{
public required IReadOnlyList<DeadLetterRetryResultItem> Results { get; init; }
public required int SuccessCount { get; init; }
public required int FailureCount { get; init; }
}
/// <summary>
/// Individual retry result.
/// </summary>
public sealed record DeadLetterRetryResultItem
{
public required string EntryId { get; init; }
public required bool Success { get; init; }
public string? Error { get; init; }
public DateTimeOffset? RetriedAt { get; init; }
public string? NewDeliveryId { get; init; }
}
/// <summary>
/// Request to resolve a dead-letter entry.
/// </summary>
public sealed record ResolveDeadLetterRequest
{
public required string Resolution { get; init; }
public string? ResolvedBy { get; init; }
}
/// <summary>
/// Response for dead-letter statistics.
/// </summary>
public sealed record DeadLetterStatsResponse
{
public required int TotalCount { get; init; }
public required int PendingCount { get; init; }
public required int RetryingCount { get; init; }
public required int RetriedCount { get; init; }
public required int ResolvedCount { get; init; }
public required int ExhaustedCount { get; init; }
public required IReadOnlyDictionary<string, int> ByChannel { get; init; }
public required IReadOnlyDictionary<string, int> ByReason { get; init; }
public DateTimeOffset? OldestEntryAt { get; init; }
public DateTimeOffset? NewestEntryAt { get; init; }
}
/// <summary>
/// Request to purge expired entries.
/// </summary>
public sealed record PurgeDeadLetterRequest
{
public int MaxAgeDays { get; init; } = 30;
}
/// <summary>
/// Response for purge operation.
/// </summary>
public sealed record PurgeDeadLetterResponse
{
public required int PurgedCount { get; init; }
}

View File

@@ -0,0 +1,143 @@
namespace StellaOps.Notifier.WebService.Contracts;
/// <summary>
/// Retention policy configuration request/response.
/// </summary>
public sealed record RetentionPolicyDto
{
/// <summary>
/// Retention period for delivery records in days.
/// </summary>
public int DeliveryRetentionDays { get; init; } = 90;
/// <summary>
/// Retention period for audit log entries in days.
/// </summary>
public int AuditRetentionDays { get; init; } = 365;
/// <summary>
/// Retention period for dead-letter entries in days.
/// </summary>
public int DeadLetterRetentionDays { get; init; } = 30;
/// <summary>
/// Retention period for storm tracking data in days.
/// </summary>
public int StormDataRetentionDays { get; init; } = 7;
/// <summary>
/// Retention period for inbox messages in days.
/// </summary>
public int InboxRetentionDays { get; init; } = 30;
/// <summary>
/// Retention period for event history in days.
/// </summary>
public int EventHistoryRetentionDays { get; init; } = 30;
/// <summary>
/// Whether automatic cleanup is enabled.
/// </summary>
public bool AutoCleanupEnabled { get; init; } = true;
/// <summary>
/// Cron expression for automatic cleanup schedule.
/// </summary>
public string CleanupSchedule { get; init; } = "0 2 * * *";
/// <summary>
/// Maximum records to delete per cleanup run.
/// </summary>
public int MaxDeletesPerRun { get; init; } = 10000;
/// <summary>
/// Whether to keep resolved/acknowledged deliveries longer.
/// </summary>
public bool ExtendResolvedRetention { get; init; } = true;
/// <summary>
/// Extension multiplier for resolved items.
/// </summary>
public double ResolvedRetentionMultiplier { get; init; } = 2.0;
}
/// <summary>
/// Request to update retention policy.
/// </summary>
public sealed record UpdateRetentionPolicyRequest
{
public required RetentionPolicyDto Policy { get; init; }
}
/// <summary>
/// Response for retention policy operations.
/// </summary>
public sealed record RetentionPolicyResponse
{
public required string TenantId { get; init; }
public required RetentionPolicyDto Policy { get; init; }
}
/// <summary>
/// Response for retention cleanup execution.
/// </summary>
public sealed record RetentionCleanupResponse
{
public required string TenantId { get; init; }
public required bool Success { get; init; }
public string? Error { get; init; }
public required DateTimeOffset ExecutedAt { get; init; }
public required double DurationMs { get; init; }
public required RetentionCleanupCountsDto Counts { get; init; }
}
/// <summary>
/// Cleanup counts DTO.
/// </summary>
public sealed record RetentionCleanupCountsDto
{
public int Deliveries { get; init; }
public int AuditEntries { get; init; }
public int DeadLetterEntries { get; init; }
public int StormData { get; init; }
public int InboxMessages { get; init; }
public int Events { get; init; }
public int Total { get; init; }
}
/// <summary>
/// Response for cleanup preview.
/// </summary>
public sealed record RetentionCleanupPreviewResponse
{
public required string TenantId { get; init; }
public required DateTimeOffset PreviewedAt { get; init; }
public required RetentionCleanupCountsDto EstimatedCounts { get; init; }
public required RetentionPolicyDto PolicyApplied { get; init; }
public required IReadOnlyDictionary<string, DateTimeOffset> CutoffDates { get; init; }
}
/// <summary>
/// Response for last cleanup execution.
/// </summary>
public sealed record RetentionCleanupExecutionResponse
{
public required string ExecutionId { get; init; }
public required string TenantId { get; init; }
public required DateTimeOffset StartedAt { get; init; }
public DateTimeOffset? CompletedAt { get; init; }
public required string Status { get; init; }
public RetentionCleanupCountsDto? Counts { get; init; }
public string? Error { get; init; }
}
/// <summary>
/// Response for cleanup all tenants.
/// </summary>
public sealed record RetentionCleanupAllResponse
{
public required IReadOnlyList<RetentionCleanupResponse> Results { get; init; }
public required int SuccessCount { get; init; }
public required int FailureCount { get; init; }
public required int TotalDeleted { get; init; }
}
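
`ResolvedRetentionMultiplier` in the policy DTO above extends retention for resolved items. The effective window can be sketched as follows (logic assumed from the DTO defaults, not implementation code from this diff):

```python
def effective_retention_days(base_days: int, resolved: bool,
                             extend_resolved: bool = True,
                             multiplier: float = 2.0) -> int:
    """Apply the resolved-retention extension per RetentionPolicyDto defaults."""
    if resolved and extend_resolved:
        return int(base_days * multiplier)
    return base_days

# With the default 90-day delivery retention and 2.0 multiplier:
print(effective_retention_days(90, resolved=False))
print(effective_retention_days(90, resolved=True))
```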

View File

@@ -0,0 +1,305 @@
namespace StellaOps.Notifier.WebService.Contracts;
/// <summary>
/// Request to acknowledge a notification via signed token.
/// </summary>
public sealed record AckRequest
{
/// <summary>
/// Optional comment for the acknowledgement.
/// </summary>
public string? Comment { get; init; }
/// <summary>
/// Optional metadata to include with the acknowledgement.
/// </summary>
public IReadOnlyDictionary<string, string>? Metadata { get; init; }
}
/// <summary>
/// Response from acknowledging a notification.
/// </summary>
public sealed record AckResponse
{
/// <summary>
/// Whether the acknowledgement was successful.
/// </summary>
public required bool Success { get; init; }
/// <summary>
/// The delivery ID that was acknowledged.
/// </summary>
public string? DeliveryId { get; init; }
/// <summary>
/// The action that was performed.
/// </summary>
public string? Action { get; init; }
/// <summary>
/// When the acknowledgement was processed.
/// </summary>
public DateTimeOffset? ProcessedAt { get; init; }
/// <summary>
/// Error message if unsuccessful.
/// </summary>
public string? Error { get; init; }
}
/// <summary>
/// Request to create an acknowledgement token.
/// </summary>
public sealed record CreateAckTokenRequest
{
/// <summary>
/// The delivery ID to create an ack token for.
/// </summary>
public string? DeliveryId { get; init; }
/// <summary>
/// The action to acknowledge (e.g., "ack", "resolve", "escalate").
/// </summary>
public string? Action { get; init; }
/// <summary>
/// Optional expiration in hours. Default: 168 (7 days).
/// </summary>
public int? ExpirationHours { get; init; }
/// <summary>
/// Optional metadata to embed in the token.
/// </summary>
public IReadOnlyDictionary<string, string>? Metadata { get; init; }
}
/// <summary>
/// Response containing the created ack token.
/// </summary>
public sealed record CreateAckTokenResponse
{
/// <summary>
/// The signed token string.
/// </summary>
public required string Token { get; init; }
/// <summary>
/// The full acknowledgement URL.
/// </summary>
public required string AckUrl { get; init; }
/// <summary>
/// When the token expires.
/// </summary>
public required DateTimeOffset ExpiresAt { get; init; }
}
/// <summary>
/// Request to verify an ack token.
/// </summary>
public sealed record VerifyAckTokenRequest
{
/// <summary>
/// The token to verify.
/// </summary>
public string? Token { get; init; }
}
/// <summary>
/// Response from token verification.
/// </summary>
public sealed record VerifyAckTokenResponse
{
/// <summary>
/// Whether the token is valid.
/// </summary>
public required bool IsValid { get; init; }
/// <summary>
/// The delivery ID embedded in the token.
/// </summary>
public string? DeliveryId { get; init; }
/// <summary>
/// The action embedded in the token.
/// </summary>
public string? Action { get; init; }
/// <summary>
/// When the token expires.
/// </summary>
public DateTimeOffset? ExpiresAt { get; init; }
/// <summary>
/// Failure reason if invalid.
/// </summary>
public string? FailureReason { get; init; }
}
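The create/verify contracts above imply an HMAC-style token whose real wire format is owned by `HmacAckTokenService`. As a minimal sketch only, the snippet below assumes a `Base64Url(payload).Base64Url(signature)` layout with a JSON payload and HMAC-SHA256 signature; the field names (`tenantId`, `deliveryId`, `action`, `exp`) are illustrative, not the service's actual schema.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;

// Sketch of an HMAC-signed ack token: payload is Base64Url(JSON), signature is
// Base64Url(HMAC-SHA256(key, payload)). The real format lives in HmacAckTokenService.
static class AckTokenSketch
{
    public static string Create(byte[] key, string tenantId, string deliveryId, string action, DateTimeOffset expiresAt)
    {
        var json = JsonSerializer.Serialize(new { tenantId, deliveryId, action, exp = expiresAt.ToUnixTimeSeconds() });
        var payload = Base64UrlEncode(Encoding.UTF8.GetBytes(json));
        var sig = Base64UrlEncode(HMACSHA256.HashData(key, Encoding.UTF8.GetBytes(payload)));
        return $"{payload}.{sig}";
    }

    public static bool Verify(byte[] key, string token, DateTimeOffset now)
    {
        var parts = token.Split('.');
        if (parts.Length != 2) return false;
        var expected = Base64UrlEncode(HMACSHA256.HashData(key, Encoding.UTF8.GetBytes(parts[0])));
        // Constant-time comparison avoids leaking signature prefixes via timing.
        if (!CryptographicOperations.FixedTimeEquals(
                Encoding.UTF8.GetBytes(expected), Encoding.UTF8.GetBytes(parts[1])))
        {
            return false;
        }
        using var doc = JsonDocument.Parse(Encoding.UTF8.GetString(Base64UrlDecode(parts[0])));
        return now.ToUnixTimeSeconds() < doc.RootElement.GetProperty("exp").GetInt64();
    }

    private static string Base64UrlEncode(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');

    private static byte[] Base64UrlDecode(string s)
    {
        var padded = s.Replace('-', '+').Replace('_', '/');
        return Convert.FromBase64String(padded.PadRight(padded.Length + (4 - padded.Length % 4) % 4, '='));
    }
}
```

Note the expiry check runs only after the signature verifies, so an attacker cannot probe payload contents with a forged token.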
/// <summary>
/// Request to validate HTML content.
/// </summary>
public sealed record ValidateHtmlRequest
{
/// <summary>
/// The HTML content to validate.
/// </summary>
public string? Html { get; init; }
}
/// <summary>
/// Response from HTML validation.
/// </summary>
public sealed record ValidateHtmlResponse
{
/// <summary>
/// Whether the HTML is safe.
/// </summary>
public required bool IsSafe { get; init; }
/// <summary>
/// List of security issues found.
/// </summary>
public required IReadOnlyList<HtmlIssue> Issues { get; init; }
/// <summary>
/// Statistics about the HTML content.
/// </summary>
public HtmlStats? Stats { get; init; }
}
/// <summary>
/// An HTML security issue.
/// </summary>
public sealed record HtmlIssue
{
/// <summary>
/// The type of issue.
/// </summary>
public required string Type { get; init; }
/// <summary>
/// Description of the issue.
/// </summary>
public required string Description { get; init; }
/// <summary>
/// The element name if applicable.
/// </summary>
public string? Element { get; init; }
/// <summary>
/// The attribute name if applicable.
/// </summary>
public string? Attribute { get; init; }
}
/// <summary>
/// HTML content statistics.
/// </summary>
public sealed record HtmlStats
{
/// <summary>
/// Total character count.
/// </summary>
public int CharacterCount { get; init; }
/// <summary>
/// Number of HTML elements.
/// </summary>
public int ElementCount { get; init; }
/// <summary>
/// Maximum nesting depth.
/// </summary>
public int MaxDepth { get; init; }
/// <summary>
/// Number of links.
/// </summary>
public int LinkCount { get; init; }
/// <summary>
/// Number of images.
/// </summary>
public int ImageCount { get; init; }
}
/// <summary>
/// Request to sanitize HTML content.
/// </summary>
public sealed record SanitizeHtmlRequest
{
/// <summary>
/// The HTML content to sanitize.
/// </summary>
public string? Html { get; init; }
/// <summary>
/// Whether to allow data: URLs. Default: false.
/// </summary>
public bool AllowDataUrls { get; init; }
/// <summary>
/// Additional tags to allow.
/// </summary>
public IReadOnlyList<string>? AdditionalAllowedTags { get; init; }
}
/// <summary>
/// Response containing sanitized HTML.
/// </summary>
public sealed record SanitizeHtmlResponse
{
/// <summary>
/// The sanitized HTML content.
/// </summary>
public required string SanitizedHtml { get; init; }
/// <summary>
/// Whether any changes were made.
/// </summary>
public required bool WasModified { get; init; }
}
/// <summary>
/// Request to rotate a webhook secret.
/// </summary>
public sealed record RotateWebhookSecretRequest
{
/// <summary>
/// The channel ID to rotate the secret for.
/// </summary>
public string? ChannelId { get; init; }
}
/// <summary>
/// Response from webhook secret rotation.
/// </summary>
public sealed record RotateWebhookSecretResponse
{
/// <summary>
/// Whether rotation succeeded.
/// </summary>
public required bool Success { get; init; }
/// <summary>
/// The new secret (only shown once).
/// </summary>
public string? NewSecret { get; init; }
/// <summary>
/// When the new secret becomes active.
/// </summary>
public DateTimeOffset? ActiveAt { get; init; }
/// <summary>
/// When the old secret expires.
/// </summary>
public DateTimeOffset? OldSecretExpiresAt { get; init; }
/// <summary>
/// Error message if unsuccessful.
/// </summary>
public string? Error { get; init; }
}


@@ -12,7 +12,11 @@ using Microsoft.Extensions.Hosting;
using StellaOps.Notifier.WebService.Contracts;
using StellaOps.Notifier.WebService.Services;
using StellaOps.Notifier.WebService.Setup;
using StellaOps.Notifier.Worker.Security;
using StellaOps.Notifier.Worker.StormBreaker;
using StellaOps.Notifier.Worker.DeadLetter;
using StellaOps.Notifier.Worker.Retention;
using StellaOps.Notifier.Worker.Observability;
using StellaOps.Notify.Storage.Mongo;
using StellaOps.Notify.Storage.Mongo.Documents;
using StellaOps.Notify.Storage.Mongo.Repositories;
@@ -53,6 +57,20 @@ builder.Services.AddSingleton<ILocalizationResolver, DefaultLocalizationResolver
builder.Services.Configure<StormBreakerConfig>(builder.Configuration.GetSection("notifier:stormBreaker"));
builder.Services.AddSingleton<IStormBreaker, DefaultStormBreaker>();
// Security services (NOTIFY-SVC-40-003)
builder.Services.Configure<AckTokenOptions>(builder.Configuration.GetSection("notifier:security:ackToken"));
builder.Services.AddSingleton<IAckTokenService, HmacAckTokenService>();
builder.Services.Configure<WebhookSecurityOptions>(builder.Configuration.GetSection("notifier:security:webhook"));
builder.Services.AddSingleton<IWebhookSecurityService, DefaultWebhookSecurityService>();
builder.Services.AddSingleton<IHtmlSanitizer, DefaultHtmlSanitizer>();
builder.Services.Configure<TenantIsolationOptions>(builder.Configuration.GetSection("notifier:security:tenantIsolation"));
builder.Services.AddSingleton<ITenantIsolationValidator, DefaultTenantIsolationValidator>();
// Observability, dead-letter, and retention services (NOTIFY-SVC-40-004)
builder.Services.AddSingleton<INotifyMetrics, DefaultNotifyMetrics>();
builder.Services.AddSingleton<IDeadLetterService, InMemoryDeadLetterService>();
builder.Services.AddSingleton<IRetentionPolicyService, DefaultRetentionPolicyService>();
builder.Services.AddHealthChecks();
var app = builder.Build();
@@ -2165,6 +2183,712 @@ app.MapPost("/api/v2/notify/storms/{stormKey}/summary", async (
return Results.Ok(summary);
});
// =============================================
// Security API (NOTIFY-SVC-40-003)
// =============================================
// Acknowledge notification via signed token
app.MapGet("/api/v1/ack/{token}", async (
HttpContext context,
string token,
IAckTokenService ackTokenService,
INotifyAuditRepository auditRepository,
TimeProvider timeProvider) =>
{
var verification = ackTokenService.VerifyToken(token);
if (!verification.IsValid)
{
return Results.BadRequest(new AckResponse
{
Success = false,
Error = verification.FailureReason?.ToString() ?? "Invalid token"
});
}
try
{
var auditEntry = new NotifyAuditEntryDocument
{
TenantId = verification.Token!.TenantId,
Actor = "ack-link",
Action = $"delivery.{verification.Token.Action}",
EntityId = verification.Token.DeliveryId,
EntityType = "delivery",
Timestamp = timeProvider.GetUtcNow()
};
await auditRepository.AppendAsync(auditEntry, context.RequestAborted).ConfigureAwait(false);
}
catch
{
// Best-effort audit write: a storage failure must not block the acknowledgement.
}
return Results.Ok(new AckResponse
{
Success = true,
DeliveryId = verification.Token!.DeliveryId,
Action = verification.Token.Action,
ProcessedAt = timeProvider.GetUtcNow()
});
});
app.MapPost("/api/v1/ack/{token}", async (
HttpContext context,
string token,
AckRequest? request,
IAckTokenService ackTokenService,
INotifyAuditRepository auditRepository,
TimeProvider timeProvider) =>
{
var verification = ackTokenService.VerifyToken(token);
if (!verification.IsValid)
{
return Results.BadRequest(new AckResponse
{
Success = false,
Error = verification.FailureReason?.ToString() ?? "Invalid token"
});
}
try
{
var auditEntry = new NotifyAuditEntryDocument
{
TenantId = verification.Token!.TenantId,
Actor = "ack-link",
Action = $"delivery.{verification.Token.Action}",
EntityId = verification.Token.DeliveryId,
EntityType = "delivery",
Timestamp = timeProvider.GetUtcNow(),
Payload = MongoDB.Bson.Serialization.BsonSerializer.Deserialize<MongoDB.Bson.BsonDocument>(
JsonSerializer.Serialize(new { comment = request?.Comment, metadata = request?.Metadata }))
};
await auditRepository.AppendAsync(auditEntry, context.RequestAborted).ConfigureAwait(false);
}
catch
{
// Best-effort audit write: a storage failure must not block the acknowledgement.
}
return Results.Ok(new AckResponse
{
Success = true,
DeliveryId = verification.Token!.DeliveryId,
Action = verification.Token.Action,
ProcessedAt = timeProvider.GetUtcNow()
});
});
app.MapPost("/api/v2/notify/security/ack-tokens", (
HttpContext context,
CreateAckTokenRequest request,
IAckTokenService ackTokenService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
if (string.IsNullOrWhiteSpace(request.DeliveryId) || string.IsNullOrWhiteSpace(request.Action))
{
return Results.BadRequest(Error("invalid_request", "deliveryId and action are required.", context));
}
var expiration = request.ExpirationHours.HasValue
? TimeSpan.FromHours(request.ExpirationHours.Value)
: (TimeSpan?)null;
var token = ackTokenService.CreateToken(
tenantId,
request.DeliveryId,
request.Action,
expiration,
request.Metadata);
return Results.Ok(new CreateAckTokenResponse
{
Token = token.TokenString,
AckUrl = ackTokenService.CreateAckUrl(token),
ExpiresAt = token.ExpiresAt
});
});
app.MapPost("/api/v2/notify/security/ack-tokens/verify", (
HttpContext context,
VerifyAckTokenRequest request,
IAckTokenService ackTokenService) =>
{
if (string.IsNullOrWhiteSpace(request.Token))
{
return Results.BadRequest(Error("invalid_request", "token is required.", context));
}
var verification = ackTokenService.VerifyToken(request.Token);
return Results.Ok(new VerifyAckTokenResponse
{
IsValid = verification.IsValid,
DeliveryId = verification.Token?.DeliveryId,
Action = verification.Token?.Action,
ExpiresAt = verification.Token?.ExpiresAt,
FailureReason = verification.FailureReason?.ToString()
});
});
app.MapPost("/api/v2/notify/security/html/validate", (
HttpContext context,
ValidateHtmlRequest request,
IHtmlSanitizer htmlSanitizer) =>
{
if (string.IsNullOrWhiteSpace(request.Html))
{
return Results.Ok(new ValidateHtmlResponse
{
IsSafe = true,
Issues = []
});
}
var result = htmlSanitizer.Validate(request.Html);
return Results.Ok(new ValidateHtmlResponse
{
IsSafe = result.IsSafe,
Issues = result.Issues.Select(i => new HtmlIssue
{
Type = i.Type.ToString(),
Description = i.Description,
Element = i.ElementName,
Attribute = i.AttributeName
}).ToArray(),
Stats = result.Stats is not null ? new HtmlStats
{
CharacterCount = result.Stats.CharacterCount,
ElementCount = result.Stats.ElementCount,
MaxDepth = result.Stats.MaxDepth,
LinkCount = result.Stats.LinkCount,
ImageCount = result.Stats.ImageCount
} : null
});
});
app.MapPost("/api/v2/notify/security/html/sanitize", (
HttpContext context,
SanitizeHtmlRequest request,
IHtmlSanitizer htmlSanitizer) =>
{
if (string.IsNullOrWhiteSpace(request.Html))
{
return Results.Ok(new SanitizeHtmlResponse
{
SanitizedHtml = string.Empty,
WasModified = false
});
}
var options = new HtmlSanitizeOptions
{
AllowDataUrls = request.AllowDataUrls,
AdditionalAllowedTags = request.AdditionalAllowedTags?.ToHashSet()
};
var sanitized = htmlSanitizer.Sanitize(request.Html, options);
return Results.Ok(new SanitizeHtmlResponse
{
SanitizedHtml = sanitized,
WasModified = !string.Equals(request.Html, sanitized, StringComparison.Ordinal)
});
});
app.MapPost("/api/v2/notify/security/webhook/{channelId}/rotate", async (
HttpContext context,
string channelId,
IWebhookSecurityService webhookSecurityService,
INotifyAuditRepository auditRepository,
TimeProvider timeProvider) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var actor = context.Request.Headers["X-StellaOps-Actor"].ToString();
if (string.IsNullOrWhiteSpace(actor)) actor = "api";
var result = await webhookSecurityService.RotateSecretAsync(tenantId, channelId, context.RequestAborted)
.ConfigureAwait(false);
try
{
var auditEntry = new NotifyAuditEntryDocument
{
TenantId = tenantId,
Actor = actor,
Action = "webhook.secret.rotated",
EntityId = channelId,
EntityType = "channel",
Timestamp = timeProvider.GetUtcNow()
};
await auditRepository.AppendAsync(auditEntry, context.RequestAborted).ConfigureAwait(false);
}
catch
{
// Best-effort audit write: secret rotation already succeeded, so do not fail the request.
}
return Results.Ok(new RotateWebhookSecretResponse
{
Success = result.Success,
NewSecret = result.NewSecret,
ActiveAt = result.ActiveAt,
OldSecretExpiresAt = result.OldSecretExpiresAt,
Error = result.Error
});
});
app.MapGet("/api/v2/notify/security/webhook/{channelId}/secret", (
HttpContext context,
string channelId,
IWebhookSecurityService webhookSecurityService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var maskedSecret = webhookSecurityService.GetMaskedSecret(tenantId, channelId);
return Results.Ok(new { channelId, maskedSecret });
});
app.MapGet("/api/v2/notify/security/isolation/violations", (
HttpContext context,
ITenantIsolationValidator isolationValidator,
int? limit) =>
{
var violations = isolationValidator.GetRecentViolations(limit ?? 100);
return Results.Ok(new { items = violations, count = violations.Count });
});
// =============================================
// Dead-Letter API (NOTIFY-SVC-40-004)
// =============================================
app.MapPost("/api/v2/notify/dead-letter", async (
HttpContext context,
EnqueueDeadLetterRequest request,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var enqueueRequest = new DeadLetterEnqueueRequest
{
TenantId = tenantId,
DeliveryId = request.DeliveryId,
EventId = request.EventId,
ChannelId = request.ChannelId,
ChannelType = request.ChannelType,
FailureReason = request.FailureReason,
FailureDetails = request.FailureDetails,
AttemptCount = request.AttemptCount,
LastAttemptAt = request.LastAttemptAt,
Metadata = request.Metadata,
OriginalPayload = request.OriginalPayload
};
var entry = await deadLetterService.EnqueueAsync(enqueueRequest, context.RequestAborted).ConfigureAwait(false);
return Results.Created($"/api/v2/notify/dead-letter/{entry.EntryId}", new DeadLetterEntryResponse
{
EntryId = entry.EntryId,
TenantId = entry.TenantId,
DeliveryId = entry.DeliveryId,
EventId = entry.EventId,
ChannelId = entry.ChannelId,
ChannelType = entry.ChannelType,
FailureReason = entry.FailureReason,
FailureDetails = entry.FailureDetails,
AttemptCount = entry.AttemptCount,
CreatedAt = entry.CreatedAt,
LastAttemptAt = entry.LastAttemptAt,
Status = entry.Status.ToString(),
RetryCount = entry.RetryCount,
LastRetryAt = entry.LastRetryAt,
Resolution = entry.Resolution,
ResolvedBy = entry.ResolvedBy,
ResolvedAt = entry.ResolvedAt
});
});
app.MapGet("/api/v2/notify/dead-letter", async (
HttpContext context,
IDeadLetterService deadLetterService,
string? status,
string? channelId,
string? channelType,
DateTimeOffset? since,
DateTimeOffset? until,
int? limit,
int? offset) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var options = new DeadLetterListOptions
{
Status = Enum.TryParse<DeadLetterStatus>(status, true, out var s) ? s : null,
ChannelId = channelId,
ChannelType = channelType,
Since = since,
Until = until,
Limit = limit ?? 50,
Offset = offset ?? 0
};
var entries = await deadLetterService.ListAsync(tenantId, options, context.RequestAborted).ConfigureAwait(false);
return Results.Ok(new ListDeadLetterResponse
{
Entries = entries.Select(e => new DeadLetterEntryResponse
{
EntryId = e.EntryId,
TenantId = e.TenantId,
DeliveryId = e.DeliveryId,
EventId = e.EventId,
ChannelId = e.ChannelId,
ChannelType = e.ChannelType,
FailureReason = e.FailureReason,
FailureDetails = e.FailureDetails,
AttemptCount = e.AttemptCount,
CreatedAt = e.CreatedAt,
LastAttemptAt = e.LastAttemptAt,
Status = e.Status.ToString(),
RetryCount = e.RetryCount,
LastRetryAt = e.LastRetryAt,
Resolution = e.Resolution,
ResolvedBy = e.ResolvedBy,
ResolvedAt = e.ResolvedAt
}).ToList(),
TotalCount = entries.Count // entries returned in this page, not the full backlog
});
});
app.MapGet("/api/v2/notify/dead-letter/{entryId}", async (
HttpContext context,
string entryId,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var entry = await deadLetterService.GetAsync(tenantId, entryId, context.RequestAborted).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound(Error("entry_not_found", $"Dead-letter entry {entryId} not found.", context));
}
return Results.Ok(new DeadLetterEntryResponse
{
EntryId = entry.EntryId,
TenantId = entry.TenantId,
DeliveryId = entry.DeliveryId,
EventId = entry.EventId,
ChannelId = entry.ChannelId,
ChannelType = entry.ChannelType,
FailureReason = entry.FailureReason,
FailureDetails = entry.FailureDetails,
AttemptCount = entry.AttemptCount,
CreatedAt = entry.CreatedAt,
LastAttemptAt = entry.LastAttemptAt,
Status = entry.Status.ToString(),
RetryCount = entry.RetryCount,
LastRetryAt = entry.LastRetryAt,
Resolution = entry.Resolution,
ResolvedBy = entry.ResolvedBy,
ResolvedAt = entry.ResolvedAt
});
});
app.MapPost("/api/v2/notify/dead-letter/retry", async (
HttpContext context,
RetryDeadLetterRequest request,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var results = await deadLetterService.RetryBatchAsync(tenantId, request.EntryIds, context.RequestAborted)
.ConfigureAwait(false);
return Results.Ok(new RetryDeadLetterResponse
{
Results = results.Select(r => new DeadLetterRetryResultItem
{
EntryId = r.EntryId,
Success = r.Success,
Error = r.Error,
RetriedAt = r.RetriedAt,
NewDeliveryId = r.NewDeliveryId
}).ToList(),
SuccessCount = results.Count(r => r.Success),
FailureCount = results.Count(r => !r.Success)
});
});
app.MapPost("/api/v2/notify/dead-letter/{entryId}/resolve", async (
HttpContext context,
string entryId,
ResolveDeadLetterRequest request,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
await deadLetterService.ResolveAsync(tenantId, entryId, request.Resolution, request.ResolvedBy, context.RequestAborted)
.ConfigureAwait(false);
return Results.NoContent();
});
app.MapGet("/api/v2/notify/dead-letter/stats", async (
HttpContext context,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var stats = await deadLetterService.GetStatsAsync(tenantId, context.RequestAborted).ConfigureAwait(false);
return Results.Ok(new DeadLetterStatsResponse
{
TotalCount = stats.TotalCount,
PendingCount = stats.PendingCount,
RetryingCount = stats.RetryingCount,
RetriedCount = stats.RetriedCount,
ResolvedCount = stats.ResolvedCount,
ExhaustedCount = stats.ExhaustedCount,
ByChannel = stats.ByChannel,
ByReason = stats.ByReason,
OldestEntryAt = stats.OldestEntryAt,
NewestEntryAt = stats.NewestEntryAt
});
});
app.MapPost("/api/v2/notify/dead-letter/purge", async (
HttpContext context,
PurgeDeadLetterRequest request,
IDeadLetterService deadLetterService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var maxAge = TimeSpan.FromDays(request.MaxAgeDays);
var purgedCount = await deadLetterService.PurgeExpiredAsync(tenantId, maxAge, context.RequestAborted)
.ConfigureAwait(false);
return Results.Ok(new PurgeDeadLetterResponse { PurgedCount = purgedCount });
});
// =============================================
// Retention Policy API (NOTIFY-SVC-40-004)
// =============================================
app.MapGet("/api/v2/notify/retention/policy", async (
HttpContext context,
IRetentionPolicyService retentionService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var policy = await retentionService.GetPolicyAsync(tenantId, context.RequestAborted).ConfigureAwait(false);
return Results.Ok(new RetentionPolicyResponse
{
TenantId = tenantId,
Policy = new RetentionPolicyDto
{
DeliveryRetentionDays = (int)policy.DeliveryRetention.TotalDays,
AuditRetentionDays = (int)policy.AuditRetention.TotalDays,
DeadLetterRetentionDays = (int)policy.DeadLetterRetention.TotalDays,
StormDataRetentionDays = (int)policy.StormDataRetention.TotalDays,
InboxRetentionDays = (int)policy.InboxRetention.TotalDays,
EventHistoryRetentionDays = (int)policy.EventHistoryRetention.TotalDays,
AutoCleanupEnabled = policy.AutoCleanupEnabled,
CleanupSchedule = policy.CleanupSchedule,
MaxDeletesPerRun = policy.MaxDeletesPerRun,
ExtendResolvedRetention = policy.ExtendResolvedRetention,
ResolvedRetentionMultiplier = policy.ResolvedRetentionMultiplier
}
});
});
app.MapPut("/api/v2/notify/retention/policy", async (
HttpContext context,
UpdateRetentionPolicyRequest request,
IRetentionPolicyService retentionService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var policy = new RetentionPolicy
{
DeliveryRetention = TimeSpan.FromDays(request.Policy.DeliveryRetentionDays),
AuditRetention = TimeSpan.FromDays(request.Policy.AuditRetentionDays),
DeadLetterRetention = TimeSpan.FromDays(request.Policy.DeadLetterRetentionDays),
StormDataRetention = TimeSpan.FromDays(request.Policy.StormDataRetentionDays),
InboxRetention = TimeSpan.FromDays(request.Policy.InboxRetentionDays),
EventHistoryRetention = TimeSpan.FromDays(request.Policy.EventHistoryRetentionDays),
AutoCleanupEnabled = request.Policy.AutoCleanupEnabled,
CleanupSchedule = request.Policy.CleanupSchedule,
MaxDeletesPerRun = request.Policy.MaxDeletesPerRun,
ExtendResolvedRetention = request.Policy.ExtendResolvedRetention,
ResolvedRetentionMultiplier = request.Policy.ResolvedRetentionMultiplier
};
await retentionService.SetPolicyAsync(tenantId, policy, context.RequestAborted).ConfigureAwait(false);
return Results.NoContent();
});
app.MapPost("/api/v2/notify/retention/cleanup", async (
HttpContext context,
IRetentionPolicyService retentionService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var result = await retentionService.ExecuteCleanupAsync(tenantId, context.RequestAborted).ConfigureAwait(false);
return Results.Ok(new RetentionCleanupResponse
{
TenantId = result.TenantId,
Success = result.Success,
Error = result.Error,
ExecutedAt = result.ExecutedAt,
DurationMs = result.Duration.TotalMilliseconds,
Counts = new RetentionCleanupCountsDto
{
Deliveries = result.Counts.Deliveries,
AuditEntries = result.Counts.AuditEntries,
DeadLetterEntries = result.Counts.DeadLetterEntries,
StormData = result.Counts.StormData,
InboxMessages = result.Counts.InboxMessages,
Events = result.Counts.Events,
Total = result.Counts.Total
}
});
});
app.MapGet("/api/v2/notify/retention/cleanup/preview", async (
HttpContext context,
IRetentionPolicyService retentionService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var preview = await retentionService.PreviewCleanupAsync(tenantId, context.RequestAborted).ConfigureAwait(false);
return Results.Ok(new RetentionCleanupPreviewResponse
{
TenantId = preview.TenantId,
PreviewedAt = preview.PreviewedAt,
EstimatedCounts = new RetentionCleanupCountsDto
{
Deliveries = preview.EstimatedCounts.Deliveries,
AuditEntries = preview.EstimatedCounts.AuditEntries,
DeadLetterEntries = preview.EstimatedCounts.DeadLetterEntries,
StormData = preview.EstimatedCounts.StormData,
InboxMessages = preview.EstimatedCounts.InboxMessages,
Events = preview.EstimatedCounts.Events,
Total = preview.EstimatedCounts.Total
},
PolicyApplied = new RetentionPolicyDto
{
DeliveryRetentionDays = (int)preview.PolicyApplied.DeliveryRetention.TotalDays,
AuditRetentionDays = (int)preview.PolicyApplied.AuditRetention.TotalDays,
DeadLetterRetentionDays = (int)preview.PolicyApplied.DeadLetterRetention.TotalDays,
StormDataRetentionDays = (int)preview.PolicyApplied.StormDataRetention.TotalDays,
InboxRetentionDays = (int)preview.PolicyApplied.InboxRetention.TotalDays,
EventHistoryRetentionDays = (int)preview.PolicyApplied.EventHistoryRetention.TotalDays,
AutoCleanupEnabled = preview.PolicyApplied.AutoCleanupEnabled,
CleanupSchedule = preview.PolicyApplied.CleanupSchedule,
MaxDeletesPerRun = preview.PolicyApplied.MaxDeletesPerRun,
ExtendResolvedRetention = preview.PolicyApplied.ExtendResolvedRetention,
ResolvedRetentionMultiplier = preview.PolicyApplied.ResolvedRetentionMultiplier
},
CutoffDates = preview.CutoffDates
});
});
app.MapGet("/api/v2/notify/retention/cleanup/last", async (
HttpContext context,
IRetentionPolicyService retentionService) =>
{
var tenantId = context.Request.Headers["X-StellaOps-Tenant"].ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
return Results.BadRequest(Error("tenant_missing", "X-StellaOps-Tenant header is required.", context));
}
var execution = await retentionService.GetLastExecutionAsync(tenantId, context.RequestAborted).ConfigureAwait(false);
if (execution is null)
{
return Results.NotFound(Error("no_execution", "No cleanup execution found.", context));
}
return Results.Ok(new RetentionCleanupExecutionResponse
{
ExecutionId = execution.ExecutionId,
TenantId = execution.TenantId,
StartedAt = execution.StartedAt,
CompletedAt = execution.CompletedAt,
Status = execution.Status.ToString(),
Counts = execution.Counts is not null ? new RetentionCleanupCountsDto
{
Deliveries = execution.Counts.Deliveries,
AuditEntries = execution.Counts.AuditEntries,
DeadLetterEntries = execution.Counts.DeadLetterEntries,
StormData = execution.Counts.StormData,
InboxMessages = execution.Counts.InboxMessages,
Events = execution.Counts.Events,
Total = execution.Counts.Total
} : null,
Error = execution.Error
});
});
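The preview endpoint above surfaces per-collection `CutoffDates`. The real derivation belongs to `DefaultRetentionPolicyService`; as a plausible sketch under that assumption, each cutoff is `now − retention`, with resolved entries kept `ResolvedRetentionMultiplier` times longer when `ExtendResolvedRetention` is set:

```csharp
using System;
using System.Collections.Generic;

// Sketch of deriving cleanup cutoffs from a retention policy. Property names
// mirror RetentionPolicyDto; DefaultRetentionPolicyService owns the actual rules.
static class RetentionCutoffs
{
    public static Dictionary<string, DateTimeOffset> Compute(
        DateTimeOffset now,
        TimeSpan deliveryRetention,
        TimeSpan deadLetterRetention,
        bool extendResolvedRetention,
        double resolvedRetentionMultiplier)
    {
        // Resolved items may be retained longer than the base window.
        var resolvedDeadLetter = extendResolvedRetention
            ? TimeSpan.FromTicks((long)(deadLetterRetention.Ticks * resolvedRetentionMultiplier))
            : deadLetterRetention;

        return new Dictionary<string, DateTimeOffset>
        {
            ["deliveries"] = now - deliveryRetention,               // anything older is eligible
            ["deadLetter"] = now - deadLetterRetention,
            ["deadLetterResolved"] = now - resolvedDeadLetter
        };
    }
}
```

Cleanup then deletes documents older than their collection's cutoff, capped per run by `MaxDeletesPerRun`.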
app.MapGet("/.well-known/openapi", (HttpContext context) =>
{
context.Response.Headers["X-OpenAPI-Scope"] = "notify";
@@ -2178,6 +2902,7 @@ info:
paths:
/api/v1/notify/quiet-hours: {}
/api/v1/notify/incidents: {}
/api/v1/ack/{token}: {}
/api/v2/notify/templates: {}
/api/v2/notify/rules: {}
/api/v2/notify/channels: {}
@@ -2195,6 +2920,23 @@ paths:
/api/v2/notify/localization/locales: {}
/api/v2/notify/localization/resolve: {}
/api/v2/notify/storms: {}
/api/v2/notify/security/ack-tokens: {}
/api/v2/notify/security/ack-tokens/verify: {}
/api/v2/notify/security/html/validate: {}
/api/v2/notify/security/html/sanitize: {}
/api/v2/notify/security/webhook/{channelId}/rotate: {}
/api/v2/notify/security/webhook/{channelId}/secret: {}
/api/v2/notify/security/isolation/violations: {}
/api/v2/notify/dead-letter: {}
/api/v2/notify/dead-letter/{entryId}: {}
/api/v2/notify/dead-letter/retry: {}
/api/v2/notify/dead-letter/{entryId}/resolve: {}
/api/v2/notify/dead-letter/stats: {}
/api/v2/notify/dead-letter/purge: {}
/api/v2/notify/retention/policy: {}
/api/v2/notify/retention/cleanup: {}
/api/v2/notify/retention/cleanup/preview: {}
/api/v2/notify/retention/cleanup/last: {}
""";
return Results.Text(stub, "application/yaml", Encoding.UTF8);


@@ -1,22 +1,32 @@
using System.Net.Http.Json;
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Logging;
using StellaOps.Notify.Models;
using StellaOps.Notifier.Worker.Security;
namespace StellaOps.Notifier.Worker.Channels;
/// <summary>
/// Channel adapter for webhook (HTTP POST) delivery with retry support.
/// Channel adapter for webhook (HTTP POST) delivery with retry support and HMAC signing.
/// </summary>
public sealed class WebhookChannelAdapter : INotifyChannelAdapter
{
private readonly HttpClient _httpClient;
private readonly IWebhookSecurityService? _securityService;
private readonly TimeProvider _timeProvider;
private readonly ILogger<WebhookChannelAdapter> _logger;
public WebhookChannelAdapter(HttpClient httpClient, ILogger<WebhookChannelAdapter> logger)
public WebhookChannelAdapter(
HttpClient httpClient,
ILogger<WebhookChannelAdapter> logger,
IWebhookSecurityService? securityService = null,
TimeProvider? timeProvider = null)
{
_httpClient = httpClient ?? throw new ArgumentNullException(nameof(httpClient));
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_securityService = securityService;
_timeProvider = timeProvider ?? TimeProvider.System;
}
public NotifyChannelType ChannelType => NotifyChannelType.Webhook;
@@ -52,17 +62,30 @@ public sealed class WebhookChannelAdapter : INotifyChannelAdapter
timestamp = DateTimeOffset.UtcNow
};
var jsonOptions = new JsonSerializerOptions { PropertyNamingPolicy = JsonNamingPolicy.CamelCase };
var payloadJson = JsonSerializer.Serialize(payload, jsonOptions);
var payloadBytes = Encoding.UTF8.GetBytes(payloadJson);
try
{
using var request = new HttpRequestMessage(HttpMethod.Post, uri);
request.Content = JsonContent.Create(payload, options: new JsonSerializerOptions
{
PropertyNamingPolicy = JsonNamingPolicy.CamelCase
});
request.Content = new StringContent(payloadJson, Encoding.UTF8, "application/json");
// Add HMAC signature header if secret is available (placeholder for KMS integration)
// Add version header
request.Headers.Add("X-StellaOps-Notifier", "1.0");
// Add HMAC signature if security service is available
if (_securityService is not null)
{
var timestamp = _timeProvider.GetUtcNow();
var signature = _securityService.SignPayload(
channel.TenantId,
channel.ChannelId,
payloadBytes,
timestamp);
request.Headers.Add("X-StellaOps-Signature", signature);
}
using var response = await _httpClient.SendAsync(request, cancellationToken).ConfigureAwait(false);
var statusCode = (int)response.StatusCode;
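The adapter above attaches an `X-StellaOps-Signature` header when a security service is wired in. The exact scheme produced by `IWebhookSecurityService.SignPayload` is not shown in this diff, so the following is only a sketch of what a receiver-side check could look like, assuming a hex-encoded HMAC-SHA256 over the raw request body (the helper name and secret handling are illustrative, not part of the repo):

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical receiver-side check for X-StellaOps-Signature. The actual
// scheme produced by IWebhookSecurityService.SignPayload is not shown in
// this diff; this sketch assumes a hex-encoded HMAC-SHA256 over the raw
// request body.
static bool VerifyWebhookSignature(string secret, byte[] payload, string signatureHex)
{
    using var hmac = new HMACSHA256(Encoding.UTF8.GetBytes(secret));
    var expected = Convert.ToHexString(hmac.ComputeHash(payload)).ToLowerInvariant();
    // Constant-time comparison to avoid timing side channels.
    return CryptographicOperations.FixedTimeEquals(
        Encoding.UTF8.GetBytes(expected),
        Encoding.UTF8.GetBytes(signatureHex.ToLowerInvariant()));
}

var body = Encoding.UTF8.GetBytes("{\"ok\":true}");
using var signer = new HMACSHA256(Encoding.UTF8.GetBytes("s3cret"));
var signature = Convert.ToHexString(signer.ComputeHash(body)).ToLowerInvariant();
Console.WriteLine(VerifyWebhookSignature("s3cret", body, signature)); // True
```

If the real signature also covers the timestamp (the adapter passes one to `SignPayload`), the receiver would need to fold the timestamp header into the HMAC input the same way.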

View File

@@ -0,0 +1,185 @@
using System.Collections.Immutable;
namespace StellaOps.Notifier.Worker.DeadLetter;
/// <summary>
/// Service for managing dead-letter entries for failed notification deliveries.
/// </summary>
public interface IDeadLetterService
{
/// <summary>
/// Enqueues a failed delivery to the dead-letter queue.
/// </summary>
Task<DeadLetterEntry> EnqueueAsync(
DeadLetterEnqueueRequest request,
CancellationToken cancellationToken = default);
/// <summary>
/// Retrieves a dead-letter entry by ID.
/// </summary>
Task<DeadLetterEntry?> GetAsync(
string tenantId,
string entryId,
CancellationToken cancellationToken = default);
/// <summary>
/// Lists dead-letter entries with optional filtering.
/// </summary>
Task<IReadOnlyList<DeadLetterEntry>> ListAsync(
string tenantId,
DeadLetterListOptions? options = null,
CancellationToken cancellationToken = default);
/// <summary>
/// Retries a dead-letter entry.
/// </summary>
Task<DeadLetterRetryResult> RetryAsync(
string tenantId,
string entryId,
CancellationToken cancellationToken = default);
/// <summary>
/// Retries multiple dead-letter entries.
/// </summary>
Task<IReadOnlyList<DeadLetterRetryResult>> RetryBatchAsync(
string tenantId,
IEnumerable<string> entryIds,
CancellationToken cancellationToken = default);
/// <summary>
/// Marks a dead-letter entry as resolved/dismissed.
/// </summary>
Task ResolveAsync(
string tenantId,
string entryId,
string resolution,
string? resolvedBy = null,
CancellationToken cancellationToken = default);
/// <summary>
/// Deletes old dead-letter entries based on retention policy.
/// </summary>
Task<int> PurgeExpiredAsync(
string tenantId,
TimeSpan maxAge,
CancellationToken cancellationToken = default);
/// <summary>
/// Gets statistics about dead-letter entries.
/// </summary>
Task<DeadLetterStats> GetStatsAsync(
string tenantId,
CancellationToken cancellationToken = default);
}
/// <summary>
/// Request to enqueue a dead-letter entry.
/// </summary>
public sealed record DeadLetterEnqueueRequest
{
public required string TenantId { get; init; }
public required string DeliveryId { get; init; }
public required string EventId { get; init; }
public required string ChannelId { get; init; }
public required string ChannelType { get; init; }
public required string FailureReason { get; init; }
public string? FailureDetails { get; init; }
public int AttemptCount { get; init; }
public DateTimeOffset? LastAttemptAt { get; init; }
public IReadOnlyDictionary<string, string>? Metadata { get; init; }
/// <summary>
/// Original payload for retry purposes.
/// </summary>
public string? OriginalPayload { get; init; }
}
/// <summary>
/// A dead-letter queue entry.
/// </summary>
public sealed record DeadLetterEntry
{
public required string EntryId { get; init; }
public required string TenantId { get; init; }
public required string DeliveryId { get; init; }
public required string EventId { get; init; }
public required string ChannelId { get; init; }
public required string ChannelType { get; init; }
public required string FailureReason { get; init; }
public string? FailureDetails { get; init; }
public required int AttemptCount { get; init; }
public required DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset? LastAttemptAt { get; init; }
public required DeadLetterStatus Status { get; init; }
public int RetryCount { get; init; }
public DateTimeOffset? LastRetryAt { get; init; }
public string? Resolution { get; init; }
public string? ResolvedBy { get; init; }
public DateTimeOffset? ResolvedAt { get; init; }
public ImmutableDictionary<string, string> Metadata { get; init; } = ImmutableDictionary<string, string>.Empty;
public string? OriginalPayload { get; init; }
}
/// <summary>
/// Status of a dead-letter entry.
/// </summary>
public enum DeadLetterStatus
{
/// <summary>Entry is pending retry or resolution.</summary>
Pending,
/// <summary>Entry is being retried.</summary>
Retrying,
/// <summary>Entry was successfully retried.</summary>
Retried,
/// <summary>Entry was manually resolved/dismissed.</summary>
Resolved,
/// <summary>Entry exceeded max retries.</summary>
Exhausted
}
/// <summary>
/// Options for listing dead-letter entries.
/// </summary>
public sealed record DeadLetterListOptions
{
public DeadLetterStatus? Status { get; init; }
public string? ChannelId { get; init; }
public string? ChannelType { get; init; }
public DateTimeOffset? Since { get; init; }
public DateTimeOffset? Until { get; init; }
public int Limit { get; init; } = 50;
public int Offset { get; init; }
}
/// <summary>
/// Result of a dead-letter retry attempt.
/// </summary>
public sealed record DeadLetterRetryResult
{
public required string EntryId { get; init; }
public required bool Success { get; init; }
public string? Error { get; init; }
public DateTimeOffset? RetriedAt { get; init; }
public string? NewDeliveryId { get; init; }
}
/// <summary>
/// Statistics about dead-letter entries.
/// </summary>
public sealed record DeadLetterStats
{
public required int TotalCount { get; init; }
public required int PendingCount { get; init; }
public required int RetryingCount { get; init; }
public required int RetriedCount { get; init; }
public required int ResolvedCount { get; init; }
public required int ExhaustedCount { get; init; }
public required IReadOnlyDictionary<string, int> ByChannel { get; init; }
public required IReadOnlyDictionary<string, int> ByReason { get; init; }
public DateTimeOffset? OldestEntryAt { get; init; }
public DateTimeOffset? NewestEntryAt { get; init; }
}

View File

@@ -0,0 +1,294 @@
using System.Collections.Concurrent;
using System.Collections.Immutable;
using Microsoft.Extensions.Logging;
using StellaOps.Notifier.Worker.Observability;
namespace StellaOps.Notifier.Worker.DeadLetter;
/// <summary>
/// In-memory implementation of dead-letter service.
/// For production, use a persistent storage implementation.
/// </summary>
public sealed class InMemoryDeadLetterService : IDeadLetterService
{
private readonly ConcurrentDictionary<string, DeadLetterEntry> _entries = new();
private readonly TimeProvider _timeProvider;
private readonly INotifyMetrics? _metrics;
private readonly ILogger<InMemoryDeadLetterService> _logger;
public InMemoryDeadLetterService(
TimeProvider timeProvider,
ILogger<InMemoryDeadLetterService> logger,
INotifyMetrics? metrics = null)
{
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_metrics = metrics;
}
public Task<DeadLetterEntry> EnqueueAsync(
DeadLetterEnqueueRequest request,
CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(request);
var entryId = Guid.NewGuid().ToString("N");
var now = _timeProvider.GetUtcNow();
var entry = new DeadLetterEntry
{
EntryId = entryId,
TenantId = request.TenantId,
DeliveryId = request.DeliveryId,
EventId = request.EventId,
ChannelId = request.ChannelId,
ChannelType = request.ChannelType,
FailureReason = request.FailureReason,
FailureDetails = request.FailureDetails,
AttemptCount = request.AttemptCount,
CreatedAt = now,
LastAttemptAt = request.LastAttemptAt ?? now,
Status = DeadLetterStatus.Pending,
Metadata = request.Metadata?.ToImmutableDictionary() ?? ImmutableDictionary<string, string>.Empty,
OriginalPayload = request.OriginalPayload
};
_entries[GetKey(request.TenantId, entryId)] = entry;
_metrics?.RecordDeadLetter(request.TenantId, request.FailureReason, request.ChannelType);
_logger.LogWarning(
"Dead-lettered delivery {DeliveryId} for tenant {TenantId}: {Reason}",
request.DeliveryId, request.TenantId, request.FailureReason);
return Task.FromResult(entry);
}
public Task<DeadLetterEntry?> GetAsync(
string tenantId,
string entryId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(entryId);
_entries.TryGetValue(GetKey(tenantId, entryId), out var entry);
return Task.FromResult(entry);
}
public Task<IReadOnlyList<DeadLetterEntry>> ListAsync(
string tenantId,
DeadLetterListOptions? options = null,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
options ??= new DeadLetterListOptions();
var query = _entries.Values
.Where(e => e.TenantId == tenantId);
if (options.Status.HasValue)
{
query = query.Where(e => e.Status == options.Status.Value);
}
if (!string.IsNullOrWhiteSpace(options.ChannelId))
{
query = query.Where(e => e.ChannelId == options.ChannelId);
}
if (!string.IsNullOrWhiteSpace(options.ChannelType))
{
query = query.Where(e => e.ChannelType == options.ChannelType);
}
if (options.Since.HasValue)
{
query = query.Where(e => e.CreatedAt >= options.Since.Value);
}
if (options.Until.HasValue)
{
query = query.Where(e => e.CreatedAt <= options.Until.Value);
}
var result = query
.OrderByDescending(e => e.CreatedAt)
.Skip(options.Offset)
.Take(options.Limit)
.ToArray();
return Task.FromResult<IReadOnlyList<DeadLetterEntry>>(result);
}
public Task<DeadLetterRetryResult> RetryAsync(
string tenantId,
string entryId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(entryId);
var key = GetKey(tenantId, entryId);
if (!_entries.TryGetValue(key, out var entry))
{
return Task.FromResult(new DeadLetterRetryResult
{
EntryId = entryId,
Success = false,
Error = "Entry not found"
});
}
if (entry.Status is DeadLetterStatus.Retried or DeadLetterStatus.Resolved)
{
return Task.FromResult(new DeadLetterRetryResult
{
EntryId = entryId,
Success = false,
Error = $"Entry is already {entry.Status}"
});
}
var now = _timeProvider.GetUtcNow();
// Update entry status
var updatedEntry = entry with
{
Status = DeadLetterStatus.Retried,
RetryCount = entry.RetryCount + 1,
LastRetryAt = now
};
_entries[key] = updatedEntry;
_logger.LogInformation(
"Retried dead-letter entry {EntryId} for tenant {TenantId}",
entryId, tenantId);
// In a real implementation, this would re-queue the delivery
return Task.FromResult(new DeadLetterRetryResult
{
EntryId = entryId,
Success = true,
RetriedAt = now,
NewDeliveryId = Guid.NewGuid().ToString("N")
});
}
public async Task<IReadOnlyList<DeadLetterRetryResult>> RetryBatchAsync(
string tenantId,
IEnumerable<string> entryIds,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentNullException.ThrowIfNull(entryIds);
var results = new List<DeadLetterRetryResult>();
foreach (var entryId in entryIds)
{
var result = await RetryAsync(tenantId, entryId, cancellationToken).ConfigureAwait(false);
results.Add(result);
}
return results;
}
public Task ResolveAsync(
string tenantId,
string entryId,
string resolution,
string? resolvedBy = null,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(entryId);
ArgumentException.ThrowIfNullOrWhiteSpace(resolution);
var key = GetKey(tenantId, entryId);
if (_entries.TryGetValue(key, out var entry))
{
var now = _timeProvider.GetUtcNow();
_entries[key] = entry with
{
Status = DeadLetterStatus.Resolved,
Resolution = resolution,
ResolvedBy = resolvedBy,
ResolvedAt = now
};
_logger.LogInformation(
"Resolved dead-letter entry {EntryId} for tenant {TenantId}: {Resolution}",
entryId, tenantId, resolution);
}
return Task.CompletedTask;
}
public Task<int> PurgeExpiredAsync(
string tenantId,
TimeSpan maxAge,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
var cutoff = _timeProvider.GetUtcNow() - maxAge;
var toRemove = _entries
.Where(kv => kv.Value.TenantId == tenantId && kv.Value.CreatedAt < cutoff)
.Select(kv => kv.Key)
.ToArray();
var count = 0;
foreach (var key in toRemove)
{
if (_entries.TryRemove(key, out _))
{
count++;
}
}
if (count > 0)
{
_logger.LogInformation(
"Purged {Count} expired dead-letter entries for tenant {TenantId}",
count, tenantId);
}
return Task.FromResult(count);
}
public Task<DeadLetterStats> GetStatsAsync(
string tenantId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
var entries = _entries.Values.Where(e => e.TenantId == tenantId).ToArray();
var byChannel = entries
.GroupBy(e => e.ChannelType)
.ToDictionary(g => g.Key, g => g.Count());
var byReason = entries
.GroupBy(e => e.FailureReason)
.ToDictionary(g => g.Key, g => g.Count());
var stats = new DeadLetterStats
{
TotalCount = entries.Length,
PendingCount = entries.Count(e => e.Status == DeadLetterStatus.Pending),
RetryingCount = entries.Count(e => e.Status == DeadLetterStatus.Retrying),
RetriedCount = entries.Count(e => e.Status == DeadLetterStatus.Retried),
ResolvedCount = entries.Count(e => e.Status == DeadLetterStatus.Resolved),
ExhaustedCount = entries.Count(e => e.Status == DeadLetterStatus.Exhausted),
ByChannel = byChannel,
ByReason = byReason,
OldestEntryAt = entries.MinBy(e => e.CreatedAt)?.CreatedAt,
NewestEntryAt = entries.MaxBy(e => e.CreatedAt)?.CreatedAt
};
return Task.FromResult(stats);
}
private static string GetKey(string tenantId, string entryId) => $"{tenantId}:{entryId}";
}
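The in-memory service can be exercised end to end without any infrastructure. A minimal sketch of the enqueue → retry → stats flow, assuming `NullLogger` from `Microsoft.Extensions.Logging.Abstractions` is available (the tenant/delivery identifiers are made up for illustration):

```csharp
using Microsoft.Extensions.Logging.Abstractions;
using StellaOps.Notifier.Worker.DeadLetter;

// Enqueue a failed delivery, retry it, then read back aggregate stats.
var service = new InMemoryDeadLetterService(
    TimeProvider.System,
    NullLogger<InMemoryDeadLetterService>.Instance);

var entry = await service.EnqueueAsync(new DeadLetterEnqueueRequest
{
    TenantId = "tenant-1",
    DeliveryId = "delivery-42",
    EventId = "event-7",
    ChannelId = "chan-webhook",
    ChannelType = "Webhook",
    FailureReason = "timeout",
    AttemptCount = 3
});

var retry = await service.RetryAsync("tenant-1", entry.EntryId);
var stats = await service.GetStatsAsync("tenant-1");
// retry.Success is true; stats.RetriedCount is 1 and stats.PendingCount is 0,
// since RetryAsync transitions the entry from Pending to Retried.
```

Note that a second `RetryAsync` on the same entry fails with "Entry is already Retried", which keeps batch retries idempotent.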

View File

@@ -0,0 +1,233 @@
using System.Diagnostics;
using System.Diagnostics.Metrics;
namespace StellaOps.Notifier.Worker.Observability;
/// <summary>
/// Default implementation of notification metrics using System.Diagnostics.Metrics.
/// </summary>
public sealed class DefaultNotifyMetrics : INotifyMetrics
{
private static readonly ActivitySource ActivitySource = new("StellaOps.Notifier", "1.0.0");
private static readonly Meter Meter = new("StellaOps.Notifier", "1.0.0");
// Counters
private readonly Counter<long> _deliveryAttempts;
private readonly Counter<long> _escalationEvents;
private readonly Counter<long> _deadLetterEntries;
private readonly Counter<long> _ruleEvaluations;
private readonly Counter<long> _templateRenders;
private readonly Counter<long> _stormEvents;
private readonly Counter<long> _retentionCleanups;
// Histograms
private readonly Histogram<double> _deliveryDuration;
private readonly Histogram<double> _ruleEvaluationDuration;
private readonly Histogram<double> _templateRenderDuration;
// Gauges (using ObservableGauge pattern)
private readonly Dictionary<string, int> _queueDepths = new();
private readonly object _queueDepthLock = new();
public DefaultNotifyMetrics()
{
// Initialize counters
_deliveryAttempts = Meter.CreateCounter<long>(
NotifyMetricNames.DeliveryAttempts,
unit: "{attempts}",
description: "Total number of notification delivery attempts");
_escalationEvents = Meter.CreateCounter<long>(
NotifyMetricNames.EscalationEvents,
unit: "{events}",
description: "Total number of escalation events");
_deadLetterEntries = Meter.CreateCounter<long>(
NotifyMetricNames.DeadLetterEntries,
unit: "{entries}",
description: "Total number of dead-letter entries");
_ruleEvaluations = Meter.CreateCounter<long>(
NotifyMetricNames.RuleEvaluations,
unit: "{evaluations}",
description: "Total number of rule evaluations");
_templateRenders = Meter.CreateCounter<long>(
NotifyMetricNames.TemplateRenders,
unit: "{renders}",
description: "Total number of template render operations");
_stormEvents = Meter.CreateCounter<long>(
NotifyMetricNames.StormEvents,
unit: "{events}",
description: "Total number of storm detection events");
_retentionCleanups = Meter.CreateCounter<long>(
NotifyMetricNames.RetentionCleanups,
unit: "{cleanups}",
description: "Total number of retention cleanup operations");
// Initialize histograms
_deliveryDuration = Meter.CreateHistogram<double>(
NotifyMetricNames.DeliveryDuration,
unit: "ms",
description: "Duration of delivery attempts in milliseconds");
_ruleEvaluationDuration = Meter.CreateHistogram<double>(
NotifyMetricNames.RuleEvaluationDuration,
unit: "ms",
description: "Duration of rule evaluations in milliseconds");
_templateRenderDuration = Meter.CreateHistogram<double>(
NotifyMetricNames.TemplateRenderDuration,
unit: "ms",
description: "Duration of template renders in milliseconds");
// Initialize observable gauge for queue depths
Meter.CreateObservableGauge(
NotifyMetricNames.QueueDepth,
observeValues: ObserveQueueDepths,
unit: "{messages}",
description: "Current queue depth per channel");
}
public void RecordDeliveryAttempt(string tenantId, string channelType, string status, TimeSpan duration)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.ChannelType, channelType },
{ NotifyMetricTags.Status, status }
};
_deliveryAttempts.Add(1, tags);
_deliveryDuration.Record(duration.TotalMilliseconds, tags);
}
public void RecordEscalation(string tenantId, int level, string outcome)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.Level, level.ToString() },
{ NotifyMetricTags.Outcome, outcome }
};
_escalationEvents.Add(1, tags);
}
public void RecordDeadLetter(string tenantId, string reason, string channelType)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.Reason, reason },
{ NotifyMetricTags.ChannelType, channelType }
};
_deadLetterEntries.Add(1, tags);
}
public void RecordRuleEvaluation(string tenantId, string ruleId, bool matched, TimeSpan duration)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.RuleId, ruleId },
{ NotifyMetricTags.Matched, matched.ToString().ToLowerInvariant() }
};
_ruleEvaluations.Add(1, tags);
_ruleEvaluationDuration.Record(duration.TotalMilliseconds, tags);
}
public void RecordTemplateRender(string tenantId, string templateKey, bool success, TimeSpan duration)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.TemplateKey, templateKey },
{ NotifyMetricTags.Success, success.ToString().ToLowerInvariant() }
};
_templateRenders.Add(1, tags);
_templateRenderDuration.Record(duration.TotalMilliseconds, tags);
}
public void RecordStormEvent(string tenantId, string stormKey, string decision)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.StormKey, stormKey },
{ NotifyMetricTags.Decision, decision }
};
_stormEvents.Add(1, tags);
}
public void RecordRetentionCleanup(string tenantId, string entityType, int deletedCount)
{
var tags = new TagList
{
{ NotifyMetricTags.TenantId, tenantId },
{ NotifyMetricTags.EntityType, entityType }
};
_retentionCleanups.Add(deletedCount, tags);
}
public void RecordQueueDepth(string tenantId, string channelType, int depth)
{
var key = $"{tenantId}:{channelType}";
lock (_queueDepthLock)
{
_queueDepths[key] = depth;
}
}
public Activity? StartDeliveryActivity(string tenantId, string deliveryId, string channelType)
{
var activity = ActivitySource.StartActivity("notify.delivery", ActivityKind.Internal);
if (activity is not null)
{
activity.SetTag(NotifyMetricTags.TenantId, tenantId);
activity.SetTag("delivery_id", deliveryId);
activity.SetTag(NotifyMetricTags.ChannelType, channelType);
}
return activity;
}
public Activity? StartEscalationActivity(string tenantId, string incidentId, int level)
{
var activity = ActivitySource.StartActivity("notify.escalation", ActivityKind.Internal);
if (activity is not null)
{
activity.SetTag(NotifyMetricTags.TenantId, tenantId);
activity.SetTag("incident_id", incidentId);
activity.SetTag(NotifyMetricTags.Level, level);
}
return activity;
}
private IEnumerable<Measurement<int>> ObserveQueueDepths()
    {
        // Snapshot under the lock so it is not held while the listener
        // consumes the yielded measurements (yield inside lock would keep
        // the lock across the whole enumeration).
        KeyValuePair<string, int>[] snapshot;
        lock (_queueDepthLock)
        {
            snapshot = _queueDepths.ToArray();
        }
        foreach (var (key, depth) in snapshot)
{
lock (_queueDepthLock)
{
foreach (var (key, depth) in _queueDepths)
{
var parts = key.Split(':');
if (parts.Length == 2)
{
yield return new Measurement<int>(
depth,
new TagList
{
{ NotifyMetricTags.TenantId, parts[0] },
{ NotifyMetricTags.ChannelType, parts[1] }
});
}
}
}
}
}
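Because `DefaultNotifyMetrics` publishes through a standard `Meter` named `StellaOps.Notifier`, the counters can be observed in-process with `MeterListener` from `System.Diagnostics.Metrics`. For a self-contained demo this sketch publishes a counter itself using the same meter and instrument names the class uses, rather than referencing the repo type:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics.Metrics;

// Observe the "StellaOps.Notifier" meter in-process and total up
// notify.delivery.attempts measurements.
long observed = 0;
using var listener = new MeterListener();
listener.InstrumentPublished = (instrument, l) =>
{
    if (instrument.Meter.Name == "StellaOps.Notifier")
    {
        l.EnableMeasurementEvents(instrument);
    }
};
listener.SetMeasurementEventCallback<long>((instrument, value, tags, state) =>
{
    observed += value;
    Console.WriteLine($"{instrument.Name} += {value}");
});
listener.Start();

// Stand-in for DefaultNotifyMetrics: same meter/instrument names.
using var meter = new Meter("StellaOps.Notifier", "1.0.0");
var attempts = meter.CreateCounter<long>("notify.delivery.attempts", unit: "{attempts}");
attempts.Add(1, new KeyValuePair<string, object?>("tenant_id", "t1"));
```

The same listener picks up `DefaultNotifyMetrics` instruments too, since `MeterListener.Start` replays `InstrumentPublished` for instruments that already exist.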

View File

@@ -0,0 +1,98 @@
using System.Diagnostics;
using System.Diagnostics.Metrics;
namespace StellaOps.Notifier.Worker.Observability;
/// <summary>
/// Interface for notification system metrics and tracing.
/// </summary>
public interface INotifyMetrics
{
/// <summary>
/// Records a notification delivery attempt.
/// </summary>
void RecordDeliveryAttempt(string tenantId, string channelType, string status, TimeSpan duration);
/// <summary>
/// Records an escalation event.
/// </summary>
void RecordEscalation(string tenantId, int level, string outcome);
/// <summary>
/// Records a dead-letter entry.
/// </summary>
void RecordDeadLetter(string tenantId, string reason, string channelType);
/// <summary>
/// Records rule evaluation.
/// </summary>
void RecordRuleEvaluation(string tenantId, string ruleId, bool matched, TimeSpan duration);
/// <summary>
/// Records template rendering.
/// </summary>
void RecordTemplateRender(string tenantId, string templateKey, bool success, TimeSpan duration);
/// <summary>
/// Records storm detection event.
/// </summary>
void RecordStormEvent(string tenantId, string stormKey, string decision);
/// <summary>
/// Records retention cleanup.
/// </summary>
void RecordRetentionCleanup(string tenantId, string entityType, int deletedCount);
/// <summary>
/// Records the current queue depth for a channel.
/// </summary>
void RecordQueueDepth(string tenantId, string channelType, int depth);
/// <summary>
/// Creates an activity for distributed tracing.
/// </summary>
Activity? StartDeliveryActivity(string tenantId, string deliveryId, string channelType);
/// <summary>
/// Creates an activity for escalation tracing.
/// </summary>
Activity? StartEscalationActivity(string tenantId, string incidentId, int level);
}
/// <summary>
/// Metric tag names for consistency.
/// </summary>
public static class NotifyMetricTags
{
public const string TenantId = "tenant_id";
public const string ChannelType = "channel_type";
public const string Status = "status";
public const string Outcome = "outcome";
public const string Level = "level";
public const string Reason = "reason";
public const string RuleId = "rule_id";
public const string Matched = "matched";
public const string TemplateKey = "template_key";
public const string Success = "success";
public const string StormKey = "storm_key";
public const string Decision = "decision";
public const string EntityType = "entity_type";
}
/// <summary>
/// Metric names for the notification system.
/// </summary>
public static class NotifyMetricNames
{
public const string DeliveryAttempts = "notify.delivery.attempts";
public const string DeliveryDuration = "notify.delivery.duration";
public const string EscalationEvents = "notify.escalation.events";
public const string DeadLetterEntries = "notify.deadletter.entries";
public const string RuleEvaluations = "notify.rule.evaluations";
public const string RuleEvaluationDuration = "notify.rule.evaluation.duration";
public const string TemplateRenders = "notify.template.renders";
public const string TemplateRenderDuration = "notify.template.render.duration";
public const string StormEvents = "notify.storm.events";
public const string RetentionCleanups = "notify.retention.cleanups";
public const string QueueDepth = "notify.queue.depth";
}

View File

@@ -0,0 +1,298 @@
using System.Collections.Concurrent;
using Microsoft.Extensions.Logging;
using StellaOps.Notifier.Worker.DeadLetter;
using StellaOps.Notifier.Worker.Observability;
namespace StellaOps.Notifier.Worker.Retention;
/// <summary>
/// Default implementation of retention policy service.
/// </summary>
public sealed class DefaultRetentionPolicyService : IRetentionPolicyService
{
private readonly ConcurrentDictionary<string, RetentionPolicy> _policies = new();
private readonly ConcurrentDictionary<string, RetentionCleanupExecution> _lastExecutions = new();
private readonly IDeadLetterService _deadLetterService;
private readonly TimeProvider _timeProvider;
private readonly INotifyMetrics? _metrics;
private readonly ILogger<DefaultRetentionPolicyService> _logger;
public DefaultRetentionPolicyService(
IDeadLetterService deadLetterService,
TimeProvider timeProvider,
ILogger<DefaultRetentionPolicyService> logger,
INotifyMetrics? metrics = null)
{
_deadLetterService = deadLetterService ?? throw new ArgumentNullException(nameof(deadLetterService));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_metrics = metrics;
}
public Task<RetentionPolicy> GetPolicyAsync(
string tenantId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
var policy = _policies.GetValueOrDefault(tenantId, RetentionPolicy.Default);
return Task.FromResult(policy);
}
public Task SetPolicyAsync(
string tenantId,
RetentionPolicy policy,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentNullException.ThrowIfNull(policy);
_policies[tenantId] = policy;
_logger.LogInformation(
"Updated retention policy for tenant {TenantId}: DeliveryRetention={DeliveryRetention}, AuditRetention={AuditRetention}",
tenantId, policy.DeliveryRetention, policy.AuditRetention);
return Task.CompletedTask;
}
public async Task<RetentionCleanupResult> ExecuteCleanupAsync(
string tenantId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
var executionId = Guid.NewGuid().ToString("N");
var startedAt = _timeProvider.GetUtcNow();
var policy = await GetPolicyAsync(tenantId, cancellationToken).ConfigureAwait(false);
var execution = new RetentionCleanupExecution
{
ExecutionId = executionId,
TenantId = tenantId,
StartedAt = startedAt,
Status = RetentionCleanupStatus.Running,
PolicyUsed = policy
};
_lastExecutions[tenantId] = execution;
_logger.LogInformation(
"Starting retention cleanup {ExecutionId} for tenant {TenantId}",
executionId, tenantId);
try
{
var counts = await ExecuteCleanupInternalAsync(tenantId, policy, cancellationToken)
.ConfigureAwait(false);
var completedAt = _timeProvider.GetUtcNow();
var duration = completedAt - startedAt;
execution = execution with
{
CompletedAt = completedAt,
Status = RetentionCleanupStatus.Completed,
Counts = counts
};
_lastExecutions[tenantId] = execution;
_logger.LogInformation(
"Completed retention cleanup {ExecutionId} for tenant {TenantId}: {Total} items deleted in {Duration}ms",
executionId, tenantId, counts.Total, duration.TotalMilliseconds);
return new RetentionCleanupResult
{
TenantId = tenantId,
Success = true,
ExecutedAt = startedAt,
Duration = duration,
Counts = counts
};
}
catch (OperationCanceledException)
{
execution = execution with
{
CompletedAt = _timeProvider.GetUtcNow(),
Status = RetentionCleanupStatus.Cancelled,
Error = "Operation was cancelled"
};
_lastExecutions[tenantId] = execution;
_logger.LogWarning(
"Retention cleanup {ExecutionId} for tenant {TenantId} was cancelled",
executionId, tenantId);
return new RetentionCleanupResult
{
TenantId = tenantId,
Success = false,
Error = "Operation was cancelled",
ExecutedAt = startedAt,
Duration = _timeProvider.GetUtcNow() - startedAt,
Counts = new RetentionCleanupCounts()
};
}
catch (Exception ex)
{
execution = execution with
{
CompletedAt = _timeProvider.GetUtcNow(),
Status = RetentionCleanupStatus.Failed,
Error = ex.Message
};
_lastExecutions[tenantId] = execution;
_logger.LogError(ex,
"Retention cleanup {ExecutionId} for tenant {TenantId} failed",
executionId, tenantId);
return new RetentionCleanupResult
{
TenantId = tenantId,
Success = false,
Error = ex.Message,
ExecutedAt = startedAt,
Duration = _timeProvider.GetUtcNow() - startedAt,
Counts = new RetentionCleanupCounts()
};
}
}
public async Task<IReadOnlyList<RetentionCleanupResult>> ExecuteCleanupAllAsync(
CancellationToken cancellationToken = default)
{
var tenantIds = _policies.Keys.ToArray();
var results = new List<RetentionCleanupResult>();
foreach (var tenantId in tenantIds)
{
cancellationToken.ThrowIfCancellationRequested();
var result = await ExecuteCleanupAsync(tenantId, cancellationToken).ConfigureAwait(false);
results.Add(result);
}
_logger.LogInformation(
"Completed retention cleanup for {Count} tenants: {Successful} successful, {Failed} failed",
results.Count, results.Count(r => r.Success), results.Count(r => !r.Success));
return results;
}
public Task<RetentionCleanupExecution?> GetLastExecutionAsync(
string tenantId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
_lastExecutions.TryGetValue(tenantId, out var execution);
return Task.FromResult(execution);
}
public async Task<RetentionCleanupPreview> PreviewCleanupAsync(
string tenantId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
var policy = await GetPolicyAsync(tenantId, cancellationToken).ConfigureAwait(false);
var now = _timeProvider.GetUtcNow();
var cutoffDates = new Dictionary<string, DateTimeOffset>
{
["Deliveries"] = now - policy.DeliveryRetention,
["AuditEntries"] = now - policy.AuditRetention,
["DeadLetterEntries"] = now - policy.DeadLetterRetention,
["StormData"] = now - policy.StormDataRetention,
["InboxMessages"] = now - policy.InboxRetention,
["Events"] = now - policy.EventHistoryRetention
};
// Get estimated dead-letter count
var deadLetterStats = await _deadLetterService.GetStatsAsync(tenantId, cancellationToken)
.ConfigureAwait(false);
// Estimate counts based on age distribution (simplified - in production would query actual counts)
var estimatedCounts = new RetentionCleanupCounts
{
DeadLetterEntries = EstimateExpiredCount(deadLetterStats, policy.DeadLetterRetention, now)
};
return new RetentionCleanupPreview
{
TenantId = tenantId,
PreviewedAt = now,
EstimatedCounts = estimatedCounts,
PolicyApplied = policy,
CutoffDates = cutoffDates
};
}
private async Task<RetentionCleanupCounts> ExecuteCleanupInternalAsync(
string tenantId,
RetentionPolicy policy,
CancellationToken cancellationToken)
{
// Purge expired dead-letter entries
var deadLetterCount = await _deadLetterService.PurgeExpiredAsync(
tenantId,
policy.DeadLetterRetention,
cancellationToken).ConfigureAwait(false);
if (deadLetterCount > 0)
{
_metrics?.RecordRetentionCleanup(tenantId, "DeadLetter", deadLetterCount);
}
// In a full implementation, we would also clean up:
// - Delivery records from delivery store
// - Audit log entries from audit store
// - Storm tracking data from storm store
// - Inbox messages from inbox store
// - Event history from event store
// For now, return counts with just dead-letter cleanup
return new RetentionCleanupCounts
{
DeadLetterEntries = deadLetterCount
};
}
private static int EstimateExpiredCount(DeadLetterStats stats, TimeSpan retention, DateTimeOffset now)
{
if (!stats.OldestEntryAt.HasValue)
{
return 0;
}
var cutoff = now - retention;
if (stats.OldestEntryAt.Value >= cutoff)
{
return 0;
}
// Rough estimation - assume linear distribution
if (!stats.NewestEntryAt.HasValue || stats.TotalCount == 0)
{
return 0;
}
var totalSpan = stats.NewestEntryAt.Value - stats.OldestEntryAt.Value;
if (totalSpan.TotalSeconds <= 0)
{
return stats.TotalCount;
}
var expiredSpan = cutoff - stats.OldestEntryAt.Value;
var ratio = Math.Clamp(expiredSpan.TotalSeconds / totalSpan.TotalSeconds, 0, 1);
return (int)(stats.TotalCount * ratio);
}
}
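The linear-distribution estimate in `EstimateExpiredCount` is easy to sanity-check outside the service. The helper below is a hypothetical Python transcription of the same arithmetic (uniform spread of entries between the oldest and newest timestamps), not part of the worker itself:

```python
from datetime import datetime, timedelta, timezone

def estimate_expired(total_count, oldest, newest, retention, now):
    """Estimate how many entries fall before the retention cutoff,
    assuming entries are spread uniformly between oldest and newest."""
    if total_count == 0 or oldest is None or newest is None:
        return 0
    cutoff = now - retention
    if oldest >= cutoff:
        return 0  # even the oldest entry is still within retention
    total_span = (newest - oldest).total_seconds()
    if total_span <= 0:
        return total_count  # all entries share one timestamp, all expired
    expired_span = (cutoff - oldest).total_seconds()
    ratio = min(max(expired_span / total_span, 0.0), 1.0)
    return int(total_count * ratio)

now = datetime(2025, 1, 31, tzinfo=timezone.utc)
oldest = now - timedelta(days=60)
# 30-day retention over a 60-day uniform spread: about half are expired.
print(estimate_expired(1000, oldest, now, timedelta(days=30), now))  # -> 500
```

As the C# comment notes, this is only a preview heuristic; the actual purge counts come from the stores.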


@@ -0,0 +1,181 @@
namespace StellaOps.Notifier.Worker.Retention;
/// <summary>
/// Service for managing data retention policies and cleanup.
/// </summary>
public interface IRetentionPolicyService
{
/// <summary>
/// Gets the retention policy for a tenant.
/// </summary>
Task<RetentionPolicy> GetPolicyAsync(
string tenantId,
CancellationToken cancellationToken = default);
/// <summary>
/// Sets/updates the retention policy for a tenant.
/// </summary>
Task SetPolicyAsync(
string tenantId,
RetentionPolicy policy,
CancellationToken cancellationToken = default);
/// <summary>
/// Executes retention cleanup for a tenant.
/// </summary>
Task<RetentionCleanupResult> ExecuteCleanupAsync(
string tenantId,
CancellationToken cancellationToken = default);
/// <summary>
/// Executes retention cleanup for all tenants.
/// </summary>
Task<IReadOnlyList<RetentionCleanupResult>> ExecuteCleanupAllAsync(
CancellationToken cancellationToken = default);
/// <summary>
/// Gets the last cleanup execution details.
/// </summary>
Task<RetentionCleanupExecution?> GetLastExecutionAsync(
string tenantId,
CancellationToken cancellationToken = default);
/// <summary>
/// Previews what would be cleaned up without actually deleting.
/// </summary>
Task<RetentionCleanupPreview> PreviewCleanupAsync(
string tenantId,
CancellationToken cancellationToken = default);
}
/// <summary>
/// Data retention policy configuration.
/// </summary>
public sealed record RetentionPolicy
{
/// <summary>
/// Retention period for delivery records.
/// </summary>
public TimeSpan DeliveryRetention { get; init; } = TimeSpan.FromDays(90);
/// <summary>
/// Retention period for audit log entries.
/// </summary>
public TimeSpan AuditRetention { get; init; } = TimeSpan.FromDays(365);
/// <summary>
/// Retention period for dead-letter entries.
/// </summary>
public TimeSpan DeadLetterRetention { get; init; } = TimeSpan.FromDays(30);
/// <summary>
/// Retention period for storm tracking data.
/// </summary>
public TimeSpan StormDataRetention { get; init; } = TimeSpan.FromDays(7);
/// <summary>
/// Retention period for inbox messages.
/// </summary>
public TimeSpan InboxRetention { get; init; } = TimeSpan.FromDays(30);
/// <summary>
/// Retention period for event history.
/// </summary>
public TimeSpan EventHistoryRetention { get; init; } = TimeSpan.FromDays(30);
/// <summary>
/// Whether automatic cleanup is enabled.
/// </summary>
public bool AutoCleanupEnabled { get; init; } = true;
/// <summary>
/// Cron expression for automatic cleanup schedule.
/// </summary>
public string CleanupSchedule { get; init; } = "0 2 * * *"; // Daily at 2 AM
/// <summary>
/// Maximum records to delete per cleanup run.
/// </summary>
public int MaxDeletesPerRun { get; init; } = 10000;
/// <summary>
/// Whether to keep resolved/acknowledged deliveries longer.
/// </summary>
public bool ExtendResolvedRetention { get; init; } = true;
/// <summary>
/// Extension multiplier for resolved items (e.g., 2x = double the retention).
/// </summary>
public double ResolvedRetentionMultiplier { get; init; } = 2.0;
/// <summary>
/// Default policy with standard retention periods.
/// </summary>
public static RetentionPolicy Default => new();
}
/// <summary>
/// Result of a retention cleanup execution.
/// </summary>
public sealed record RetentionCleanupResult
{
public required string TenantId { get; init; }
public required bool Success { get; init; }
public string? Error { get; init; }
public required DateTimeOffset ExecutedAt { get; init; }
public TimeSpan Duration { get; init; }
public required RetentionCleanupCounts Counts { get; init; }
}
/// <summary>
/// Counts of items deleted during retention cleanup.
/// </summary>
public sealed record RetentionCleanupCounts
{
public int Deliveries { get; init; }
public int AuditEntries { get; init; }
public int DeadLetterEntries { get; init; }
public int StormData { get; init; }
public int InboxMessages { get; init; }
public int Events { get; init; }
public int Total => Deliveries + AuditEntries + DeadLetterEntries + StormData + InboxMessages + Events;
}
/// <summary>
/// Details of a cleanup execution.
/// </summary>
public sealed record RetentionCleanupExecution
{
public required string ExecutionId { get; init; }
public required string TenantId { get; init; }
public required DateTimeOffset StartedAt { get; init; }
public DateTimeOffset? CompletedAt { get; init; }
public required RetentionCleanupStatus Status { get; init; }
public RetentionCleanupCounts? Counts { get; init; }
public string? Error { get; init; }
public RetentionPolicy PolicyUsed { get; init; } = RetentionPolicy.Default;
}
/// <summary>
/// Status of a cleanup execution.
/// </summary>
public enum RetentionCleanupStatus
{
Running,
Completed,
Failed,
Cancelled
}
/// <summary>
/// Preview of what would be cleaned up.
/// </summary>
public sealed record RetentionCleanupPreview
{
public required string TenantId { get; init; }
public required DateTimeOffset PreviewedAt { get; init; }
public required RetentionCleanupCounts EstimatedCounts { get; init; }
public required RetentionPolicy PolicyApplied { get; init; }
public required IReadOnlyDictionary<string, DateTimeOffset> CutoffDates { get; init; }
}
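`ExtendResolvedRetention` and `ResolvedRetentionMultiplier` are declared on the policy but not yet applied anywhere in the shown cleanup code. Presumably the effective cutoff for resolved/acknowledged items would be computed along these lines; this is a hypothetical Python sketch of the intended semantics, not the service's actual logic:

```python
from datetime import timedelta

def effective_retention(base: timedelta, resolved: bool,
                        extend_resolved: bool = True,
                        multiplier: float = 2.0) -> timedelta:
    """Resolved/acknowledged items are kept longer when extension is enabled."""
    if resolved and extend_resolved:
        return timedelta(seconds=base.total_seconds() * multiplier)
    return base

base = timedelta(days=90)
print(effective_retention(base, resolved=True))   # 2x multiplier -> 180 days
print(effective_retention(base, resolved=False))  # unchanged -> 90 days
```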


@@ -0,0 +1,509 @@
using System.Text;
using System.Text.RegularExpressions;
using Microsoft.Extensions.Logging;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Default HTML sanitizer implementation using regex-based filtering.
/// For production, consider using a dedicated library like HtmlSanitizer or AngleSharp.
/// </summary>
public sealed partial class DefaultHtmlSanitizer : IHtmlSanitizer
{
private readonly ILogger<DefaultHtmlSanitizer> _logger;
// Safe elements (whitelist approach)
private static readonly HashSet<string> SafeElements = new(StringComparer.OrdinalIgnoreCase)
{
"p", "div", "span", "br", "hr",
"h1", "h2", "h3", "h4", "h5", "h6",
"strong", "b", "em", "i", "u", "s", "strike",
"ul", "ol", "li", "dl", "dt", "dd",
"table", "thead", "tbody", "tfoot", "tr", "th", "td",
"a", "img",
"blockquote", "pre", "code",
"sub", "sup", "small", "mark",
"caption", "figure", "figcaption"
};
// Safe attributes
private static readonly HashSet<string> SafeAttributes = new(StringComparer.OrdinalIgnoreCase)
{
"href", "src", "alt", "title", "class", "id",
"width", "height", "style",
"colspan", "rowspan", "scope",
"target", "rel"
};
// Dangerous URL schemes
private static readonly HashSet<string> DangerousSchemes = new(StringComparer.OrdinalIgnoreCase)
{
"javascript", "vbscript", "data", "file"
};
// Event handler attributes (all start with "on")
private static readonly Regex EventHandlerRegex = EventHandlerPattern();
// Style-based attacks
private static readonly Regex DangerousStyleRegex = DangerousStylePattern();
public DefaultHtmlSanitizer(ILogger<DefaultHtmlSanitizer> logger)
{
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public string Sanitize(string html, HtmlSanitizeOptions? options = null)
{
if (string.IsNullOrWhiteSpace(html))
{
return string.Empty;
}
options ??= new HtmlSanitizeOptions();
if (html.Length > options.MaxContentLength)
{
_logger.LogWarning("HTML content exceeds max length {MaxLength}, truncating", options.MaxContentLength);
html = html[..options.MaxContentLength];
}
var allowedTags = new HashSet<string>(SafeElements, StringComparer.OrdinalIgnoreCase);
if (options.AdditionalAllowedTags is not null)
{
foreach (var tag in options.AdditionalAllowedTags)
{
allowedTags.Add(tag);
}
}
var allowedAttrs = new HashSet<string>(SafeAttributes, StringComparer.OrdinalIgnoreCase);
if (options.AdditionalAllowedAttributes is not null)
{
foreach (var attr in options.AdditionalAllowedAttributes)
{
allowedAttrs.Add(attr);
}
}
// Process HTML
var result = new StringBuilder();
var depth = 0;
var pos = 0;
while (pos < html.Length)
{
var tagStart = html.IndexOf('<', pos);
if (tagStart < 0)
{
// No more tags, append rest
result.Append(EncodeText(html[pos..]));
break;
}
// Append text before tag
if (tagStart > pos)
{
result.Append(EncodeText(html[pos..tagStart]));
}
var tagEnd = html.IndexOf('>', tagStart);
if (tagEnd < 0)
{
// Malformed, skip rest
break;
}
var tagContent = html[(tagStart + 1)..tagEnd];
var isClosing = tagContent.StartsWith('/');
var tagName = ExtractTagName(tagContent);
if (allowedTags.Contains(tagName))
{
if (isClosing)
{
// Only adjust depth for tags that are actually emitted; stripped tags never incremented it.
depth = Math.Max(0, depth - 1);
result.Append($"</{tagName}>");
}
else
{
// Process attributes
var sanitizedTag = SanitizeTag(tagContent, tagName, allowedAttrs, options);
result.Append($"<{sanitizedTag}>");
if (!IsSelfClosing(tagName) && !tagContent.EndsWith('/'))
{
depth++;
}
}
}
else
{
_logger.LogDebug("Stripped disallowed tag: {TagName}", tagName);
}
if (depth > options.MaxNestingDepth)
{
_logger.LogWarning("HTML nesting depth exceeds max {MaxDepth}, truncating", options.MaxNestingDepth);
break;
}
pos = tagEnd + 1;
}
return result.ToString();
}
public HtmlValidationResult Validate(string html)
{
if (string.IsNullOrWhiteSpace(html))
{
return HtmlValidationResult.Safe(new HtmlContentStats());
}
var issues = new List<HtmlSecurityIssue>();
var stats = new HtmlContentStats
{
CharacterCount = html.Length
};
var pos = 0;
var depth = 0;
var maxDepth = 0;
var elementCount = 0;
var linkCount = 0;
var imageCount = 0;
// Check for script tags
if (ScriptTagRegex().IsMatch(html))
{
issues.Add(new HtmlSecurityIssue
{
Type = HtmlSecurityIssueType.ScriptInjection,
Description = "Script tags are not allowed"
});
}
// Check for event handlers
var eventMatches = EventHandlerRegex.Matches(html);
foreach (Match match in eventMatches)
{
issues.Add(new HtmlSecurityIssue
{
Type = HtmlSecurityIssueType.EventHandler,
Description = "Event handler attributes are not allowed",
AttributeName = match.Value,
Position = match.Index
});
}
// Check for dangerous URLs
var hrefMatches = DangerousUrlRegex().Matches(html);
foreach (Match match in hrefMatches)
{
issues.Add(new HtmlSecurityIssue
{
Type = HtmlSecurityIssueType.DangerousUrl,
Description = "Dangerous URL scheme detected",
Position = match.Index
});
}
// Check for dangerous style content
var styleMatches = DangerousStyleRegex.Matches(html);
foreach (Match match in styleMatches)
{
issues.Add(new HtmlSecurityIssue
{
Type = HtmlSecurityIssueType.StyleInjection,
Description = "Dangerous style content detected",
Position = match.Index
});
}
// Check for dangerous elements
var dangerousElements = new[] { "iframe", "object", "embed", "form", "input", "button", "meta", "link", "base" };
foreach (var element in dangerousElements)
{
var elementRegex = new Regex($@"<{element}\b", RegexOptions.IgnoreCase);
if (elementRegex.IsMatch(html))
{
issues.Add(new HtmlSecurityIssue
{
Type = HtmlSecurityIssueType.DangerousElement,
Description = $"Dangerous element '{element}' is not allowed",
ElementName = element
});
}
}
// Count elements and check nesting
while (pos < html.Length)
{
var tagStart = html.IndexOf('<', pos);
if (tagStart < 0) break;
var tagEnd = html.IndexOf('>', tagStart);
if (tagEnd < 0) break;
var tagContent = html[(tagStart + 1)..tagEnd];
var isClosing = tagContent.StartsWith('/');
var tagName = ExtractTagName(tagContent);
if (!isClosing && !string.IsNullOrEmpty(tagName) && !tagContent.EndsWith('/'))
{
if (!IsSelfClosing(tagName))
{
depth++;
maxDepth = Math.Max(maxDepth, depth);
}
elementCount++;
if (tagName.Equals("a", StringComparison.OrdinalIgnoreCase)) linkCount++;
if (tagName.Equals("img", StringComparison.OrdinalIgnoreCase)) imageCount++;
}
else if (isClosing)
{
// Clamp so stray closing tags cannot drive the depth negative.
depth = Math.Max(0, depth - 1);
}
pos = tagEnd + 1;
}
stats = stats with
{
ElementCount = elementCount,
MaxDepth = maxDepth,
LinkCount = linkCount,
ImageCount = imageCount
};
return issues.Count == 0
? HtmlValidationResult.Safe(stats)
: HtmlValidationResult.Unsafe(issues, stats);
}
public string StripHtml(string html)
{
if (string.IsNullOrWhiteSpace(html))
{
return string.Empty;
}
// Remove all tags
var text = HtmlTagRegex().Replace(html, " ");
// Decode entities
text = System.Net.WebUtility.HtmlDecode(text);
// Normalize whitespace
text = WhitespaceRegex().Replace(text, " ").Trim();
return text;
}
private static string SanitizeTag(
string tagContent,
string tagName,
HashSet<string> allowedAttrs,
HtmlSanitizeOptions options)
{
var result = new StringBuilder(tagName);
// Extract and sanitize attributes
var attrMatches = AttributeRegex().Matches(tagContent);
foreach (Match match in attrMatches)
{
var attrName = match.Groups[1].Value;
var attrValue = match.Groups[2].Value;
if (!allowedAttrs.Contains(attrName))
{
continue;
}
// Skip event handlers (defense in depth; the attribute whitelist already excludes them)
if (attrName.StartsWith("on", StringComparison.OrdinalIgnoreCase))
{
continue;
}
// Sanitize href/src values
if (attrName.Equals("href", StringComparison.OrdinalIgnoreCase) ||
attrName.Equals("src", StringComparison.OrdinalIgnoreCase))
{
attrValue = SanitizeUrl(attrValue, options);
if (string.IsNullOrEmpty(attrValue))
{
continue;
}
}
// Sanitize style values
if (attrName.Equals("style", StringComparison.OrdinalIgnoreCase))
{
attrValue = SanitizeStyle(attrValue);
if (string.IsNullOrEmpty(attrValue))
{
continue;
}
}
result.Append($" {attrName}=\"{EncodeAttributeValue(attrValue)}\"");
}
// Add rel="noopener noreferrer" to links with target
if (tagName.Equals("a", StringComparison.OrdinalIgnoreCase) &&
tagContent.Contains("target=", StringComparison.OrdinalIgnoreCase))
{
if (!tagContent.Contains("rel=", StringComparison.OrdinalIgnoreCase))
{
result.Append(" rel=\"noopener noreferrer\"");
}
}
if (tagContent.TrimEnd().EndsWith('/'))
{
result.Append(" /");
}
return result.ToString();
}
private static string SanitizeUrl(string url, HtmlSanitizeOptions options)
{
if (string.IsNullOrWhiteSpace(url))
{
return string.Empty;
}
url = url.Trim();
// Check for dangerous schemes; do not cap the scheme length ("javascript" itself is 10 characters)
var colonIndex = url.IndexOf(':');
if (colonIndex > 0)
{
var scheme = url[..colonIndex].ToLowerInvariant();
if (DangerousSchemes.Contains(scheme))
{
if (scheme == "data" && options.AllowDataUrls)
{
// Allow data URLs if explicitly enabled
return url;
}
return string.Empty;
}
}
// Allow relative URLs and safe absolute URLs
if (url.StartsWith("http://", StringComparison.OrdinalIgnoreCase) ||
url.StartsWith("https://", StringComparison.OrdinalIgnoreCase) ||
url.StartsWith("mailto:", StringComparison.OrdinalIgnoreCase) ||
url.StartsWith("tel:", StringComparison.OrdinalIgnoreCase) ||
url.StartsWith('/') ||
url.StartsWith('#') ||
!url.Contains(':'))
{
return url;
}
return string.Empty;
}
private static string SanitizeStyle(string style)
{
if (string.IsNullOrWhiteSpace(style))
{
return string.Empty;
}
// Remove dangerous CSS
if (DangerousStyleRegex.IsMatch(style))
{
return string.Empty;
}
// Only allow simple property:value pairs
var safeProperties = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
"color", "background-color", "font-size", "font-weight", "font-style",
"text-align", "text-decoration", "margin", "padding", "border",
"width", "height", "max-width", "max-height", "display"
};
var result = new StringBuilder();
var pairs = style.Split(';', StringSplitOptions.RemoveEmptyEntries);
foreach (var pair in pairs)
{
var colonIndex = pair.IndexOf(':');
if (colonIndex <= 0) continue;
var property = pair[..colonIndex].Trim().ToLowerInvariant();
var value = pair[(colonIndex + 1)..].Trim();
if (safeProperties.Contains(property) && !value.Contains("url(", StringComparison.OrdinalIgnoreCase))
{
if (result.Length > 0) result.Append("; ");
result.Append($"{property}: {value}");
}
}
return result.ToString();
}
private static string ExtractTagName(string tagContent)
{
var content = tagContent.TrimStart('/').Trim();
var spaceIndex = content.IndexOfAny([' ', '\t', '\n', '\r', '/']);
return spaceIndex > 0 ? content[..spaceIndex] : content;
}
private static bool IsSelfClosing(string tagName)
{
return tagName.Equals("br", StringComparison.OrdinalIgnoreCase) ||
tagName.Equals("hr", StringComparison.OrdinalIgnoreCase) ||
tagName.Equals("img", StringComparison.OrdinalIgnoreCase) ||
tagName.Equals("input", StringComparison.OrdinalIgnoreCase) ||
tagName.Equals("meta", StringComparison.OrdinalIgnoreCase) ||
tagName.Equals("link", StringComparison.OrdinalIgnoreCase);
}
private static string EncodeText(string text)
{
return System.Net.WebUtility.HtmlEncode(text);
}
private static string EncodeAttributeValue(string value)
{
return value
.Replace("&", "&amp;")
.Replace("\"", "&quot;")
.Replace("<", "&lt;")
.Replace(">", "&gt;");
}
[GeneratedRegex(@"\bon\w+\s*=", RegexOptions.IgnoreCase)]
private static partial Regex EventHandlerPattern();
[GeneratedRegex(@"expression\s*\(|behavior\s*:|@import|@charset|binding\s*:", RegexOptions.IgnoreCase)]
private static partial Regex DangerousStylePattern();
[GeneratedRegex(@"<script\b", RegexOptions.IgnoreCase)]
private static partial Regex ScriptTagRegex();
[GeneratedRegex(@"(javascript|vbscript|data)\s*:", RegexOptions.IgnoreCase)]
private static partial Regex DangerousUrlRegex();
[GeneratedRegex(@"<[^>]*>")]
private static partial Regex HtmlTagRegex();
[GeneratedRegex(@"\s+")]
private static partial Regex WhitespaceRegex();
[GeneratedRegex(@"(\w+)\s*=\s*""([^""]*)""")]
private static partial Regex AttributeRegex();
}
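The URL handling in `SanitizeUrl` reduces to a dangerous-scheme denylist in front of a safe-prefix allowlist, with relative URLs passing through. A minimal Python transcription of that decision, for illustration only:

```python
DANGEROUS_SCHEMES = {"javascript", "vbscript", "data", "file"}

def sanitize_url(url: str, allow_data_urls: bool = False) -> str:
    """Return url if it is safe to emit in href/src, else an empty string."""
    url = url.strip()
    if not url:
        return ""
    colon = url.find(":")
    if colon > 0:
        scheme = url[:colon].lower()
        if scheme in DANGEROUS_SCHEMES:
            # data: URLs may be explicitly opted in; everything else is dropped.
            return url if (scheme == "data" and allow_data_urls) else ""
    if (url.lower().startswith(("http://", "https://", "mailto:", "tel:"))
            or url.startswith(("/", "#"))
            or ":" not in url):
        return url
    return ""  # unknown absolute scheme: reject

print(sanitize_url("javascript:alert(1)"))  # -> "" (blocked)
print(sanitize_url("https://example.com"))  # kept as-is
```

Note the two layers are deliberately redundant: even if a dangerous scheme slipped past the denylist, it would still fail the final allowlist because it contains a colon and no safe prefix.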


@@ -0,0 +1,221 @@
using System.Collections.Concurrent;
using System.Text.RegularExpressions;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Default implementation of tenant isolation validation.
/// </summary>
public sealed partial class DefaultTenantIsolationValidator : ITenantIsolationValidator
{
private readonly TenantIsolationOptions _options;
private readonly TimeProvider _timeProvider;
private readonly ILogger<DefaultTenantIsolationValidator> _logger;
private readonly ConcurrentQueue<TenantIsolationViolation> _violations = new();
// Valid tenant ID pattern: alphanumeric, hyphens, underscores, 3-64 chars
private static readonly Regex TenantIdPattern = TenantIdRegex();
public DefaultTenantIsolationValidator(
IOptions<TenantIsolationOptions> options,
TimeProvider timeProvider,
ILogger<DefaultTenantIsolationValidator> logger)
{
_options = options?.Value ?? new TenantIsolationOptions();
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public TenantIsolationResult ValidateAccess(
string requestTenantId,
string resourceTenantId,
string resourceType,
string resourceId)
{
ArgumentException.ThrowIfNullOrWhiteSpace(requestTenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(resourceTenantId);
// Normalize tenant IDs
var normalizedRequest = NormalizeTenantId(requestTenantId);
var normalizedResource = NormalizeTenantId(resourceTenantId);
// Check for exact match
if (string.Equals(normalizedRequest, normalizedResource, StringComparison.OrdinalIgnoreCase))
{
return TenantIsolationResult.Allow(requestTenantId, resourceTenantId);
}
// Check for cross-tenant access exceptions (admin tenants, shared resources)
if (_options.AllowCrossTenantAccess &&
_options.CrossTenantAllowedPairs.Contains($"{normalizedRequest}:{normalizedResource}"))
{
_logger.LogDebug(
"Cross-tenant access allowed: {RequestTenant} -> {ResourceTenant} for {ResourceType}",
requestTenantId, resourceTenantId, resourceType);
return TenantIsolationResult.Allow(requestTenantId, resourceTenantId);
}
// Check if request tenant is an admin tenant
if (_options.AdminTenants.Contains(normalizedRequest))
{
_logger.LogInformation(
"Admin tenant {AdminTenant} accessing resource from {ResourceTenant}",
requestTenantId, resourceTenantId);
return TenantIsolationResult.Allow(requestTenantId, resourceTenantId);
}
// Violation detected
var violation = new TenantIsolationViolation
{
OccurredAt = _timeProvider.GetUtcNow(),
RequestTenantId = requestTenantId,
ResourceTenantId = resourceTenantId,
ResourceType = resourceType,
ResourceId = resourceId,
Operation = "access"
};
RecordViolation(violation);
_logger.LogWarning(
"Tenant isolation violation: {RequestTenant} attempted to access {ResourceType}/{ResourceId} belonging to {ResourceTenant}",
requestTenantId, resourceType, resourceId, resourceTenantId);
return TenantIsolationResult.Deny(
requestTenantId,
resourceTenantId,
"Cross-tenant access denied",
resourceType,
resourceId);
}
public IReadOnlyList<TenantIsolationResult> ValidateBatch(
string requestTenantId,
IEnumerable<TenantResource> resources)
{
ArgumentException.ThrowIfNullOrWhiteSpace(requestTenantId);
ArgumentNullException.ThrowIfNull(resources);
return resources
.Select(r => ValidateAccess(requestTenantId, r.TenantId, r.ResourceType, r.ResourceId))
.ToArray();
}
public string? SanitizeTenantId(string? tenantId)
{
if (string.IsNullOrWhiteSpace(tenantId))
{
return null;
}
var sanitized = tenantId.Trim();
// Remove any control characters
sanitized = ControlCharsRegex().Replace(sanitized, "");
// Check format
if (!TenantIdPattern.IsMatch(sanitized))
{
_logger.LogWarning("Invalid tenant ID format: {TenantId}", tenantId);
return null;
}
return sanitized;
}
public bool IsValidTenantIdFormat(string? tenantId)
{
if (string.IsNullOrWhiteSpace(tenantId))
{
return false;
}
return TenantIdPattern.IsMatch(tenantId.Trim());
}
public void RecordViolation(TenantIsolationViolation violation)
{
ArgumentNullException.ThrowIfNull(violation);
_violations.Enqueue(violation);
// Keep only recent violations
while (_violations.Count > _options.MaxStoredViolations)
{
_violations.TryDequeue(out _);
}
// Emit metrics
TenantIsolationMetrics.RecordViolation(
violation.RequestTenantId,
violation.ResourceTenantId,
violation.ResourceType);
}
public IReadOnlyList<TenantIsolationViolation> GetRecentViolations(int limit = 100)
{
return _violations.TakeLast(Math.Min(limit, _options.MaxStoredViolations)).ToArray();
}
private static string NormalizeTenantId(string tenantId)
{
return tenantId.Trim().ToLowerInvariant();
}
[GeneratedRegex(@"^[a-zA-Z0-9][a-zA-Z0-9_-]{2,63}$")]
private static partial Regex TenantIdRegex();
[GeneratedRegex(@"[\x00-\x1F\x7F]")]
private static partial Regex ControlCharsRegex();
}
/// <summary>
/// Configuration options for tenant isolation.
/// </summary>
public sealed class TenantIsolationOptions
{
/// <summary>
/// Whether to allow any cross-tenant access.
/// </summary>
public bool AllowCrossTenantAccess { get; set; }
/// <summary>
/// Pairs of tenants allowed to access each other's resources.
/// Format: "tenant1:tenant2" means tenant1 can access tenant2's resources.
/// </summary>
public HashSet<string> CrossTenantAllowedPairs { get; set; } = [];
/// <summary>
/// Tenants with admin access to all resources.
/// </summary>
public HashSet<string> AdminTenants { get; set; } = [];
/// <summary>
/// Maximum number of violations to store in memory.
/// </summary>
public int MaxStoredViolations { get; set; } = 1000;
/// <summary>
/// Whether to throw exceptions on violations (vs returning result).
/// </summary>
public bool ThrowOnViolation { get; set; }
}
/// <summary>
/// Metrics for tenant isolation.
/// </summary>
internal static class TenantIsolationMetrics
{
// In a real implementation, these would emit to metrics system
private static long _violationCount;
public static void RecordViolation(string requestTenant, string resourceTenant, string resourceType)
{
Interlocked.Increment(ref _violationCount);
// In production: emit to Prometheus/StatsD/etc.
}
public static long GetViolationCount() => _violationCount;
}
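The tenant-ID format rule (leading alphanumeric, then alphanumerics, hyphens, or underscores, 3 to 64 characters total) is easy to exercise outside the service. An equivalent Python check, for illustration:

```python
import re

# Mirrors ^[a-zA-Z0-9][a-zA-Z0-9_-]{2,63}$ from DefaultTenantIsolationValidator.
TENANT_ID = re.compile(r"^[a-zA-Z0-9][a-zA-Z0-9_-]{2,63}$")

def is_valid_tenant_id(tenant_id: str) -> bool:
    return bool(TENANT_ID.match(tenant_id.strip()))

print(is_valid_tenant_id("acme-prod"))  # True
print(is_valid_tenant_id("ab"))         # False: shorter than 3 characters
print(is_valid_tenant_id("-leading"))   # False: must start alphanumeric
```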


@@ -0,0 +1,329 @@
using System.Collections.Concurrent;
using System.Net;
using System.Security.Cryptography;
using System.Text;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Default implementation of webhook security service using HMAC-SHA256.
/// </summary>
public sealed class DefaultWebhookSecurityService : IWebhookSecurityService
{
private const string SignaturePrefix = "v1";
private const int TimestampToleranceSeconds = 300; // 5 minutes
private readonly WebhookSecurityOptions _options;
private readonly TimeProvider _timeProvider;
private readonly ILogger<DefaultWebhookSecurityService> _logger;
// In-memory storage for channel secrets (in production, use persistent storage)
private readonly ConcurrentDictionary<string, ChannelSecurityConfig> _channelConfigs = new();
public DefaultWebhookSecurityService(
IOptions<WebhookSecurityOptions> options,
TimeProvider timeProvider,
ILogger<DefaultWebhookSecurityService> logger)
{
_options = options?.Value ?? new WebhookSecurityOptions();
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public string SignPayload(string tenantId, string channelId, ReadOnlySpan<byte> payload, DateTimeOffset timestamp)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(channelId);
var config = GetOrCreateConfig(tenantId, channelId);
var timestampUnix = timestamp.ToUnixTimeSeconds();
// Create signed payload: timestamp.payload
var signedData = CreateSignedData(timestampUnix, payload);
using var hmac = new HMACSHA256(config.SecretBytes);
var signature = hmac.ComputeHash(signedData);
var signatureHex = Convert.ToHexString(signature).ToLowerInvariant();
// Format: v1=timestamp,signature
return $"{SignaturePrefix}={timestampUnix},{signatureHex}";
}
public bool VerifySignature(string tenantId, string channelId, ReadOnlySpan<byte> payload, string signatureHeader)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(channelId);
if (string.IsNullOrWhiteSpace(signatureHeader))
{
_logger.LogWarning("Missing signature header for webhook callback");
return false;
}
// Parse header: v1=timestamp,signature
if (!signatureHeader.StartsWith($"{SignaturePrefix}=", StringComparison.Ordinal))
{
_logger.LogWarning("Invalid signature prefix in header");
return false;
}
var parts = signatureHeader[(SignaturePrefix.Length + 1)..].Split(',');
if (parts.Length != 2)
{
_logger.LogWarning("Invalid signature format in header");
return false;
}
if (!long.TryParse(parts[0], out var timestampUnix))
{
_logger.LogWarning("Invalid timestamp in signature header");
return false;
}
// Check timestamp is within tolerance
var now = _timeProvider.GetUtcNow().ToUnixTimeSeconds();
if (Math.Abs(now - timestampUnix) > _options.TimestampToleranceSeconds)
{
_logger.LogWarning(
"Signature timestamp {Timestamp} is outside tolerance window (now: {Now})",
timestampUnix, now);
return false;
}
byte[] providedSignature;
try
{
providedSignature = Convert.FromHexString(parts[1]);
}
catch (FormatException)
{
_logger.LogWarning("Invalid signature hex encoding");
return false;
}
var config = GetOrCreateConfig(tenantId, channelId);
var signedData = CreateSignedData(timestampUnix, payload);
using var hmac = new HMACSHA256(config.SecretBytes);
var expectedSignature = hmac.ComputeHash(signedData);
// Also check previous secret if within rotation window
if (!CryptographicOperations.FixedTimeEquals(expectedSignature, providedSignature))
{
if (config.PreviousSecretBytes is not null &&
config.PreviousSecretExpiresAt.HasValue &&
_timeProvider.GetUtcNow() < config.PreviousSecretExpiresAt.Value)
{
using var hmacPrev = new HMACSHA256(config.PreviousSecretBytes);
var prevSignature = hmacPrev.ComputeHash(signedData);
return CryptographicOperations.FixedTimeEquals(prevSignature, providedSignature);
}
return false;
}
return true;
}
public IpValidationResult ValidateIp(string tenantId, string channelId, IPAddress ipAddress)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(channelId);
ArgumentNullException.ThrowIfNull(ipAddress);
var config = GetOrCreateConfig(tenantId, channelId);
if (!_options.EnforceIpAllowlist || config.IpAllowlist.Count == 0)
{
// Enforcement disabled or no allowlist configured - allow all
return IpValidationResult.Allow(hasAllowlist: false);
}
foreach (var entry in config.IpAllowlist)
{
if (IsIpInRange(ipAddress, entry.CidrOrIp))
{
return IpValidationResult.Allow(entry.CidrOrIp, hasAllowlist: true);
}
}
_logger.LogWarning(
"IP {IpAddress} not in allowlist for channel {ChannelId}",
ipAddress, channelId);
return IpValidationResult.Deny($"IP {ipAddress} not in allowlist");
}
public string GetMaskedSecret(string tenantId, string channelId)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(channelId);
var config = GetOrCreateConfig(tenantId, channelId);
var secret = config.Secret;
if (secret.Length <= 8)
{
return "****";
}
return $"{secret[..4]}...{secret[^4..]}";
}
public Task<WebhookSecretRotationResult> RotateSecretAsync(
string tenantId,
string channelId,
CancellationToken cancellationToken = default)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(channelId);
var key = GetConfigKey(tenantId, channelId);
var now = _timeProvider.GetUtcNow();
var newSecret = GenerateSecret();
var result = _channelConfigs.AddOrUpdate(
key,
_ => new ChannelSecurityConfig(newSecret),
(_, existing) =>
{
return new ChannelSecurityConfig(newSecret)
{
PreviousSecret = existing.Secret,
PreviousSecretBytes = existing.SecretBytes,
PreviousSecretExpiresAt = now.Add(_options.SecretRotationGracePeriod),
IpAllowlist = existing.IpAllowlist
};
});
_logger.LogInformation(
"Rotated webhook secret for channel {ChannelId}, old secret valid until {ExpiresAt}",
channelId, result.PreviousSecretExpiresAt);
return Task.FromResult(new WebhookSecretRotationResult
{
Success = true,
NewSecret = newSecret,
ActiveAt = now,
OldSecretExpiresAt = result.PreviousSecretExpiresAt
});
}
private ChannelSecurityConfig GetOrCreateConfig(string tenantId, string channelId)
{
var key = GetConfigKey(tenantId, channelId);
return _channelConfigs.GetOrAdd(key, _ => new ChannelSecurityConfig(GenerateSecret()));
}
private static string GetConfigKey(string tenantId, string channelId)
=> $"{tenantId}:{channelId}";
private static string GenerateSecret()
{
var bytes = RandomNumberGenerator.GetBytes(32);
return Convert.ToBase64String(bytes);
}
private static byte[] CreateSignedData(long timestamp, ReadOnlySpan<byte> payload)
{
var timestampBytes = Encoding.UTF8.GetBytes(timestamp.ToString());
var result = new byte[timestampBytes.Length + 1 + payload.Length];
timestampBytes.CopyTo(result, 0);
result[timestampBytes.Length] = (byte)'.';
payload.CopyTo(result.AsSpan(timestampBytes.Length + 1));
return result;
}
private static bool IsIpInRange(IPAddress ip, string cidrOrIp)
{
if (cidrOrIp.Contains('/'))
{
// CIDR notation
var parts = cidrOrIp.Split('/');
if (!IPAddress.TryParse(parts[0], out var networkAddress) ||
!int.TryParse(parts[1], out var prefixLength))
{
return false;
}
return IsInSubnet(ip, networkAddress, prefixLength);
}
else
{
// Single IP
return IPAddress.TryParse(cidrOrIp, out var singleIp) && ip.Equals(singleIp);
}
}
private static bool IsInSubnet(IPAddress ip, IPAddress network, int prefixLength)
{
var ipBytes = ip.GetAddressBytes();
var networkBytes = network.GetAddressBytes();
if (ipBytes.Length != networkBytes.Length)
{
return false;
}
var fullBytes = prefixLength / 8;
var remainingBits = prefixLength % 8;
for (var i = 0; i < fullBytes; i++)
{
if (ipBytes[i] != networkBytes[i])
{
return false;
}
}
if (remainingBits > 0 && fullBytes < ipBytes.Length)
{
var mask = (byte)(0xFF << (8 - remainingBits));
if ((ipBytes[fullBytes] & mask) != (networkBytes[fullBytes] & mask))
{
return false;
}
}
return true;
}
private sealed class ChannelSecurityConfig
{
public ChannelSecurityConfig(string secret)
{
Secret = secret;
SecretBytes = Encoding.UTF8.GetBytes(secret);
}
public string Secret { get; }
public byte[] SecretBytes { get; }
public string? PreviousSecret { get; init; }
public byte[]? PreviousSecretBytes { get; init; }
public DateTimeOffset? PreviousSecretExpiresAt { get; init; }
public List<IpAllowlistEntry> IpAllowlist { get; init; } = [];
}
}
/// <summary>
/// Configuration options for webhook security.
/// </summary>
public sealed class WebhookSecurityOptions
{
/// <summary>
/// Grace period during which both old and new secrets are valid after rotation.
/// </summary>
public TimeSpan SecretRotationGracePeriod { get; set; } = TimeSpan.FromHours(24);
/// <summary>
/// Whether to enforce IP allowlists when configured.
/// </summary>
public bool EnforceIpAllowlist { get; set; } = true;
/// <summary>
/// Timestamp tolerance for signature verification (in seconds).
/// </summary>
public int TimestampToleranceSeconds { get; set; } = 300;
}
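The allowlist matching implemented above by `IsIpInRange`/`IsInSubnet` (CIDR prefix match for entries containing `/`, exact match otherwise) can be sketched with Python's stdlib `ipaddress` module — an illustrative cross-check of the semantics, not the shipped implementation:

```python
import ipaddress

def ip_in_entry(ip: str, cidr_or_ip: str) -> bool:
    """Mirror of IsIpInRange: CIDR entries use prefix matching, bare entries exact match."""
    addr = ipaddress.ip_address(ip)
    if "/" in cidr_or_ip:
        try:
            network = ipaddress.ip_network(cidr_or_ip, strict=False)
        except ValueError:
            # Unparseable entry: treat as no match, same as the C# TryParse guard
            return False
        # ipaddress returns False for mixed IPv4/IPv6, matching the byte-length check above
        return addr in network
    try:
        return addr == ipaddress.ip_address(cidr_or_ip)
    except ValueError:
        return False
```

Like the C# version, a malformed allowlist entry fails closed rather than raising.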


@@ -0,0 +1,292 @@
using System.Buffers.Text;
using System.Collections.Immutable;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// HMAC-SHA256 based implementation of acknowledgement token service.
/// </summary>
public sealed class HmacAckTokenService : IAckTokenService, IDisposable
{
private const int CurrentVersion = 1;
private const string TokenPrefix = "soa1"; // StellaOps Ack v1
private readonly AckTokenOptions _options;
private readonly TimeProvider _timeProvider;
private readonly ILogger<HmacAckTokenService> _logger;
private readonly HMACSHA256 _hmac;
private bool _disposed;
public HmacAckTokenService(
IOptions<AckTokenOptions> options,
TimeProvider timeProvider,
ILogger<HmacAckTokenService> logger)
{
_options = options?.Value ?? throw new ArgumentNullException(nameof(options));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
if (string.IsNullOrWhiteSpace(_options.SigningKey))
{
throw new InvalidOperationException("AckTokenOptions.SigningKey must be configured.");
}
// Derive a fixed-length 256-bit key from the configured secret via HKDF
var keyBytes = Encoding.UTF8.GetBytes(_options.SigningKey);
var derivedKey = HKDF.DeriveKey(
HashAlgorithmName.SHA256,
keyBytes,
32, // 256 bits
info: Encoding.UTF8.GetBytes("StellaOps.AckToken.v1"));
_hmac = new HMACSHA256(derivedKey);
}
public AckToken CreateToken(
string tenantId,
string deliveryId,
string action,
TimeSpan? expiration = null,
IReadOnlyDictionary<string, string>? metadata = null)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(deliveryId);
ArgumentException.ThrowIfNullOrWhiteSpace(action);
var tokenId = Guid.NewGuid().ToString("N");
var now = _timeProvider.GetUtcNow();
// Clamp to MaxExpiration so callers cannot exceed the configured ceiling
var requestedExpiration = expiration ?? _options.DefaultExpiration;
var expiresAt = now.Add(requestedExpiration > _options.MaxExpiration ? _options.MaxExpiration : requestedExpiration);
var payload = new AckTokenPayload
{
Version = CurrentVersion,
TokenId = tokenId,
TenantId = tenantId,
DeliveryId = deliveryId,
Action = action,
IssuedAt = now.ToUnixTimeSeconds(),
ExpiresAt = expiresAt.ToUnixTimeSeconds(),
Metadata = metadata?.ToDictionary(k => k.Key, k => k.Value) ?? new Dictionary<string, string>()
};
var payloadJson = JsonSerializer.Serialize(payload, AckTokenJsonContext.Default.AckTokenPayload);
var payloadBytes = Encoding.UTF8.GetBytes(payloadJson);
// Sign the payload
var signature = _hmac.ComputeHash(payloadBytes);
// Combine: prefix.payload.signature (all base64url)
var payloadB64 = Base64UrlEncode(payloadBytes);
var signatureB64 = Base64UrlEncode(signature);
var tokenString = $"{TokenPrefix}.{payloadB64}.{signatureB64}";
_logger.LogDebug(
"Created ack token {TokenId} for delivery {DeliveryId} expiring at {ExpiresAt}",
tokenId, deliveryId, expiresAt);
return new AckToken
{
TokenId = tokenId,
TenantId = tenantId,
DeliveryId = deliveryId,
Action = action,
IssuedAt = now,
ExpiresAt = expiresAt,
Metadata = metadata?.ToImmutableDictionary() ?? ImmutableDictionary<string, string>.Empty,
TokenString = tokenString
};
}
public AckTokenVerification VerifyToken(string token)
{
if (string.IsNullOrWhiteSpace(token))
{
return AckTokenVerification.Fail(AckTokenFailureReason.InvalidFormat, "Token is empty");
}
var parts = token.Split('.');
if (parts.Length != 3)
{
return AckTokenVerification.Fail(AckTokenFailureReason.InvalidFormat, "Invalid token structure");
}
var prefix = parts[0];
var payloadB64 = parts[1];
var signatureB64 = parts[2];
// Check version prefix
if (prefix != TokenPrefix)
{
return AckTokenVerification.Fail(AckTokenFailureReason.UnsupportedVersion, $"Unknown prefix: {prefix}");
}
// Decode payload
byte[] payloadBytes;
try
{
payloadBytes = Base64UrlDecode(payloadB64);
}
catch (FormatException)
{
return AckTokenVerification.Fail(AckTokenFailureReason.InvalidFormat, "Invalid payload encoding");
}
// Verify signature
byte[] providedSignature;
try
{
providedSignature = Base64UrlDecode(signatureB64);
}
catch (FormatException)
{
return AckTokenVerification.Fail(AckTokenFailureReason.InvalidFormat, "Invalid signature encoding");
}
var expectedSignature = _hmac.ComputeHash(payloadBytes);
if (!CryptographicOperations.FixedTimeEquals(expectedSignature, providedSignature))
{
_logger.LogWarning("Invalid signature for ack token");
return AckTokenVerification.Fail(AckTokenFailureReason.InvalidSignature);
}
// Parse payload
AckTokenPayload payload;
try
{
payload = JsonSerializer.Deserialize(payloadBytes, AckTokenJsonContext.Default.AckTokenPayload)
?? throw new JsonException("Null payload");
}
catch (JsonException ex)
{
return AckTokenVerification.Fail(AckTokenFailureReason.MalformedPayload, ex.Message);
}
// Check version
if (payload.Version != CurrentVersion)
{
return AckTokenVerification.Fail(AckTokenFailureReason.UnsupportedVersion, $"Version {payload.Version} not supported");
}
// Check expiration
var now = _timeProvider.GetUtcNow();
var expiresAt = DateTimeOffset.FromUnixTimeSeconds(payload.ExpiresAt);
if (now > expiresAt)
{
return AckTokenVerification.Fail(AckTokenFailureReason.Expired, $"Token expired at {expiresAt}");
}
var ackToken = new AckToken
{
TokenId = payload.TokenId,
TenantId = payload.TenantId,
DeliveryId = payload.DeliveryId,
Action = payload.Action,
IssuedAt = DateTimeOffset.FromUnixTimeSeconds(payload.IssuedAt),
ExpiresAt = expiresAt,
Metadata = payload.Metadata.ToImmutableDictionary(),
TokenString = token
};
return AckTokenVerification.Success(ackToken);
}
public string CreateAckUrl(AckToken token)
{
ArgumentNullException.ThrowIfNull(token);
if (string.IsNullOrWhiteSpace(_options.BaseUrl))
{
throw new InvalidOperationException("AckTokenOptions.BaseUrl must be configured.");
}
var baseUrl = _options.BaseUrl.TrimEnd('/');
return $"{baseUrl}/api/v1/ack/{Uri.EscapeDataString(token.TokenString)}";
}
public void Dispose()
{
if (!_disposed)
{
_hmac.Dispose();
_disposed = true;
}
}
private static string Base64UrlEncode(byte[] data)
{
return Convert.ToBase64String(data)
.Replace('+', '-')
.Replace('/', '_')
.TrimEnd('=');
}
private static byte[] Base64UrlDecode(string input)
{
var padded = input
.Replace('-', '+')
.Replace('_', '/');
switch (padded.Length % 4)
{
case 2: padded += "=="; break;
case 3: padded += "="; break;
}
return Convert.FromBase64String(padded);
}
/// <summary>
/// Internal payload structure for serialization.
/// </summary>
internal sealed class AckTokenPayload
{
public int Version { get; set; }
public string TokenId { get; set; } = string.Empty;
public string TenantId { get; set; } = string.Empty;
public string DeliveryId { get; set; } = string.Empty;
public string Action { get; set; } = string.Empty;
public long IssuedAt { get; set; }
public long ExpiresAt { get; set; }
public Dictionary<string, string> Metadata { get; set; } = new();
}
}
/// <summary>
/// Configuration options for ack token service.
/// </summary>
public sealed class AckTokenOptions
{
/// <summary>
/// The signing key for HMAC. Should be at least 32 characters.
/// In production, this should come from KMS/Key Vault.
/// </summary>
public string SigningKey { get; set; } = string.Empty;
/// <summary>
/// Base URL for generating acknowledgement URLs.
/// </summary>
public string BaseUrl { get; set; } = string.Empty;
/// <summary>
/// Default token expiration if not specified.
/// </summary>
public TimeSpan DefaultExpiration { get; set; } = TimeSpan.FromDays(7);
/// <summary>
/// Maximum allowed token expiration.
/// </summary>
public TimeSpan MaxExpiration { get; set; } = TimeSpan.FromDays(30);
}
/// <summary>
/// JSON serialization context for AOT compatibility.
/// </summary>
[System.Text.Json.Serialization.JsonSerializable(typeof(HmacAckTokenService.AckTokenPayload))]
internal partial class AckTokenJsonContext : System.Text.Json.Serialization.JsonSerializerContext
{
}


@@ -0,0 +1,141 @@
using System.Collections.Immutable;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Service for creating and verifying signed acknowledgement tokens.
/// </summary>
public interface IAckTokenService
{
/// <summary>
/// Creates a signed acknowledgement token for a notification.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="deliveryId">The delivery ID being acknowledged.</param>
/// <param name="action">The action being acknowledged (e.g., "ack", "resolve", "escalate").</param>
/// <param name="expiration">Optional expiration time. Defaults to 7 days.</param>
/// <param name="metadata">Optional metadata to embed in the token.</param>
/// <returns>The signed token.</returns>
AckToken CreateToken(
string tenantId,
string deliveryId,
string action,
TimeSpan? expiration = null,
IReadOnlyDictionary<string, string>? metadata = null);
/// <summary>
/// Verifies a signed acknowledgement token.
/// </summary>
/// <param name="token">The token string to verify.</param>
/// <returns>The verification result.</returns>
AckTokenVerification VerifyToken(string token);
/// <summary>
/// Creates a full acknowledgement URL with the signed token.
/// </summary>
/// <param name="token">The token to embed.</param>
/// <returns>The full URL.</returns>
string CreateAckUrl(AckToken token);
}
/// <summary>
/// A signed acknowledgement token.
/// </summary>
public sealed record AckToken
{
/// <summary>
/// The unique token identifier.
/// </summary>
public required string TokenId { get; init; }
/// <summary>
/// The tenant ID.
/// </summary>
public required string TenantId { get; init; }
/// <summary>
/// The delivery ID being acknowledged.
/// </summary>
public required string DeliveryId { get; init; }
/// <summary>
/// The action being acknowledged.
/// </summary>
public required string Action { get; init; }
/// <summary>
/// When the token was issued.
/// </summary>
public required DateTimeOffset IssuedAt { get; init; }
/// <summary>
/// When the token expires.
/// </summary>
public required DateTimeOffset ExpiresAt { get; init; }
/// <summary>
/// Optional embedded metadata.
/// </summary>
public ImmutableDictionary<string, string> Metadata { get; init; } = ImmutableDictionary<string, string>.Empty;
/// <summary>
/// The signed token string (base64url encoded).
/// </summary>
public required string TokenString { get; init; }
}
/// <summary>
/// Result of token verification.
/// </summary>
public sealed record AckTokenVerification
{
/// <summary>
/// Whether the token is valid.
/// </summary>
public required bool IsValid { get; init; }
/// <summary>
/// The parsed token if valid, null otherwise.
/// </summary>
public AckToken? Token { get; init; }
/// <summary>
/// The failure reason if invalid.
/// </summary>
public AckTokenFailureReason? FailureReason { get; init; }
/// <summary>
/// Additional failure details.
/// </summary>
public string? FailureDetails { get; init; }
public static AckTokenVerification Success(AckToken token)
=> new() { IsValid = true, Token = token };
public static AckTokenVerification Fail(AckTokenFailureReason reason, string? details = null)
=> new() { IsValid = false, FailureReason = reason, FailureDetails = details };
}
/// <summary>
/// Reasons for token verification failure.
/// </summary>
public enum AckTokenFailureReason
{
/// <summary>Token format is invalid.</summary>
InvalidFormat,
/// <summary>Token signature is invalid.</summary>
InvalidSignature,
/// <summary>Token has expired.</summary>
Expired,
/// <summary>Token has been revoked.</summary>
Revoked,
/// <summary>Token payload is malformed.</summary>
MalformedPayload,
/// <summary>Token version is unsupported.</summary>
UnsupportedVersion
}


@@ -0,0 +1,177 @@
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Service for sanitizing HTML content in notification templates.
/// </summary>
public interface IHtmlSanitizer
{
/// <summary>
/// Sanitizes HTML content, removing potentially dangerous elements and attributes.
/// </summary>
/// <param name="html">The HTML content to sanitize.</param>
/// <param name="options">Optional sanitization options.</param>
/// <returns>The sanitized HTML.</returns>
string Sanitize(string html, HtmlSanitizeOptions? options = null);
/// <summary>
/// Validates HTML content and returns any security issues found.
/// </summary>
/// <param name="html">The HTML content to validate.</param>
/// <returns>Validation result with any issues found.</returns>
HtmlValidationResult Validate(string html);
/// <summary>
/// Strips all HTML tags, leaving only text content.
/// </summary>
/// <param name="html">The HTML content.</param>
/// <returns>Plain text content.</returns>
string StripHtml(string html);
}
/// <summary>
/// Options for HTML sanitization.
/// </summary>
public sealed class HtmlSanitizeOptions
{
/// <summary>
/// Additional tags to allow beyond the default set.
/// </summary>
public IReadOnlySet<string>? AdditionalAllowedTags { get; init; }
/// <summary>
/// Additional attributes to allow beyond the default set.
/// </summary>
public IReadOnlySet<string>? AdditionalAllowedAttributes { get; init; }
/// <summary>
/// Whether to allow data: URLs in src attributes. Default: false.
/// </summary>
public bool AllowDataUrls { get; init; }
/// <summary>
/// Whether to allow external URLs. Default: true.
/// </summary>
public bool AllowExternalUrls { get; init; } = true;
/// <summary>
/// Maximum allowed depth of nested elements. Default: 50.
/// </summary>
public int MaxNestingDepth { get; init; } = 50;
/// <summary>
/// Maximum content length. Default: 1MB.
/// </summary>
public int MaxContentLength { get; init; } = 1024 * 1024;
}
/// <summary>
/// Result of HTML validation.
/// </summary>
public sealed record HtmlValidationResult
{
/// <summary>
/// Whether the HTML is safe.
/// </summary>
public required bool IsSafe { get; init; }
/// <summary>
/// List of security issues found.
/// </summary>
public required IReadOnlyList<HtmlSecurityIssue> Issues { get; init; }
/// <summary>
/// Statistics about the HTML content.
/// </summary>
public HtmlContentStats? Stats { get; init; }
public static HtmlValidationResult Safe(HtmlContentStats? stats = null)
=> new() { IsSafe = true, Issues = [], Stats = stats };
public static HtmlValidationResult Unsafe(IReadOnlyList<HtmlSecurityIssue> issues, HtmlContentStats? stats = null)
=> new() { IsSafe = false, Issues = issues, Stats = stats };
}
/// <summary>
/// A security issue found in HTML content.
/// </summary>
public sealed record HtmlSecurityIssue
{
/// <summary>
/// The type of security issue.
/// </summary>
public required HtmlSecurityIssueType Type { get; init; }
/// <summary>
/// Description of the issue.
/// </summary>
public required string Description { get; init; }
/// <summary>
/// The problematic element or attribute name.
/// </summary>
public string? ElementName { get; init; }
/// <summary>
/// The problematic attribute name.
/// </summary>
public string? AttributeName { get; init; }
/// <summary>
/// Approximate location in the content.
/// </summary>
public int? Position { get; init; }
}
/// <summary>
/// Types of HTML security issues.
/// </summary>
public enum HtmlSecurityIssueType
{
/// <summary>Script element or inline script.</summary>
ScriptInjection,
/// <summary>Event handler attribute (onclick, onerror, etc.).</summary>
EventHandler,
/// <summary>Dangerous URL scheme (javascript:, data:, etc.).</summary>
DangerousUrl,
/// <summary>Potentially dangerous element (iframe, object, embed, etc.).</summary>
DangerousElement,
/// <summary>Style-based attack (expression, behavior, etc.).</summary>
StyleInjection,
/// <summary>Form-based attack (action hijacking).</summary>
FormHijacking,
/// <summary>Content exceeds size limits.</summary>
ContentTooLarge,
/// <summary>Excessive nesting depth.</summary>
ExcessiveNesting,
/// <summary>Malformed HTML that could be used to bypass filters.</summary>
MalformedHtml
}
/// <summary>
/// Statistics about HTML content.
/// </summary>
public sealed record HtmlContentStats
{
/// <summary>Total character count.</summary>
public int CharacterCount { get; init; }
/// <summary>Number of HTML elements.</summary>
public int ElementCount { get; init; }
/// <summary>Maximum nesting depth.</summary>
public int MaxDepth { get; init; }
/// <summary>Number of links.</summary>
public int LinkCount { get; init; }
/// <summary>Number of images.</summary>
public int ImageCount { get; init; }
}
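A few of the `HtmlSecurityIssueType` checks the interface implies (`ScriptInjection`, `EventHandler`, `DangerousUrl`) and the `StripHtml` operation can be sketched with naive regexes — purely illustrative; a production sanitizer must use a real HTML parser, since regex checks are easy to bypass with malformed markup:

```python
import re

SCRIPT_RE = re.compile(r"<script\b", re.IGNORECASE)
EVENT_ATTR_RE = re.compile(r"\bon\w+\s*=", re.IGNORECASE)  # onclick=, onerror=, ...
JS_URL_RE = re.compile(r"javascript\s*:", re.IGNORECASE)
TAG_RE = re.compile(r"<[^>]+>")

def find_issues(html: str) -> list[str]:
    """Return issue-type names, loosely mirroring HtmlValidationResult.Issues."""
    issues = []
    if SCRIPT_RE.search(html):
        issues.append("ScriptInjection")
    if EVENT_ATTR_RE.search(html):
        issues.append("EventHandler")
    if JS_URL_RE.search(html):
        issues.append("DangerousUrl")
    return issues

def strip_html(html: str) -> str:
    """Crude tag removal, mirroring the intent of StripHtml."""
    return TAG_RE.sub("", html)
```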


@@ -0,0 +1,190 @@
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Service for validating tenant isolation across operations.
/// </summary>
public interface ITenantIsolationValidator
{
/// <summary>
/// Validates that a resource belongs to the specified tenant.
/// </summary>
/// <param name="requestTenantId">The tenant ID from the request.</param>
/// <param name="resourceTenantId">The tenant ID of the resource being accessed.</param>
/// <param name="resourceType">The type of resource being accessed.</param>
/// <param name="resourceId">The ID of the resource being accessed.</param>
/// <returns>Validation result.</returns>
TenantIsolationResult ValidateAccess(
string requestTenantId,
string resourceTenantId,
string resourceType,
string resourceId);
/// <summary>
/// Validates a batch of resources belong to the specified tenant.
/// </summary>
/// <param name="requestTenantId">The tenant ID from the request.</param>
/// <param name="resources">The resources to validate.</param>
/// <returns>Validation result for each resource.</returns>
IReadOnlyList<TenantIsolationResult> ValidateBatch(
string requestTenantId,
IEnumerable<TenantResource> resources);
/// <summary>
/// Sanitizes a tenant ID for safe use.
/// </summary>
/// <param name="tenantId">The tenant ID to sanitize.</param>
/// <returns>The sanitized tenant ID or null if invalid.</returns>
string? SanitizeTenantId(string? tenantId);
/// <summary>
/// Validates tenant ID format.
/// </summary>
/// <param name="tenantId">The tenant ID to validate.</param>
/// <returns>True if valid format.</returns>
bool IsValidTenantIdFormat(string? tenantId);
/// <summary>
/// Registers a tenant isolation violation for monitoring.
/// </summary>
/// <param name="violation">The violation details.</param>
void RecordViolation(TenantIsolationViolation violation);
/// <summary>
/// Gets recent violations for monitoring purposes.
/// </summary>
/// <param name="limit">Maximum number of violations to return.</param>
/// <returns>Recent violations.</returns>
IReadOnlyList<TenantIsolationViolation> GetRecentViolations(int limit = 100);
}
/// <summary>
/// A resource with tenant information.
/// </summary>
public sealed record TenantResource
{
/// <summary>
/// The tenant ID of the resource.
/// </summary>
public required string TenantId { get; init; }
/// <summary>
/// The type of resource.
/// </summary>
public required string ResourceType { get; init; }
/// <summary>
/// The resource ID.
/// </summary>
public required string ResourceId { get; init; }
}
/// <summary>
/// Result of tenant isolation validation.
/// </summary>
public sealed record TenantIsolationResult
{
/// <summary>
/// Whether access is allowed.
/// </summary>
public required bool IsAllowed { get; init; }
/// <summary>
/// The request tenant ID.
/// </summary>
public required string RequestTenantId { get; init; }
/// <summary>
/// The resource tenant ID.
/// </summary>
public required string ResourceTenantId { get; init; }
/// <summary>
/// The resource type.
/// </summary>
public string? ResourceType { get; init; }
/// <summary>
/// The resource ID.
/// </summary>
public string? ResourceId { get; init; }
/// <summary>
/// Rejection reason if not allowed.
/// </summary>
public string? RejectionReason { get; init; }
public static TenantIsolationResult Allow(string requestTenantId, string resourceTenantId)
=> new()
{
IsAllowed = true,
RequestTenantId = requestTenantId,
ResourceTenantId = resourceTenantId
};
public static TenantIsolationResult Deny(
string requestTenantId,
string resourceTenantId,
string reason,
string? resourceType = null,
string? resourceId = null)
=> new()
{
IsAllowed = false,
RequestTenantId = requestTenantId,
ResourceTenantId = resourceTenantId,
RejectionReason = reason,
ResourceType = resourceType,
ResourceId = resourceId
};
}
/// <summary>
/// Record of a tenant isolation violation.
/// </summary>
public sealed record TenantIsolationViolation
{
/// <summary>
/// When the violation occurred.
/// </summary>
public required DateTimeOffset OccurredAt { get; init; }
/// <summary>
/// The request tenant ID.
/// </summary>
public required string RequestTenantId { get; init; }
/// <summary>
/// The resource tenant ID.
/// </summary>
public required string ResourceTenantId { get; init; }
/// <summary>
/// The type of resource accessed.
/// </summary>
public required string ResourceType { get; init; }
/// <summary>
/// The resource ID accessed.
/// </summary>
public required string ResourceId { get; init; }
/// <summary>
/// The operation being performed.
/// </summary>
public string? Operation { get; init; }
/// <summary>
/// Source IP address of the request.
/// </summary>
public string? SourceIp { get; init; }
/// <summary>
/// User agent of the request.
/// </summary>
public string? UserAgent { get; init; }
/// <summary>
/// Additional context about the violation.
/// </summary>
public IReadOnlyDictionary<string, string>? Context { get; init; }
}


@@ -0,0 +1,147 @@
using System.Net;
namespace StellaOps.Notifier.Worker.Security;
/// <summary>
/// Service for webhook security including HMAC signing and IP validation.
/// </summary>
public interface IWebhookSecurityService
{
/// <summary>
/// Signs a webhook payload and returns the signature header value.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="channelId">The channel ID.</param>
/// <param name="payload">The payload bytes to sign.</param>
/// <param name="timestamp">The timestamp to include in signature.</param>
/// <returns>The signature header value.</returns>
string SignPayload(string tenantId, string channelId, ReadOnlySpan<byte> payload, DateTimeOffset timestamp);
/// <summary>
/// Verifies an incoming webhook callback signature.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="channelId">The channel ID.</param>
/// <param name="payload">The payload bytes.</param>
/// <param name="signatureHeader">The signature header value.</param>
/// <returns>True if signature is valid.</returns>
bool VerifySignature(string tenantId, string channelId, ReadOnlySpan<byte> payload, string signatureHeader);
/// <summary>
/// Validates if an IP address is allowed for a channel.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="channelId">The channel ID.</param>
/// <param name="ipAddress">The IP address to check.</param>
/// <returns>Validation result.</returns>
IpValidationResult ValidateIp(string tenantId, string channelId, IPAddress ipAddress);
/// <summary>
/// Gets the current webhook secret for a channel (for configuration display).
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="channelId">The channel ID.</param>
/// <returns>A masked version of the secret.</returns>
string GetMaskedSecret(string tenantId, string channelId);
/// <summary>
/// Rotates the webhook secret for a channel.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="channelId">The channel ID.</param>
/// <param name="cancellationToken">Cancellation token.</param>
/// <returns>The new secret.</returns>
Task<WebhookSecretRotationResult> RotateSecretAsync(
string tenantId,
string channelId,
CancellationToken cancellationToken = default);
}
/// <summary>
/// Result of IP validation.
/// </summary>
public sealed record IpValidationResult
{
/// <summary>
/// Whether the IP is allowed.
/// </summary>
public required bool IsAllowed { get; init; }
/// <summary>
/// The reason for rejection if not allowed.
/// </summary>
public string? RejectionReason { get; init; }
/// <summary>
/// The matched allowlist entry if allowed.
/// </summary>
public string? MatchedEntry { get; init; }
/// <summary>
/// Whether an allowlist is configured for this channel.
/// </summary>
public bool HasAllowlist { get; init; }
public static IpValidationResult Allow(string? matchedEntry = null, bool hasAllowlist = false)
=> new() { IsAllowed = true, MatchedEntry = matchedEntry, HasAllowlist = hasAllowlist };
public static IpValidationResult Deny(string reason, bool hasAllowlist = true)
=> new() { IsAllowed = false, RejectionReason = reason, HasAllowlist = hasAllowlist };
}
/// <summary>
/// Result of secret rotation.
/// </summary>
public sealed record WebhookSecretRotationResult
{
/// <summary>
/// Whether rotation succeeded.
/// </summary>
public required bool Success { get; init; }
/// <summary>
/// The new secret (only available immediately after rotation).
/// </summary>
public string? NewSecret { get; init; }
/// <summary>
/// Error message if rotation failed.
/// </summary>
public string? Error { get; init; }
/// <summary>
/// When the new secret becomes active.
/// </summary>
public DateTimeOffset? ActiveAt { get; init; }
/// <summary>
/// When the old secret expires.
/// </summary>
public DateTimeOffset? OldSecretExpiresAt { get; init; }
}
/// <summary>
/// Configuration for an IP allowlist entry.
/// </summary>
public sealed record IpAllowlistEntry
{
/// <summary>
/// The CIDR notation or single IP address.
/// </summary>
public required string CidrOrIp { get; init; }
/// <summary>
/// Optional description for this entry.
/// </summary>
public string? Description { get; init; }
/// <summary>
/// When this entry was added.
/// </summary>
public DateTimeOffset AddedAt { get; init; }
/// <summary>
/// Who added this entry.
/// </summary>
public string? AddedBy { get; init; }
}


@@ -4,14 +4,16 @@ using MongoDB.Driver;
using StellaOps.Notify.Models;
using StellaOps.Notify.Storage.Mongo.Internal;
using StellaOps.Notify.Storage.Mongo.Serialization;
using StellaOps.Notify.Storage.Mongo.Tenancy;
namespace StellaOps.Notify.Storage.Mongo.Repositories;
internal sealed class NotifyChannelRepository : INotifyChannelRepository
{
private readonly IMongoCollection<BsonDocument> _collection;
private readonly ITenantContext _tenantContext;
public NotifyChannelRepository(NotifyMongoContext context)
public NotifyChannelRepository(NotifyMongoContext context, ITenantContext? tenantContext = null)
{
if (context is null)
{
@@ -19,23 +21,34 @@ internal sealed class NotifyChannelRepository : INotifyChannelRepository
}
_collection = context.Database.GetCollection<BsonDocument>(context.Options.ChannelsCollection);
_tenantContext = tenantContext ?? NullTenantContext.Instance;
}
public async Task UpsertAsync(NotifyChannel channel, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(channel);
_tenantContext.ValidateTenant(channel.TenantId);
var document = NotifyChannelDocumentMapper.ToBsonDocument(channel);
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(channel.TenantId, channel.ChannelId));
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(channel.TenantId, channel.ChannelId)),
Builders<BsonDocument>.Filter.Eq("tenantId", channel.TenantId));
await _collection.ReplaceOneAsync(filter, document, new ReplaceOptions { IsUpsert = true }, cancellationToken).ConfigureAwait(false);
}
public async Task<NotifyChannel?> GetAsync(string tenantId, string channelId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, channelId))
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and explicit tenantId check
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, channelId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var document = await _collection.Find(filter).FirstOrDefaultAsync(cancellationToken).ConfigureAwait(false);
return document is null ? null : NotifyChannelDocumentMapper.FromBsonDocument(document);
@@ -43,28 +56,30 @@ internal sealed class NotifyChannelRepository : INotifyChannelRepository
public async Task<IReadOnlyList<NotifyChannel>> ListAsync(string tenantId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("tenantId", tenantId)
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var cursor = await _collection.Find(filter).ToListAsync(cancellationToken).ConfigureAwait(false);
return cursor.Select(NotifyChannelDocumentMapper.FromBsonDocument).ToArray();
}
public async Task DeleteAsync(string tenantId, string channelId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, channelId));
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, channelId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId));
await _collection.UpdateOneAsync(filter,
Builders<BsonDocument>.Update.Set("deletedAt", DateTime.UtcNow).Set("enabled", false),
new UpdateOptions { IsUpsert = false },
cancellationToken).ConfigureAwait(false);
}
private static string CreateDocumentId(string tenantId, string resourceId)
=> string.Create(tenantId.Length + resourceId.Length + 1, (tenantId, resourceId), static (span, value) =>
{
value.tenantId.AsSpan().CopyTo(span);
span[value.tenantId.Length] = ':';
value.resourceId.AsSpan().CopyTo(span[(value.tenantId.Length + 1)..]);
});
}
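The dual-filter pattern introduced in this diff combines the composite `_id` with an explicit `tenantId` match and the soft-delete guard. A Python sketch of the resulting MongoDB filter document — assuming `TenantScopedId.Create` produces the same `"<tenantId>:<resourceId>"` shape as the original `CreateDocumentId`:

```python
def tenant_scoped_id(tenant_id: str, resource_id: str) -> str:
    # Assumed to mirror TenantScopedId.Create / CreateDocumentId
    return f"{tenant_id}:{resource_id}"

def dual_filter(tenant_id: str, resource_id: str) -> dict:
    # Defense-in-depth: even though tenantId is embedded in _id, match the
    # tenantId field explicitly so a forged composite id cannot cross tenants.
    return {
        "_id": tenant_scoped_id(tenant_id, resource_id),
        "tenantId": tenant_id,
        # Soft-delete guard used by GetAsync/ListAsync: missing or null deletedAt
        "$or": [{"deletedAt": {"$exists": False}}, {"deletedAt": None}],
    }
```

The same filter shape (minus the `$or` clause) backs `UpsertAsync` and `DeleteAsync` in this diff.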


@@ -5,14 +5,16 @@ using MongoDB.Driver;
using StellaOps.Notify.Models;
using StellaOps.Notify.Storage.Mongo.Internal;
using StellaOps.Notify.Storage.Mongo.Serialization;
using StellaOps.Notify.Storage.Mongo.Tenancy;
namespace StellaOps.Notify.Storage.Mongo.Repositories;
internal sealed class NotifyRuleRepository : INotifyRuleRepository
{
private readonly IMongoCollection<BsonDocument> _collection;
private readonly ITenantContext _tenantContext;
public NotifyRuleRepository(NotifyMongoContext context)
public NotifyRuleRepository(NotifyMongoContext context, ITenantContext? tenantContext = null)
{
if (context is null)
{
@@ -20,23 +22,34 @@ internal sealed class NotifyRuleRepository : INotifyRuleRepository
}
_collection = context.Database.GetCollection<BsonDocument>(context.Options.RulesCollection);
_tenantContext = tenantContext ?? NullTenantContext.Instance;
}
public async Task UpsertAsync(NotifyRule rule, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(rule);
_tenantContext.ValidateTenant(rule.TenantId);
var document = NotifyRuleDocumentMapper.ToBsonDocument(rule);
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(rule.TenantId, rule.RuleId));
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(rule.TenantId, rule.RuleId)),
Builders<BsonDocument>.Filter.Eq("tenantId", rule.TenantId));
await _collection.ReplaceOneAsync(filter, document, new ReplaceOptions { IsUpsert = true }, cancellationToken).ConfigureAwait(false);
}
public async Task<NotifyRule?> GetAsync(string tenantId, string ruleId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, ruleId))
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and explicit tenantId check
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, ruleId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var document = await _collection.Find(filter).FirstOrDefaultAsync(cancellationToken).ConfigureAwait(false);
return document is null ? null : NotifyRuleDocumentMapper.FromBsonDocument(document);
@@ -44,17 +57,27 @@ internal sealed class NotifyRuleRepository : INotifyRuleRepository
public async Task<IReadOnlyList<NotifyRule>> ListAsync(string tenantId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("tenantId", tenantId)
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var cursor = await _collection.Find(filter).ToListAsync(cancellationToken).ConfigureAwait(false);
return cursor.Select(NotifyRuleDocumentMapper.FromBsonDocument).ToArray();
}
public async Task DeleteAsync(string tenantId, string ruleId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, ruleId));
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, ruleId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId));
await _collection.UpdateOneAsync(filter,
Builders<BsonDocument>.Update
.Set("deletedAt", DateTime.UtcNow)
@@ -62,12 +85,4 @@ internal sealed class NotifyRuleRepository : INotifyRuleRepository
new UpdateOptions { IsUpsert = false },
cancellationToken).ConfigureAwait(false);
}
private static string CreateDocumentId(string tenantId, string resourceId)
=> string.Create(tenantId.Length + resourceId.Length + 1, (tenantId, resourceId), static (span, value) =>
{
value.tenantId.AsSpan().CopyTo(span);
span[value.tenantId.Length] = ':';
value.resourceId.AsSpan().CopyTo(span[(value.tenantId.Length + 1)..]);
});
}

View File

@@ -4,14 +4,16 @@ using MongoDB.Driver;
using StellaOps.Notify.Models;
using StellaOps.Notify.Storage.Mongo.Internal;
using StellaOps.Notify.Storage.Mongo.Serialization;
using StellaOps.Notify.Storage.Mongo.Tenancy;
namespace StellaOps.Notify.Storage.Mongo.Repositories;
internal sealed class NotifyTemplateRepository : INotifyTemplateRepository
{
private readonly IMongoCollection<BsonDocument> _collection;
private readonly ITenantContext _tenantContext;
public NotifyTemplateRepository(NotifyMongoContext context)
public NotifyTemplateRepository(NotifyMongoContext context, ITenantContext? tenantContext = null)
{
if (context is null)
{
@@ -19,23 +21,34 @@ internal sealed class NotifyTemplateRepository : INotifyTemplateRepository
}
_collection = context.Database.GetCollection<BsonDocument>(context.Options.TemplatesCollection);
_tenantContext = tenantContext ?? NullTenantContext.Instance;
}
public async Task UpsertAsync(NotifyTemplate template, CancellationToken cancellationToken = default)
{
ArgumentNullException.ThrowIfNull(template);
_tenantContext.ValidateTenant(template.TenantId);
var document = NotifyTemplateDocumentMapper.ToBsonDocument(template);
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(template.TenantId, template.TemplateId));
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(template.TenantId, template.TemplateId)),
Builders<BsonDocument>.Filter.Eq("tenantId", template.TenantId));
await _collection.ReplaceOneAsync(filter, document, new ReplaceOptions { IsUpsert = true }, cancellationToken).ConfigureAwait(false);
}
public async Task<NotifyTemplate?> GetAsync(string tenantId, string templateId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, templateId))
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and explicit tenantId check
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, templateId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var document = await _collection.Find(filter).FirstOrDefaultAsync(cancellationToken).ConfigureAwait(false);
return document is null ? null : NotifyTemplateDocumentMapper.FromBsonDocument(document);
@@ -43,28 +56,30 @@ internal sealed class NotifyTemplateRepository : INotifyTemplateRepository
public async Task<IReadOnlyList<NotifyTemplate>> ListAsync(string tenantId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("tenantId", tenantId)
& Builders<BsonDocument>.Filter.Or(
_tenantContext.ValidateTenant(tenantId);
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value));
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)));
var cursor = await _collection.Find(filter).ToListAsync(cancellationToken).ConfigureAwait(false);
return cursor.Select(NotifyTemplateDocumentMapper.FromBsonDocument).ToArray();
}
public async Task DeleteAsync(string tenantId, string templateId, CancellationToken cancellationToken = default)
{
var filter = Builders<BsonDocument>.Filter.Eq("_id", CreateDocumentId(tenantId, templateId));
_tenantContext.ValidateTenant(tenantId);
// RLS: Dual-filter with both ID and tenantId for defense-in-depth
var filter = Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", TenantScopedId.Create(tenantId, templateId)),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId));
await _collection.UpdateOneAsync(filter,
Builders<BsonDocument>.Update.Set("deletedAt", DateTime.UtcNow),
new UpdateOptions { IsUpsert = false },
cancellationToken).ConfigureAwait(false);
}
private static string CreateDocumentId(string tenantId, string resourceId)
=> string.Create(tenantId.Length + resourceId.Length + 1, (tenantId, resourceId), static (span, value) =>
{
value.tenantId.AsSpan().CopyTo(span);
span[value.tenantId.Length] = ':';
value.resourceId.AsSpan().CopyTo(span[(value.tenantId.Length + 1)..]);
});
}

View File

@@ -0,0 +1,145 @@
namespace StellaOps.Notify.Storage.Mongo.Tenancy;
/// <summary>
/// Provides tenant context for RLS-like tenant isolation in storage operations.
/// </summary>
public interface ITenantContext
{
/// <summary>
/// Gets the current authenticated tenant ID, or null if not authenticated.
/// </summary>
string? CurrentTenantId { get; }
/// <summary>
/// Returns true if the current context has a valid tenant.
/// </summary>
bool HasTenant { get; }
/// <summary>
/// Validates that the requested tenant matches the current context.
/// Throws <see cref="TenantMismatchException"/> if validation fails.
/// </summary>
/// <param name="requestedTenantId">The tenant ID being requested.</param>
/// <exception cref="TenantMismatchException">Thrown when tenants don't match.</exception>
void ValidateTenant(string requestedTenantId);
/// <summary>
/// Returns true if the current context allows access to the specified tenant.
/// Admin tenants may access other tenants.
/// </summary>
bool CanAccessTenant(string targetTenantId);
}
/// <summary>
/// Exception thrown when a tenant isolation violation is detected.
/// </summary>
public sealed class TenantMismatchException : InvalidOperationException
{
public string RequestedTenantId { get; }
public string? CurrentTenantId { get; }
public TenantMismatchException(string requestedTenantId, string? currentTenantId)
: base($"Tenant isolation violation: requested tenant '{requestedTenantId}' does not match current tenant '{currentTenantId ?? "(none)"}'")
{
RequestedTenantId = requestedTenantId;
CurrentTenantId = currentTenantId;
}
}
/// <summary>
/// Default implementation that uses AsyncLocal to track tenant context.
/// </summary>
public sealed class DefaultTenantContext : ITenantContext
{
private static readonly AsyncLocal<string?> _currentTenant = new();
private readonly HashSet<string> _adminTenants;
public DefaultTenantContext(IEnumerable<string>? adminTenants = null)
{
_adminTenants = adminTenants?.ToHashSet(StringComparer.OrdinalIgnoreCase)
?? new HashSet<string>(StringComparer.OrdinalIgnoreCase) { "admin", "system" };
}
public string? CurrentTenantId
{
get => _currentTenant.Value;
set => _currentTenant.Value = value;
}
public bool HasTenant => !string.IsNullOrWhiteSpace(_currentTenant.Value);
public void ValidateTenant(string requestedTenantId)
{
ArgumentException.ThrowIfNullOrWhiteSpace(requestedTenantId);
if (!CanAccessTenant(requestedTenantId))
{
throw new TenantMismatchException(requestedTenantId, CurrentTenantId);
}
}
public bool CanAccessTenant(string targetTenantId)
{
if (string.IsNullOrWhiteSpace(targetTenantId))
return false;
// No current tenant means no access
if (!HasTenant)
return false;
// Same tenant always allowed
if (string.Equals(CurrentTenantId, targetTenantId, StringComparison.OrdinalIgnoreCase))
return true;
// Admin tenants can access other tenants
if (_adminTenants.Contains(CurrentTenantId!))
return true;
return false;
}
    /// <summary>
    /// Sets the current tenant context and returns a disposable that restores the previous value on dispose.
    /// </summary>
public IDisposable SetTenant(string tenantId)
{
var previous = _currentTenant.Value;
_currentTenant.Value = tenantId;
return new TenantScope(previous);
}
private sealed class TenantScope : IDisposable
{
private readonly string? _previousTenant;
private bool _disposed;
public TenantScope(string? previousTenant) => _previousTenant = previousTenant;
public void Dispose()
{
if (!_disposed)
{
_currentTenant.Value = _previousTenant;
_disposed = true;
}
}
}
}
/// <summary>
/// Null implementation for testing or contexts without tenant isolation.
/// </summary>
public sealed class NullTenantContext : ITenantContext
{
public static readonly NullTenantContext Instance = new();
public string? CurrentTenantId => null;
public bool HasTenant => false;
public void ValidateTenant(string requestedTenantId)
{
// No-op - allows all access
}
public bool CanAccessTenant(string targetTenantId) => true;
}
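`DefaultTenantContext` stores the ambient tenant in an `AsyncLocal<string?>`, so a value set inside an async flow is visible across its own awaits but does not leak back to the caller. A small sketch of that flow behavior (the `RunAs` helper is illustrative, not part of the codebase):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Illustrative sketch of AsyncLocal ambient-tenant flow: the value set
// inside RunAs survives its own await, but the caller's context is
// untouched afterwards.
var currentTenant = new AsyncLocal<string?>();

async Task<string?> RunAs(string tenant)
{
    currentTenant.Value = tenant;
    await Task.Yield();              // the value flows across the await
    return currentTenant.Value;
}

Console.WriteLine(await RunAs("acme")); // acme
Console.WriteLine(currentTenant.Value is null); // True
```

This is why `SetTenant` hands back a disposable: within a single synchronous scope the value does persist, and the scope object restores whatever was there before.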

View File

@@ -0,0 +1,109 @@
using MongoDB.Bson;
using MongoDB.Driver;
namespace StellaOps.Notify.Storage.Mongo.Tenancy;
/// <summary>
/// Base class for tenant-aware MongoDB repositories with RLS-like filtering.
/// </summary>
public abstract class TenantAwareRepository
{
private readonly ITenantContext _tenantContext;
protected TenantAwareRepository(ITenantContext? tenantContext = null)
{
_tenantContext = tenantContext ?? NullTenantContext.Instance;
}
/// <summary>
/// Gets the tenant context for validation.
/// </summary>
protected ITenantContext TenantContext => _tenantContext;
/// <summary>
/// Validates that the requested tenant is accessible from the current context.
/// </summary>
/// <param name="requestedTenantId">The tenant ID being requested.</param>
protected void ValidateTenantAccess(string requestedTenantId)
{
_tenantContext.ValidateTenant(requestedTenantId);
}
/// <summary>
/// Creates a filter that includes both ID and explicit tenantId check (dual-filter pattern).
/// This provides RLS-like defense-in-depth.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="documentId">The full document ID (typically tenant-scoped).</param>
/// <returns>A filter requiring both ID match and tenantId match.</returns>
protected static FilterDefinition<BsonDocument> CreateTenantSafeIdFilter(
string tenantId,
string documentId)
{
return Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("_id", documentId),
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId)
);
}
/// <summary>
/// Wraps a filter with an explicit tenantId check.
/// </summary>
/// <param name="tenantId">The tenant ID to scope the query to.</param>
/// <param name="baseFilter">The base filter to wrap.</param>
/// <returns>A filter that includes the tenantId check.</returns>
protected static FilterDefinition<BsonDocument> WithTenantScope(
string tenantId,
FilterDefinition<BsonDocument> baseFilter)
{
return Builders<BsonDocument>.Filter.And(
Builders<BsonDocument>.Filter.Eq("tenantId", tenantId),
baseFilter
);
}
/// <summary>
/// Creates a filter for listing documents within a tenant.
/// </summary>
/// <param name="tenantId">The tenant ID.</param>
/// <param name="includeDeleted">Whether to include soft-deleted documents.</param>
/// <returns>A filter for the tenant's documents.</returns>
protected static FilterDefinition<BsonDocument> CreateTenantListFilter(
string tenantId,
bool includeDeleted = false)
{
var filter = Builders<BsonDocument>.Filter.Eq("tenantId", tenantId);
if (!includeDeleted)
{
filter = Builders<BsonDocument>.Filter.And(
filter,
Builders<BsonDocument>.Filter.Or(
Builders<BsonDocument>.Filter.Exists("deletedAt", false),
Builders<BsonDocument>.Filter.Eq("deletedAt", BsonNull.Value)
)
);
}
return filter;
}
/// <summary>
/// Creates a sort definition for common ordering patterns.
/// </summary>
/// <param name="sortBy">The field to sort by.</param>
/// <param name="ascending">True for ascending, false for descending.</param>
/// <returns>A sort definition.</returns>
protected static SortDefinition<BsonDocument> CreateSort(string sortBy, bool ascending = true)
{
return ascending
? Builders<BsonDocument>.Sort.Ascending(sortBy)
: Builders<BsonDocument>.Sort.Descending(sortBy);
}
/// <summary>
/// Creates a document ID using the tenant-scoped format.
/// </summary>
protected static string CreateDocumentId(string tenantId, string resourceId)
=> TenantScopedId.Create(tenantId, resourceId);
}

View File

@@ -0,0 +1,86 @@
namespace StellaOps.Notify.Storage.Mongo.Tenancy;
/// <summary>
/// Helper for constructing tenant-scoped document IDs with consistent format.
/// </summary>
public static class TenantScopedId
{
private const char Separator = ':';
/// <summary>
/// Creates a tenant-scoped ID in the format "{tenantId}:{resourceId}".
/// </summary>
/// <param name="tenantId">The tenant ID (required).</param>
/// <param name="resourceId">The resource ID (required).</param>
/// <returns>A composite ID string.</returns>
    /// <exception cref="ArgumentException">Thrown if either parameter is null, whitespace, or contains the ':' separator.</exception>
public static string Create(string tenantId, string resourceId)
{
ArgumentException.ThrowIfNullOrWhiteSpace(tenantId);
ArgumentException.ThrowIfNullOrWhiteSpace(resourceId);
// Validate no separator in tenant or resource IDs to prevent injection
if (tenantId.Contains(Separator))
throw new ArgumentException($"Tenant ID cannot contain '{Separator}'", nameof(tenantId));
if (resourceId.Contains(Separator))
throw new ArgumentException($"Resource ID cannot contain '{Separator}'", nameof(resourceId));
return string.Create(tenantId.Length + resourceId.Length + 1, (tenantId, resourceId), static (span, value) =>
{
value.tenantId.AsSpan().CopyTo(span);
span[value.tenantId.Length] = Separator;
value.resourceId.AsSpan().CopyTo(span[(value.tenantId.Length + 1)..]);
});
}
/// <summary>
/// Parses a tenant-scoped ID into its components.
/// </summary>
/// <param name="scopedId">The composite ID to parse.</param>
/// <param name="tenantId">Output: the extracted tenant ID.</param>
/// <param name="resourceId">Output: the extracted resource ID.</param>
/// <returns>True if parsing succeeded, false otherwise.</returns>
public static bool TryParse(string scopedId, out string tenantId, out string resourceId)
{
tenantId = string.Empty;
resourceId = string.Empty;
if (string.IsNullOrWhiteSpace(scopedId))
return false;
var separatorIndex = scopedId.IndexOf(Separator);
if (separatorIndex <= 0 || separatorIndex >= scopedId.Length - 1)
return false;
tenantId = scopedId[..separatorIndex];
resourceId = scopedId[(separatorIndex + 1)..];
return !string.IsNullOrWhiteSpace(tenantId) && !string.IsNullOrWhiteSpace(resourceId);
}
/// <summary>
/// Extracts the tenant ID from a tenant-scoped ID.
/// </summary>
/// <param name="scopedId">The composite ID.</param>
/// <returns>The tenant ID, or null if parsing failed.</returns>
public static string? ExtractTenantId(string scopedId)
{
return TryParse(scopedId, out var tenantId, out _) ? tenantId : null;
}
/// <summary>
/// Validates that a scoped ID belongs to the expected tenant.
/// </summary>
/// <param name="scopedId">The composite ID to validate.</param>
/// <param name="expectedTenantId">The expected tenant ID.</param>
/// <returns>True if the ID belongs to the expected tenant.</returns>
public static bool BelongsToTenant(string scopedId, string expectedTenantId)
{
if (string.IsNullOrWhiteSpace(scopedId) || string.IsNullOrWhiteSpace(expectedTenantId))
return false;
var extractedTenant = ExtractTenantId(scopedId);
return string.Equals(extractedTenant, expectedTenantId, StringComparison.OrdinalIgnoreCase);
}
}
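The `Create`/`TryParse` pair above round-trips as long as neither component contains the separator. A standalone sketch of the same "{tenantId}:{resourceId}" logic, trimmed of the `string.Create` optimization:

```csharp
using System;

// Simplified round-trip sketch of TenantScopedId: reject embedded
// separators on Create, split on the first ':' on TryParse.
static string Create(string tenantId, string resourceId)
{
    if (tenantId.Contains(':') || resourceId.Contains(':'))
        throw new ArgumentException("IDs must not contain ':'");
    return $"{tenantId}:{resourceId}";
}

static bool TryParse(string scopedId, out string tenantId, out string resourceId)
{
    tenantId = resourceId = string.Empty;
    var i = scopedId.IndexOf(':');
    if (i <= 0 || i >= scopedId.Length - 1)
        return false;
    tenantId = scopedId[..i];
    resourceId = scopedId[(i + 1)..];
    return true;
}

Console.WriteLine(Create("acme", "rule-7")); // acme:rule-7
```

The separator validation on `Create` is what makes `ExtractTenantId` trustworthy: without it, a crafted resource ID could masquerade as a different tenant prefix.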

View File

@@ -7,12 +7,13 @@ namespace StellaOps.Scanner.Worker.Determinism;
/// </summary>
public sealed class DeterminismContext
{
public DeterminismContext(bool fixedClock, DateTimeOffset fixedInstantUtc, int? rngSeed, bool filterLogs)
public DeterminismContext(bool fixedClock, DateTimeOffset fixedInstantUtc, int? rngSeed, bool filterLogs, int? concurrencyLimit)
{
FixedClock = fixedClock;
FixedInstantUtc = fixedInstantUtc.ToUniversalTime();
RngSeed = rngSeed;
FilterLogs = filterLogs;
ConcurrencyLimit = concurrencyLimit;
}
public bool FixedClock { get; }
@@ -22,4 +23,6 @@ public sealed class DeterminismContext
public int? RngSeed { get; }
public bool FilterLogs { get; }
public int? ConcurrencyLimit { get; }
}
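Note that the constructor normalizes `FixedInstantUtc` via `ToUniversalTime()`, so replay reports always carry a zero offset regardless of the wall-clock zone the operator configured. A quick sketch of that behavior (the `Normalize` helper just isolates the constructor's conversion):

```csharp
using System;

// Stand-in for the DeterminismContext constructor's timestamp handling:
// whatever offset the caller supplies, the stored replay instant is UTC.
static DateTimeOffset Normalize(DateTimeOffset fixedInstant) => fixedInstant.ToUniversalTime();

var local = new DateTimeOffset(2025, 11, 27, 10, 0, 0, TimeSpan.FromHours(2));
var utc = Normalize(local); // same instant, offset collapsed to +00:00
Console.WriteLine(utc.Offset == TimeSpan.Zero); // True
```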

View File

@@ -42,6 +42,7 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
private readonly ILogger<SurfaceManifestStageExecutor> _logger;
private readonly ICryptoHash _hash;
private readonly IRubyPackageInventoryStore _rubyPackageStore;
private readonly Determinism.DeterminismContext _determinism;
private readonly string _componentVersion;
public SurfaceManifestStageExecutor(
@@ -51,7 +52,8 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
ScannerWorkerMetrics metrics,
ILogger<SurfaceManifestStageExecutor> logger,
ICryptoHash hash,
IRubyPackageInventoryStore rubyPackageStore)
IRubyPackageInventoryStore rubyPackageStore,
Determinism.DeterminismContext determinism)
{
_publisher = publisher ?? throw new ArgumentNullException(nameof(publisher));
_surfaceCache = surfaceCache ?? throw new ArgumentNullException(nameof(surfaceCache));
@@ -60,6 +62,7 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_hash = hash ?? throw new ArgumentNullException(nameof(hash));
_rubyPackageStore = rubyPackageStore ?? throw new ArgumentNullException(nameof(rubyPackageStore));
_determinism = determinism ?? throw new ArgumentNullException(nameof(determinism));
_componentVersion = Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "unknown";
}
@@ -221,9 +224,56 @@ internal sealed class SurfaceManifestStageExecutor : IScanStageExecutor
}));
}
var determinismPayload = BuildDeterminismPayload(context, payloads);
if (determinismPayload is not null)
{
payloads.Add(determinismPayload);
}
return payloads;
}
private SurfaceManifestPayload? BuildDeterminismPayload(ScanJobContext context, IEnumerable<SurfaceManifestPayload> payloads)
{
var pins = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
if (context.Lease.Metadata.TryGetValue("determinism.feed", out var feed) && !string.IsNullOrWhiteSpace(feed))
{
pins["feed"] = feed;
}
if (context.Lease.Metadata.TryGetValue("determinism.policy", out var policy) && !string.IsNullOrWhiteSpace(policy))
{
pins["policy"] = policy;
}
var artifactHashes = new Dictionary<string, string>(StringComparer.OrdinalIgnoreCase);
foreach (var payload in payloads)
{
var digest = ComputeDigest(payload.Content.Span);
artifactHashes[payload.Kind] = digest;
}
var report = new
{
fixedClock = _determinism.FixedClock,
fixedInstantUtc = _determinism.FixedInstantUtc,
rngSeed = _determinism.RngSeed,
filterLogs = _determinism.FilterLogs,
concurrencyLimit = _determinism.ConcurrencyLimit,
pins = pins,
artifacts = artifactHashes
};
var json = JsonSerializer.Serialize(report, JsonOptions);
return new SurfaceManifestPayload(
ArtifactDocumentType.SurfaceObservation,
ArtifactDocumentFormat.ObservationJson,
Kind: "determinism.json",
MediaType: "application/json",
Content: Encoding.UTF8.GetBytes(json),
View: "replay");
}
private async Task PersistRubyPackagesAsync(ScanJobContext context, CancellationToken cancellationToken)
{
if (!context.Analysis.TryGet<ReadOnlyDictionary<string, LanguageAnalyzerResult>>(ScanAnalysisKeys.LanguageAnalyzerResults, out var results))

View File

@@ -58,7 +58,8 @@ builder.Services.AddSingleton(new DeterminismContext(
workerOptions.Determinism.FixedClock,
workerOptions.Determinism.FixedInstantUtc,
workerOptions.Determinism.RngSeed,
workerOptions.Determinism.FilterLogs));
workerOptions.Determinism.FilterLogs,
workerOptions.Determinism.ConcurrencyLimit));
builder.Services.AddSingleton<IDeterministicRandomProvider>(_ => new DeterministicRandomProvider(workerOptions.Determinism.RngSeed));
builder.Services.AddScannerCache(builder.Configuration);
builder.Services.AddSurfaceEnvironment(options =>

View File

@@ -645,7 +645,7 @@ internal static class NodePackageCollector
packageSha256: packageSha256,
isYarnPnp: yarnPnpPresent);
AttachEntrypoints(package, root, relativeDirectory);
AttachEntrypoints(context, package, root, relativeDirectory);
return package;
}

View File

@@ -4,15 +4,21 @@ namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Observations;
internal static class RubyObservationBuilder
{
private const string SchemaVersion = "stellaops.ruby.observation@1";
public static RubyObservationDocument Build(
IReadOnlyList<RubyPackage> packages,
RubyLockData lockData,
RubyRuntimeGraph runtimeGraph,
RubyCapabilities capabilities,
RubyBundlerConfig bundlerConfig,
string? bundledWith)
{
ArgumentNullException.ThrowIfNull(packages);
ArgumentNullException.ThrowIfNull(lockData);
ArgumentNullException.ThrowIfNull(runtimeGraph);
ArgumentNullException.ThrowIfNull(capabilities);
ArgumentNullException.ThrowIfNull(bundlerConfig);
var packageItems = packages
.OrderBy(static package => package.Name, StringComparer.OrdinalIgnoreCase)
@@ -20,6 +26,9 @@ internal static class RubyObservationBuilder
.Select(CreatePackage)
.ToImmutableArray();
var entrypoints = BuildEntrypoints(runtimeGraph, packages);
var dependencyItems = BuildDependencyEdges(lockData);
var runtimeItems = packages
.Select(package => CreateRuntimeEdge(package, runtimeGraph))
.Where(static edge => edge is not null)
@@ -27,6 +36,8 @@ internal static class RubyObservationBuilder
.OrderBy(static edge => edge.Package, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
var environment = BuildEnvironment(lockData, bundlerConfig, capabilities, bundledWith);
var capabilitySummary = new RubyObservationCapabilitySummary(
capabilities.UsesExec,
capabilities.UsesNetwork,
@@ -39,7 +50,134 @@ internal static class RubyObservationBuilder
? null
: bundledWith.Trim();
return new RubyObservationDocument(packageItems, runtimeItems, capabilitySummary, normalizedBundler);
return new RubyObservationDocument(
SchemaVersion,
packageItems,
entrypoints,
dependencyItems,
runtimeItems,
environment,
capabilitySummary,
normalizedBundler);
}
private static ImmutableArray<RubyObservationEntrypoint> BuildEntrypoints(
RubyRuntimeGraph runtimeGraph,
IReadOnlyList<RubyPackage> packages)
{
var entrypoints = new List<RubyObservationEntrypoint>();
var packageNames = packages.Select(static p => p.Name).ToHashSet(StringComparer.OrdinalIgnoreCase);
foreach (var entryFile in runtimeGraph.GetEntrypointFiles())
{
var type = InferEntrypointType(entryFile);
var requiredGems = runtimeGraph.GetRequiredGems(entryFile)
.Where(gem => packageNames.Contains(gem))
.OrderBy(static gem => gem, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
entrypoints.Add(new RubyObservationEntrypoint(entryFile, type, requiredGems));
}
return entrypoints
.OrderBy(static e => e.Path, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
}
private static string InferEntrypointType(string path)
{
var fileName = Path.GetFileName(path);
if (fileName.Equals("config.ru", StringComparison.OrdinalIgnoreCase))
{
return "rack";
}
if (fileName.Equals("Rakefile", StringComparison.OrdinalIgnoreCase) ||
fileName.EndsWith(".rake", StringComparison.OrdinalIgnoreCase))
{
return "rake";
}
if (path.Contains("/bin/", StringComparison.OrdinalIgnoreCase) ||
path.Contains("\\bin\\", StringComparison.OrdinalIgnoreCase))
{
return "executable";
}
if (fileName.Equals("Gemfile", StringComparison.OrdinalIgnoreCase))
{
return "gemfile";
}
return "script";
}
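The heuristic above checks the most specific markers first (`config.ru`, Rake files) before the broader `bin/` path test, falling back to `script`. A standalone copy of the same ordering, runnable outside the analyzer:

```csharp
using System;
using System.IO;

// Standalone copy of the entrypoint-type heuristic: most specific
// file-name markers first, then the bin/ path test, then the fallback.
static string Infer(string path)
{
    var fileName = Path.GetFileName(path);
    if (fileName.Equals("config.ru", StringComparison.OrdinalIgnoreCase)) return "rack";
    if (fileName.Equals("Rakefile", StringComparison.OrdinalIgnoreCase) ||
        fileName.EndsWith(".rake", StringComparison.OrdinalIgnoreCase)) return "rake";
    if (path.Contains("/bin/", StringComparison.OrdinalIgnoreCase) ||
        path.Contains("\\bin\\", StringComparison.OrdinalIgnoreCase)) return "executable";
    if (fileName.Equals("Gemfile", StringComparison.OrdinalIgnoreCase)) return "gemfile";
    return "script";
}

Console.WriteLine(Infer("app/config.ru")); // rack
```

Because the `bin/` check runs after the Rake checks, `app/bin/db.rake` classifies as `rake`, not `executable`; ordering is load-bearing here.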
private static RubyObservationEnvironment BuildEnvironment(
RubyLockData lockData,
RubyBundlerConfig bundlerConfig,
RubyCapabilities capabilities,
string? bundledWith)
{
var bundlePaths = bundlerConfig.BundlePaths
.OrderBy(static p => p, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
var gemfiles = bundlerConfig.Gemfiles
.Select(static p => p.Replace('\\', '/'))
.OrderBy(static p => p, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
var lockFiles = lockData.Entries
.Select(static e => e.LockFileRelativePath)
.Distinct(StringComparer.OrdinalIgnoreCase)
.OrderBy(static p => p, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
var frameworks = DetectFrameworks(capabilities)
.OrderBy(static f => f, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
return new RubyObservationEnvironment(
RubyVersion: null,
BundlerVersion: string.IsNullOrWhiteSpace(bundledWith) ? null : bundledWith.Trim(),
bundlePaths,
gemfiles,
lockFiles,
frameworks);
}
private static IEnumerable<string> DetectFrameworks(RubyCapabilities capabilities)
{
if (capabilities.HasJobSchedulers)
{
foreach (var scheduler in capabilities.JobSchedulers)
{
yield return scheduler;
}
}
}
private static ImmutableArray<RubyObservationDependencyEdge> BuildDependencyEdges(RubyLockData lockData)
{
var edges = new List<RubyObservationDependencyEdge>();
foreach (var entry in lockData.Entries)
{
var fromPackage = $"pkg:gem/{entry.Name}@{entry.Version}";
foreach (var dep in entry.Dependencies)
{
edges.Add(new RubyObservationDependencyEdge(
fromPackage,
dep.DependencyName,
dep.VersionConstraint));
}
}
return edges
.OrderBy(static edge => edge.FromPackage, StringComparer.OrdinalIgnoreCase)
.ThenBy(static edge => edge.ToPackage, StringComparer.OrdinalIgnoreCase)
.ToImmutableArray();
}
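`BuildDependencyEdges` keys each edge by a `pkg:gem/{name}@{version}` purl and sorts edges case-insensitively for deterministic output. A sketch of the same construction using tuples in place of the `RubyLockData`/`RubyObservationDependencyEdge` types (names here are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Tuple-based sketch of the purl-keyed dependency-edge build:
// one edge per lock-entry dependency, sorted by From then To.
static IReadOnlyList<(string From, string To, string? Constraint)> BuildEdges(
    IEnumerable<(string Name, string Version, (string Dep, string? Constraint)[] Deps)> entries)
    => entries
        .SelectMany(e => e.Deps.Select(d =>
            (From: $"pkg:gem/{e.Name}@{e.Version}", To: d.Dep, d.Constraint)))
        .OrderBy(x => x.From, StringComparer.OrdinalIgnoreCase)
        .ThenBy(x => x.To, StringComparer.OrdinalIgnoreCase)
        .ToArray();

var edges = BuildEdges(new[]
{
    ("rails", "7.0.4", new[] { ("activesupport", (string?)"= 7.0.4"), ("actionpack", (string?)null) }),
});
Console.WriteLine(edges[0].To); // actionpack
```

The stable ordering matters because the observation document is hashed for the determinism report: the same lockfile must always yield byte-identical edges.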
private static RubyObservationPackage CreatePackage(RubyPackage package)

View File

@@ -2,9 +2,17 @@ using System.Collections.Immutable;
namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Observations;
/// <summary>
/// AOC-compliant observation document for Ruby project analysis.
/// Contains components, entrypoints, dependency edges, and environment profiles.
/// </summary>
internal sealed record RubyObservationDocument(
string Schema,
ImmutableArray<RubyObservationPackage> Packages,
ImmutableArray<RubyObservationEntrypoint> Entrypoints,
ImmutableArray<RubyObservationDependencyEdge> DependencyEdges,
ImmutableArray<RubyObservationRuntimeEdge> RuntimeEdges,
RubyObservationEnvironment Environment,
RubyObservationCapabilitySummary Capabilities,
string? BundledWith);
@@ -18,6 +26,19 @@ internal sealed record RubyObservationPackage(
string? Artifact,
ImmutableArray<string> Groups);
/// <summary>
/// Entrypoint detected in the Ruby project (Rakefile, bin scripts, config.ru, etc.).
/// </summary>
internal sealed record RubyObservationEntrypoint(
string Path,
string Type,
ImmutableArray<string> RequiredGems);
internal sealed record RubyObservationDependencyEdge(
string FromPackage,
string ToPackage,
string? VersionConstraint);
internal sealed record RubyObservationRuntimeEdge(
string Package,
bool UsedByEntrypoint,
@@ -25,6 +46,17 @@ internal sealed record RubyObservationRuntimeEdge(
ImmutableArray<string> Entrypoints,
ImmutableArray<string> Reasons);
/// <summary>
/// Environment profile with Ruby version, Bundler settings, and paths.
/// </summary>
internal sealed record RubyObservationEnvironment(
string? RubyVersion,
string? BundlerVersion,
ImmutableArray<string> BundlePaths,
ImmutableArray<string> Gemfiles,
ImmutableArray<string> LockFiles,
ImmutableArray<string> Frameworks);
internal sealed record RubyObservationCapabilitySummary(
bool UsesExec,
bool UsesNetwork,

View File

@@ -17,8 +17,12 @@ internal static class RubyObservationSerializer
{
writer.WriteStartObject();
writer.WriteString("$schema", document.Schema);
WritePackages(writer, document.Packages);
WriteEntrypoints(writer, document.Entrypoints);
WriteDependencyEdges(writer, document.DependencyEdges);
WriteRuntimeEdges(writer, document.RuntimeEdges);
WriteEnvironment(writer, document.Environment);
WriteCapabilities(writer, document.Capabilities);
WriteBundledWith(writer, document.BundledWith);
@@ -72,6 +76,46 @@ internal static class RubyObservationSerializer
writer.WriteEndArray();
}
private static void WriteEntrypoints(Utf8JsonWriter writer, ImmutableArray<RubyObservationEntrypoint> entrypoints)
{
writer.WritePropertyName("entrypoints");
writer.WriteStartArray();
foreach (var entrypoint in entrypoints)
{
writer.WriteStartObject();
writer.WriteString("path", entrypoint.Path);
writer.WriteString("type", entrypoint.Type);
if (entrypoint.RequiredGems.Length > 0)
{
WriteStringArray(writer, "requiredGems", entrypoint.RequiredGems);
}
writer.WriteEndObject();
}
writer.WriteEndArray();
}
private static void WriteDependencyEdges(Utf8JsonWriter writer, ImmutableArray<RubyObservationDependencyEdge> dependencyEdges)
{
writer.WritePropertyName("dependencyEdges");
writer.WriteStartArray();
foreach (var edge in dependencyEdges)
{
writer.WriteStartObject();
writer.WriteString("from", edge.FromPackage);
writer.WriteString("to", edge.ToPackage);
if (!string.IsNullOrWhiteSpace(edge.VersionConstraint))
{
writer.WriteString("constraint", edge.VersionConstraint);
}
writer.WriteEndObject();
}
writer.WriteEndArray();
}
private static void WriteRuntimeEdges(Utf8JsonWriter writer, ImmutableArray<RubyObservationRuntimeEdge> runtimeEdges)
{
writer.WritePropertyName("runtimeEdges");
@@ -90,6 +134,44 @@ internal static class RubyObservationSerializer
writer.WriteEndArray();
}
private static void WriteEnvironment(Utf8JsonWriter writer, RubyObservationEnvironment environment)
{
writer.WritePropertyName("environment");
writer.WriteStartObject();
if (!string.IsNullOrWhiteSpace(environment.RubyVersion))
{
writer.WriteString("rubyVersion", environment.RubyVersion);
}
if (!string.IsNullOrWhiteSpace(environment.BundlerVersion))
{
writer.WriteString("bundlerVersion", environment.BundlerVersion);
}
if (environment.BundlePaths.Length > 0)
{
WriteStringArray(writer, "bundlePaths", environment.BundlePaths);
}
if (environment.Gemfiles.Length > 0)
{
WriteStringArray(writer, "gemfiles", environment.Gemfiles);
}
if (environment.LockFiles.Length > 0)
{
WriteStringArray(writer, "lockfiles", environment.LockFiles);
}
if (environment.Frameworks.Length > 0)
{
WriteStringArray(writer, "frameworks", environment.Frameworks);
}
writer.WriteEndObject();
}
private static void WriteCapabilities(Utf8JsonWriter writer, RubyObservationCapabilitySummary summary)
{
writer.WritePropertyName("capabilities");

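The serializer above writes one observation document per scan, in a fixed property order ($schema, packages, entrypoints, dependencyEdges, runtimeEdges, environment, capabilities, bundledWith). A minimal Ruby sketch of the resulting payload shape; the $schema URI, package entries, and capability field names here are illustrative placeholders, not the real schema:

```ruby
require 'json'

# Illustrative observation payload; field order mirrors the serializer,
# values and the schema URI are made up for the example.
observation = {
  '$schema' => 'example://ruby-observation.schema.json',
  'packages' => [{ 'name' => 'rails', 'version' => '7.1.0' }],
  'entrypoints' => [{ 'path' => 'config.ru', 'type' => 'rack', 'requiredGems' => ['rails'] }],
  'dependencyEdges' => [{ 'from' => 'rails', 'to' => 'activesupport', 'constraint' => '= 7.1.0' }],
  'runtimeEdges' => [],
  'environment' => {
    'rubyVersion' => '3.2.2',
    'gemfiles' => ['Gemfile'],
    'lockfiles' => ['Gemfile.lock'],
    'frameworks' => ['rails']
  },
  'capabilities' => { 'usesExec' => false, 'usesNetwork' => true },
  'bundledWith' => '2.4.22'
}

json = JSON.generate(observation)
puts json
```

Optional fields (constraint, bundlerVersion, and the environment arrays) are omitted entirely when empty, matching the `if (...Length > 0)` guards in the serializer.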
View File

@@ -21,6 +21,8 @@ internal static class RubyLockCollector
"coverage"
};
private static readonly string[] LayerRootCandidates = { "layers", ".layers", "layer" };
private const int MaxDiscoveryDepth = 3;
private static readonly IReadOnlyCollection<string> DefaultGroups = new[] { "default" };
@@ -61,6 +63,7 @@ internal static class RubyLockCollector
spec.Source,
spec.Platform,
groups,
spec.Dependencies,
relativeLockPath));
}
}
@@ -186,6 +189,20 @@ internal static class RubyLockCollector
TryAdd(candidate);
}
// Also discover lock files in container layers
foreach (var layerRoot in EnumerateLayerRoots(rootPath))
{
foreach (var name in LockFileNames)
{
TryAdd(Path.Combine(layerRoot, name));
}
foreach (var candidate in EnumerateLockFiles(layerRoot))
{
TryAdd(candidate);
}
}
return discovered
.OrderBy(static path => path, StringComparer.OrdinalIgnoreCase)
.ToArray();
@@ -294,4 +311,53 @@ internal static class RubyLockCollector
Path.GetFullPath(manifestDirectory),
OperatingSystem.IsWindows() ? StringComparison.OrdinalIgnoreCase : StringComparison.Ordinal);
}
/// <summary>
/// Enumerates OCI container layer roots for Ruby project discovery.
/// Looks for layers/, .layers/, layer/ directories containing layer subdirectories.
/// </summary>
private static IEnumerable<string> EnumerateLayerRoots(string workspaceRoot)
{
foreach (var candidate in LayerRootCandidates)
{
var root = Path.Combine(workspaceRoot, candidate);
if (!Directory.Exists(root))
{
continue;
}
IEnumerable<string>? directories = null;
try
{
directories = Directory.EnumerateDirectories(root);
}
catch (IOException)
{
continue;
}
catch (UnauthorizedAccessException)
{
continue;
}
if (directories is null)
{
continue;
}
foreach (var layerDirectory in directories)
{
// Check for fs/ subdirectory (extracted layer filesystem)
var fsDirectory = Path.Combine(layerDirectory, "fs");
if (Directory.Exists(fsDirectory))
{
yield return fsDirectory;
}
else
{
yield return layerDirectory;
}
}
}
}
}
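The layer discovery above walks `layers/`, `.layers/`, or `layer/` under the workspace and, for each layer directory, prefers an extracted `fs/` subtree when present. A minimal Ruby sketch of that convention (the sorting here is for determinism in the example; the real collector sorts the merged result later):

```ruby
require 'fileutils'
require 'tmpdir'

# workspace/{layers|.layers|layer}/<layer-id>[/fs] -> prefer the extracted fs/ tree.
LAYER_ROOT_CANDIDATES = %w[layers .layers layer].freeze

def enumerate_layer_roots(workspace)
  LAYER_ROOT_CANDIDATES.each_with_object([]) do |candidate, roots|
    root = File.join(workspace, candidate)
    next unless Dir.exist?(root)

    Dir.children(root).sort.each do |entry|
      layer_dir = File.join(root, entry)
      next unless File.directory?(layer_dir)

      fs_dir = File.join(layer_dir, 'fs')
      roots << (Dir.exist?(fs_dir) ? fs_dir : layer_dir)
    end
  end
end

Dir.mktmpdir do |ws|
  FileUtils.mkdir_p(File.join(ws, 'layers', 'aaa', 'fs'))
  FileUtils.mkdir_p(File.join(ws, 'layers', 'bbb'))
  enumerate_layer_roots(ws).each { |r| puts r }
end
```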

View File

@@ -6,4 +6,5 @@ internal sealed record RubyLockEntry(
string Source,
string? Platform,
IReadOnlyCollection<string> Groups,
IReadOnlyList<RubyDependencyEdge> Dependencies,
string LockFileRelativePath);

View File

@@ -15,6 +15,7 @@ internal static class RubyLockParser
}
private static readonly Regex SpecLineRegex = new(@"^\s{4}(?<name>[^\s]+)\s\((?<version>[^)]+)\)", RegexOptions.Compiled);
private static readonly Regex DependencyLineRegex = new(@"^\s{6}(?<name>[^\s]+)(?:\s\((?<constraint>[^)]+)\))?", RegexOptions.Compiled);
public static RubyLockParserResult Parse(string contents)
{
@@ -23,13 +24,14 @@ internal static class RubyLockParser
return new RubyLockParserResult(Array.Empty<RubyLockParserEntry>(), string.Empty);
}
var specBuilders = new List<SpecBuilder>();
var section = RubyLockSection.None;
var bundledWith = string.Empty;
string? currentRemote = null;
string? currentRevision = null;
string? currentPath = null;
SpecBuilder? currentSpec = null;
using var reader = new StringReader(contents);
string? line;
@@ -47,6 +49,7 @@ internal static class RubyLockParser
currentRemote = null;
currentRevision = null;
currentPath = null;
currentSpec = null;
if (section == RubyLockSection.Gem)
{
@@ -76,13 +79,15 @@ internal static class RubyLockParser
ref currentRemote,
ref currentRevision,
ref currentPath,
ref currentSpec,
specBuilders);
break;
default:
break;
}
}
var entries = specBuilders.Select(static builder => builder.Build()).ToArray();
return new RubyLockParserResult(entries, bundledWith);
}
@@ -93,7 +98,8 @@ internal static class RubyLockParser
ref string? currentRemote,
ref string? currentRevision,
ref string? currentPath,
ref SpecBuilder? currentSpec,
List<SpecBuilder> specBuilders)
{
if (line.StartsWith(" remote:", StringComparison.OrdinalIgnoreCase))
{
@@ -130,15 +136,33 @@ internal static class RubyLockParser
return;
}
// Check for nested dependency line (6 spaces indent)
if (line.Length > 6 && line.StartsWith("      ") && !char.IsWhiteSpace(line[6]))
{
if (currentSpec is not null)
{
var depMatch = DependencyLineRegex.Match(line);
if (depMatch.Success)
{
var depName = depMatch.Groups["name"].Value.Trim();
var constraint = depMatch.Groups["constraint"].Success
? depMatch.Groups["constraint"].Value.Trim()
: null;
if (!string.IsNullOrEmpty(depName))
{
currentSpec.Dependencies.Add(new RubyDependencyEdge(depName, constraint));
}
}
}
return;
}
// Top-level spec line (4 spaces indent)
var match = SpecLineRegex.Match(line);
if (!match.Success)
{
// Not a spec line; nothing to record.
return;
}
@@ -151,7 +175,30 @@ internal static class RubyLockParser
var (version, platform) = ParseVersion(match.Groups["version"].Value);
var source = ResolveSource(section, currentRemote, currentRevision, currentPath);
currentSpec = new SpecBuilder(name, version, source, platform);
specBuilders.Add(currentSpec);
}
private sealed class SpecBuilder
{
public SpecBuilder(string name, string version, string source, string? platform)
{
Name = name;
Version = version;
Source = source;
Platform = platform;
}
public string Name { get; }
public string Version { get; }
public string Source { get; }
public string? Platform { get; }
public List<RubyDependencyEdge> Dependencies { get; } = new();
public RubyLockParserEntry Build()
{
return new RubyLockParserEntry(Name, Version, Source, Platform, Dependencies.ToArray());
}
}
private static RubyLockSection ParseSection(string value)
@@ -213,6 +260,15 @@ internal static class RubyLockParser
}
}
internal sealed record RubyLockParserEntry(
string Name,
string Version,
string Source,
string? Platform,
IReadOnlyList<RubyDependencyEdge> Dependencies);
internal sealed record RubyDependencyEdge(string DependencyName, string? VersionConstraint);
internal sealed record RubyLockParserResult(
IReadOnlyList<RubyLockParserEntry> Entries,
string BundledWith);
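The parser relies on Gemfile.lock's indentation contract: spec lines sit at exactly four spaces, their dependencies at six. A minimal Ruby sketch of that two-level scan (regexes mirror `SpecLineRegex`/`DependencyLineRegex`; the helper name and output hash are this example's, not the analyzer's):

```ruby
# 4-space indent = spec, 6-space indent = dependency of the current spec.
SPEC_LINE = /^\s{4}(?<name>\S+)\s\((?<version>[^)]+)\)/
DEP_LINE  = /^\s{6}(?<name>\S+)(?:\s\((?<constraint>[^)]+)\))?/

def parse_specs(lines)
  specs = {}
  current = nil
  lines.each do |line|
    if line.match?(/^\s{6}\S/)        # nested dependency of the most recent spec
      next unless current
      m = DEP_LINE.match(line)
      current[:deps][m[:name]] = m[:constraint] if m
    elsif (m = SPEC_LINE.match(line)) # top-level spec line
      current = { version: m[:version], deps: {} }
      specs[m[:name]] = current
    end
  end
  specs
end

lines = [
  '    rails (7.1.0)',
  '      activesupport (= 7.1.0)',
  '      rack (>= 2.2.4)',
  '    rack (2.2.8)'
]
p parse_specs(lines)
```

Checking the six-space case first matters: a dependency line would otherwise never match `SPEC_LINE` anyway (position 4 is a space), but the early return keeps the spec builder state machine simple, as in the C# version.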

View File

@@ -374,6 +374,38 @@ internal sealed class RubyRuntimeGraph
return false;
}
/// <summary>
/// Gets all entrypoint files across all gem usages.
/// </summary>
public IEnumerable<string> GetEntrypointFiles()
{
return _usages.Values
.Where(static usage => usage.HasEntrypoints)
.SelectMany(static usage => usage.Entrypoints)
.Distinct(StringComparer.OrdinalIgnoreCase);
}
/// <summary>
/// Gets the gems required by a specific file.
/// </summary>
public IEnumerable<string> GetRequiredGems(string filePath)
{
if (string.IsNullOrWhiteSpace(filePath))
{
yield break;
}
var normalizedPath = filePath.Replace('\\', '/');
foreach (var (gemName, usage) in _usages)
{
if (usage.ReferencingFiles.Any(f => f.Equals(normalizedPath, StringComparison.OrdinalIgnoreCase)))
{
yield return gemName;
}
}
}
private static IEnumerable<string> EnumerateCandidateKeys(string name)
{
if (string.IsNullOrWhiteSpace(name))

View File

@@ -8,6 +8,8 @@ internal static class RubyVendorArtifactCollector
Path.Combine(".bundle", "cache")
};
private static readonly string[] LayerRootCandidates = { "layers", ".layers", "layer" };
private static readonly string[] DirectoryBlockList =
{
".git",
@@ -65,6 +67,14 @@ internal static class RubyVendorArtifactCollector
TryAdd(Path.Combine(bundlePath, "cache"));
}
// Also check container layers for vendor directories and gems
foreach (var layerRoot in EnumerateLayerRoots(context.RootPath))
{
TryAdd(Path.Combine(layerRoot, "vendor", "cache"));
TryAdd(Path.Combine(layerRoot, "vendor", "bundle"));
TryAdd(Path.Combine(layerRoot, ".bundle", "cache"));
}
var artifacts = new List<RubyVendorArtifact>();
foreach (var root in roots.OrderBy(static value => value, StringComparer.OrdinalIgnoreCase))
{
@@ -261,6 +271,55 @@ internal static class RubyVendorArtifactCollector
return path + Path.DirectorySeparatorChar;
}
/// <summary>
/// Enumerates OCI container layer roots for Ruby vendor artifact discovery.
/// Looks for layers/, .layers/, layer/ directories containing layer subdirectories.
/// </summary>
private static IEnumerable<string> EnumerateLayerRoots(string workspaceRoot)
{
foreach (var candidate in LayerRootCandidates)
{
var root = Path.Combine(workspaceRoot, candidate);
if (!Directory.Exists(root))
{
continue;
}
IEnumerable<string>? directories = null;
try
{
directories = Directory.EnumerateDirectories(root);
}
catch (IOException)
{
continue;
}
catch (UnauthorizedAccessException)
{
continue;
}
if (directories is null)
{
continue;
}
foreach (var layerDirectory in directories)
{
// Check for fs/ subdirectory (extracted layer filesystem)
var fsDirectory = Path.Combine(layerDirectory, "fs");
if (Directory.Exists(fsDirectory))
{
yield return fsDirectory;
}
else
{
yield return layerDirectory;
}
}
}
}
}
internal sealed record RubyVendorArtifact(

View File

@@ -0,0 +1,307 @@
using System.Text;
namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Runtime;
/// <summary>
/// Provides the Ruby runtime shim that captures runtime events via TracePoint into NDJSON.
/// This shim is written to disk alongside the analyzer to be invoked by the worker/CLI.
/// </summary>
internal static class RubyRuntimeShim
{
private const string ShimFileName = "trace-shim.rb";
public static string FileName => ShimFileName;
public static async Task<string> WriteAsync(string directory, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(directory);
Directory.CreateDirectory(directory);
var path = Path.Combine(directory, ShimFileName);
await File.WriteAllTextAsync(path, ShimSource, Encoding.UTF8, cancellationToken).ConfigureAwait(false);
return path;
}
// NOTE: This shim is intentionally self-contained, offline, and deterministic.
// Uses Ruby's TracePoint API for runtime introspection with append-only evidence collection.
private const string ShimSource = """
# frozen_string_literal: true
# Ruby runtime trace shim (offline, deterministic)
# Captures require, load, and method call events via TracePoint.
# Emits NDJSON to ruby-runtime.ndjson for evidence collection.
require 'json'
require 'digest/sha2'
require 'set'
require 'time'
module StellaTracer
EVENTS = []
MUTEX = Mutex.new
CWD = Dir.pwd.tr('\\', '/')
ENTRYPOINT_ENV = 'STELLA_RUBY_ENTRYPOINT'
OUTPUT_FILE = 'ruby-runtime.ndjson'
# Patterns for redacting sensitive data
REDACT_PATTERNS = [
/password/i,
/secret/i,
/api[_-]?key/i,
/auth[_-]?token/i,
/bearer/i,
/credential/i,
/private[_-]?key/i
].freeze
# Gems known to have security-relevant capabilities
CAPABILITY_GEMS = {
exec: %w[open3 open4 shellwords pty childprocess posix-spawn].freeze,
net: %w[net/http net/https net/ftp socket httparty faraday rest-client typhoeus patron curb excon httpclient].freeze,
serialize: %w[yaml json marshal oj msgpack ox multi_json yajl].freeze,
scheduler: %w[rufus-scheduler clockwork sidekiq resque delayed_job good_job que karafka sucker_punch shoryuken].freeze,
ffi: %w[ffi fiddle].freeze
}.freeze
class << self
def now_iso
Time.now.utc.iso8601(3)
end
def sha256_hex(value)
Digest::SHA256.hexdigest(value.to_s)
end
def relative_path(path)
candidate = path.to_s.tr('\\', '/')
return candidate if candidate.empty?
# Strip file:// prefix if present
candidate = candidate.sub(%r{^file://}, '')
# Make absolute if relative
unless candidate.start_with?('/') || candidate.match?(/^[A-Za-z]:/)
candidate = File.join(CWD, candidate)
end
# Make relative to CWD
if candidate.start_with?(CWD)
offset = CWD.end_with?('/') ? CWD.length : CWD.length + 1
candidate = candidate[offset..]
end
candidate&.sub(%r{^\./}, '')&.sub(%r{^/+}, '') || '.'
end
def normalize_feature(path)
rel = relative_path(path)
{
normalized: rel,
path_sha256: sha256_hex(rel)
}
end
def redact_value(value)
str = value.to_s
REDACT_PATTERNS.any? { |pat| str.match?(pat) } ? '[REDACTED]' : str
end
def detect_capability(feature_name)
CAPABILITY_GEMS.each do |cap, gems|
return cap if gems.any? { |g| feature_name.include?(g) }
end
nil
end
def add_event(evt)
MUTEX.synchronize { EVENTS << evt }
end
def record_require(feature, path, success)
normalized = normalize_feature(path || feature)
capability = detect_capability(feature)
event = {
type: 'ruby.require',
ts: now_iso,
feature: feature,
module: normalized,
success: success
}
event[:capability] = capability if capability
add_event(event)
end
def record_load(path, wrap)
normalized = normalize_feature(path)
add_event({
type: 'ruby.load',
ts: now_iso,
module: normalized,
wrap: wrap
})
end
def record_method_call(klass, method_id, location)
return if location.nil?
path = relative_path(location.path)
add_event({
type: 'ruby.method.call',
ts: now_iso,
class: redact_value(klass.to_s),
method: method_id.to_s,
location: {
path: path,
line: location.lineno,
path_sha256: sha256_hex(path)
}
})
end
def record_error(message, location = nil)
event = {
type: 'ruby.runtime.error',
ts: now_iso,
message: redact_value(message)
}
if location
event[:location] = {
path: relative_path(location),
path_sha256: sha256_hex(relative_path(location))
}
end
add_event(event)
end
def flush
MUTEX.synchronize do
sorted = EVENTS.sort_by { |e| [e[:ts].to_s, e[:type].to_s] }
File.open(OUTPUT_FILE, 'w') do |f|
sorted.each { |e| f.puts(JSON.generate(e)) }
end
end
rescue => e
warn "stella-tracer: failed to write trace: #{e.message}"
end
def enabled_capabilities
caps = Set.new
$LOADED_FEATURES.each do |feature|
cap = detect_capability(feature)
caps << cap if cap
end
caps.to_a.sort
end
end
end
# Track loaded features at startup
$stella_initial_features = $LOADED_FEATURES.dup
# Hook require
module Kernel
alias_method :stella_original_require, :require
alias_method :stella_original_require_relative, :require_relative
alias_method :stella_original_load, :load
def require(feature)
success = false
result = stella_original_require(feature)
success = true # require returned without raising; a false return only means the feature was already loaded
result
rescue LoadError => e
StellaTracer.record_error("LoadError: #{e.message}", feature)
raise
ensure
path = $LOADED_FEATURES.find { |f| f.include?(feature.to_s.gsub(/\.rb$/, '')) }
StellaTracer.record_require(feature.to_s, path, success)
end
def require_relative(feature)
# Resolve the path relative to the caller
caller_path = caller_locations(1, 1)&.first&.path || __FILE__
dir = File.dirname(caller_path)
absolute = File.expand_path(feature, dir)
require(absolute)
end
def load(path, wrap = false)
result = stella_original_load(path, wrap)
StellaTracer.record_load(path.to_s, wrap)
result
rescue => e
StellaTracer.record_error("LoadError: #{e.message}", path)
raise
end
end
# TracePoint for method calls (optional, configurable)
$stella_method_trace = nil
def stella_enable_method_trace(filter_classes: nil)
$stella_method_trace = TracePoint.new(:call) do |tp|
next if tp.path&.start_with?('<internal')
next if tp.defined_class.to_s.start_with?('StellaTracer')
if filter_classes.nil? || filter_classes.any? { |c| tp.defined_class.to_s.include?(c) }
StellaTracer.record_method_call(tp.defined_class, tp.method_id, tp)
end
end
$stella_method_trace.enable
end
def stella_disable_method_trace
$stella_method_trace&.disable
$stella_method_trace = nil
end
# Ensure flush on exit
at_exit do
# Record final capability snapshot
caps = StellaTracer.enabled_capabilities
StellaTracer.add_event({
type: 'ruby.runtime.end',
ts: StellaTracer.now_iso,
loaded_features_count: $LOADED_FEATURES.length - $stella_initial_features.length,
capabilities: caps
})
StellaTracer.flush
end
# Main execution
entrypoint = ENV[StellaTracer::ENTRYPOINT_ENV]
if entrypoint.nil? || entrypoint.empty?
StellaTracer.record_error('STELLA_RUBY_ENTRYPOINT not set')
exit 1
end
unless File.exist?(entrypoint)
StellaTracer.record_error("Entrypoint not found: #{entrypoint}")
exit 1
end
StellaTracer.add_event({
type: 'ruby.runtime.start',
ts: StellaTracer.now_iso,
module: StellaTracer.normalize_feature(entrypoint),
reason: 'shim-start',
ruby_version: RUBY_VERSION,
ruby_platform: RUBY_PLATFORM
})
# Optionally enable method tracing for specific classes
trace_classes = ENV['STELLA_RUBY_TRACE_CLASSES']&.split(',')&.map(&:strip)
stella_enable_method_trace(filter_classes: trace_classes) if trace_classes && !trace_classes.empty?
begin
load entrypoint
rescue => e
StellaTracer.record_error("#{e.class}: #{e.message}", entrypoint)
raise
end
""";
}
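The shim's `flush` sorts buffered events by `[ts, type]` before writing NDJSON, so ties on timestamp fall back to the event type and reruns produce byte-identical traces. A standalone sketch of that ordering (event payloads are illustrative):

```ruby
require 'json'

# Deterministic flush, as in the shim: sort by [ts, type], one JSON object per line.
def flush_events(events)
  events.sort_by { |e| [e[:ts].to_s, e[:type].to_s] }
        .map { |e| JSON.generate(e) }
        .join("\n")
end

events = [
  { type: 'ruby.require', ts: '2025-01-01T00:00:00.001Z', feature: 'json', success: true },
  { type: 'ruby.load', ts: '2025-01-01T00:00:00.001Z', module: { normalized: 'app.rb' }, wrap: false },
  { type: 'ruby.runtime.start', ts: '2025-01-01T00:00:00.000Z', reason: 'shim-start' }
]
puts flush_events(events)
```

With millisecond ISO-8601 timestamps, lexicographic string order equals chronological order, which is why the sort key can stay a plain string comparison.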

View File

@@ -0,0 +1,268 @@
using System.Text.Json;
using System.Text.Json.Serialization;
namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Runtime;
/// <summary>
/// Reads and parses Ruby runtime trace NDJSON output.
/// </summary>
internal static class RubyRuntimeTraceReader
{
private static readonly JsonSerializerOptions s_jsonOptions = new()
{
PropertyNameCaseInsensitive = true,
PropertyNamingPolicy = JsonNamingPolicy.SnakeCaseLower,
};
/// <summary>
/// Reads runtime trace events from an NDJSON file.
/// </summary>
public static async Task<RubyRuntimeTrace> ReadAsync(string path, CancellationToken cancellationToken)
{
ArgumentException.ThrowIfNullOrWhiteSpace(path);
if (!File.Exists(path))
{
return RubyRuntimeTrace.Empty;
}
var events = new List<RubyRuntimeEvent>();
var requires = new List<RubyRequireEvent>();
var loads = new List<RubyLoadEvent>();
var methodCalls = new List<RubyMethodCallEvent>();
var errors = new List<RubyRuntimeErrorEvent>();
string? rubyVersion = null;
string? rubyPlatform = null;
string[]? finalCapabilities = null;
int? loadedFeaturesCount = null;
await foreach (var line in File.ReadLinesAsync(path, cancellationToken).ConfigureAwait(false))
{
if (string.IsNullOrWhiteSpace(line))
{
continue;
}
try
{
using var doc = JsonDocument.Parse(line);
var root = doc.RootElement;
if (!root.TryGetProperty("type", out var typeProp))
{
continue;
}
var type = typeProp.GetString();
var timestamp = root.TryGetProperty("ts", out var tsProp) ? tsProp.GetString() : null;
switch (type)
{
case "ruby.runtime.start":
rubyVersion = root.TryGetProperty("ruby_version", out var vProp) ? vProp.GetString() : null;
rubyPlatform = root.TryGetProperty("ruby_platform", out var pProp) ? pProp.GetString() : null;
break;
case "ruby.runtime.end":
loadedFeaturesCount = root.TryGetProperty("loaded_features_count", out var fcProp)
? fcProp.GetInt32()
: null;
if (root.TryGetProperty("capabilities", out var capsProp) && capsProp.ValueKind == JsonValueKind.Array)
{
finalCapabilities = capsProp.EnumerateArray()
.Select(e => e.GetString())
.Where(s => s is not null)
.Cast<string>()
.ToArray();
}
break;
case "ruby.require":
var reqFeature = root.TryGetProperty("feature", out var fProp) ? fProp.GetString() : null;
var reqSuccess = root.TryGetProperty("success", out var sProp) && sProp.GetBoolean();
var reqCapability = root.TryGetProperty("capability", out var cProp) ? cProp.GetString() : null;
var reqModule = ParseModuleRef(root);
if (reqFeature is not null)
{
requires.Add(new RubyRequireEvent(
timestamp,
reqFeature,
reqModule?.Normalized,
reqModule?.PathSha256,
reqSuccess,
reqCapability));
}
break;
case "ruby.load":
var loadModule = ParseModuleRef(root);
var wrap = root.TryGetProperty("wrap", out var wProp) && wProp.GetBoolean();
if (loadModule is not null)
{
loads.Add(new RubyLoadEvent(
timestamp,
loadModule.Normalized,
loadModule.PathSha256,
wrap));
}
break;
case "ruby.method.call":
var className = root.TryGetProperty("class", out var clsProp) ? clsProp.GetString() : null;
var methodName = root.TryGetProperty("method", out var mtdProp) ? mtdProp.GetString() : null;
var location = ParseLocation(root);
if (className is not null && methodName is not null)
{
methodCalls.Add(new RubyMethodCallEvent(
timestamp,
className,
methodName,
location?.Path,
location?.Line));
}
break;
case "ruby.runtime.error":
var errorMsg = root.TryGetProperty("message", out var msgProp) ? msgProp.GetString() : null;
var errorLocation = root.TryGetProperty("location", out var locProp) ? ParseLocationDirect(locProp) : null;
if (errorMsg is not null)
{
errors.Add(new RubyRuntimeErrorEvent(timestamp, errorMsg, errorLocation?.Path));
}
break;
}
events.Add(new RubyRuntimeEvent(type ?? "unknown", timestamp));
}
catch (JsonException)
{
// Skip malformed lines
}
}
return new RubyRuntimeTrace(
events.ToArray(),
requires.ToArray(),
loads.ToArray(),
methodCalls.ToArray(),
errors.ToArray(),
rubyVersion,
rubyPlatform,
finalCapabilities ?? [],
loadedFeaturesCount);
}
private static ModuleRef? ParseModuleRef(JsonElement root)
{
if (!root.TryGetProperty("module", out var moduleProp) || moduleProp.ValueKind != JsonValueKind.Object)
{
return null;
}
var normalized = moduleProp.TryGetProperty("normalized", out var nProp) ? nProp.GetString() : null;
var sha256 = moduleProp.TryGetProperty("path_sha256", out var sProp) ? sProp.GetString() : null;
return normalized is not null ? new ModuleRef(normalized, sha256) : null;
}
private static LocationRef? ParseLocation(JsonElement root)
{
if (!root.TryGetProperty("location", out var locProp) || locProp.ValueKind != JsonValueKind.Object)
{
return null;
}
return ParseLocationDirect(locProp);
}
private static LocationRef? ParseLocationDirect(JsonElement locProp)
{
if (locProp.ValueKind != JsonValueKind.Object)
{
return null;
}
var path = locProp.TryGetProperty("path", out var pProp) ? pProp.GetString() : null;
var line = locProp.TryGetProperty("line", out var lProp) ? lProp.GetInt32() : (int?)null;
return path is not null ? new LocationRef(path, line) : null;
}
private sealed record ModuleRef(string Normalized, string? PathSha256);
private sealed record LocationRef(string Path, int? Line);
}
/// <summary>
/// Represents a complete Ruby runtime trace.
/// </summary>
internal sealed record RubyRuntimeTrace(
RubyRuntimeEvent[] Events,
RubyRequireEvent[] Requires,
RubyLoadEvent[] Loads,
RubyMethodCallEvent[] MethodCalls,
RubyRuntimeErrorEvent[] Errors,
string? RubyVersion,
string? RubyPlatform,
string[] Capabilities,
int? LoadedFeaturesCount)
{
public static RubyRuntimeTrace Empty { get; } = new(
[],
[],
[],
[],
[],
null,
null,
[],
null);
public bool IsEmpty => Events.Length == 0;
}
/// <summary>
/// Base runtime event with type and timestamp.
/// </summary>
internal sealed record RubyRuntimeEvent(string Type, string? Timestamp);
/// <summary>
/// A require event capturing a gem/file being loaded.
/// </summary>
internal sealed record RubyRequireEvent(
string? Timestamp,
string Feature,
string? NormalizedPath,
string? PathSha256,
bool Success,
string? Capability);
/// <summary>
/// A load event for explicit file loads.
/// </summary>
internal sealed record RubyLoadEvent(
string? Timestamp,
string NormalizedPath,
string? PathSha256,
bool Wrap);
/// <summary>
/// A method call event from TracePoint.
/// </summary>
internal sealed record RubyMethodCallEvent(
string? Timestamp,
string ClassName,
string MethodName,
string? Path,
int? Line);
/// <summary>
/// A runtime error event.
/// </summary>
internal sealed record RubyRuntimeErrorEvent(
string? Timestamp,
string Message,
string? Path);

View File

@@ -0,0 +1,164 @@
using System.Diagnostics;
using Microsoft.Extensions.Logging;
using StellaOps.Scanner.Analyzers.Lang;
namespace StellaOps.Scanner.Analyzers.Lang.Ruby.Internal.Runtime;
/// <summary>
/// Optional harness that executes the emitted Ruby runtime shim when an entrypoint is provided via environment variable.
/// This keeps runtime capture opt-in and offline-friendly.
/// </summary>
internal static class RubyRuntimeTraceRunner
{
private const string EntrypointEnvVar = "STELLA_RUBY_ENTRYPOINT";
private const string BinaryEnvVar = "STELLA_RUBY_BINARY";
private const string TraceClassesEnvVar = "STELLA_RUBY_TRACE_CLASSES";
private const string RuntimeFileName = "ruby-runtime.ndjson";
private const int DefaultTimeoutMs = 60_000; // 1 minute default timeout
public static async Task<bool> TryExecuteAsync(
LanguageAnalyzerContext context,
ILogger? logger,
CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(context);
var entrypoint = Environment.GetEnvironmentVariable(EntrypointEnvVar);
if (string.IsNullOrWhiteSpace(entrypoint))
{
logger?.LogDebug("Ruby runtime trace skipped: {EnvVar} not set", EntrypointEnvVar);
return false;
}
var entrypointPath = Path.GetFullPath(Path.Combine(context.RootPath, entrypoint));
if (!File.Exists(entrypointPath))
{
logger?.LogWarning("Ruby runtime trace skipped: entrypoint '{Entrypoint}' missing", entrypointPath);
return false;
}
var shimPath = Path.Combine(context.RootPath, RubyRuntimeShim.FileName);
if (!File.Exists(shimPath))
{
await RubyRuntimeShim.WriteAsync(context.RootPath, cancellationToken).ConfigureAwait(false);
}
var binary = Environment.GetEnvironmentVariable(BinaryEnvVar);
if (string.IsNullOrWhiteSpace(binary))
{
binary = "ruby";
}
var startInfo = new ProcessStartInfo
{
FileName = binary,
WorkingDirectory = context.RootPath,
RedirectStandardError = true,
RedirectStandardOutput = true,
UseShellExecute = false,
};
// Ruby arguments for sandboxed execution
// -W0: suppress warnings
// (-T taint mode is deliberately not passed: it was deprecated in Ruby 2.7 and removed in 3.0)
startInfo.ArgumentList.Add("-W0");
startInfo.ArgumentList.Add(shimPath);
// Pass through the entrypoint
startInfo.Environment[EntrypointEnvVar] = entrypointPath;
// Pass through trace classes filter if set
var traceClasses = Environment.GetEnvironmentVariable(TraceClassesEnvVar);
if (!string.IsNullOrWhiteSpace(traceClasses))
{
startInfo.Environment[TraceClassesEnvVar] = traceClasses;
}
// Sandbox guidance: Set restrictive environment variables
startInfo.Environment["BUNDLE_DISABLE_EXEC_LOAD"] = "1";
startInfo.Environment["BUNDLE_FROZEN"] = "1";
try
{
using var process = Process.Start(startInfo);
if (process is null)
{
logger?.LogWarning("Ruby runtime trace skipped: failed to start '{Binary}' process", binary);
return false;
}
using var cts = CancellationTokenSource.CreateLinkedTokenSource(cancellationToken);
cts.CancelAfter(DefaultTimeoutMs);
try
{
await process.WaitForExitAsync(cts.Token).ConfigureAwait(false);
}
catch (OperationCanceledException) when (!cancellationToken.IsCancellationRequested)
{
// Timeout - kill the process
logger?.LogWarning("Ruby runtime trace timed out after {Timeout}ms", DefaultTimeoutMs);
try
{
process.Kill(entireProcessTree: true);
}
catch
{
// Best effort
}
return false;
}
if (process.ExitCode != 0)
{
var stderr = await process.StandardError.ReadToEndAsync().ConfigureAwait(false);
logger?.LogWarning(
"Ruby runtime trace failed with exit code {ExitCode}. stderr: {Error}",
process.ExitCode,
Truncate(stderr));
// Still check for output file - partial traces may be useful
}
}
catch (OperationCanceledException)
{
throw;
}
catch (Exception ex)
{
logger?.LogWarning(ex, "Ruby runtime trace skipped: {Message}", ex.Message);
return false;
}
var runtimePath = Path.Combine(context.RootPath, RuntimeFileName);
if (!File.Exists(runtimePath))
{
logger?.LogWarning(
"Ruby runtime trace finished but did not emit {RuntimeFile}",
RuntimeFileName);
return false;
}
logger?.LogDebug("Ruby runtime trace completed: {RuntimeFile}", runtimePath);
return true;
}
/// <summary>
/// Gets the path to the expected runtime trace output file.
/// </summary>
public static string GetOutputPath(string rootPath) => Path.Combine(rootPath, RuntimeFileName);
/// <summary>
/// Checks if a runtime trace output exists for the given root path.
/// </summary>
public static bool OutputExists(string rootPath) => File.Exists(GetOutputPath(rootPath));
private static string Truncate(string? value, int maxLength = 400)
{
if (string.IsNullOrEmpty(value))
{
return string.Empty;
}
return value.Length <= maxLength ? value : value[..maxLength];
}
}

View File

@@ -30,6 +30,7 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
var capabilities = await RubyCapabilityDetector.DetectAsync(context, cancellationToken).ConfigureAwait(false);
var runtimeGraph = await RubyRuntimeGraphBuilder.BuildAsync(context, packages, cancellationToken).ConfigureAwait(false);
var bundlerConfig = RubyBundlerConfig.Load(context.RootPath);
foreach (var package in packages.OrderBy(static p => p.ComponentKey, StringComparer.Ordinal))
{
@@ -50,7 +51,7 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
if (packages.Count > 0)
{
EmitObservation(context, writer, packages, lockData, runtimeGraph, capabilities, bundlerConfig, lockData.BundledWith);
}
}
@@ -86,23 +87,28 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
LanguageAnalyzerContext context,
LanguageComponentWriter writer,
IReadOnlyList<RubyPackage> packages,
RubyLockData lockData,
RubyRuntimeGraph runtimeGraph,
RubyCapabilities capabilities,
RubyBundlerConfig bundlerConfig,
string? bundledWith)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(writer);
ArgumentNullException.ThrowIfNull(packages);
ArgumentNullException.ThrowIfNull(lockData);
ArgumentNullException.ThrowIfNull(runtimeGraph);
ArgumentNullException.ThrowIfNull(capabilities);
ArgumentNullException.ThrowIfNull(bundlerConfig);
var observationDocument = RubyObservationBuilder.Build(packages, lockData, runtimeGraph, capabilities, bundlerConfig, bundledWith);
var observationJson = RubyObservationSerializer.Serialize(observationDocument);
var observationHash = RubyObservationSerializer.ComputeSha256(observationJson);
var observationBytes = Encoding.UTF8.GetBytes(observationJson);
var observationMetadata = BuildObservationMetadata(
packages.Count,
observationDocument.DependencyEdges.Length,
observationDocument.RuntimeEdges.Length,
observationDocument.Capabilities,
observationDocument.BundledWith);
@@ -132,11 +138,13 @@ public sealed class RubyLanguageAnalyzer : ILanguageAnalyzer
private static IEnumerable<KeyValuePair<string, string?>> BuildObservationMetadata(
int packageCount,
int dependencyEdgeCount,
int runtimeEdgeCount,
RubyObservationCapabilitySummary capabilities,
string? bundledWith)
{
yield return new KeyValuePair<string, string?>("ruby.observation.packages", packageCount.ToString(CultureInfo.InvariantCulture));
yield return new KeyValuePair<string, string?>("ruby.observation.dependency_edges", dependencyEdgeCount.ToString(CultureInfo.InvariantCulture));
yield return new KeyValuePair<string, string?>("ruby.observation.runtime_edges", runtimeEdgeCount.ToString(CultureInfo.InvariantCulture));
yield return new KeyValuePair<string, string?>("ruby.observation.capability.exec", capabilities.UsesExec ? "true" : "false");
yield return new KeyValuePair<string, string?>("ruby.observation.capability.net", capabilities.UsesNetwork ? "true" : "false");

View File

@@ -6,3 +6,8 @@
| `SCANNER-ENG-0016` | DONE (2025-11-10) | RubyLockCollector merged with vendor cache ingestion; workspace overrides, bundler groups, git/path fixture, and offline-kit mirror updated. |
| `SCANNER-ENG-0017` | DONE (2025-11-09) | Build runtime require/autoload graph builder with tree-sitter Ruby per design §4.4, feed EntryTrace hints. |
| `SCANNER-ENG-0018` | DONE (2025-11-09) | Emit Ruby capability + framework surface signals, align with design §4.5 / Sprint 138. |
| `SCANNER-ANALYZERS-RUBY-28-001` | DONE (2025-11-27) | Added OCI container layer support (layers/, .layers/, layer/) to RubyLockCollector and RubyVendorArtifactCollector for VFS/container workspace discovery. Existing implementation already covered Gemfile/lock, vendor/bundle, .gem archives, .bundle/config, Rack configs, and framework fingerprints. |
| `SCANNER-ANALYZERS-RUBY-28-002` | DONE (2025-11-27) | Enhanced RubyLockParser to capture gem dependency edges with version constraints from Gemfile.lock; added RubyDependencyEdge type; updated RubyLockEntry, RubyObservationDocument, observation builder and serializer to produce dependencyEdges with from/to/constraint fields. PURLs and resolver traces now included. |
| `SCANNER-ANALYZERS-RUBY-28-003` | DONE (2025-11-27) | AOC-compliant observations integration: added schema field, RubyObservationEntrypoint and RubyObservationEnvironment types; builder generates entrypoints (path/type/requiredGems) and environment profiles (bundlePaths/gemfiles/lockfiles/frameworks); RubyRuntimeGraph provides GetEntrypointFiles/GetRequiredGems; bundlerConfig wired through analyzer for complete observation coverage. |
| `SCANNER-ANALYZERS-RUBY-28-004` | DONE (2025-11-27) | Fixtures/benchmarks for Ruby analyzer: created cli-app fixture with Thor/TTY-Prompt CLI gems, updated expected.json golden files for simple-app and complex-app with dependency edges format, added CliWorkspaceProducesDeterministicOutputAsync test; all 4 determinism tests pass. |
| `SCANNER-ANALYZERS-RUBY-28-005` | DONE (2025-11-27) | Runtime capture (tracepoint) hooks: created Internal/Runtime/ with RubyRuntimeShim.cs (trace-shim.rb using TracePoint for require/load events, capability detection, sensitive data redaction), RubyRuntimeTraceRunner.cs (opt-in harness via STELLA_RUBY_ENTRYPOINT env var, sandbox guidance), and RubyRuntimeTraceReader.cs (NDJSON parser for trace events). |
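The runtime-capture row above describes a `trace-shim.rb` that uses TracePoint to record require/load events as NDJSON. As a rough standalone sketch of the same idea — the event shape here is an assumption, and `Kernel#require` is intercepted via `Module#prepend` rather than the shim's TracePoint hook, since prepend exposes the requested feature name directly:

```ruby
require "json"

# Records every Kernel#require as an NDJSON-style event, in the spirit of
# the trace-shim.rb described above. Illustrative only: the shipped shim
# uses TracePoint, and its event schema/redaction rules may differ.
module RequireTracer
  EVENTS = []

  def require(feature)
    EVENTS << { "event" => "require", "feature" => feature.to_s }
    super
  end
end

Kernel.prepend(RequireTracer)

require "tmpdir" # traced from here on, even if the feature is already loaded
RequireTracer::EVENTS.each { |ev| $stdout.puts(ev.to_json) }
```

A real harness would additionally guard against re-entrancy and redact sensitive paths before emitting, as the task row notes.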

View File

@@ -0,0 +1,24 @@
{
"schemaVersion": "1.0",
"id": "stellaops.analyzer.lang.ruby",
"displayName": "StellaOps Ruby Analyzer",
"version": "0.1.0",
"requiresRestart": true,
"entryPoint": {
"type": "dotnet",
"assembly": "StellaOps.Scanner.Analyzers.Lang.Ruby.dll",
"typeName": "StellaOps.Scanner.Analyzers.Lang.Ruby.RubyAnalyzerPlugin"
},
"capabilities": [
"language-analyzer",
"ruby",
"rubygems",
"bundler"
],
"metadata": {
"org.stellaops.analyzer.language": "ruby",
"org.stellaops.analyzer.kind": "language",
"org.stellaops.restart.required": "true",
"org.stellaops.analyzer.runtime-capture": "optional"
}
}

View File

@@ -0,0 +1,10 @@
source "https://rubygems.org"
ruby "3.2.0"
gem "thor", "~> 1.3"
gem "tty-prompt", "~> 0.23"
group :development do
gem "bundler", "~> 2.5"
end

View File

@@ -0,0 +1,29 @@
GEM
remote: https://rubygems.org/
specs:
bundler (2.5.3)
pastel (0.8.0)
tty-color (~> 0.5)
thor (1.3.0)
tty-color (0.6.0)
tty-cursor (0.7.1)
tty-prompt (0.23.1)
pastel (~> 0.8)
tty-reader (~> 0.8)
tty-reader (0.9.0)
tty-cursor (~> 0.7)
tty-screen (~> 0.8)
wisper (~> 2.0)
tty-screen (0.8.2)
wisper (2.0.1)
PLATFORMS
ruby
DEPENDENCIES
bundler (~> 2.5)
thor (~> 1.3)
tty-prompt (~> 0.23)
BUNDLED WITH
2.5.3
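The lockfile above is the kind of input the dependency-edge extraction in `SCANNER-ANALYZERS-RUBY-28-002` consumes. A hypothetical Ruby sketch of that parse (the real parser is the C# `RubyLockParser`; the four/six-space indentation for resolved gems and their dependencies follows Bundler's lockfile layout):

```ruby
# Hypothetical sketch of Gemfile.lock dependency-edge extraction
# (the shipped parser is the C# RubyLockParser). Bundler indents each
# resolved gem four spaces under "specs:" and its dependencies six.
def lock_dependency_edges(lock_text)
  edges = []
  current = nil
  lock_text.each_line do |line|
    case line
    when /\A    (\S+) \(([^)]+)\)\s*\z/         # gem line: "    thor (1.3.0)"
      current = { name: $1, version: $2 }
    when /\A      (\S+)(?: \(([^)]+)\))?\s*\z/  # dep line: "      tty-color (~> 0.5)"
      next unless current
      edges << { from: "pkg:gem/#{current[:name]}@#{current[:version]}",
                 to: $1, constraint: $2 }
    when /\A\S/                                 # new top-level section (PLATFORMS, ...)
      current = nil
    end
  end
  edges
end
```

Run against the fixture above, this yields the six from/to/constraint edges recorded in the cli-app `expected.json` golden file (e.g. `pkg:gem/pastel@0.8.0` → `tty-color` with constraint `~> 0.5`).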

View File

@@ -0,0 +1,226 @@
[
{
"analyzerId": "ruby",
"componentKey": "observation::ruby",
"name": "Ruby Observation Summary",
"type": "ruby-observation",
"usedByEntrypoint": false,
"metadata": {
"ruby.observation.bundler_version": "2.5.3",
"ruby.observation.capability.exec": "false",
"ruby.observation.capability.net": "false",
"ruby.observation.capability.schedulers": "0",
"ruby.observation.capability.serialization": "false",
"ruby.observation.dependency_edges": "6",
"ruby.observation.packages": "9",
"ruby.observation.runtime_edges": "0"
},
"evidence": [
{
"kind": "derived",
"source": "ruby.observation",
"locator": "document",
"value": "{\u0022$schema\u0022:\u0022stellaops.ruby.observation@1\u0022,\u0022packages\u0022:[{\u0022name\u0022:\u0022bundler\u0022,\u0022version\u0022:\u00222.5.3\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022development\u0022]},{\u0022name\u0022:\u0022pastel\u0022,\u0022version\u0022:\u00220.8.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022thor\u0022,\u0022version\u0022:\u00221.3.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022tty-color\u0022,\u0022version\u0022:\u00220.6.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022tty-cursor\u0022,\u0022version\u0022:\u00220.7.1\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022tty-prompt\u0022,\u0022version\u0022:\u00220.23.1\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022tty-reader\u0022,\u0022version\u0022:\u00220.9.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022tty-screen\u0022,\u0022version\u0022:\u00220.8.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0
022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022wisper\u0022,\u0022version\u0022:\u00222.0.1\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]}],\u0022entrypoints\u0022:[],\u0022dependencyEdges\u0022:[{\u0022from\u0022:\u0022pkg:gem/pastel@0.8.0\u0022,\u0022to\u0022:\u0022tty-color\u0022,\u0022constraint\u0022:\u0022~\\u003E 0.5\u0022},{\u0022from\u0022:\u0022pkg:gem/tty-prompt@0.23.1\u0022,\u0022to\u0022:\u0022pastel\u0022,\u0022constraint\u0022:\u0022~\\u003E 0.8\u0022},{\u0022from\u0022:\u0022pkg:gem/tty-prompt@0.23.1\u0022,\u0022to\u0022:\u0022tty-reader\u0022,\u0022constraint\u0022:\u0022~\\u003E 0.8\u0022},{\u0022from\u0022:\u0022pkg:gem/tty-reader@0.9.0\u0022,\u0022to\u0022:\u0022tty-cursor\u0022,\u0022constraint\u0022:\u0022~\\u003E 0.7\u0022},{\u0022from\u0022:\u0022pkg:gem/tty-reader@0.9.0\u0022,\u0022to\u0022:\u0022tty-screen\u0022,\u0022constraint\u0022:\u0022~\\u003E 0.8\u0022},{\u0022from\u0022:\u0022pkg:gem/tty-reader@0.9.0\u0022,\u0022to\u0022:\u0022wisper\u0022,\u0022constraint\u0022:\u0022~\\u003E 2.0\u0022}],\u0022runtimeEdges\u0022:[],\u0022environment\u0022:{\u0022bundlerVersion\u0022:\u00222.5.3\u0022,\u0022lockfiles\u0022:[\u0022Gemfile.lock\u0022]},\u0022capabilities\u0022:{\u0022usesExec\u0022:false,\u0022usesNetwork\u0022:false,\u0022usesSerialization\u0022:false,\u0022jobSchedulers\u0022:[]},\u0022bundledWith\u0022:\u00222.5.3\u0022}",
"sha256": "sha256:5ec8b45dc480086cefbee03575845d57fb9fe4a0b000b109af46af5f2fe3f05d"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/bundler@2.5.3",
"purl": "pkg:gem/bundler@2.5.3",
"name": "bundler",
"version": "2.5.3",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "development",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/pastel@0.8.0",
"purl": "pkg:gem/pastel@0.8.0",
"name": "pastel",
"version": "0.8.0",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/thor@1.3.0",
"purl": "pkg:gem/thor@1.3.0",
"name": "thor",
"version": "1.3.0",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/tty-color@0.6.0",
"purl": "pkg:gem/tty-color@0.6.0",
"name": "tty-color",
"version": "0.6.0",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/tty-cursor@0.7.1",
"purl": "pkg:gem/tty-cursor@0.7.1",
"name": "tty-cursor",
"version": "0.7.1",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/tty-prompt@0.23.1",
"purl": "pkg:gem/tty-prompt@0.23.1",
"name": "tty-prompt",
"version": "0.23.1",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/tty-reader@0.9.0",
"purl": "pkg:gem/tty-reader@0.9.0",
"name": "tty-reader",
"version": "0.9.0",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/tty-screen@0.8.2",
"purl": "pkg:gem/tty-screen@0.8.2",
"name": "tty-screen",
"version": "0.8.2",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
},
{
"analyzerId": "ruby",
"componentKey": "purl::pkg:gem/wisper@2.0.1",
"purl": "pkg:gem/wisper@2.0.1",
"name": "wisper",
"version": "2.0.1",
"type": "gem",
"usedByEntrypoint": false,
"metadata": {
"declaredOnly": "true",
"groups": "default",
"lockfile": "Gemfile.lock",
"source": "https://rubygems.org/"
},
"evidence": [
{
"kind": "file",
"source": "Gemfile.lock",
"locator": "Gemfile.lock"
}
]
}
]

View File

@@ -12,6 +12,7 @@
"ruby.observation.capability.scheduler_list": "clockwork;sidekiq",
"ruby.observation.capability.schedulers": "2",
"ruby.observation.capability.serialization": "false",
"ruby.observation.dependency_edges": "4",
"ruby.observation.packages": "6",
"ruby.observation.runtime_edges": "5"
},
@@ -20,8 +21,8 @@
"kind": "derived",
"source": "ruby.observation",
"locator": "document",
"value": "{\u0022packages\u0022:[{\u0022name\u0022:\u0022clockwork\u0022,\u0022version\u0022:\u00223.0.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022ops\u0022]},{\u0022name\u0022:\u0022pagy\u0022,\u0022version\u0022:\u00226.5.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022web\u0022]},{\u0022name\u0022:\u0022pry\u0022,\u0022version\u0022:\u00220.14.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022tools\u0022]},{\u0022name\u0022:\u0022rack\u0022,\u0022version\u0022:\u00223.1.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022sidekiq\u0022,\u0022version\u0022:\u00227.2.1\u0022,\u0022source\u0022:\u0022vendor\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/custom-bundle/cache/sidekiq-7.2.1.gem\u0022,\u0022groups\u0022:[\u0022jobs\u0022]},{\u0022name\u0022:\u0022sinatra\u0022,\u0022version\u0022:\u00223.1.0\u0022,\u0022source\u0022:\u0022vendor-cache\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/cache/sinatra-3.1.0.gem\u0022,\u0022groups\u0022:[\u0022web\u0022]}],\u0022runtimeEdges\u0022:[{\u0022package\u0022:\u0022clockwork\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022pagy\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app/main.rb\u0022,\u0022config/environment.rb\u0022],\u0022e
ntrypoints\u0022:[\u0022config/environment.rb\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022rack\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sidekiq\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sinatra\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]}],\u0022capabilities\u0022:{\u0022usesExec\u0022:false,\u0022usesNetwork\u0022:true,\u0022usesSerialization\u0022:false,\u0022jobSchedulers\u0022:[\u0022clockwork\u0022,\u0022sidekiq\u0022]},\u0022bundledWith\u0022:\u00222.5.3\u0022}",
"sha256": "sha256:beaefa12ec1f49e62343781ffa949ec3fa006f0452cf8a342a9a12be3cda1d82"
"value": "{\u0022$schema\u0022:\u0022stellaops.ruby.observation@1\u0022,\u0022packages\u0022:[{\u0022name\u0022:\u0022clockwork\u0022,\u0022version\u0022:\u00223.0.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022ops\u0022]},{\u0022name\u0022:\u0022pagy\u0022,\u0022version\u0022:\u00226.5.0\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022web\u0022]},{\u0022name\u0022:\u0022pry\u0022,\u0022version\u0022:\u00220.14.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022tools\u0022]},{\u0022name\u0022:\u0022rack\u0022,\u0022version\u0022:\u00223.1.2\u0022,\u0022source\u0022:\u0022https://rubygems.org/\u0022,\u0022declaredOnly\u0022:true,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022groups\u0022:[\u0022default\u0022]},{\u0022name\u0022:\u0022sidekiq\u0022,\u0022version\u0022:\u00227.2.1\u0022,\u0022source\u0022:\u0022vendor\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/custom-bundle/cache/sidekiq-7.2.1.gem\u0022,\u0022groups\u0022:[\u0022jobs\u0022]},{\u0022name\u0022:\u0022sinatra\u0022,\u0022version\u0022:\u00223.1.0\u0022,\u0022source\u0022:\u0022vendor-cache\u0022,\u0022declaredOnly\u0022:false,\u0022lockfile\u0022:\u0022Gemfile.lock\u0022,\u0022artifact\u0022:\u0022vendor/cache/sinatra-3.1.0.gem\u0022,\u0022groups\u0022:[\u0022web\u0022]}],\u0022entrypoints\u0022:[{\u0022path\u0022:\u0022config/environment.rb\u0022,\u0022type\u0022:\u0022script\u0022,\u0022requiredGems\u0022:[\u0022pagy\u0022]}],\u0022dependencyEdges\u0022:[{\u0022from\u0022:\u0022pkg:gem/pry@0.14.2\u0022,\u0022to\u0022:\u0022coderay\u0022,\u0022constraint\u0022:\u0022~\\u003E 
1.1\u0022},{\u0022from\u0022:\u0022pkg:gem/pry@0.14.2\u0022,\u0022to\u0022:\u0022method_source\u0022,\u0022constraint\u0022:\u0022~\\u003E 1.0\u0022},{\u0022from\u0022:\u0022pkg:gem/sidekiq@7.2.1\u0022,\u0022to\u0022:\u0022rack\u0022,\u0022constraint\u0022:\u0022~\\u003E 2.0\u0022},{\u0022from\u0022:\u0022pkg:gem/sinatra@3.1.0\u0022,\u0022to\u0022:\u0022rack\u0022,\u0022constraint\u0022:\u0022~\\u003E 3.0\u0022}],\u0022runtimeEdges\u0022:[{\u0022package\u0022:\u0022clockwork\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022pagy\u0022,\u0022usedByEntrypoint\u0022:true,\u0022files\u0022:[\u0022app/main.rb\u0022,\u0022config/environment.rb\u0022],\u0022entrypoints\u0022:[\u0022config/environment.rb\u0022],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022rack\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sidekiq\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022scripts/worker.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]},{\u0022package\u0022:\u0022sinatra\u0022,\u0022usedByEntrypoint\u0022:false,\u0022files\u0022:[\u0022app/main.rb\u0022],\u0022entrypoints\u0022:[],\u0022reasons\u0022:[\u0022require-static\u0022]}],\u0022environment\u0022:{\u0022bundlerVersion\u0022:\u00222.5.3\u0022,\u0022bundlePaths\u0022:[\u0022/mnt/e/dev/git.stella-ops.org/src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests/bin/Debug/net10.0/Fixtures/lang/ruby/complex-app/vendor/custom-bundle\u0022],\u0022gemfiles\u0022:[\u0022/mnt/e/dev/git.stella-ops.org/src/Scanner/__Tests/StellaOps.Scanner.Analyzers.Lang.Ruby.Tests/bin/Debug/net10.0/Fixtures/lang/ruby/complex-app/Gemfile\u0022],\u0022lockfiles\u0022:[\u0022Gemfile.lo
ck\u0022],\u0022frameworks\u0022:[\u0022clockwork\u0022,\u0022sidekiq\u0022]},\u0022capabilities\u0022:{\u0022usesExec\u0022:false,\u0022usesNetwork\u0022:true,\u0022usesSerialization\u0022:false,\u0022jobSchedulers\u0022:[\u0022clockwork\u0022,\u0022sidekiq\u0022]},\u0022bundledWith\u0022:\u00222.5.3\u0022}",
"sha256": "sha256:58c8c02011baf8711e584a4b8e33effe7292a92af69cd6eaad6c3fd869ea93e0"
}
]
},

View File

@@ -86,4 +86,18 @@ public sealed class RubyLanguageAnalyzerTests
analyzers,
TestContext.Current.CancellationToken);
}
[Fact]
public async Task CliWorkspaceProducesDeterministicOutputAsync()
{
var fixturePath = TestPaths.ResolveFixture("lang", "ruby", "cli-app");
var goldenPath = Path.Combine(fixturePath, "expected.json");
var analyzers = new ILanguageAnalyzer[] { new RubyLanguageAnalyzer() };
await LanguageAnalyzerTestHarness.AssertDeterministicAsync(
fixturePath,
goldenPath,
analyzers,
TestContext.Current.CancellationToken);
}
}

View File

@@ -20,6 +20,8 @@
<ProjectReference Remove="..\StellaOps.Concelier.Testing\StellaOps.Concelier.Testing.csproj" />
<Compile Remove="$(MSBuildThisFileDirectory)..\StellaOps.Concelier.Tests.Shared\AssemblyInfo.cs" />
<Compile Remove="$(MSBuildThisFileDirectory)..\StellaOps.Concelier.Tests.Shared\MongoFixtureCollection.cs" />
<Compile Remove="$(MSBuildThisFileDirectory)..\..\..\..\tests\shared\OpenSslLegacyShim.cs" />
<Compile Remove="$(MSBuildThisFileDirectory)..\..\..\..\tests\shared\OpenSslAutoInit.cs" />
<Using Remove="StellaOps.Concelier.Testing" />
</ItemGroup>

View File

@@ -49,7 +49,8 @@ public sealed class SurfaceManifestStageExecutorTests
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
new NullRubyPackageInventoryStore(),
new DeterminismContext(true, DateTimeOffset.Parse("2024-01-01T00:00:00Z"), 1337, true, 1));
var context = CreateContext();
@@ -86,7 +87,8 @@ public sealed class SurfaceManifestStageExecutorTests
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
new NullRubyPackageInventoryStore(),
new DeterminismContext(false, DateTimeOffset.UnixEpoch, null, false, null));
var context = CreateContext();
PopulateAnalysis(context);
@@ -121,6 +123,48 @@ public sealed class SurfaceManifestStageExecutorTests
Assert.Contains(payloadMetrics, m => Equals("layer.fragments", m["surface.kind"]));
}
[Fact]
public async Task ExecuteAsync_EmitsDeterminismPayload()
{
var metrics = new ScannerWorkerMetrics();
var publisher = new TestSurfaceManifestPublisher("tenant-a");
var cache = new RecordingSurfaceCache();
var environment = new TestSurfaceEnvironment("tenant-a");
var hash = CreateCryptoHash();
var determinism = new DeterminismContext(
fixedClock: true,
fixedInstantUtc: DateTimeOffset.Parse("2024-01-01T00:00:00Z"),
rngSeed: 42,
filterLogs: true,
concurrencyLimit: 1);
var executor = new SurfaceManifestStageExecutor(
publisher,
cache,
environment,
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore(),
determinism);
var context = CreateContext();
context.Lease.Metadata["determinism.feed"] = "feed-001";
context.Lease.Metadata["determinism.policy"] = "rev-77";
await executor.ExecuteAsync(context, CancellationToken.None);
var determinismPayload = publisher.LastRequest!.Payloads.Single(p => p.Kind == "determinism.json");
var json = JsonDocument.Parse(determinismPayload.Content.Span);
Assert.True(json.RootElement.GetProperty("fixedClock").GetBoolean());
Assert.Equal(42, json.RootElement.GetProperty("rngSeed").GetInt32());
Assert.Equal(1, json.RootElement.GetProperty("concurrencyLimit").GetInt32());
Assert.Equal("feed-001", json.RootElement.GetProperty("pins").GetProperty("feed").GetString());
Assert.Equal("rev-77", json.RootElement.GetProperty("pins").GetProperty("policy").GetString());
Assert.True(json.RootElement.GetProperty("artifacts").EnumerateObject().Any());
}
[Fact]
public async Task ExecuteAsync_IncludesEntropyPayloads_WhenPresent()
{
@@ -137,7 +181,8 @@ public sealed class SurfaceManifestStageExecutorTests
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
new NullRubyPackageInventoryStore(),
new DeterminismContext(false, DateTimeOffset.UnixEpoch, null, false, null));
var context = CreateContext();
@@ -298,7 +343,8 @@ public sealed class SurfaceManifestStageExecutorTests
metrics,
NullLogger<SurfaceManifestStageExecutor>.Instance,
hash,
new NullRubyPackageInventoryStore());
new NullRubyPackageInventoryStore(),
new DeterminismContext(false, DateTimeOffset.UnixEpoch, null, false, null));
var context = CreateContext();
var observationBytes = Encoding.UTF8.GetBytes("{\"entrypoints\":[\"mod.ts\"]}");

View File

@@ -4,7 +4,7 @@
| --- | --- | --- |
| SDKGEN-62-001 | DONE (2025-11-24) | Toolchain pinned: OpenAPI Generator CLI 7.4.0 + JDK 21, determinism rules in TOOLCHAIN.md/toolchain.lock.yaml. |
| SDKGEN-62-002 | DONE (2025-11-24) | Shared post-process now copies auth/retry/pagination/telemetry helpers for TS/Python/Go/Java, wires TS/Python exports, and adds smoke tests. |
| SDKGEN-63-001 | DOING (2025-11-24) | Added TS generator config/script, fixture spec, smoke test (green with vendored JDK/JAR); packaging templates and typed error/helper exports now copied via postprocess. Spec hash guard writes `.oas.sha256` and optionally enforces `STELLA_OAS_EXPECTED_SHA256`; waiting on frozen OpenAPI to publish alpha. |
| SDKGEN-63-002 | DOING (2025-11-24) | Python generator scaffold added (config, script, smoke test; reuses the ping fixture) with spec hash guard + `.oas.sha256`; awaiting frozen OpenAPI to emit alpha. |
| SDKGEN-63-001 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate TS alpha; scaffold + smoke + hash guard ready. |
| SDKGEN-63-002 | BLOCKED (2025-11-26) | Waiting on frozen aggregate OAS digest to generate Python alpha; scaffold + smoke + hash guard ready. |
| SDKGEN-63-003 | BLOCKED (2025-11-26) | Go generator scaffold ready; blocked on frozen aggregate OAS digest to emit alpha. |
| SDKGEN-63-004 | BLOCKED (2025-11-26) | Java generator scaffold ready; blocked on frozen aggregate OAS digest to emit alpha. |
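The SDKGEN rows above mention a spec hash guard that writes `.oas.sha256` and optionally enforces `STELLA_OAS_EXPECTED_SHA256`. A minimal sketch of that behavior — file naming and error text are assumptions, and the actual guard lives in the generator scripts, not in Ruby:

```ruby
require "digest"

# Hypothetical sketch of the spec hash guard described above: record the
# aggregate OAS digest beside the spec, and fail the run on mismatch when
# the optional STELLA_OAS_EXPECTED_SHA256 pin is set.
def guard_spec_hash(spec_path)
  digest = Digest::SHA256.file(spec_path).hexdigest
  File.write("#{spec_path}.oas.sha256", "#{digest}\n")
  expected = ENV["STELLA_OAS_EXPECTED_SHA256"]
  if expected && expected.strip.downcase != digest
    raise "OAS digest mismatch: expected #{expected}, got #{digest}"
  end
  digest
end
```

Pinning the digest this way is what lets the blocked SDK alphas regenerate deterministically once the aggregate OpenAPI document is frozen.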

View File

@@ -57,6 +57,41 @@ export const routes: Routes = [
(m) => m.GraphExplorerComponent
),
},
{
path: 'evidence/:advisoryId',
loadComponent: () =>
import('./features/evidence/evidence-page.component').then(
(m) => m.EvidencePageComponent
),
},
{
path: 'sources',
loadComponent: () =>
import('./features/sources/aoc-dashboard.component').then(
(m) => m.AocDashboardComponent
),
},
{
path: 'sources/violations/:code',
loadComponent: () =>
import('./features/sources/violation-detail.component').then(
(m) => m.ViolationDetailComponent
),
},
{
path: 'releases',
loadComponent: () =>
import('./features/releases/release-flow.component').then(
(m) => m.ReleaseFlowComponent
),
},
{
path: 'releases/:releaseId',
loadComponent: () =>
import('./features/releases/release-flow.component').then(
(m) => m.ReleaseFlowComponent
),
},
{
path: 'auth/callback',
loadComponent: () =>

View File

@@ -0,0 +1,364 @@
import { Injectable, InjectionToken } from '@angular/core';
import { Observable, of, delay } from 'rxjs';
import {
AocDashboardSummary,
AocPassFailSummary,
AocViolationCode,
IngestThroughput,
AocSource,
AocCheckResult,
VerificationRequest,
ViolationDetail,
TimeSeriesPoint,
} from './aoc.models';
/**
* Injection token for AOC API client.
*/
export const AOC_API = new InjectionToken<AocApi>('AOC_API');
/**
* AOC API interface.
*/
export interface AocApi {
getDashboardSummary(): Observable<AocDashboardSummary>;
getViolationDetail(violationId: string): Observable<ViolationDetail>;
getViolationsByCode(code: string): Observable<readonly ViolationDetail[]>;
startVerification(): Observable<VerificationRequest>;
getVerificationStatus(requestId: string): Observable<VerificationRequest>;
}
// ============================================================================
// Mock Data Fixtures
// ============================================================================
function generateHistory(days: number, baseValue: number, variance: number): TimeSeriesPoint[] {
const points: TimeSeriesPoint[] = [];
const now = new Date();
for (let i = days - 1; i >= 0; i--) {
const date = new Date(now);
date.setDate(date.getDate() - i);
points.push({
timestamp: date.toISOString(),
value: baseValue + Math.floor(Math.random() * variance * 2) - variance,
});
}
return points;
}
const mockPassFailSummary: AocPassFailSummary = {
period: 'last_24h',
totalChecks: 1247,
passed: 1198,
failed: 32,
pending: 12,
skipped: 5,
passRate: 0.961,
trend: 'improving',
history: generateHistory(7, 96, 3),
};
const mockViolationCodes: AocViolationCode[] = [
{
code: 'AOC-001',
name: 'Missing Provenance',
severity: 'critical',
description: 'Document lacks required provenance attestation',
count: 12,
lastSeen: '2025-11-27T09:45:00Z',
documentationUrl: 'https://docs.stellaops.io/aoc/violations/AOC-001',
},
{
code: 'AOC-002',
name: 'Invalid Signature',
severity: 'critical',
description: 'Document signature verification failed',
count: 8,
lastSeen: '2025-11-27T08:30:00Z',
documentationUrl: 'https://docs.stellaops.io/aoc/violations/AOC-002',
},
{
code: 'AOC-010',
name: 'Schema Mismatch',
severity: 'high',
description: 'Document does not conform to expected schema version',
count: 5,
lastSeen: '2025-11-27T07:15:00Z',
documentationUrl: 'https://docs.stellaops.io/aoc/violations/AOC-010',
},
{
code: 'AOC-015',
name: 'Timestamp Drift',
severity: 'medium',
description: 'Document timestamp exceeds allowed drift threshold',
count: 4,
lastSeen: '2025-11-27T06:00:00Z',
},
{
code: 'AOC-020',
name: 'Metadata Incomplete',
severity: 'low',
description: 'Optional metadata fields are missing',
count: 3,
lastSeen: '2025-11-26T22:30:00Z',
},
];
const mockThroughput: IngestThroughput[] = [
{
tenantId: 'tenant-001',
tenantName: 'Acme Corp',
documentsIngested: 15420,
bytesIngested: 2_450_000_000,
documentsPerMinute: 10.7,
bytesPerMinute: 1_701_388,
period: 'last_24h',
},
{
tenantId: 'tenant-002',
tenantName: 'TechStart Inc',
documentsIngested: 8932,
bytesIngested: 1_120_000_000,
documentsPerMinute: 6.2,
bytesPerMinute: 777_777,
period: 'last_24h',
},
{
tenantId: 'tenant-003',
tenantName: 'DataFlow Ltd',
documentsIngested: 5678,
bytesIngested: 890_000_000,
documentsPerMinute: 3.9,
bytesPerMinute: 618_055,
period: 'last_24h',
},
{
tenantId: 'tenant-004',
tenantName: 'SecureOps',
documentsIngested: 3421,
bytesIngested: 456_000_000,
documentsPerMinute: 2.4,
bytesPerMinute: 316_666,
period: 'last_24h',
},
];
const mockSources: AocSource[] = [
{
sourceId: 'src-001',
name: 'Production Registry',
type: 'registry',
status: 'passed',
lastCheck: '2025-11-27T10:00:00Z',
checkCount: 523,
passRate: 0.98,
recentViolations: [],
},
{
sourceId: 'src-002',
name: 'GitHub Actions Pipeline',
type: 'pipeline',
status: 'failed',
lastCheck: '2025-11-27T09:45:00Z',
checkCount: 412,
passRate: 0.92,
recentViolations: [mockViolationCodes[0], mockViolationCodes[1]],
},
{
sourceId: 'src-003',
name: 'Staging Registry',
type: 'registry',
status: 'passed',
lastCheck: '2025-11-27T09:30:00Z',
checkCount: 201,
passRate: 0.995,
recentViolations: [],
},
{
sourceId: 'src-004',
name: 'Manual Upload',
type: 'manual',
status: 'pending',
lastCheck: '2025-11-27T08:00:00Z',
checkCount: 111,
passRate: 0.85,
recentViolations: [mockViolationCodes[2]],
},
];
const mockRecentChecks: AocCheckResult[] = [
{
checkId: 'chk-001',
documentId: 'doc-abc123',
documentType: 'sbom',
status: 'passed',
checkedAt: '2025-11-27T10:00:00Z',
violations: [],
sourceId: 'src-001',
tenantId: 'tenant-001',
},
{
checkId: 'chk-002',
documentId: 'doc-def456',
documentType: 'attestation',
status: 'failed',
checkedAt: '2025-11-27T09:55:00Z',
violations: [mockViolationCodes[0]],
sourceId: 'src-002',
tenantId: 'tenant-001',
},
{
checkId: 'chk-003',
documentId: 'doc-ghi789',
documentType: 'sbom',
status: 'passed',
checkedAt: '2025-11-27T09:50:00Z',
violations: [],
sourceId: 'src-001',
tenantId: 'tenant-002',
},
{
checkId: 'chk-004',
documentId: 'doc-jkl012',
documentType: 'provenance',
status: 'failed',
checkedAt: '2025-11-27T09:45:00Z',
violations: [mockViolationCodes[1]],
sourceId: 'src-002',
tenantId: 'tenant-001',
},
{
checkId: 'chk-005',
documentId: 'doc-mno345',
documentType: 'sbom',
status: 'pending',
checkedAt: '2025-11-27T09:40:00Z',
violations: [],
sourceId: 'src-004',
tenantId: 'tenant-003',
},
];
const mockDashboard: AocDashboardSummary = {
generatedAt: new Date().toISOString(),
passFail: mockPassFailSummary,
recentViolations: mockViolationCodes,
throughputByTenant: mockThroughput,
sources: mockSources,
recentChecks: mockRecentChecks,
};
const mockViolationDetails: ViolationDetail[] = [
{
violationId: 'viol-001',
code: 'AOC-001',
severity: 'critical',
documentId: 'doc-def456',
documentType: 'attestation',
offendingFields: [
{
path: '$.predicate.buildType',
expectedValue: 'https://slsa.dev/provenance/v1',
actualValue: undefined,
reason: 'Required field is missing',
},
{
path: '$.predicate.builder.id',
expectedValue: 'https://github.com/actions/runner',
actualValue: undefined,
reason: 'Builder ID not specified',
},
],
provenance: {
sourceType: 'pipeline',
sourceUri: 'github.com/acme/api-service',
ingestedAt: '2025-11-27T09:55:00Z',
ingestedBy: 'github-actions',
buildId: 'build-12345',
commitSha: 'a1b2c3d4e5f6',
pipelineUrl: 'https://github.com/acme/api-service/actions/runs/12345',
},
detectedAt: '2025-11-27T09:55:00Z',
suggestion: 'Add SLSA provenance attestation to your build pipeline. See https://slsa.dev/spec/v1.0/provenance',
},
{
violationId: 'viol-002',
code: 'AOC-002',
severity: 'critical',
documentId: 'doc-jkl012',
documentType: 'provenance',
offendingFields: [
{
path: '$.signatures[0]',
expectedValue: 'Valid DSSE signature',
actualValue: 'Invalid or expired signature',
reason: 'Signature verification failed: key not found in keyring',
},
],
provenance: {
sourceType: 'pipeline',
sourceUri: 'github.com/acme/worker-service',
ingestedAt: '2025-11-27T09:45:00Z',
ingestedBy: 'github-actions',
buildId: 'build-12346',
commitSha: 'b2c3d4e5f6a7',
pipelineUrl: 'https://github.com/acme/worker-service/actions/runs/12346',
},
detectedAt: '2025-11-27T09:45:00Z',
suggestion: 'Ensure the signing key is registered in your tenant keyring. Run: stella keys add --public-key <key-file>',
},
];
// ============================================================================
// Mock API Implementation
// ============================================================================
@Injectable({ providedIn: 'root' })
export class MockAocApi implements AocApi {
getDashboardSummary(): Observable<AocDashboardSummary> {
return of({
...mockDashboard,
generatedAt: new Date().toISOString(),
}).pipe(delay(300));
}
getViolationDetail(violationId: string): Observable<ViolationDetail> {
const detail = mockViolationDetails.find((v) => v.violationId === violationId);
if (!detail) {
throw new Error(`Violation not found: ${violationId}`);
}
return of(detail).pipe(delay(200));
}
getViolationsByCode(code: string): Observable<readonly ViolationDetail[]> {
const details = mockViolationDetails.filter((v) => v.code === code);
return of(details).pipe(delay(250));
}
startVerification(): Observable<VerificationRequest> {
return of({
requestId: `verify-${Date.now()}`,
status: 'queued',
documentsToVerify: 1247,
documentsVerified: 0,
passed: 0,
failed: 0,
cliCommand: 'stella aoc verify --since 24h --output json',
}).pipe(delay(400));
}
getVerificationStatus(requestId: string): Observable<VerificationRequest> {
// Simulate a completed verification
return of({
requestId,
status: 'completed',
startedAt: new Date(Date.now() - 120000).toISOString(),
completedAt: new Date().toISOString(),
documentsToVerify: 1247,
documentsVerified: 1247,
passed: 1198,
failed: 49,
cliCommand: 'stella aoc verify --since 24h --output json',
}).pipe(delay(300));
}
}


@@ -0,0 +1,152 @@
/**
* Attestation of Conformance (AOC) models for UI-AOC-19-001.
* Supports Sources dashboard tiles showing pass/fail, violation codes, and ingest throughput.
*/
// AOC verification status
export type AocVerificationStatus = 'passed' | 'failed' | 'pending' | 'skipped';
// Violation severity levels
export type ViolationSeverity = 'critical' | 'high' | 'medium' | 'low' | 'info';
/**
* AOC violation code with metadata.
*/
export interface AocViolationCode {
readonly code: string;
readonly name: string;
readonly severity: ViolationSeverity;
readonly description: string;
readonly count: number;
readonly lastSeen: string;
readonly documentationUrl?: string;
}
/**
* Per-tenant ingest throughput metrics.
*/
export interface IngestThroughput {
readonly tenantId: string;
readonly tenantName: string;
readonly documentsIngested: number;
readonly bytesIngested: number;
readonly documentsPerMinute: number;
readonly bytesPerMinute: number;
readonly period: string; // e.g., "last_24h", "last_7d"
}
/**
* Time-series data point for charts.
*/
export interface TimeSeriesPoint {
readonly timestamp: string;
readonly value: number;
}
/**
* AOC pass/fail summary for a time period.
*/
export interface AocPassFailSummary {
readonly period: string;
readonly totalChecks: number;
readonly passed: number;
readonly failed: number;
readonly pending: number;
readonly skipped: number;
readonly passRate: number; // 0-1
readonly trend: 'improving' | 'stable' | 'degrading';
readonly history: readonly TimeSeriesPoint[];
}
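The `trend` field can be derived from `history`. A minimal sketch, assuming the dashboard compares the mean pass rate of the newer half of the window against the older half with a 5% threshold (the split and threshold are illustrative assumptions, not a documented heuristic; `TimeSeriesPoint` is copied locally for self-containment):

```typescript
type Trend = 'improving' | 'stable' | 'degrading';

interface TimeSeriesPoint {
  readonly timestamp: string;
  readonly value: number;
}

// Compare the mean of the newer half of the series against the older half;
// a 5% relative move in either direction flips the trend.
function computeTrend(history: readonly TimeSeriesPoint[]): Trend {
  if (history.length < 2) {
    return 'stable';
  }
  const mid = Math.floor(history.length / 2);
  const mean = (pts: readonly TimeSeriesPoint[]) =>
    pts.reduce((sum, p) => sum + p.value, 0) / pts.length;
  const older = mean(history.slice(0, mid));
  const newer = mean(history.slice(mid));
  if (newer > older * 1.05) {
    return 'improving';
  }
  if (newer < older * 0.95) {
    return 'degrading';
  }
  return 'stable';
}
```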
/**
* Individual AOC check result.
*/
export interface AocCheckResult {
readonly checkId: string;
readonly documentId: string;
readonly documentType: string;
readonly status: AocVerificationStatus;
readonly checkedAt: string;
readonly violations: readonly AocViolationCode[];
readonly sourceId?: string;
readonly tenantId: string;
}
/**
* Source with AOC metrics.
*/
export interface AocSource {
readonly sourceId: string;
readonly name: string;
readonly type: 'registry' | 'repository' | 'pipeline' | 'manual';
readonly status: AocVerificationStatus;
readonly lastCheck: string;
readonly checkCount: number;
readonly passRate: number;
readonly recentViolations: readonly AocViolationCode[];
}
/**
* AOC dashboard summary combining all metrics.
*/
export interface AocDashboardSummary {
readonly generatedAt: string;
readonly passFail: AocPassFailSummary;
readonly recentViolations: readonly AocViolationCode[];
readonly throughputByTenant: readonly IngestThroughput[];
readonly sources: readonly AocSource[];
readonly recentChecks: readonly AocCheckResult[];
}
/**
* Verification request for "Verify last 24h" action.
*/
export interface VerificationRequest {
readonly requestId: string;
readonly status: 'queued' | 'running' | 'completed' | 'failed';
readonly startedAt?: string;
readonly completedAt?: string;
readonly documentsToVerify: number;
readonly documentsVerified: number;
readonly passed: number;
readonly failed: number;
readonly cliCommand?: string; // CLI parity command
}
/**
* Violation detail for drill-down view.
*/
export interface ViolationDetail {
readonly violationId: string;
readonly code: string;
readonly severity: ViolationSeverity;
readonly documentId: string;
readonly documentType: string;
readonly offendingFields: readonly OffendingField[];
readonly provenance: ProvenanceMetadata;
readonly detectedAt: string;
readonly suggestion?: string;
}
/**
* Offending field in a document.
*/
export interface OffendingField {
readonly path: string; // JSON path, e.g., "$.metadata.labels"
readonly expectedValue?: string;
readonly actualValue?: string;
readonly reason: string;
}
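Offending fields carry JSON paths such as `$.predicate.buildType` or `$.signatures[0]`. To highlight the actual value in a raw-document view, a resolver for just this simple subset (dot segments and numeric indexes; full JSONPath is out of scope, and the helper name is illustrative):

```typescript
// Resolve a simple JSONPath subset: "$" root, ".name" segments, "[0]" indexes.
// Returns undefined when any segment is missing, mirroring a missing-field violation.
function resolvePath(doc: unknown, path: string): unknown {
  const segments = path
    .replace(/^\$\.?/, '') // drop the "$." root marker
    .split(/\.|\[|\]/) // split on dots and brackets
    .filter((s) => s.length > 0);
  let current: unknown = doc;
  for (const segment of segments) {
    if (current === null || typeof current !== 'object') {
      return undefined;
    }
    current = (current as Record<string, unknown>)[segment];
  }
  return current;
}
```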
/**
* Provenance metadata for a document.
*/
export interface ProvenanceMetadata {
readonly sourceType: string;
readonly sourceUri: string;
readonly ingestedAt: string;
readonly ingestedBy: string;
readonly buildId?: string;
readonly commitSha?: string;
readonly pipelineUrl?: string;
}


@@ -0,0 +1,323 @@
import { Injectable, InjectionToken } from '@angular/core';
import { Observable, of, delay } from 'rxjs';
import {
EvidenceData,
Linkset,
Observation,
PolicyEvidence,
} from './evidence.models';
export interface EvidenceApi {
getEvidenceForAdvisory(advisoryId: string): Observable<EvidenceData>;
getObservation(observationId: string): Observable<Observation>;
getLinkset(linksetId: string): Observable<Linkset>;
getPolicyEvidence(advisoryId: string): Observable<PolicyEvidence | null>;
downloadRawDocument(type: 'observation' | 'linkset', id: string): Observable<Blob>;
}
export const EVIDENCE_API = new InjectionToken<EvidenceApi>('EVIDENCE_API');
// Mock data for development
const MOCK_OBSERVATIONS: Observation[] = [
{
observationId: 'obs-ghsa-001',
tenantId: 'tenant-1',
source: 'ghsa',
advisoryId: 'GHSA-jfh8-c2jp-5v3q',
title: 'Log4j Remote Code Execution (Log4Shell)',
summary: 'Apache Log4j2 2.0-beta9 through 2.15.0 (excluding security releases 2.12.2, 2.12.3, and 2.3.1) JNDI features do not protect against attacker controlled LDAP and other JNDI related endpoints.',
severities: [
{ system: 'cvss_v3', score: 10.0, vector: 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H' },
],
affected: [
{
purl: 'pkg:maven/org.apache.logging.log4j/log4j-core',
package: 'log4j-core',
ecosystem: 'maven',
ranges: [
{
type: 'ECOSYSTEM',
events: [
{ introduced: '2.0-beta9' },
{ fixed: '2.15.0' },
],
},
],
},
],
references: [
'https://github.com/advisories/GHSA-jfh8-c2jp-5v3q',
'https://logging.apache.org/log4j/2.x/security.html',
],
weaknesses: ['CWE-502', 'CWE-400', 'CWE-20'],
published: '2021-12-10T00:00:00Z',
modified: '2024-01-15T10:30:00Z',
provenance: {
sourceArtifactSha: 'sha256:abc123def456...',
fetchedAt: '2024-11-20T08:00:00Z',
ingestJobId: 'job-ghsa-2024-1120',
},
ingestedAt: '2024-11-20T08:05:00Z',
},
{
observationId: 'obs-nvd-001',
tenantId: 'tenant-1',
source: 'nvd',
advisoryId: 'CVE-2021-44228',
title: 'Apache Log4j2 Remote Code Execution Vulnerability',
summary: 'Apache Log4j2 <=2.14.1 JNDI features used in configuration, log messages, and parameters do not protect against attacker controlled LDAP and other JNDI related endpoints.',
severities: [
{ system: 'cvss_v3', score: 10.0, vector: 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H' },
{ system: 'cvss_v2', score: 9.3, vector: 'AV:N/AC:M/Au:N/C:C/I:C/A:C' },
],
affected: [
{
purl: 'pkg:maven/org.apache.logging.log4j/log4j-core',
package: 'log4j-core',
ecosystem: 'maven',
versions: ['2.0-beta9', '2.0', '2.1', '2.2', '2.3', '2.4', '2.4.1', '2.5', '2.6', '2.6.1', '2.6.2', '2.7', '2.8', '2.8.1', '2.8.2', '2.9.0', '2.9.1', '2.10.0', '2.11.0', '2.11.1', '2.11.2', '2.12.0', '2.12.1', '2.13.0', '2.13.1', '2.13.2', '2.13.3', '2.14.0', '2.14.1'],
cpe: ['cpe:2.3:a:apache:log4j:*:*:*:*:*:*:*:*'],
},
],
references: [
'https://nvd.nist.gov/vuln/detail/CVE-2021-44228',
'https://www.cisa.gov/news-events/alerts/2021/12/11/apache-log4j-vulnerability-guidance',
],
relationships: [
{ type: 'alias', source: 'CVE-2021-44228', target: 'GHSA-jfh8-c2jp-5v3q', provenance: 'nvd' },
],
weaknesses: ['CWE-917', 'CWE-20', 'CWE-400', 'CWE-502'],
published: '2021-12-10T10:15:00Z',
modified: '2024-02-20T15:45:00Z',
provenance: {
sourceArtifactSha: 'sha256:def789ghi012...',
fetchedAt: '2024-11-20T08:10:00Z',
ingestJobId: 'job-nvd-2024-1120',
},
ingestedAt: '2024-11-20T08:15:00Z',
},
{
observationId: 'obs-osv-001',
tenantId: 'tenant-1',
source: 'osv',
advisoryId: 'GHSA-jfh8-c2jp-5v3q',
title: 'Remote code injection in Log4j',
summary: 'Logging untrusted data with log4j versions 2.0-beta9 through 2.14.1 can result in remote code execution.',
severities: [
{ system: 'cvss_v3', score: 10.0, vector: 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H' },
],
affected: [
{
purl: 'pkg:maven/org.apache.logging.log4j/log4j-core',
package: 'log4j-core',
ecosystem: 'Maven',
ranges: [
{
type: 'ECOSYSTEM',
events: [
{ introduced: '2.0-beta9' },
{ fixed: '2.3.1' },
],
},
{
type: 'ECOSYSTEM',
events: [
{ introduced: '2.4' },
{ fixed: '2.12.2' },
],
},
{
type: 'ECOSYSTEM',
events: [
{ introduced: '2.13.0' },
{ fixed: '2.15.0' },
],
},
],
},
],
references: [
'https://osv.dev/vulnerability/GHSA-jfh8-c2jp-5v3q',
],
published: '2021-12-10T00:00:00Z',
modified: '2023-06-15T09:00:00Z',
provenance: {
sourceArtifactSha: 'sha256:ghi345jkl678...',
fetchedAt: '2024-11-20T08:20:00Z',
ingestJobId: 'job-osv-2024-1120',
},
ingestedAt: '2024-11-20T08:25:00Z',
},
];
const MOCK_LINKSET: Linkset = {
linksetId: 'ls-log4shell-001',
tenantId: 'tenant-1',
advisoryId: 'CVE-2021-44228',
source: 'aggregated',
observations: ['obs-ghsa-001', 'obs-nvd-001', 'obs-osv-001'],
normalized: {
purls: ['pkg:maven/org.apache.logging.log4j/log4j-core'],
versions: ['2.0-beta9', '2.0', '2.1', '2.2', '2.3', '2.4', '2.4.1', '2.5', '2.6', '2.6.1', '2.6.2', '2.7', '2.8', '2.8.1', '2.8.2', '2.9.0', '2.9.1', '2.10.0', '2.11.0', '2.11.1', '2.11.2', '2.12.0', '2.12.1', '2.13.0', '2.13.1', '2.13.2', '2.13.3', '2.14.0', '2.14.1'],
severities: [
{ system: 'cvss_v3', score: 10.0, vector: 'CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H' },
],
},
confidence: 0.95,
conflicts: [
{
field: 'affected.ranges',
reason: 'Different fixed version ranges reported by sources',
values: ['2.15.0 (GHSA)', '2.3.1/2.12.2/2.15.0 (OSV)'],
sourceIds: ['ghsa', 'osv'],
},
{
field: 'weaknesses',
reason: 'Different CWE identifiers reported',
values: ['CWE-502, CWE-400, CWE-20 (GHSA)', 'CWE-917, CWE-20, CWE-400, CWE-502 (NVD)'],
sourceIds: ['ghsa', 'nvd'],
},
],
createdAt: '2024-11-20T08:30:00Z',
builtByJobId: 'linkset-build-2024-1120',
provenance: {
observationHashes: [
'sha256:abc123...',
'sha256:def789...',
'sha256:ghi345...',
],
toolVersion: 'concelier-lnm-1.2.0',
policyHash: 'sha256:policy-hash-001',
},
};
const MOCK_POLICY_EVIDENCE: PolicyEvidence = {
policyId: 'pol-critical-vuln-001',
policyName: 'Critical Vulnerability Policy',
decision: 'block',
decidedAt: '2024-11-20T08:35:00Z',
reason: 'Critical severity vulnerability (CVSS 10.0) with known exploits',
rules: [
{
ruleId: 'rule-cvss-critical',
ruleName: 'Block Critical CVSS',
passed: false,
reason: 'CVSS score 10.0 exceeds threshold of 9.0',
matchedItems: ['CVE-2021-44228'],
},
{
ruleId: 'rule-known-exploit',
ruleName: 'Known Exploit Check',
passed: false,
reason: 'Active exploitation reported by CISA',
matchedItems: ['KEV-2021-44228'],
},
{
ruleId: 'rule-fix-available',
ruleName: 'Fix Available',
passed: true,
reason: 'Fixed version 2.15.0+ available',
},
],
linksetIds: ['ls-log4shell-001'],
aocChain: [
{
attestationId: 'aoc-obs-ghsa-001',
type: 'observation',
hash: 'sha256:abc123def456...',
timestamp: '2024-11-20T08:05:00Z',
parentHash: undefined,
},
{
attestationId: 'aoc-obs-nvd-001',
type: 'observation',
hash: 'sha256:def789ghi012...',
timestamp: '2024-11-20T08:15:00Z',
parentHash: 'sha256:abc123def456...',
},
{
attestationId: 'aoc-obs-osv-001',
type: 'observation',
hash: 'sha256:ghi345jkl678...',
timestamp: '2024-11-20T08:25:00Z',
parentHash: 'sha256:def789ghi012...',
},
{
attestationId: 'aoc-ls-001',
type: 'linkset',
hash: 'sha256:linkset-hash-001...',
timestamp: '2024-11-20T08:30:00Z',
parentHash: 'sha256:ghi345jkl678...',
},
{
attestationId: 'aoc-policy-001',
type: 'policy',
hash: 'sha256:policy-decision-hash...',
timestamp: '2024-11-20T08:35:00Z',
signer: 'policy-engine-v1',
parentHash: 'sha256:linkset-hash-001...',
},
],
};
@Injectable({ providedIn: 'root' })
export class MockEvidenceApiService implements EvidenceApi {
getEvidenceForAdvisory(advisoryId: string): Observable<EvidenceData> {
// Return observations for the advisory, plus the GHSA alias when CVE-2021-44228 is queried
const observations = MOCK_OBSERVATIONS.filter(
(o) =>
o.advisoryId === advisoryId ||
(advisoryId === 'CVE-2021-44228' && o.advisoryId === 'GHSA-jfh8-c2jp-5v3q')
);
const linkset = MOCK_LINKSET;
const policyEvidence = MOCK_POLICY_EVIDENCE;
const data: EvidenceData = {
advisoryId,
title: observations[0]?.title ?? `Evidence for ${advisoryId}`,
observations,
linkset,
policyEvidence,
hasConflicts: linkset.conflicts.length > 0,
conflictCount: linkset.conflicts.length,
};
return of(data).pipe(delay(300));
}
getObservation(observationId: string): Observable<Observation> {
const observation = MOCK_OBSERVATIONS.find((o) => o.observationId === observationId);
if (!observation) {
throw new Error(`Observation not found: ${observationId}`);
}
return of(observation).pipe(delay(100));
}
getLinkset(linksetId: string): Observable<Linkset> {
if (linksetId === MOCK_LINKSET.linksetId) {
return of(MOCK_LINKSET).pipe(delay(100));
}
throw new Error(`Linkset not found: ${linksetId}`);
}
getPolicyEvidence(advisoryId: string): Observable<PolicyEvidence | null> {
if (advisoryId === 'CVE-2021-44228' || advisoryId === 'GHSA-jfh8-c2jp-5v3q') {
return of(MOCK_POLICY_EVIDENCE).pipe(delay(100));
}
return of(null).pipe(delay(100));
}
downloadRawDocument(type: 'observation' | 'linkset', id: string): Observable<Blob> {
let data: object;
if (type === 'observation') {
data = MOCK_OBSERVATIONS.find((o) => o.observationId === id) ?? {};
} else {
data = MOCK_LINKSET;
}
const json = JSON.stringify(data, null, 2);
const blob = new Blob([json], { type: 'application/json' });
return of(blob).pipe(delay(100));
}
}


@@ -0,0 +1,189 @@
/**
* Link-Not-Merge Evidence Models
* Based on docs/modules/concelier/link-not-merge-schema.md
*/
// Severity from advisory sources
export interface AdvisorySeverity {
readonly system: string; // e.g., 'cvss_v3', 'cvss_v2', 'ghsa'
readonly score: number;
readonly vector?: string;
}
// Affected package information
export interface AffectedPackage {
readonly purl: string;
readonly package?: string;
readonly versions?: readonly string[];
readonly ranges?: readonly VersionRange[];
readonly ecosystem?: string;
readonly cpe?: readonly string[];
}
export interface VersionRange {
readonly type: string;
readonly events: readonly VersionEvent[];
}
export interface VersionEvent {
readonly introduced?: string;
readonly fixed?: string;
readonly last_affected?: string;
}
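These ranges follow OSV-style `introduced`/`fixed`/`last_affected` events. A sketch of evaluating whether a version is affected, with the comparator injected because version ordering is ecosystem-specific; the events are assumed to be sorted in version order, and the naive dotted-numeric comparator below is illustrative only (it does not handle tags like `2.0-beta9`):

```typescript
interface VersionEvent {
  readonly introduced?: string;
  readonly fixed?: string;
  readonly last_affected?: string;
}

// cmp(a, b) < 0 when a precedes b. Walking sorted events: "introduced" opens
// an affected interval, "fixed" closes it exclusively, "last_affected" inclusively.
function isAffected(
  version: string,
  events: readonly VersionEvent[],
  cmp: (a: string, b: string) => number
): boolean {
  let affected = false;
  for (const ev of events) {
    if (ev.introduced !== undefined && cmp(version, ev.introduced) >= 0) {
      affected = true;
    }
    if (ev.fixed !== undefined && cmp(version, ev.fixed) >= 0) {
      affected = false;
    }
    if (ev.last_affected !== undefined && cmp(version, ev.last_affected) > 0) {
      affected = false;
    }
  }
  return affected;
}

// Naive comparator for plain dotted-numeric versions (illustrative only).
const numericCmp = (a: string, b: string): number => {
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] ?? 0) - (pb[i] ?? 0);
    if (d !== 0) {
      return d;
    }
  }
  return 0;
};
```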
// Relationship between advisories
export interface AdvisoryRelationship {
readonly type: string;
readonly source: string;
readonly target: string;
readonly provenance?: string;
}
// Provenance tracking
export interface ObservationProvenance {
readonly sourceArtifactSha: string;
readonly fetchedAt: string;
readonly ingestJobId?: string;
readonly signature?: Record<string, unknown>;
}
// Raw observation from a single source
export interface Observation {
readonly observationId: string;
readonly tenantId: string;
readonly source: string; // e.g., 'ghsa', 'nvd', 'cert-bund'
readonly advisoryId: string;
readonly title?: string;
readonly summary?: string;
readonly severities: readonly AdvisorySeverity[];
readonly affected: readonly AffectedPackage[];
readonly references?: readonly string[];
readonly scopes?: readonly string[];
readonly relationships?: readonly AdvisoryRelationship[];
readonly weaknesses?: readonly string[];
readonly published?: string;
readonly modified?: string;
readonly provenance: ObservationProvenance;
readonly ingestedAt: string;
}
// Conflict when sources disagree
export interface LinksetConflict {
readonly field: string;
readonly reason: string;
readonly values?: readonly string[];
readonly sourceIds?: readonly string[];
}
// Linkset provenance
export interface LinksetProvenance {
readonly observationHashes: readonly string[];
readonly toolVersion?: string;
readonly policyHash?: string;
}
// Normalized linkset aggregating multiple observations
export interface Linkset {
readonly linksetId: string;
readonly tenantId: string;
readonly advisoryId: string;
readonly source: string;
readonly observations: readonly string[]; // observation IDs
readonly normalized?: {
readonly purls?: readonly string[];
readonly versions?: readonly string[];
readonly ranges?: readonly VersionRange[];
readonly severities?: readonly AdvisorySeverity[];
};
readonly confidence?: number; // 0-1
readonly conflicts: readonly LinksetConflict[];
readonly createdAt: string;
readonly builtByJobId?: string;
readonly provenance?: LinksetProvenance;
}
// Policy decision result
export type PolicyDecision = 'pass' | 'warn' | 'block' | 'pending';
// Policy decision with evidence
export interface PolicyEvidence {
readonly policyId: string;
readonly policyName: string;
readonly decision: PolicyDecision;
readonly decidedAt: string;
readonly reason?: string;
readonly rules: readonly PolicyRuleResult[];
readonly linksetIds: readonly string[];
readonly aocChain?: AocChainEntry[];
}
export interface PolicyRuleResult {
readonly ruleId: string;
readonly ruleName: string;
readonly passed: boolean;
readonly reason?: string;
readonly matchedItems?: readonly string[];
}
// AOC (Attestation of Conformance) chain entry
export interface AocChainEntry {
readonly attestationId: string;
readonly type: 'observation' | 'linkset' | 'policy' | 'signature';
readonly hash: string;
readonly timestamp: string;
readonly signer?: string;
readonly parentHash?: string;
}
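Chain entries are linked by `parentHash`. A minimal linkage check, assuming the first entry is the root with no parent; this verifies back-pointers only, not the hashes or signatures themselves, which would require the underlying documents and keys (the interface is copied locally, trimmed to the fields the check uses):

```typescript
interface AocChainEntry {
  readonly attestationId: string;
  readonly hash: string;
  readonly parentHash?: string;
}

// A chain is well-linked when the first entry has no parent and every
// subsequent entry's parentHash equals the previous entry's hash.
function isChainLinked(chain: readonly AocChainEntry[]): boolean {
  return chain.every((entry, i) =>
    i === 0
      ? entry.parentHash === undefined
      : entry.parentHash === chain[i - 1].hash
  );
}
```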
// Evidence panel data combining all elements
export interface EvidenceData {
readonly advisoryId: string;
readonly title?: string;
readonly observations: readonly Observation[];
readonly linkset?: Linkset;
readonly policyEvidence?: PolicyEvidence;
readonly hasConflicts: boolean;
readonly conflictCount: number;
}
// Source metadata for display
export interface SourceInfo {
readonly sourceId: string;
readonly name: string;
readonly icon?: string;
readonly url?: string;
readonly lastUpdated?: string;
}
export const SOURCE_INFO: Record<string, SourceInfo> = {
ghsa: {
sourceId: 'ghsa',
name: 'GitHub Security Advisories',
icon: 'github',
url: 'https://github.com/advisories',
},
nvd: {
sourceId: 'nvd',
name: 'National Vulnerability Database',
icon: 'database',
url: 'https://nvd.nist.gov',
},
'cert-bund': {
sourceId: 'cert-bund',
name: 'CERT-Bund',
icon: 'shield',
url: 'https://www.cert-bund.de',
},
osv: {
sourceId: 'osv',
name: 'Open Source Vulnerabilities',
icon: 'box',
url: 'https://osv.dev',
},
cve: {
sourceId: 'cve',
name: 'CVE Program',
icon: 'alert-triangle',
url: 'https://cve.mitre.org',
},
};
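Observations can arrive from sources not listed in `SOURCE_INFO`; a fallback lookup keeps the UI from rendering blanks for unknown source ids (a sketch with a minimal local copy of the table, and the helper name is an assumption):

```typescript
interface SourceInfo {
  readonly sourceId: string;
  readonly name: string;
  readonly icon?: string;
}

const SOURCE_INFO: Record<string, SourceInfo> = {
  ghsa: { sourceId: 'ghsa', name: 'GitHub Security Advisories', icon: 'github' },
};

// Unknown sources fall back to a minimal entry using the raw id as the name.
function sourceInfoFor(sourceId: string): SourceInfo {
  return SOURCE_INFO[sourceId] ?? { sourceId, name: sourceId };
}
```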


@@ -0,0 +1,373 @@
import { Injectable, InjectionToken } from '@angular/core';
import { Observable, of, delay } from 'rxjs';
import {
Release,
ReleaseArtifact,
PolicyEvaluation,
PolicyGateResult,
DeterminismGateDetails,
RemediationHint,
DeterminismFeatureFlags,
PolicyGateStatus,
} from './release.models';
/**
* Injection token for Release API client.
*/
export const RELEASE_API = new InjectionToken<ReleaseApi>('RELEASE_API');
/**
* Release API interface.
*/
export interface ReleaseApi {
getRelease(releaseId: string): Observable<Release>;
listReleases(): Observable<readonly Release[]>;
publishRelease(releaseId: string): Observable<Release>;
cancelRelease(releaseId: string): Observable<Release>;
getFeatureFlags(): Observable<DeterminismFeatureFlags>;
requestBypass(releaseId: string, reason: string): Observable<{ requestId: string }>;
}
// ============================================================================
// Mock Data Fixtures
// ============================================================================
const determinismPassingGate: PolicyGateResult = {
gateId: 'gate-det-001',
gateType: 'determinism',
name: 'SBOM Determinism',
status: 'passed',
message: 'Merkle root consistent. All fragment attestations verified.',
evaluatedAt: '2025-11-27T10:15:00Z',
blockingPublish: true,
evidence: {
type: 'determinism',
url: '/scans/scan-abc123?tab=determinism',
details: {
merkleRoot: 'sha256:a1b2c3d4e5f6...',
fragmentCount: 8,
verifiedFragments: 8,
},
},
};
const determinismFailingGate: PolicyGateResult = {
gateId: 'gate-det-002',
gateType: 'determinism',
name: 'SBOM Determinism',
status: 'failed',
message: 'Merkle root mismatch. 2 fragment attestations failed verification.',
evaluatedAt: '2025-11-27T09:30:00Z',
blockingPublish: true,
evidence: {
type: 'determinism',
url: '/scans/scan-def456?tab=determinism',
details: {
merkleRoot: 'sha256:f1e2d3c4b5a6...',
expectedMerkleRoot: 'sha256:9a8b7c6d5e4f...',
fragmentCount: 8,
verifiedFragments: 6,
failedFragments: [
'sha256:layer3digest...',
'sha256:layer5digest...',
],
},
},
remediation: {
gateType: 'determinism',
severity: 'critical',
summary: 'The SBOM composition cannot be independently verified. Fragment attestations for layers 3 and 5 failed DSSE signature verification.',
steps: [
{
action: 'rebuild',
title: 'Rebuild with deterministic toolchain',
description: 'Rebuild the image using Stella Scanner with --deterministic flag to ensure consistent fragment hashes.',
command: 'stella scan --deterministic --sign --push',
documentationUrl: 'https://docs.stellaops.io/scanner/determinism',
automated: false,
},
{
action: 'provide-provenance',
title: 'Provide provenance attestation',
description: 'Ensure build provenance (SLSA Level 2+) is attached to the image manifest.',
documentationUrl: 'https://docs.stellaops.io/provenance',
automated: false,
},
{
action: 'sign-artifact',
title: 'Re-sign with valid key',
description: 'Sign the SBOM fragments with a valid DSSE key registered in your tenant.',
command: 'stella sign --artifact sha256:...',
automated: true,
},
{
action: 'request-exception',
title: 'Request policy exception',
description: 'If this is a known issue with a compensating control, request a time-boxed exception.',
automated: true,
},
],
estimatedEffort: '15-30 minutes',
exceptionAllowed: true,
},
};
const vulnerabilityPassingGate: PolicyGateResult = {
gateId: 'gate-vuln-001',
gateType: 'vulnerability',
name: 'Vulnerability Scan',
status: 'passed',
message: 'No critical or high vulnerabilities. 3 medium, 12 low.',
evaluatedAt: '2025-11-27T10:15:00Z',
blockingPublish: false,
};
const entropyWarningGate: PolicyGateResult = {
gateId: 'gate-ent-001',
gateType: 'entropy',
name: 'Entropy Analysis',
status: 'warning',
message: 'Image opaque ratio 12% (warn threshold: 10%). Consider providing provenance.',
evaluatedAt: '2025-11-27T10:15:00Z',
blockingPublish: false,
remediation: {
gateType: 'entropy',
severity: 'medium',
summary: 'High entropy detected in some layers. This may indicate packed/encrypted content.',
steps: [
{
action: 'provide-provenance',
title: 'Provide source provenance',
description: 'Attach build provenance or source mappings for high-entropy binaries.',
automated: false,
},
],
estimatedEffort: '10 minutes',
exceptionAllowed: true,
},
};
const licensePassingGate: PolicyGateResult = {
gateId: 'gate-lic-001',
gateType: 'license',
name: 'License Compliance',
status: 'passed',
message: 'All licenses approved. 45 MIT, 12 Apache-2.0, 3 BSD-3-Clause.',
evaluatedAt: '2025-11-27T10:15:00Z',
blockingPublish: false,
};
const signaturePassingGate: PolicyGateResult = {
gateId: 'gate-sig-001',
gateType: 'signature',
name: 'Signature Verification',
status: 'passed',
message: 'Image signature verified against tenant keyring.',
evaluatedAt: '2025-11-27T10:15:00Z',
blockingPublish: true,
};
const signatureFailingGate: PolicyGateResult = {
gateId: 'gate-sig-002',
gateType: 'signature',
name: 'Signature Verification',
status: 'failed',
message: 'No valid signature found. Image must be signed before release.',
evaluatedAt: '2025-11-27T09:30:00Z',
blockingPublish: true,
remediation: {
gateType: 'signature',
severity: 'critical',
summary: 'The image is not signed or the signature cannot be verified.',
steps: [
{
action: 'sign-artifact',
title: 'Sign the image',
description: 'Sign the image using your tenant signing key.',
command: 'cosign sign --key cosign.key myregistry/myimage:v1.2.3',
automated: true,
},
],
estimatedEffort: '2 minutes',
exceptionAllowed: false,
},
};
// Artifacts with policy evaluations
const passingArtifact: ReleaseArtifact = {
artifactId: 'art-001',
name: 'api-service',
tag: 'v1.2.3',
digest: 'sha256:abc123def456789012345678901234567890abcdef',
size: 245_000_000,
createdAt: '2025-11-27T08:00:00Z',
registry: 'registry.stellaops.io/prod',
policyEvaluation: {
evaluationId: 'eval-001',
artifactDigest: 'sha256:abc123def456789012345678901234567890abcdef',
evaluatedAt: '2025-11-27T10:15:00Z',
overallStatus: 'passed',
gates: [
determinismPassingGate,
vulnerabilityPassingGate,
entropyWarningGate,
licensePassingGate,
signaturePassingGate,
],
blockingGates: [],
canPublish: true,
determinismDetails: {
merkleRoot: 'sha256:a1b2c3d4e5f67890abcdef1234567890fedcba0987654321',
merkleRootConsistent: true,
contentHash: 'sha256:content1234567890abcdef',
compositionManifestUri: 'oci://registry.stellaops.io/prod/api-service@sha256:abc123/_composition.json',
fragmentCount: 8,
verifiedFragments: 8,
},
},
};
const failingArtifact: ReleaseArtifact = {
artifactId: 'art-002',
name: 'worker-service',
tag: 'v1.2.3',
digest: 'sha256:def456abc789012345678901234567890fedcba98',
size: 312_000_000,
createdAt: '2025-11-27T07:45:00Z',
registry: 'registry.stellaops.io/prod',
policyEvaluation: {
evaluationId: 'eval-002',
artifactDigest: 'sha256:def456abc789012345678901234567890fedcba98',
evaluatedAt: '2025-11-27T09:30:00Z',
overallStatus: 'failed',
gates: [
determinismFailingGate,
vulnerabilityPassingGate,
licensePassingGate,
signatureFailingGate,
],
blockingGates: ['gate-det-002', 'gate-sig-002'],
canPublish: false,
determinismDetails: {
merkleRoot: 'sha256:f1e2d3c4b5a67890',
merkleRootConsistent: false,
contentHash: 'sha256:content9876543210',
compositionManifestUri: 'oci://registry.stellaops.io/prod/worker-service@sha256:def456/_composition.json',
fragmentCount: 8,
verifiedFragments: 6,
failedFragments: ['sha256:layer3digest...', 'sha256:layer5digest...'],
},
},
};
// Release fixtures
const passingRelease: Release = {
releaseId: 'rel-001',
name: 'Platform v1.2.3',
version: '1.2.3',
status: 'pending_approval',
createdAt: '2025-11-27T08:30:00Z',
createdBy: 'deploy-bot',
artifacts: [passingArtifact],
targetEnvironment: 'production',
notes: 'Feature release with API improvements and bug fixes.',
approvals: [
{
approvalId: 'apr-001',
approver: 'security-team',
decision: 'approved',
comment: 'Security review passed.',
decidedAt: '2025-11-27T09:00:00Z',
},
{
approvalId: 'apr-002',
approver: 'release-manager',
decision: 'pending',
},
],
};
const blockedRelease: Release = {
releaseId: 'rel-002',
name: 'Platform v1.2.4-rc1',
version: '1.2.4-rc1',
status: 'blocked',
createdAt: '2025-11-27T07:00:00Z',
createdBy: 'deploy-bot',
artifacts: [failingArtifact],
targetEnvironment: 'staging',
notes: 'Release candidate blocked due to policy gate failures.',
};
const mixedRelease: Release = {
releaseId: 'rel-003',
name: 'Platform v1.2.5',
version: '1.2.5',
status: 'blocked',
createdAt: '2025-11-27T06:00:00Z',
createdBy: 'ci-pipeline',
artifacts: [passingArtifact, failingArtifact],
targetEnvironment: 'production',
notes: 'Multi-artifact release with mixed policy results.',
};
const mockReleases: readonly Release[] = [passingRelease, blockedRelease, mixedRelease];
const mockFeatureFlags: DeterminismFeatureFlags = {
enabled: true,
blockOnFailure: true,
warnOnly: false,
bypassRoles: ['security-admin', 'release-manager'],
requireApprovalForBypass: true,
};
// ============================================================================
// Mock API Implementation
// ============================================================================
@Injectable({ providedIn: 'root' })
export class MockReleaseApi implements ReleaseApi {
getRelease(releaseId: string): Observable<Release> {
const release = mockReleases.find((r) => r.releaseId === releaseId);
if (!release) {
throw new Error(`Release not found: ${releaseId}`);
}
return of(release).pipe(delay(200));
}
listReleases(): Observable<readonly Release[]> {
return of(mockReleases).pipe(delay(300));
}
publishRelease(releaseId: string): Observable<Release> {
const release = mockReleases.find((r) => r.releaseId === releaseId);
if (!release) {
throw new Error(`Release not found: ${releaseId}`);
}
// Simulate publish (would update status in real implementation)
return of({
...release,
status: 'published',
publishedAt: new Date().toISOString(),
} as Release).pipe(delay(500));
}
cancelRelease(releaseId: string): Observable<Release> {
const release = mockReleases.find((r) => r.releaseId === releaseId);
if (!release) {
throw new Error(`Release not found: ${releaseId}`);
}
return of({
...release,
status: 'cancelled',
} as Release).pipe(delay(300));
}
getFeatureFlags(): Observable<DeterminismFeatureFlags> {
return of(mockFeatureFlags).pipe(delay(100));
}
requestBypass(releaseId: string, reason: string): Observable<{ requestId: string }> {
return of({ requestId: `bypass-${Date.now()}` }).pipe(delay(400));
}
}


@@ -0,0 +1,161 @@
/**
* Release and Policy Gate models for UI-POLICY-DET-01.
* Supports determinism-gated release flows with remediation hints.
*/
// Policy gate evaluation status
export type PolicyGateStatus = 'passed' | 'failed' | 'pending' | 'skipped' | 'warning';
// Types of policy gates
export type PolicyGateType =
| 'determinism'
| 'vulnerability'
| 'license'
| 'entropy'
| 'signature'
| 'sbom-completeness'
| 'custom';
// Remediation action types
export type RemediationActionType =
| 'rebuild'
| 'provide-provenance'
| 'sign-artifact'
| 'update-dependency'
| 'request-exception'
| 'manual-review';
/**
* A single remediation step with optional automation support.
*/
export interface RemediationStep {
readonly action: RemediationActionType;
readonly title: string;
readonly description: string;
readonly command?: string; // Optional CLI command to run
readonly documentationUrl?: string;
readonly automated: boolean; // Can be triggered from UI
}
/**
* Remediation hints for a failed policy gate.
*/
export interface RemediationHint {
readonly gateType: PolicyGateType;
readonly severity: 'critical' | 'high' | 'medium' | 'low';
readonly summary: string;
readonly steps: readonly RemediationStep[];
readonly estimatedEffort?: string; // e.g., "5 minutes", "1 hour"
readonly exceptionAllowed: boolean;
}
/**
* Individual policy gate evaluation result.
*/
export interface PolicyGateResult {
readonly gateId: string;
readonly gateType: PolicyGateType;
readonly name: string;
readonly status: PolicyGateStatus;
readonly message: string;
readonly evaluatedAt: string;
readonly blockingPublish: boolean;
readonly evidence?: {
readonly type: string;
readonly url?: string;
readonly details?: Record<string, unknown>;
};
readonly remediation?: RemediationHint;
}
/**
* Determinism-specific gate details.
*/
export interface DeterminismGateDetails {
readonly merkleRoot?: string;
readonly merkleRootConsistent: boolean;
readonly contentHash?: string;
readonly compositionManifestUri?: string;
readonly fragmentCount?: number;
readonly verifiedFragments?: number;
readonly failedFragments?: readonly string[]; // Layer digests that failed
}
/**
* Overall policy evaluation for a release artifact.
*/
export interface PolicyEvaluation {
readonly evaluationId: string;
readonly artifactDigest: string;
readonly evaluatedAt: string;
readonly overallStatus: PolicyGateStatus;
readonly gates: readonly PolicyGateResult[];
readonly blockingGates: readonly string[]; // Gate IDs that block publish
readonly canPublish: boolean;
readonly determinismDetails?: DeterminismGateDetails;
}
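The `blockingGates` and `canPublish` fields above are derivable from the gate list. A minimal sketch of that derivation, assuming a gate blocks publish when it is marked blocking and has either failed or not yet resolved (the helper name `summarizeGates` and the narrowed `GateLike` shape are illustrative, not part of the API):

```typescript
// Hypothetical helper: derives the publish-decision fields of PolicyEvaluation
// from individual gate results. GateLike narrows PolicyGateResult to the
// fields the decision needs.
type GateStatus = 'passed' | 'failed' | 'pending' | 'skipped' | 'warning';

interface GateLike {
  readonly gateId: string;
  readonly status: GateStatus;
  readonly blockingPublish: boolean;
}

function summarizeGates(gates: readonly GateLike[]): {
  blockingGates: readonly string[];
  canPublish: boolean;
} {
  // A gate blocks publish only when it is marked blocking AND has failed;
  // pending blocking gates also hold publish until they resolve.
  const blockingGates = gates
    .filter((g) => g.blockingPublish && (g.status === 'failed' || g.status === 'pending'))
    .map((g) => g.gateId);
  return { blockingGates, canPublish: blockingGates.length === 0 };
}
```

Note that under this reading a `warning` status never blocks, which matches the separate `warnOnly` feature flag semantics.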
/**
* Release artifact with policy evaluation.
*/
export interface ReleaseArtifact {
readonly artifactId: string;
readonly name: string;
readonly tag: string;
readonly digest: string;
readonly size: number;
readonly createdAt: string;
readonly registry: string;
readonly policyEvaluation?: PolicyEvaluation;
}
/**
* Release workflow status.
*/
export type ReleaseStatus =
| 'draft'
| 'pending_approval'
| 'approved'
| 'publishing'
| 'published'
| 'blocked'
| 'cancelled';
/**
* Release with multiple artifacts and policy gates.
*/
export interface Release {
readonly releaseId: string;
readonly name: string;
readonly version: string;
readonly status: ReleaseStatus;
readonly createdAt: string;
readonly createdBy: string;
readonly artifacts: readonly ReleaseArtifact[];
readonly targetEnvironment: string;
readonly notes?: string;
readonly approvals?: readonly ReleaseApproval[];
readonly publishedAt?: string;
}
/**
* Release approval record.
*/
export interface ReleaseApproval {
readonly approvalId: string;
readonly approver: string;
readonly decision: 'approved' | 'rejected' | 'pending';
readonly comment?: string;
readonly decidedAt?: string;
}
/**
* Feature flag configuration for determinism blocking.
*/
export interface DeterminismFeatureFlags {
readonly enabled: boolean;
readonly blockOnFailure: boolean;
readonly warnOnly: boolean;
readonly bypassRoles?: readonly string[];
readonly requireApprovalForBypass: boolean;
}
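How these flags combine into a publish decision is not spelled out in the diff; a plausible sketch, assuming `warnOnly` downgrades a determinism failure to a non-blocking warning and `bypassRoles` lets privileged users proceed (the helper `determinismBlocksPublish` is hypothetical, and the approval flow behind `requireApprovalForBypass` is not modeled here):

```typescript
// Hypothetical decision helper combining DeterminismFeatureFlags with a
// failed determinism gate. Precedence assumed: disabled < warnOnly <
// blockOnFailure < role-based bypass.
interface FlagsLike {
  readonly enabled: boolean;
  readonly blockOnFailure: boolean;
  readonly warnOnly: boolean;
  readonly bypassRoles?: readonly string[];
}

function determinismBlocksPublish(
  flags: FlagsLike,
  gateFailed: boolean,
  userRoles: readonly string[] = [],
): boolean {
  if (!flags.enabled || !gateFailed) return false; // feature off, or gate passed
  if (flags.warnOnly) return false;                // surfaced as a warning only
  if (!flags.blockOnFailure) return false;         // reporting mode
  // A user holding any bypass role may proceed (approval flow not modeled).
  const canBypass = (flags.bypassRoles ?? []).some((r) => userRoles.includes(r));
  return !canBypass;
}
```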


@@ -9,9 +9,94 @@ export interface ScanAttestationStatus {
readonly statusMessage?: string;
}
// Determinism models based on docs/modules/scanner/deterministic-sbom-compose.md
export type DeterminismStatus = 'verified' | 'pending' | 'failed' | 'unknown';
export interface FragmentAttestation {
readonly layerDigest: string;
readonly fragmentSha256: string;
readonly dsseEnvelopeSha256: string;
readonly dsseStatus: 'verified' | 'pending' | 'failed';
readonly verifiedAt?: string;
}
export interface CompositionManifest {
readonly compositionUri: string;
readonly merkleRoot: string;
readonly fragmentCount: number;
readonly fragments: readonly FragmentAttestation[];
readonly createdAt: string;
}
export interface DeterminismEvidence {
readonly status: DeterminismStatus;
readonly merkleRoot?: string;
readonly merkleRootConsistent: boolean;
readonly compositionManifest?: CompositionManifest;
readonly contentHash?: string;
readonly verifiedAt?: string;
readonly failureReason?: string;
readonly stellaProperties?: {
readonly 'stellaops:stella.contentHash'?: string;
readonly 'stellaops:composition.manifest'?: string;
readonly 'stellaops:merkle.root'?: string;
};
}
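A UI could re-check `merkleRootConsistent` by recomputing the root from the per-fragment hashes in `CompositionManifest`. A sketch of such a recomputation, with the caveat that the pairing convention used here (hex-string concatenation, duplicating the last node at odd levels) is an illustrative assumption rather than the documented scanner algorithm from `deterministic-sbom-compose.md`:

```typescript
import { createHash } from 'node:crypto';

// Sketch only: recompute a Merkle root from fragment sha256 values so it can
// be compared against DeterminismEvidence.merkleRoot. The tree-building
// convention is an assumption for illustration.
function sha256Hex(data: string): string {
  return createHash('sha256').update(data).digest('hex');
}

function computeMerkleRoot(leafHashes: readonly string[]): string {
  if (leafHashes.length === 0) return sha256Hex('');
  let level = [...leafHashes];
  while (level.length > 1) {
    const next: string[] = [];
    for (let i = 0; i < level.length; i += 2) {
      const left = level[i];
      const right = level[i + 1] ?? left; // duplicate last node on odd count
      next.push(sha256Hex(left + right));
    }
    level = next;
  }
  return level[0];
}
```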
// Entropy analysis models based on docs/modules/scanner/entropy.md
export interface EntropyWindow {
readonly offset: number;
readonly length: number;
readonly entropy: number; // 0-8 bits/byte
}
export interface EntropyFile {
readonly path: string;
readonly size: number;
readonly opaqueBytes: number;
readonly opaqueRatio: number; // 0-1
readonly flags: readonly string[]; // e.g., 'stripped', 'section:.UPX0', 'no-symbols', 'packed'
readonly windows: readonly EntropyWindow[];
}
export interface EntropyLayerSummary {
readonly digest: string;
readonly opaqueBytes: number;
readonly totalBytes: number;
readonly opaqueRatio: number; // 0-1
readonly indicators: readonly string[]; // e.g., 'packed', 'no-symbols'
}
export interface EntropyReport {
readonly schema: string;
readonly generatedAt: string;
readonly imageDigest: string;
readonly layerDigest?: string;
readonly files: readonly EntropyFile[];
}
export interface EntropyLayerSummaryReport {
readonly schema: string;
readonly generatedAt: string;
readonly imageDigest: string;
readonly layers: readonly EntropyLayerSummary[];
readonly imageOpaqueRatio: number; // 0-1
readonly entropyPenalty: number; // 0-0.3
}
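The image-level fields can be aggregated from the per-layer summaries. A minimal sketch, assuming a byte-weighted opaque ratio and a linear penalty mapping capped at the documented 0.3 ceiling (the linear `ratio * 0.3` mapping is an illustrative assumption, not the scoring formula from `entropy.md`):

```typescript
// Hypothetical aggregation of EntropyLayerSummary values into the
// image-level imageOpaqueRatio and entropyPenalty fields.
interface LayerLike {
  readonly opaqueBytes: number;
  readonly totalBytes: number;
}

function aggregateEntropy(layers: readonly LayerLike[]): {
  imageOpaqueRatio: number;
  entropyPenalty: number;
} {
  const total = layers.reduce((sum, l) => sum + l.totalBytes, 0);
  const opaque = layers.reduce((sum, l) => sum + l.opaqueBytes, 0);
  const imageOpaqueRatio = total === 0 ? 0 : opaque / total;
  // Assumed mapping: penalty grows linearly with opacity, capped at 0.3.
  const entropyPenalty = Math.min(0.3, imageOpaqueRatio * 0.3);
  return { imageOpaqueRatio, entropyPenalty };
}
```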
export interface EntropyEvidence {
readonly report?: EntropyReport;
readonly layerSummary?: EntropyLayerSummaryReport;
readonly downloadUrl?: string; // URL to entropy.report.json
}
export interface ScanDetail {
readonly scanId: string;
readonly imageDigest: string;
readonly completedAt: string;
readonly attestation?: ScanAttestationStatus;
readonly determinism?: DeterminismEvidence;
readonly entropy?: EntropyEvidence;
}


@@ -0,0 +1,125 @@
import { Injectable, InjectionToken, signal, computed } from '@angular/core';
import {
StellaOpsScopes,
StellaOpsScope,
ScopeGroups,
hasScope,
hasAllScopes,
hasAnyScope,
} from './scopes';
/**
* User info from authentication.
*/
export interface AuthUser {
readonly id: string;
readonly email: string;
readonly name: string;
readonly tenantId: string;
readonly tenantName: string;
readonly roles: readonly string[];
readonly scopes: readonly StellaOpsScope[];
}
/**
* Injection token for Auth service.
*/
export const AUTH_SERVICE = new InjectionToken<AuthService>('AUTH_SERVICE');
/**
* Auth service interface.
*/
export interface AuthService {
readonly isAuthenticated: ReturnType<typeof signal<boolean>>;
readonly user: ReturnType<typeof signal<AuthUser | null>>;
readonly scopes: ReturnType<typeof computed<readonly StellaOpsScope[]>>;
hasScope(scope: StellaOpsScope): boolean;
hasAllScopes(scopes: readonly StellaOpsScope[]): boolean;
hasAnyScope(scopes: readonly StellaOpsScope[]): boolean;
canViewGraph(): boolean;
canEditGraph(): boolean;
canExportGraph(): boolean;
canSimulate(): boolean;
}
// ============================================================================
// Mock Auth Service
// ============================================================================
const MOCK_USER: AuthUser = {
id: 'user-001',
email: 'developer@example.com',
name: 'Developer User',
tenantId: 'tenant-001',
tenantName: 'Acme Corp',
roles: ['developer', 'security-analyst'],
scopes: [
// Graph permissions
StellaOpsScopes.GRAPH_READ,
StellaOpsScopes.GRAPH_WRITE,
StellaOpsScopes.GRAPH_SIMULATE,
StellaOpsScopes.GRAPH_EXPORT,
// SBOM permissions
StellaOpsScopes.SBOM_READ,
// Policy permissions
StellaOpsScopes.POLICY_READ,
StellaOpsScopes.POLICY_EVALUATE,
StellaOpsScopes.POLICY_SIMULATE,
// Scanner permissions
StellaOpsScopes.SCANNER_READ,
// Exception permissions
StellaOpsScopes.EXCEPTION_READ,
StellaOpsScopes.EXCEPTION_WRITE,
// Release permissions
StellaOpsScopes.RELEASE_READ,
// AOC permissions
StellaOpsScopes.AOC_READ,
],
};
@Injectable({ providedIn: 'root' })
export class MockAuthService implements AuthService {
readonly isAuthenticated = signal(true);
readonly user = signal<AuthUser | null>(MOCK_USER);
readonly scopes = computed(() => {
const u = this.user();
return u?.scopes ?? [];
});
hasScope(scope: StellaOpsScope): boolean {
return hasScope(this.scopes(), scope);
}
hasAllScopes(scopes: readonly StellaOpsScope[]): boolean {
return hasAllScopes(this.scopes(), scopes);
}
hasAnyScope(scopes: readonly StellaOpsScope[]): boolean {
return hasAnyScope(this.scopes(), scopes);
}
canViewGraph(): boolean {
return this.hasScope(StellaOpsScopes.GRAPH_READ);
}
canEditGraph(): boolean {
return this.hasScope(StellaOpsScopes.GRAPH_WRITE);
}
canExportGraph(): boolean {
return this.hasScope(StellaOpsScopes.GRAPH_EXPORT);
}
canSimulate(): boolean {
return this.hasAnyScope([
StellaOpsScopes.GRAPH_SIMULATE,
StellaOpsScopes.POLICY_SIMULATE,
]);
}
}
// Re-export scopes for convenience
export { StellaOpsScopes, ScopeGroups } from './scopes';
export type { StellaOpsScope } from './scopes';
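The `hasScope` / `hasAllScopes` / `hasAnyScope` helpers imported from `./scopes` are not shown in this diff. A minimal sketch of their likely shape, assuming scopes are plain string literals (the actual `./scopes` module may differ):

```typescript
// Sketch of the './scopes' helper trio referenced by AuthService.
// Scopes are assumed to be plain strings for illustration.
type Scope = string;

function hasScope(granted: readonly Scope[], scope: Scope): boolean {
  return granted.includes(scope);
}

function hasAllScopes(granted: readonly Scope[], required: readonly Scope[]): boolean {
  return required.every((s) => granted.includes(s));
}

function hasAnyScope(granted: readonly Scope[], required: readonly Scope[]): boolean {
  return required.some((s) => granted.includes(s));
}
```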


@@ -0,0 +1,16 @@
export {
  StellaOpsScopes,
  ScopeGroups,
  ScopeLabels,
  hasScope,
  hasAllScopes,
  hasAnyScope,
} from './scopes';
export type { StellaOpsScope } from './scopes';
export { AUTH_SERVICE, MockAuthService } from './auth.service';
export type { AuthUser, AuthService } from './auth.service';
