Restructure solution layout by module

Some checks failed
Docs CI / lint-and-preview (push) Has been cancelled

@@ -1,92 +1,92 @@
# DevOps Release Automation

The **release** workflow builds and signs the StellaOps service containers,
generates SBOM + provenance attestations, and emits a canonical
`release.yaml`. The logic lives under `ops/devops/release/` and is invoked
by the new `.gitea/workflows/release.yml` pipeline.

## Local dry run

```bash
./ops/devops/release/build_release.py \
  --version 2025.10.0-edge \
  --channel edge \
  --dry-run
```

Outputs land under `out/release/`. Use `--no-push` to run full builds without
pushing to the registry.

After the build completes, run the verifier to validate recorded hashes and artefact
presence:

```bash
python ops/devops/release/verify_release.py --release-dir out/release
```

## Python analyzer smoke & signing

`dotnet run --project tools/LanguageAnalyzerSmoke` exercises the Python language
analyzer plug-in against the golden fixtures (cold/warm timings, determinism). The
release workflow runs this harness automatically and then produces Cosign
signatures + SHA-256 sidecars for `StellaOps.Scanner.Analyzers.Lang.Python.dll`
and its `manifest.json`. Keep `COSIGN_KEY_REF`/`COSIGN_IDENTITY_TOKEN` populated so
the step can sign the artefacts; the generated `.sig`/`.sha256` files ship with the
Offline Kit bundle.
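The sidecar outputs can be reproduced by hand. A minimal sketch, assuming `cosign` is on `PATH` and `COSIGN_KEY_REF` points at usable key material; the artefact below is a placeholder stand-in, not the real DLL:

```shell
set -euo pipefail

# Stand-in for the built analyzer artefact (illustrative only).
artefact="StellaOps.Scanner.Analyzers.Lang.Python.dll"
printf 'placeholder plug-in payload' > "$artefact"

# SHA-256 sidecar, mirroring the .sha256 files shipped in the Offline Kit.
sha256sum "$artefact" > "$artefact.sha256"
sha256sum --check "$artefact.sha256"

# Cosign signature sidecar; skipped when no key material is available.
if command -v cosign >/dev/null 2>&1 && [ -n "${COSIGN_KEY_REF:-}" ]; then
  cosign sign-blob --yes --key "$COSIGN_KEY_REF" \
    --output-signature "$artefact.sig" "$artefact"
fi
```

The `.sha256` file can be re-verified at any time with `sha256sum --check`, which is how downstream Offline Kit consumers confirm the artefact was not corrupted in transfer.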

## Required tooling

- Docker 25+ with Buildx
- .NET 10 preview SDK (builds container stages and the SBOM generator)
- Node.js 20 (Angular UI build)
- Helm 3.16+
- Cosign 2.2+

Supply signing material via environment variables:

- `COSIGN_KEY_REF` – e.g. `file:./keys/cosign.key` or `azurekms://…`
- `COSIGN_PASSWORD` – password protecting the above key
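For a local run, both variables can be exported inline before invoking `build_release.py`; the key path and password below are placeholders, not real secrets:

```shell
# Placeholder values - substitute your own key reference and password.
export COSIGN_KEY_REF="file:./keys/cosign.key"
export COSIGN_PASSWORD="change-me"

# Fail fast if either variable is missing before kicking off a release build.
: "${COSIGN_KEY_REF:?must be set}" "${COSIGN_PASSWORD:?must be set}"
```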

The workflow defaults to multi-arch (`linux/amd64,linux/arm64`), SBOM in
CycloneDX, and SLSA provenance (`https://slsa.dev/provenance/v1`).

## Debug store extraction

`build_release.py` now exports stripped debug artefacts for every ELF discovered in the published images. The files land under `out/release/debug/.build-id/<aa>/<rest>.debug`, with metadata captured in `debug/debug-manifest.json` (and a `.sha256` sidecar). Use `jq` to inspect the manifest or `readelf -n` to spot-check a build-id. Offline Kit packaging should reuse the `debug/` directory as-is.
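The checks above can be sketched as follows. This fabricates a tiny stand-in store so the commands are runnable anywhere (in a real checkout, `build_release.py` produces the tree); the manifest's field layout is not documented here, so we only pretty-print it:

```shell
set -euo pipefail
debug_dir=out/release/debug

# Stand-in store; a real release populates this via build_release.py.
mkdir -p "$debug_dir/.build-id/ab"
printf '{"artifacts": []}\n' > "$debug_dir/debug-manifest.json"
: > "$debug_dir/.build-id/ab/cdef0123.debug"
(cd "$debug_dir" && sha256sum debug-manifest.json > debug-manifest.json.sha256)

# Verify the checksum sidecar and inspect the manifest.
(cd "$debug_dir" && sha256sum --check debug-manifest.json.sha256)
if command -v jq >/dev/null 2>&1; then
  jq . "$debug_dir/debug-manifest.json"
fi

# Enumerate stored debug files by build-id path.
find "$debug_dir/.build-id" -name '*.debug'
# On a real store, spot-check an entry with:
#   readelf -n <file>.debug | grep -i 'build id'
```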

## UI auth smoke (Playwright)

As part of **DEVOPS-UI-13-006** the pipelines will execute the UI auth smoke
tests (`npm run test:e2e`) after building the Angular bundle. See
`docs/ops/ui-auth-smoke.md` for the job design, environment stubs, and
offline runner considerations.

## NuGet preview bootstrap

.NET 10 preview packages (Microsoft.Extensions.*, JwtBearer 10.0 RC, Sqlite 9 RC)
ship from the public `dotnet-public` Azure DevOps feed. We mirror them into
`./local-nuget` so restores succeed inside the Offline Kit.

1. Run `./ops/devops/sync-preview-nuget.sh` whenever you update the manifest.
2. The script now understands the optional `SourceBase` column (V3 flat container)
   and writes packages alongside their SHA-256 checksums.
3. `NuGet.config` registers the mirror (`local`), dotnet-public, and nuget.org.
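The checksum sidecars written in step 2 can be re-verified after a sync. A sketch assuming `<package>.sha256` naming next to each `.nupkg`; the package name below is fabricated for illustration:

```shell
set -euo pipefail
mkdir -p local-nuget

# Stand-in for a mirrored package; sync-preview-nuget.sh normally writes these.
printf 'placeholder nupkg bytes' > local-nuget/Example.Preview.10.0.0-preview.nupkg
(cd local-nuget && sha256sum Example.Preview.10.0.0-preview.nupkg \
  > Example.Preview.10.0.0-preview.nupkg.sha256)

# Verification pass: every mirrored package must match its sidecar.
(cd local-nuget && for sidecar in *.sha256; do sha256sum --check "$sidecar"; done)
```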

Use `python3 ops/devops/validate_restore_sources.py` to prove the repo still
prefers the local mirror and that `Directory.Build.props` enforces the same order.
The validator now runs automatically in the `build-test-deploy` and `release`
workflows, so CI fails fast when a feed-priority regression slips in.

Detailed operator instructions live in `docs/ops/nuget-preview-bootstrap.md`.

## Telemetry collector tooling (DEVOPS-OBS-50-001)

- `ops/devops/telemetry/generate_dev_tls.sh` – generates a development CA and
  client/server certificates for the OpenTelemetry collector overlay (mutual TLS).
- `ops/devops/telemetry/smoke_otel_collector.py` – sends OTLP traces/metrics/logs
  over TLS and validates that the collector increments its receiver counters.
- `ops/devops/telemetry/package_offline_bundle.py` – re-packages collector assets for the Offline Kit.
- `deploy/compose/docker-compose.telemetry-storage.yaml` – Prometheus/Tempo/Loki stack for staging validation.

Combine these helpers with `deploy/compose/docker-compose.telemetry.yaml` to run
a secured collector locally before rolling out the Helm-based deployment.
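Compose merges the two files when both are passed with repeated `-f` flags. The sketch below only assembles and prints the command, executing it solely when Docker and the repo files are actually present (service names and ports are not assumed):

```shell
set -euo pipefail

# Note the trailing space before the continuation backslash: inside double
# quotes, backslash-newline is removed, joining the flags on one line.
compose_files="-f deploy/compose/docker-compose.telemetry.yaml \
-f deploy/compose/docker-compose.telemetry-storage.yaml"
printf 'docker compose %s up -d\n' "$compose_files" | tee compose-command.txt

# Run only in a real checkout with Docker available.
if command -v docker >/dev/null 2>&1 \
   && [ -f deploy/compose/docker-compose.telemetry.yaml ] \
   && [ -f deploy/compose/docker-compose.telemetry-storage.yaml ]; then
  # Intentionally unquoted so the repeated -f flags split into arguments.
  docker compose $compose_files up -d
fi
```

Tear down with the same `-f` pair and `down` so both stacks are removed together.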

@@ -1,172 +1,172 @@
# DevOps Task Board

## Governance & Rules

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-RULES-33-001 | DOING (2025-10-26) | DevOps Guild, Platform Leads | — | Contracts & Rules anchor:<br>• Gateway proxies only; Policy Engine composes overlays/simulations.<br>• AOC ingestion cannot merge; only lossless canonicalization.<br>• One graph platform: Graph Indexer + Graph API. Cartographer retired. | Rules posted in SPRINTS/TASKS; duplicates cleaned per guidance; reviewers acknowledge in changelog. |

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-HELM-09-001 | DONE | DevOps Guild | SCANNER-WEB-09-101 | Create Helm/Compose environment profiles (dev, staging, airgap) with deterministic digests. | Profiles committed under `deploy/`; docs updated; CI smoke deploy passes. |
| DEVOPS-SCANNER-09-204 | DONE (2025-10-21) | DevOps Guild, Scanner WebService Guild | SCANNER-EVENTS-15-201 | Surface `SCANNER__EVENTS__*` environment variables across docker-compose (dev/stage/airgap) and Helm values, defaulting to share the Redis queue DSN. | Compose/Helm configs ship enabled Redis event publishing with documented overrides; lint jobs updated; docs cross-link to new knobs. |
| DEVOPS-SCANNER-09-205 | DONE (2025-10-21) | DevOps Guild, Notify Guild | DEVOPS-SCANNER-09-204 | Add Notify smoke stage that tails the Redis stream and asserts `scanner.report.ready`/`scanner.scan.completed` reach Notify WebService in staging. | CI job reads Redis stream during scanner smoke deploy, confirms Notify ingestion via API, alerts on failure. |
| DEVOPS-PERF-10-001 | DONE | DevOps Guild | BENCH-SCANNER-10-001 | Add perf smoke job (SBOM compose <5 s target) to CI. | CI job runs sample build verifying <5 s; alerts configured. |
| DEVOPS-PERF-10-002 | DONE (2025-10-23) | DevOps Guild | BENCH-SCANNER-10-002 | Publish analyzer bench metrics to Grafana/perf workbook and alarm on ≥20 % regressions. | CI exports JSON for dashboards; Grafana panel wired; Ops on-call doc updated with alert hook. |
| DEVOPS-AOC-19-001 | BLOCKED (2025-10-26) | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. |
> Docs hand-off (2025-10-26): see `docs/ingestion/aggregation-only-contract.md` §5, `docs/architecture/overview.md`, and `docs/cli/cli-reference.md` for guard + verifier expectations.
| DEVOPS-AOC-19-002 | BLOCKED (2025-10-26) | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. |
> Blocked: waiting on CLI verifier command and Concelier/Excititor guard endpoints to land (CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004).
| DEVOPS-AOC-19-003 | BLOCKED (2025-10-26) | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. |
> Blocked: guard coverage suites and exporter hooks pending in Concelier/Excititor (CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003).
| DEVOPS-AOC-19-101 | TODO (2025-10-28) | DevOps Guild, Concelier Storage Guild | CONCELIER-STORE-AOC-19-002 | Draft supersedes backfill rollout (freeze window, dry-run steps, rollback) once advisory_raw idempotency index passes staging verification. | Runbook committed in `docs/deploy/containers.md` + Offline Kit notes, staging rehearsal scheduled with dependencies captured in SPRINTS. |
| DEVOPS-OBS-50-001 | DONE (2025-10-26) | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. |
| DEVOPS-OBS-50-002 | DOING (2025-10-26) | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. |
> Coordination started with Observability Guild (2025-10-26) to schedule staging rollout and provision service accounts. Staging bootstrap commands and secret names documented in `docs/ops/telemetry-storage.md`.
| DEVOPS-OBS-50-003 | DONE (2025-10-26) | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. |
| DEVOPS-OBS-51-001 | TODO | DevOps Guild, Observability Guild | WEB-OBS-51-001, DEVOPS-OBS-50-001 | Implement SLO evaluator service (burn rate calculators, webhook emitters), Grafana dashboards, and alert routing to Notifier. Provide Terraform/Helm automation. | Dashboards live; evaluator emits webhooks; alert runbook referenced; staging alert fired in test. |
| DEVOPS-OBS-52-001 | TODO | DevOps Guild, Timeline Indexer Guild | TIMELINE-OBS-52-002 | Configure streaming pipeline (NATS/Redis/Kafka) with retention, partitioning, and backpressure tuning for timeline events; add CI validation of schema + rate caps. | Pipeline deployed; load test meets SLA; schema validation job passes; documentation updated. |
| DEVOPS-OBS-53-001 | TODO | DevOps Guild, Evidence Locker Guild | EVID-OBS-53-001 | Provision object storage with WORM/retention options (S3 Object Lock / MinIO immutability), legal hold automation, and backup/restore scripts for evidence locker. | Storage configured with WORM; legal hold script documented; backup test performed; runbook updated. |
| DEVOPS-OBS-54-001 | TODO | DevOps Guild, Security Guild | PROV-OBS-53-002, EVID-OBS-54-001 | Manage provenance signing infrastructure (KMS keys, rotation schedule, timestamp authority integration) and integrate verification jobs into CI. | Keys provisioned with rotation policy; timestamp authority configured; CI verifies sample bundles; audit trail stored. |
| DEVOPS-OBS-55-001 | TODO | DevOps Guild, Ops Guild | DEVOPS-OBS-51-001, WEB-OBS-55-001 | Implement incident mode automation: feature flag service, auto-activation via SLO burn-rate, retention override management, and post-incident reset job. | Incident mode toggles via API/CLI; automation tested in staging; reset job verified; runbook referenced. |

## Air-Gapped Mode (Epic 16)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-AIRGAP-56-001 | TODO | DevOps Guild | AIRGAP-CTL-56-001 | Ship deny-all egress policies for Kubernetes (NetworkPolicy/eBPF) and docker-compose firewall rules; provide verification script for sealed mode. | Policies committed with tests; verification script passes/fails as expected; docs cross-linked. |
| DEVOPS-AIRGAP-56-002 | TODO | DevOps Guild, AirGap Importer Guild | AIRGAP-IMP-57-002 | Provide import tooling for bundle staging: checksum validation, offline object-store loader scripts, removable media guidance. | Scripts documented; smoke tests validate import; runbook updated. |
| DEVOPS-AIRGAP-56-003 | TODO | DevOps Guild, Container Distribution Guild | EXPORT-AIRGAP-56-002 | Build Bootstrap Pack pipeline bundling images/charts, generating checksums, and publishing manifest for offline transfer. | Pipeline runs in connected env; pack verified in air-gap smoke test; manifest recorded. |
| DEVOPS-AIRGAP-57-001 | TODO | DevOps Guild, Mirror Creator Guild | MIRROR-CRT-56-002 | Automate Mirror Bundle creation jobs with dual-control approvals, artifact signing, and checksum publication. | Approval workflow enforced; CI artifact includes DSSE/TUF metadata; audit logs stored. |
| DEVOPS-AIRGAP-57-002 | TODO | DevOps Guild, Authority Guild | AUTH-OBS-50-001 | Configure sealed-mode CI tests that run services with sealed flag and ensure no egress occurs (iptables + mock DNS). | CI suite fails on attempted egress; reports remediation; documentation updated. |
| DEVOPS-AIRGAP-58-001 | TODO | DevOps Guild, Notifications Guild | NOTIFY-AIRGAP-56-002 | Provide local SMTP/syslog container templates and health checks for sealed environments; integrate into Bootstrap Pack. | Templates deployed successfully; health checks in CI; docs updated. |
| DEVOPS-AIRGAP-58-002 | TODO | DevOps Guild, Observability Guild | DEVOPS-AIRGAP-56-001, DEVOPS-OBS-51-001 | Ship sealed-mode observability stack (Prometheus/Grafana/Tempo/Loki) pre-configured with offline dashboards and no remote exporters. | Stack boots offline; dashboards available; verification script confirms zero egress. |
| DEVOPS-REL-14-001 | DONE (2025-10-26) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-14-004 | DONE (2025-10-26) | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. |
| DEVOPS-REL-17-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-REL-17-004 | BLOCKED (2025-10-26) | DevOps Guild | DEVOPS-REL-17-002 | Ensure release workflow publishes `out/release/debug` (build-id tree + manifest) and fails when symbols are missing. | Release job emits debug artefacts, `mirror_debug_store.py` summary committed, warning cleared from build logs, docs updated. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
> Note (2025-10-26, BLOCKED): IdentityModel.Tokens patched for logging 9.x, but release bundle still fails because Docker cannot stream multi-arch build context (`unix:///var/run/docker.sock` unavailable, EOF during copy). Retry once docker daemon/socket is healthy; until then `out/release/debug` cannot be generated.
| DEVOPS-CONSOLE-23-001 | BLOCKED (2025-10-26) | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. |
> Blocked: Console workspace and package scripts (CONSOLE-CORE-23-001..005) are not yet present; CI cannot execute pnpm/Playwright/Lighthouse until the Next.js app lands.
| DEVOPS-CONSOLE-23-002 | TODO | DevOps Guild, Console Guild | DEVOPS-CONSOLE-23-001, CONSOLE-REL-23-301 | Produce `stella-console` container build + Helm chart overlays with deterministic digests, SBOM/provenance artefacts, and offline bundle packaging scripts. | Container published to registry mirror, Helm values committed, SBOM/attestations generated, offline kit job passes smoke test, docs updated. |
| DEVOPS-LAUNCH-18-100 | DONE (2025-10-26) | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. |
| DEVOPS-LAUNCH-18-900 | DONE (2025-10-26) | DevOps Guild, Module Leads | Wave 0 completion | Collect “full implementation” sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. |
| DEVOPS-LAUNCH-18-001 | DONE (2025-10-26) | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. |
| DEVOPS-NUGET-13-001 | DONE (2025-10-25) | DevOps Guild, Platform Leads | DEVOPS-REL-14-001 | Add .NET 10 preview feeds / local mirrors so `Microsoft.Extensions.*` 10.0 preview packages restore offline; refresh restore docs. | NuGet.config maps preview feeds (or local mirrored packages), `dotnet restore` succeeds for Excititor/Concelier solutions without ad-hoc feed edits, docs updated for offline bootstrap. |
| DEVOPS-NUGET-13-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. |
| DEVOPS-NUGET-13-003 | DONE (2025-10-26) | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. |

## Policy Engine v2

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-POLICY-20-001 | DONE (2025-10-26) | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. |
| DEVOPS-POLICY-20-003 | DONE (2025-10-26) | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. |
| DEVOPS-POLICY-20-004 | DONE (2025-10-27) | DevOps Guild, Scheduler Guild, CLI Guild | SCHED-MODELS-20-001, CLI-POLICY-20-002 | Automate policy schema exports: generate JSON Schema from `PolicyRun*` DTOs during CI, publish artefacts, and emit change alerts for CLI consumers (Slack + changelog). | CI stage outputs versioned schema files, uploads artefacts, notifies #policy-engine channel on change; docs/CLI references updated. |
> 2025-10-27: `.gitea/workflows/build-test-deploy.yml` publishes the `policy-schema-exports` artefact under `artifacts/policy-schemas/<commit>/` and posts Slack diffs via `POLICY_ENGINE_SCHEMA_WEBHOOK`; diff stored as `policy-schema-diff.patch`.

## Graph Explorer v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|

## Orchestrator Dashboard

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-ORCH-32-001 | TODO | DevOps Guild, Orchestrator Service Guild | ORCH-SVC-32-001 | Provision orchestrator Postgres/message-bus infrastructure, add CI smoke deploy, seed Grafana dashboards (queue depth, inflight jobs), and document bootstrap. | Helm/Compose profiles committed; CI smoke deploy runs; dashboards live with metrics; runbook updated. |
| DEVOPS-ORCH-33-001 | TODO | DevOps Guild, Observability Guild | DEVOPS-ORCH-32-001, ORCH-SVC-33-001..003 | Publish Grafana dashboards/alerts for rate limiter, backpressure, error clustering, and DLQ depth; integrate with on-call rotations. | Dashboards and alerts configured; synthetic tests validate thresholds; on-call playbook updated. |
| DEVOPS-ORCH-34-001 | TODO | DevOps Guild, Orchestrator Service Guild | DEVOPS-ORCH-33-001, ORCH-SVC-34-001..003 | Harden production monitoring (synthetic probes, burn-rate alerts, replay smoke), document incident response, and prep GA readiness checklist. | Synthetic probes created; burn-rate alerts firing on test scenario; GA checklist approved; runbook linked. |

## Link-Not-Merge v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-LNM-22-001 | BLOCKED (2025-10-27) | DevOps Guild, Concelier Guild | CONCELIER-LNM-21-102 | Run migration/backfill pipelines for advisory observations/linksets in staging, validate counts/conflicts, and automate deployment steps. | — |
> Blocked: awaiting storage backfill tooling.
| DEVOPS-LNM-22-002 | BLOCKED (2025-10-27) | DevOps Guild, Excititor Guild | EXCITITOR-LNM-21-102 | Execute VEX observation/linkset backfill with monitoring; ensure NATS/Redis events integrated; document ops runbook. | — |
> Blocked until the Excititor storage migration lands (EXCITITOR-LNM-21-102).
| DEVOPS-LNM-22-003 | TODO | DevOps Guild, Observability Guild | CONCELIER-LNM-21-005, EXCITITOR-LNM-21-005 | Add CI/monitoring coverage for new metrics (`advisory_observations_total`, `linksets_total`, etc.) and alerts on ingest-to-API SLA breaches. | Metrics scraped into Grafana; alert thresholds set; CI job verifies metric emission. |

| ## Graph & Vuln Explorer v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-GRAPH-24-001 | TODO | DevOps Guild, SBOM Service Guild | SBOM-GRAPH-24-002 | Load test graph index/adjacency APIs with 40k-node assets; capture perf dashboards and alert thresholds. | Perf suite added; dashboards live; alerts configured. | | ||||
| | DEVOPS-GRAPH-24-002 | TODO | DevOps Guild, UI Guild | UI-GRAPH-24-001..005 | Integrate synthetic UI perf runs (Playwright/WebGL metrics) for Graph/Vuln explorers; fail builds on regression. | CI job runs UI perf tests; baseline stored; documentation updated. | | ||||
| | DEVOPS-GRAPH-24-003 | TODO | DevOps Guild | WEB-GRAPH-24-002 | Implement smoke job for simulation endpoints ensuring we stay within SLA (<3s upgrade) and log results. | Smoke job in CI; alerts when SLA breached; runbook documented. | | ||||
| | DEVOPS-POLICY-27-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-POLICY-27-001, REGISTRY-API-27-001 | Add CI pipeline stages to run `stella policy lint|compile|test` with secret scanning on policy sources for PRs touching `/policies/**`; publish diagnostics artifacts. | Pipeline executes on PR/main, failures block merges, secret scan summary uploaded, docs updated. | | ||||
| | DEVOPS-POLICY-27-002 | TODO | DevOps Guild, Policy Registry Guild | REGISTRY-API-27-005, SCHED-WORKER-27-301 | Provide optional batch simulation CI job (staging inventory) that triggers Registry run, polls results, and posts markdown summary to PR; enforce drift thresholds. | Job configurable via label, summary comment generated, drift threshold gates merges, runbook documented. | | ||||
| | DEVOPS-POLICY-27-003 | TODO | DevOps Guild, Security Guild | AUTH-POLICY-27-002, REGISTRY-API-27-007 | Manage signing key material for policy publish pipeline (OIDC workload identity + cosign), rotate keys, and document verification steps; integrate attestation verification stage. | Keys stored in secure vault, rotation procedure documented, CI verifies attestations, audit logs recorded. | | ||||
| | DEVOPS-POLICY-27-004 | TODO | DevOps Guild, Observability Guild | WEB-POLICY-27-005, TELEMETRY-CONSOLE-27-001 | Create dashboards/alerts for policy compile latency, simulation queue depth, approval latency, and promotion outcomes; integrate with on-call playbooks. | Grafana dashboards live, alerts tuned, runbooks updated, observability tests verify metric ingestion. | | ||||
| > Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed. | ||||
| > Remark (2025-10-21): Compose/Helm profiles now surface `SCANNER__EVENTS__*` toggles with docs pointing at new `.env` placeholders. | ||||
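The DEVOPS-POLICY-27-001 row above calls for a PR stage that runs `stella policy lint|compile|test` on touched policy sources. A minimal sketch, assuming the `/policies/**` layout and diff range (only the verb names come from the task row; the stage degrades to a no-op when the CLI is absent):

```shell
# Sketch of the DEVOPS-POLICY-27-001 PR stage. The `stella policy` verbs are
# from the task row; diff range and directory layout are assumptions.
policy_stage() {
  if ! command -v stella >/dev/null 2>&1; then
    echo "stella CLI not on PATH; skipping policy checks"
    return 0
  fi
  changed=$(git diff --name-only origin/main...HEAD -- 'policies/**' 2>/dev/null || true)
  if [ -z "$changed" ]; then
    echo "no policy sources touched"
    return 0
  fi
  for verb in lint compile test; do
    stella policy "$verb" || { echo "stella policy $verb failed"; return 1; }
  done
}

policy_stage
policy_rc=$?
echo "policy stage exit: $policy_rc"
```

In CI the nonzero return would block the merge, matching the exit criteria in the row.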
|  | ||||
| ## Reachability v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-SIG-26-001 | TODO | DevOps Guild, Signals Guild | SIGNALS-24-001 | Provision CI/CD pipelines, Helm/Compose manifests for Signals service, including artifact storage and Redis dependencies. | Pipelines ship Signals service; deployment docs updated; smoke tests green. | | ||||
| | DEVOPS-SIG-26-002 | TODO | DevOps Guild, Observability Guild | SIGNALS-24-004 | Create dashboards/alerts for reachability scoring latency, cache hit rates, sensor staleness. | Dashboards live; alert thresholds configured; documentation updated. | | ||||
| | DEVOPS-VULN-29-001 | TODO | DevOps Guild, Findings Ledger Guild | LEDGER-29-002..009 | Provision CI jobs for ledger projector (replay, determinism), set up backups, monitor Merkle anchoring, and automate verification. | CI job verifies hash chains; backups documented; alerts for anchoring failures configured. | | ||||
| | DEVOPS-VULN-29-002 | TODO | DevOps Guild, Vuln Explorer API Guild | VULN-API-29-002..009 | Configure load/perf tests (5M findings/tenant), query budget enforcement, API SLO dashboards, and alerts for `vuln_list_latency` and `projection_lag`. | Perf suite integrated; dashboards live; alerts firing; runbooks updated. | | ||||
| | DEVOPS-VULN-29-003 | TODO | DevOps Guild, Console Guild | WEB-VULN-29-004, CONSOLE-VULN-29-007 | Instrument analytics pipeline for Vuln Explorer (telemetry ingestion, query hashes), ensure compliance with privacy/PII guardrails, and update observability docs. | Telemetry pipeline operational; PII redaction verified; docs updated with checklist. | | ||||
| | DEVOPS-VEX-30-001 | TODO | DevOps Guild, VEX Lens Guild | VEXLENS-30-009, ISSUER-30-005 | Provision CI, load tests, dashboards, alerts for VEX Lens and Issuer Directory (compute latency, disputed totals, signature verification rates). | CI/perf suites running; dashboards live; alerts configured; docs updated. | | ||||
| | DEVOPS-AIAI-31-001 | TODO | DevOps Guild, Advisory AI Guild | AIAI-31-006..007 | Stand up CI pipelines, inference monitoring, privacy logging review, and perf dashboards for Advisory AI (summaries/conflicts/remediation). | CI covers golden outputs, telemetry dashboards live, privacy controls reviewed, alerts configured. | | ||||
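DEVOPS-VULN-29-001 above asks CI to verify the ledger's hash chains. A self-contained sketch of that replay, assuming each entry's hash is sha256(previous_hash + payload) — the file layout and event payloads here are purely illustrative:

```shell
# Sketch of the hash-chain replay behind DEVOPS-VULN-29-001: replaying the
# payloads must reproduce the recorded head, and any edit must change it.
workdir=$(mktemp -d)
printf 'finding-created\n' > "$workdir/001.json"
printf 'finding-triaged\n' > "$workdir/002.json"

chain_head() {
  prev="genesis"
  for f in "$@"; do
    prev=$( { printf '%s' "$prev"; cat "$f"; } | sha256sum | awk '{print $1}')
  done
  printf '%s' "$prev"
}

recorded=$(chain_head "$workdir"/001.json "$workdir"/002.json)  # head stored by the projector
replayed=$(chain_head "$workdir"/001.json "$workdir"/002.json)  # CI replay of the same events
[ "$recorded" = "$replayed" ] && chain_ok=yes || chain_ok=no

printf 'finding-tampered\n' > "$workdir/002.json"              # simulate a mutated event
tampered=$(chain_head "$workdir"/001.json "$workdir"/002.json)
[ "$tampered" != "$recorded" ] && tamper_detected=yes || tamper_detected=no

echo "chain ok: $chain_ok, tamper detected: $tamper_detected"
rm -rf "$workdir"
```

The same shape extends to Merkle anchoring: the replayed head is what gets anchored, so a mismatch fails the job.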
|  | ||||
| ## Export Center | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-EXPORT-35-001 | BLOCKED (2025-10-29) | DevOps Guild, Exporter Service Guild | EXPORT-SVC-35-001..006 | Establish exporter CI pipeline (lint/test/perf smoke), configure object storage fixtures, seed Grafana dashboards, and document bootstrap steps. | CI pipeline running; smoke export job seeded; dashboards live; runbook updated. | | ||||
| | DEVOPS-EXPORT-36-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-35-001, EXPORT-SVC-36-001..004 | Integrate Trivy compatibility validation, cosign signature checks, `trivy module db import` smoke tests, OCI distribution verification, and throughput/error dashboards. | CI executes cosign + Trivy import validation; OCI push smoke passes; dashboards/alerts configured. | | ||||
| | DEVOPS-EXPORT-37-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-36-001, EXPORT-SVC-37-001..004 | Finalize exporter monitoring (failure alerts, verify metrics, retention jobs) and chaos/latency tests ahead of GA. | Alerts tuned; chaos tests documented; retention monitoring active; runbook updated. | | ||||
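DEVOPS-EXPORT-36-001 names cosign signature checks and a `trivy module db import` smoke. A hedged sketch of that validation stage — the bundle path and public-key file are placeholders, and the stage skips when tooling or artefacts are absent:

```shell
# Sketch of the DEVOPS-EXPORT-36-001 validation stage. `cosign` and the
# `trivy module db import` smoke come from the task row; paths are placeholders.
verify_export_bundle() {
  bundle="$1"
  if [ ! -f "$bundle" ]; then
    echo "bundle $bundle not present; skipping validation"
    return 0
  fi
  if ! command -v cosign >/dev/null 2>&1 || ! command -v trivy >/dev/null 2>&1; then
    echo "cosign/trivy unavailable; skipping validation"
    return 0
  fi
  cosign verify-blob --key "${COSIGN_PUB_KEY:-cosign.pub}" \
    --signature "$bundle.sig" "$bundle" || return 1
  trivy module db import "$bundle" || return 1
  echo "export bundle validated"
}

verify_export_bundle out/export/trivy-db.tar.gz
export_rc=$?
```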
|  | ||||
| ## CLI Parity & Task Packs | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-CLI-41-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-CORE-41-001 | Establish CLI build pipeline (multi-platform binaries, SBOM, checksums), parity matrix CI enforcement, and release artifact signing. | Build pipeline operational; SBOM/checksums published; parity gate failing on drift; docs updated. | | ||||
| | DEVOPS-CLI-42-001 | TODO | DevOps Guild | DEVOPS-CLI-41-001, CLI-PARITY-41-001 | Add CLI golden output tests, parity diff automation, pack run CI harness, and artifact cache for remote mode. | Golden tests running; parity diff automation in CI; pack run harness executes sample packs; documentation updated. | | ||||
| | DEVOPS-CLI-43-001 | DOING (2025-10-27) | DevOps Guild | DEVOPS-CLI-42-001, TASKRUN-42-001 | Finalize multi-platform release automation, SBOM signing, parity gate enforcement, and Task Pack chaos tests. | Release automation verified; SBOM signed; parity gate enforced; chaos tests documented. | | ||||
> 2025-10-27: Release pipeline now packages CLI multi-platform artefacts with SBOM/signature coverage and enforces the CLI parity gate (`ops/devops/check_cli_parity.py`). Task Pack chaos smoke is still pending delivery of the CLI pack command.
| | DEVOPS-CLI-43-002 | TODO | DevOps Guild, Task Runner Guild | CLI-PACKS-43-001, TASKRUN-43-001 | Implement Task Pack chaos smoke in CI (random failure injection, resume, sealed-mode toggle) and publish evidence bundles for review. | Chaos smoke job runs nightly; failures alert Slack; evidence stored in `out/pack-chaos`; runbook updated. | | ||||
| DEVOPS-CLI-43-003 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-PARITY-41-001, CLI-PACKS-42-001 | Integrate CLI golden output/parity diff automation into release gating; export parity report artifact consumed by Console Downloads workspace. | `check_cli_parity.py` wired to compare parity matrix and CLI outputs; artifact uploaded; release fails on regressions. |
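DEVOPS-CLI-43-003 wires `ops/devops/check_cli_parity.py` into release gating. A minimal sketch of that gate — the script path comes from the task notes, but invoking it with no arguments is an assumption, and the gate no-ops outside a full checkout:

```shell
# Sketch of the release parity gate from DEVOPS-CLI-43-003. Only the script
# path is taken from the task notes; everything else is assumed.
parity_gate() {
  script="ops/devops/check_cli_parity.py"
  if [ ! -f "$script" ]; then
    echo "$script not found in this checkout; skipping parity gate"
    return 0
  fi
  python "$script" || { echo "parity regression detected; failing release"; return 1; }
}

parity_gate
parity_rc=$?
```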
|  | ||||
| ## Containerized Distribution (Epic 13) | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-CONTAINERS-44-001 | TODO | DevOps Guild | DOCKER-44-001..003 | Automate multi-arch image builds with buildx, SBOM generation, cosign signing, and signature verification in CI. | Pipeline builds amd64/arm64; SBOMs pushed as referrers; cosign verify job passes. | | ||||
| | DEVOPS-CONTAINERS-45-001 | TODO | DevOps Guild | HELM-45-001 | Add Compose and Helm smoke tests (fresh VM + kind cluster) to CI; publish test artifacts and logs. | CI jobs running; failures block releases; documentation updated. | | ||||
| | DEVOPS-CONTAINERS-46-001 | TODO | DevOps Guild | DEPLOY-PACKS-43-001 | Build air-gap bundle generator (`tools/make-airgap-bundle.sh`), produce signed bundle, and verify in CI using private registry. | Bundle artifact produced with signatures/checksums; verification job passes; instructions documented. | | ||||
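DEVOPS-CONTAINERS-44-001 above describes multi-arch builds with SBOM generation and cosign signing. A hedged sketch of the buildx invocation — the image name is a placeholder, and the build only runs when `CI_RELEASE=1` so the sketch is side-effect free by default:

```shell
# Sketch of DEVOPS-CONTAINERS-44-001: one buildx invocation producing
# amd64+arm64 images with SBOM/provenance attestations, then cosign signing.
build_and_sign() {
  image="$1"
  if [ "${CI_RELEASE:-0}" != "1" ]; then
    echo "dry run: set CI_RELEASE=1 to build and push $image"
    return 0
  fi
  docker buildx build \
    --platform linux/amd64,linux/arm64 \
    --sbom=true \
    --provenance=true \
    --tag "$image" \
    --push . || return 1
  cosign sign --yes "$image"
}

build_and_sign registry.example.internal/stellaops/scanner:2025.10.0-edge
build_rc=$?
```

`--sbom=true`/`--provenance=true` attach BuildKit attestations as OCI referrers, which is what the cosign verify job in the row then checks.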
|  | ||||
| ### Container Images (Epic 13) | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| DOCKER-44-001 | TODO | DevOps Guild, Service Owners | DEVOPS-CLI-41-001 | Author multi-stage Dockerfiles for all core services (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AI) with non-root users, read-only file systems, and health scripts. | Dockerfiles committed; images build successfully; container security scans clean; health endpoints reachable. |
| | DOCKER-44-002 | TODO | DevOps Guild | DOCKER-44-001 | Generate SBOMs and cosign attestations for each image and integrate verification into CI. | SBOMs attached as OCI artifacts; cosign signatures published; CI verifies signatures prior to release. | | ||||
| DOCKER-44-003 | TODO | DevOps Guild | DOCKER-44-001 | Implement `/health/liveness`, `/health/readiness`, `/version`, `/metrics`, and ensure the capability endpoint returns `merge=false` for Concelier/Excititor. | Endpoints available across services; automated tests confirm responses; documentation updated with imposed rule reminder. |
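The four endpoints required by DOCKER-44-003 lend themselves to a uniform smoke probe. A sketch, assuming a caller-supplied base URL (with none given the probe prints usage and exits cleanly):

```shell
# Smoke sketch for DOCKER-44-003: every service must answer the four
# endpoints listed in the task row.
probe_service() {
  base="${1:-}"
  if [ -z "$base" ]; then
    echo "usage: probe_service <base-url>"
    return 0
  fi
  for path in /health/liveness /health/readiness /version /metrics; do
    if curl -fsS --max-time 5 "$base$path" >/dev/null; then
      echo "OK   $path"
    else
      echo "FAIL $path"
      return 1
    fi
  done
}

probe_service "${SERVICE_BASE_URL:-}"
probe_rc=$?
```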
|  | ||||
| ## Authority-Backed Scopes & Tenancy (Epic 14) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-TEN-47-001 | TODO | DevOps Guild | AUTH-TEN-47-001 | Add JWKS cache monitoring, signature verification regression tests, and token expiration chaos tests to CI. | CI verifies tokens using cached keys; chaos test for expired keys passes; documentation updated. | | ||||
| | DEVOPS-TEN-48-001 | TODO | DevOps Guild | WEB-TEN-48-001 | Build integration tests to assert RLS enforcement, tenant-prefixed object storage, and audit event emission; set up lint to prevent raw SQL bypass. | Tests fail on cross-tenant access; lint enforced; dashboards capture audit events. | | ||||
| | DEVOPS-TEN-49-001 | TODO | DevOps Guild | AUTH-TEN-49-001 | Deploy audit pipeline, scope usage metrics, JWKS outage chaos tests, and tenant load/perf benchmarks. | Audit pipeline live; metrics dashboards updated; chaos tests documented; perf benchmarks recorded. | | ||||
|  | ||||
| ## SDKs & OpenAPI (Epic 17) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-OAS-61-001 | TODO | DevOps Guild, API Contracts Guild | OAS-61-002 | Add CI stages for OpenAPI linting, validation, and compatibility diff; enforce gating on PRs. | Pipeline active; merge blocked on failures; documentation updated. | | ||||
| | DEVOPS-OAS-61-002 | TODO | DevOps Guild, Contract Testing Guild | CONTR-62-002 | Integrate mock server + contract test suite into PR and nightly workflows; publish artifacts. | Tests run in CI; artifacts stored; failures alert. | | ||||
| | DEVOPS-SDK-63-001 | TODO | DevOps Guild, SDK Release Guild | SDKREL-63-001 | Provision registry credentials, signing keys, and secure storage for SDK publishing pipelines. | Keys stored/rotated; publish pipeline authenticated; audit logs recorded. | | ||||
| | DEVOPS-DEVPORT-63-001 | TODO | DevOps Guild, Developer Portal Guild | DEVPORT-62-001 | Automate developer portal build pipeline with caching, link & accessibility checks, performance budgets. | Pipeline enforced; reports archived; failures gate merges. | | ||||
| | DEVOPS-DEVPORT-64-001 | TODO | DevOps Guild, DevPortal Offline Guild | DVOFF-64-001 | Schedule `devportal --offline` nightly builds with checksum validation and artifact retention policies. | Nightly job running; checksums published; retention policy documented. | | ||||
|  | ||||
| ## Attestor Console (Epic 19) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-ATTEST-73-001 | TODO | DevOps Guild, Attestor Service Guild | ATTESTOR-72-002 | Provision CI pipelines for attestor service (lint/test/security scan, seed data) and manage secrets for KMS drivers. | CI pipeline running; secrets stored securely; docs updated. | | ||||
| | DEVOPS-ATTEST-73-002 | TODO | DevOps Guild, KMS Guild | KMS-72-001 | Establish secure storage for signing keys (vault integration, rotation schedule) and audit logging. | Key storage configured; rotation documented; audit logs verified. | | ||||
| | DEVOPS-ATTEST-74-001 | TODO | DevOps Guild, Transparency Guild | TRANSP-74-001 | Deploy transparency log witness infrastructure and monitoring. | Witness service deployed; dashboards/alerts live. | | ||||
| | DEVOPS-ATTEST-74-002 | TODO | DevOps Guild, Export Attestation Guild | EXPORT-ATTEST-74-001 | Integrate attestation bundle builds into release/offline pipelines with checksum verification. | Bundle job in CI; checksum verification passes; docs updated. | | ||||
| | DEVOPS-ATTEST-75-001 | TODO | DevOps Guild, Observability Guild | ATTEST-VERIFY-74-001 | Add dashboards/alerts for signing latency, verification failures, key rotation events. | Dashboards live; alerts configured. | | ||||
| # DevOps Task Board | ||||
|  | ||||
| ## Governance & Rules | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-RULES-33-001 | DOING (2025-10-26) | DevOps Guild, Platform Leads | — | Contracts & Rules anchor:<br>• Gateway proxies only; Policy Engine composes overlays/simulations.<br>• AOC ingestion cannot merge; only lossless canonicalization.<br>• One graph platform: Graph Indexer + Graph API. Cartographer retired. | Rules posted in SPRINTS/TASKS; duplicates cleaned per guidance; reviewers acknowledge in changelog. | | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-HELM-09-001 | DONE | DevOps Guild | SCANNER-WEB-09-101 | Create Helm/Compose environment profiles (dev, staging, airgap) with deterministic digests. | Profiles committed under `deploy/`; docs updated; CI smoke deploy passes. | | ||||
| DEVOPS-SCANNER-09-204 | DONE (2025-10-21) | DevOps Guild, Scanner WebService Guild | SCANNER-EVENTS-15-201 | Surface `SCANNER__EVENTS__*` environment variables across docker-compose (dev/stage/airgap) and Helm values, defaulting to the shared Redis queue DSN. | Compose/Helm configs ship enabled Redis event publishing with documented overrides; lint jobs updated; docs cross-link to new knobs. |
| | DEVOPS-SCANNER-09-205 | DONE (2025-10-21) | DevOps Guild, Notify Guild | DEVOPS-SCANNER-09-204 | Add Notify smoke stage that tails the Redis stream and asserts `scanner.report.ready`/`scanner.scan.completed` reach Notify WebService in staging. | CI job reads Redis stream during scanner smoke deploy, confirms Notify ingestion via API, alerts on failure. | | ||||
| | DEVOPS-PERF-10-001 | DONE | DevOps Guild | BENCH-SCANNER-10-001 | Add perf smoke job (SBOM compose <5 s target) to CI. | CI job runs sample build verifying <5 s; alerts configured. | | ||||
| | DEVOPS-PERF-10-002 | DONE (2025-10-23) | DevOps Guild | BENCH-SCANNER-10-002 | Publish analyzer bench metrics to Grafana/perf workbook and alarm on ≥20 % regressions. | CI exports JSON for dashboards; Grafana panel wired; Ops on-call doc updated with alert hook. | | ||||
| | DEVOPS-AOC-19-001 | BLOCKED (2025-10-26) | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. | | ||||
| > Docs hand-off (2025-10-26): see `docs/ingestion/aggregation-only-contract.md` §5, `docs/architecture/overview.md`, and `docs/cli/cli-reference.md` for guard + verifier expectations. | ||||
| | DEVOPS-AOC-19-002 | BLOCKED (2025-10-26) | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. | | ||||
| > Blocked: waiting on CLI verifier command and Concelier/Excititor guard endpoints to land (CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004). | ||||
| | DEVOPS-AOC-19-003 | BLOCKED (2025-10-26) | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. | | ||||
| > Blocked: guard coverage suites and exporter hooks pending in Concelier/Excititor (CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003). | ||||
| | DEVOPS-AOC-19-101 | TODO (2025-10-28) | DevOps Guild, Concelier Storage Guild | CONCELIER-STORE-AOC-19-002 | Draft supersedes backfill rollout (freeze window, dry-run steps, rollback) once advisory_raw idempotency index passes staging verification. | Runbook committed in `docs/deploy/containers.md` + Offline Kit notes, staging rehearsal scheduled with dependencies captured in SPRINTS. | | ||||
| | DEVOPS-OBS-50-001 | DONE (2025-10-26) | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. | | ||||
| | DEVOPS-OBS-50-002 | DOING (2025-10-26) | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. | | ||||
| > Coordination started with Observability Guild (2025-10-26) to schedule staging rollout and provision service accounts. Staging bootstrap commands and secret names documented in `docs/ops/telemetry-storage.md`. | ||||
| | DEVOPS-OBS-50-003 | DONE (2025-10-26) | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. | | ||||
| | DEVOPS-OBS-51-001 | TODO | DevOps Guild, Observability Guild | WEB-OBS-51-001, DEVOPS-OBS-50-001 | Implement SLO evaluator service (burn rate calculators, webhook emitters), Grafana dashboards, and alert routing to Notifier. Provide Terraform/Helm automation. | Dashboards live; evaluator emits webhooks; alert runbook referenced; staging alert fired in test. | | ||||
| | DEVOPS-OBS-52-001 | TODO | DevOps Guild, Timeline Indexer Guild | TIMELINE-OBS-52-002 | Configure streaming pipeline (NATS/Redis/Kafka) with retention, partitioning, and backpressure tuning for timeline events; add CI validation of schema + rate caps. | Pipeline deployed; load test meets SLA; schema validation job passes; documentation updated. | | ||||
| | DEVOPS-OBS-53-001 | TODO | DevOps Guild, Evidence Locker Guild | EVID-OBS-53-001 | Provision object storage with WORM/retention options (S3 Object Lock / MinIO immutability), legal hold automation, and backup/restore scripts for evidence locker. | Storage configured with WORM; legal hold script documented; backup test performed; runbook updated. | | ||||
| | DEVOPS-OBS-54-001 | TODO | DevOps Guild, Security Guild | PROV-OBS-53-002, EVID-OBS-54-001 | Manage provenance signing infrastructure (KMS keys, rotation schedule, timestamp authority integration) and integrate verification jobs into CI. | Keys provisioned with rotation policy; timestamp authority configured; CI verifies sample bundles; audit trail stored. | | ||||
| | DEVOPS-OBS-55-001 | TODO | DevOps Guild, Ops Guild | DEVOPS-OBS-51-001, WEB-OBS-55-001 | Implement incident mode automation: feature flag service, auto-activation via SLO burn-rate, retention override management, and post-incident reset job. | Incident mode toggles via API/CLI; automation tested in staging; reset job verified; runbook referenced. | | ||||
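DEVOPS-AOC-19-002 above specifies a pipeline stage running `stella aoc verify --since` and publishing the violation report. A sketch of that stage — only the command and flag come from the task row; the time window and report path are assumptions:

```shell
# Sketch of the DEVOPS-AOC-19-002 stage: run the AOC verifier against a
# seeded snapshot and retain the violation report as a build artifact.
aoc_stage() {
  if ! command -v stella >/dev/null 2>&1; then
    echo "stella CLI unavailable; skipping AOC verification"
    return 0
  fi
  mkdir -p out/aoc
  stella aoc verify --since "$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)" \
    > out/aoc/violations.txt || { echo "AOC violations found"; return 1; }
}

aoc_stage
aoc_rc=$?
```

On main/nightly the nonzero return fails the run while `out/aoc/violations.txt` is retained as the artefact the row calls for.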
|  | ||||
| ## Air-Gapped Mode (Epic 16) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-AIRGAP-56-001 | TODO | DevOps Guild | AIRGAP-CTL-56-001 | Ship deny-all egress policies for Kubernetes (NetworkPolicy/eBPF) and docker-compose firewall rules; provide verification script for sealed mode. | Policies committed with tests; verification script passes/fails as expected; docs cross-linked. | | ||||
| | DEVOPS-AIRGAP-56-002 | TODO | DevOps Guild, AirGap Importer Guild | AIRGAP-IMP-57-002 | Provide import tooling for bundle staging: checksum validation, offline object-store loader scripts, removable media guidance. | Scripts documented; smoke tests validate import; runbook updated. | | ||||
| | DEVOPS-AIRGAP-56-003 | TODO | DevOps Guild, Container Distribution Guild | EXPORT-AIRGAP-56-002 | Build Bootstrap Pack pipeline bundling images/charts, generating checksums, and publishing manifest for offline transfer. | Pipeline runs in connected env; pack verified in air-gap smoke test; manifest recorded. | | ||||
| | DEVOPS-AIRGAP-57-001 | TODO | DevOps Guild, Mirror Creator Guild | MIRROR-CRT-56-002 | Automate Mirror Bundle creation jobs with dual-control approvals, artifact signing, and checksum publication. | Approval workflow enforced; CI artifact includes DSSE/TUF metadata; audit logs stored. | | ||||
| | DEVOPS-AIRGAP-57-002 | TODO | DevOps Guild, Authority Guild | AUTH-OBS-50-001 | Configure sealed-mode CI tests that run services with sealed flag and ensure no egress occurs (iptables + mock DNS). | CI suite fails on attempted egress; reports remediation; documentation updated. | | ||||
| | DEVOPS-AIRGAP-58-001 | TODO | DevOps Guild, Notifications Guild | NOTIFY-AIRGAP-56-002 | Provide local SMTP/syslog container templates and health checks for sealed environments; integrate into Bootstrap Pack. | Templates deployed successfully; health checks in CI; docs updated. | | ||||
| | DEVOPS-AIRGAP-58-002 | TODO | DevOps Guild, Observability Guild | DEVOPS-AIRGAP-56-001, DEVOPS-OBS-51-001 | Ship sealed-mode observability stack (Prometheus/Grafana/Tempo/Loki) pre-configured with offline dashboards and no remote exporters. | Stack boots offline; dashboards available; verification script confirms zero egress. | | ||||
| | DEVOPS-REL-14-001 | DONE (2025-10-26) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. | | ||||
| | DEVOPS-REL-14-004 | DONE (2025-10-26) | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. | | ||||
| | DEVOPS-REL-17-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. | | ||||
| DEVOPS-REL-17-004 | BLOCKED (2025-10-26) | DevOps Guild | DEVOPS-REL-17-002 | Ensure release workflow publishes `out/release/debug` (build-id tree + manifest) and fails when symbols are missing. | Release job emits debug artefacts, `mirror_debug_store.py` summary committed, warning cleared from build logs, docs updated. |
> Blocked (2025-10-26): IdentityModel.Tokens patched for logging 9.x, but the release bundle still fails because Docker cannot stream the multi-arch build context (`unix:///var/run/docker.sock` unavailable, EOF during copy). Retry once the Docker daemon/socket is healthy; until then `out/release/debug` cannot be generated.
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
| | DEVOPS-CONSOLE-23-001 | BLOCKED (2025-10-26) | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. | | ||||
| > Blocked: Console workspace and package scripts (CONSOLE-CORE-23-001..005) are not yet present; CI cannot execute pnpm/Playwright/Lighthouse until the Next.js app lands. | ||||
| | DEVOPS-CONSOLE-23-002 | TODO | DevOps Guild, Console Guild | DEVOPS-CONSOLE-23-001, CONSOLE-REL-23-301 | Produce `stella-console` container build + Helm chart overlays with deterministic digests, SBOM/provenance artefacts, and offline bundle packaging scripts. | Container published to registry mirror, Helm values committed, SBOM/attestations generated, offline kit job passes smoke test, docs updated. | | ||||
| | DEVOPS-LAUNCH-18-100 | DONE (2025-10-26) | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. | | ||||
| | DEVOPS-LAUNCH-18-900 | DONE (2025-10-26) | DevOps Guild, Module Leads | Wave 0 completion | Collect “full implementation” sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. | | ||||
| | DEVOPS-LAUNCH-18-001 | DONE (2025-10-26) | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. | | ||||
| | DEVOPS-NUGET-13-001 | DONE (2025-10-25) | DevOps Guild, Platform Leads | DEVOPS-REL-14-001 | Add .NET 10 preview feeds / local mirrors so `Microsoft.Extensions.*` 10.0 preview packages restore offline; refresh restore docs. | NuGet.config maps preview feeds (or local mirrored packages), `dotnet restore` succeeds for Excititor/Concelier solutions without ad-hoc feed edits, docs updated for offline bootstrap. | | ||||
| | DEVOPS-NUGET-13-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. | | ||||
| | DEVOPS-NUGET-13-003 | DONE (2025-10-26) | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. | | ||||
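DEVOPS-AIRGAP-56-001 and DEVOPS-AIRGAP-57-002 in the Air-Gapped Mode table both hinge on a sealed-mode verification script: a sealed host must fail to open any outbound TCP connection. A minimal bash sketch using the `/dev/tcp` pseudo-path (no extra tooling required); the loopback target below stands in for a blocked external destination:

```shell
# Sketch of the sealed-mode egress check from DEVOPS-AIRGAP-56-001. The real
# script would iterate over representative external endpoints.
egress_blocked() {
  host="$1"; port="$2"
  if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
    echo "egress POSSIBLE to $host:$port"
    return 1
  fi
  echo "sealed: cannot reach $host:$port"
  return 0
}

egress_blocked 127.0.0.1 9   # TCP port 9 (discard) has no listener on stock hosts
sealed_rc=$?
```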
|  | ||||
| ## Policy Engine v2 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-POLICY-20-001 | DONE (2025-10-26) | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. | | ||||
| | DEVOPS-POLICY-20-003 | DONE (2025-10-26) | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. | | ||||
| | DEVOPS-POLICY-20-004 | DONE (2025-10-27) | DevOps Guild, Scheduler Guild, CLI Guild | SCHED-MODELS-20-001, CLI-POLICY-20-002 | Automate policy schema exports: generate JSON Schema from `PolicyRun*` DTOs during CI, publish artefacts, and emit change alerts for CLI consumers (Slack + changelog). | CI stage outputs versioned schema files, uploads artefacts, notifies #policy-engine channel on change; docs/CLI references updated. | | ||||
| > 2025-10-27: `.gitea/workflows/build-test-deploy.yml` publishes the `policy-schema-exports` artefact under `artifacts/policy-schemas/<commit>/` and posts Slack diffs via `POLICY_ENGINE_SCHEMA_WEBHOOK`; diff stored as `policy-schema-diff.patch`. | ||||
|  | ||||
| ## Graph Explorer v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
|  | ||||
| ## Orchestrator Dashboard | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-ORCH-32-001 | TODO | DevOps Guild, Orchestrator Service Guild | ORCH-SVC-32-001 | Provision orchestrator Postgres/message-bus infrastructure, add CI smoke deploy, seed Grafana dashboards (queue depth, inflight jobs), and document bootstrap. | Helm/Compose profiles committed; CI smoke deploy runs; dashboards live with metrics; runbook updated. | | ||||
| | DEVOPS-ORCH-33-001 | TODO | DevOps Guild, Observability Guild | DEVOPS-ORCH-32-001, ORCH-SVC-33-001..003 | Publish Grafana dashboards/alerts for rate limiter, backpressure, error clustering, and DLQ depth; integrate with on-call rotations. | Dashboards and alerts configured; synthetic tests validate thresholds; on-call playbook updated. | | ||||
| | DEVOPS-ORCH-34-001 | TODO | DevOps Guild, Orchestrator Service Guild | DEVOPS-ORCH-33-001, ORCH-SVC-34-001..003 | Harden production monitoring (synthetic probes, burn-rate alerts, replay smoke), document incident response, and prep GA readiness checklist. | Synthetic probes created; burn-rate alerts firing on test scenario; GA checklist approved; runbook linked. | | ||||
|  | ||||
| ## Link-Not-Merge v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| DEVOPS-LNM-22-001 | BLOCKED (2025-10-27) | DevOps Guild, Concelier Guild | CONCELIER-LNM-21-102 | Run migration/backfill pipelines for advisory observations/linksets in staging, validate counts/conflicts, and automate deployment steps. | TBD; awaiting storage backfill tooling. |
| DEVOPS-LNM-22-002 | BLOCKED (2025-10-27) | DevOps Guild, Excititor Guild | EXCITITOR-LNM-21-102 | Execute VEX observation/linkset backfill with monitoring; ensure NATS/Redis events integrated; document ops runbook. | TBD; blocked until the Excititor storage migration lands. |
| | DEVOPS-LNM-22-003 | TODO | DevOps Guild, Observability Guild | CONCELIER-LNM-21-005, EXCITITOR-LNM-21-005 | Add CI/monitoring coverage for new metrics (`advisory_observations_total`, `linksets_total`, etc.) and alerts on ingest-to-API SLA breaches. | Metrics scraped into Grafana; alert thresholds set; CI job verifies metric emission. | | ||||
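DEVOPS-LNM-22-003 requires CI to verify that the new metrics are actually emitted. A self-contained sketch of the presence check — the metric names come from the task row, while the canned scrape stands in for the live `/metrics` endpoint:

```shell
# Sketch of the metric-presence check in DEVOPS-LNM-22-003.
scrape='advisory_observations_total{tenant="default"} 1284
linksets_total{tenant="default"} 407'

metrics_ok=yes
for metric in advisory_observations_total linksets_total; do
  if printf '%s\n' "$scrape" | grep -q "^$metric"; then
    echo "present: $metric"
  else
    echo "missing: $metric"
    metrics_ok=no
  fi
done
echo "required metrics present: $metrics_ok"
```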
|  | ||||
| ## Graph & Vuln Explorer v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-GRAPH-24-001 | TODO | DevOps Guild, SBOM Service Guild | SBOM-GRAPH-24-002 | Load test graph index/adjacency APIs with 40k-node assets; capture perf dashboards and alert thresholds. | Perf suite added; dashboards live; alerts configured. | | ||||
| | DEVOPS-GRAPH-24-002 | TODO | DevOps Guild, UI Guild | UI-GRAPH-24-001..005 | Integrate synthetic UI perf runs (Playwright/WebGL metrics) for Graph/Vuln explorers; fail builds on regression. | CI job runs UI perf tests; baseline stored; documentation updated. | | ||||
| | DEVOPS-GRAPH-24-003 | TODO | DevOps Guild | WEB-GRAPH-24-002 | Implement smoke job for simulation endpoints ensuring we stay within SLA (<3s upgrade) and log results. | Smoke job in CI; alerts when SLA breached; runbook documented. | | ||||
| | DEVOPS-POLICY-27-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-POLICY-27-001, REGISTRY-API-27-001 | Add CI pipeline stages to run `stella policy lint\|compile\|test` with secret scanning on policy sources for PRs touching `/policies/**`; publish diagnostics artifacts. | Pipeline executes on PR/main, failures block merges, secret scan summary uploaded, docs updated. | ||||
| | DEVOPS-POLICY-27-002 | TODO | DevOps Guild, Policy Registry Guild | REGISTRY-API-27-005, SCHED-WORKER-27-301 | Provide optional batch simulation CI job (staging inventory) that triggers Registry run, polls results, and posts markdown summary to PR; enforce drift thresholds. | Job configurable via label, summary comment generated, drift threshold gates merges, runbook documented. | | ||||
| | DEVOPS-POLICY-27-003 | TODO | DevOps Guild, Security Guild | AUTH-POLICY-27-002, REGISTRY-API-27-007 | Manage signing key material for policy publish pipeline (OIDC workload identity + cosign), rotate keys, and document verification steps; integrate attestation verification stage. | Keys stored in secure vault, rotation procedure documented, CI verifies attestations, audit logs recorded. | | ||||
| | DEVOPS-POLICY-27-004 | TODO | DevOps Guild, Observability Guild | WEB-POLICY-27-005, TELEMETRY-CONSOLE-27-001 | Create dashboards/alerts for policy compile latency, simulation queue depth, approval latency, and promotion outcomes; integrate with on-call playbooks. | Grafana dashboards live, alerts tuned, runbooks updated, observability tests verify metric ingestion. | | ||||
| > Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed. | ||||
| > Remark (2025-10-21): Compose/Helm profiles now surface `SCANNER__EVENTS__*` toggles with docs pointing at new `.env` placeholders. | ||||
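
The `/policies/**` trigger in DEVOPS-POLICY-27-001 reduces to a path-prefix test over the PR's changed files; a hedged sketch (the real pipeline most likely uses the CI system's own path filters):

```python
def should_run_policy_stage(changed_files):
    """True when any changed file falls under policies/ (the `/policies/**` filter)."""
    return any(p == "policies" or p.startswith("policies/") for p in changed_files)
```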
|  | ||||
| ## Reachability v1 | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-SIG-26-001 | TODO | DevOps Guild, Signals Guild | SIGNALS-24-001 | Provision CI/CD pipelines, Helm/Compose manifests for Signals service, including artifact storage and Redis dependencies. | Pipelines ship Signals service; deployment docs updated; smoke tests green. | | ||||
| | DEVOPS-SIG-26-002 | TODO | DevOps Guild, Observability Guild | SIGNALS-24-004 | Create dashboards/alerts for reachability scoring latency, cache hit rates, sensor staleness. | Dashboards live; alert thresholds configured; documentation updated. | | ||||
| | DEVOPS-VULN-29-001 | TODO | DevOps Guild, Findings Ledger Guild | LEDGER-29-002..009 | Provision CI jobs for ledger projector (replay, determinism), set up backups, monitor Merkle anchoring, and automate verification. | CI job verifies hash chains; backups documented; alerts for anchoring failures configured. | | ||||
| | DEVOPS-VULN-29-002 | TODO | DevOps Guild, Vuln Explorer API Guild | VULN-API-29-002..009 | Configure load/perf tests (5M findings/tenant), query budget enforcement, API SLO dashboards, and alerts for `vuln_list_latency` and `projection_lag`. | Perf suite integrated; dashboards live; alerts firing; runbooks updated. | | ||||
| | DEVOPS-VULN-29-003 | TODO | DevOps Guild, Console Guild | WEB-VULN-29-004, CONSOLE-VULN-29-007 | Instrument analytics pipeline for Vuln Explorer (telemetry ingestion, query hashes), ensure compliance with privacy/PII guardrails, and update observability docs. | Telemetry pipeline operational; PII redaction verified; docs updated with checklist. | | ||||
| | DEVOPS-VEX-30-001 | TODO | DevOps Guild, VEX Lens Guild | VEXLENS-30-009, ISSUER-30-005 | Provision CI, load tests, dashboards, alerts for VEX Lens and Issuer Directory (compute latency, disputed totals, signature verification rates). | CI/perf suites running; dashboards live; alerts configured; docs updated. | | ||||
| | DEVOPS-AIAI-31-001 | TODO | DevOps Guild, Advisory AI Guild | AIAI-31-006..007 | Stand up CI pipelines, inference monitoring, privacy logging review, and perf dashboards for Advisory AI (summaries/conflicts/remediation). | CI covers golden outputs, telemetry dashboards live, privacy controls reviewed, alerts configured. | | ||||
|  | ||||
| ## Export Center | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-EXPORT-35-001 | BLOCKED (2025-10-29) | DevOps Guild, Exporter Service Guild | EXPORT-SVC-35-001..006 | Establish exporter CI pipeline (lint/test/perf smoke), configure object storage fixtures, seed Grafana dashboards, and document bootstrap steps. | CI pipeline running; smoke export job seeded; dashboards live; runbook updated. | | ||||
| | DEVOPS-EXPORT-36-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-35-001, EXPORT-SVC-36-001..004 | Integrate Trivy compatibility validation, cosign signature checks, `trivy module db import` smoke tests, OCI distribution verification, and throughput/error dashboards. | CI executes cosign + Trivy import validation; OCI push smoke passes; dashboards/alerts configured. | | ||||
| | DEVOPS-EXPORT-37-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-36-001, EXPORT-SVC-37-001..004 | Finalize exporter monitoring (failure alerts, verify metrics, retention jobs) and chaos/latency tests ahead of GA. | Alerts tuned; chaos tests documented; retention monitoring active; runbook updated. | | ||||
|  | ||||
| ## CLI Parity & Task Packs | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-CLI-41-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-CORE-41-001 | Establish CLI build pipeline (multi-platform binaries, SBOM, checksums), parity matrix CI enforcement, and release artifact signing. | Build pipeline operational; SBOM/checksums published; parity gate failing on drift; docs updated. | | ||||
| | DEVOPS-CLI-42-001 | TODO | DevOps Guild | DEVOPS-CLI-41-001, CLI-PARITY-41-001 | Add CLI golden output tests, parity diff automation, pack run CI harness, and artifact cache for remote mode. | Golden tests running; parity diff automation in CI; pack run harness executes sample packs; documentation updated. | | ||||
| | DEVOPS-CLI-43-001 | DOING (2025-10-27) | DevOps Guild | DEVOPS-CLI-42-001, TASKRUN-42-001 | Finalize multi-platform release automation, SBOM signing, parity gate enforcement, and Task Pack chaos tests. | Release automation verified; SBOM signed; parity gate enforced; chaos tests documented. | | ||||
| > 2025-10-27: Release pipeline now packages CLI multi-platform artefacts with SBOM/signature coverage and enforces the CLI parity gate (`ops/devops/check_cli_parity.py`). Task Pack chaos smoke still pending CLI pack command delivery. | ||||
| | DEVOPS-CLI-43-002 | TODO | DevOps Guild, Task Runner Guild | CLI-PACKS-43-001, TASKRUN-43-001 | Implement Task Pack chaos smoke in CI (random failure injection, resume, sealed-mode toggle) and publish evidence bundles for review. | Chaos smoke job runs nightly; failures alert Slack; evidence stored in `out/pack-chaos`; runbook updated. | | ||||
| | DEVOPS-CLI-43-003 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-PARITY-41-001, CLI-PACKS-42-001 | Integrate CLI golden output/parity diff automation into release gating; export parity report artifact consumed by Console Downloads workspace. | `check_cli_parity.py` wired to compare parity matrix and CLI outputs; artifact uploaded; release fails on regressions. | ||||
|  | ||||
| ## Containerized Distribution (Epic 13) | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-CONTAINERS-44-001 | TODO | DevOps Guild | DOCKER-44-001..003 | Automate multi-arch image builds with buildx, SBOM generation, cosign signing, and signature verification in CI. | Pipeline builds amd64/arm64; SBOMs pushed as referrers; cosign verify job passes. | | ||||
| | DEVOPS-CONTAINERS-45-001 | TODO | DevOps Guild | HELM-45-001 | Add Compose and Helm smoke tests (fresh VM + kind cluster) to CI; publish test artifacts and logs. | CI jobs running; failures block releases; documentation updated. | | ||||
| | DEVOPS-CONTAINERS-46-001 | TODO | DevOps Guild | DEPLOY-PACKS-43-001 | Build air-gap bundle generator (`tools/make-airgap-bundle.sh`), produce signed bundle, and verify in CI using private registry. | Bundle artifact produced with signatures/checksums; verification job passes; instructions documented. | | ||||
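
DEVOPS-CONTAINERS-46-001 verifies bundle checksums in CI; assuming coreutils-style `.sha256` sidecars (`<hex>  <filename>`), the check can be sketched as:

```python
import hashlib
from pathlib import Path


def verify_sha256_sidecar(artifact: Path, sidecar: Path) -> bool:
    """Compare an artifact's SHA-256 with its .sha256 sidecar (coreutils format)."""
    expected = sidecar.read_text().split()[0].lower()
    actual = hashlib.sha256(artifact.read_bytes()).hexdigest()
    return actual == expected
```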
|  | ||||
| ### Container Images (Epic 13) | ||||
|  | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DOCKER-44-001 | TODO | DevOps Guild, Service Owners | DEVOPS-CLI-41-001 | Author multi-stage Dockerfiles for all core services (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AI) with non-root users, read-only file systems, and health scripts. | Dockerfiles committed; images build successfully; container security scans clean; health endpoints reachable. | ||||
| | DOCKER-44-002 | TODO | DevOps Guild | DOCKER-44-001 | Generate SBOMs and cosign attestations for each image and integrate verification into CI. | SBOMs attached as OCI artifacts; cosign signatures published; CI verifies signatures prior to release. | | ||||
| | DOCKER-44-003 | TODO | DevOps Guild | DOCKER-44-001 | Implement `/health/liveness`, `/health/readiness`, `/version`, `/metrics`, and ensure capability endpoint returns `merge=false` for Concelier/Excititor. | Endpoints available across services; automated tests confirm responses; documentation updated with imposed rule reminder. | ||||
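
DOCKER-44-003's capability requirement (`merge=false` for Concelier/Excititor) can be asserted against the endpoint's JSON body; the `{"merge": bool}` payload shape below is an assumption for illustration, not the documented schema:

```python
def merge_disabled(capabilities: dict) -> bool:
    """True when the capability payload explicitly disables merge.

    Assumes a flat {"merge": bool} shape; the real endpoint schema may
    differ. Missing or truthy values fail the check (fail closed).
    """
    return capabilities.get("merge") is False
```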
|  | ||||
| ## Authority-Backed Scopes & Tenancy (Epic 14) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-TEN-47-001 | TODO | DevOps Guild | AUTH-TEN-47-001 | Add JWKS cache monitoring, signature verification regression tests, and token expiration chaos tests to CI. | CI verifies tokens using cached keys; chaos test for expired keys passes; documentation updated. | | ||||
| | DEVOPS-TEN-48-001 | TODO | DevOps Guild | WEB-TEN-48-001 | Build integration tests to assert RLS enforcement, tenant-prefixed object storage, and audit event emission; set up lint to prevent raw SQL bypass. | Tests fail on cross-tenant access; lint enforced; dashboards capture audit events. | | ||||
| | DEVOPS-TEN-49-001 | TODO | DevOps Guild | AUTH-TEN-49-001 | Deploy audit pipeline, scope usage metrics, JWKS outage chaos tests, and tenant load/perf benchmarks. | Audit pipeline live; metrics dashboards updated; chaos tests documented; perf benchmarks recorded. | | ||||
|  | ||||
| ## SDKs & OpenAPI (Epic 17) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-OAS-61-001 | TODO | DevOps Guild, API Contracts Guild | OAS-61-002 | Add CI stages for OpenAPI linting, validation, and compatibility diff; enforce gating on PRs. | Pipeline active; merge blocked on failures; documentation updated. | | ||||
| | DEVOPS-OAS-61-002 | TODO | DevOps Guild, Contract Testing Guild | CONTR-62-002 | Integrate mock server + contract test suite into PR and nightly workflows; publish artifacts. | Tests run in CI; artifacts stored; failures alert. | | ||||
| | DEVOPS-SDK-63-001 | TODO | DevOps Guild, SDK Release Guild | SDKREL-63-001 | Provision registry credentials, signing keys, and secure storage for SDK publishing pipelines. | Keys stored/rotated; publish pipeline authenticated; audit logs recorded. | | ||||
| | DEVOPS-DEVPORT-63-001 | TODO | DevOps Guild, Developer Portal Guild | DEVPORT-62-001 | Automate developer portal build pipeline with caching, link & accessibility checks, performance budgets. | Pipeline enforced; reports archived; failures gate merges. | | ||||
| | DEVOPS-DEVPORT-64-001 | TODO | DevOps Guild, DevPortal Offline Guild | DVOFF-64-001 | Schedule `devportal --offline` nightly builds with checksum validation and artifact retention policies. | Nightly job running; checksums published; retention policy documented. | | ||||
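
The compatibility diff gate in DEVOPS-OAS-61-001 treats removed paths/operations as breaking; a naive sketch over two parsed OpenAPI documents (dedicated tooling covers far more cases, such as parameter and schema changes):

```python
def removed_operations(old_spec: dict, new_spec: dict):
    """Operations present in old_spec's paths but missing from new_spec.

    A removed path or HTTP method is treated as a breaking change;
    additions are not. Ignores non-method path keys for simplicity.
    """
    removed = []
    for path, ops in old_spec.get("paths", {}).items():
        new_ops = new_spec.get("paths", {}).get(path, {})
        for method in ops:
            if method not in new_ops:
                removed.append((method.upper(), path))
    return sorted(removed)
```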
|  | ||||
| ## Attestor Console (Epic 19) | ||||
| | ID | Status | Owner(s) | Depends on | Description | Exit Criteria | | ||||
| |----|--------|----------|------------|-------------|---------------| | ||||
| | DEVOPS-ATTEST-73-001 | TODO | DevOps Guild, Attestor Service Guild | ATTESTOR-72-002 | Provision CI pipelines for attestor service (lint/test/security scan, seed data) and manage secrets for KMS drivers. | CI pipeline running; secrets stored securely; docs updated. | | ||||
| | DEVOPS-ATTEST-73-002 | TODO | DevOps Guild, KMS Guild | KMS-72-001 | Establish secure storage for signing keys (vault integration, rotation schedule) and audit logging. | Key storage configured; rotation documented; audit logs verified. | | ||||
| | DEVOPS-ATTEST-74-001 | TODO | DevOps Guild, Transparency Guild | TRANSP-74-001 | Deploy transparency log witness infrastructure and monitoring. | Witness service deployed; dashboards/alerts live. | | ||||
| | DEVOPS-ATTEST-74-002 | TODO | DevOps Guild, Export Attestation Guild | EXPORT-ATTEST-74-001 | Integrate attestation bundle builds into release/offline pipelines with checksum verification. | Bundle job in CI; checksum verification passes; docs updated. | | ||||
| | DEVOPS-ATTEST-75-001 | TODO | DevOps Guild, Observability Guild | ATTEST-VERIFY-74-001 | Add dashboards/alerts for signing latency, verification failures, key rotation events. | Dashboards live; alerts configured. | | ||||
|   | ||||
| @@ -1,53 +1,53 @@ | ||||
| #!/usr/bin/env python3 | ||||
| """Ensure CLI parity matrix contains no outstanding blockers before release.""" | ||||
| from __future__ import annotations | ||||
|  | ||||
| import pathlib | ||||
| import sys | ||||
|  | ||||
| REPO_ROOT = pathlib.Path(__file__).resolve().parents[2] | ||||
| PARITY_DOC = REPO_ROOT / "docs/cli-vs-ui-parity.md" | ||||
|  | ||||
| BLOCKERS = { | ||||
|     "🟥": "blocking gap", | ||||
|     "❌": "missing feature", | ||||
|     "🚫": "unsupported", | ||||
| } | ||||
| WARNINGS = { | ||||
|     "🟡": "partial support", | ||||
|     "⚠️": "warning", | ||||
| } | ||||
|  | ||||
|  | ||||
| def main() -> int: | ||||
|     if not PARITY_DOC.exists(): | ||||
|         print(f"❌ Parity matrix not found at {PARITY_DOC}", file=sys.stderr) | ||||
|         return 1 | ||||
|     text = PARITY_DOC.read_text(encoding="utf-8") | ||||
|     blockers: list[str] = [] | ||||
|     warnings: list[str] = [] | ||||
|     for line in text.splitlines(): | ||||
|         for symbol, label in BLOCKERS.items(): | ||||
|             if symbol in line: | ||||
|                 blockers.append(f"{label}: {line.strip()}") | ||||
|         for symbol, label in WARNINGS.items(): | ||||
|             if symbol in line: | ||||
|                 warnings.append(f"{label}: {line.strip()}") | ||||
|     if blockers: | ||||
|         print("❌ CLI parity gate failed — blocking items present:", file=sys.stderr) | ||||
|         for item in blockers: | ||||
|             print(f"  - {item}", file=sys.stderr) | ||||
|         return 1 | ||||
|     if warnings: | ||||
|         print("⚠️ CLI parity gate warnings detected:", file=sys.stderr) | ||||
|         for item in warnings: | ||||
|             print(f"  - {item}", file=sys.stderr) | ||||
|         print("Treat warnings as failures until parity matrix is fully green.", file=sys.stderr) | ||||
|         return 1 | ||||
|     print("✅ CLI parity matrix has no blocking or warning entries.") | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     raise SystemExit(main()) | ||||
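
The gate above is a plain symbol scan over the parity matrix; its per-row behaviour can be mirrored in a self-contained sketch (duplicating the BLOCKERS/WARNINGS tables) for quick experimentation:

```python
BLOCKERS = {"🟥": "blocking gap", "❌": "missing feature", "🚫": "unsupported"}
WARNINGS = {"🟡": "partial support", "⚠️": "warning"}


def classify(line: str) -> str:
    """Return 'blocker', 'warning', or 'ok' for one parity-matrix row."""
    if any(sym in line for sym in BLOCKERS):
        return "blocker"
    if any(sym in line for sym in WARNINGS):
        return "warning"
    return "ok"
```
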
|   | ||||
| @@ -1,30 +1,30 @@ | ||||
| # Package,Version,SHA256,SourceBase(optional) | ||||
| # DotNetPublicFlat=https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.AspNetCore.Authentication.JwtBearer,10.0.0-rc.2.25502.107,3223f447bde9a3620477305a89520e8becafe23b481a0b423552af572439f8c2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.AspNetCore.Mvc.Testing,10.0.0-rc.2.25502.107,b6b53c62e0abefdca30e6ca08ab8357e395177dd9f368ab3ad4bbbd07e517229,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.AspNetCore.OpenApi,10.0.0-rc.2.25502.107,f64de1fe870306053346a31263e53e29f2fdfe0eae432a3156f8d7d705c81d85,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Data.Sqlite,9.0.0-rc.1.24451.1,770b637317e1e924f1b13587b31af0787c8c668b1d9f53f2fccae8ee8704e167,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Caching.Memory,10.0.0-rc.2.25502.107,6ec6d156ed06b07cbee9fa1c0803b8d54a5f904a0bf0183172f87b63c4044426,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration,10.0.0-rc.2.25502.107,0716f72cdc99b03946c98c418c39d42208fc65f20301bd1f26a6c174646870f6,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.Abstractions,10.0.0-rc.2.25502.107,db6e2cd37c40b5ac5ca7a4f40f5edafda2b6a8690f95a8c64b54c777a1d757c0,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.Binder,10.0.0-rc.2.25502.107,80f04da6beef001d3c357584485c2ddc6fdbf3776cfd10f0d7b40dfe8a79ee43,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.CommandLine,10.0.0-rc.2.25502.107,91974a95ae35bcfcd5e977427f3d0e6d3416e78678a159f5ec9e55f33a2e19af,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.EnvironmentVariables,10.0.0-rc.2.25502.107,74d65a20e2764d5f42863f5f203b216533fc51b22fb02a8491036feb98ae5fef,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.FileExtensions,10.0.0-rc.2.25502.107,5f97b56ea2ba3a1b252022504060351ce457f78ac9055d5fdd1311678721c1a1,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Configuration.Json,10.0.0-rc.2.25502.107,0ba362c479213eb3425f8e14d8a8495250dbaf2d5dad7c0a4ca8d3239b03c392,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.DependencyInjection,10.0.0-rc.2.25502.107,2e1b51b4fa196f0819adf69a15ad8c3432b64c3b196f2ed3d14b65136a6a8709,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.DependencyInjection.Abstractions,10.0.0-rc.2.25502.107,d6787ccf69e09428b3424974896c09fdabb8040bae06ed318212871817933352,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Diagnostics.Abstractions,10.0.0-rc.2.25502.107,b4bc47b4b4ded4ab2f134d318179537cbe16aed511bb3672553ea197929dc7d8,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Diagnostics.HealthChecks,10.0.0-rc.2.25502.107,855fd4da26b955b6b1d036390b1af10564986067b5cc6356cffa081c83eec158,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Diagnostics.HealthChecks.Abstractions,10.0.0-rc.2.25502.107,59f4724daed68a067a661e208f0a934f253b91ec5d52310d008e185bc2c9294c,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Hosting,10.0.0-rc.2.25502.107,ea9b1fa8e50acae720294671e6c36d4c58e20cfc9720335ab4f5ad4eba92cf62,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Hosting.Abstractions,10.0.0-rc.2.25502.107,98fa23ac82e19be221a598fc6f4b469e8b00c4ca2b7a42ad0bfea8b63bbaa9a2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Http,10.0.0-rc.2.25502.107,c63c8bf4ca637137a561ca487b674859c2408918c4838a871bb26eb0c809a665,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Http.Polly,10.0.0-rc.2.25502.107,0b436196bcedd484796795f6a795d7a191294f1190f7a477f1a4937ef7f78110,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Logging.Abstractions,10.0.0-rc.2.25502.107,92b9a5ed62fe945ee88983af43c347429ec15691c9acb207872c548241cef961,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Logging.Console,10.0.0-rc.2.25502.107,fa1e10b5d6261675d9d2e97b9584ff9aaea2a2276eac584dfa77a1e35dcc58f5,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Options,10.0.0-rc.2.25502.107,d208acec60bec3350989694fd443e2d2f0ab583ad5f2c53a2879ade16908e5b4,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.Options.ConfigurationExtensions,10.0.0-rc.2.25502.107,c2863bb28c36fd67f308dd4af486897b512d62ecff2d96613ef954f5bef443e2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.Extensions.TimeProvider.Testing,9.10.0,919a47156fc13f756202702cacc6e853123c84f1b696970445d89f16dfa45829,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.IdentityModel.Tokens,8.14.0,00b78c7b7023132e1d6b31d305e47524732dce6faca92dd16eb8d05a835bba7a,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
| Microsoft.SourceLink.GitLab,8.0.0,a7efb9c177888f952ea8c88bc5714fc83c64af32b70fb080a1323b8d32233973,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2 | ||||
|   | ||||
| 
 | 
										
											
												File diff suppressed because it is too large
											
										
									
								
							| @@ -15,7 +15,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
| -      "project": "src/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj", | ||||
| +      "project": "src/Authority/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj", | ||||
|       "entrypoint": "StellaOps.Authority.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -24,7 +24,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Signer/StellaOps.Signer.WebService/StellaOps.Signer.WebService.csproj", | ||||
|       "project": "src/Signer/StellaOps.Signer/StellaOps.Signer.WebService/StellaOps.Signer.WebService.csproj", | ||||
|       "entrypoint": "StellaOps.Signer.WebService.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -33,7 +33,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Attestor/StellaOps.Attestor.WebService/StellaOps.Attestor.WebService.csproj", | ||||
|       "project": "src/Attestor/StellaOps.Attestor/StellaOps.Attestor.WebService/StellaOps.Attestor.WebService.csproj", | ||||
|       "entrypoint": "StellaOps.Attestor.WebService.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -42,7 +42,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Scanner.WebService/StellaOps.Scanner.WebService.csproj", | ||||
|       "project": "src/Scanner/StellaOps.Scanner.WebService/StellaOps.Scanner.WebService.csproj", | ||||
|       "entrypoint": "StellaOps.Scanner.WebService.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -51,7 +51,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Scanner.Worker/StellaOps.Scanner.Worker.csproj", | ||||
|       "project": "src/Scanner/StellaOps.Scanner.Worker/StellaOps.Scanner.Worker.csproj", | ||||
|       "entrypoint": "StellaOps.Scanner.Worker.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -60,7 +60,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj", | ||||
|       "project": "src/Concelier/StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj", | ||||
|       "entrypoint": "StellaOps.Concelier.WebService.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -69,7 +69,7 @@ | ||||
|       "kind": "dotnet-service", | ||||
|       "context": ".", | ||||
|       "dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service", | ||||
|       "project": "src/StellaOps.Excititor.WebService/StellaOps.Excititor.WebService.csproj", | ||||
|       "project": "src/Excititor/StellaOps.Excititor.WebService/StellaOps.Excititor.WebService.csproj", | ||||
|       "entrypoint": "StellaOps.Excititor.WebService.dll" | ||||
|     }, | ||||
|     { | ||||
| @@ -81,7 +81,7 @@ | ||||
|     } | ||||
|   ], | ||||
|   "cli": { | ||||
|     "project": "src/StellaOps.Cli/StellaOps.Cli.csproj", | ||||
|     "project": "src/Cli/StellaOps.Cli/StellaOps.Cli.csproj", | ||||
|     "runtimes": [ | ||||
|       "linux-x64", | ||||
|       "linux-arm64", | ||||
| @@ -104,6 +104,6 @@ | ||||
|     ] | ||||
|   }, | ||||
|   "buildxPlugin": { | ||||
|     "project": "src/StellaOps.Scanner.Sbomer.BuildXPlugin/StellaOps.Scanner.Sbomer.BuildXPlugin.csproj" | ||||
|     "project": "src/Scanner/StellaOps.Scanner.Sbomer.BuildXPlugin/StellaOps.Scanner.Sbomer.BuildXPlugin.csproj" | ||||
|   } | ||||
| } | ||||
|   | ||||
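The hunks above repoint every `project` path at its new per-module directory (e.g. `src/Cli/StellaOps.Cli/...`). A cheap guard against a path missed during the restructure is to resolve each referenced `.csproj` against the repo root. A minimal sketch, assuming the service entries sit under a `components` array in the release config (the surrounding key is not shown in this diff) and using a hypothetical `find_missing_projects` helper:

```python
from pathlib import Path

def find_missing_projects(config: dict, repo_root: Path) -> list[str]:
    """Return every 'project' path referenced by the release config
    that does not exist as a file under repo_root.

    Hypothetical helper; assumes service entries live in a
    'components' array alongside the 'cli'/'buildxPlugin' keys.
    """
    paths = [c["project"] for c in config.get("components", []) if "project" in c]
    for key in ("cli", "buildxPlugin"):
        entry = config.get(key, {})
        if "project" in entry:
            paths.append(entry["project"])
    return [p for p in paths if not (repo_root / p).is_file()]
```

Running this once after a layout change turns a late `dotnet restore` failure into an immediate, readable list of stale paths.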
| @@ -11,9 +11,9 @@ FROM ${NODE_IMAGE} AS build | ||||
| WORKDIR /workspace | ||||
| ENV CI=1 \ | ||||
|     SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH} | ||||
| COPY src/StellaOps.Web/package.json src/StellaOps.Web/package-lock.json ./ | ||||
| COPY src/Web/StellaOps.Web/package.json src/Web/StellaOps.Web/package-lock.json ./ | ||||
| RUN npm ci --prefer-offline --no-audit --no-fund | ||||
| COPY src/StellaOps.Web/ ./ | ||||
| COPY src/Web/StellaOps.Web/ ./ | ||||
| RUN npm run build -- --configuration=production | ||||
|  | ||||
| FROM ${NGINX_IMAGE} AS runtime | ||||
|   | ||||
| @@ -1,52 +1,52 @@ | ||||
| # syntax=docker/dockerfile:1.7-labs | ||||
|  | ||||
| ARG SDK_IMAGE=mcr.microsoft.com/dotnet/nightly/sdk:10.0 | ||||
| ARG RUNTIME_IMAGE=gcr.io/distroless/dotnet/aspnet:latest | ||||
|  | ||||
| ARG PROJECT | ||||
| ARG ENTRYPOINT_DLL | ||||
| ARG VERSION=0.0.0 | ||||
| ARG CHANNEL=dev | ||||
| ARG GIT_SHA=0000000 | ||||
| ARG SOURCE_DATE_EPOCH=0 | ||||
|  | ||||
| FROM ${SDK_IMAGE} AS build | ||||
| ARG PROJECT | ||||
| ARG GIT_SHA | ||||
| ARG SOURCE_DATE_EPOCH | ||||
| WORKDIR /src | ||||
| ENV DOTNET_CLI_TELEMETRY_OPTOUT=1 \ | ||||
|     DOTNET_SKIP_FIRST_TIME_EXPERIENCE=1 \ | ||||
|     NUGET_XMLDOC_MODE=skip \ | ||||
|     SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH} | ||||
| COPY . . | ||||
| RUN --mount=type=cache,target=/root/.nuget/packages \ | ||||
|     dotnet restore "${PROJECT}" | ||||
| RUN --mount=type=cache,target=/root/.nuget/packages \ | ||||
|     dotnet publish "${PROJECT}" \ | ||||
|       -c Release \ | ||||
|       -o /app/publish \ | ||||
|       /p:UseAppHost=false \ | ||||
|       /p:ContinuousIntegrationBuild=true \ | ||||
|       /p:SourceRevisionId=${GIT_SHA} \ | ||||
|       /p:Deterministic=true \ | ||||
|       /p:TreatWarningsAsErrors=true | ||||
|  | ||||
| FROM ${RUNTIME_IMAGE} AS runtime | ||||
| WORKDIR /app | ||||
| ARG ENTRYPOINT_DLL | ||||
| ARG VERSION | ||||
| ARG CHANNEL | ||||
| ARG GIT_SHA | ||||
| ENV DOTNET_EnableDiagnostics=0 \ | ||||
|     ASPNETCORE_URLS=http://0.0.0.0:8080 | ||||
| COPY --from=build /app/publish/ ./ | ||||
| RUN set -eu; \ | ||||
|     printf '#!/usr/bin/env sh\nset -e\nexec dotnet %s "$@"\n' "${ENTRYPOINT_DLL}" > /entrypoint.sh; \ | ||||
|     chmod +x /entrypoint.sh | ||||
| EXPOSE 8080 | ||||
| LABEL org.opencontainers.image.version="${VERSION}" \ | ||||
|       org.opencontainers.image.revision="${GIT_SHA}" \ | ||||
|       org.opencontainers.image.source="https://git.stella-ops.org/stella-ops/feedser" \ | ||||
|       org.stellaops.release.channel="${CHANNEL}" | ||||
| ENTRYPOINT ["/entrypoint.sh"] | ||||
|   | ||||
| @@ -1,22 +1,22 @@ | ||||
| server { | ||||
|     listen       8080; | ||||
|     listen       [::]:8080; | ||||
|     server_name  _; | ||||
|  | ||||
|     root   /usr/share/nginx/html; | ||||
|     index  index.html; | ||||
|  | ||||
|     location / { | ||||
|         try_files $uri $uri/ /index.html; | ||||
|     } | ||||
|  | ||||
|     location ~* \.(?:js|css|svg|png|jpg|jpeg|gif|ico|woff2?)$ { | ||||
|         add_header Cache-Control "public, max-age=2592000"; | ||||
|     } | ||||
|  | ||||
|     location = /healthz { | ||||
|         access_log off; | ||||
|         add_header Content-Type text/plain; | ||||
|         return 200 'ok'; | ||||
|     } | ||||
| } | ||||
|   | ||||
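The `/healthz` location above returns a plain-text `ok` with a 200 status, which makes liveness checks trivial to script. A minimal probe sketch (the `base_url` default assumes the container is published on local port 8080; adjust for your deployment):

```python
import urllib.request

def check_health(base_url: str = "http://localhost:8080") -> bool:
    """Probe the UI container's /healthz endpoint.

    Returns True only when the endpoint answers 200 with the exact
    body 'ok', matching the nginx config above.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
            return resp.status == 200 and resp.read() == b"ok"
    except OSError:
        # Connection refused, DNS failure, timeout, etc.
        return False
```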
| @@ -1,232 +1,232 @@ | ||||
| from __future__ import annotations | ||||
|  | ||||
| import json | ||||
| import tempfile | ||||
| import unittest | ||||
| from collections import OrderedDict | ||||
| from pathlib import Path | ||||
| import sys | ||||
|  | ||||
| sys.path.append(str(Path(__file__).resolve().parent)) | ||||
|  | ||||
| from build_release import write_manifest  # type: ignore[import-not-found] | ||||
| from verify_release import VerificationError, compute_sha256, verify_release | ||||
|  | ||||
|  | ||||
| class VerifyReleaseTests(unittest.TestCase): | ||||
|     def setUp(self) -> None: | ||||
|         self._temp = tempfile.TemporaryDirectory() | ||||
|         self.base_path = Path(self._temp.name) | ||||
|         self.out_dir = self.base_path / "out" | ||||
|         self.release_dir = self.out_dir / "release" | ||||
|         self.release_dir.mkdir(parents=True, exist_ok=True) | ||||
|  | ||||
|     def tearDown(self) -> None: | ||||
|         self._temp.cleanup() | ||||
|  | ||||
|     def _relative_to_out(self, path: Path) -> str: | ||||
|         return path.relative_to(self.out_dir).as_posix() | ||||
|  | ||||
|     def _write_json(self, path: Path, payload: dict[str, object]) -> None: | ||||
|         path.parent.mkdir(parents=True, exist_ok=True) | ||||
|         with path.open("w", encoding="utf-8") as handle: | ||||
|             json.dump(payload, handle, indent=2) | ||||
|             handle.write("\n") | ||||
|  | ||||
|     def _create_sample_release(self) -> None: | ||||
|         sbom_path = self.release_dir / "artifacts/sboms/sample.cyclonedx.json" | ||||
|         sbom_path.parent.mkdir(parents=True, exist_ok=True) | ||||
|         sbom_path.write_text('{"bomFormat":"CycloneDX","specVersion":"1.5"}\n', encoding="utf-8") | ||||
|         sbom_sha = compute_sha256(sbom_path) | ||||
|  | ||||
|         provenance_path = self.release_dir / "artifacts/provenance/sample.provenance.json" | ||||
|         self._write_json( | ||||
|             provenance_path, | ||||
|             { | ||||
|                 "buildDefinition": {"buildType": "https://example/build", "externalParameters": {}}, | ||||
|                 "runDetails": {"builder": {"id": "https://example/ci"}}, | ||||
|             }, | ||||
|         ) | ||||
|         provenance_sha = compute_sha256(provenance_path) | ||||
|  | ||||
|         signature_path = self.release_dir / "artifacts/signatures/sample.signature" | ||||
|         signature_path.parent.mkdir(parents=True, exist_ok=True) | ||||
|         signature_path.write_text("signature-data\n", encoding="utf-8") | ||||
|         signature_sha = compute_sha256(signature_path) | ||||
|  | ||||
|         metadata_path = self.release_dir / "artifacts/metadata/sample.metadata.json" | ||||
|         self._write_json(metadata_path, {"digest": "sha256:1234"}) | ||||
|         metadata_sha = compute_sha256(metadata_path) | ||||
|  | ||||
|         chart_path = self.release_dir / "helm/stellaops-1.0.0.tgz" | ||||
|         chart_path.parent.mkdir(parents=True, exist_ok=True) | ||||
|         chart_path.write_bytes(b"helm-chart-data") | ||||
|         chart_sha = compute_sha256(chart_path) | ||||
|  | ||||
|         compose_path = self.release_dir.parent / "deploy/compose/docker-compose.dev.yaml" | ||||
|         compose_path.parent.mkdir(parents=True, exist_ok=True) | ||||
|         compose_path.write_text("services: {}\n", encoding="utf-8") | ||||
|         compose_sha = compute_sha256(compose_path) | ||||
|  | ||||
|         debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug" | ||||
|         debug_file.parent.mkdir(parents=True, exist_ok=True) | ||||
|         debug_file.write_bytes(b"\x7fELFDEBUGDATA") | ||||
|         debug_sha = compute_sha256(debug_file) | ||||
|  | ||||
|         debug_manifest_path = self.release_dir / "debug/debug-manifest.json" | ||||
|         debug_manifest = OrderedDict( | ||||
|             ( | ||||
|                 ("generatedAt", "2025-10-26T00:00:00Z"), | ||||
|                 ("version", "1.0.0"), | ||||
|                 ("channel", "edge"), | ||||
|                 ( | ||||
|                     "artifacts", | ||||
|                     [ | ||||
|                         OrderedDict( | ||||
|                             ( | ||||
|                                 ("buildId", "abcdef1234"), | ||||
|                                 ("platform", "linux/amd64"), | ||||
|                                 ("debugPath", "debug/.build-id/ab/cdef.debug"), | ||||
|                                 ("sha256", debug_sha), | ||||
|                                 ("size", debug_file.stat().st_size), | ||||
|                                 ("components", ["sample"]), | ||||
|                                 ("images", ["registry.example/sample@sha256:feedface"]), | ||||
|                                 ("sources", ["app/sample.dll"]), | ||||
|                             ) | ||||
|                         ) | ||||
|                     ], | ||||
|                 ), | ||||
|             ) | ||||
|         ) | ||||
|         self._write_json(debug_manifest_path, debug_manifest) | ||||
|         debug_manifest_sha = compute_sha256(debug_manifest_path) | ||||
|         (debug_manifest_path.with_suffix(debug_manifest_path.suffix + ".sha256")).write_text( | ||||
|             f"{debug_manifest_sha}  {debug_manifest_path.name}\n", encoding="utf-8" | ||||
|         ) | ||||
|  | ||||
|         manifest = OrderedDict( | ||||
|             ( | ||||
|                 ( | ||||
|                     "release", | ||||
|                     OrderedDict( | ||||
|                         ( | ||||
|                             ("version", "1.0.0"), | ||||
|                             ("channel", "edge"), | ||||
|                             ("date", "2025-10-26T00:00:00Z"), | ||||
|                             ("calendar", "2025.10"), | ||||
|                         ) | ||||
|                     ), | ||||
|                 ), | ||||
|                 ( | ||||
|                     "components", | ||||
|                     [ | ||||
|                         OrderedDict( | ||||
|                             ( | ||||
|                                 ("name", "sample"), | ||||
|                                 ("image", "registry.example/sample@sha256:feedface"), | ||||
|                                 ("tags", ["registry.example/sample:1.0.0"]), | ||||
|                                 ( | ||||
|                                     "sbom", | ||||
|                                     OrderedDict( | ||||
|                                         ( | ||||
|                                             ("path", self._relative_to_out(sbom_path)), | ||||
|                                             ("sha256", sbom_sha), | ||||
|                                         ) | ||||
|                                     ), | ||||
|                                 ), | ||||
|                                 ( | ||||
|                                     "provenance", | ||||
|                                     OrderedDict( | ||||
|                                         ( | ||||
|                                             ("path", self._relative_to_out(provenance_path)), | ||||
|                                             ("sha256", provenance_sha), | ||||
|                                         ) | ||||
|                                     ), | ||||
|                                 ), | ||||
|                                 ( | ||||
|                                     "signature", | ||||
|                                     OrderedDict( | ||||
|                                         ( | ||||
|                                             ("path", self._relative_to_out(signature_path)), | ||||
|                                             ("sha256", signature_sha), | ||||
|                                             ("ref", "sigstore://example"), | ||||
|                                             ("tlogUploaded", True), | ||||
|                                         ) | ||||
|                                     ), | ||||
|                                 ), | ||||
|                                 ( | ||||
|                                     "metadata", | ||||
|                                     OrderedDict( | ||||
|                                         ( | ||||
|                                             ("path", self._relative_to_out(metadata_path)), | ||||
|                                             ("sha256", metadata_sha), | ||||
|                                         ) | ||||
|                                     ), | ||||
|                                 ), | ||||
|                             ) | ||||
|                         ) | ||||
|                     ], | ||||
|                 ), | ||||
|                 ( | ||||
|                     "charts", | ||||
|                     [ | ||||
|                         OrderedDict( | ||||
|                             ( | ||||
|                                 ("name", "stellaops"), | ||||
|                                 ("version", "1.0.0"), | ||||
|                                 ("path", self._relative_to_out(chart_path)), | ||||
|                                 ("sha256", chart_sha), | ||||
|                             ) | ||||
|                         ) | ||||
|                     ], | ||||
|                 ), | ||||
|                 ( | ||||
|                     "compose", | ||||
|                     [ | ||||
|                         OrderedDict( | ||||
|                             ( | ||||
|                                 ("name", "docker-compose.dev.yaml"), | ||||
|                                 ("path", compose_path.relative_to(self.out_dir).as_posix()), | ||||
|                                 ("sha256", compose_sha), | ||||
|                             ) | ||||
|                         ) | ||||
|                     ], | ||||
|                 ), | ||||
|                 ( | ||||
|                     "debugStore", | ||||
|                     OrderedDict( | ||||
|                         ( | ||||
|                             ("manifest", "debug/debug-manifest.json"), | ||||
|                             ("sha256", debug_manifest_sha), | ||||
|                             ("entries", 1), | ||||
|                             ("platforms", ["linux/amd64"]), | ||||
|                             ("directory", "debug/.build-id"), | ||||
|                         ) | ||||
|                     ), | ||||
|                 ), | ||||
|             ) | ||||
|         ) | ||||
|         write_manifest(manifest, self.release_dir) | ||||
|  | ||||
|     def test_verify_release_success(self) -> None: | ||||
|         self._create_sample_release() | ||||
|         # Should not raise | ||||
|         verify_release(self.release_dir) | ||||
|  | ||||
|     def test_verify_release_detects_sha_mismatch(self) -> None: | ||||
|         self._create_sample_release() | ||||
|         tampered = self.release_dir / "artifacts/sboms/sample.cyclonedx.json" | ||||
|         tampered.write_text("tampered\n", encoding="utf-8") | ||||
|         with self.assertRaises(VerificationError): | ||||
|             verify_release(self.release_dir) | ||||
|  | ||||
|     def test_verify_release_detects_missing_debug_file(self) -> None: | ||||
|         self._create_sample_release() | ||||
|         debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug" | ||||
|         debug_file.unlink() | ||||
|         with self.assertRaises(VerificationError): | ||||
|             verify_release(self.release_dir) | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     unittest.main() | ||||
|   | ||||
| @@ -1,334 +1,334 @@ | ||||
| #!/usr/bin/env python3 | ||||
| """Verify release artefacts (SBOMs, provenance, signatures, manifest hashes).""" | ||||
|  | ||||
| from __future__ import annotations | ||||
|  | ||||
| import argparse | ||||
| import hashlib | ||||
| import json | ||||
| import pathlib | ||||
| import sys | ||||
| from collections import OrderedDict | ||||
| from typing import Any, Mapping, Optional | ||||
|  | ||||
| from build_release import dump_yaml  # type: ignore[import-not-found] | ||||
|  | ||||
|  | ||||
| class VerificationError(Exception): | ||||
|     """Raised when release artefacts fail verification.""" | ||||
|  | ||||
|  | ||||
| def compute_sha256(path: pathlib.Path) -> str: | ||||
|     sha = hashlib.sha256() | ||||
|     with path.open("rb") as handle: | ||||
|         for chunk in iter(lambda: handle.read(1024 * 1024), b""): | ||||
|             sha.update(chunk) | ||||
|     return sha.hexdigest() | ||||
|  | ||||
|  | ||||
| def parse_sha_file(path: pathlib.Path) -> Optional[str]: | ||||
|     if not path.exists(): | ||||
|         return None | ||||
|     content = path.read_text(encoding="utf-8").strip() | ||||
|     if not content: | ||||
|         return None | ||||
|     return content.split()[0] | ||||
|  | ||||
|  | ||||
| def resolve_path(path_str: str, release_dir: pathlib.Path) -> pathlib.Path: | ||||
|     candidate = pathlib.Path(path_str.replace("\\", "/")) | ||||
|     if candidate.is_absolute(): | ||||
|         return candidate | ||||
|  | ||||
|     for base in (release_dir, release_dir.parent, release_dir.parent.parent): | ||||
|         resolved = (base / candidate).resolve() | ||||
|         if resolved.exists(): | ||||
|             return resolved | ||||
|     # Fall back to the release_dir-joined path even when missing so the caller can report it. | ||||
|     return (release_dir / candidate).resolve() | ||||
|  | ||||
|  | ||||
| def load_manifest(release_dir: pathlib.Path) -> OrderedDict[str, Any]: | ||||
|     manifest_path = release_dir / "release.json" | ||||
|     if not manifest_path.exists(): | ||||
|         raise VerificationError(f"Release manifest JSON missing at {manifest_path}") | ||||
|     try: | ||||
|         with manifest_path.open("r", encoding="utf-8") as handle: | ||||
|             return json.load(handle, object_pairs_hook=OrderedDict) | ||||
|     except json.JSONDecodeError as exc: | ||||
|         raise VerificationError(f"Failed to parse {manifest_path}: {exc}") from exc | ||||
|  | ||||
|  | ||||
| def verify_manifest_hashes( | ||||
|     manifest: Mapping[str, Any], | ||||
|     release_dir: pathlib.Path, | ||||
|     errors: list[str], | ||||
| ) -> None: | ||||
|     yaml_path = release_dir / "release.yaml" | ||||
|     if not yaml_path.exists(): | ||||
|         errors.append(f"Missing release.yaml at {yaml_path}") | ||||
|         return | ||||
|  | ||||
|     recorded_yaml_sha = parse_sha_file(yaml_path.with_name(yaml_path.name + ".sha256")) | ||||
|     actual_yaml_sha = compute_sha256(yaml_path) | ||||
|     if recorded_yaml_sha and recorded_yaml_sha != actual_yaml_sha: | ||||
|         errors.append( | ||||
|             f"release.yaml.sha256 recorded {recorded_yaml_sha} but file hashes to {actual_yaml_sha}" | ||||
|         ) | ||||
|  | ||||
|     json_path = release_dir / "release.json" | ||||
|     recorded_json_sha = parse_sha_file(json_path.with_name(json_path.name + ".sha256")) | ||||
|     actual_json_sha = compute_sha256(json_path) | ||||
|     if recorded_json_sha and recorded_json_sha != actual_json_sha: | ||||
|         errors.append( | ||||
|             f"release.json.sha256 recorded {recorded_json_sha} but file hashes to {actual_json_sha}" | ||||
|         ) | ||||
|  | ||||
|     checksums = manifest.get("checksums") | ||||
|     if isinstance(checksums, Mapping): | ||||
|         recorded_digest = checksums.get("sha256") | ||||
|         base_manifest = OrderedDict(manifest) | ||||
|         base_manifest.pop("checksums", None) | ||||
|         yaml_without_checksums = dump_yaml(base_manifest) | ||||
|         computed_digest = hashlib.sha256(yaml_without_checksums.encode("utf-8")).hexdigest() | ||||
|         if recorded_digest != computed_digest: | ||||
|             errors.append( | ||||
|                 "Manifest checksum mismatch: " | ||||
|                 f"recorded {recorded_digest}, computed {computed_digest}" | ||||
|             ) | ||||
|  | ||||
|  | ||||
| def verify_artifact_entry( | ||||
|     entry: Mapping[str, Any], | ||||
|     release_dir: pathlib.Path, | ||||
|     label: str, | ||||
|     component_name: str, | ||||
|     errors: list[str], | ||||
| ) -> None: | ||||
|     path_str = entry.get("path") | ||||
|     if not path_str: | ||||
|         errors.append(f"{component_name}: {label} missing 'path' field.") | ||||
|         return | ||||
|     resolved = resolve_path(str(path_str), release_dir) | ||||
|     if not resolved.exists(): | ||||
|         errors.append(f"{component_name}: {label} path does not exist → {resolved}") | ||||
|         return | ||||
|     recorded_sha = entry.get("sha256") | ||||
|     if recorded_sha: | ||||
|         actual_sha = compute_sha256(resolved) | ||||
|         if actual_sha != recorded_sha: | ||||
|             errors.append( | ||||
|                 f"{component_name}: {label} SHA mismatch for {resolved} " | ||||
|                 f"(recorded {recorded_sha}, computed {actual_sha})" | ||||
|             ) | ||||
|  | ||||
|  | ||||
| def verify_components(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None: | ||||
|     for component in manifest.get("components", []): | ||||
|         if not isinstance(component, Mapping): | ||||
|             errors.append("Component entry is not a mapping.") | ||||
|             continue | ||||
|         name = str(component.get("name", "<unknown>")) | ||||
|         for key, label in ( | ||||
|             ("sbom", "SBOM"), | ||||
|             ("provenance", "provenance"), | ||||
|             ("signature", "signature"), | ||||
|             ("metadata", "metadata"), | ||||
|         ): | ||||
|             entry = component.get(key) | ||||
|             if not entry: | ||||
|                 continue | ||||
|             if not isinstance(entry, Mapping): | ||||
|                 errors.append(f"{name}: {label} entry must be a mapping.") | ||||
|                 continue | ||||
|             verify_artifact_entry(entry, release_dir, label, name, errors) | ||||
|  | ||||
|  | ||||
| def verify_collections(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None: | ||||
|     for collection, label in ( | ||||
|         ("charts", "chart"), | ||||
|         ("compose", "compose file"), | ||||
|     ): | ||||
|         for item in manifest.get(collection, []): | ||||
|             if not isinstance(item, Mapping): | ||||
|                 errors.append(f"{collection} entry is not a mapping.") | ||||
|                 continue | ||||
|             path_value = item.get("path") | ||||
|             if not path_value: | ||||
|                 errors.append(f"{collection} entry missing path.") | ||||
|                 continue | ||||
|             resolved = resolve_path(str(path_value), release_dir) | ||||
|             if not resolved.exists(): | ||||
|                 errors.append(f"{label} missing file → {resolved}") | ||||
|                 continue | ||||
|             recorded_sha = item.get("sha256") | ||||
|             if recorded_sha: | ||||
|                 actual_sha = compute_sha256(resolved) | ||||
|                 if actual_sha != recorded_sha: | ||||
|                     errors.append( | ||||
|                         f"{label} SHA mismatch for {resolved} " | ||||
|                         f"(recorded {recorded_sha}, computed {actual_sha})" | ||||
|                     ) | ||||
|  | ||||
|  | ||||
| def verify_debug_store(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None: | ||||
|     debug = manifest.get("debugStore") | ||||
|     if not isinstance(debug, Mapping): | ||||
|         return | ||||
|     manifest_path_str = debug.get("manifest") | ||||
|     manifest_data: Optional[Mapping[str, Any]] = None | ||||
|     if manifest_path_str: | ||||
|         manifest_path = resolve_path(str(manifest_path_str), release_dir) | ||||
|         if not manifest_path.exists(): | ||||
|             errors.append(f"Debug manifest missing → {manifest_path}") | ||||
|         else: | ||||
|             recorded_sha = debug.get("sha256") | ||||
|             if recorded_sha: | ||||
|                 actual_sha = compute_sha256(manifest_path) | ||||
|                 if actual_sha != recorded_sha: | ||||
|                     errors.append( | ||||
|                         f"Debug manifest SHA mismatch (recorded {recorded_sha}, computed {actual_sha})" | ||||
|                     ) | ||||
|             sha_sidecar = manifest_path.with_suffix(manifest_path.suffix + ".sha256") | ||||
|             sidecar_sha = parse_sha_file(sha_sidecar) | ||||
|             if sidecar_sha and recorded_sha and sidecar_sha != recorded_sha: | ||||
|                 errors.append( | ||||
|                     f"Debug manifest sidecar digest {sidecar_sha} disagrees with recorded {recorded_sha}" | ||||
|                 ) | ||||
|             try: | ||||
|                 with manifest_path.open("r", encoding="utf-8") as handle: | ||||
|                     manifest_data = json.load(handle) | ||||
|             except json.JSONDecodeError as exc: | ||||
|                 errors.append(f"Debug manifest JSON invalid: {exc}") | ||||
|     directory = debug.get("directory") | ||||
|     if directory: | ||||
|         debug_dir = resolve_path(str(directory), release_dir) | ||||
|         if not debug_dir.exists(): | ||||
|             errors.append(f"Debug directory missing → {debug_dir}") | ||||
|  | ||||
|     if manifest_data: | ||||
|         artifacts = manifest_data.get("artifacts") | ||||
|         if not isinstance(artifacts, list) or not artifacts: | ||||
|             errors.append("Debug manifest contains no artefacts.") | ||||
|             return | ||||
|  | ||||
|         declared_entries = debug.get("entries") | ||||
|         if isinstance(declared_entries, int) and declared_entries != len(artifacts): | ||||
|             errors.append( | ||||
|                 f"Debug manifest reports {declared_entries} entries but contains {len(artifacts)} artefacts." | ||||
|             ) | ||||
|  | ||||
|         for artefact in artifacts: | ||||
|             if not isinstance(artefact, Mapping): | ||||
|                 errors.append("Debug manifest artefact entry is not a mapping.") | ||||
|                 continue | ||||
|             debug_path = artefact.get("debugPath") | ||||
|             artefact_sha = artefact.get("sha256") | ||||
|             if not debug_path or not artefact_sha: | ||||
|                 errors.append("Debug manifest artefact missing debugPath or sha256.") | ||||
|                 continue | ||||
|             resolved_debug = resolve_path(str(debug_path), release_dir) | ||||
|             if not resolved_debug.exists(): | ||||
|                 errors.append(f"Debug artefact missing → {resolved_debug}") | ||||
|                 continue | ||||
|             actual_sha = compute_sha256(resolved_debug) | ||||
|             if actual_sha != artefact_sha: | ||||
|                 errors.append( | ||||
|                     f"Debug artefact SHA mismatch for {resolved_debug} " | ||||
|                     f"(recorded {artefact_sha}, computed {actual_sha})" | ||||
|                 ) | ||||
|  | ||||
|  | ||||
| def verify_signature(signature: Mapping[str, Any], release_dir: pathlib.Path, label: str, component_name: str, errors: list[str]) -> None: | ||||
|     sig_path_value = signature.get("path") | ||||
|     if not sig_path_value: | ||||
|         errors.append(f"{component_name}: {label} signature missing path.") | ||||
|         return | ||||
|     sig_path = resolve_path(str(sig_path_value), release_dir) | ||||
|     if not sig_path.exists(): | ||||
|         errors.append(f"{component_name}: {label} signature missing → {sig_path}") | ||||
|         return | ||||
|     recorded_sha = signature.get("sha256") | ||||
|     if recorded_sha: | ||||
|         actual_sha = compute_sha256(sig_path) | ||||
|         if actual_sha != recorded_sha: | ||||
|             errors.append( | ||||
|                 f"{component_name}: {label} signature SHA mismatch for {sig_path} " | ||||
|                 f"(recorded {recorded_sha}, computed {actual_sha})" | ||||
|             ) | ||||
|  | ||||
|  | ||||
| def verify_cli_entries(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None: | ||||
|     cli_entries = manifest.get("cli") | ||||
|     if not cli_entries: | ||||
|         return | ||||
|     if not isinstance(cli_entries, list): | ||||
|         errors.append("CLI manifest section must be a list.") | ||||
|         return | ||||
|     for entry in cli_entries: | ||||
|         if not isinstance(entry, Mapping): | ||||
|             errors.append("CLI entry must be a mapping.") | ||||
|             continue | ||||
|         runtime = entry.get("runtime", "<unknown>") | ||||
|         component_name = f"cli[{runtime}]" | ||||
|         archive = entry.get("archive") | ||||
|         if not isinstance(archive, Mapping): | ||||
|             errors.append(f"{component_name}: archive metadata missing or invalid.") | ||||
|         else: | ||||
|             verify_artifact_entry(archive, release_dir, "archive", component_name, errors) | ||||
|             signature = archive.get("signature") | ||||
|             if isinstance(signature, Mapping): | ||||
|                 verify_signature(signature, release_dir, "archive", component_name, errors) | ||||
|             elif signature is not None: | ||||
|                 errors.append(f"{component_name}: archive signature must be an object.") | ||||
|         sbom = entry.get("sbom") | ||||
|         if sbom: | ||||
|             if not isinstance(sbom, Mapping): | ||||
|                 errors.append(f"{component_name}: sbom entry must be a mapping.") | ||||
|             else: | ||||
|                 verify_artifact_entry(sbom, release_dir, "sbom", component_name, errors) | ||||
|                 signature = sbom.get("signature") | ||||
|                 if isinstance(signature, Mapping): | ||||
|                     verify_signature(signature, release_dir, "sbom", component_name, errors) | ||||
|                 elif signature is not None: | ||||
|                     errors.append(f"{component_name}: sbom signature must be an object.") | ||||
|  | ||||
|  | ||||
| def verify_release(release_dir: pathlib.Path) -> None: | ||||
|     if not release_dir.exists(): | ||||
|         raise VerificationError(f"Release directory not found: {release_dir}") | ||||
|     manifest = load_manifest(release_dir) | ||||
|     errors: list[str] = [] | ||||
|     verify_manifest_hashes(manifest, release_dir, errors) | ||||
|     verify_components(manifest, release_dir, errors) | ||||
|     verify_cli_entries(manifest, release_dir, errors) | ||||
|     verify_collections(manifest, release_dir, errors) | ||||
|     verify_debug_store(manifest, release_dir, errors) | ||||
|     if errors: | ||||
|         bullet_list = "\n - ".join(errors) | ||||
|         raise VerificationError(f"Release verification failed:\n - {bullet_list}") | ||||
|  | ||||
|  | ||||
| def parse_args(argv: list[str] | None = None) -> argparse.Namespace: | ||||
|     parser = argparse.ArgumentParser(description=__doc__) | ||||
|     parser.add_argument( | ||||
|         "--release-dir", | ||||
|         type=pathlib.Path, | ||||
|         default=pathlib.Path("out/release"), | ||||
|         help="Path to the release artefact directory (default: %(default)s)", | ||||
|     ) | ||||
|     return parser.parse_args(argv) | ||||
|  | ||||
|  | ||||
| def main(argv: list[str] | None = None) -> int: | ||||
|     args = parse_args(argv) | ||||
|     try: | ||||
|         verify_release(args.release_dir.resolve()) | ||||
|     except VerificationError as exc: | ||||
|         print(str(exc), file=sys.stderr) | ||||
|         return 1 | ||||
|     print(f"✅ Release artefacts verified OK in {args.release_dir}") | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     raise SystemExit(main()) | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     raise SystemExit(main()) | ||||
|   | ||||
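The hash checks above all follow the same pattern: recompute the digest, then append a descriptive message to a shared `errors` list instead of failing fast, so a single run reports every bad artefact. A minimal standalone sketch of that pattern (the file name and `check_artifact` helper are illustrative, not part of the release tooling):

```python
import hashlib
import pathlib
import tempfile


def compute_sha256(path: pathlib.Path) -> str:
    sha = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            sha.update(chunk)
    return sha.hexdigest()


def check_artifact(path: pathlib.Path, recorded_sha: str, errors: list[str]) -> None:
    # Collect mismatches instead of raising, so one report covers every artefact.
    actual = compute_sha256(path)
    if actual != recorded_sha:
        errors.append(f"{path.name}: recorded {recorded_sha}, computed {actual}")


with tempfile.TemporaryDirectory() as tmp:
    artefact = pathlib.Path(tmp) / "bundle.bin"
    artefact.write_bytes(b"hello")
    good = hashlib.sha256(b"hello").hexdigest()
    errors: list[str] = []
    check_artifact(artefact, good, errors)      # matches: nothing recorded
    check_artifact(artefact, "0" * 64, errors)  # mismatch: one error recorded
    print(len(errors))
```

Accumulating errors this way is what lets `verify_release` surface them all at once in a single `VerificationError` bullet list.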
| @@ -1,77 +1,77 @@ | ||||
| /** | ||||
|  * Aggregation helper that surfaces advisory_raw duplicate candidates prior to enabling the | ||||
|  * idempotency unique index. Intended for staging/offline snapshots. | ||||
|  * | ||||
|  * Usage: | ||||
|  *   mongo concelier ops/devops/scripts/check-advisory-raw-duplicates.js | ||||
|  * | ||||
|  * Environment variables: | ||||
|  *   LIMIT - optional cap on number of duplicate groups to print (default 50). | ||||
|  */ | ||||
| (function () { | ||||
|   function toInt(value, fallback) { | ||||
|     var parsed = parseInt(value, 10); | ||||
|     return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback; | ||||
|   } | ||||
|  | ||||
|   var limit = typeof LIMIT !== "undefined" ? toInt(LIMIT, 50) : 50; | ||||
|   var database = db.getName ? db.getSiblingDB(db.getName()) : db; | ||||
|   if (!database) { | ||||
|     throw new Error("Unable to resolve database handle"); | ||||
|   } | ||||
|  | ||||
|   print(""); | ||||
|   print("== advisory_raw duplicate audit =="); | ||||
|   print("Database: " + database.getName()); | ||||
|   print("Limit   : " + limit); | ||||
|   print(""); | ||||
|  | ||||
|   var pipeline = [ | ||||
|     { | ||||
|       $group: { | ||||
|         _id: { | ||||
|           vendor: "$source.vendor", | ||||
|           upstreamId: "$upstream.upstream_id", | ||||
|           contentHash: "$upstream.content_hash", | ||||
|           tenant: "$tenant" | ||||
|         }, | ||||
|         ids: { $addToSet: "$_id" }, | ||||
|         count: { $sum: 1 } | ||||
|       } | ||||
|     }, | ||||
|     { $match: { count: { $gt: 1 } } }, | ||||
|     { | ||||
|       $project: { | ||||
|         _id: 0, | ||||
|         vendor: "$_id.vendor", | ||||
|         upstreamId: "$_id.upstreamId", | ||||
|         contentHash: "$_id.contentHash", | ||||
|         tenant: "$_id.tenant", | ||||
|         count: 1, | ||||
|         ids: 1 | ||||
|       } | ||||
|     }, | ||||
|     { $sort: { count: -1, vendor: 1, upstreamId: 1 } }, | ||||
|     { $limit: limit } | ||||
|   ]; | ||||
|  | ||||
|   var cursor = database.getCollection("advisory_raw").aggregate(pipeline, { allowDiskUse: true }); | ||||
|   var any = false; | ||||
|   while (cursor.hasNext()) { | ||||
|     var doc = cursor.next(); | ||||
|     any = true; | ||||
|     print("---"); | ||||
|     print("vendor      : " + doc.vendor); | ||||
|     print("upstream_id : " + doc.upstreamId); | ||||
|     print("tenant      : " + doc.tenant); | ||||
|     print("content_hash: " + doc.contentHash); | ||||
|     print("count       : " + doc.count); | ||||
|     print("ids         : " + doc.ids.join(", ")); | ||||
|   } | ||||
|  | ||||
|   if (!any) { | ||||
|     print("No duplicate advisory_raw documents detected."); | ||||
|   } | ||||
|  | ||||
|   print(""); | ||||
| })(); | ||||
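The aggregation above boils down to grouping documents on the four-field identity (vendor, upstream id, content hash, tenant) and keeping groups with more than one member. The same logic in plain Python over an in-memory sample (the sample documents are illustrative):

```python
from collections import defaultdict

docs = [
    {"_id": 1, "source": {"vendor": "osv"}, "upstream": {"upstream_id": "CVE-1", "content_hash": "aaa"}, "tenant": "dev"},
    {"_id": 2, "source": {"vendor": "osv"}, "upstream": {"upstream_id": "CVE-1", "content_hash": "aaa"}, "tenant": "dev"},
    {"_id": 3, "source": {"vendor": "osv"}, "upstream": {"upstream_id": "CVE-2", "content_hash": "bbb"}, "tenant": "dev"},
]

# $group stage: bucket document ids by the idempotency key.
groups: dict[tuple, list] = defaultdict(list)
for doc in docs:
    key = (doc["source"]["vendor"], doc["upstream"]["upstream_id"],
           doc["upstream"]["content_hash"], doc["tenant"])
    groups[key].append(doc["_id"])

# $match stage: keep only keys that would violate the unique index.
duplicates = {key: ids for key, ids in groups.items() if len(ids) > 1}
print(duplicates)
```

Any key left in `duplicates` corresponds to a group the script prints, and must be resolved before the unique index can be created.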
|   | ||||
| @@ -1,71 +1,71 @@ | ||||
| #!/usr/bin/env bash | ||||
|  | ||||
| # Sync preview NuGet packages into the local offline feed. | ||||
| # Reads package metadata from ops/devops/nuget-preview-packages.csv | ||||
| # and ensures ./local-nuget holds the expected artefacts (with SHA-256 verification). | ||||
| # Optional 4th CSV column can override the download base (e.g. dotnet-public flat container). | ||||
|  | ||||
| set -euo pipefail | ||||
|  | ||||
| repo_root="$(git -C "$(dirname "${BASH_SOURCE[0]}")/.." rev-parse --show-toplevel 2>/dev/null || pwd)" | ||||
| manifest="${repo_root}/ops/devops/nuget-preview-packages.csv" | ||||
| dest="${repo_root}/local-nuget" | ||||
| nuget_v2_base="${NUGET_V2_BASE:-https://www.nuget.org/api/v2/package}" | ||||
|  | ||||
| if [[ ! -f "$manifest" ]]; then | ||||
|   echo "Manifest not found: $manifest" >&2 | ||||
|   exit 1 | ||||
| fi | ||||
|  | ||||
| mkdir -p "$dest" | ||||
|  | ||||
| fetch_package() { | ||||
|   local package="$1" | ||||
|   local version="$2" | ||||
|   local expected_sha="$3" | ||||
|   local source_base="$4" | ||||
|   local target="$dest/${package}.${version}.nupkg" | ||||
|   local url | ||||
|  | ||||
|   if [[ -n "$source_base" ]]; then | ||||
|     local package_lower | ||||
|     package_lower="${package,,}" | ||||
|     url="${source_base%/}/${package_lower}/${version}/${package_lower}.${version}.nupkg" | ||||
|   else | ||||
|     url="${nuget_v2_base%/}/${package}/${version}" | ||||
|   fi | ||||
|  | ||||
|   echo "[sync-nuget] Fetching ${package} ${version}" | ||||
|   local tmp | ||||
|   tmp="$(mktemp)" | ||||
|   trap 'rm -f "$tmp"' RETURN | ||||
|   curl -fsSL --retry 3 --retry-delay 1 "$url" -o "$tmp" | ||||
|   local actual_sha | ||||
|   actual_sha="$(sha256sum "$tmp" | awk '{print $1}')" | ||||
|   if [[ "$actual_sha" != "$expected_sha" ]]; then | ||||
|     echo "Checksum mismatch for ${package} ${version}" >&2 | ||||
|     echo "  expected: $expected_sha" >&2 | ||||
|     echo "  actual:   $actual_sha" >&2 | ||||
|     exit 1 | ||||
|   fi | ||||
|   mv "$tmp" "$target" | ||||
|   trap - RETURN | ||||
| } | ||||
|  | ||||
| while IFS=',' read -r package version sha source_base || [[ -n "$package" ]]; do | ||||
|   [[ -z "$package" || "$package" == \#* ]] && continue | ||||
|  | ||||
|   local_path="$dest/${package}.${version}.nupkg" | ||||
|   if [[ -f "$local_path" ]]; then | ||||
|     current_sha="$(sha256sum "$local_path" | awk '{print $1}')" | ||||
|     if [[ "$current_sha" == "$sha" ]]; then | ||||
|       echo "[sync-nuget] OK ${package} ${version}" | ||||
|       continue | ||||
|     fi | ||||
|     echo "[sync-nuget] SHA mismatch for ${package} ${version}, refreshing" | ||||
|   else | ||||
|     echo "[sync-nuget] Missing ${package} ${version}" | ||||
|   fi | ||||
|  | ||||
|   fetch_package "$package" "$version" "$sha" "${source_base:-}" | ||||
| done < "$manifest" | ||||
|   | ||||
| @@ -1,77 +1,77 @@ | ||||
| #!/usr/bin/env bash | ||||
|  | ||||
| set -euo pipefail | ||||
|  | ||||
| SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" | ||||
| CERT_DIR="${SCRIPT_DIR}/../../deploy/telemetry/certs" | ||||
|  | ||||
| mkdir -p "${CERT_DIR}" | ||||
|  | ||||
| CA_KEY="${CERT_DIR}/ca.key" | ||||
| CA_CRT="${CERT_DIR}/ca.crt" | ||||
| COL_KEY="${CERT_DIR}/collector.key" | ||||
| COL_CSR="${CERT_DIR}/collector.csr" | ||||
| COL_CRT="${CERT_DIR}/collector.crt" | ||||
| CLIENT_KEY="${CERT_DIR}/client.key" | ||||
| CLIENT_CSR="${CERT_DIR}/client.csr" | ||||
| CLIENT_CRT="${CERT_DIR}/client.crt" | ||||
|  | ||||
| echo "[*] Generating OpenTelemetry dev CA and certificates in ${CERT_DIR}" | ||||
|  | ||||
| # Root CA | ||||
| if [[ ! -f "${CA_KEY}" ]]; then | ||||
|   openssl genrsa -out "${CA_KEY}" 4096 >/dev/null 2>&1 | ||||
| fi | ||||
| openssl req -x509 -new -key "${CA_KEY}" -days 365 -sha256 \ | ||||
|   -out "${CA_CRT}" -subj "/CN=StellaOps Dev Telemetry CA" \ | ||||
|   -config <(cat <<'EOF' | ||||
| [req] | ||||
| distinguished_name = req_distinguished_name | ||||
| prompt = no | ||||
| [req_distinguished_name] | ||||
| EOF | ||||
| ) >/dev/null 2>&1 | ||||
|  | ||||
| # Collector certificate (server + client auth) | ||||
| openssl req -new -nodes -newkey rsa:4096 \ | ||||
|   -keyout "${COL_KEY}" \ | ||||
|   -out "${COL_CSR}" \ | ||||
|   -subj "/CN=stellaops-otel-collector" >/dev/null 2>&1 | ||||
|  | ||||
| openssl x509 -req -in "${COL_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \ | ||||
|   -CAcreateserial -out "${COL_CRT}" -days 365 -sha256 \ | ||||
|   -extensions v3_req -extfile <(cat <<'EOF' | ||||
| [v3_req] | ||||
| subjectAltName = @alt_names | ||||
| extendedKeyUsage = serverAuth, clientAuth | ||||
| [alt_names] | ||||
| DNS.1 = stellaops-otel-collector | ||||
| DNS.2 = localhost | ||||
| IP.1 = 127.0.0.1 | ||||
| EOF | ||||
| ) >/dev/null 2>&1 | ||||
|  | ||||
| # Client certificate | ||||
| openssl req -new -nodes -newkey rsa:4096 \ | ||||
|   -keyout "${CLIENT_KEY}" \ | ||||
|   -out "${CLIENT_CSR}" \ | ||||
|   -subj "/CN=stellaops-otel-client" >/dev/null 2>&1 | ||||
|  | ||||
| openssl x509 -req -in "${CLIENT_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \ | ||||
|   -CAcreateserial -out "${CLIENT_CRT}" -days 365 -sha256 \ | ||||
|   -extensions v3_req -extfile <(cat <<'EOF' | ||||
| [v3_req] | ||||
| extendedKeyUsage = clientAuth | ||||
| subjectAltName = @alt_names | ||||
| [alt_names] | ||||
| DNS.1 = stellaops-otel-client | ||||
| DNS.2 = localhost | ||||
| IP.1 = 127.0.0.1 | ||||
| EOF | ||||
| ) >/dev/null 2>&1 | ||||
|  | ||||
| rm -f "${COL_CSR}" "${CLIENT_CSR}" | ||||
| rm -f "${CERT_DIR}/ca.srl" | ||||
|  | ||||
| echo "[✓] Certificates ready:" | ||||
| ls -1 "${CERT_DIR}" | ||||
|   | ||||
| @@ -1,136 +1,136 @@ | ||||
| #!/usr/bin/env python3 | ||||
| """Package telemetry collector assets for offline/air-gapped installs. | ||||
|  | ||||
| Outputs a tarball containing the collector configuration, Compose overlay, | ||||
| Helm defaults, and operator README. A SHA-256 checksum sidecar is emitted, and | ||||
| optional Cosign signing can be enabled with --sign. | ||||
| """ | ||||
| from __future__ import annotations | ||||
|  | ||||
| import argparse | ||||
| import hashlib | ||||
| import os | ||||
| import subprocess | ||||
| import sys | ||||
| import tarfile | ||||
| from pathlib import Path | ||||
| from typing import Iterable | ||||
|  | ||||
| REPO_ROOT = Path(__file__).resolve().parents[3] | ||||
| DEFAULT_OUTPUT = REPO_ROOT / "out" / "telemetry" / "telemetry-offline-bundle.tar.gz" | ||||
| BUNDLE_CONTENTS: tuple[Path, ...] = ( | ||||
|     Path("deploy/telemetry/README.md"), | ||||
|     Path("deploy/telemetry/otel-collector-config.yaml"), | ||||
|     Path("deploy/telemetry/storage/README.md"), | ||||
|     Path("deploy/telemetry/storage/prometheus.yaml"), | ||||
|     Path("deploy/telemetry/storage/tempo.yaml"), | ||||
|     Path("deploy/telemetry/storage/loki.yaml"), | ||||
|     Path("deploy/telemetry/storage/tenants/tempo-overrides.yaml"), | ||||
|     Path("deploy/telemetry/storage/tenants/loki-overrides.yaml"), | ||||
|     Path("deploy/helm/stellaops/files/otel-collector-config.yaml"), | ||||
|     Path("deploy/helm/stellaops/values.yaml"), | ||||
|     Path("deploy/helm/stellaops/templates/otel-collector.yaml"), | ||||
|     Path("deploy/compose/docker-compose.telemetry.yaml"), | ||||
|     Path("deploy/compose/docker-compose.telemetry-storage.yaml"), | ||||
|     Path("docs/ops/telemetry-collector.md"), | ||||
|     Path("docs/ops/telemetry-storage.md"), | ||||
| ) | ||||
|  | ||||
|  | ||||
| def compute_sha256(path: Path) -> str: | ||||
|     sha = hashlib.sha256() | ||||
|     with path.open("rb") as handle: | ||||
|         for chunk in iter(lambda: handle.read(1024 * 1024), b""): | ||||
|             sha.update(chunk) | ||||
|     return sha.hexdigest() | ||||
|  | ||||
|  | ||||
| def validate_files(paths: Iterable[Path]) -> None: | ||||
|     missing = [str(p) for p in paths if not (REPO_ROOT / p).exists()] | ||||
|     if missing: | ||||
|         raise FileNotFoundError(f"Missing bundle artefacts: {', '.join(missing)}") | ||||
|  | ||||
|  | ||||
| def create_bundle(output_path: Path) -> Path: | ||||
|     output_path.parent.mkdir(parents=True, exist_ok=True) | ||||
|     with tarfile.open(output_path, "w:gz") as tar: | ||||
|         for rel_path in BUNDLE_CONTENTS: | ||||
|             abs_path = REPO_ROOT / rel_path | ||||
|             tar.add(abs_path, arcname=str(rel_path)) | ||||
|     return output_path | ||||
|  | ||||
|  | ||||
| def write_checksum(bundle_path: Path) -> Path: | ||||
|     digest = compute_sha256(bundle_path) | ||||
|     sha_path = bundle_path.with_suffix(bundle_path.suffix + ".sha256") | ||||
|     sha_path.write_text(f"{digest}  {bundle_path.name}\n", encoding="utf-8") | ||||
|     return sha_path | ||||
|  | ||||
|  | ||||
| def cosign_sign(bundle_path: Path, key_ref: str | None, identity_token: str | None) -> None: | ||||
|     # Write the signature next to the bundle; without --output-signature, | ||||
|     # cosign sign-blob prints the signature to stdout and main() would never | ||||
|     # find the .sig sidecar it checks for. | ||||
|     sig_path = bundle_path.with_suffix(bundle_path.suffix + ".sig") | ||||
|     cmd = ["cosign", "sign-blob", "--yes", "--output-signature", str(sig_path), str(bundle_path)] | ||||
|     if key_ref: | ||||
|         cmd.extend(["--key", key_ref]) | ||||
|     env = os.environ.copy() | ||||
|     if identity_token: | ||||
|         env["COSIGN_IDENTITY_TOKEN"] = identity_token | ||||
|     try: | ||||
|         subprocess.run(cmd, check=True, env=env) | ||||
|     except FileNotFoundError as exc: | ||||
|         raise RuntimeError("cosign not found on PATH; install cosign or omit --sign") from exc | ||||
|     except subprocess.CalledProcessError as exc: | ||||
|         raise RuntimeError(f"cosign sign-blob failed: {exc}") from exc | ||||
|  | ||||
|  | ||||
| def parse_args(argv: list[str] | None = None) -> argparse.Namespace: | ||||
|     parser = argparse.ArgumentParser(description=__doc__) | ||||
|     parser.add_argument( | ||||
|         "--output", | ||||
|         type=Path, | ||||
|         default=DEFAULT_OUTPUT, | ||||
|         help=f"Output bundle path (default: {DEFAULT_OUTPUT})", | ||||
|     ) | ||||
|     parser.add_argument( | ||||
|         "--sign", | ||||
|         action="store_true", | ||||
|         help="Sign the bundle using cosign (requires cosign on PATH)", | ||||
|     ) | ||||
|     parser.add_argument( | ||||
|         "--cosign-key", | ||||
|         type=str, | ||||
|         default=os.environ.get("COSIGN_KEY_REF"), | ||||
|         help="Cosign key reference (file:..., azurekms://..., etc.)", | ||||
|     ) | ||||
|     parser.add_argument( | ||||
|         "--identity-token", | ||||
|         type=str, | ||||
|         default=os.environ.get("COSIGN_IDENTITY_TOKEN"), | ||||
|         help="OIDC identity token for keyless signing", | ||||
|     ) | ||||
|     return parser.parse_args(argv) | ||||
|  | ||||
|  | ||||
| def main(argv: list[str] | None = None) -> int: | ||||
|     args = parse_args(argv) | ||||
|     validate_files(BUNDLE_CONTENTS) | ||||
|  | ||||
|     bundle_path = args.output.resolve() | ||||
|     print(f"[*] Creating telemetry bundle at {bundle_path}") | ||||
|     create_bundle(bundle_path) | ||||
|     sha_path = write_checksum(bundle_path) | ||||
|     print(f"[✓] SHA-256 written to {sha_path}") | ||||
|  | ||||
|     if args.sign: | ||||
|         print("[*] Signing bundle with cosign") | ||||
|         cosign_sign(bundle_path, args.cosign_key, args.identity_token) | ||||
|         sig_path = bundle_path.with_suffix(bundle_path.suffix + ".sig") | ||||
|         if sig_path.exists(): | ||||
|             print(f"[✓] Cosign signature written to {sig_path}") | ||||
|         else: | ||||
|             print("[!] Cosign completed but signature file not found (ensure cosign version >= 2.2)") | ||||
|  | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     sys.exit(main()) | ||||
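`write_checksum` deliberately emits the two-space `sha256sum` sidecar layout (`<digest>  <file name>`), so operators can verify the bundle offline with `sha256sum -c`. A small sketch of writing and re-checking that layout (paths are temporary; the payload is illustrative):

```python
import hashlib
import pathlib
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    bundle = pathlib.Path(tmp) / "telemetry-offline-bundle.tar.gz"
    bundle.write_bytes(b"payload")
    digest = hashlib.sha256(bundle.read_bytes()).hexdigest()

    # Same layout write_checksum produces: "<hex digest><two spaces><file name>\n".
    sidecar = bundle.with_suffix(bundle.suffix + ".sha256")
    sidecar.write_text(f"{digest}  {bundle.name}\n", encoding="utf-8")

    # Re-parse the sidecar and confirm it still matches the bundle on disk.
    recorded, _, name = sidecar.read_text().strip().partition("  ")
    ok = name == bundle.name and recorded == hashlib.sha256(bundle.read_bytes()).hexdigest()
    print(ok)
```

The `.sha256` suffix is appended to the existing `.tar.gz` name rather than replacing it, matching the sidecar naming the verifier scripts expect.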
|   | ||||
| @@ -1,197 +1,197 @@ | ||||
| #!/usr/bin/env python3 | ||||
| """ | ||||
| Smoke test for the StellaOps OpenTelemetry Collector deployment. | ||||
|  | ||||
| The script sends sample traces, metrics, and logs over OTLP/HTTP with mutual TLS | ||||
| and asserts that the collector accepted the payloads by checking its Prometheus | ||||
| metrics endpoint. | ||||
| """ | ||||
|  | ||||
| from __future__ import annotations | ||||
|  | ||||
| import argparse | ||||
| import json | ||||
| import ssl | ||||
| import sys | ||||
| import time | ||||
| import urllib.request | ||||
| from pathlib import Path | ||||
|  | ||||
| TRACE_PAYLOAD = { | ||||
|     "resourceSpans": [ | ||||
|         { | ||||
|             "resource": { | ||||
|                 "attributes": [ | ||||
|                     {"key": "service.name", "value": {"stringValue": "smoke-client"}}, | ||||
|                     {"key": "tenant.id", "value": {"stringValue": "dev"}}, | ||||
|                 ] | ||||
|             }, | ||||
|             "scopeSpans": [ | ||||
|                 { | ||||
|                     "scope": {"name": "smoke-test"}, | ||||
|                     "spans": [ | ||||
|                         { | ||||
|                             "traceId": "00000000000000000000000000000001", | ||||
|                             "spanId": "0000000000000001", | ||||
|                             "name": "smoke-span", | ||||
|                             "kind": 1, | ||||
|                             "startTimeUnixNano": "1730000000000000000", | ||||
|                             "endTimeUnixNano": "1730000000500000000", | ||||
|                             "status": {"code": 0}, | ||||
|                         } | ||||
|                     ], | ||||
|                 } | ||||
|             ], | ||||
|         } | ||||
|     ] | ||||
| } | ||||
|  | ||||
| METRIC_PAYLOAD = { | ||||
|     "resourceMetrics": [ | ||||
|         { | ||||
|             "resource": { | ||||
|                 "attributes": [ | ||||
|                     {"key": "service.name", "value": {"stringValue": "smoke-client"}}, | ||||
|                     {"key": "tenant.id", "value": {"stringValue": "dev"}}, | ||||
|                 ] | ||||
|             }, | ||||
|             "scopeMetrics": [ | ||||
|                 { | ||||
|                     "scope": {"name": "smoke-test"}, | ||||
|                     "metrics": [ | ||||
|                         { | ||||
|                             "name": "smoke_gauge", | ||||
|                             "gauge": { | ||||
|                                 "dataPoints": [ | ||||
|                                     { | ||||
|                                         "asDouble": 1.0, | ||||
|                                         "timeUnixNano": "1730000001000000000", | ||||
|                                         "attributes": [ | ||||
|                                             {"key": "phase", "value": {"stringValue": "ingest"}} | ||||
|                                         ], | ||||
|                                     } | ||||
|                                 ] | ||||
|                             }, | ||||
|                         } | ||||
|                     ], | ||||
|                 } | ||||
|             ], | ||||
|         } | ||||
|     ] | ||||
| } | ||||
|  | ||||
| LOG_PAYLOAD = { | ||||
|     "resourceLogs": [ | ||||
|         { | ||||
|             "resource": { | ||||
|                 "attributes": [ | ||||
|                     {"key": "service.name", "value": {"stringValue": "smoke-client"}}, | ||||
|                     {"key": "tenant.id", "value": {"stringValue": "dev"}}, | ||||
|                 ] | ||||
|             }, | ||||
|             "scopeLogs": [ | ||||
|                 { | ||||
|                     "scope": {"name": "smoke-test"}, | ||||
|                     "logRecords": [ | ||||
|                         { | ||||
|                             "timeUnixNano": "1730000002000000000", | ||||
|                             "severityNumber": 9, | ||||
|                             "severityText": "Info", | ||||
|                             "body": {"stringValue": "StellaOps collector smoke log"}, | ||||
|                         } | ||||
|                     ], | ||||
|                 } | ||||
|             ], | ||||
|         } | ||||
|     ] | ||||
| } | ||||
|  | ||||
|  | ||||
| def _load_context(ca: Path, cert: Path, key: Path) -> ssl.SSLContext: | ||||
|     context = ssl.create_default_context(cafile=str(ca)) | ||||
|     context.check_hostname = False | ||||
|     context.verify_mode = ssl.CERT_REQUIRED | ||||
|     context.load_cert_chain(certfile=str(cert), keyfile=str(key)) | ||||
|     return context | ||||
|  | ||||
|  | ||||
| def _post_json(url: str, payload: dict, context: ssl.SSLContext) -> None: | ||||
|     data = json.dumps(payload).encode("utf-8") | ||||
|     request = urllib.request.Request( | ||||
|         url, | ||||
|         data=data, | ||||
|         headers={ | ||||
|             "Content-Type": "application/json", | ||||
|             "User-Agent": "stellaops-otel-smoke/1.0", | ||||
|         }, | ||||
|         method="POST", | ||||
|     ) | ||||
|     with urllib.request.urlopen(request, context=context, timeout=10) as response: | ||||
|         if response.status // 100 != 2: | ||||
|             raise RuntimeError(f"{url} returned HTTP {response.status}") | ||||
|  | ||||
|  | ||||
| def _fetch_metrics(url: str, context: ssl.SSLContext) -> str: | ||||
|     request = urllib.request.Request( | ||||
|         url, | ||||
|         headers={ | ||||
|             "User-Agent": "stellaops-otel-smoke/1.0", | ||||
|         }, | ||||
|     ) | ||||
|     with urllib.request.urlopen(request, context=context, timeout=10) as response: | ||||
|         return response.read().decode("utf-8") | ||||
|  | ||||
|  | ||||
| def _assert_counter(metrics: str, metric_name: str) -> None: | ||||
|     for line in metrics.splitlines(): | ||||
|         if line.startswith(metric_name): | ||||
|             try: | ||||
|                 # rsplit tolerates labelled series such as name{receiver="otlp"} 3 | ||||
|                 _, value = line.rsplit(" ", 1) | ||||
|                 if float(value) > 0: | ||||
|                     return | ||||
|             except ValueError: | ||||
|                 continue | ||||
|     raise AssertionError(f"{metric_name} not incremented") | ||||
|  | ||||
|  | ||||
| def main() -> int: | ||||
|     parser = argparse.ArgumentParser(description=__doc__) | ||||
|     parser.add_argument("--host", default="localhost", help="Collector host (default: %(default)s)") | ||||
|     parser.add_argument("--otlp-port", type=int, default=4318, help="OTLP/HTTP port") | ||||
|     parser.add_argument("--metrics-port", type=int, default=9464, help="Prometheus metrics port") | ||||
|     parser.add_argument("--health-port", type=int, default=13133, help="Health check port") | ||||
|     parser.add_argument("--ca", type=Path, default=Path("deploy/telemetry/certs/ca.crt"), help="CA certificate path") | ||||
|     parser.add_argument("--cert", type=Path, default=Path("deploy/telemetry/certs/client.crt"), help="Client certificate path") | ||||
|     parser.add_argument("--key", type=Path, default=Path("deploy/telemetry/certs/client.key"), help="Client key path") | ||||
|     args = parser.parse_args() | ||||
|  | ||||
|     for path in (args.ca, args.cert, args.key): | ||||
|         if not path.exists(): | ||||
|             print(f"[!] missing TLS material: {path}", file=sys.stderr) | ||||
|             return 1 | ||||
|  | ||||
|     context = _load_context(args.ca, args.cert, args.key) | ||||
|  | ||||
|     otlp_base = f"https://{args.host}:{args.otlp_port}/v1" | ||||
|     print(f"[*] Sending OTLP traffic to {otlp_base}") | ||||
|     _post_json(f"{otlp_base}/traces", TRACE_PAYLOAD, context) | ||||
|     _post_json(f"{otlp_base}/metrics", METRIC_PAYLOAD, context) | ||||
|     _post_json(f"{otlp_base}/logs", LOG_PAYLOAD, context) | ||||
|  | ||||
|     # Allow Prometheus exporter to update metrics | ||||
|     time.sleep(2) | ||||
|  | ||||
|     metrics_url = f"https://{args.host}:{args.metrics_port}/metrics" | ||||
|     print(f"[*] Fetching collector metrics from {metrics_url}") | ||||
|     metrics = _fetch_metrics(metrics_url, context) | ||||
|  | ||||
|     _assert_counter(metrics, "otelcol_receiver_accepted_spans") | ||||
|     _assert_counter(metrics, "otelcol_receiver_accepted_logs") | ||||
|     _assert_counter(metrics, "otelcol_receiver_accepted_metric_points") | ||||
|  | ||||
|     print("[✓] Collector accepted traces, logs, and metrics.") | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     raise SystemExit(main()) | ||||
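| The `_assert_counter` check above scans the collector's Prometheus text output for a positive sample. A slightly more general sketch — a hypothetical `counter_total` helper, not part of the script — sums all samples of a counter family, which also copes with labelled series and `# HELP`/`# TYPE` comment lines: | ||||

```python
def counter_total(metrics: str, name: str) -> float:
    """Sum every sample of a counter family in Prometheus text exposition format."""
    total = 0.0
    for line in metrics.splitlines():
        if line.startswith("#") or not line.startswith(name):
            continue  # skip HELP/TYPE comments and other metric families
        try:
            total += float(line.rsplit(" ", 1)[1])  # value is the last token
        except (IndexError, ValueError):
            continue  # malformed line; ignore
    return total


SAMPLE = """\
# HELP otelcol_receiver_accepted_spans Spans pushed into the pipeline.
# TYPE otelcol_receiver_accepted_spans counter
otelcol_receiver_accepted_spans{receiver="otlp",transport="http"} 1
otelcol_receiver_accepted_logs{receiver="otlp",transport="http"} 2
"""

print(counter_total(SAMPLE, "otelcol_receiver_accepted_spans"))  # 1.0
```

| Note the prefix match is deliberately loose: a family named `foo` would also match `foo_total`, so exact-name matching may be preferable in stricter checks. | ||||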
|   | ||||
| @@ -1,183 +1,183 @@ | ||||
| #!/usr/bin/env python3 | ||||
|  | ||||
| """ | ||||
| Validate NuGet source ordering for StellaOps. | ||||
|  | ||||
| Ensures `local-nuget` is the highest priority feed in both NuGet.config and the | ||||
| Directory.Build.props restore configuration. Fails fast with actionable errors | ||||
| so CI/offline kit workflows can assert deterministic restore ordering. | ||||
| """ | ||||
|  | ||||
| from __future__ import annotations | ||||
|  | ||||
| import argparse | ||||
| import subprocess | ||||
| import sys | ||||
| import xml.etree.ElementTree as ET | ||||
| from pathlib import Path | ||||
|  | ||||
|  | ||||
| REPO_ROOT = Path(__file__).resolve().parents[2] | ||||
| NUGET_CONFIG = REPO_ROOT / "NuGet.config" | ||||
| ROOT_PROPS = REPO_ROOT / "Directory.Build.props" | ||||
| EXPECTED_SOURCE_KEYS = ["local", "dotnet-public", "nuget.org"] | ||||
|  | ||||
|  | ||||
| class ValidationError(Exception): | ||||
|     """Raised when validation fails.""" | ||||
|  | ||||
|  | ||||
| def _fail(message: str) -> None: | ||||
|     raise ValidationError(message) | ||||
|  | ||||
|  | ||||
| def _parse_xml(path: Path) -> ET.ElementTree: | ||||
|     try: | ||||
|         return ET.parse(path) | ||||
|     except FileNotFoundError: | ||||
|         _fail(f"Missing required file: {path}") | ||||
|     except ET.ParseError as exc: | ||||
|         _fail(f"Could not parse XML for {path}: {exc}") | ||||
|  | ||||
|  | ||||
| def validate_nuget_config() -> None: | ||||
|     tree = _parse_xml(NUGET_CONFIG) | ||||
|     root = tree.getroot() | ||||
|  | ||||
|     package_sources = root.find("packageSources") | ||||
|     if package_sources is None: | ||||
|         _fail("NuGet.config must declare a <packageSources> section.") | ||||
|  | ||||
|     children = list(package_sources) | ||||
|     if not children or children[0].tag != "clear": | ||||
|         _fail("NuGet.config packageSources must begin with a <clear /> element.") | ||||
|  | ||||
|     adds = [child for child in children if child.tag == "add"] | ||||
|     if not adds: | ||||
|         _fail("NuGet.config packageSources must define at least one <add> entry.") | ||||
|  | ||||
|     keys = [add.attrib.get("key") for add in adds] | ||||
|     if keys[: len(EXPECTED_SOURCE_KEYS)] != EXPECTED_SOURCE_KEYS: | ||||
|         formatted = ", ".join(keys) or "<empty>" | ||||
|         _fail( | ||||
|             "NuGet.config packageSources must list feeds in the order " | ||||
|             f"{EXPECTED_SOURCE_KEYS}. Found: {formatted}" | ||||
|         ) | ||||
|  | ||||
|     local_value = adds[0].attrib.get("value", "") | ||||
|     if Path(local_value).name != "local-nuget": | ||||
|         _fail( | ||||
|             "NuGet.config local feed should point at the repo-local mirror " | ||||
|             f"'local-nuget', found value '{local_value}'." | ||||
|         ) | ||||
|  | ||||
|     clear = package_sources.find("clear") | ||||
|     if clear is None: | ||||
|         _fail("NuGet.config packageSources must start with <clear /> to avoid inherited feeds.") | ||||
|  | ||||
|  | ||||
| def validate_directory_build_props() -> None: | ||||
|     tree = _parse_xml(ROOT_PROPS) | ||||
|     root = tree.getroot() | ||||
|     defaults = None | ||||
|     for element in root.findall(".//_StellaOpsDefaultRestoreSources"): | ||||
|         defaults = [fragment.strip() for fragment in (element.text or "").split(";") if fragment.strip()] | ||||
|         break | ||||
|  | ||||
|     if defaults is None: | ||||
|         _fail("Directory.Build.props must define _StellaOpsDefaultRestoreSources.") | ||||
|  | ||||
|     expected_props = [ | ||||
|         "$(StellaOpsLocalNuGetSource)", | ||||
|         "$(StellaOpsDotNetPublicSource)", | ||||
|         "$(StellaOpsNuGetOrgSource)", | ||||
|     ] | ||||
|     if defaults != expected_props: | ||||
|         _fail( | ||||
|             "Directory.Build.props _StellaOpsDefaultRestoreSources must list feeds " | ||||
|             f"in the order {expected_props}. Found: {defaults}" | ||||
|         ) | ||||
|  | ||||
|     restore_nodes = root.findall(".//RestoreSources") | ||||
|     if not restore_nodes: | ||||
|         _fail("Directory.Build.props must override RestoreSources to force deterministic ordering.") | ||||
|  | ||||
|     uses_default_first = any( | ||||
|         node.text | ||||
|         and node.text.strip().startswith("$(_StellaOpsDefaultRestoreSources)") | ||||
|         for node in restore_nodes | ||||
|     ) | ||||
|     if not uses_default_first: | ||||
|         _fail( | ||||
|             "Directory.Build.props RestoreSources override must place " | ||||
|             "$(_StellaOpsDefaultRestoreSources) at the beginning." | ||||
|         ) | ||||
|  | ||||
|  | ||||
| def assert_single_nuget_config() -> None: | ||||
|     extra_configs: list[Path] = [] | ||||
|     configs: set[Path] = set() | ||||
|     for glob in ("NuGet.config", "nuget.config"): | ||||
|         try: | ||||
|             result = subprocess.run( | ||||
|                 ["rg", "--files", f"-g{glob}"], | ||||
|                 check=False, | ||||
|                 capture_output=True, | ||||
|                 text=True, | ||||
|                 cwd=REPO_ROOT, | ||||
|             ) | ||||
|         except FileNotFoundError: | ||||
|             _fail("ripgrep (rg) is required for validation but was not found on PATH.") | ||||
|         if result.returncode not in (0, 1): | ||||
|             _fail( | ||||
|                 f"ripgrep failed while searching for {glob}: {result.stderr.strip() or result.returncode}" | ||||
|             ) | ||||
|         for line in result.stdout.splitlines(): | ||||
|             configs.add((REPO_ROOT / line).resolve()) | ||||
|  | ||||
|     configs.discard(NUGET_CONFIG.resolve()) | ||||
|     extra_configs.extend(sorted(configs)) | ||||
|     if extra_configs: | ||||
|         formatted = "\n  ".join(str(path.relative_to(REPO_ROOT)) for path in extra_configs) | ||||
|         _fail( | ||||
|             "Unexpected additional NuGet.config files detected. " | ||||
|             "Consolidate feed configuration in the repo root:\n  " | ||||
|             f"{formatted}" | ||||
|         ) | ||||
|  | ||||
|  | ||||
| def parse_args(argv: list[str]) -> argparse.Namespace: | ||||
|     parser = argparse.ArgumentParser( | ||||
|         description="Verify StellaOps NuGet feeds prioritise the local mirror." | ||||
|     ) | ||||
|     parser.add_argument( | ||||
|         "--skip-rg", | ||||
|         action="store_true", | ||||
|         help="Skip ripgrep discovery of extra NuGet.config files (useful for focused runs).", | ||||
|     ) | ||||
|     return parser.parse_args(argv) | ||||
|  | ||||
|  | ||||
| def main(argv: list[str]) -> int: | ||||
|     args = parse_args(argv) | ||||
|     validations = [ | ||||
|         ("NuGet.config ordering", validate_nuget_config), | ||||
|         ("Directory.Build.props restore override", validate_directory_build_props), | ||||
|     ] | ||||
|     if not args.skip_rg: | ||||
|         validations.append(("single NuGet.config", assert_single_nuget_config)) | ||||
|  | ||||
|     for label, check in validations: | ||||
|         try: | ||||
|             check() | ||||
|         except ValidationError as exc: | ||||
|             sys.stderr.write(f"[FAIL] {label}: {exc}\n") | ||||
|             return 1 | ||||
|         else: | ||||
|             sys.stdout.write(f"[OK] {label}\n") | ||||
|  | ||||
|     return 0 | ||||
|  | ||||
|  | ||||
| if __name__ == "__main__": | ||||
|     sys.exit(main(sys.argv[1:])) | ||||
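| The ordering rules the validator enforces can be exercised against an in-memory config. The sketch below builds a minimal `NuGet.config` that satisfies them; the `dotnet-public` URL is a placeholder (only the nuget.org endpoint is the real one): | ||||

```python
import xml.etree.ElementTree as ET

# Minimal NuGet.config satisfying the validator's ordering rules.
SAMPLE_CONFIG = """\
<configuration>
  <packageSources>
    <clear />
    <add key="local" value="local-nuget" />
    <add key="dotnet-public" value="https://example.org/dotnet-public/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
"""

root = ET.fromstring(SAMPLE_CONFIG)
sources = root.find("packageSources")
children = list(sources)
assert children[0].tag == "clear"  # inherited feeds are dropped first
keys = [c.attrib["key"] for c in children if c.tag == "add"]
assert keys == ["local", "dotnet-public", "nuget.org"]  # local mirror wins
print("ordering ok")
```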
|   | ||||