Restructure solution layout by module

commit 68da90a11a (parent 4e3e575db5)
Author: root
Date: 2025-10-28 15:10:40 +02:00

4103 changed files with 192899 additions and 187024 deletions


```diff
@@ -15,7 +15,7 @@ WORKDIR /src
 # Restore & publish
 COPY . .
 RUN dotnet restore src/StellaOps.sln
-RUN dotnet publish src/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj \
+RUN dotnet publish src/Authority/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj \
     -c Release \
     -o /app/publish \
     /p:UseAppHost=false
```


@@ -1,62 +1,62 @@
# StellaOps Authority Container Scaffold
This directory provides a distroless Dockerfile and `docker-compose` sample for bootstrapping the Authority service alongside MongoDB (required) and Redis (optional).
## Prerequisites
- Docker Engine 25+ and Compose V2
- .NET 10 preview SDK (only required when building locally outside of Compose)
- Populated Authority configuration at `etc/authority.yaml` and plugin manifests under `etc/authority.plugins/`
## Usage
```bash
# 1. Ensure configuration files exist (copied from etc/authority.yaml.sample, etc/authority.plugins/*.yaml)
# 2. Build and start the stack
docker compose -f ops/authority/docker-compose.authority.yaml up --build
```
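In practice, step 1 usually reduces to copying the shipped sample into place and confirming the plugin manifests are staged; a minimal pre-flight, assuming the samples ship under `etc/`:
```bash
# Copy the main configuration from its sample.
cp etc/authority.yaml.sample etc/authority.yaml
# Confirm the plugin manifests your deployment expects are present.
ls etc/authority.plugins/
```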
`authority.yaml` is mounted read-only at `/etc/authority.yaml` inside the container. Plugin manifests are mounted to `/app/etc/authority.plugins`. Update the issuer URL and any Mongo credentials in the compose file or via an `.env` file.
To run with pre-built images, replace the `build:` block in the compose file with an `image:` reference.
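For example, a pinned-image override might look like the following sketch (the service name and image reference are placeholders, not published coordinates):
```yaml
services:
  authority:
    # Replaces the build: block; pin a digest for deterministic deploys.
    image: registry.example.org/stellaops/authority@sha256:<digest>
```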
## Volumes
- `mongo-data` persists MongoDB state.
- `redis-data` provides optional Redis persistence (enable the service before use).
- `authority-keys` is a writable volume for Authority signing keys.
## Environment overrides
Key environment variables (mirroring `StellaOpsAuthorityOptions`):
| Variable | Description |
| --- | --- |
| `STELLAOPS_AUTHORITY__ISSUER` | Public issuer URL advertised by Authority |
| `STELLAOPS_AUTHORITY__PLUGINDIRECTORIES__0` | Primary plugin binaries directory inside the container |
| `STELLAOPS_AUTHORITY__PLUGINS__CONFIGURATIONDIRECTORY` | Path to plugin manifest directory |
For additional options, see `etc/authority.yaml.sample`.
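A minimal `.env` passed to Compose could therefore look like this (values are illustrative):
```bash
STELLAOPS_AUTHORITY__ISSUER=https://authority.stella-ops.local
# Container paths below are assumptions; match them to your volume mounts.
STELLAOPS_AUTHORITY__PLUGINDIRECTORIES__0=/app/plugins
STELLAOPS_AUTHORITY__PLUGINS__CONFIGURATIONDIRECTORY=/app/etc/authority.plugins
```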
> **Graph Explorer reminder:** When enabling Cartographer or Graph API components, update `etc/authority.yaml` so the `cartographer-service` client includes `properties.serviceIdentity: "cartographer"` and a tenant hint. Authority now rejects `graph:write` tokens that lack this marker, so existing deployments must apply the update before rolling out the new build.
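As a sketch, the client entry could look like the following; only `properties.serviceIdentity` is mandated above, and the surrounding key names are assumptions about the manifest schema:
```yaml
clients:
  - clientId: "cartographer-service"
    properties:
      serviceIdentity: "cartographer"   # required marker for graph:write tokens
      tenant: "default"                 # tenant hint; key and value illustrative
```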
> **Console endpoint reminder:** The Console UI now calls `/console/tenants`, `/console/profile`, and `/console/token/introspect`. Reverse proxies must forward the `X-Stella-Tenant` header (derived from the access token) so Authority can enforce tenancy; audit events are logged under `authority.console.*`. Admin actions obey a five-minute fresh-auth window reported by `/console/profile`, so keep session timeout prompts aligned with that value.
## Key rotation automation (OPS3)
The `key-rotation.sh` helper wraps the `/internal/signing/rotate` endpoint delivered with CORE10. It can run in CI/CD once the new PEM key is staged on the Authority host volume.
```bash
AUTHORITY_BOOTSTRAP_KEY=$(cat ~/.secrets/authority-bootstrap.key) \
./key-rotation.sh \
--authority-url https://authority.stella-ops.local \
--key-id authority-signing-2025 \
--key-path ../certificates/authority-signing-2025.pem \
--meta rotatedBy=pipeline --meta changeTicket=OPS-1234
```
- `--key-path` should resolve from the Authority content root (same as `docs/11_AUTHORITY.md` SOP).
- Provide `--source`/`--provider` if the key loader differs from the default file-based provider.
- Pass `--dry-run` during rehearsals to inspect the JSON payload without invoking the API.
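A rehearsal therefore differs from the live run only by the flag:
```bash
# Prints the JSON payload without calling /internal/signing/rotate.
AUTHORITY_BOOTSTRAP_KEY=$(cat ~/.secrets/authority-bootstrap.key) \
./key-rotation.sh \
  --authority-url https://authority.stella-ops.local \
  --key-id authority-signing-2025 \
  --key-path ../certificates/authority-signing-2025.pem \
  --dry-run
```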
After rotation, export a fresh revocation bundle (`stellaops-cli auth revoke export`) so downstream mirrors consume signatures from the new `kid`. The canonical operational steps live in `docs/11_AUTHORITY.md`; make sure any local automation keeps that guide as the source of truth.


@@ -1,92 +1,92 @@
# DevOps Release Automation
The **release** workflow builds and signs the StellaOps service containers,
generates SBOM + provenance attestations, and emits a canonical
`release.yaml`. The logic lives under `ops/devops/release/` and is invoked
by the new `.gitea/workflows/release.yml` pipeline.
## Local dry run
```bash
./ops/devops/release/build_release.py \
--version 2025.10.0-edge \
--channel edge \
--dry-run
```
Outputs land under `out/release/`. Use `--no-push` to run full builds without
pushing to the registry.
After the build completes, run the verifier to validate recorded hashes and artefact
presence:
```bash
python ops/devops/release/verify_release.py --release-dir out/release
```
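A typical local sequence chains the two, keeping registry pushes disabled:
```bash
./ops/devops/release/build_release.py --version 2025.10.0-edge --channel edge --no-push \
  && python ops/devops/release/verify_release.py --release-dir out/release
```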
## Python analyzer smoke & signing
`dotnet run --project tools/LanguageAnalyzerSmoke` exercises the Python language
analyzer plug-in against the golden fixtures (cold/warm timings, determinism). The
release workflow runs this harness automatically and then produces Cosign
signatures + SHA-256 sidecars for `StellaOps.Scanner.Analyzers.Lang.Python.dll`
and its `manifest.json`. Keep `COSIGN_KEY_REF`/`COSIGN_IDENTITY_TOKEN` populated so
the step can sign the artefacts; the generated `.sig`/`.sha256` files ship with the
Offline Kit bundle.
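To spot-check the sidecars locally, something like the following should work, assuming the signing key's public half is exported as `cosign.pub` and the `.sha256` sidecar uses the standard `sha256sum` format:
```bash
cosign verify-blob --key cosign.pub \
  --signature StellaOps.Scanner.Analyzers.Lang.Python.dll.sig \
  StellaOps.Scanner.Analyzers.Lang.Python.dll
sha256sum -c StellaOps.Scanner.Analyzers.Lang.Python.dll.sha256
```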
## Required tooling
- Docker 25+ with Buildx
- .NET 10 preview SDK (builds container stages and the SBOM generator)
- Node.js 20 (Angular UI build)
- Helm 3.16+
- Cosign 2.2+
Supply signing material via environment variables:
- `COSIGN_KEY_REF`: e.g. `file:./keys/cosign.key` or `azurekms://…`
- `COSIGN_PASSWORD`: password protecting the above key
The workflow defaults to multi-arch (`linux/amd64,linux/arm64`), SBOM in
CycloneDX, and SLSA provenance (`https://slsa.dev/provenance/v1`).
## Debug store extraction
`build_release.py` now exports stripped debug artefacts for every ELF discovered in the published images. The files land under `out/release/debug/.build-id/<aa>/<rest>.debug`, with metadata captured in `debug/debug-manifest.json` (and a `.sha256` sidecar). Use `jq` to inspect the manifest or `readelf -n` to spot-check a build-id. Offline Kit packaging should reuse the `debug/` directory as-is.
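For example (the build-id path below reuses the `<aa>/<rest>` placeholders and is not a real artefact name):
```bash
# Pretty-print the debug manifest; field names depend on the actual schema.
jq . out/release/debug/debug-manifest.json | head

# Spot-check that a .debug file carries the GNU build-id its path claims.
readelf -n out/release/debug/.build-id/<aa>/<rest>.debug | grep 'Build ID'
```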
## UI auth smoke (Playwright)
As part of **DEVOPS-UI-13-006** the pipelines will execute the UI auth smoke
tests (`npm run test:e2e`) after building the Angular bundle. See
`docs/ops/ui-auth-smoke.md` for the job design, environment stubs, and
offline runner considerations.
## NuGet preview bootstrap
`.NET 10` preview packages (Microsoft.Extensions.*, JwtBearer 10.0 RC, Sqlite 9 RC)
ship from the public `dotnet-public` Azure DevOps feed. We mirror them into
`./local-nuget` so restores succeed inside Offline Kit.
1. Run `./ops/devops/sync-preview-nuget.sh` whenever you update the manifest.
2. The script now understands the optional `SourceBase` column (V3 flat container) and writes packages alongside their SHA-256 checksums.
3. `NuGet.config` registers the mirror (`local`), dotnet-public, and nuget.org.
Use `python3 ops/devops/validate_restore_sources.py` to prove the repo still
prefers the local mirror and that `Directory.Build.props` enforces the same order.
The validator now runs automatically in the `build-test-deploy` and `release`
workflows so CI fails fast when a feed priority regression slips in.
Detailed operator instructions live in `docs/ops/nuget-preview-bootstrap.md`.
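For reference, the enforced ordering corresponds to a `NuGet.config` roughly like this (the `dotnet-public` URL is illustrative; check the sync-script manifest for the canonical value):
```xml
<configuration>
  <packageSources>
    <clear />
    <!-- Mirror first so Offline Kit restores never need the network. -->
    <add key="local" value="./local-nuget" />
    <add key="dotnet-public" value="https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>
```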
## Telemetry collector tooling (DEVOPS-OBS-50-001)
- `ops/devops/telemetry/generate_dev_tls.sh` generates a development CA and
client/server certificates for the OpenTelemetry collector overlay (mutual TLS).
- `ops/devops/telemetry/smoke_otel_collector.py` sends OTLP traces/metrics/logs
over TLS and validates that the collector increments its receiver counters.
- `ops/devops/telemetry/package_offline_bundle.py` re-packages collector assets for the Offline Kit.
- `deploy/compose/docker-compose.telemetry-storage.yaml` provides a Prometheus/Tempo/Loki stack for staging validation.
Combine these helpers with `deploy/compose/docker-compose.telemetry.yaml` to run
a secured collector locally before rolling out the Helm-based deployment.
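A local bring-up that combines the collector and storage overlays might look like the following; once the stack is up, point `smoke_otel_collector.py` at the exposed OTLP endpoint to confirm ingestion:
```bash
docker compose \
  -f deploy/compose/docker-compose.telemetry.yaml \
  -f deploy/compose/docker-compose.telemetry-storage.yaml \
  up -d
```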


@@ -1,172 +1,172 @@
# DevOps Task Board
## Governance & Rules
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-RULES-33-001 | DOING (2025-10-26) | DevOps Guild, Platform Leads | — | Contracts & Rules anchor:<br>• Gateway proxies only; Policy Engine composes overlays/simulations.<br>• AOC ingestion cannot merge; only lossless canonicalization.<br>• One graph platform: Graph Indexer + Graph API. Cartographer retired. | Rules posted in SPRINTS/TASKS; duplicates cleaned per guidance; reviewers acknowledge in changelog. |
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-HELM-09-001 | DONE | DevOps Guild | SCANNER-WEB-09-101 | Create Helm/Compose environment profiles (dev, staging, airgap) with deterministic digests. | Profiles committed under `deploy/`; docs updated; CI smoke deploy passes. |
| DEVOPS-SCANNER-09-204 | DONE (2025-10-21) | DevOps Guild, Scanner WebService Guild | SCANNER-EVENTS-15-201 | Surface `SCANNER__EVENTS__*` environment variables across docker-compose (dev/stage/airgap) and Helm values, defaulting to share the Redis queue DSN. | Compose/Helm configs ship enabled Redis event publishing with documented overrides; lint jobs updated; docs cross-link to new knobs. |
| DEVOPS-SCANNER-09-205 | DONE (2025-10-21) | DevOps Guild, Notify Guild | DEVOPS-SCANNER-09-204 | Add Notify smoke stage that tails the Redis stream and asserts `scanner.report.ready`/`scanner.scan.completed` reach Notify WebService in staging. | CI job reads Redis stream during scanner smoke deploy, confirms Notify ingestion via API, alerts on failure. |
| DEVOPS-PERF-10-001 | DONE | DevOps Guild | BENCH-SCANNER-10-001 | Add perf smoke job (SBOM compose <5s target) to CI. | CI job runs sample build verifying <5s; alerts configured. |
| DEVOPS-PERF-10-002 | DONE (2025-10-23) | DevOps Guild | BENCH-SCANNER-10-002 | Publish analyzer bench metrics to Grafana/perf workbook and alarm on 20% regressions. | CI exports JSON for dashboards; Grafana panel wired; Ops on-call doc updated with alert hook. |
| DEVOPS-AOC-19-001 | BLOCKED (2025-10-26) | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. |
> Docs hand-off (2025-10-26): see `docs/ingestion/aggregation-only-contract.md` §5, `docs/architecture/overview.md`, and `docs/cli/cli-reference.md` for guard + verifier expectations.
| DEVOPS-AOC-19-002 | BLOCKED (2025-10-26) | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. |
> Blocked: waiting on CLI verifier command and Concelier/Excititor guard endpoints to land (CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004).
| DEVOPS-AOC-19-003 | BLOCKED (2025-10-26) | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. |
> Blocked: guard coverage suites and exporter hooks pending in Concelier/Excititor (CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003).
| DEVOPS-AOC-19-101 | TODO (2025-10-28) | DevOps Guild, Concelier Storage Guild | CONCELIER-STORE-AOC-19-002 | Draft the supersedes backfill rollout (freeze window, dry-run steps, rollback) once the advisory_raw idempotency index passes staging verification. | Runbook committed in `docs/deploy/containers.md` + Offline Kit notes, staging rehearsal scheduled with dependencies captured in SPRINTS. |
| DEVOPS-OBS-50-001 | DONE (2025-10-26) | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. |
| DEVOPS-OBS-50-002 | DOING (2025-10-26) | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. |
> Coordination started with Observability Guild (2025-10-26) to schedule staging rollout and provision service accounts. Staging bootstrap commands and secret names documented in `docs/ops/telemetry-storage.md`.
| DEVOPS-OBS-50-003 | DONE (2025-10-26) | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. |
| DEVOPS-OBS-51-001 | TODO | DevOps Guild, Observability Guild | WEB-OBS-51-001, DEVOPS-OBS-50-001 | Implement SLO evaluator service (burn rate calculators, webhook emitters), Grafana dashboards, and alert routing to Notifier. Provide Terraform/Helm automation. | Dashboards live; evaluator emits webhooks; alert runbook referenced; staging alert fired in test. |
| DEVOPS-OBS-52-001 | TODO | DevOps Guild, Timeline Indexer Guild | TIMELINE-OBS-52-002 | Configure streaming pipeline (NATS/Redis/Kafka) with retention, partitioning, and backpressure tuning for timeline events; add CI validation of schema + rate caps. | Pipeline deployed; load test meets SLA; schema validation job passes; documentation updated. |
| DEVOPS-OBS-53-001 | TODO | DevOps Guild, Evidence Locker Guild | EVID-OBS-53-001 | Provision object storage with WORM/retention options (S3 Object Lock / MinIO immutability), legal hold automation, and backup/restore scripts for evidence locker. | Storage configured with WORM; legal hold script documented; backup test performed; runbook updated. |
| DEVOPS-OBS-54-001 | TODO | DevOps Guild, Security Guild | PROV-OBS-53-002, EVID-OBS-54-001 | Manage provenance signing infrastructure (KMS keys, rotation schedule, timestamp authority integration) and integrate verification jobs into CI. | Keys provisioned with rotation policy; timestamp authority configured; CI verifies sample bundles; audit trail stored. |
| DEVOPS-OBS-55-001 | TODO | DevOps Guild, Ops Guild | DEVOPS-OBS-51-001, WEB-OBS-55-001 | Implement incident mode automation: feature flag service, auto-activation via SLO burn-rate, retention override management, and post-incident reset job. | Incident mode toggles via API/CLI; automation tested in staging; reset job verified; runbook referenced. |
## Air-Gapped Mode (Epic 16)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-AIRGAP-56-001 | TODO | DevOps Guild | AIRGAP-CTL-56-001 | Ship deny-all egress policies for Kubernetes (NetworkPolicy/eBPF) and docker-compose firewall rules; provide verification script for sealed mode. | Policies committed with tests; verification script passes/fails as expected; docs cross-linked. |
| DEVOPS-AIRGAP-56-002 | TODO | DevOps Guild, AirGap Importer Guild | AIRGAP-IMP-57-002 | Provide import tooling for bundle staging: checksum validation, offline object-store loader scripts, removable media guidance. | Scripts documented; smoke tests validate import; runbook updated. |
| DEVOPS-AIRGAP-56-003 | TODO | DevOps Guild, Container Distribution Guild | EXPORT-AIRGAP-56-002 | Build Bootstrap Pack pipeline bundling images/charts, generating checksums, and publishing manifest for offline transfer. | Pipeline runs in connected env; pack verified in air-gap smoke test; manifest recorded. |
| DEVOPS-AIRGAP-57-001 | TODO | DevOps Guild, Mirror Creator Guild | MIRROR-CRT-56-002 | Automate Mirror Bundle creation jobs with dual-control approvals, artifact signing, and checksum publication. | Approval workflow enforced; CI artifact includes DSSE/TUF metadata; audit logs stored. |
| DEVOPS-AIRGAP-57-002 | TODO | DevOps Guild, Authority Guild | AUTH-OBS-50-001 | Configure sealed-mode CI tests that run services with sealed flag and ensure no egress occurs (iptables + mock DNS). | CI suite fails on attempted egress; reports remediation; documentation updated. |
| DEVOPS-AIRGAP-58-001 | TODO | DevOps Guild, Notifications Guild | NOTIFY-AIRGAP-56-002 | Provide local SMTP/syslog container templates and health checks for sealed environments; integrate into Bootstrap Pack. | Templates deployed successfully; health checks in CI; docs updated. |
| DEVOPS-AIRGAP-58-002 | TODO | DevOps Guild, Observability Guild | DEVOPS-AIRGAP-56-001, DEVOPS-OBS-51-001 | Ship sealed-mode observability stack (Prometheus/Grafana/Tempo/Loki) pre-configured with offline dashboards and no remote exporters. | Stack boots offline; dashboards available; verification script confirms zero egress. |
| DEVOPS-REL-14-001 | DONE (2025-10-26) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-14-004 | DONE (2025-10-26) | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. |
| DEVOPS-REL-17-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-REL-17-004 | BLOCKED (2025-10-26) | DevOps Guild | DEVOPS-REL-17-002 | Ensure release workflow publishes `out/release/debug` (build-id tree + manifest) and fails when symbols are missing. | Release job emits debug artefacts, `mirror_debug_store.py` summary committed, warning cleared from build logs, docs updated. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
> Note (2025-10-26, BLOCKED): IdentityModel.Tokens patched for logging 9.x, but release bundle still fails because Docker cannot stream multi-arch build context (`unix:///var/run/docker.sock` unavailable, EOF during copy). Retry once docker daemon/socket is healthy; until then `out/release/debug` cannot be generated.
| DEVOPS-CONSOLE-23-001 | BLOCKED (2025-10-26) | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. |
> Blocked: Console workspace and package scripts (CONSOLE-CORE-23-001..005) are not yet present; CI cannot execute pnpm/Playwright/Lighthouse until the Next.js app lands.
| DEVOPS-CONSOLE-23-002 | TODO | DevOps Guild, Console Guild | DEVOPS-CONSOLE-23-001, CONSOLE-REL-23-301 | Produce `stella-console` container build + Helm chart overlays with deterministic digests, SBOM/provenance artefacts, and offline bundle packaging scripts. | Container published to registry mirror, Helm values committed, SBOM/attestations generated, offline kit job passes smoke test, docs updated. |
| DEVOPS-LAUNCH-18-100 | DONE (2025-10-26) | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. |
| DEVOPS-LAUNCH-18-900 | DONE (2025-10-26) | DevOps Guild, Module Leads | Wave 0 completion | Collect full implementation sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. |
| DEVOPS-LAUNCH-18-001 | DONE (2025-10-26) | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. |
| DEVOPS-NUGET-13-001 | DONE (2025-10-25) | DevOps Guild, Platform Leads | DEVOPS-REL-14-001 | Add .NET 10 preview feeds / local mirrors so `Microsoft.Extensions.*` 10.0 preview packages restore offline; refresh restore docs. | NuGet.config maps preview feeds (or local mirrored packages), `dotnet restore` succeeds for Excititor/Concelier solutions without ad-hoc feed edits, docs updated for offline bootstrap. |
| DEVOPS-NUGET-13-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. |
| DEVOPS-NUGET-13-003 | DONE (2025-10-26) | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. |
## Policy Engine v2
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-POLICY-20-001 | DONE (2025-10-26) | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. |
| DEVOPS-POLICY-20-003 | DONE (2025-10-26) | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. |
| DEVOPS-POLICY-20-004 | DONE (2025-10-27) | DevOps Guild, Scheduler Guild, CLI Guild | SCHED-MODELS-20-001, CLI-POLICY-20-002 | Automate policy schema exports: generate JSON Schema from `PolicyRun*` DTOs during CI, publish artefacts, and emit change alerts for CLI consumers (Slack + changelog). | CI stage outputs versioned schema files, uploads artefacts, notifies #policy-engine channel on change; docs/CLI references updated. |
> 2025-10-27: `.gitea/workflows/build-test-deploy.yml` publishes the `policy-schema-exports` artefact under `artifacts/policy-schemas/<commit>/` and posts Slack diffs via `POLICY_ENGINE_SCHEMA_WEBHOOK`; diff stored as `policy-schema-diff.patch`.
## Graph Explorer v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
## Orchestrator Dashboard
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-ORCH-32-001 | TODO | DevOps Guild, Orchestrator Service Guild | ORCH-SVC-32-001 | Provision orchestrator Postgres/message-bus infrastructure, add CI smoke deploy, seed Grafana dashboards (queue depth, inflight jobs), and document bootstrap. | Helm/Compose profiles committed; CI smoke deploy runs; dashboards live with metrics; runbook updated. |
| DEVOPS-ORCH-33-001 | TODO | DevOps Guild, Observability Guild | DEVOPS-ORCH-32-001, ORCH-SVC-33-001..003 | Publish Grafana dashboards/alerts for rate limiter, backpressure, error clustering, and DLQ depth; integrate with on-call rotations. | Dashboards and alerts configured; synthetic tests validate thresholds; on-call playbook updated. |
| DEVOPS-ORCH-34-001 | TODO | DevOps Guild, Orchestrator Service Guild | DEVOPS-ORCH-33-001, ORCH-SVC-34-001..003 | Harden production monitoring (synthetic probes, burn-rate alerts, replay smoke), document incident response, and prep GA readiness checklist. | Synthetic probes created; burn-rate alerts firing on test scenario; GA checklist approved; runbook linked. |
## Link-Not-Merge v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-LNM-22-001 | BLOCKED (2025-10-27) | DevOps Guild, Concelier Guild | CONCELIER-LNM-21-102 | Run migration/backfill pipelines for advisory observations/linksets in staging, validate counts/conflicts, and automate deployment steps. | |
> Blocked: awaiting storage backfill tooling (CONCELIER-LNM-21-102).
| DEVOPS-LNM-22-002 | BLOCKED (2025-10-27) | DevOps Guild, Excititor Guild | EXCITITOR-LNM-21-102 | Execute VEX observation/linkset backfill with monitoring; ensure NATS/Redis events integrated; document ops runbook. | |
> Blocked: awaiting Excititor storage migration (EXCITITOR-LNM-21-102).
| DEVOPS-LNM-22-003 | TODO | DevOps Guild, Observability Guild | CONCELIER-LNM-21-005, EXCITITOR-LNM-21-005 | Add CI/monitoring coverage for new metrics (`advisory_observations_total`, `linksets_total`, etc.) and alerts on ingest-to-API SLA breaches. | Metrics scraped into Grafana; alert thresholds set; CI job verifies metric emission. |
## Graph & Vuln Explorer v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-GRAPH-24-001 | TODO | DevOps Guild, SBOM Service Guild | SBOM-GRAPH-24-002 | Load test graph index/adjacency APIs with 40k-node assets; capture perf dashboards and alert thresholds. | Perf suite added; dashboards live; alerts configured. |
| DEVOPS-GRAPH-24-002 | TODO | DevOps Guild, UI Guild | UI-GRAPH-24-001..005 | Integrate synthetic UI perf runs (Playwright/WebGL metrics) for Graph/Vuln explorers; fail builds on regression. | CI job runs UI perf tests; baseline stored; documentation updated. |
| DEVOPS-GRAPH-24-003 | TODO | DevOps Guild | WEB-GRAPH-24-002 | Implement smoke job for simulation endpoints ensuring we stay within SLA (<3s upgrade) and log results. | Smoke job in CI; alerts when SLA breached; runbook documented. |
| DEVOPS-POLICY-27-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-POLICY-27-001, REGISTRY-API-27-001 | Add CI pipeline stages to run `stella policy lint\|compile\|test` with secret scanning on policy sources for PRs touching `/policies/**`; publish diagnostics artifacts. | Pipeline executes on PR/main, failures block merges, secret scan summary uploaded, docs updated. |
| DEVOPS-POLICY-27-002 | TODO | DevOps Guild, Policy Registry Guild | REGISTRY-API-27-005, SCHED-WORKER-27-301 | Provide optional batch simulation CI job (staging inventory) that triggers Registry run, polls results, and posts markdown summary to PR; enforce drift thresholds. | Job configurable via label, summary comment generated, drift threshold gates merges, runbook documented. |
| DEVOPS-POLICY-27-003 | TODO | DevOps Guild, Security Guild | AUTH-POLICY-27-002, REGISTRY-API-27-007 | Manage signing key material for policy publish pipeline (OIDC workload identity + cosign), rotate keys, and document verification steps; integrate attestation verification stage. | Keys stored in secure vault, rotation procedure documented, CI verifies attestations, audit logs recorded. |
| DEVOPS-POLICY-27-004 | TODO | DevOps Guild, Observability Guild | WEB-POLICY-27-005, TELEMETRY-CONSOLE-27-001 | Create dashboards/alerts for policy compile latency, simulation queue depth, approval latency, and promotion outcomes; integrate with on-call playbooks. | Grafana dashboards live, alerts tuned, runbooks updated, observability tests verify metric ingestion. |
> Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed.
> Remark (2025-10-21): Compose/Helm profiles now surface `SCANNER__EVENTS__*` toggles with docs pointing at new `.env` placeholders.
## Reachability v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-SIG-26-001 | TODO | DevOps Guild, Signals Guild | SIGNALS-24-001 | Provision CI/CD pipelines, Helm/Compose manifests for Signals service, including artifact storage and Redis dependencies. | Pipelines ship Signals service; deployment docs updated; smoke tests green. |
| DEVOPS-SIG-26-002 | TODO | DevOps Guild, Observability Guild | SIGNALS-24-004 | Create dashboards/alerts for reachability scoring latency, cache hit rates, sensor staleness. | Dashboards live; alert thresholds configured; documentation updated. |
| DEVOPS-VULN-29-001 | TODO | DevOps Guild, Findings Ledger Guild | LEDGER-29-002..009 | Provision CI jobs for ledger projector (replay, determinism), set up backups, monitor Merkle anchoring, and automate verification. | CI job verifies hash chains; backups documented; alerts for anchoring failures configured. |
| DEVOPS-VULN-29-002 | TODO | DevOps Guild, Vuln Explorer API Guild | VULN-API-29-002..009 | Configure load/perf tests (5M findings/tenant), query budget enforcement, API SLO dashboards, and alerts for `vuln_list_latency` and `projection_lag`. | Perf suite integrated; dashboards live; alerts firing; runbooks updated. |
| DEVOPS-VULN-29-003 | TODO | DevOps Guild, Console Guild | WEB-VULN-29-004, CONSOLE-VULN-29-007 | Instrument analytics pipeline for Vuln Explorer (telemetry ingestion, query hashes), ensure compliance with privacy/PII guardrails, and update observability docs. | Telemetry pipeline operational; PII redaction verified; docs updated with checklist. |
| DEVOPS-VEX-30-001 | TODO | DevOps Guild, VEX Lens Guild | VEXLENS-30-009, ISSUER-30-005 | Provision CI, load tests, dashboards, alerts for VEX Lens and Issuer Directory (compute latency, disputed totals, signature verification rates). | CI/perf suites running; dashboards live; alerts configured; docs updated. |
| DEVOPS-AIAI-31-001 | TODO | DevOps Guild, Advisory AI Guild | AIAI-31-006..007 | Stand up CI pipelines, inference monitoring, privacy logging review, and perf dashboards for Advisory AI (summaries/conflicts/remediation). | CI covers golden outputs, telemetry dashboards live, privacy controls reviewed, alerts configured. |
## Export Center
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-EXPORT-35-001 | BLOCKED (2025-10-29) | DevOps Guild, Exporter Service Guild | EXPORT-SVC-35-001..006 | Establish exporter CI pipeline (lint/test/perf smoke), configure object storage fixtures, seed Grafana dashboards, and document bootstrap steps. | CI pipeline running; smoke export job seeded; dashboards live; runbook updated. |
| DEVOPS-EXPORT-36-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-35-001, EXPORT-SVC-36-001..004 | Integrate Trivy compatibility validation, cosign signature checks, `trivy module db import` smoke tests, OCI distribution verification, and throughput/error dashboards. | CI executes cosign + Trivy import validation; OCI push smoke passes; dashboards/alerts configured. |
| DEVOPS-EXPORT-37-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-36-001, EXPORT-SVC-37-001..004 | Finalize exporter monitoring (failure alerts, verify metrics, retention jobs) and chaos/latency tests ahead of GA. | Alerts tuned; chaos tests documented; retention monitoring active; runbook updated. |
## CLI Parity & Task Packs
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-CLI-41-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-CORE-41-001 | Establish CLI build pipeline (multi-platform binaries, SBOM, checksums), parity matrix CI enforcement, and release artifact signing. | Build pipeline operational; SBOM/checksums published; parity gate failing on drift; docs updated. |
| DEVOPS-CLI-42-001 | TODO | DevOps Guild | DEVOPS-CLI-41-001, CLI-PARITY-41-001 | Add CLI golden output tests, parity diff automation, pack run CI harness, and artifact cache for remote mode. | Golden tests running; parity diff automation in CI; pack run harness executes sample packs; documentation updated. |
| DEVOPS-CLI-43-001 | DOING (2025-10-27) | DevOps Guild | DEVOPS-CLI-42-001, TASKRUN-42-001 | Finalize multi-platform release automation, SBOM signing, parity gate enforcement, and Task Pack chaos tests. | Release automation verified; SBOM signed; parity gate enforced; chaos tests documented. |
> 2025-10-27: Release pipeline now packages CLI multi-platform artefacts with SBOM/signature coverage and enforces the CLI parity gate (`ops/devops/check_cli_parity.py`). Task Pack chaos smoke still pending CLI pack command delivery.
| DEVOPS-CLI-43-002 | TODO | DevOps Guild, Task Runner Guild | CLI-PACKS-43-001, TASKRUN-43-001 | Implement Task Pack chaos smoke in CI (random failure injection, resume, sealed-mode toggle) and publish evidence bundles for review. | Chaos smoke job runs nightly; failures alert Slack; evidence stored in `out/pack-chaos`; runbook updated. |
| DEVOPS-CLI-43-003 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-PARITY-41-001, CLI-PACKS-42-001 | Integrate CLI golden output/parity diff automation into release gating; export parity report artifact consumed by Console Downloads workspace. | `check_cli_parity.py` wired to compare parity matrix and CLI outputs; artifact uploaded; release fails on regressions. |
## Containerized Distribution (Epic 13)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-CONTAINERS-44-001 | TODO | DevOps Guild | DOCKER-44-001..003 | Automate multi-arch image builds with buildx, SBOM generation, cosign signing, and signature verification in CI. | Pipeline builds amd64/arm64; SBOMs pushed as referrers; cosign verify job passes. |
| DEVOPS-CONTAINERS-45-001 | TODO | DevOps Guild | HELM-45-001 | Add Compose and Helm smoke tests (fresh VM + kind cluster) to CI; publish test artifacts and logs. | CI jobs running; failures block releases; documentation updated. |
| DEVOPS-CONTAINERS-46-001 | TODO | DevOps Guild | DEPLOY-PACKS-43-001 | Build air-gap bundle generator (`tools/make-airgap-bundle.sh`), produce signed bundle, and verify in CI using private registry. | Bundle artifact produced with signatures/checksums; verification job passes; instructions documented. |
### Container Images (Epic 13)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DOCKER-44-001 | TODO | DevOps Guild, Service Owners | DEVOPS-CLI-41-001 | Author multi-stage Dockerfiles for all core services (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AI) with non-root users, read-only file systems, and health scripts. | Dockerfiles committed; images build successfully; container security scans clean; health endpoints reachable. |
| DOCKER-44-002 | TODO | DevOps Guild | DOCKER-44-001 | Generate SBOMs and cosign attestations for each image and integrate verification into CI. | SBOMs attached as OCI artifacts; cosign signatures published; CI verifies signatures prior to release. |
| DOCKER-44-003 | TODO | DevOps Guild | DOCKER-44-001 | Implement `/health/liveness`, `/health/readiness`, `/version`, `/metrics`, and ensure the capability endpoint returns `merge=false` for Concelier/Excititor. | Endpoints available across services; automated tests confirm responses; documentation updated with imposed rule reminder. |
## Authority-Backed Scopes & Tenancy (Epic 14)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-TEN-47-001 | TODO | DevOps Guild | AUTH-TEN-47-001 | Add JWKS cache monitoring, signature verification regression tests, and token expiration chaos tests to CI. | CI verifies tokens using cached keys; chaos test for expired keys passes; documentation updated. |
| DEVOPS-TEN-48-001 | TODO | DevOps Guild | WEB-TEN-48-001 | Build integration tests to assert RLS enforcement, tenant-prefixed object storage, and audit event emission; set up lint to prevent raw SQL bypass. | Tests fail on cross-tenant access; lint enforced; dashboards capture audit events. |
| DEVOPS-TEN-49-001 | TODO | DevOps Guild | AUTH-TEN-49-001 | Deploy audit pipeline, scope usage metrics, JWKS outage chaos tests, and tenant load/perf benchmarks. | Audit pipeline live; metrics dashboards updated; chaos tests documented; perf benchmarks recorded. |
## SDKs & OpenAPI (Epic 17)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-OAS-61-001 | TODO | DevOps Guild, API Contracts Guild | OAS-61-002 | Add CI stages for OpenAPI linting, validation, and compatibility diff; enforce gating on PRs. | Pipeline active; merge blocked on failures; documentation updated. |
| DEVOPS-OAS-61-002 | TODO | DevOps Guild, Contract Testing Guild | CONTR-62-002 | Integrate mock server + contract test suite into PR and nightly workflows; publish artifacts. | Tests run in CI; artifacts stored; failures alert. |
| DEVOPS-SDK-63-001 | TODO | DevOps Guild, SDK Release Guild | SDKREL-63-001 | Provision registry credentials, signing keys, and secure storage for SDK publishing pipelines. | Keys stored/rotated; publish pipeline authenticated; audit logs recorded. |
| DEVOPS-DEVPORT-63-001 | TODO | DevOps Guild, Developer Portal Guild | DEVPORT-62-001 | Automate developer portal build pipeline with caching, link & accessibility checks, performance budgets. | Pipeline enforced; reports archived; failures gate merges. |
| DEVOPS-DEVPORT-64-001 | TODO | DevOps Guild, DevPortal Offline Guild | DVOFF-64-001 | Schedule `devportal --offline` nightly builds with checksum validation and artifact retention policies. | Nightly job running; checksums published; retention policy documented. |
## Attestor Console (Epic 19)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-ATTEST-73-001 | TODO | DevOps Guild, Attestor Service Guild | ATTESTOR-72-002 | Provision CI pipelines for attestor service (lint/test/security scan, seed data) and manage secrets for KMS drivers. | CI pipeline running; secrets stored securely; docs updated. |
| DEVOPS-ATTEST-73-002 | TODO | DevOps Guild, KMS Guild | KMS-72-001 | Establish secure storage for signing keys (vault integration, rotation schedule) and audit logging. | Key storage configured; rotation documented; audit logs verified. |
| DEVOPS-ATTEST-74-001 | TODO | DevOps Guild, Transparency Guild | TRANSP-74-001 | Deploy transparency log witness infrastructure and monitoring. | Witness service deployed; dashboards/alerts live. |
| DEVOPS-ATTEST-74-002 | TODO | DevOps Guild, Export Attestation Guild | EXPORT-ATTEST-74-001 | Integrate attestation bundle builds into release/offline pipelines with checksum verification. | Bundle job in CI; checksum verification passes; docs updated. |
| DEVOPS-ATTEST-75-001 | TODO | DevOps Guild, Observability Guild | ATTEST-VERIFY-74-001 | Add dashboards/alerts for signing latency, verification failures, key rotation events. | Dashboards live; alerts configured. |
# DevOps Task Board
## Governance & Rules
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-RULES-33-001 | DOING (2025-10-26) | DevOps Guild, Platform Leads | — | Contracts & Rules anchor:<br>• Gateway proxies only; Policy Engine composes overlays/simulations.<br>• AOC ingestion cannot merge; only lossless canonicalization.<br>• One graph platform: Graph Indexer + Graph API. Cartographer retired. | Rules posted in SPRINTS/TASKS; duplicates cleaned per guidance; reviewers acknowledge in changelog. |
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-HELM-09-001 | DONE | DevOps Guild | SCANNER-WEB-09-101 | Create Helm/Compose environment profiles (dev, staging, airgap) with deterministic digests. | Profiles committed under `deploy/`; docs updated; CI smoke deploy passes. |
| DEVOPS-SCANNER-09-204 | DONE (2025-10-21) | DevOps Guild, Scanner WebService Guild | SCANNER-EVENTS-15-201 | Surface `SCANNER__EVENTS__*` environment variables across docker-compose (dev/stage/airgap) and Helm values, defaulting to share the Redis queue DSN. | Compose/Helm configs ship enabled Redis event publishing with documented overrides; lint jobs updated; docs cross-link to new knobs. |
| DEVOPS-SCANNER-09-205 | DONE (2025-10-21) | DevOps Guild, Notify Guild | DEVOPS-SCANNER-09-204 | Add Notify smoke stage that tails the Redis stream and asserts `scanner.report.ready`/`scanner.scan.completed` reach Notify WebService in staging. | CI job reads Redis stream during scanner smoke deploy, confirms Notify ingestion via API, alerts on failure. |
| DEVOPS-PERF-10-001 | DONE | DevOps Guild | BENCH-SCANNER-10-001 | Add perf smoke job (SBOM compose <5s target) to CI. | CI job runs sample build verifying <5s; alerts configured. |
| DEVOPS-PERF-10-002 | DONE (2025-10-23) | DevOps Guild | BENCH-SCANNER-10-002 | Publish analyzer bench metrics to Grafana/perf workbook and alarm on 20% regressions. | CI exports JSON for dashboards; Grafana panel wired; Ops on-call doc updated with alert hook. |
| DEVOPS-AOC-19-001 | BLOCKED (2025-10-26) | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. |
> Docs hand-off (2025-10-26): see `docs/ingestion/aggregation-only-contract.md` §5, `docs/architecture/overview.md`, and `docs/cli/cli-reference.md` for guard + verifier expectations.
| DEVOPS-AOC-19-002 | BLOCKED (2025-10-26) | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. |
> Blocked: waiting on CLI verifier command and Concelier/Excititor guard endpoints to land (CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004).
| DEVOPS-AOC-19-003 | BLOCKED (2025-10-26) | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. |
> Blocked: guard coverage suites and exporter hooks pending in Concelier/Excititor (CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003).
| DEVOPS-AOC-19-101 | TODO (2025-10-28) | DevOps Guild, Concelier Storage Guild | CONCELIER-STORE-AOC-19-002 | Draft supersedes backfill rollout (freeze window, dry-run steps, rollback) once advisory_raw idempotency index passes staging verification. | Runbook committed in `docs/deploy/containers.md` + Offline Kit notes, staging rehearsal scheduled with dependencies captured in SPRINTS. |
| DEVOPS-OBS-50-001 | DONE (2025-10-26) | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. |
| DEVOPS-OBS-50-002 | DOING (2025-10-26) | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. |
> Coordination started with Observability Guild (2025-10-26) to schedule staging rollout and provision service accounts. Staging bootstrap commands and secret names documented in `docs/ops/telemetry-storage.md`.
| DEVOPS-OBS-50-003 | DONE (2025-10-26) | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. |
| DEVOPS-OBS-51-001 | TODO | DevOps Guild, Observability Guild | WEB-OBS-51-001, DEVOPS-OBS-50-001 | Implement SLO evaluator service (burn rate calculators, webhook emitters), Grafana dashboards, and alert routing to Notifier. Provide Terraform/Helm automation. | Dashboards live; evaluator emits webhooks; alert runbook referenced; staging alert fired in test. |
| DEVOPS-OBS-52-001 | TODO | DevOps Guild, Timeline Indexer Guild | TIMELINE-OBS-52-002 | Configure streaming pipeline (NATS/Redis/Kafka) with retention, partitioning, and backpressure tuning for timeline events; add CI validation of schema + rate caps. | Pipeline deployed; load test meets SLA; schema validation job passes; documentation updated. |
| DEVOPS-OBS-53-001 | TODO | DevOps Guild, Evidence Locker Guild | EVID-OBS-53-001 | Provision object storage with WORM/retention options (S3 Object Lock / MinIO immutability), legal hold automation, and backup/restore scripts for evidence locker. | Storage configured with WORM; legal hold script documented; backup test performed; runbook updated. |
| DEVOPS-OBS-54-001 | TODO | DevOps Guild, Security Guild | PROV-OBS-53-002, EVID-OBS-54-001 | Manage provenance signing infrastructure (KMS keys, rotation schedule, timestamp authority integration) and integrate verification jobs into CI. | Keys provisioned with rotation policy; timestamp authority configured; CI verifies sample bundles; audit trail stored. |
| DEVOPS-OBS-55-001 | TODO | DevOps Guild, Ops Guild | DEVOPS-OBS-51-001, WEB-OBS-55-001 | Implement incident mode automation: feature flag service, auto-activation via SLO burn-rate, retention override management, and post-incident reset job. | Incident mode toggles via API/CLI; automation tested in staging; reset job verified; runbook referenced. |
## Air-Gapped Mode (Epic 16)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-AIRGAP-56-001 | TODO | DevOps Guild | AIRGAP-CTL-56-001 | Ship deny-all egress policies for Kubernetes (NetworkPolicy/eBPF) and docker-compose firewall rules; provide verification script for sealed mode. | Policies committed with tests; verification script passes/fails as expected; docs cross-linked. |
| DEVOPS-AIRGAP-56-002 | TODO | DevOps Guild, AirGap Importer Guild | AIRGAP-IMP-57-002 | Provide import tooling for bundle staging: checksum validation, offline object-store loader scripts, removable media guidance. | Scripts documented; smoke tests validate import; runbook updated. |
| DEVOPS-AIRGAP-56-003 | TODO | DevOps Guild, Container Distribution Guild | EXPORT-AIRGAP-56-002 | Build Bootstrap Pack pipeline bundling images/charts, generating checksums, and publishing manifest for offline transfer. | Pipeline runs in connected env; pack verified in air-gap smoke test; manifest recorded. |
| DEVOPS-AIRGAP-57-001 | TODO | DevOps Guild, Mirror Creator Guild | MIRROR-CRT-56-002 | Automate Mirror Bundle creation jobs with dual-control approvals, artifact signing, and checksum publication. | Approval workflow enforced; CI artifact includes DSSE/TUF metadata; audit logs stored. |
| DEVOPS-AIRGAP-57-002 | TODO | DevOps Guild, Authority Guild | AUTH-OBS-50-001 | Configure sealed-mode CI tests that run services with sealed flag and ensure no egress occurs (iptables + mock DNS). | CI suite fails on attempted egress; reports remediation; documentation updated. |
| DEVOPS-AIRGAP-58-001 | TODO | DevOps Guild, Notifications Guild | NOTIFY-AIRGAP-56-002 | Provide local SMTP/syslog container templates and health checks for sealed environments; integrate into Bootstrap Pack. | Templates deployed successfully; health checks in CI; docs updated. |
| DEVOPS-AIRGAP-58-002 | TODO | DevOps Guild, Observability Guild | DEVOPS-AIRGAP-56-001, DEVOPS-OBS-51-001 | Ship sealed-mode observability stack (Prometheus/Grafana/Tempo/Loki) pre-configured with offline dashboards and no remote exporters. | Stack boots offline; dashboards available; verification script confirms zero egress. |
| DEVOPS-REL-14-001 | DONE (2025-10-26) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-14-004 | DONE (2025-10-26) | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. |
| DEVOPS-REL-17-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests (path-layout sketch after this table). | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-REL-17-004 | BLOCKED (2025-10-26) | DevOps Guild | DEVOPS-REL-17-002 | Ensure release workflow publishes `out/release/debug` (build-id tree + manifest) and fails when symbols are missing. | Release job emits debug artefacts, `mirror_debug_store.py` summary committed, warning cleared from build logs, docs updated. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
> Note (2025-10-26, BLOCKED): IdentityModel.Tokens patched for logging 9.x, but release bundle still fails because Docker cannot stream multi-arch build context (`unix:///var/run/docker.sock` unavailable, EOF during copy). Retry once docker daemon/socket is healthy; until then `out/release/debug` cannot be generated.
| DEVOPS-CONSOLE-23-001 | BLOCKED (2025-10-26) | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. |
> Blocked: Console workspace and package scripts (CONSOLE-CORE-23-001..005) are not yet present; CI cannot execute pnpm/Playwright/Lighthouse until the Next.js app lands.
| DEVOPS-CONSOLE-23-002 | TODO | DevOps Guild, Console Guild | DEVOPS-CONSOLE-23-001, CONSOLE-REL-23-301 | Produce `stella-console` container build + Helm chart overlays with deterministic digests, SBOM/provenance artefacts, and offline bundle packaging scripts. | Container published to registry mirror, Helm values committed, SBOM/attestations generated, offline kit job passes smoke test, docs updated. |
| DEVOPS-LAUNCH-18-100 | DONE (2025-10-26) | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. |
| DEVOPS-LAUNCH-18-900 | DONE (2025-10-26) | DevOps Guild, Module Leads | Wave 0 completion | Collect full implementation sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. |
| DEVOPS-LAUNCH-18-001 | DONE (2025-10-26) | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. |
| DEVOPS-NUGET-13-001 | DONE (2025-10-25) | DevOps Guild, Platform Leads | DEVOPS-REL-14-001 | Add .NET 10 preview feeds / local mirrors so `Microsoft.Extensions.*` 10.0 preview packages restore offline; refresh restore docs. | NuGet.config maps preview feeds (or local mirrored packages), `dotnet restore` succeeds for Excititor/Concelier solutions without ad-hoc feed edits, docs updated for offline bootstrap. |
| DEVOPS-NUGET-13-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. |
| DEVOPS-NUGET-13-003 | DONE (2025-10-26) | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. |
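As background for the DEVOPS-REL-17 rows above, a minimal sketch of the GNU build-id sharding convention the debug store relies on; the helper name is illustrative, and `mirror_debug_store.py` itself is not shown in this diff:

```python
from pathlib import Path

def build_id_debug_path(root: Path, build_id: str) -> Path:
    """Map a GNU build-id to the sharded debug-store path (ab/cdef....debug)."""
    normalized = build_id.strip().lower()
    if len(normalized) < 3 or any(c not in "0123456789abcdef" for c in normalized):
        raise ValueError(f"not a hex build-id: {build_id!r}")
    # First two hex characters become the shard directory; the rest name the file.
    return root / ".build-id" / normalized[:2] / f"{normalized[2:]}.debug"

# Example: a build-id of "abcdef1234" lands under artifacts/debug/.build-id/ab/.
assert build_id_debug_path(Path("artifacts/debug"), "abcdef1234").as_posix() == (
    "artifacts/debug/.build-id/ab/cdef1234.debug"
)
```

This is the same layout GDB and debuginfod resolvers expect, which is what lets the smoke job look up symbols by build-id alone.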
## Policy Engine v2
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-POLICY-20-001 | DONE (2025-10-26) | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. |
| DEVOPS-POLICY-20-003 | DONE (2025-10-26) | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. |
| DEVOPS-POLICY-20-004 | DONE (2025-10-27) | DevOps Guild, Scheduler Guild, CLI Guild | SCHED-MODELS-20-001, CLI-POLICY-20-002 | Automate policy schema exports: generate JSON Schema from `PolicyRun*` DTOs during CI, publish artefacts, and emit change alerts for CLI consumers (Slack + changelog). | CI stage outputs versioned schema files, uploads artefacts, notifies #policy-engine channel on change; docs/CLI references updated. |
> 2025-10-27: `.gitea/workflows/build-test-deploy.yml` publishes the `policy-schema-exports` artefact under `artifacts/policy-schemas/<commit>/` and posts Slack diffs via `POLICY_ENGINE_SCHEMA_WEBHOOK`; diff stored as `policy-schema-diff.patch`.
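A minimal sketch of the diff-and-notify step that note describes, assuming one schema file per DTO and that `POLICY_ENGINE_SCHEMA_WEBHOOK` is an incoming Slack webhook; the real workflow lives in `.gitea/workflows/build-test-deploy.yml`:

```python
import difflib
import json
import os
import urllib.request
from pathlib import Path

def post_schema_diff(previous: Path, current: Path) -> None:
    """Diff one exported schema file against its prior version and alert Slack."""
    old = previous.read_text(encoding="utf-8").splitlines(keepends=True)
    new = current.read_text(encoding="utf-8").splitlines(keepends=True)
    diff = "".join(difflib.unified_diff(old, new, fromfile=previous.name, tofile=current.name))
    if not diff:
        return  # unchanged schema: no notification
    webhook = os.environ["POLICY_ENGINE_SCHEMA_WEBHOOK"]  # CI secret named in the note above
    body = json.dumps({"text": f"Policy schema changed:\n```\n{diff[:3500]}\n```"}).encode("utf-8")
    request = urllib.request.Request(webhook, data=body, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(request) as response:
        response.read()  # drain so the connection closes cleanly
```

The CI stage would call this per schema file after comparing against the previous `policy-schema-exports` artefact, and archive the concatenated diff as `policy-schema-diff.patch`.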
## Graph Explorer v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
## Orchestrator Dashboard
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-ORCH-32-001 | TODO | DevOps Guild, Orchestrator Service Guild | ORCH-SVC-32-001 | Provision orchestrator Postgres/message-bus infrastructure, add CI smoke deploy, seed Grafana dashboards (queue depth, inflight jobs), and document bootstrap. | Helm/Compose profiles committed; CI smoke deploy runs; dashboards live with metrics; runbook updated. |
| DEVOPS-ORCH-33-001 | TODO | DevOps Guild, Observability Guild | DEVOPS-ORCH-32-001, ORCH-SVC-33-001..003 | Publish Grafana dashboards/alerts for rate limiter, backpressure, error clustering, and DLQ depth; integrate with on-call rotations. | Dashboards and alerts configured; synthetic tests validate thresholds; on-call playbook updated. |
| DEVOPS-ORCH-34-001 | TODO | DevOps Guild, Orchestrator Service Guild | DEVOPS-ORCH-33-001, ORCH-SVC-34-001..003 | Harden production monitoring (synthetic probes, burn-rate alerts, replay smoke), document incident response, and prep GA readiness checklist. | Synthetic probes created; burn-rate alerts firing on test scenario; GA checklist approved; runbook linked. |
## Link-Not-Merge v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-LNM-22-001 | BLOCKED (2025-10-27) | DevOps Guild, Concelier Guild | CONCELIER-LNM-21-102 | Run migration/backfill pipelines for advisory observations/linksets in staging, validate counts/conflicts, and automate deployment steps. | Blocked awaiting storage backfill tooling. |
| DEVOPS-LNM-22-002 | BLOCKED (2025-10-27) | DevOps Guild, Excititor Guild | EXCITITOR-LNM-21-102 | Execute VEX observation/linkset backfill with monitoring; ensure NATS/Redis events integrated; document ops runbook. | Blocked until Excititor storage migration lands. |
| DEVOPS-LNM-22-003 | TODO | DevOps Guild, Observability Guild | CONCELIER-LNM-21-005, EXCITITOR-LNM-21-005 | Add CI/monitoring coverage for new metrics (`advisory_observations_total`, `linksets_total`, etc.) and alerts on ingest-to-API SLA breaches. | Metrics scraped into Grafana; alert thresholds set; CI job verifies metric emission. |
## Graph & Vuln Explorer v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-GRAPH-24-001 | TODO | DevOps Guild, SBOM Service Guild | SBOM-GRAPH-24-002 | Load test graph index/adjacency APIs with 40k-node assets; capture perf dashboards and alert thresholds. | Perf suite added; dashboards live; alerts configured. |
| DEVOPS-GRAPH-24-002 | TODO | DevOps Guild, UI Guild | UI-GRAPH-24-001..005 | Integrate synthetic UI perf runs (Playwright/WebGL metrics) for Graph/Vuln explorers; fail builds on regression. | CI job runs UI perf tests; baseline stored; documentation updated. |
| DEVOPS-GRAPH-24-003 | TODO | DevOps Guild | WEB-GRAPH-24-002 | Implement smoke job for simulation endpoints ensuring we stay within SLA (<3s upgrade) and log results. | Smoke job in CI; alerts when SLA breached; runbook documented. |
| DEVOPS-POLICY-27-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-POLICY-27-001, REGISTRY-API-27-001 | Add CI pipeline stages to run `stella policy lint\|compile\|test` with secret scanning on policy sources for PRs touching `/policies/**`; publish diagnostics artifacts. | Pipeline executes on PR/main, failures block merges, secret scan summary uploaded, docs updated. |
| DEVOPS-POLICY-27-002 | TODO | DevOps Guild, Policy Registry Guild | REGISTRY-API-27-005, SCHED-WORKER-27-301 | Provide optional batch simulation CI job (staging inventory) that triggers Registry run, polls results, and posts markdown summary to PR; enforce drift thresholds. | Job configurable via label, summary comment generated, drift threshold gates merges, runbook documented. |
| DEVOPS-POLICY-27-003 | TODO | DevOps Guild, Security Guild | AUTH-POLICY-27-002, REGISTRY-API-27-007 | Manage signing key material for policy publish pipeline (OIDC workload identity + cosign), rotate keys, and document verification steps; integrate attestation verification stage. | Keys stored in secure vault, rotation procedure documented, CI verifies attestations, audit logs recorded. |
| DEVOPS-POLICY-27-004 | TODO | DevOps Guild, Observability Guild | WEB-POLICY-27-005, TELEMETRY-CONSOLE-27-001 | Create dashboards/alerts for policy compile latency, simulation queue depth, approval latency, and promotion outcomes; integrate with on-call playbooks. | Grafana dashboards live, alerts tuned, runbooks updated, observability tests verify metric ingestion. |
> Remark (2025-10-20): Repacked `Mongo2Go` local feed to require MongoDB.Driver 3.5.0 + SharpCompress 0.41.0; cache regression tests green and NU1902/NU1903 suppressed.
> Remark (2025-10-21): Compose/Helm profiles now surface `SCANNER__EVENTS__*` toggles with docs pointing at new `.env` placeholders.
## Reachability v1
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-SIG-26-001 | TODO | DevOps Guild, Signals Guild | SIGNALS-24-001 | Provision CI/CD pipelines, Helm/Compose manifests for Signals service, including artifact storage and Redis dependencies. | Pipelines ship Signals service; deployment docs updated; smoke tests green. |
| DEVOPS-SIG-26-002 | TODO | DevOps Guild, Observability Guild | SIGNALS-24-004 | Create dashboards/alerts for reachability scoring latency, cache hit rates, sensor staleness. | Dashboards live; alert thresholds configured; documentation updated. |
| DEVOPS-VULN-29-001 | TODO | DevOps Guild, Findings Ledger Guild | LEDGER-29-002..009 | Provision CI jobs for ledger projector (replay, determinism), set up backups, monitor Merkle anchoring, and automate verification. | CI job verifies hash chains; backups documented; alerts for anchoring failures configured. |
| DEVOPS-VULN-29-002 | TODO | DevOps Guild, Vuln Explorer API Guild | VULN-API-29-002..009 | Configure load/perf tests (5M findings/tenant), query budget enforcement, API SLO dashboards, and alerts for `vuln_list_latency` and `projection_lag`. | Perf suite integrated; dashboards live; alerts firing; runbooks updated. |
| DEVOPS-VULN-29-003 | TODO | DevOps Guild, Console Guild | WEB-VULN-29-004, CONSOLE-VULN-29-007 | Instrument analytics pipeline for Vuln Explorer (telemetry ingestion, query hashes), ensure compliance with privacy/PII guardrails, and update observability docs. | Telemetry pipeline operational; PII redaction verified; docs updated with checklist. |
| DEVOPS-VEX-30-001 | TODO | DevOps Guild, VEX Lens Guild | VEXLENS-30-009, ISSUER-30-005 | Provision CI, load tests, dashboards, alerts for VEX Lens and Issuer Directory (compute latency, disputed totals, signature verification rates). | CI/perf suites running; dashboards live; alerts configured; docs updated. |
| DEVOPS-AIAI-31-001 | TODO | DevOps Guild, Advisory AI Guild | AIAI-31-006..007 | Stand up CI pipelines, inference monitoring, privacy logging review, and perf dashboards for Advisory AI (summaries/conflicts/remediation). | CI covers golden outputs, telemetry dashboards live, privacy controls reviewed, alerts configured. |
## Export Center
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-EXPORT-35-001 | BLOCKED (2025-10-29) | DevOps Guild, Exporter Service Guild | EXPORT-SVC-35-001..006 | Establish exporter CI pipeline (lint/test/perf smoke), configure object storage fixtures, seed Grafana dashboards, and document bootstrap steps. | CI pipeline running; smoke export job seeded; dashboards live; runbook updated. |
| DEVOPS-EXPORT-36-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-35-001, EXPORT-SVC-36-001..004 | Integrate Trivy compatibility validation, cosign signature checks, `trivy module db import` smoke tests, OCI distribution verification, and throughput/error dashboards. | CI executes cosign + Trivy import validation; OCI push smoke passes; dashboards/alerts configured. |
| DEVOPS-EXPORT-37-001 | TODO | DevOps Guild, Exporter Service Guild | DEVOPS-EXPORT-36-001, EXPORT-SVC-37-001..004 | Finalize exporter monitoring (failure alerts, verify metrics, retention jobs) and chaos/latency tests ahead of GA. | Alerts tuned; chaos tests documented; retention monitoring active; runbook updated. |
## CLI Parity & Task Packs
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-CLI-41-001 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-CORE-41-001 | Establish CLI build pipeline (multi-platform binaries, SBOM, checksums), parity matrix CI enforcement, and release artifact signing. | Build pipeline operational; SBOM/checksums published; parity gate failing on drift; docs updated. |
| DEVOPS-CLI-42-001 | TODO | DevOps Guild | DEVOPS-CLI-41-001, CLI-PARITY-41-001 | Add CLI golden output tests, parity diff automation, pack run CI harness, and artifact cache for remote mode. | Golden tests running; parity diff automation in CI; pack run harness executes sample packs; documentation updated. |
| DEVOPS-CLI-43-001 | DOING (2025-10-27) | DevOps Guild | DEVOPS-CLI-42-001, TASKRUN-42-001 | Finalize multi-platform release automation, SBOM signing, parity gate enforcement, and Task Pack chaos tests. | Release automation verified; SBOM signed; parity gate enforced; chaos tests documented. |
> 2025-10-27: Release pipeline now packages CLI multi-platform artefacts with SBOM/signature coverage and enforces the CLI parity gate (`ops/devops/check_cli_parity.py`). Task Pack chaos smoke still pending CLI pack command delivery.
| DEVOPS-CLI-43-002 | TODO | DevOps Guild, Task Runner Guild | CLI-PACKS-43-001, TASKRUN-43-001 | Implement Task Pack chaos smoke in CI (random failure injection, resume, sealed-mode toggle) and publish evidence bundles for review. | Chaos smoke job runs nightly; failures alert Slack; evidence stored in `out/pack-chaos`; runbook updated. |
| DEVOPS-CLI-43-003 | TODO | DevOps Guild, DevEx/CLI Guild | CLI-PARITY-41-001, CLI-PACKS-42-001 | Integrate CLI golden output/parity diff automation into release gating; export parity report artifact consumed by Console Downloads workspace. | `check_cli_parity.py` wired to compare parity matrix and CLI outputs; artifact uploaded; release fails on regressions. |
## Containerized Distribution (Epic 13)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-CONTAINERS-44-001 | TODO | DevOps Guild | DOCKER-44-001..003 | Automate multi-arch image builds with buildx, SBOM generation, cosign signing, and signature verification in CI. | Pipeline builds amd64/arm64; SBOMs pushed as referrers; cosign verify job passes. |
| DEVOPS-CONTAINERS-45-001 | TODO | DevOps Guild | HELM-45-001 | Add Compose and Helm smoke tests (fresh VM + kind cluster) to CI; publish test artifacts and logs. | CI jobs running; failures block releases; documentation updated. |
| DEVOPS-CONTAINERS-46-001 | TODO | DevOps Guild | DEPLOY-PACKS-43-001 | Build air-gap bundle generator (`tools/make-airgap-bundle.sh`), produce signed bundle, and verify in CI using private registry. | Bundle artifact produced with signatures/checksums; verification job passes; instructions documented. |
### Container Images (Epic 13)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DOCKER-44-001 | TODO | DevOps Guild, Service Owners | DEVOPS-CLI-41-001 | Author multi-stage Dockerfiles for all core services (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AI) with non-root users, read-only file systems, and health scripts. | Dockerfiles committed; images build successfully; container security scans clean; health endpoints reachable. |
| DOCKER-44-002 | TODO | DevOps Guild | DOCKER-44-001 | Generate SBOMs and cosign attestations for each image and integrate verification into CI. | SBOMs attached as OCI artifacts; cosign signatures published; CI verifies signatures prior to release. |
| DOCKER-44-003 | TODO | DevOps Guild | DOCKER-44-001 | Implement `/health/liveness`, `/health/readiness`, `/version`, `/metrics`, and ensure capability endpoint returns `merge=false` for Concelier/Excititor. | Endpoints available across services; automated tests confirm responses; documentation updated with imposed rule reminder. |
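A hedged sketch of the endpoint smoke check DOCKER-44-003 calls for, assuming a service listening locally on port 8080; the `/capabilities` path and response shape are assumptions, since only `merge=false` is specified above:

```python
import json
import sys
import urllib.request

BASE = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080"

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(f"{BASE}{path}", timeout=5) as response:
        if response.status != 200:
            raise SystemExit(f"{path} returned HTTP {response.status}")
        return response.read()

# Required endpoints from DOCKER-44-003.
for path in ("/health/liveness", "/health/readiness", "/version", "/metrics"):
    fetch(path)

# Concelier/Excititor must advertise merge=false (aggregation only, no merging).
capabilities = json.loads(fetch("/capabilities"))  # endpoint path is an assumption
if capabilities.get("merge") is not False:
    raise SystemExit(f"expected merge=false, got {capabilities.get('merge')!r}")
print("container endpoint smoke checks passed")
```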
## Authority-Backed Scopes & Tenancy (Epic 14)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-TEN-47-001 | TODO | DevOps Guild | AUTH-TEN-47-001 | Add JWKS cache monitoring, signature verification regression tests, and token expiration chaos tests to CI. | CI verifies tokens using cached keys; chaos test for expired keys passes; documentation updated. |
| DEVOPS-TEN-48-001 | TODO | DevOps Guild | WEB-TEN-48-001 | Build integration tests to assert RLS enforcement, tenant-prefixed object storage, and audit event emission; set up lint to prevent raw SQL bypass. | Tests fail on cross-tenant access; lint enforced; dashboards capture audit events. |
| DEVOPS-TEN-49-001 | TODO | DevOps Guild | AUTH-TEN-49-001 | Deploy audit pipeline, scope usage metrics, JWKS outage chaos tests, and tenant load/perf benchmarks. | Audit pipeline live; metrics dashboards updated; chaos tests documented; perf benchmarks recorded. |
## SDKs & OpenAPI (Epic 17)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-OAS-61-001 | TODO | DevOps Guild, API Contracts Guild | OAS-61-002 | Add CI stages for OpenAPI linting, validation, and compatibility diff; enforce gating on PRs. | Pipeline active; merge blocked on failures; documentation updated. |
| DEVOPS-OAS-61-002 | TODO | DevOps Guild, Contract Testing Guild | CONTR-62-002 | Integrate mock server + contract test suite into PR and nightly workflows; publish artifacts. | Tests run in CI; artifacts stored; failures alert. |
| DEVOPS-SDK-63-001 | TODO | DevOps Guild, SDK Release Guild | SDKREL-63-001 | Provision registry credentials, signing keys, and secure storage for SDK publishing pipelines. | Keys stored/rotated; publish pipeline authenticated; audit logs recorded. |
| DEVOPS-DEVPORT-63-001 | TODO | DevOps Guild, Developer Portal Guild | DEVPORT-62-001 | Automate developer portal build pipeline with caching, link & accessibility checks, performance budgets. | Pipeline enforced; reports archived; failures gate merges. |
| DEVOPS-DEVPORT-64-001 | TODO | DevOps Guild, DevPortal Offline Guild | DVOFF-64-001 | Schedule `devportal --offline` nightly builds with checksum validation and artifact retention policies. | Nightly job running; checksums published; retention policy documented. |
## Attestor Console (Epic 19)
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-ATTEST-73-001 | TODO | DevOps Guild, Attestor Service Guild | ATTESTOR-72-002 | Provision CI pipelines for attestor service (lint/test/security scan, seed data) and manage secrets for KMS drivers. | CI pipeline running; secrets stored securely; docs updated. |
| DEVOPS-ATTEST-73-002 | TODO | DevOps Guild, KMS Guild | KMS-72-001 | Establish secure storage for signing keys (vault integration, rotation schedule) and audit logging. | Key storage configured; rotation documented; audit logs verified. |
| DEVOPS-ATTEST-74-001 | TODO | DevOps Guild, Transparency Guild | TRANSP-74-001 | Deploy transparency log witness infrastructure and monitoring. | Witness service deployed; dashboards/alerts live. |
| DEVOPS-ATTEST-74-002 | TODO | DevOps Guild, Export Attestation Guild | EXPORT-ATTEST-74-001 | Integrate attestation bundle builds into release/offline pipelines with checksum verification. | Bundle job in CI; checksum verification passes; docs updated. |
| DEVOPS-ATTEST-75-001 | TODO | DevOps Guild, Observability Guild | ATTEST-VERIFY-74-001 | Add dashboards/alerts for signing latency, verification failures, key rotation events. | Dashboards live; alerts configured. |


@@ -1,53 +1,53 @@
#!/usr/bin/env python3
"""Ensure CLI parity matrix contains no outstanding blockers before release."""
from __future__ import annotations

import pathlib
import sys

REPO_ROOT = pathlib.Path(__file__).resolve().parents[2]
PARITY_DOC = REPO_ROOT / "docs/cli-vs-ui-parity.md"

BLOCKERS = {
    "🟥": "blocking gap",
    "❌": "missing feature",  # assumed glyph; an empty key would match every line
    "🚫": "unsupported",
}

WARNINGS = {
    "🟡": "partial support",
    "⚠️": "warning",
}


def main() -> int:
    if not PARITY_DOC.exists():
        print(f"❌ Parity matrix not found at {PARITY_DOC}", file=sys.stderr)
        return 1

    text = PARITY_DOC.read_text(encoding="utf-8")
    blockers: list[str] = []
    warnings: list[str] = []
    for line in text.splitlines():
        for symbol, label in BLOCKERS.items():
            if symbol in line:
                blockers.append(f"{label}: {line.strip()}")
        for symbol, label in WARNINGS.items():
            if symbol in line:
                warnings.append(f"{label}: {line.strip()}")

    if blockers:
        print("❌ CLI parity gate failed — blocking items present:", file=sys.stderr)
        for item in blockers:
            print(f"  - {item}", file=sys.stderr)
        return 1

    if warnings:
        print("⚠️ CLI parity gate warnings detected:", file=sys.stderr)
        for item in warnings:
            print(f"  - {item}", file=sys.stderr)
        print("Treat warnings as failures until parity matrix is fully green.", file=sys.stderr)
        return 1

    print("✅ CLI parity matrix has no blocking or warning entries.")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())


@@ -1,30 +1,30 @@
# Package,Version,SHA256,SourceBase(optional)
# DotNetPublicFlat=https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.Authentication.JwtBearer,10.0.0-rc.2.25502.107,3223f447bde9a3620477305a89520e8becafe23b481a0b423552af572439f8c2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.Mvc.Testing,10.0.0-rc.2.25502.107,b6b53c62e0abefdca30e6ca08ab8357e395177dd9f368ab3ad4bbbd07e517229,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.OpenApi,10.0.0-rc.2.25502.107,f64de1fe870306053346a31263e53e29f2fdfe0eae432a3156f8d7d705c81d85,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Data.Sqlite,9.0.0-rc.1.24451.1,770b637317e1e924f1b13587b31af0787c8c668b1d9f53f2fccae8ee8704e167,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Caching.Memory,10.0.0-rc.2.25502.107,6ec6d156ed06b07cbee9fa1c0803b8d54a5f904a0bf0183172f87b63c4044426,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration,10.0.0-rc.2.25502.107,0716f72cdc99b03946c98c418c39d42208fc65f20301bd1f26a6c174646870f6,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Abstractions,10.0.0-rc.2.25502.107,db6e2cd37c40b5ac5ca7a4f40f5edafda2b6a8690f95a8c64b54c777a1d757c0,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Binder,10.0.0-rc.2.25502.107,80f04da6beef001d3c357584485c2ddc6fdbf3776cfd10f0d7b40dfe8a79ee43,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.CommandLine,10.0.0-rc.2.25502.107,91974a95ae35bcfcd5e977427f3d0e6d3416e78678a159f5ec9e55f33a2e19af,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.EnvironmentVariables,10.0.0-rc.2.25502.107,74d65a20e2764d5f42863f5f203b216533fc51b22fb02a8491036feb98ae5fef,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.FileExtensions,10.0.0-rc.2.25502.107,5f97b56ea2ba3a1b252022504060351ce457f78ac9055d5fdd1311678721c1a1,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Json,10.0.0-rc.2.25502.107,0ba362c479213eb3425f8e14d8a8495250dbaf2d5dad7c0a4ca8d3239b03c392,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection,10.0.0-rc.2.25502.107,2e1b51b4fa196f0819adf69a15ad8c3432b64c3b196f2ed3d14b65136a6a8709,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection.Abstractions,10.0.0-rc.2.25502.107,d6787ccf69e09428b3424974896c09fdabb8040bae06ed318212871817933352,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.Abstractions,10.0.0-rc.2.25502.107,b4bc47b4b4ded4ab2f134d318179537cbe16aed511bb3672553ea197929dc7d8,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.HealthChecks,10.0.0-rc.2.25502.107,855fd4da26b955b6b1d036390b1af10564986067b5cc6356cffa081c83eec158,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.HealthChecks.Abstractions,10.0.0-rc.2.25502.107,59f4724daed68a067a661e208f0a934f253b91ec5d52310d008e185bc2c9294c,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting,10.0.0-rc.2.25502.107,ea9b1fa8e50acae720294671e6c36d4c58e20cfc9720335ab4f5ad4eba92cf62,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting.Abstractions,10.0.0-rc.2.25502.107,98fa23ac82e19be221a598fc6f4b469e8b00c4ca2b7a42ad0bfea8b63bbaa9a2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Http,10.0.0-rc.2.25502.107,c63c8bf4ca637137a561ca487b674859c2408918c4838a871bb26eb0c809a665,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Http.Polly,10.0.0-rc.2.25502.107,0b436196bcedd484796795f6a795d7a191294f1190f7a477f1a4937ef7f78110,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Logging.Abstractions,10.0.0-rc.2.25502.107,92b9a5ed62fe945ee88983af43c347429ec15691c9acb207872c548241cef961,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Logging.Console,10.0.0-rc.2.25502.107,fa1e10b5d6261675d9d2e97b9584ff9aaea2a2276eac584dfa77a1e35dcc58f5,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Options,10.0.0-rc.2.25502.107,d208acec60bec3350989694fd443e2d2f0ab583ad5f2c53a2879ade16908e5b4,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Options.ConfigurationExtensions,10.0.0-rc.2.25502.107,c2863bb28c36fd67f308dd4af486897b512d62ecff2d96613ef954f5bef443e2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.TimeProvider.Testing,9.10.0,919a47156fc13f756202702cacc6e853123c84f1b696970445d89f16dfa45829,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.IdentityModel.Tokens,8.14.0,00b78c7b7023132e1d6b31d305e47524732dce6faca92dd16eb8d05a835bba7a,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.SourceLink.GitLab,8.0.0,a7efb9c177888f952ea8c88bc5714fc83c64af32b70fb080a1323b8d32233973,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
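The manifest above pins each mirrored package to a digest. A minimal verification sketch, assuming the SHA256 column is the digest of the corresponding `.nupkg` and that the mirror stores packages flat under `local-nuget/` (both layout details are assumptions):

```python
import csv
import hashlib
from pathlib import Path

MANIFEST = Path("local-nuget/manifest.csv")  # illustrative path
MIRROR = Path("local-nuget")                 # illustrative mirror root

def verify_mirror() -> int:
    """Return the number of packages whose on-disk digest disagrees with the manifest."""
    failures = 0
    with MANIFEST.open(encoding="utf-8") as handle:
        rows = csv.reader(line for line in handle if not line.startswith("#"))
        for row in rows:
            if not row:
                continue
            package, version, expected = row[0], row[1], row[2].lower()
            nupkg = MIRROR / f"{package.lower()}.{version}.nupkg"  # flat layout assumed
            actual = hashlib.sha256(nupkg.read_bytes()).hexdigest()
            if actual != expected:
                failures += 1
                print(f"hash mismatch for {package} {version}")
    return failures

if __name__ == "__main__":
    raise SystemExit(1 if verify_mirror() else 0)
```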


@@ -15,7 +15,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj",
"project": "src/Authority/StellaOps.Authority/StellaOps.Authority/StellaOps.Authority.csproj",
"entrypoint": "StellaOps.Authority.dll"
},
{
@@ -24,7 +24,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Signer/StellaOps.Signer.WebService/StellaOps.Signer.WebService.csproj",
"project": "src/Signer/StellaOps.Signer/StellaOps.Signer.WebService/StellaOps.Signer.WebService.csproj",
"entrypoint": "StellaOps.Signer.WebService.dll"
},
{
@@ -33,7 +33,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Attestor/StellaOps.Attestor.WebService/StellaOps.Attestor.WebService.csproj",
"project": "src/Attestor/StellaOps.Attestor/StellaOps.Attestor.WebService/StellaOps.Attestor.WebService.csproj",
"entrypoint": "StellaOps.Attestor.WebService.dll"
},
{
@@ -42,7 +42,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Scanner.WebService/StellaOps.Scanner.WebService.csproj",
"project": "src/Scanner/StellaOps.Scanner.WebService/StellaOps.Scanner.WebService.csproj",
"entrypoint": "StellaOps.Scanner.WebService.dll"
},
{
@@ -51,7 +51,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Scanner.Worker/StellaOps.Scanner.Worker.csproj",
"project": "src/Scanner/StellaOps.Scanner.Worker/StellaOps.Scanner.Worker.csproj",
"entrypoint": "StellaOps.Scanner.Worker.dll"
},
{
@@ -60,7 +60,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj",
"project": "src/Concelier/StellaOps.Concelier.WebService/StellaOps.Concelier.WebService.csproj",
"entrypoint": "StellaOps.Concelier.WebService.dll"
},
{
@@ -69,7 +69,7 @@
"kind": "dotnet-service",
"context": ".",
"dockerfile": "ops/devops/release/docker/Dockerfile.dotnet-service",
"project": "src/StellaOps.Excititor.WebService/StellaOps.Excititor.WebService.csproj",
"project": "src/Excititor/StellaOps.Excititor.WebService/StellaOps.Excititor.WebService.csproj",
"entrypoint": "StellaOps.Excititor.WebService.dll"
},
{
@@ -81,7 +81,7 @@
}
],
"cli": {
"project": "src/StellaOps.Cli/StellaOps.Cli.csproj",
"project": "src/Cli/StellaOps.Cli/StellaOps.Cli.csproj",
"runtimes": [
"linux-x64",
"linux-arm64",
@@ -104,6 +104,6 @@
]
},
"buildxPlugin": {
"project": "src/StellaOps.Scanner.Sbomer.BuildXPlugin/StellaOps.Scanner.Sbomer.BuildXPlugin.csproj"
"project": "src/Scanner/StellaOps.Scanner.Sbomer.BuildXPlugin/StellaOps.Scanner.Sbomer.BuildXPlugin.csproj"
}
}


@@ -11,9 +11,9 @@ FROM ${NODE_IMAGE} AS build
WORKDIR /workspace
ENV CI=1 \
SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
- COPY src/StellaOps.Web/package.json src/StellaOps.Web/package-lock.json ./
+ COPY src/Web/StellaOps.Web/package.json src/Web/StellaOps.Web/package-lock.json ./
RUN npm ci --prefer-offline --no-audit --no-fund
- COPY src/StellaOps.Web/ ./
+ COPY src/Web/StellaOps.Web/ ./
RUN npm run build -- --configuration=production
FROM ${NGINX_IMAGE} AS runtime


@@ -1,52 +1,52 @@
# syntax=docker/dockerfile:1.7-labs
ARG SDK_IMAGE=mcr.microsoft.com/dotnet/nightly/sdk:10.0
ARG RUNTIME_IMAGE=gcr.io/distroless/dotnet/aspnet:latest
ARG PROJECT
ARG ENTRYPOINT_DLL
ARG VERSION=0.0.0
ARG CHANNEL=dev
ARG GIT_SHA=0000000
ARG SOURCE_DATE_EPOCH=0
FROM ${SDK_IMAGE} AS build
ARG PROJECT
ARG GIT_SHA
ARG SOURCE_DATE_EPOCH
WORKDIR /src
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1 \
DOTNET_SKIP_FIRST_TIME_EXPERIENCE=1 \
NUGET_XMLDOC_MODE=skip \
SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH}
COPY . .
RUN --mount=type=cache,target=/root/.nuget/packages \
dotnet restore "${PROJECT}"
RUN --mount=type=cache,target=/root/.nuget/packages \
dotnet publish "${PROJECT}" \
-c Release \
-o /app/publish \
/p:UseAppHost=false \
/p:ContinuousIntegrationBuild=true \
/p:SourceRevisionId=${GIT_SHA} \
/p:Deterministic=true \
/p:TreatWarningsAsErrors=true
FROM ${RUNTIME_IMAGE} AS runtime
WORKDIR /app
ARG ENTRYPOINT_DLL
ARG VERSION
ARG CHANNEL
ARG GIT_SHA
ENV DOTNET_EnableDiagnostics=0 \
ASPNETCORE_URLS=http://0.0.0.0:8080
COPY --from=build /app/publish/ ./
RUN set -eu; \
printf '#!/usr/bin/env sh\nset -e\nexec dotnet %s "$@"\n' "${ENTRYPOINT_DLL}" > /entrypoint.sh; \
chmod +x /entrypoint.sh
EXPOSE 8080
LABEL org.opencontainers.image.version="${VERSION}" \
org.opencontainers.image.revision="${GIT_SHA}" \
org.opencontainers.image.source="https://git.stella-ops.org/stella-ops/feedser" \
org.stellaops.release.channel="${CHANNEL}"
ENTRYPOINT ["/entrypoint.sh"]


@@ -1,22 +1,22 @@
server {
listen 8080;
listen [::]:8080;
server_name _;
root /usr/share/nginx/html;
index index.html;
location / {
try_files $uri $uri/ /index.html;
}
location ~* \.(?:js|css|svg|png|jpg|jpeg|gif|ico|woff2?)$ {
add_header Cache-Control "public, max-age=2592000";
}
location = /healthz {
access_log off;
add_header Content-Type text/plain;
return 200 'ok';
}
}


@@ -1,232 +1,232 @@
from __future__ import annotations
import json
import tempfile
import unittest
from collections import OrderedDict
from pathlib import Path
import sys
sys.path.append(str(Path(__file__).resolve().parent))
from build_release import write_manifest # type: ignore import-not-found
from verify_release import VerificationError, compute_sha256, verify_release
class VerifyReleaseTests(unittest.TestCase):
def setUp(self) -> None:
self._temp = tempfile.TemporaryDirectory()
self.base_path = Path(self._temp.name)
self.out_dir = self.base_path / "out"
self.release_dir = self.out_dir / "release"
self.release_dir.mkdir(parents=True, exist_ok=True)
def tearDown(self) -> None:
self._temp.cleanup()
def _relative_to_out(self, path: Path) -> str:
return path.relative_to(self.out_dir).as_posix()
def _write_json(self, path: Path, payload: dict[str, object]) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w", encoding="utf-8") as handle:
json.dump(payload, handle, indent=2)
handle.write("\n")
def _create_sample_release(self) -> None:
sbom_path = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
sbom_path.parent.mkdir(parents=True, exist_ok=True)
sbom_path.write_text('{"bomFormat":"CycloneDX","specVersion":"1.5"}\n', encoding="utf-8")
sbom_sha = compute_sha256(sbom_path)
provenance_path = self.release_dir / "artifacts/provenance/sample.provenance.json"
self._write_json(
provenance_path,
{
"buildDefinition": {"buildType": "https://example/build", "externalParameters": {}},
"runDetails": {"builder": {"id": "https://example/ci"}},
},
)
provenance_sha = compute_sha256(provenance_path)
signature_path = self.release_dir / "artifacts/signatures/sample.signature"
signature_path.parent.mkdir(parents=True, exist_ok=True)
signature_path.write_text("signature-data\n", encoding="utf-8")
signature_sha = compute_sha256(signature_path)
metadata_path = self.release_dir / "artifacts/metadata/sample.metadata.json"
self._write_json(metadata_path, {"digest": "sha256:1234"})
metadata_sha = compute_sha256(metadata_path)
chart_path = self.release_dir / "helm/stellaops-1.0.0.tgz"
chart_path.parent.mkdir(parents=True, exist_ok=True)
chart_path.write_bytes(b"helm-chart-data")
chart_sha = compute_sha256(chart_path)
compose_path = self.release_dir.parent / "deploy/compose/docker-compose.dev.yaml"
compose_path.parent.mkdir(parents=True, exist_ok=True)
compose_path.write_text("services: {}\n", encoding="utf-8")
compose_sha = compute_sha256(compose_path)
debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
debug_file.parent.mkdir(parents=True, exist_ok=True)
debug_file.write_bytes(b"\x7fELFDEBUGDATA")
debug_sha = compute_sha256(debug_file)
debug_manifest_path = self.release_dir / "debug/debug-manifest.json"
debug_manifest = OrderedDict(
(
("generatedAt", "2025-10-26T00:00:00Z"),
("version", "1.0.0"),
("channel", "edge"),
(
"artifacts",
[
OrderedDict(
(
("buildId", "abcdef1234"),
("platform", "linux/amd64"),
("debugPath", "debug/.build-id/ab/cdef.debug"),
("sha256", debug_sha),
("size", debug_file.stat().st_size),
("components", ["sample"]),
("images", ["registry.example/sample@sha256:feedface"]),
("sources", ["app/sample.dll"]),
)
)
],
),
)
)
self._write_json(debug_manifest_path, debug_manifest)
debug_manifest_sha = compute_sha256(debug_manifest_path)
(debug_manifest_path.with_suffix(debug_manifest_path.suffix + ".sha256")).write_text(
f"{debug_manifest_sha} {debug_manifest_path.name}\n", encoding="utf-8"
)
manifest = OrderedDict(
(
(
"release",
OrderedDict(
(
("version", "1.0.0"),
("channel", "edge"),
("date", "2025-10-26T00:00:00Z"),
("calendar", "2025.10"),
)
),
),
(
"components",
[
OrderedDict(
(
("name", "sample"),
("image", "registry.example/sample@sha256:feedface"),
("tags", ["registry.example/sample:1.0.0"]),
(
"sbom",
OrderedDict(
(
("path", self._relative_to_out(sbom_path)),
("sha256", sbom_sha),
)
),
),
(
"provenance",
OrderedDict(
(
("path", self._relative_to_out(provenance_path)),
("sha256", provenance_sha),
)
),
),
(
"signature",
OrderedDict(
(
("path", self._relative_to_out(signature_path)),
("sha256", signature_sha),
("ref", "sigstore://example"),
("tlogUploaded", True),
)
),
),
(
"metadata",
OrderedDict(
(
("path", self._relative_to_out(metadata_path)),
("sha256", metadata_sha),
)
),
),
)
)
],
),
(
"charts",
[
OrderedDict(
(
("name", "stellaops"),
("version", "1.0.0"),
("path", self._relative_to_out(chart_path)),
("sha256", chart_sha),
)
)
],
),
(
"compose",
[
OrderedDict(
(
("name", "docker-compose.dev.yaml"),
("path", compose_path.relative_to(self.out_dir).as_posix()),
("sha256", compose_sha),
)
)
],
),
(
"debugStore",
OrderedDict(
(
("manifest", "debug/debug-manifest.json"),
("sha256", debug_manifest_sha),
("entries", 1),
("platforms", ["linux/amd64"]),
("directory", "debug/.build-id"),
)
),
),
)
)
write_manifest(manifest, self.release_dir)
def test_verify_release_success(self) -> None:
self._create_sample_release()
# Should not raise
verify_release(self.release_dir)
def test_verify_release_detects_sha_mismatch(self) -> None:
self._create_sample_release()
tampered = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
tampered.write_text("tampered\n", encoding="utf-8")
with self.assertRaises(VerificationError):
verify_release(self.release_dir)
def test_verify_release_detects_missing_debug_file(self) -> None:
self._create_sample_release()
debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
debug_file.unlink()
with self.assertRaises(VerificationError):
verify_release(self.release_dir)
if __name__ == "__main__":
unittest.main()
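The suite above can be exercised without a release pipeline in place — a minimal sketch, assuming the module is saved as `test_verify_release.py` next to `build_release.py` and `verify_release.py` (the test module's `sys.path.append` makes those siblings importable):

```bash
# Run the release-verification tests verbosely from the scripts directory.
cd ops/devops/release   # hypothetical location; adjust to where the scripts live
python -m unittest test_verify_release.py -v
```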

View File

@@ -1,334 +1,334 @@
#!/usr/bin/env python3
"""Verify release artefacts (SBOMs, provenance, signatures, manifest hashes)."""
from __future__ import annotations
import argparse
import hashlib
import json
import pathlib
import sys
from collections import OrderedDict
from typing import Any, Mapping, Optional
from build_release import dump_yaml  # type: ignore[import-not-found]
class VerificationError(Exception):
"""Raised when release artefacts fail verification."""
def compute_sha256(path: pathlib.Path) -> str:
sha = hashlib.sha256()
with path.open("rb") as handle:
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
sha.update(chunk)
return sha.hexdigest()
def parse_sha_file(path: pathlib.Path) -> Optional[str]:
if not path.exists():
return None
content = path.read_text(encoding="utf-8").strip()
if not content:
return None
return content.split()[0]
def resolve_path(path_str: str, release_dir: pathlib.Path) -> pathlib.Path:
candidate = pathlib.Path(path_str.replace("\\", "/"))
if candidate.is_absolute():
return candidate
for base in (release_dir, release_dir.parent, release_dir.parent.parent):
resolved = (base / candidate).resolve()
if resolved.exists():
return resolved
    # Fall back to the release_dir-joined path even when missing so the caller surfaces the error.
return (release_dir / candidate).resolve()
def load_manifest(release_dir: pathlib.Path) -> OrderedDict[str, Any]:
manifest_path = release_dir / "release.json"
if not manifest_path.exists():
raise VerificationError(f"Release manifest JSON missing at {manifest_path}")
try:
with manifest_path.open("r", encoding="utf-8") as handle:
return json.load(handle, object_pairs_hook=OrderedDict)
except json.JSONDecodeError as exc:
raise VerificationError(f"Failed to parse {manifest_path}: {exc}") from exc
def verify_manifest_hashes(
manifest: Mapping[str, Any],
release_dir: pathlib.Path,
errors: list[str],
) -> None:
yaml_path = release_dir / "release.yaml"
if not yaml_path.exists():
errors.append(f"Missing release.yaml at {yaml_path}")
return
recorded_yaml_sha = parse_sha_file(yaml_path.with_name(yaml_path.name + ".sha256"))
actual_yaml_sha = compute_sha256(yaml_path)
if recorded_yaml_sha and recorded_yaml_sha != actual_yaml_sha:
errors.append(
f"release.yaml.sha256 recorded {recorded_yaml_sha} but file hashes to {actual_yaml_sha}"
)
json_path = release_dir / "release.json"
recorded_json_sha = parse_sha_file(json_path.with_name(json_path.name + ".sha256"))
actual_json_sha = compute_sha256(json_path)
if recorded_json_sha and recorded_json_sha != actual_json_sha:
errors.append(
f"release.json.sha256 recorded {recorded_json_sha} but file hashes to {actual_json_sha}"
)
checksums = manifest.get("checksums")
if isinstance(checksums, Mapping):
recorded_digest = checksums.get("sha256")
base_manifest = OrderedDict(manifest)
base_manifest.pop("checksums", None)
yaml_without_checksums = dump_yaml(base_manifest)
computed_digest = hashlib.sha256(yaml_without_checksums.encode("utf-8")).hexdigest()
if recorded_digest != computed_digest:
errors.append(
"Manifest checksum mismatch: "
f"recorded {recorded_digest}, computed {computed_digest}"
)
def verify_artifact_entry(
entry: Mapping[str, Any],
release_dir: pathlib.Path,
label: str,
component_name: str,
errors: list[str],
) -> None:
path_str = entry.get("path")
if not path_str:
errors.append(f"{component_name}: {label} missing 'path' field.")
return
resolved = resolve_path(str(path_str), release_dir)
if not resolved.exists():
errors.append(f"{component_name}: {label} path does not exist → {resolved}")
return
recorded_sha = entry.get("sha256")
if recorded_sha:
actual_sha = compute_sha256(resolved)
if actual_sha != recorded_sha:
errors.append(
f"{component_name}: {label} SHA mismatch for {resolved} "
f"(recorded {recorded_sha}, computed {actual_sha})"
)
def verify_components(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
for component in manifest.get("components", []):
if not isinstance(component, Mapping):
errors.append("Component entry is not a mapping.")
continue
name = str(component.get("name", "<unknown>"))
for key, label in (
("sbom", "SBOM"),
("provenance", "provenance"),
("signature", "signature"),
("metadata", "metadata"),
):
entry = component.get(key)
if not entry:
continue
if not isinstance(entry, Mapping):
errors.append(f"{name}: {label} entry must be a mapping.")
continue
verify_artifact_entry(entry, release_dir, label, name, errors)
def verify_collections(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
for collection, label in (
("charts", "chart"),
("compose", "compose file"),
):
for item in manifest.get(collection, []):
if not isinstance(item, Mapping):
errors.append(f"{collection} entry is not a mapping.")
continue
path_value = item.get("path")
if not path_value:
errors.append(f"{collection} entry missing path.")
continue
resolved = resolve_path(str(path_value), release_dir)
if not resolved.exists():
errors.append(f"{label} missing file → {resolved}")
continue
recorded_sha = item.get("sha256")
if recorded_sha:
actual_sha = compute_sha256(resolved)
if actual_sha != recorded_sha:
errors.append(
f"{label} SHA mismatch for {resolved} "
f"(recorded {recorded_sha}, computed {actual_sha})"
)
def verify_debug_store(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
debug = manifest.get("debugStore")
if not isinstance(debug, Mapping):
return
manifest_path_str = debug.get("manifest")
manifest_data: Optional[Mapping[str, Any]] = None
if manifest_path_str:
manifest_path = resolve_path(str(manifest_path_str), release_dir)
if not manifest_path.exists():
errors.append(f"Debug manifest missing → {manifest_path}")
else:
recorded_sha = debug.get("sha256")
if recorded_sha:
actual_sha = compute_sha256(manifest_path)
if actual_sha != recorded_sha:
errors.append(
f"Debug manifest SHA mismatch (recorded {recorded_sha}, computed {actual_sha})"
)
sha_sidecar = manifest_path.with_suffix(manifest_path.suffix + ".sha256")
sidecar_sha = parse_sha_file(sha_sidecar)
if sidecar_sha and recorded_sha and sidecar_sha != recorded_sha:
errors.append(
f"Debug manifest sidecar digest {sidecar_sha} disagrees with recorded {recorded_sha}"
)
try:
with manifest_path.open("r", encoding="utf-8") as handle:
manifest_data = json.load(handle)
except json.JSONDecodeError as exc:
errors.append(f"Debug manifest JSON invalid: {exc}")
directory = debug.get("directory")
if directory:
debug_dir = resolve_path(str(directory), release_dir)
if not debug_dir.exists():
errors.append(f"Debug directory missing → {debug_dir}")
if manifest_data:
artifacts = manifest_data.get("artifacts")
if not isinstance(artifacts, list) or not artifacts:
errors.append("Debug manifest contains no artefacts.")
return
declared_entries = debug.get("entries")
if isinstance(declared_entries, int) and declared_entries != len(artifacts):
errors.append(
f"Debug manifest reports {declared_entries} entries but contains {len(artifacts)} artefacts."
)
for artefact in artifacts:
if not isinstance(artefact, Mapping):
errors.append("Debug manifest artefact entry is not a mapping.")
continue
debug_path = artefact.get("debugPath")
artefact_sha = artefact.get("sha256")
if not debug_path or not artefact_sha:
errors.append("Debug manifest artefact missing debugPath or sha256.")
continue
resolved_debug = resolve_path(str(debug_path), release_dir)
if not resolved_debug.exists():
errors.append(f"Debug artefact missing → {resolved_debug}")
continue
actual_sha = compute_sha256(resolved_debug)
if actual_sha != artefact_sha:
errors.append(
f"Debug artefact SHA mismatch for {resolved_debug} "
f"(recorded {artefact_sha}, computed {actual_sha})"
)
def verify_signature(signature: Mapping[str, Any], release_dir: pathlib.Path, label: str, component_name: str, errors: list[str]) -> None:
sig_path_value = signature.get("path")
if not sig_path_value:
errors.append(f"{component_name}: {label} signature missing path.")
return
sig_path = resolve_path(str(sig_path_value), release_dir)
if not sig_path.exists():
errors.append(f"{component_name}: {label} signature missing → {sig_path}")
return
recorded_sha = signature.get("sha256")
if recorded_sha:
actual_sha = compute_sha256(sig_path)
if actual_sha != recorded_sha:
errors.append(
f"{component_name}: {label} signature SHA mismatch for {sig_path} "
f"(recorded {recorded_sha}, computed {actual_sha})"
)
def verify_cli_entries(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
cli_entries = manifest.get("cli")
if not cli_entries:
return
if not isinstance(cli_entries, list):
errors.append("CLI manifest section must be a list.")
return
for entry in cli_entries:
if not isinstance(entry, Mapping):
errors.append("CLI entry must be a mapping.")
continue
runtime = entry.get("runtime", "<unknown>")
component_name = f"cli[{runtime}]"
archive = entry.get("archive")
if not isinstance(archive, Mapping):
errors.append(f"{component_name}: archive metadata missing or invalid.")
else:
verify_artifact_entry(archive, release_dir, "archive", component_name, errors)
signature = archive.get("signature")
if isinstance(signature, Mapping):
verify_signature(signature, release_dir, "archive", component_name, errors)
elif signature is not None:
errors.append(f"{component_name}: archive signature must be an object.")
sbom = entry.get("sbom")
if sbom:
if not isinstance(sbom, Mapping):
errors.append(f"{component_name}: sbom entry must be a mapping.")
else:
verify_artifact_entry(sbom, release_dir, "sbom", component_name, errors)
signature = sbom.get("signature")
if isinstance(signature, Mapping):
verify_signature(signature, release_dir, "sbom", component_name, errors)
elif signature is not None:
errors.append(f"{component_name}: sbom signature must be an object.")
def verify_release(release_dir: pathlib.Path) -> None:
if not release_dir.exists():
raise VerificationError(f"Release directory not found: {release_dir}")
manifest = load_manifest(release_dir)
errors: list[str] = []
verify_manifest_hashes(manifest, release_dir, errors)
verify_components(manifest, release_dir, errors)
verify_cli_entries(manifest, release_dir, errors)
verify_collections(manifest, release_dir, errors)
verify_debug_store(manifest, release_dir, errors)
if errors:
bullet_list = "\n - ".join(errors)
raise VerificationError(f"Release verification failed:\n - {bullet_list}")
def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--release-dir",
type=pathlib.Path,
default=pathlib.Path("out/release"),
help="Path to the release artefact directory (default: %(default)s)",
)
return parser.parse_args(argv)
def main(argv: list[str] | None = None) -> int:
args = parse_args(argv)
try:
verify_release(args.release_dir.resolve())
except VerificationError as exc:
print(str(exc), file=sys.stderr)
return 1
print(f"✅ Release artefacts verified OK in {args.release_dir}")
return 0
if __name__ == "__main__":
raise SystemExit(main())
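For operators, a hedged invocation sketch (the directory layout follows the `--release-dir` default; the script path itself is illustrative):

```bash
# Verify a staged release tree; prints a bulleted error list and exits 1
# on any missing artefact or SHA-256 mismatch.
python ops/devops/release/verify_release.py --release-dir out/release
```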

View File

@@ -1,77 +1,77 @@
/**
* Aggregation helper that surfaces advisory_raw duplicate candidates prior to enabling the
* idempotency unique index. Intended for staging/offline snapshots.
*
* Usage:
* mongo concelier ops/devops/scripts/check-advisory-raw-duplicates.js
*
* Environment variables:
* LIMIT - optional cap on number of duplicate groups to print (default 50).
*/
(function () {
function toInt(value, fallback) {
var parsed = parseInt(value, 10);
return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}
var limit = typeof LIMIT !== "undefined" ? toInt(LIMIT, 50) : 50;
var database = db.getName ? db.getSiblingDB(db.getName()) : db;
if (!database) {
throw new Error("Unable to resolve database handle");
}
print("");
print("== advisory_raw duplicate audit ==");
print("Database: " + database.getName());
print("Limit : " + limit);
print("");
var pipeline = [
{
$group: {
_id: {
vendor: "$source.vendor",
upstreamId: "$upstream.upstream_id",
contentHash: "$upstream.content_hash",
tenant: "$tenant"
},
ids: { $addToSet: "$_id" },
count: { $sum: 1 }
}
},
{ $match: { count: { $gt: 1 } } },
{
$project: {
_id: 0,
vendor: "$_id.vendor",
upstreamId: "$_id.upstreamId",
contentHash: "$_id.contentHash",
tenant: "$_id.tenant",
count: 1,
ids: 1
}
},
{ $sort: { count: -1, vendor: 1, upstreamId: 1 } },
{ $limit: limit }
];
var cursor = database.getCollection("advisory_raw").aggregate(pipeline, { allowDiskUse: true });
var any = false;
while (cursor.hasNext()) {
var doc = cursor.next();
any = true;
print("---");
print("vendor : " + doc.vendor);
print("upstream_id : " + doc.upstreamId);
print("tenant : " + doc.tenant);
print("content_hash: " + doc.contentHash);
print("count : " + doc.count);
print("ids : " + doc.ids.join(", "));
}
if (!any) {
print("No duplicate advisory_raw documents detected.");
}
print("");
})();
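A hedged invocation sketch; the legacy `mongo` shell evaluates `--eval` before loading script files, which is how the optional `LIMIT` override reaches the audit:

```bash
# Default cap of 50 duplicate groups.
mongo concelier ops/devops/scripts/check-advisory-raw-duplicates.js

# Widen the audit to 200 groups before enabling the unique index.
mongo concelier --eval "var LIMIT = 200;" ops/devops/scripts/check-advisory-raw-duplicates.js
```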

View File

@@ -1,71 +1,71 @@
#!/usr/bin/env bash
# Sync preview NuGet packages into the local offline feed.
# Reads package metadata from ops/devops/nuget-preview-packages.csv
# and ensures ./local-nuget holds the expected artefacts (with SHA-256 verification).
# Optional 4th CSV column can override the download base (e.g. dotnet-public flat container).
set -euo pipefail
repo_root="$(git -C "${BASH_SOURCE%/*}/.." rev-parse --show-toplevel 2>/dev/null || pwd)"
manifest="${repo_root}/ops/devops/nuget-preview-packages.csv"
dest="${repo_root}/local-nuget"
nuget_v2_base="${NUGET_V2_BASE:-https://www.nuget.org/api/v2/package}"
if [[ ! -f "$manifest" ]]; then
echo "Manifest not found: $manifest" >&2
exit 1
fi
mkdir -p "$dest"
fetch_package() {
local package="$1"
local version="$2"
local expected_sha="$3"
local source_base="$4"
local target="$dest/${package}.${version}.nupkg"
local url
if [[ -n "$source_base" ]]; then
local package_lower
package_lower="${package,,}"
url="${source_base%/}/${package_lower}/${version}/${package_lower}.${version}.nupkg"
else
url="${nuget_v2_base%/}/${package}/${version}"
fi
echo "[sync-nuget] Fetching ${package} ${version}"
local tmp
tmp="$(mktemp)"
trap 'rm -f "$tmp"' RETURN
curl -fsSL --retry 3 --retry-delay 1 "$url" -o "$tmp"
local actual_sha
actual_sha="$(sha256sum "$tmp" | awk '{print $1}')"
if [[ "$actual_sha" != "$expected_sha" ]]; then
echo "Checksum mismatch for ${package} ${version}" >&2
echo " expected: $expected_sha" >&2
echo " actual: $actual_sha" >&2
exit 1
fi
mv "$tmp" "$target"
trap - RETURN
}
while IFS=',' read -r package version sha source_base; do
[[ -z "$package" || "$package" == \#* ]] && continue
local_path="$dest/${package}.${version}.nupkg"
if [[ -f "$local_path" ]]; then
current_sha="$(sha256sum "$local_path" | awk '{print $1}')"
if [[ "$current_sha" == "$sha" ]]; then
echo "[sync-nuget] OK ${package} ${version}"
continue
fi
echo "[sync-nuget] SHA mismatch for ${package} ${version}, refreshing"
else
echo "[sync-nuget] Missing ${package} ${version}"
fi
fetch_package "$package" "$version" "$sha" "${source_base:-}"
done < "$manifest"
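For reference, a hypothetical manifest excerpt showing the column layout the loop expects (the checked-in `ops/devops/nuget-preview-packages.csv` is authoritative; the fourth column is optional and, when present, must point at a NuGet v3 flat container):

```bash
# Column layout: package,version,sha256,source_base(optional).
# '#'-prefixed lines and rows with an empty package field are skipped.
cat <<'EOF' > /tmp/nuget-preview-packages.example.csv
Example.Preview.Package,10.0.0-preview.1,<sha256-of-nupkg>,
Example.Flat.Package,1.2.3,<sha256-of-nupkg>,https://feed.example/nuget/v3/flat2
EOF
```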

View File

@@ -1,77 +1,77 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CERT_DIR="${SCRIPT_DIR}/../../deploy/telemetry/certs"
mkdir -p "${CERT_DIR}"
CA_KEY="${CERT_DIR}/ca.key"
CA_CRT="${CERT_DIR}/ca.crt"
COL_KEY="${CERT_DIR}/collector.key"
COL_CSR="${CERT_DIR}/collector.csr"
COL_CRT="${CERT_DIR}/collector.crt"
CLIENT_KEY="${CERT_DIR}/client.key"
CLIENT_CSR="${CERT_DIR}/client.csr"
CLIENT_CRT="${CERT_DIR}/client.crt"
echo "[*] Generating OpenTelemetry dev CA and certificates in ${CERT_DIR}"
# Root CA
if [[ ! -f "${CA_KEY}" ]]; then
openssl genrsa -out "${CA_KEY}" 4096 >/dev/null 2>&1
fi
openssl req -x509 -new -key "${CA_KEY}" -days 365 -sha256 \
-out "${CA_CRT}" -subj "/CN=StellaOps Dev Telemetry CA" \
-config <(cat <<'EOF'
[req]
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
EOF
) >/dev/null 2>&1
# Collector certificate (server + client auth)
openssl req -new -nodes -newkey rsa:4096 \
-keyout "${COL_KEY}" \
-out "${COL_CSR}" \
-subj "/CN=stellaops-otel-collector" >/dev/null 2>&1
openssl x509 -req -in "${COL_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \
-CAcreateserial -out "${COL_CRT}" -days 365 -sha256 \
-extensions v3_req -extfile <(cat <<'EOF'
[v3_req]
subjectAltName = @alt_names
extendedKeyUsage = serverAuth, clientAuth
[alt_names]
DNS.1 = stellaops-otel-collector
DNS.2 = localhost
IP.1 = 127.0.0.1
EOF
) >/dev/null 2>&1
# Client certificate
openssl req -new -nodes -newkey rsa:4096 \
-keyout "${CLIENT_KEY}" \
-out "${CLIENT_CSR}" \
-subj "/CN=stellaops-otel-client" >/dev/null 2>&1
openssl x509 -req -in "${CLIENT_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \
-CAcreateserial -out "${CLIENT_CRT}" -days 365 -sha256 \
-extensions v3_req -extfile <(cat <<'EOF'
[v3_req]
extendedKeyUsage = clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = stellaops-otel-client
DNS.2 = localhost
IP.1 = 127.0.0.1
EOF
) >/dev/null 2>&1
rm -f "${COL_CSR}" "${CLIENT_CSR}"
rm -f "${CERT_DIR}/ca.srl"
echo "[✓] Certificates ready:"
ls -1 "${CERT_DIR}"
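An optional sanity check of the generated material — a sketch using stock OpenSSL (the `-ext` selector needs OpenSSL 1.1.1+):

```bash
cd deploy/telemetry/certs
# Both leaves must chain to the dev CA.
openssl verify -CAfile ca.crt collector.crt client.crt
# Inspect SANs and extended key usage on the collector certificate.
openssl x509 -in collector.crt -noout -ext subjectAltName,extendedKeyUsage
```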

View File

@@ -1,136 +1,136 @@
#!/usr/bin/env python3
"""Package telemetry collector assets for offline/air-gapped installs.
Outputs a tarball containing the collector configuration, Compose overlay,
Helm defaults, and operator README. A SHA-256 checksum sidecar is emitted, and
optional Cosign signing can be enabled with --sign.
"""
from __future__ import annotations
import argparse
import hashlib
import os
import subprocess
import sys
import tarfile
from pathlib import Path
from typing import Iterable
REPO_ROOT = Path(__file__).resolve().parents[3]
DEFAULT_OUTPUT = REPO_ROOT / "out" / "telemetry" / "telemetry-offline-bundle.tar.gz"
BUNDLE_CONTENTS: tuple[Path, ...] = (
Path("deploy/telemetry/README.md"),
Path("deploy/telemetry/otel-collector-config.yaml"),
Path("deploy/telemetry/storage/README.md"),
Path("deploy/telemetry/storage/prometheus.yaml"),
Path("deploy/telemetry/storage/tempo.yaml"),
Path("deploy/telemetry/storage/loki.yaml"),
Path("deploy/telemetry/storage/tenants/tempo-overrides.yaml"),
Path("deploy/telemetry/storage/tenants/loki-overrides.yaml"),
Path("deploy/helm/stellaops/files/otel-collector-config.yaml"),
Path("deploy/helm/stellaops/values.yaml"),
Path("deploy/helm/stellaops/templates/otel-collector.yaml"),
Path("deploy/compose/docker-compose.telemetry.yaml"),
Path("deploy/compose/docker-compose.telemetry-storage.yaml"),
Path("docs/ops/telemetry-collector.md"),
Path("docs/ops/telemetry-storage.md"),
)
def compute_sha256(path: Path) -> str:
sha = hashlib.sha256()
with path.open("rb") as handle:
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
sha.update(chunk)
return sha.hexdigest()
def validate_files(paths: Iterable[Path]) -> None:
missing = [str(p) for p in paths if not (REPO_ROOT / p).exists()]
if missing:
raise FileNotFoundError(f"Missing bundle artefacts: {', '.join(missing)}")
def create_bundle(output_path: Path) -> Path:
output_path.parent.mkdir(parents=True, exist_ok=True)
with tarfile.open(output_path, "w:gz") as tar:
for rel_path in BUNDLE_CONTENTS:
abs_path = REPO_ROOT / rel_path
tar.add(abs_path, arcname=str(rel_path))
return output_path
def write_checksum(bundle_path: Path) -> Path:
digest = compute_sha256(bundle_path)
sha_path = bundle_path.with_suffix(bundle_path.suffix + ".sha256")
sha_path.write_text(f"{digest} {bundle_path.name}\n", encoding="utf-8")
return sha_path
def cosign_sign(bundle_path: Path, key_ref: str | None, identity_token: str | None) -> None:
cmd = ["cosign", "sign-blob", "--yes", str(bundle_path)]
if key_ref:
cmd.extend(["--key", key_ref])
env = os.environ.copy()
if identity_token:
env["COSIGN_IDENTITY_TOKEN"] = identity_token
try:
subprocess.run(cmd, check=True, env=env)
except FileNotFoundError as exc:
raise RuntimeError("cosign not found on PATH; install cosign or omit --sign") from exc
except subprocess.CalledProcessError as exc:
raise RuntimeError(f"cosign sign-blob failed: {exc}") from exc
def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--output",
type=Path,
default=DEFAULT_OUTPUT,
help=f"Output bundle path (default: {DEFAULT_OUTPUT})",
)
parser.add_argument(
"--sign",
action="store_true",
help="Sign the bundle using cosign (requires cosign on PATH)",
)
parser.add_argument(
"--cosign-key",
type=str,
default=os.environ.get("COSIGN_KEY_REF"),
help="Cosign key reference (file:..., azurekms://..., etc.)",
)
parser.add_argument(
"--identity-token",
type=str,
default=os.environ.get("COSIGN_IDENTITY_TOKEN"),
help="OIDC identity token for keyless signing",
)
return parser.parse_args(argv)
def main(argv: list[str] | None = None) -> int:
args = parse_args(argv)
validate_files(BUNDLE_CONTENTS)
bundle_path = args.output.resolve()
print(f"[*] Creating telemetry bundle at {bundle_path}")
create_bundle(bundle_path)
sha_path = write_checksum(bundle_path)
print(f"[✓] SHA-256 written to {sha_path}")
if args.sign:
print("[*] Signing bundle with cosign")
cosign_sign(bundle_path, args.cosign_key, args.identity_token)
sig_path = bundle_path.with_suffix(bundle_path.suffix + ".sig")
if sig_path.exists():
print(f"[✓] Cosign signature written to {sig_path}")
else:
print("[!] Cosign completed but signature file not found (ensure cosign version >= 2.2)")
return 0
if __name__ == "__main__":
sys.exit(main())
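A hedged end-to-end sketch (the directory follows from `REPO_ROOT = parents[3]`, i.e. somewhere under `ops/devops/telemetry/`; the filename is assumed):

```bash
# Build the bundle, then verify the checksum sidecar offline.
python ops/devops/telemetry/package_offline_bundle.py
(cd out/telemetry && sha256sum -c telemetry-offline-bundle.tar.gz.sha256)

# Optional keyed signing; requires cosign >= 2.2 on PATH.
python ops/devops/telemetry/package_offline_bundle.py --sign --cosign-key file:cosign.key
```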

View File

@@ -1,197 +1,197 @@
#!/usr/bin/env python3
"""
Smoke test for the StellaOps OpenTelemetry Collector deployment.
The script sends sample traces, metrics, and logs over OTLP/HTTP with mutual TLS
and asserts that the collector accepted the payloads by checking its Prometheus
metrics endpoint.
"""
from __future__ import annotations
import argparse
import json
import ssl
import sys
import time
import urllib.request
from pathlib import Path
TRACE_PAYLOAD = {
"resourceSpans": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeSpans": [
{
"scope": {"name": "smoke-test"},
"spans": [
{
"traceId": "00000000000000000000000000000001",
"spanId": "0000000000000001",
"name": "smoke-span",
"kind": 1,
"startTimeUnixNano": "1730000000000000000",
"endTimeUnixNano": "1730000000500000000",
"status": {"code": 0},
}
],
}
],
}
]
}
METRIC_PAYLOAD = {
"resourceMetrics": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeMetrics": [
{
"scope": {"name": "smoke-test"},
"metrics": [
{
"name": "smoke_gauge",
"gauge": {
"dataPoints": [
{
"asDouble": 1.0,
"timeUnixNano": "1730000001000000000",
"attributes": [
{"key": "phase", "value": {"stringValue": "ingest"}}
],
}
]
},
}
],
}
],
}
]
}
LOG_PAYLOAD = {
"resourceLogs": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeLogs": [
{
"scope": {"name": "smoke-test"},
"logRecords": [
{
"timeUnixNano": "1730000002000000000",
"severityNumber": 9,
"severityText": "Info",
"body": {"stringValue": "StellaOps collector smoke log"},
}
],
}
],
}
]
}
def _load_context(ca: Path, cert: Path, key: Path) -> ssl.SSLContext:
context = ssl.create_default_context(cafile=str(ca))
context.check_hostname = False
context.verify_mode = ssl.CERT_REQUIRED
context.load_cert_chain(certfile=str(cert), keyfile=str(key))
return context
def _post_json(url: str, payload: dict, context: ssl.SSLContext) -> None:
data = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(
url,
data=data,
headers={
"Content-Type": "application/json",
"User-Agent": "stellaops-otel-smoke/1.0",
},
method="POST",
)
with urllib.request.urlopen(request, context=context, timeout=10) as response:
if response.status // 100 != 2:
raise RuntimeError(f"{url} returned HTTP {response.status}")
def _fetch_metrics(url: str, context: ssl.SSLContext) -> str:
request = urllib.request.Request(
url,
headers={
"User-Agent": "stellaops-otel-smoke/1.0",
},
)
with urllib.request.urlopen(request, context=context, timeout=10) as response:
return response.read().decode("utf-8")
def _assert_counter(metrics: str, metric_name: str) -> None:
for line in metrics.splitlines():
if line.startswith(metric_name):
try:
_, value = line.split(" ")
if float(value) > 0:
return
except ValueError:
continue
raise AssertionError(f"{metric_name} not incremented")
def main() -> int:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--host", default="localhost", help="Collector host (default: %(default)s)")
parser.add_argument("--otlp-port", type=int, default=4318, help="OTLP/HTTP port")
parser.add_argument("--metrics-port", type=int, default=9464, help="Prometheus metrics port")
parser.add_argument("--health-port", type=int, default=13133, help="Health check port")
parser.add_argument("--ca", type=Path, default=Path("deploy/telemetry/certs/ca.crt"), help="CA certificate path")
parser.add_argument("--cert", type=Path, default=Path("deploy/telemetry/certs/client.crt"), help="Client certificate path")
parser.add_argument("--key", type=Path, default=Path("deploy/telemetry/certs/client.key"), help="Client key path")
args = parser.parse_args()
for path in (args.ca, args.cert, args.key):
if not path.exists():
print(f"[!] missing TLS material: {path}", file=sys.stderr)
return 1
context = _load_context(args.ca, args.cert, args.key)
otlp_base = f"https://{args.host}:{args.otlp_port}/v1"
print(f"[*] Sending OTLP traffic to {otlp_base}")
_post_json(f"{otlp_base}/traces", TRACE_PAYLOAD, context)
_post_json(f"{otlp_base}/metrics", METRIC_PAYLOAD, context)
_post_json(f"{otlp_base}/logs", LOG_PAYLOAD, context)
# Allow Prometheus exporter to update metrics
time.sleep(2)
metrics_url = f"https://{args.host}:{args.metrics_port}/metrics"
print(f"[*] Fetching collector metrics from {metrics_url}")
metrics = _fetch_metrics(metrics_url, context)
_assert_counter(metrics, "otelcol_receiver_accepted_spans")
_assert_counter(metrics, "otelcol_receiver_accepted_logs")
_assert_counter(metrics, "otelcol_receiver_accepted_metric_points")
print("[✓] Collector accepted traces, logs, and metrics.")
return 0
if __name__ == "__main__":
raise SystemExit(main())
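A hedged run-through against a local Compose deployment (script names are assumptions; the default `--ca/--cert/--key` paths match the dev TLS generator above, so run from the repo root):

```bash
# 1. Generate dev mTLS material, 2. start the collector overlay, 3. smoke it.
./ops/devops/telemetry/generate_dev_tls.sh   # hypothetical name for the cert script above
docker compose -f deploy/compose/docker-compose.telemetry.yaml up -d
python ops/devops/telemetry/smoke_otel_collector.py --host localhost --otlp-port 4318 --metrics-port 9464
```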
#!/usr/bin/env python3
"""
Smoke test for the StellaOps OpenTelemetry Collector deployment.
The script sends sample traces, metrics, and logs over OTLP/HTTP with mutual TLS
and asserts that the collector accepted the payloads by checking its Prometheus
metrics endpoint.
"""
from __future__ import annotations
import argparse
import json
import ssl
import sys
import time
import urllib.request
from pathlib import Path
TRACE_PAYLOAD = {
"resourceSpans": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeSpans": [
{
"scope": {"name": "smoke-test"},
"spans": [
{
"traceId": "00000000000000000000000000000001",
"spanId": "0000000000000001",
"name": "smoke-span",
"kind": 1,
"startTimeUnixNano": "1730000000000000000",
"endTimeUnixNano": "1730000000500000000",
"status": {"code": 0},
}
],
}
],
}
]
}
METRIC_PAYLOAD = {
"resourceMetrics": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeMetrics": [
{
"scope": {"name": "smoke-test"},
"metrics": [
{
"name": "smoke_gauge",
"gauge": {
"dataPoints": [
{
"asDouble": 1.0,
"timeUnixNano": "1730000001000000000",
"attributes": [
{"key": "phase", "value": {"stringValue": "ingest"}}
],
}
]
},
}
],
}
],
}
]
}
LOG_PAYLOAD = {
"resourceLogs": [
{
"resource": {
"attributes": [
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
{"key": "tenant.id", "value": {"stringValue": "dev"}},
]
},
"scopeLogs": [
{
"scope": {"name": "smoke-test"},
"logRecords": [
{
"timeUnixNano": "1730000002000000000",
"severityNumber": 9,
"severityText": "Info",
"body": {"stringValue": "StellaOps collector smoke log"},
}
],
}
],
}
]
}
def _load_context(ca: Path, cert: Path, key: Path) -> ssl.SSLContext:
context = ssl.create_default_context(cafile=str(ca))
context.check_hostname = False
context.verify_mode = ssl.CERT_REQUIRED
context.load_cert_chain(certfile=str(cert), keyfile=str(key))
return context
def _post_json(url: str, payload: dict, context: ssl.SSLContext) -> None:
data = json.dumps(payload).encode("utf-8")
request = urllib.request.Request(
url,
data=data,
headers={
"Content-Type": "application/json",
"User-Agent": "stellaops-otel-smoke/1.0",
},
method="POST",
)
with urllib.request.urlopen(request, context=context, timeout=10) as response:
if response.status // 100 != 2:
raise RuntimeError(f"{url} returned HTTP {response.status}")
def _fetch_metrics(url: str, context: ssl.SSLContext) -> str:
request = urllib.request.Request(
url,
headers={
"User-Agent": "stellaops-otel-smoke/1.0",
},
)
with urllib.request.urlopen(request, context=context, timeout=10) as response:
return response.read().decode("utf-8")
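# Prometheus exposition lines look like:
#   otelcol_receiver_accepted_spans{receiver="otlp",transport="http"} 3
# so the startswith() match below also covers labelled (and _total-suffixed)
# variants of the counter name.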
def _assert_counter(metrics: str, metric_name: str) -> None:
for line in metrics.splitlines():
if line.startswith(metric_name):
try:
_, value = line.split(" ")
if float(value) > 0:
return
except ValueError:
continue
raise AssertionError(f"{metric_name} not incremented")
def main() -> int:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--host", default="localhost", help="Collector host (default: %(default)s)")
parser.add_argument("--otlp-port", type=int, default=4318, help="OTLP/HTTP port")
parser.add_argument("--metrics-port", type=int, default=9464, help="Prometheus metrics port")
parser.add_argument("--health-port", type=int, default=13133, help="Health check port")
parser.add_argument("--ca", type=Path, default=Path("deploy/telemetry/certs/ca.crt"), help="CA certificate path")
parser.add_argument("--cert", type=Path, default=Path("deploy/telemetry/certs/client.crt"), help="Client certificate path")
parser.add_argument("--key", type=Path, default=Path("deploy/telemetry/certs/client.key"), help="Client key path")
args = parser.parse_args()
for path in (args.ca, args.cert, args.key):
if not path.exists():
print(f"[!] missing TLS material: {path}", file=sys.stderr)
return 1
context = _load_context(args.ca, args.cert, args.key)
otlp_base = f"https://{args.host}:{args.otlp_port}/v1"
print(f"[*] Sending OTLP traffic to {otlp_base}")
_post_json(f"{otlp_base}/traces", TRACE_PAYLOAD, context)
_post_json(f"{otlp_base}/metrics", METRIC_PAYLOAD, context)
_post_json(f"{otlp_base}/logs", LOG_PAYLOAD, context)
# Allow Prometheus exporter to update metrics
time.sleep(2)
metrics_url = f"https://{args.host}:{args.metrics_port}/metrics"
print(f"[*] Fetching collector metrics from {metrics_url}")
metrics = _fetch_metrics(metrics_url, context)
_assert_counter(metrics, "otelcol_receiver_accepted_spans")
_assert_counter(metrics, "otelcol_receiver_accepted_logs")
_assert_counter(metrics, "otelcol_receiver_accepted_metric_points")
print("[✓] Collector accepted traces, logs, and metrics.")
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -1,183 +1,183 @@
#!/usr/bin/env python3
"""
Validate NuGet source ordering for StellaOps.
Ensures `local-nuget` is the highest priority feed in both NuGet.config and the
Directory.Build.props restore configuration. Fails fast with actionable errors
so CI/offline kit workflows can assert deterministic restore ordering.
"""
from __future__ import annotations
import argparse
import subprocess
import sys
import xml.etree.ElementTree as ET
from pathlib import Path
REPO_ROOT = Path(__file__).resolve().parents[2]
NUGET_CONFIG = REPO_ROOT / "NuGet.config"
ROOT_PROPS = REPO_ROOT / "Directory.Build.props"
EXPECTED_SOURCE_KEYS = ["local", "dotnet-public", "nuget.org"]
class ValidationError(Exception):
"""Raised when validation fails."""
def _fail(message: str) -> None:
raise ValidationError(message)
def _parse_xml(path: Path) -> ET.ElementTree:
try:
return ET.parse(path)
    except FileNotFoundError:
        _fail(f"Missing required file: {path}")
except ET.ParseError as exc:
_fail(f"Could not parse XML for {path}: {exc}")
def validate_nuget_config() -> None:
tree = _parse_xml(NUGET_CONFIG)
root = tree.getroot()
package_sources = root.find("packageSources")
if package_sources is None:
_fail("NuGet.config must declare a <packageSources> section.")
children = list(package_sources)
if not children or children[0].tag != "clear":
_fail("NuGet.config packageSources must begin with a <clear /> element.")
adds = [child for child in children if child.tag == "add"]
if not adds:
_fail("NuGet.config packageSources must define at least one <add> entry.")
keys = [add.attrib.get("key") for add in adds]
if keys[: len(EXPECTED_SOURCE_KEYS)] != EXPECTED_SOURCE_KEYS:
formatted = ", ".join(keys) or "<empty>"
_fail(
"NuGet.config packageSources must list feeds in the order "
f"{EXPECTED_SOURCE_KEYS}. Found: {formatted}"
)
local_value = adds[0].attrib.get("value", "")
if Path(local_value).name != "local-nuget":
_fail(
"NuGet.config local feed should point at the repo-local mirror "
f"'local-nuget', found value '{local_value}'."
)
clear = package_sources.find("clear")
if clear is None:
_fail("NuGet.config packageSources must start with <clear /> to avoid inherited feeds.")
def validate_directory_build_props() -> None:
tree = _parse_xml(ROOT_PROPS)
root = tree.getroot()
defaults = None
for element in root.findall(".//_StellaOpsDefaultRestoreSources"):
        defaults = [fragment.strip() for fragment in (element.text or "").split(";") if fragment.strip()]
break
if defaults is None:
_fail("Directory.Build.props must define _StellaOpsDefaultRestoreSources.")
expected_props = [
"$(StellaOpsLocalNuGetSource)",
"$(StellaOpsDotNetPublicSource)",
"$(StellaOpsNuGetOrgSource)",
]
if defaults != expected_props:
_fail(
"Directory.Build.props _StellaOpsDefaultRestoreSources must list feeds "
f"in the order {expected_props}. Found: {defaults}"
)
restore_nodes = root.findall(".//RestoreSources")
if not restore_nodes:
_fail("Directory.Build.props must override RestoreSources to force deterministic ordering.")
uses_default_first = any(
node.text
and node.text.strip().startswith("$(_StellaOpsDefaultRestoreSources)")
for node in restore_nodes
)
if not uses_default_first:
_fail(
"Directory.Build.props RestoreSources override must place "
"$(_StellaOpsDefaultRestoreSources) at the beginning."
)
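# ripgrep exits 0 when it finds matches and 1 when it finds none; both are
# treated as success below, and any other exit code is a hard failure.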
def assert_single_nuget_config() -> None:
extra_configs: list[Path] = []
configs: set[Path] = set()
for glob in ("NuGet.config", "nuget.config"):
try:
result = subprocess.run(
["rg", "--files", f"-g{glob}"],
check=False,
capture_output=True,
text=True,
cwd=REPO_ROOT,
)
except FileNotFoundError as exc:
_fail("ripgrep (rg) is required for validation but was not found on PATH.")
if result.returncode not in (0, 1):
_fail(
f"ripgrep failed while searching for {glob}: {result.stderr.strip() or result.returncode}"
)
for line in result.stdout.splitlines():
configs.add((REPO_ROOT / line).resolve())
configs.discard(NUGET_CONFIG.resolve())
extra_configs.extend(sorted(configs))
if extra_configs:
formatted = "\n ".join(str(path.relative_to(REPO_ROOT)) for path in extra_configs)
_fail(
"Unexpected additional NuGet.config files detected. "
"Consolidate feed configuration in the repo root:\n "
f"{formatted}"
)
def parse_args(argv: list[str]) -> argparse.Namespace:
parser = argparse.ArgumentParser(
description="Verify StellaOps NuGet feeds prioritise the local mirror."
)
parser.add_argument(
"--skip-rg",
action="store_true",
help="Skip ripgrep discovery of extra NuGet.config files (useful for focused runs).",
)
return parser.parse_args(argv)
def main(argv: list[str]) -> int:
args = parse_args(argv)
validations = [
("NuGet.config ordering", validate_nuget_config),
("Directory.Build.props restore override", validate_directory_build_props),
]
if not args.skip_rg:
validations.append(("single NuGet.config", assert_single_nuget_config))
for label, check in validations:
try:
check()
except ValidationError as exc:
sys.stderr.write(f"[FAIL] {label}: {exc}\n")
return 1
else:
sys.stdout.write(f"[OK] {label}\n")
return 0
if __name__ == "__main__":
sys.exit(main(sys.argv[1:]))

View File

@@ -1,445 +1,445 @@
#!/usr/bin/env python3
"""Package the StellaOps Offline Kit with deterministic artefacts and manifest."""
from __future__ import annotations
import argparse
import datetime as dt
import gzip
import hashlib
import json
import os
import re
import shutil
import subprocess
import sys
import tarfile
from collections import OrderedDict
from pathlib import Path
from typing import Any, Iterable, Mapping, MutableMapping, Optional
REPO_ROOT = Path(__file__).resolve().parents[2]
RELEASE_TOOLS_DIR = REPO_ROOT / "ops" / "devops" / "release"
TELEMETRY_TOOLS_DIR = REPO_ROOT / "ops" / "devops" / "telemetry"
TELEMETRY_BUNDLE_PATH = REPO_ROOT / "out" / "telemetry" / "telemetry-offline-bundle.tar.gz"
if str(RELEASE_TOOLS_DIR) not in sys.path:
sys.path.insert(0, str(RELEASE_TOOLS_DIR))
from verify_release import (  # type: ignore[import-not-found]
load_manifest,
resolve_path,
verify_release,
)
import mirror_debug_store  # type: ignore[import-not-found]
DEFAULT_RELEASE_DIR = REPO_ROOT / "out" / "release"
DEFAULT_STAGING_DIR = REPO_ROOT / "out" / "offline-kit" / "staging"
DEFAULT_OUTPUT_DIR = REPO_ROOT / "out" / "offline-kit" / "dist"
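# Fan-out map: release-manifest artefact kind -> Offline Kit staging subdirectory.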
ARTIFACT_TARGETS = {
"sbom": Path("sboms"),
"provenance": Path("attest"),
"signature": Path("signatures"),
"metadata": Path("metadata/docker"),
}
class CommandError(RuntimeError):
"""Raised when an external command fails."""
def run(cmd: Iterable[str], *, cwd: Optional[Path] = None, env: Optional[Mapping[str, str]] = None) -> str:
process_env = dict(os.environ)
if env:
process_env.update(env)
result = subprocess.run(
list(cmd),
cwd=str(cwd) if cwd else None,
env=process_env,
check=False,
capture_output=True,
text=True,
)
if result.returncode != 0:
raise CommandError(
f"Command failed ({result.returncode}): {' '.join(cmd)}\nSTDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
)
return result.stdout
def compute_sha256(path: Path) -> str:
sha = hashlib.sha256()
with path.open("rb") as handle:
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
sha.update(chunk)
return sha.hexdigest()
def utc_now_iso() -> str:
return dt.datetime.now(tz=dt.timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")
def safe_component_name(name: str) -> str:
return re.sub(r"[^A-Za-z0-9_.-]", "-", name.strip().lower())
def clean_directory(path: Path) -> None:
if path.exists():
shutil.rmtree(path)
path.mkdir(parents=True, exist_ok=True)
def run_python_analyzer_smoke() -> None:
script = REPO_ROOT / "ops" / "offline-kit" / "run-python-analyzer-smoke.sh"
run(["bash", str(script)], cwd=REPO_ROOT)
def copy_if_exists(source: Path, target: Path) -> None:
if source.is_dir():
shutil.copytree(source, target, dirs_exist_ok=True)
elif source.is_file():
target.parent.mkdir(parents=True, exist_ok=True)
shutil.copy2(source, target)
def copy_release_manifests(release_dir: Path, staging_dir: Path) -> None:
manifest_dir = staging_dir / "manifest"
manifest_dir.mkdir(parents=True, exist_ok=True)
for name in ("release.yaml", "release.yaml.sha256", "release.json", "release.json.sha256"):
source = release_dir / name
if source.exists():
shutil.copy2(source, manifest_dir / source.name)
def copy_component_artifacts(
manifest: Mapping[str, Any],
release_dir: Path,
staging_dir: Path,
) -> None:
components = manifest.get("components") or []
for component in sorted(components, key=lambda entry: str(entry.get("name", ""))):
if not isinstance(component, Mapping):
continue
component_name = safe_component_name(str(component.get("name", "component")))
for key, target_root in ARTIFACT_TARGETS.items():
entry = component.get(key)
if not entry or not isinstance(entry, Mapping):
continue
path_str = entry.get("path")
if not path_str:
continue
resolved = resolve_path(str(path_str), release_dir)
if not resolved.exists():
raise FileNotFoundError(f"Component '{component_name}' {key} artefact not found: {resolved}")
target_dir = staging_dir / target_root
target_dir.mkdir(parents=True, exist_ok=True)
target_name = f"{component_name}-{resolved.name}" if resolved.name else component_name
shutil.copy2(resolved, target_dir / target_name)
def copy_collections(
manifest: Mapping[str, Any],
release_dir: Path,
staging_dir: Path,
) -> None:
for collection, subdir in (("charts", Path("charts")), ("compose", Path("compose"))):
entries = manifest.get(collection) or []
for entry in entries:
if not isinstance(entry, Mapping):
continue
path_str = entry.get("path")
if not path_str:
continue
resolved = resolve_path(str(path_str), release_dir)
if not resolved.exists():
raise FileNotFoundError(f"{collection} artefact not found: {resolved}")
target_dir = staging_dir / subdir
target_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(resolved, target_dir / resolved.name)
def copy_debug_store(release_dir: Path, staging_dir: Path) -> None:
mirror_debug_store.main(
[
"--release-dir",
str(release_dir),
"--offline-kit-dir",
str(staging_dir),
]
)
def copy_plugins_and_assets(staging_dir: Path) -> None:
copy_if_exists(REPO_ROOT / "plugins" / "scanner", staging_dir / "plugins" / "scanner")
copy_if_exists(REPO_ROOT / "certificates", staging_dir / "certificates")
copy_if_exists(REPO_ROOT / "seed-data", staging_dir / "seed-data")
docs_dir = staging_dir / "docs"
docs_dir.mkdir(parents=True, exist_ok=True)
copy_if_exists(REPO_ROOT / "docs" / "24_OFFLINE_KIT.md", docs_dir / "24_OFFLINE_KIT.md")
copy_if_exists(REPO_ROOT / "docs" / "ops" / "telemetry-collector.md", docs_dir / "telemetry-collector.md")
copy_if_exists(REPO_ROOT / "docs" / "ops" / "telemetry-storage.md", docs_dir / "telemetry-storage.md")
def package_telemetry_bundle(staging_dir: Path) -> None:
script = TELEMETRY_TOOLS_DIR / "package_offline_bundle.py"
if not script.exists():
return
TELEMETRY_BUNDLE_PATH.parent.mkdir(parents=True, exist_ok=True)
run(["python", str(script), "--output", str(TELEMETRY_BUNDLE_PATH)], cwd=REPO_ROOT)
telemetry_dir = staging_dir / "telemetry"
telemetry_dir.mkdir(parents=True, exist_ok=True)
shutil.copy2(TELEMETRY_BUNDLE_PATH, telemetry_dir / TELEMETRY_BUNDLE_PATH.name)
sha_path = TELEMETRY_BUNDLE_PATH.with_suffix(TELEMETRY_BUNDLE_PATH.suffix + ".sha256")
if sha_path.exists():
shutil.copy2(sha_path, telemetry_dir / sha_path.name)
def scan_files(staging_dir: Path, exclude: Optional[set[str]] = None) -> list[OrderedDict[str, Any]]:
entries: list[OrderedDict[str, Any]] = []
exclude = exclude or set()
for path in sorted(staging_dir.rglob("*")):
if not path.is_file():
continue
rel = path.relative_to(staging_dir).as_posix()
if rel in exclude:
continue
entries.append(
OrderedDict(
(
("name", rel),
("sha256", compute_sha256(path)),
("size", path.stat().st_size),
)
)
)
return entries
def write_offline_manifest(
staging_dir: Path,
version: str,
channel: str,
release_manifest_sha: Optional[str],
) -> tuple[Path, str]:
manifest_dir = staging_dir / "manifest"
manifest_dir.mkdir(parents=True, exist_ok=True)
offline_manifest_path = manifest_dir / "offline-manifest.json"
files = scan_files(staging_dir, exclude={"manifest/offline-manifest.json", "manifest/offline-manifest.json.sha256"})
manifest_data = OrderedDict(
(
(
"bundle",
OrderedDict(
(
("version", version),
("channel", channel),
("capturedAt", utc_now_iso()),
("releaseManifestSha256", release_manifest_sha),
)
),
),
("artifacts", files),
)
)
with offline_manifest_path.open("w", encoding="utf-8") as handle:
json.dump(manifest_data, handle, indent=2)
handle.write("\n")
manifest_sha = compute_sha256(offline_manifest_path)
(offline_manifest_path.with_suffix(".json.sha256")).write_text(
f"{manifest_sha} {offline_manifest_path.name}\n",
encoding="utf-8",
)
return offline_manifest_path, manifest_sha
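# Tar metadata is normalised below (fixed uid/gid, blank owner names, zero mtime)
# so repeated packaging runs emit byte-identical entries.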
def tarinfo_filter(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo:
tarinfo.uid = 0
tarinfo.gid = 0
tarinfo.uname = ""
tarinfo.gname = ""
tarinfo.mtime = 0
return tarinfo
def create_tarball(staging_dir: Path, output_dir: Path, bundle_name: str) -> Path:
    output_dir.mkdir(parents=True, exist_ok=True)
    bundle_path = output_dir / f"{bundle_name}.tar.gz"
    if bundle_path.exists():
        bundle_path.unlink()
    # tarfile's "w:gz" mode stamps the current time into the gzip header, which
    # would break determinism; wrap a GzipFile with mtime pinned to zero instead.
    with bundle_path.open("wb") as raw:
        with gzip.GzipFile(fileobj=raw, mode="wb", compresslevel=9, mtime=0) as gz:
            with tarfile.open(fileobj=gz, mode="w") as tar:
                for path in sorted(staging_dir.rglob("*")):
                    if path.is_file():
                        arcname = path.relative_to(staging_dir).as_posix()
                        tar.add(path, arcname=arcname, filter=tarinfo_filter)
    return bundle_path
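# cosign sign-blob writes the signature to stdout; COSIGN_PASSWORD is supplied via
# the environment so encrypted keys can be used non-interactively.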
def sign_blob(
path: Path,
*,
key_ref: Optional[str],
identity_token: Optional[str],
password: Optional[str],
tlog_upload: bool,
) -> Optional[Path]:
if not key_ref and not identity_token:
return None
cmd = ["cosign", "sign-blob", "--yes", str(path)]
if key_ref:
cmd.extend(["--key", key_ref])
if identity_token:
cmd.extend(["--identity-token", identity_token])
if not tlog_upload:
cmd.append("--tlog-upload=false")
env = {"COSIGN_PASSWORD": password or ""}
signature = run(cmd, env=env)
sig_path = path.with_suffix(path.suffix + ".sig")
sig_path.write_text(signature, encoding="utf-8")
return sig_path
def build_offline_kit(args: argparse.Namespace) -> MutableMapping[str, Any]:
release_dir = args.release_dir.resolve()
staging_dir = args.staging_dir.resolve()
output_dir = args.output_dir.resolve()
verify_release(release_dir)
if not args.skip_smoke:
run_python_analyzer_smoke()
clean_directory(staging_dir)
copy_debug_store(release_dir, staging_dir)
manifest_data = load_manifest(release_dir)
release_manifest_sha = None
checksums = manifest_data.get("checksums")
if isinstance(checksums, Mapping):
release_manifest_sha = checksums.get("sha256")
copy_release_manifests(release_dir, staging_dir)
copy_component_artifacts(manifest_data, release_dir, staging_dir)
copy_collections(manifest_data, release_dir, staging_dir)
copy_plugins_and_assets(staging_dir)
package_telemetry_bundle(staging_dir)
offline_manifest_path, offline_manifest_sha = write_offline_manifest(
staging_dir,
args.version,
args.channel,
release_manifest_sha,
)
bundle_name = f"stella-ops-offline-kit-{args.version}-{args.channel}"
bundle_path = create_tarball(staging_dir, output_dir, bundle_name)
bundle_sha = compute_sha256(bundle_path)
bundle_sha_prefixed = f"sha256:{bundle_sha}"
    # Append ".sha256" to the full archive name; with_suffix(".tar.gz.sha256")
    # would only replace the ".gz" suffix and mangle the file name.
    bundle_path.with_suffix(bundle_path.suffix + ".sha256").write_text(
f"{bundle_sha} {bundle_path.name}\n",
encoding="utf-8",
)
signature_paths: dict[str, str] = {}
sig = sign_blob(
bundle_path,
key_ref=args.cosign_key,
identity_token=args.cosign_identity_token,
password=args.cosign_password,
tlog_upload=not args.no_transparency,
)
if sig:
signature_paths["bundleSignature"] = str(sig)
manifest_sig = sign_blob(
offline_manifest_path,
key_ref=args.cosign_key,
identity_token=args.cosign_identity_token,
password=args.cosign_password,
tlog_upload=not args.no_transparency,
)
if manifest_sig:
signature_paths["manifestSignature"] = str(manifest_sig)
metadata = OrderedDict(
(
("bundleId", args.bundle_id or f"{args.version}-{args.channel}-{utc_now_iso()}"),
("bundleName", bundle_path.name),
("bundleSha256", bundle_sha_prefixed),
("bundleSize", bundle_path.stat().st_size),
("manifestName", offline_manifest_path.name),
("manifestSha256", f"sha256:{offline_manifest_sha}"),
("manifestSize", offline_manifest_path.stat().st_size),
("channel", args.channel),
("version", args.version),
("capturedAt", utc_now_iso()),
)
)
if sig:
metadata["bundleSignatureName"] = Path(sig).name
if manifest_sig:
metadata["manifestSignatureName"] = Path(manifest_sig).name
metadata_path = output_dir / f"{bundle_name}.metadata.json"
with metadata_path.open("w", encoding="utf-8") as handle:
json.dump(metadata, handle, indent=2)
handle.write("\n")
return OrderedDict(
(
("bundlePath", str(bundle_path)),
("bundleSha256", bundle_sha),
("manifestPath", str(offline_manifest_path)),
("metadataPath", str(metadata_path)),
("signatures", signature_paths),
)
)
def parse_args(argv: Optional[list[str]] = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument("--version", required=True, help="Bundle version (e.g. 2025.10.0)")
parser.add_argument("--channel", default="edge", help="Release channel (default: %(default)s)")
parser.add_argument("--bundle-id", help="Optional explicit bundle identifier")
parser.add_argument(
"--release-dir",
type=Path,
default=DEFAULT_RELEASE_DIR,
help="Release artefact directory (default: %(default)s)",
)
parser.add_argument(
"--staging-dir",
type=Path,
default=DEFAULT_STAGING_DIR,
help="Temporary staging directory (default: %(default)s)",
)
parser.add_argument(
"--output-dir",
type=Path,
default=DEFAULT_OUTPUT_DIR,
help="Destination directory for packaged bundles (default: %(default)s)",
)
parser.add_argument("--cosign-key", dest="cosign_key", help="Cosign key reference for signing")
parser.add_argument("--cosign-password", dest="cosign_password", help="Cosign key password (if applicable)")
parser.add_argument("--cosign-identity-token", dest="cosign_identity_token", help="Cosign identity token")
parser.add_argument("--no-transparency", action="store_true", help="Disable Rekor transparency log uploads")
parser.add_argument("--skip-smoke", action="store_true", help="Skip analyzer smoke execution (testing only)")
return parser.parse_args(argv)
def main(argv: Optional[list[str]] = None) -> int:
args = parse_args(argv)
try:
result = build_offline_kit(args)
except Exception as exc: # pylint: disable=broad-except
print(f"offline-kit packaging failed: {exc}", file=sys.stderr)
return 1
print("✅ Offline kit packaged")
for key, value in result.items():
if isinstance(value, dict):
for sub_key, sub_val in value.items():
print(f" - {key}.{sub_key}: {sub_val}")
else:
print(f" - {key}: {value}")
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -1,221 +1,221 @@
#!/usr/bin/env python3
"""Mirror release debug-store artefacts into the Offline Kit staging tree.
This helper copies the release `debug/` directory (including `.build-id/`,
`debug-manifest.json`, and the `.sha256` companion) into the Offline Kit
output directory and verifies the manifest hashes after the copy. A summary
document is written under `metadata/debug-store.json` so packaging jobs can
surface the available build-ids and validation status.
"""
from __future__ import annotations
import argparse
import datetime as dt
import hashlib
import json
import pathlib
import shutil
import sys
from typing import Iterable, Tuple
REPO_ROOT = pathlib.Path(__file__).resolve().parents[2]
def compute_sha256(path: pathlib.Path) -> str:
    sha = hashlib.sha256()
with path.open("rb") as handle:
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
sha.update(chunk)
return sha.hexdigest()
def load_manifest(manifest_path: pathlib.Path) -> dict:
with manifest_path.open("r", encoding="utf-8") as handle:
return json.load(handle)
def parse_manifest_sha(sha_path: pathlib.Path) -> str | None:
if not sha_path.exists():
return None
text = sha_path.read_text(encoding="utf-8").strip()
if not text:
return None
# Allow either "<sha>" or "<sha> filename" formats.
return text.split()[0]
def iter_debug_files(base_dir: pathlib.Path) -> Iterable[pathlib.Path]:
for path in base_dir.rglob("*"):
if path.is_file():
yield path
def copy_debug_store(source_root: pathlib.Path, target_root: pathlib.Path, *, dry_run: bool) -> None:
if dry_run:
print(f"[dry-run] Would copy '{source_root}' -> '{target_root}'")
return
if target_root.exists():
shutil.rmtree(target_root)
shutil.copytree(source_root, target_root)
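# Manifest debugPath values are offline-kit-relative (e.g. "debug/.build-id/..."),
# so they are resolved against the parent of the mirrored debug/ directory.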
def verify_debug_store(manifest: dict, offline_root: pathlib.Path) -> Tuple[int, int]:
"""Return (verified_count, total_entries)."""
artifacts = manifest.get("artifacts", [])
verified = 0
for entry in artifacts:
debug_path = entry.get("debugPath")
expected_sha = entry.get("sha256")
expected_size = entry.get("size")
if not debug_path or not expected_sha:
continue
relative = pathlib.PurePosixPath(debug_path)
resolved = (offline_root.parent / relative).resolve()
if not resolved.exists():
raise FileNotFoundError(f"Debug artefact missing after mirror: {relative}")
actual_sha = compute_sha256(resolved)
if actual_sha != expected_sha:
raise ValueError(
f"Digest mismatch for {relative}: expected {expected_sha}, found {actual_sha}"
)
if expected_size is not None:
actual_size = resolved.stat().st_size
if actual_size != expected_size:
raise ValueError(
f"Size mismatch for {relative}: expected {expected_size}, found {actual_size}"
)
verified += 1
return verified, len(artifacts)
def summarize_store(manifest: dict, manifest_sha: str | None, offline_root: pathlib.Path, summary_path: pathlib.Path) -> None:
debug_files = [
path
for path in iter_debug_files(offline_root)
if path.suffix == ".debug"
]
total_size = sum(path.stat().st_size for path in debug_files)
build_ids = sorted(
{entry.get("buildId") for entry in manifest.get("artifacts", []) if entry.get("buildId")}
)
summary = {
"generatedAt": dt.datetime.now(tz=dt.timezone.utc)
.replace(microsecond=0)
.isoformat()
.replace("+00:00", "Z"),
"manifestGeneratedAt": manifest.get("generatedAt"),
"manifestSha256": manifest_sha,
"platforms": manifest.get("platforms")
or sorted({entry.get("platform") for entry in manifest.get("artifacts", []) if entry.get("platform")}),
"artifactCount": len(manifest.get("artifacts", [])),
"buildIds": {
"total": len(build_ids),
"samples": build_ids[:10],
},
"debugFiles": {
"count": len(debug_files),
"totalSizeBytes": total_size,
},
}
summary_path.parent.mkdir(parents=True, exist_ok=True)
with summary_path.open("w", encoding="utf-8") as handle:
json.dump(summary, handle, indent=2)
handle.write("\n")
def resolve_release_debug_dir(base: pathlib.Path) -> pathlib.Path:
debug_dir = base / "debug"
if debug_dir.exists():
return debug_dir
# Allow specifying the channel directory directly (e.g. out/release/stable)
if base.name == "debug":
return base
raise FileNotFoundError(f"Debug directory not found under '{base}'")
def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
parser = argparse.ArgumentParser(description=__doc__)
parser.add_argument(
"--release-dir",
type=pathlib.Path,
default=REPO_ROOT / "out" / "release",
help="Release output directory containing the debug store (default: %(default)s)",
)
parser.add_argument(
"--offline-kit-dir",
type=pathlib.Path,
default=REPO_ROOT / "out" / "offline-kit",
help="Offline Kit staging directory (default: %(default)s)",
)
parser.add_argument(
"--verify-only",
action="store_true",
help="Skip copying and only verify the existing offline kit debug store",
)
parser.add_argument(
"--dry-run",
action="store_true",
help="Print actions without copying files",
)
return parser.parse_args(argv)
def main(argv: list[str] | None = None) -> int:
args = parse_args(argv)
try:
source_debug = resolve_release_debug_dir(args.release_dir.resolve())
except FileNotFoundError as exc:
print(f"error: {exc}", file=sys.stderr)
return 2
target_root = (args.offline_kit_dir / "debug").resolve()
if not args.verify_only:
copy_debug_store(source_debug, target_root, dry_run=args.dry_run)
if args.dry_run:
return 0
manifest_path = target_root / "debug-manifest.json"
if not manifest_path.exists():
print(f"error: offline kit manifest missing at {manifest_path}", file=sys.stderr)
return 3
manifest = load_manifest(manifest_path)
manifest_sha_path = manifest_path.with_suffix(manifest_path.suffix + ".sha256")
recorded_sha = parse_manifest_sha(manifest_sha_path)
recomputed_sha = compute_sha256(manifest_path)
if recorded_sha and recorded_sha != recomputed_sha:
print(
f"warning: manifest SHA mismatch (recorded {recorded_sha}, recomputed {recomputed_sha}); updating checksum",
file=sys.stderr,
)
manifest_sha_path.write_text(f"{recomputed_sha} {manifest_path.name}\n", encoding="utf-8")
verified, total = verify_debug_store(manifest, target_root)
print(f"✔ verified {verified}/{total} debug artefacts (manifest SHA {recomputed_sha})")
summary_path = args.offline_kit_dir / "metadata" / "debug-store.json"
summarize_store(manifest, recomputed_sha, target_root, summary_path)
print(f" summary written to {summary_path}")
return 0
if __name__ == "__main__":
raise SystemExit(main())

View File

@@ -2,7 +2,7 @@
set -euo pipefail
repo_root="$(git -C "${BASH_SOURCE%/*}/.." rev-parse --show-toplevel 2>/dev/null || pwd)"
project_path="${repo_root}/src/StellaOps.Scanner.Analyzers.Lang.Python/StellaOps.Scanner.Analyzers.Lang.Python.csproj"
project_path="${repo_root}/src/Scanner/__Libraries/StellaOps.Scanner.Analyzers.Lang.Python/StellaOps.Scanner.Analyzers.Lang.Python.csproj"
output_dir="${repo_root}/out/analyzers/python"
plugin_dir="${repo_root}/plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Python"

View File

@@ -1,256 +1,256 @@
from __future__ import annotations
import json
import tarfile
import tempfile
import unittest
import argparse
import sys
from collections import OrderedDict
from pathlib import Path
sys.path.append(str(Path(__file__).resolve().parent))
from build_release import write_manifest  # type: ignore[import-not-found]
from build_offline_kit import build_offline_kit, compute_sha256  # type: ignore[import-not-found]
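# The fixture below fabricates a minimal release tree (SBOM, provenance, signature,
# metadata, Helm chart, compose file, and debug store) so build_offline_kit can run
# end-to-end without a real release pipeline.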
class OfflineKitBuilderTests(unittest.TestCase):
def setUp(self) -> None:
self._temp = tempfile.TemporaryDirectory()
self.base_path = Path(self._temp.name)
self.out_dir = self.base_path / "out"
self.release_dir = self.out_dir / "release"
self.staging_dir = self.base_path / "staging"
self.output_dir = self.base_path / "dist"
self._create_sample_release()
def tearDown(self) -> None:
self._temp.cleanup()
def _relative_to_out(self, path: Path) -> str:
return path.relative_to(self.out_dir).as_posix()
def _write_json(self, path: Path, payload: dict[str, object]) -> None:
path.parent.mkdir(parents=True, exist_ok=True)
with path.open("w", encoding="utf-8") as handle:
json.dump(payload, handle, indent=2)
handle.write("\n")
def _create_sample_release(self) -> None:
self.release_dir.mkdir(parents=True, exist_ok=True)
sbom_path = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
sbom_path.parent.mkdir(parents=True, exist_ok=True)
sbom_path.write_text('{"bomFormat":"CycloneDX","specVersion":"1.5"}\n', encoding="utf-8")
sbom_sha = compute_sha256(sbom_path)
provenance_path = self.release_dir / "artifacts/provenance/sample.provenance.json"
self._write_json(
provenance_path,
{
"buildDefinition": {"buildType": "https://example/build"},
"runDetails": {"builder": {"id": "https://example/ci"}},
},
)
provenance_sha = compute_sha256(provenance_path)
signature_path = self.release_dir / "artifacts/signatures/sample.signature"
signature_path.parent.mkdir(parents=True, exist_ok=True)
signature_path.write_text("signature-data\n", encoding="utf-8")
signature_sha = compute_sha256(signature_path)
metadata_path = self.release_dir / "artifacts/metadata/sample.metadata.json"
self._write_json(metadata_path, {"digest": "sha256:1234"})
metadata_sha = compute_sha256(metadata_path)
chart_path = self.release_dir / "helm/stellaops-1.0.0.tgz"
chart_path.parent.mkdir(parents=True, exist_ok=True)
chart_path.write_bytes(b"helm-chart-data")
chart_sha = compute_sha256(chart_path)
compose_path = self.release_dir.parent / "deploy/compose/docker-compose.dev.yaml"
compose_path.parent.mkdir(parents=True, exist_ok=True)
compose_path.write_text("services: {}\n", encoding="utf-8")
compose_sha = compute_sha256(compose_path)
debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
debug_file.parent.mkdir(parents=True, exist_ok=True)
debug_file.write_bytes(b"\x7fELFDEBUGDATA")
debug_sha = compute_sha256(debug_file)
debug_manifest_path = self.release_dir / "debug/debug-manifest.json"
debug_manifest = OrderedDict(
(
("generatedAt", "2025-10-26T00:00:00Z"),
("version", "1.0.0"),
("channel", "edge"),
(
"artifacts",
[
OrderedDict(
(
("buildId", "abcdef1234"),
("platform", "linux/amd64"),
("debugPath", "debug/.build-id/ab/cdef.debug"),
("sha256", debug_sha),
("size", debug_file.stat().st_size),
("components", ["sample"]),
("images", ["registry.example/sample@sha256:feedface"]),
("sources", ["app/sample.dll"]),
)
)
],
),
)
)
self._write_json(debug_manifest_path, debug_manifest)
debug_manifest_sha = compute_sha256(debug_manifest_path)
(debug_manifest_path.with_suffix(debug_manifest_path.suffix + ".sha256")).write_text(
f"{debug_manifest_sha} {debug_manifest_path.name}\n",
encoding="utf-8",
)
manifest = OrderedDict(
(
(
"release",
OrderedDict(
(
("version", "1.0.0"),
("channel", "edge"),
("date", "2025-10-26T00:00:00Z"),
("calendar", "2025.10"),
)
),
),
(
"components",
[
OrderedDict(
(
("name", "sample"),
("image", "registry.example/sample@sha256:feedface"),
("tags", ["registry.example/sample:1.0.0"]),
(
"sbom",
OrderedDict(
(
("path", self._relative_to_out(sbom_path)),
("sha256", sbom_sha),
)
),
),
(
"provenance",
OrderedDict(
(
("path", self._relative_to_out(provenance_path)),
("sha256", provenance_sha),
)
),
),
(
"signature",
OrderedDict(
(
("path", self._relative_to_out(signature_path)),
("sha256", signature_sha),
("ref", "sigstore://example"),
("tlogUploaded", True),
)
),
),
(
"metadata",
OrderedDict(
(
("path", self._relative_to_out(metadata_path)),
("sha256", metadata_sha),
)
),
),
)
)
],
),
(
"charts",
[
OrderedDict(
(
("name", "stellaops"),
("version", "1.0.0"),
("path", self._relative_to_out(chart_path)),
("sha256", chart_sha),
)
)
],
),
(
"compose",
[
OrderedDict(
(
("name", "docker-compose.dev.yaml"),
("path", compose_path.relative_to(self.out_dir).as_posix()),
("sha256", compose_sha),
)
)
],
),
(
"debugStore",
OrderedDict(
(
("manifest", "debug/debug-manifest.json"),
("sha256", debug_manifest_sha),
("entries", 1),
("platforms", ["linux/amd64"]),
("directory", "debug/.build-id"),
)
),
),
)
)
write_manifest(manifest, self.release_dir)
def test_build_offline_kit(self) -> None:
args = argparse.Namespace(
version="2025.10.0",
channel="edge",
bundle_id="bundle-001",
release_dir=self.release_dir,
staging_dir=self.staging_dir,
output_dir=self.output_dir,
cosign_key=None,
cosign_password=None,
cosign_identity_token=None,
no_transparency=False,
skip_smoke=True,
)
result = build_offline_kit(args)
bundle_path = Path(result["bundlePath"])
self.assertTrue(bundle_path.exists())
        offline_manifest = self.staging_dir / "manifest" / "offline-manifest.json"
self.assertTrue(offline_manifest.exists())
with offline_manifest.open("r", encoding="utf-8") as handle:
manifest_data = json.load(handle)
artifacts = manifest_data["artifacts"]
self.assertTrue(any(item["name"].startswith("sboms/") for item in artifacts))
metadata_path = Path(result["metadataPath"])
data = json.loads(metadata_path.read_text(encoding="utf-8"))
self.assertTrue(data["bundleSha256"].startswith("sha256:"))
self.assertTrue(data["manifestSha256"].startswith("sha256:"))
with tarfile.open(bundle_path, "r:gz") as tar:
members = tar.getnames()
self.assertIn("manifest/release.yaml", members)
self.assertTrue(any(name.startswith("sboms/sample-") for name in members))
if __name__ == "__main__":
unittest.main()