Add Policy DSL Validator, Schema Exporter, and Simulation Smoke tools
Some checks failed
Docs CI / lint-and-preview (push) Has been cancelled
- Implemented PolicyDslValidator with command-line options for strict mode and JSON output.
- Created PolicySchemaExporter to generate JSON schemas for policy-related models.
- Developed PolicySimulationSmoke tool to validate policy simulations against expected outcomes.
- Added project files and necessary dependencies for each tool.
- Ensured proper error handling and usage instructions across tools.
@@ -38,6 +38,8 @@ Key environment variables (mirroring `StellaOpsAuthorityOptions`):
For additional options, see `etc/authority.yaml.sample`.

> **Graph Explorer reminder:** When enabling Cartographer or Graph API components, update `etc/authority.yaml` so the `cartographer-service` client includes `properties.serviceIdentity: "cartographer"` and a tenant hint. Authority now rejects `graph:write` tokens that lack this marker, so existing deployments must apply the update before rolling out the new build.

## Key rotation automation (OPS3)

The `key-rotation.sh` helper wraps the `/internal/signing/rotate` endpoint delivered with CORE10. It can run in CI/CD once the new PEM key is staged on the Authority host volume.
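A minimal sketch of what the helper drives, assuming the endpoint accepts an authenticated POST once the new key file is staged; the base URL, token variable, and any request payload are deployment-specific placeholders rather than shipped defaults:

```bash
# Hypothetical manual trigger; key-rotation.sh wraps this same endpoint.
# AUTHORITY_URL and AUTHORITY_ADMIN_TOKEN are placeholders, not shipped defaults.
curl -fsS -X POST "${AUTHORITY_URL}/internal/signing/rotate" \
  -H "Authorization: Bearer ${AUTHORITY_ADMIN_TOKEN}"
```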
@@ -2,7 +2,7 @@
| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-OPS-14-003 | TODO | Deployment Guild | DEVOPS-REL-14-001 | Document and script upgrade/rollback flows, channel management, and compatibility matrices per architecture. | Helm/Compose guides updated with digest pinning, automated checks committed, rollback drill recorded. |
| DEVOPS-OPS-14-003 | DONE (2025-10-26) | Deployment Guild | DEVOPS-REL-14-001 | Document and script upgrade/rollback flows, channel management, and compatibility matrices per architecture. | Helm/Compose guides updated with digest pinning, automated checks committed, rollback drill recorded. |
| DOWNLOADS-CONSOLE-23-001 | TODO | Deployment Guild, DevOps Guild | DEVOPS-CONSOLE-23-002 | Maintain signed downloads manifest pipeline (images, Helm, offline bundles), publish JSON under `deploy/downloads/manifest.json`, and document sync cadence for Console + docs parity. | Pipeline generates signed manifest with checksums, automated PR updates manifest, docs updated with sync workflow, parity check in CI passes. |
| DEPLOY-POLICY-27-001 | TODO | Deployment Guild, Policy Registry Guild | REGISTRY-API-27-001, DEVOPS-POLICY-27-003 | Produce Helm/Compose overlays for Policy Registry + simulation workers, including Mongo migrations, object storage buckets, signing key secrets, and tenancy defaults. | Overlays committed with deterministic digests; install docs updated; smoke deploy validated in staging. |
| DEPLOY-POLICY-27-002 | TODO | Deployment Guild, Policy Guild | DEPLOY-POLICY-27-001, WEB-POLICY-27-004 | Document rollout/rollback playbooks for policy publish/promote (canary strategy, emergency freeze toggle, evidence retrieval) under `/docs/runbooks/policy-incident.md`. | Runbook published with decision tree; checklist appended; rehearsal recorded. |
@@ -17,6 +17,23 @@ by the new `.gitea/workflows/release.yml` pipeline.
Outputs land under `out/release/`. Use `--no-push` to run full builds without
pushing to the registry.
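For a local dry run, the underlying script can be invoked directly; this is a minimal sketch assuming the script's defaults (any required version/channel arguments are omitted here):

```bash
# Build all components locally without pushing to the registry;
# outputs land under out/release/.
python ops/devops/release/build_release.py --no-push
```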
After the build completes, run the verifier to validate recorded hashes and artefact
presence:

```bash
python ops/devops/release/verify_release.py --release-dir out/release
```
## Python analyzer smoke & signing
`dotnet run --project tools/LanguageAnalyzerSmoke` exercises the Python language
analyzer plug-in against the golden fixtures (cold/warm timings, determinism). The
release workflow runs this harness automatically and then produces Cosign
signatures + SHA-256 sidecars for `StellaOps.Scanner.Analyzers.Lang.Python.dll`
and its `manifest.json`. Keep `COSIGN_KEY_REF`/`COSIGN_IDENTITY_TOKEN` populated so
the step can sign the artefacts; the generated `.sig`/`.sha256` files ship with the
Offline Kit bundle.
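To spot-check the signed analyzer artefacts offline, something like the following works; the exact artefact locations under `out/release/` are assumptions, and `cosign.pub` stands in for whichever public key pairs with `COSIGN_KEY_REF`:

```bash
# Run from the directory holding the analyzer artefacts and their sidecars.
sha256sum -c StellaOps.Scanner.Analyzers.Lang.Python.dll.sha256
cosign verify-blob \
  --key cosign.pub \
  --signature StellaOps.Scanner.Analyzers.Lang.Python.dll.sig \
  StellaOps.Scanner.Analyzers.Lang.Python.dll
```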
## Required tooling
- Docker 25+ with Buildx
@@ -33,6 +50,10 @@ Supply signing material via environment variables:
The workflow defaults to multi-arch (`linux/amd64,linux/arm64`), SBOM in
CycloneDX, and SLSA provenance (`https://slsa.dev/provenance/v1`).
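The emitted SBOMs are plain CycloneDX JSON, so a quick sanity check needs nothing beyond `jq`; the path below follows the `artifacts/sboms/` layout used by the pipeline and `<component>` is a placeholder:

```bash
# Confirm the document is CycloneDX and see which component it describes.
jq '{bomFormat, specVersion, component: .metadata.component.name}' \
  out/release/artifacts/sboms/<component>.cyclonedx.json
```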
## Debug store extraction
`build_release.py` now exports stripped debug artefacts for every ELF discovered in the published images. The files land under `out/release/debug/.build-id/<aa>/<rest>.debug`, with metadata captured in `debug/debug-manifest.json` (and a `.sha256` sidecar). Use `jq` to inspect the manifest or `readelf -n` to spot-check a build-id. Offline Kit packaging should reuse the `debug/` directory as-is.
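For example, the manifest can be skimmed with `jq` and an individual entry cross-checked with `readelf`; the filter assumes the `artifacts` array layout that `build_release.py` writes:

```bash
# List build-ids, platforms, and debug paths recorded in the manifest.
jq -r '.artifacts[] | "\(.platform)  \(.buildId)  \(.debugPath)"' \
  out/release/debug/debug-manifest.json

# Spot-check that a stripped .debug file still carries its GNU build-id note
# (<aa>/<rest> is the placeholder layout described above).
readelf -n out/release/debug/.build-id/<aa>/<rest>.debug
```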
## UI auth smoke (Playwright)
As part of **DEVOPS-UI-13-006** the pipelines will execute the UI auth smoke
@@ -51,4 +72,21 @@ ship from the public `dotnet-public` Azure DevOps feed. We mirror them into
and writes packages alongside their SHA-256 checks.
3. `NuGet.config` registers the mirror (`local`), dotnet-public, and nuget.org.

Use `python3 ops/devops/validate_restore_sources.py` to prove the repo still
prefers the local mirror and that `Directory.Build.props` enforces the same order.
The validator now runs automatically in the `build-test-deploy` and `release`
workflows so CI fails fast when a feed priority regression slips in.
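Before pushing feed changes, the same check can be run locally; this is the command CI executes, with no additional flags assumed:

```bash
# Exits non-zero if any restore source would be consulted before the local mirror.
python3 ops/devops/validate_restore_sources.py
```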
Detailed operator instructions live in `docs/ops/nuget-preview-bootstrap.md`.
## Telemetry collector tooling (DEVOPS-OBS-50-001)

- `ops/devops/telemetry/generate_dev_tls.sh` – generates a development CA and
  client/server certificates for the OpenTelemetry collector overlay (mutual TLS).
- `ops/devops/telemetry/smoke_otel_collector.py` – sends OTLP traces/metrics/logs
  over TLS and validates that the collector increments its receiver counters.
- `ops/devops/telemetry/package_offline_bundle.py` – re-packages collector assets for the Offline Kit.
- `deploy/compose/docker-compose.telemetry-storage.yaml` – Prometheus/Tempo/Loki stack for staging validation.

Combine these helpers with `deploy/compose/docker-compose.telemetry.yaml` to run
a secured collector locally before rolling out the Helm-based deployment.
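A rough local bring-up sequence, assuming the helper scripts work with their defaults (their exact flags are not documented here):

```bash
# 1. Mint a development CA plus client/server certificates for mutual TLS.
./ops/devops/telemetry/generate_dev_tls.sh

# 2. Start the collector overlay with the generated certificates in place.
docker compose -f deploy/compose/docker-compose.telemetry.yaml up -d

# 3. Send sample traces/metrics/logs over TLS and check the receiver counters.
python3 ops/devops/telemetry/smoke_otel_collector.py
```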
@@ -1,5 +1,11 @@
# DevOps Task Board

## Governance & Rules

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-RULES-33-001 | DOING (2025-10-26) | DevOps Guild, Platform Leads | — | Contracts & Rules anchor:<br>• Gateway proxies only; Policy Engine composes overlays/simulations.<br>• AOC ingestion cannot merge; only lossless canonicalization.<br>• One graph platform: Graph Indexer + Graph API. Cartographer retired. | Rules posted in SPRINTS/TASKS; duplicates cleaned per guidance; reviewers acknowledge in changelog. |

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-HELM-09-001 | DONE | DevOps Guild | SCANNER-WEB-09-101 | Create Helm/Compose environment profiles (dev, staging, airgap) with deterministic digests. | Profiles committed under `deploy/`; docs updated; CI smoke deploy passes. |
@@ -7,12 +13,16 @@
| DEVOPS-SCANNER-09-205 | DONE (2025-10-21) | DevOps Guild, Notify Guild | DEVOPS-SCANNER-09-204 | Add Notify smoke stage that tails the Redis stream and asserts `scanner.report.ready`/`scanner.scan.completed` reach Notify WebService in staging. | CI job reads Redis stream during scanner smoke deploy, confirms Notify ingestion via API, alerts on failure. |
| DEVOPS-PERF-10-001 | DONE | DevOps Guild | BENCH-SCANNER-10-001 | Add perf smoke job (SBOM compose <5 s target) to CI. | CI job runs sample build verifying <5 s; alerts configured. |
| DEVOPS-PERF-10-002 | DONE (2025-10-23) | DevOps Guild | BENCH-SCANNER-10-002 | Publish analyzer bench metrics to Grafana/perf workbook and alarm on ≥20 % regressions. | CI exports JSON for dashboards; Grafana panel wired; Ops on-call doc updated with alert hook. |
| DEVOPS-AOC-19-001 | TODO | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. |
| DEVOPS-AOC-19-002 | TODO | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. |
| DEVOPS-AOC-19-003 | TODO | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. |
| DEVOPS-OBS-50-001 | TODO | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. |
| DEVOPS-OBS-50-002 | TODO | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. |
| DEVOPS-OBS-50-003 | TODO | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. |
| DEVOPS-AOC-19-001 | BLOCKED (2025-10-26) | DevOps Guild, Platform Guild | WEB-AOC-19-003 | Integrate the AOC Roslyn analyzer and guard tests into CI, failing builds when ingestion projects attempt banned writes. | Analyzer runs in PR/CI pipelines, results surfaced in build summary, docs updated under `docs/ops/ci-aoc.md`. |
> Docs hand-off (2025-10-26): see `docs/ingestion/aggregation-only-contract.md` §5, `docs/architecture/overview.md`, and `docs/cli/cli-reference.md` for guard + verifier expectations.
| DEVOPS-AOC-19-002 | BLOCKED (2025-10-26) | DevOps Guild | CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004 | Add pipeline stage executing `stella aoc verify --since` against seeded Mongo snapshots for Concelier + Excititor, publishing violation report artefacts. | Stage runs on main/nightly, fails on violations, artifacts retained, runbook documented. |
> Blocked: waiting on CLI verifier command and Concelier/Excititor guard endpoints to land (CLI-AOC-19-002, CONCELIER-WEB-AOC-19-004, EXCITITOR-WEB-AOC-19-004).
| DEVOPS-AOC-19-003 | BLOCKED (2025-10-26) | DevOps Guild, QA Guild | CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003 | Enforce unit test coverage thresholds for AOC guard suites and ensure coverage exported to dashboards. | Coverage report includes guard projects, threshold gate passes/fails as expected, dashboards refreshed with new metrics. |
> Blocked: guard coverage suites and exporter hooks pending in Concelier/Excititor (CONCELIER-WEB-AOC-19-003, EXCITITOR-WEB-AOC-19-003).
| DEVOPS-OBS-50-001 | DONE (2025-10-26) | DevOps Guild, Observability Guild | TELEMETRY-OBS-50-001 | Deliver default OpenTelemetry collector deployment (Compose/Helm manifests), OTLP ingestion endpoints, and secure pipeline (authN, mTLS, tenant partitioning). Provide smoke test verifying traces/logs/metrics ingestion. | Collector manifests committed; smoke test green; docs updated; imposed rule banner reminder noted. |
| DEVOPS-OBS-50-002 | DOING (2025-10-26) | DevOps Guild, Security Guild | DEVOPS-OBS-50-001, TELEMETRY-OBS-51-002 | Stand up multi-tenant storage backends (Prometheus, Tempo/Jaeger, Loki) with retention policies, tenant isolation, and redaction guard rails. Integrate with Authority scopes for read paths. | Storage stack deployed with auth; retention configured; integration tests verify tenant isolation; runbook drafted. |
> Coordination started with Observability Guild (2025-10-26) to schedule staging rollout and provision service accounts. Staging bootstrap commands and secret names documented in `docs/ops/telemetry-storage.md`.
| DEVOPS-OBS-50-003 | DONE (2025-10-26) | DevOps Guild, Offline Kit Guild | DEVOPS-OBS-50-001 | Package telemetry stack configs for air-gapped installs (Offline Kit bundle, documented overrides, sample values) and automate checksum/signature generation. | Offline bundle includes collector+storage configs; checksums published; docs cross-linked; imposed rule annotation recorded. |
| DEVOPS-OBS-51-001 | TODO | DevOps Guild, Observability Guild | WEB-OBS-51-001, DEVOPS-OBS-50-001 | Implement SLO evaluator service (burn rate calculators, webhook emitters), Grafana dashboards, and alert routing to Notifier. Provide Terraform/Helm automation. | Dashboards live; evaluator emits webhooks; alert runbook referenced; staging alert fired in test. |
| DEVOPS-OBS-52-001 | TODO | DevOps Guild, Timeline Indexer Guild | TIMELINE-OBS-52-002 | Configure streaming pipeline (NATS/Redis/Kafka) with retention, partitioning, and backpressure tuning for timeline events; add CI validation of schema + rate caps. | Pipeline deployed; load test meets SLA; schema validation job passes; documentation updated. |
| DEVOPS-OBS-53-001 | TODO | DevOps Guild, Evidence Locker Guild | EVID-OBS-53-001 | Provision object storage with WORM/retention options (S3 Object Lock / MinIO immutability), legal hold automation, and backup/restore scripts for evidence locker. | Storage configured with WORM; legal hold script documented; backup test performed; runbook updated. |
@@ -29,35 +39,34 @@
| DEVOPS-AIRGAP-57-002 | TODO | DevOps Guild, Authority Guild | AUTH-OBS-50-001 | Configure sealed-mode CI tests that run services with sealed flag and ensure no egress occurs (iptables + mock DNS). | CI suite fails on attempted egress; reports remediation; documentation updated. |
| DEVOPS-AIRGAP-58-001 | TODO | DevOps Guild, Notifications Guild | NOTIFY-AIRGAP-56-002 | Provide local SMTP/syslog container templates and health checks for sealed environments; integrate into Bootstrap Pack. | Templates deployed successfully; health checks in CI; docs updated. |
| DEVOPS-AIRGAP-58-002 | TODO | DevOps Guild, Observability Guild | DEVOPS-AIRGAP-56-001, DEVOPS-OBS-51-001 | Ship sealed-mode observability stack (Prometheus/Grafana/Tempo/Loki) pre-configured with offline dashboards and no remote exporters. | Stack boots offline; dashboards available; verification script confirms zero egress. |
| DEVOPS-REL-14-001 | DOING (2025-10-23) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-14-004 | TODO | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. |
| DEVOPS-REL-17-002 | TODO | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-REL-14-001 | DONE (2025-10-26) | DevOps Guild | SIGNER-API-11-101, ATTESTOR-API-11-201 | Deterministic build/release pipeline with SBOM/provenance, signing, manifest generation. | CI pipeline produces signed images + SBOM/attestations, manifests published with verified hashes, docs updated. |
| DEVOPS-REL-14-004 | DONE (2025-10-26) | DevOps Guild, Scanner Guild | DEVOPS-REL-14-001, SCANNER-ANALYZERS-LANG-10-309P | Extend release/offline smoke jobs to exercise the Python analyzer plug-in (warm/cold scans, determinism, signature checks). | Release/Offline pipelines run Python analyzer smoke suite; alerts hooked; docs updated with new coverage matrix. |
| DEVOPS-REL-17-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-REL-14-001, SCANNER-EMIT-17-701 | Persist stripped-debug artifacts organised by GNU build-id and bundle them into release/offline kits with checksum manifests. | CI job writes `.debug` files under `artifacts/debug/.build-id/`, manifest + checksums published, offline kit includes cache, smoke job proves symbol lookup via build-id. |
| DEVOPS-REL-17-004 | BLOCKED (2025-10-26) | DevOps Guild | DEVOPS-REL-17-002 | Ensure release workflow publishes `out/release/debug` (build-id tree + manifest) and fails when symbols are missing. | Release job emits debug artefacts, `mirror_debug_store.py` summary committed, warning cleared from build logs, docs updated. |
| DEVOPS-MIRROR-08-001 | DONE (2025-10-19) | DevOps Guild | DEVOPS-REL-14-001 | Stand up managed mirror profiles for `*.stella-ops.org` (Concelier/Excititor), including Helm/Compose overlays, multi-tenant secrets, CDN caching, and sync documentation. | Infra overlays committed, CI smoke deploy hits mirror endpoints, runbooks published for downstream sync and quota management. |
| DEVOPS-SEC-10-301 | DONE (2025-10-20) | DevOps Guild | Wave 0A complete | Address NU1902/NU1903 advisories for `MongoDB.Driver` 2.12.0 and `SharpCompress` 0.23.0 surfaced during scanner cache and worker test runs. | Dependencies bumped to patched releases, audit logs free of NU1902/NU1903 warnings, regression tests green, change log documents upgrade guidance. |
| DEVOPS-CONSOLE-23-001 | TODO | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. |
> Note (2025-10-26, BLOCKED): IdentityModel.Tokens patched for logging 9.x, but release bundle still fails because Docker cannot stream multi-arch build context (`unix:///var/run/docker.sock` unavailable, EOF during copy). Retry once docker daemon/socket is healthy; until then `out/release/debug` cannot be generated.
| DEVOPS-CONSOLE-23-001 | BLOCKED (2025-10-26) | DevOps Guild, Console Guild | CONSOLE-CORE-23-001 | Add console CI workflow (pnpm cache, lint, type-check, unit, Storybook a11y, Playwright, Lighthouse) with offline runners and artifact retention for screenshots/reports. | Workflow runs on PR & main, caches reduce install time, failing checks block merges, artifacts uploaded for triage, docs updated. |
> Blocked: Console workspace and package scripts (CONSOLE-CORE-23-001..005) are not yet present; CI cannot execute pnpm/Playwright/Lighthouse until the Next.js app lands.
| DEVOPS-CONSOLE-23-002 | TODO | DevOps Guild, Console Guild | DEVOPS-CONSOLE-23-001, CONSOLE-REL-23-301 | Produce `stella-console` container build + Helm chart overlays with deterministic digests, SBOM/provenance artefacts, and offline bundle packaging scripts. | Container published to registry mirror, Helm values committed, SBOM/attestations generated, offline kit job passes smoke test, docs updated. |
| DEVOPS-LAUNCH-18-100 | TODO | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. |
| DEVOPS-LAUNCH-18-900 | TODO | DevOps Guild, Module Leads | Wave 0 completion | Collect “full implementation” sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. |
| DEVOPS-LAUNCH-18-001 | TODO | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. |
| DEVOPS-LAUNCH-18-100 | DONE (2025-10-26) | DevOps Guild | - | Finalise production environment footprint (clusters, secrets, network overlays) for full-platform go-live. | IaC/compose overlays committed, secrets placeholders documented, dry-run deploy succeeds in staging. |
| DEVOPS-LAUNCH-18-900 | DONE (2025-10-26) | DevOps Guild, Module Leads | Wave 0 completion | Collect “full implementation” sign-off from module owners and consolidate launch readiness checklist. | Sign-off record stored under `docs/ops/launch-readiness.md`; outstanding gaps triaged; checklist approved. |
| DEVOPS-LAUNCH-18-001 | DONE (2025-10-26) | DevOps Guild | DEVOPS-LAUNCH-18-100, DEVOPS-LAUNCH-18-900 | Production launch cutover rehearsal and runbook publication. | `docs/ops/launch-cutover.md` drafted, rehearsal executed with rollback drill, approvals captured. |
| DEVOPS-NUGET-13-001 | DONE (2025-10-25) | DevOps Guild, Platform Leads | DEVOPS-REL-14-001 | Add .NET 10 preview feeds / local mirrors so `Microsoft.Extensions.*` 10.0 preview packages restore offline; refresh restore docs. | NuGet.config maps preview feeds (or local mirrored packages), `dotnet restore` succeeds for Excititor/Concelier solutions without ad-hoc feed edits, docs updated for offline bootstrap. |
| DEVOPS-NUGET-13-002 | TODO | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. |
| DEVOPS-NUGET-13-003 | TODO | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. |
| DEVOPS-NUGET-13-002 | DONE (2025-10-26) | DevOps Guild | DEVOPS-NUGET-13-001 | Ensure all solutions/projects prefer `local-nuget` before public sources and document restore order validation. | `NuGet.config` and solution-level configs resolve from `local-nuget` first; automated check verifies priority; docs updated for restore ordering. |
| DEVOPS-NUGET-13-003 | DONE (2025-10-26) | DevOps Guild, Platform Leads | DEVOPS-NUGET-13-002 | Sweep `Microsoft.*` NuGet dependencies pinned to 8.* and upgrade to latest .NET 10 equivalents (or .NET 9 when 10 unavailable), updating restore guidance. | Dependency audit shows no 8.* `Microsoft.*` packages remaining; CI builds green; changelog/doc sections capture upgrade rationale. |

## Policy Engine v2

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-POLICY-20-001 | TODO | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. |
| DEVOPS-POLICY-20-002 | TODO | DevOps Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-006 | Add `stella policy simulate` CI stage against golden SBOMs to detect delta explosions; publish diff artifacts. | Stage runs nightly/main; artifacts retained; alert thresholds configured. |
| DEVOPS-POLICY-20-003 | TODO | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. |
| DEVOPS-POLICY-20-001 | DONE (2025-10-26) | DevOps Guild, Policy Guild | POLICY-ENGINE-20-001 | Integrate DSL linting in CI (parser/compile) to block invalid policies; add pipeline step compiling sample policies. | CI fails on syntax errors; lint logs surfaced; docs updated with pipeline instructions. |
| DEVOPS-POLICY-20-003 | DONE (2025-10-26) | DevOps Guild, QA Guild | DEVOPS-POLICY-20-001, POLICY-ENGINE-20-005 | Determinism CI: run Policy Engine twice with identical inputs and diff outputs to guard non-determinism. | CI job compares outputs, fails on differences, logs stored; documentation updated. |
| DEVOPS-POLICY-20-004 | DOING (2025-10-26) | DevOps Guild, Scheduler Guild, CLI Guild | SCHED-MODELS-20-001, CLI-POLICY-20-002 | Automate policy schema exports: generate JSON Schema from `PolicyRun*` DTOs during CI, publish artefacts, and emit change alerts for CLI consumers (Slack + changelog). | CI stage outputs versioned schema files, uploads artefacts, notifies #policy-engine channel on change; docs/CLI references updated. |

## Graph Explorer v1

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-GRAPH-21-001 | TODO | DevOps Guild, Cartographer Guild | CARTO-GRAPH-21-006 | Add load/perf jobs hitting graph viewport/path/diff endpoints with synthetic 50k/100k graphs; emit dashboards/alerts for SLOs. | CI perf job introduced; Grafana panels live; alerts configured for latency/SLA breaches. |
| DEVOPS-GRAPH-21-002 | TODO | DevOps Guild, UI Guild | UI-GRAPH-21-001 | Capture golden screenshots (Playwright) and JSON exports for visual regressions; wire into CI/offline kit. | Visual regression suite runs in CI; artifacts stored; failure triage docs updated. |
| DEVOPS-GRAPH-21-003 | TODO | DevOps Guild | CARTO-GRAPH-21-009, SBOM-SERVICE-21-002 | Package Cartographer + SBOM Service into offline kit bundles with seeded data/layout caches; document deployment steps. | Offline kit includes graph seeds; docs updated; smoke scripts validate airgapped startup. |

## Orchestrator Dashboard
@@ -1,17 +1,30 @@
# Package,Version,SHA256,SourceBase(optional)
# DotNetPublicFlat=https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Caching.Memory,10.0.0-preview.7.25380.108,8721fd1420fea6e828963c8343cd83605902b663385e8c9060098374139f9b2f,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration,10.0.0-preview.7.25380.108,5a17ba4ba47f920a04ae51d80560833da82a0926d1e462af0d11c16b5da969f4,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Binder,10.0.0-preview.7.25380.108,5a3af17729241e205fe8fbb1d458470e9603935ab2eb67cbbb06ce51265ff68f,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection.Abstractions,10.0.0-preview.7.25380.108,1e9cd330d7833a3a850a7a42bbe0c729906c60bf1c359ad30a8622b50da4399b,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting,10.0.0-preview.7.25380.108,3123bb019bbc0182cf7ac27f30018ca620929f8027e137bd5bdfb952037c7d29,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting.Abstractions,10.0.0-preview.7.25380.108,b57625436c9eb53e3aa27445b680bb93285d0d2c91007bbc221b0c378ab016a3,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Http,10.0.0-preview.7.25380.108,daec142b7c7bd09ec1f2a86bfc3d7fe009825f5b653d310bc9e959c0a98a0f19,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Logging.Abstractions,10.0.0-preview.7.25380.108,87a495fa0b7054e134a5cf44ec8b071fe2bc3ddfb27e9aefc6375701dca2a33a,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Options,10.0.0-preview.7.25380.108,c0657c2be3b7b894024586cf6e46a2ebc0e710db64d2645c4655b893b8487d8a,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection.Abstractions,9.0.0,0a7715c24299e42b081b63b4f8e33da97b985e1de9e941b2b9e4c748b0d52fe7
Microsoft.Extensions.Logging.Abstractions,9.0.0,8814ecf6dc2359715e111b78084ae42087282595358eb775456088f15e63eca5
Microsoft.Extensions.Options,9.0.0,0d3e5eb80418fc8b41e4b3c8f16229e839ddd254af0513f7e6f1643970baf1c9
Microsoft.Extensions.Options.ConfigurationExtensions,9.0.0,af5677b04552223787d942a3f8a323f3a85aafaf20ff3c9b4aaa128c44817280
Microsoft.Data.Sqlite,9.0.0-rc.1.24451.1,770b637317e1e924f1b13587b31af0787c8c668b1d9f53f2fccae8ee8704e167
Microsoft.AspNetCore.Authentication.JwtBearer,10.0.0-rc.1.25451.107,05f168c2db7ba79230e3fd77e84f6912bc73721c6656494df0b227867a6c2d3c,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.Authentication.JwtBearer,10.0.0-rc.2.25502.107,3223f447bde9a3620477305a89520e8becafe23b481a0b423552af572439f8c2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.Mvc.Testing,10.0.0-rc.2.25502.107,b6b53c62e0abefdca30e6ca08ab8357e395177dd9f368ab3ad4bbbd07e517229,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.AspNetCore.OpenApi,10.0.0-rc.2.25502.107,f64de1fe870306053346a31263e53e29f2fdfe0eae432a3156f8d7d705c81d85,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Data.Sqlite,9.0.0-rc.1.24451.1,770b637317e1e924f1b13587b31af0787c8c668b1d9f53f2fccae8ee8704e167,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Caching.Memory,10.0.0-rc.2.25502.107,6ec6d156ed06b07cbee9fa1c0803b8d54a5f904a0bf0183172f87b63c4044426,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration,10.0.0-rc.2.25502.107,0716f72cdc99b03946c98c418c39d42208fc65f20301bd1f26a6c174646870f6,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Abstractions,10.0.0-rc.2.25502.107,db6e2cd37c40b5ac5ca7a4f40f5edafda2b6a8690f95a8c64b54c777a1d757c0,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Binder,10.0.0-rc.2.25502.107,80f04da6beef001d3c357584485c2ddc6fdbf3776cfd10f0d7b40dfe8a79ee43,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.CommandLine,10.0.0-rc.2.25502.107,91974a95ae35bcfcd5e977427f3d0e6d3416e78678a159f5ec9e55f33a2e19af,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.EnvironmentVariables,10.0.0-rc.2.25502.107,74d65a20e2764d5f42863f5f203b216533fc51b22fb02a8491036feb98ae5fef,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.FileExtensions,10.0.0-rc.2.25502.107,5f97b56ea2ba3a1b252022504060351ce457f78ac9055d5fdd1311678721c1a1,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Configuration.Json,10.0.0-rc.2.25502.107,0ba362c479213eb3425f8e14d8a8495250dbaf2d5dad7c0a4ca8d3239b03c392,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection,10.0.0-rc.2.25502.107,2e1b51b4fa196f0819adf69a15ad8c3432b64c3b196f2ed3d14b65136a6a8709,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.DependencyInjection.Abstractions,10.0.0-rc.2.25502.107,d6787ccf69e09428b3424974896c09fdabb8040bae06ed318212871817933352,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.Abstractions,10.0.0-rc.2.25502.107,b4bc47b4b4ded4ab2f134d318179537cbe16aed511bb3672553ea197929dc7d8,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.HealthChecks,10.0.0-rc.2.25502.107,855fd4da26b955b6b1d036390b1af10564986067b5cc6356cffa081c83eec158,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Diagnostics.HealthChecks.Abstractions,10.0.0-rc.2.25502.107,59f4724daed68a067a661e208f0a934f253b91ec5d52310d008e185bc2c9294c,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting,10.0.0-rc.2.25502.107,ea9b1fa8e50acae720294671e6c36d4c58e20cfc9720335ab4f5ad4eba92cf62,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Hosting.Abstractions,10.0.0-rc.2.25502.107,98fa23ac82e19be221a598fc6f4b469e8b00c4ca2b7a42ad0bfea8b63bbaa9a2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Http,10.0.0-rc.2.25502.107,c63c8bf4ca637137a561ca487b674859c2408918c4838a871bb26eb0c809a665,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Http.Polly,10.0.0-rc.2.25502.107,0b436196bcedd484796795f6a795d7a191294f1190f7a477f1a4937ef7f78110,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Logging.Abstractions,10.0.0-rc.2.25502.107,92b9a5ed62fe945ee88983af43c347429ec15691c9acb207872c548241cef961,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Logging.Console,10.0.0-rc.2.25502.107,fa1e10b5d6261675d9d2e97b9584ff9aaea2a2276eac584dfa77a1e35dcc58f5,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Options,10.0.0-rc.2.25502.107,d208acec60bec3350989694fd443e2d2f0ab583ad5f2c53a2879ade16908e5b4,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.Options.ConfigurationExtensions,10.0.0-rc.2.25502.107,c2863bb28c36fd67f308dd4af486897b512d62ecff2d96613ef954f5bef443e2,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.Extensions.TimeProvider.Testing,9.10.0,919a47156fc13f756202702cacc6e853123c84f1b696970445d89f16dfa45829,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.IdentityModel.Tokens,8.14.0,00b78c7b7023132e1d6b31d305e47524732dce6faca92dd16eb8d05a835bba7a,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
Microsoft.SourceLink.GitLab,8.0.0,a7efb9c177888f952ea8c88bc5714fc83c64af32b70fb080a1323b8d32233973,https://pkgs.dev.azure.com/dnceng/public/_packaging/dotnet-public/nuget/v3/flat2
BIN  ops/devops/release/__pycache__/build_release.cpython-312.pyc  Normal file
Binary file not shown.
BIN  ops/devops/release/__pycache__/verify_release.cpython-312.pyc  Normal file
Binary file not shown.
@@ -14,6 +14,7 @@ The workflow expects external tooling to be available on PATH:
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import contextlib
|
||||
import datetime as dt
|
||||
import hashlib
|
||||
import json
|
||||
@@ -21,11 +22,14 @@ import os
|
||||
import pathlib
|
||||
import re
|
||||
import shlex
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import tarfile
|
||||
import tempfile
|
||||
import uuid
|
||||
from collections import OrderedDict
|
||||
from typing import Any, Dict, Iterable, List, Mapping, MutableMapping, Optional, Sequence
|
||||
from typing import Any, Dict, Iterable, List, Mapping, MutableMapping, Optional, Sequence, Tuple
|
||||
|
||||
REPO_ROOT = pathlib.Path(__file__).resolve().parents[3]
|
||||
DEFAULT_CONFIG = REPO_ROOT / "ops/devops/release/components.json"
|
||||
@@ -184,6 +188,8 @@ class ReleaseBuilder:
|
||||
self.provenance_dir = ensure_directory(self.artifacts_dir / "provenance")
|
||||
self.signature_dir = ensure_directory(self.artifacts_dir / "signatures")
|
||||
self.metadata_dir = ensure_directory(self.artifacts_dir / "metadata")
|
||||
self.debug_dir = ensure_directory(self.output_dir / "debug")
|
||||
self.debug_store_dir = ensure_directory(self.debug_dir / ".build-id")
|
||||
self.temp_dir = pathlib.Path(tempfile.mkdtemp(prefix="stellaops-release-"))
|
||||
self.skip_signing = skip_signing
|
||||
self.tlog_upload = tlog_upload
|
||||
@@ -196,6 +202,9 @@ class ReleaseBuilder:
|
||||
"COSIGN_ALLOW_HTTP_REGISTRY": os.environ.get("COSIGN_ALLOW_HTTP_REGISTRY", "1"),
|
||||
"COSIGN_DOCKER_MEDIA_TYPES": os.environ.get("COSIGN_DOCKER_MEDIA_TYPES", "1"),
|
||||
}
|
||||
# Cache resolved objcopy binaries keyed by machine identifier to avoid repeated lookups.
|
||||
self._objcopy_cache: Dict[str, Optional[str]] = {}
|
||||
self._missing_symbol_platforms: Dict[str, int] = {}
|
||||
|
||||
# ----------------
|
||||
# Build steps
|
||||
@@ -210,7 +219,8 @@ class ReleaseBuilder:
|
||||
components_result.append(result)
|
||||
helm_meta = self._package_helm()
|
||||
compose_meta = self._digest_compose_files()
|
||||
manifest = self._compose_manifest(components_result, helm_meta, compose_meta)
|
||||
debug_meta = self._collect_debug_store(components_result)
|
||||
manifest = self._compose_manifest(components_result, helm_meta, compose_meta, debug_meta)
|
||||
return manifest
|
||||
|
||||
def _prime_buildx_plugin(self) -> None:
|
||||
@@ -339,7 +349,15 @@ class ReleaseBuilder:
|
||||
if bundle_info:
|
||||
component_entry["signature"] = bundle_info
|
||||
if metadata_file.exists():
|
||||
component_entry["metadata"] = str(metadata_file.relative_to(self.output_dir.parent)) if metadata_file.is_relative_to(self.output_dir.parent) else str(metadata_file)
|
||||
metadata_rel = (
|
||||
str(metadata_file.relative_to(self.output_dir.parent))
|
||||
if metadata_file.is_relative_to(self.output_dir.parent)
|
||||
else str(metadata_file)
|
||||
)
|
||||
component_entry["metadata"] = OrderedDict((
|
||||
("path", metadata_rel),
|
||||
("sha256", compute_sha256(metadata_file)),
|
||||
))
|
||||
return component_entry
|
||||
|
||||
def _sign_image(self, name: str, image_ref: str, tags: Sequence[str]) -> Optional[Mapping[str, Any]]:
|
||||
@@ -370,6 +388,7 @@ class ReleaseBuilder:
|
||||
image_ref,
|
||||
])
|
||||
signature_path.write_text(signature_data, encoding="utf-8")
|
||||
signature_sha = compute_sha256(signature_path)
|
||||
signature_ref = run([
|
||||
"cosign",
|
||||
"triangulate",
|
||||
@@ -380,6 +399,7 @@ class ReleaseBuilder:
|
||||
(
|
||||
("signature", OrderedDict((
|
||||
("path", str(signature_path.relative_to(self.output_dir.parent)) if signature_path.is_relative_to(self.output_dir.parent) else str(signature_path)),
|
||||
("sha256", signature_sha),
|
||||
("ref", signature_ref),
|
||||
("tlogUploaded", self.tlog_upload),
|
||||
))),
|
||||
@@ -479,6 +499,271 @@ class ReleaseBuilder:
|
||||
entry["ref"] = ref
|
||||
return entry
|
||||
|
||||
def _collect_debug_store(self, components: Sequence[Mapping[str, Any]]) -> Optional[Mapping[str, Any]]:
|
||||
if self.dry_run:
|
||||
return None
|
||||
debug_records: Dict[Tuple[str, str], OrderedDict[str, Any]] = {}
|
||||
for component in components:
|
||||
image_ref = component.get("image")
|
||||
if not image_ref:
|
||||
continue
|
||||
name = component.get("name", "unknown")
|
||||
entries = self._extract_debug_entries(name, image_ref)
|
||||
for entry in entries:
|
||||
key = (entry["platform"], entry["buildId"])
|
||||
existing = debug_records.get(key)
|
||||
if existing is None:
|
||||
record = OrderedDict((
|
||||
("buildId", entry["buildId"]),
|
||||
("platform", entry["platform"]),
|
||||
("debugPath", entry["debugPath"]),
|
||||
("sha256", entry["sha256"]),
|
||||
("size", entry["size"]),
|
||||
("components", [entry["component"]]),
|
||||
("images", [entry["image"]]),
|
||||
("sources", list(entry["sources"])),
|
||||
))
|
||||
debug_records[key] = record
|
||||
else:
|
||||
if entry["sha256"] != existing["sha256"]:
|
||||
raise RuntimeError(
|
||||
f"Build-id {entry['buildId']} for platform {entry['platform']} produced conflicting hashes"
|
||||
)
|
||||
if entry["component"] not in existing["components"]:
|
||||
existing["components"].append(entry["component"])
|
||||
if entry["image"] not in existing["images"]:
|
||||
existing["images"].append(entry["image"])
|
||||
for source in entry["sources"]:
|
||||
if source not in existing["sources"]:
|
||||
existing["sources"].append(source)
|
||||
if not debug_records:
|
||||
sys.stderr.write(
|
||||
"[error] release build produced no debug artefacts; enable symbol extraction so out/release/debug is populated (DEVOPS-REL-17-004).\n"
|
||||
)
|
||||
# Remove empty directories before failing
|
||||
with contextlib.suppress(FileNotFoundError, OSError):
|
||||
if not any(self.debug_store_dir.iterdir()):
|
||||
self.debug_store_dir.rmdir()
|
||||
with contextlib.suppress(FileNotFoundError, OSError):
|
||||
if not any(self.debug_dir.iterdir()):
|
||||
self.debug_dir.rmdir()
|
||||
raise RuntimeError(
|
||||
"Debug store collection produced no build-id artefacts (DEVOPS-REL-17-004)."
|
||||
)
|
||||
entries = []
|
||||
for record in debug_records.values():
|
||||
entry = OrderedDict((
|
||||
("buildId", record["buildId"]),
|
||||
("platform", record["platform"]),
|
||||
("debugPath", record["debugPath"]),
|
||||
("sha256", record["sha256"]),
|
||||
("size", record["size"]),
|
||||
("components", sorted(record["components"])),
|
||||
("images", sorted(record["images"])),
|
||||
("sources", sorted(record["sources"])),
|
||||
))
|
||||
entries.append(entry)
|
||||
entries.sort(key=lambda item: (item["platform"], item["buildId"]))
|
||||
manifest_path = self.debug_dir / "debug-manifest.json"
|
||||
platform_counts: Dict[str, int] = {}
|
||||
for entry in entries:
|
||||
platform_counts[entry["platform"]] = platform_counts.get(entry["platform"], 0) + 1
|
||||
missing_platforms = [
|
||||
platform
|
||||
for platform in self._missing_symbol_platforms
|
||||
if platform_counts.get(platform, 0) == 0
|
||||
]
|
||||
if missing_platforms:
|
||||
raise RuntimeError(
|
||||
"Debug extraction skipped all binaries for platforms without objcopy support: "
|
||||
+ ", ".join(sorted(missing_platforms))
|
||||
)
|
||||
manifest_data = OrderedDict((
|
||||
("generatedAt", self.release_date),
|
||||
("version", self.version),
|
||||
("channel", self.channel),
|
||||
("artifacts", entries),
|
||||
))
|
||||
with manifest_path.open("w", encoding="utf-8") as handle:
|
||||
json.dump(manifest_data, handle, indent=2)
|
||||
handle.write("\n")
|
||||
manifest_sha = compute_sha256(manifest_path)
|
||||
sha_path = manifest_path.with_suffix(manifest_path.suffix + ".sha256")
|
||||
sha_path.write_text(f"{manifest_sha} {manifest_path.name}\n", encoding="utf-8")
|
||||
manifest_rel = manifest_path.relative_to(self.output_dir).as_posix()
|
||||
store_rel = self.debug_store_dir.relative_to(self.output_dir).as_posix()
|
||||
platforms = sorted({entry["platform"] for entry in entries})
|
||||
return OrderedDict((
|
||||
("manifest", manifest_rel),
|
||||
("sha256", manifest_sha),
|
||||
("entries", len(entries)),
|
||||
("platforms", platforms),
|
||||
("directory", store_rel),
|
||||
))
|
||||
|
||||
def _extract_debug_entries(self, component_name: str, image_ref: str) -> List[OrderedDict[str, Any]]:
|
||||
if self.dry_run:
|
||||
return []
|
||||
entries: List[OrderedDict[str, Any]] = []
|
||||
platforms = self.platforms if self.push else [None]
|
||||
for platform in platforms:
|
||||
platform_label = platform or (self.platforms[0] if self.platforms else "linux/amd64")
|
||||
if self.push:
|
||||
pull_cmd = ["docker", "pull"]
|
||||
if platform:
|
||||
pull_cmd.extend(["--platform", platform])
|
||||
pull_cmd.append(image_ref)
|
||||
run(pull_cmd)
|
||||
create_cmd = ["docker", "create"]
|
||||
if platform:
|
||||
create_cmd.extend(["--platform", platform])
|
||||
create_cmd.append(image_ref)
|
||||
container_id = run(create_cmd).strip()
|
||||
export_path = self.temp_dir / f"{container_id}.tar"
|
||||
try:
|
||||
run(["docker", "export", container_id, "-o", str(export_path)], capture=False)
|
||||
finally:
|
||||
run(["docker", "rm", container_id], capture=False)
|
||||
rootfs_dir = ensure_directory(self.temp_dir / f"{component_name}-{platform_label}-{uuid.uuid4().hex}")
|
||||
try:
|
||||
with tarfile.open(export_path, "r:*") as tar:
|
||||
self._safe_extract_tar(tar, rootfs_dir)
|
||||
finally:
|
||||
export_path.unlink(missing_ok=True)
|
||||
try:
|
||||
for file_path in rootfs_dir.rglob("*"):
|
||||
if not file_path.is_file() or file_path.is_symlink():
|
||||
continue
|
||||
if not self._is_elf(file_path):
|
||||
continue
|
||||
build_id, machine = self._read_build_id_and_machine(file_path)
|
||||
if not build_id:
|
||||
continue
|
||||
debug_file = self._debug_file_for_build_id(build_id)
|
||||
if not debug_file.exists():
|
||||
debug_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
temp_debug = self.temp_dir / f"{build_id}.debug"
|
||||
with contextlib.suppress(FileNotFoundError):
|
||||
temp_debug.unlink()
|
||||
objcopy_tool = self._resolve_objcopy_tool(machine)
|
||||
if not objcopy_tool:
|
||||
self._emit_objcopy_warning(machine, platform_label, file_path)
|
||||
with contextlib.suppress(FileNotFoundError):
|
||||
temp_debug.unlink()
|
||||
continue
|
||||
try:
|
||||
run([objcopy_tool, "--only-keep-debug", str(file_path), str(temp_debug)], capture=False)
|
||||
except CommandError:
|
||||
with contextlib.suppress(FileNotFoundError):
|
||||
temp_debug.unlink()
|
||||
continue
|
||||
debug_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
shutil.move(str(temp_debug), str(debug_file))
|
||||
sha = compute_sha256(debug_file)
|
||||
rel_debug = debug_file.relative_to(self.output_dir).as_posix()
|
||||
source_rel = file_path.relative_to(rootfs_dir).as_posix()
|
||||
entry = OrderedDict((
|
||||
("component", component_name),
|
||||
("image", image_ref),
|
||||
("platform", platform_label),
|
||||
("buildId", build_id),
|
||||
("debugPath", rel_debug),
|
||||
("sha256", sha),
|
||||
("size", debug_file.stat().st_size),
|
||||
("sources", [source_rel]),
|
||||
))
|
||||
entries.append(entry)
|
||||
finally:
|
||||
shutil.rmtree(rootfs_dir, ignore_errors=True)
|
||||
return entries
|
||||
|
||||
def _debug_file_for_build_id(self, build_id: str) -> pathlib.Path:
|
||||
normalized = build_id.lower()
|
||||
prefix = normalized[:2]
|
||||
remainder = normalized[2:]
|
||||
return self.debug_store_dir / prefix / f"{remainder}.debug"
|
||||
|
||||
@staticmethod
|
||||
def _safe_extract_tar(tar: tarfile.TarFile, dest: pathlib.Path) -> None:
|
||||
dest_root = dest.resolve()
|
||||
members = tar.getmembers()
|
||||
for member in members:
|
||||
member_path = (dest / member.name).resolve()
|
||||
if not str(member_path).startswith(str(dest_root)):
|
||||
raise RuntimeError(f"Refusing to extract '{member.name}' outside of destination directory")
|
||||
tar.extractall(dest)
|
||||
|
||||
@staticmethod
|
||||
def _is_elf(path: pathlib.Path) -> bool:
|
||||
try:
|
||||
with path.open("rb") as handle:
|
||||
return handle.read(4) == b"\x7fELF"
|
||||
except OSError:
|
||||
return False
|
||||
|
||||
def _read_build_id_and_machine(self, path: pathlib.Path) -> Tuple[Optional[str], Optional[str]]:
|
||||
try:
|
||||
header_output = run(["readelf", "-nh", str(path)])
|
||||
except CommandError:
|
||||
return None, None
|
||||
build_id: Optional[str] = None
|
||||
machine: Optional[str] = None
|
||||
for line in header_output.splitlines():
|
||||
stripped = line.strip()
|
||||
if stripped.startswith("Build ID:"):
|
||||
build_id = stripped.split("Build ID:", 1)[1].strip().lower()
|
||||
elif stripped.startswith("Machine:"):
|
||||
machine = stripped.split("Machine:", 1)[1].strip()
|
||||
return build_id, machine
|
||||
|
||||
def _resolve_objcopy_tool(self, machine: Optional[str]) -> Optional[str]:
|
||||
key = (machine or "generic").lower()
|
||||
if key in self._objcopy_cache:
|
||||
return self._objcopy_cache[key]
|
||||
|
||||
env_override = None
|
||||
if machine and "aarch64" in machine.lower():
|
||||
env_override = os.environ.get("STELLAOPS_OBJCOPY_AARCH64")
|
||||
candidates = [
|
||||
env_override,
|
||||
"aarch64-linux-gnu-objcopy",
|
||||
"llvm-objcopy",
|
||||
"objcopy",
|
||||
]
|
||||
elif machine and any(token in machine.lower() for token in ("x86-64", "amd", "x86_64")):
|
||||
env_override = os.environ.get("STELLAOPS_OBJCOPY_AMD64")
|
||||
candidates = [
|
||||
env_override,
|
||||
"objcopy",
|
||||
"llvm-objcopy",
|
||||
]
|
||||
else:
|
||||
env_override = os.environ.get("STELLAOPS_OBJCOPY_DEFAULT")
|
||||
candidates = [
|
||||
env_override,
|
||||
"objcopy",
|
||||
"llvm-objcopy",
|
||||
]
|
||||
|
||||
for candidate in candidates:
|
||||
if not candidate:
|
||||
continue
|
||||
tool = shutil.which(candidate)
|
||||
if tool:
|
||||
self._objcopy_cache[key] = tool
|
||||
return tool
|
||||
self._objcopy_cache[key] = None
|
||||
return None
|
||||
|
||||
def _emit_objcopy_warning(self, machine: Optional[str], platform: str, file_path: pathlib.Path) -> None:
|
||||
machine_label = machine or "unknown-machine"
|
||||
count = self._missing_symbol_platforms.get(platform, 0)
|
||||
self._missing_symbol_platforms[platform] = count + 1
|
||||
if count == 0:
|
||||
sys.stderr.write(
|
||||
f"[warn] no objcopy tool available for {machine_label}; skipping debug extraction for {file_path}.\n"
|
||||
)
|
||||
|
||||
# ----------------
|
||||
# Helm + compose
|
||||
# ----------------
|
||||
@@ -546,6 +831,7 @@ class ReleaseBuilder:
|
||||
components: List[Mapping[str, Any]],
|
||||
helm_meta: Optional[Mapping[str, Any]],
|
||||
compose_meta: List[Mapping[str, Any]],
|
||||
debug_meta: Optional[Mapping[str, Any]],
|
||||
) -> Dict[str, Any]:
|
||||
manifest = OrderedDict()
|
||||
manifest["release"] = OrderedDict((
|
||||
@@ -559,6 +845,8 @@ class ReleaseBuilder:
|
||||
manifest["charts"] = [helm_meta]
|
||||
if compose_meta:
|
||||
manifest["compose"] = compose_meta
|
||||
if debug_meta:
|
||||
manifest["debugStore"] = debug_meta
|
||||
return manifest
|
||||
|
||||
|
||||
@@ -593,6 +881,18 @@ def write_manifest(manifest: Mapping[str, Any], output_dir: pathlib.Path) -> pat
|
||||
output_path = output_dir / "release.yaml"
|
||||
with output_path.open("w", encoding="utf-8") as handle:
|
||||
handle.write(final_yaml)
|
||||
sha_path = output_path.with_name(output_path.name + ".sha256")
|
||||
yaml_file_digest = compute_sha256(output_path)
|
||||
sha_path.write_text(f"{yaml_file_digest} {output_path.name}\n", encoding="utf-8")
|
||||
|
||||
json_text = json.dumps(manifest_with_checksum, indent=2)
|
||||
json_path = output_dir / "release.json"
|
||||
with json_path.open("w", encoding="utf-8") as handle:
|
||||
handle.write(json_text)
|
||||
handle.write("\n")
|
||||
json_digest = compute_sha256(json_path)
|
||||
json_sha_path = json_path.with_name(json_path.name + ".sha256")
|
||||
json_sha_path.write_text(f"{json_digest} {json_path.name}\n", encoding="utf-8")
|
||||
return output_path
|
||||
|
||||
|
||||
|
||||
232  ops/devops/release/test_verify_release.py  Normal file
@@ -0,0 +1,232 @@
|
||||
from __future__ import annotations
|
||||
|
||||
import json
|
||||
import tempfile
|
||||
import unittest
|
||||
from collections import OrderedDict
|
||||
from pathlib import Path
|
||||
import sys
|
||||
|
||||
sys.path.append(str(Path(__file__).resolve().parent))
|
||||
|
||||
from build_release import write_manifest # type: ignore import-not-found
|
||||
from verify_release import VerificationError, compute_sha256, verify_release
|
||||
|
||||
|
||||
class VerifyReleaseTests(unittest.TestCase):
|
||||
def setUp(self) -> None:
|
||||
self._temp = tempfile.TemporaryDirectory()
|
||||
self.base_path = Path(self._temp.name)
|
||||
self.out_dir = self.base_path / "out"
|
||||
self.release_dir = self.out_dir / "release"
|
||||
self.release_dir.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
def tearDown(self) -> None:
|
||||
self._temp.cleanup()
|
||||
|
||||
def _relative_to_out(self, path: Path) -> str:
|
||||
return path.relative_to(self.out_dir).as_posix()
|
||||
|
||||
def _write_json(self, path: Path, payload: dict[str, object]) -> None:
|
||||
path.parent.mkdir(parents=True, exist_ok=True)
|
||||
with path.open("w", encoding="utf-8") as handle:
|
||||
json.dump(payload, handle, indent=2)
|
||||
handle.write("\n")
|
||||
|
||||
def _create_sample_release(self) -> None:
|
||||
sbom_path = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
|
||||
sbom_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
sbom_path.write_text('{"bomFormat":"CycloneDX","specVersion":"1.5"}\n', encoding="utf-8")
|
||||
sbom_sha = compute_sha256(sbom_path)
|
||||
|
||||
provenance_path = self.release_dir / "artifacts/provenance/sample.provenance.json"
|
||||
self._write_json(
|
||||
provenance_path,
|
||||
{
|
||||
"buildDefinition": {"buildType": "https://example/build", "externalParameters": {}},
|
||||
"runDetails": {"builder": {"id": "https://example/ci"}},
|
||||
},
|
||||
)
|
||||
provenance_sha = compute_sha256(provenance_path)
|
||||
|
||||
signature_path = self.release_dir / "artifacts/signatures/sample.signature"
|
||||
signature_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
signature_path.write_text("signature-data\n", encoding="utf-8")
|
||||
signature_sha = compute_sha256(signature_path)
|
||||
|
||||
metadata_path = self.release_dir / "artifacts/metadata/sample.metadata.json"
|
||||
self._write_json(metadata_path, {"digest": "sha256:1234"})
|
||||
metadata_sha = compute_sha256(metadata_path)
|
||||
|
||||
chart_path = self.release_dir / "helm/stellaops-1.0.0.tgz"
|
||||
chart_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
chart_path.write_bytes(b"helm-chart-data")
|
||||
chart_sha = compute_sha256(chart_path)
|
||||
|
||||
compose_path = self.release_dir.parent / "deploy/compose/docker-compose.dev.yaml"
|
||||
compose_path.parent.mkdir(parents=True, exist_ok=True)
|
||||
compose_path.write_text("services: {}\n", encoding="utf-8")
|
||||
compose_sha = compute_sha256(compose_path)
|
||||
|
||||
debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
|
||||
debug_file.parent.mkdir(parents=True, exist_ok=True)
|
||||
debug_file.write_bytes(b"\x7fELFDEBUGDATA")
|
||||
debug_sha = compute_sha256(debug_file)
|
||||
|
||||
debug_manifest_path = self.release_dir / "debug/debug-manifest.json"
|
||||
debug_manifest = OrderedDict(
|
||||
(
|
||||
("generatedAt", "2025-10-26T00:00:00Z"),
|
||||
("version", "1.0.0"),
|
||||
("channel", "edge"),
|
||||
(
|
||||
"artifacts",
|
||||
[
|
||||
OrderedDict(
|
||||
(
|
||||
("buildId", "abcdef1234"),
|
||||
("platform", "linux/amd64"),
|
||||
("debugPath", "debug/.build-id/ab/cdef.debug"),
|
||||
("sha256", debug_sha),
|
||||
("size", debug_file.stat().st_size),
|
||||
("components", ["sample"]),
|
||||
("images", ["registry.example/sample@sha256:feedface"]),
|
||||
("sources", ["app/sample.dll"]),
|
||||
)
|
||||
)
|
||||
],
|
||||
),
|
||||
)
|
||||
)
|
||||
self._write_json(debug_manifest_path, debug_manifest)
|
||||
debug_manifest_sha = compute_sha256(debug_manifest_path)
|
||||
(debug_manifest_path.with_suffix(debug_manifest_path.suffix + ".sha256")).write_text(
|
||||
f"{debug_manifest_sha} {debug_manifest_path.name}\n", encoding="utf-8"
|
||||
)
|
||||
|
||||
manifest = OrderedDict(
|
||||
(
|
||||
(
|
||||
"release",
|
||||
OrderedDict(
|
||||
(
|
||||
("version", "1.0.0"),
|
||||
("channel", "edge"),
|
||||
("date", "2025-10-26T00:00:00Z"),
|
||||
("calendar", "2025.10"),
|
||||
)
|
||||
),
|
||||
),
|
||||
(
|
||||
"components",
|
||||
[
|
||||
OrderedDict(
|
||||
(
|
||||
("name", "sample"),
|
||||
("image", "registry.example/sample@sha256:feedface"),
|
||||
("tags", ["registry.example/sample:1.0.0"]),
|
||||
(
|
||||
"sbom",
|
||||
OrderedDict(
|
||||
(
|
||||
("path", self._relative_to_out(sbom_path)),
|
||||
("sha256", sbom_sha),
|
||||
)
|
||||
),
|
||||
),
|
||||
(
|
||||
"provenance",
|
||||
OrderedDict(
|
||||
(
|
||||
("path", self._relative_to_out(provenance_path)),
|
||||
("sha256", provenance_sha),
|
||||
)
|
||||
),
|
||||
),
|
||||
(
|
||||
"signature",
|
||||
OrderedDict(
|
||||
(
|
||||
("path", self._relative_to_out(signature_path)),
|
||||
("sha256", signature_sha),
|
||||
("ref", "sigstore://example"),
|
||||
("tlogUploaded", True),
|
||||
)
|
||||
),
|
||||
),
|
||||
(
|
||||
"metadata",
|
||||
OrderedDict(
|
||||
(
|
||||
("path", self._relative_to_out(metadata_path)),
|
||||
("sha256", metadata_sha),
|
||||
)
|
||||
),
|
||||
),
|
||||
)
|
||||
)
|
||||
],
|
||||
),
|
||||
(
|
||||
"charts",
|
||||
[
|
||||
OrderedDict(
|
||||
(
|
||||
("name", "stellaops"),
|
||||
("version", "1.0.0"),
|
||||
("path", self._relative_to_out(chart_path)),
|
||||
("sha256", chart_sha),
|
||||
)
|
||||
)
|
||||
],
|
||||
),
|
||||
(
|
||||
"compose",
|
||||
[
|
||||
OrderedDict(
|
||||
(
|
||||
("name", "docker-compose.dev.yaml"),
|
||||
("path", compose_path.relative_to(self.out_dir).as_posix()),
|
||||
("sha256", compose_sha),
|
||||
)
|
||||
)
|
||||
],
|
||||
),
|
||||
(
|
||||
"debugStore",
|
||||
OrderedDict(
|
||||
(
|
||||
("manifest", "debug/debug-manifest.json"),
|
||||
("sha256", debug_manifest_sha),
|
||||
("entries", 1),
|
||||
("platforms", ["linux/amd64"]),
|
||||
("directory", "debug/.build-id"),
|
||||
)
|
||||
),
|
||||
),
|
||||
)
|
||||
)
|
||||
write_manifest(manifest, self.release_dir)
|
||||
|
||||
def test_verify_release_success(self) -> None:
|
||||
self._create_sample_release()
|
||||
# Should not raise
|
||||
verify_release(self.release_dir)
|
||||
|
||||
def test_verify_release_detects_sha_mismatch(self) -> None:
|
||||
self._create_sample_release()
|
||||
tampered = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
|
||||
tampered.write_text("tampered\n", encoding="utf-8")
|
||||
with self.assertRaises(VerificationError):
|
||||
verify_release(self.release_dir)
|
||||
|
||||
def test_verify_release_detects_missing_debug_file(self) -> None:
|
||||
self._create_sample_release()
|
||||
debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
|
||||
debug_file.unlink()
|
||||
with self.assertRaises(VerificationError):
|
||||
verify_release(self.release_dir)
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
unittest.main()
|
||||
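The tests above cover SBOM tampering and a missing debug artefact. A further negative case for chart tampering could follow the same pattern — an illustrative sketch against the `VerifyReleaseTests` fixture, not part of this commit:

    def test_verify_release_detects_chart_tampering(self) -> None:
        # Sketch only: corrupt the Helm chart recorded in release.json and
        # expect verify_release to flag the SHA-256 mismatch.
        self._create_sample_release()
        chart = self.release_dir / "helm/stellaops-1.0.0.tgz"
        chart.write_bytes(b"tampered-chart-data")
        with self.assertRaises(VerificationError):
            verify_release(self.release_dir)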
ops/devops/release/verify_release.py (new file, 279 lines)
@@ -0,0 +1,279 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Verify release artefacts (SBOMs, provenance, signatures, manifest hashes)."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import hashlib
|
||||
import json
|
||||
import pathlib
|
||||
import sys
|
||||
from collections import OrderedDict
|
||||
from typing import Any, Mapping, Optional
|
||||
|
||||
from build_release import dump_yaml # type: ignore import-not-found
|
||||
|
||||
|
||||
class VerificationError(Exception):
|
||||
"""Raised when release artefacts fail verification."""
|
||||
|
||||
|
||||
def compute_sha256(path: pathlib.Path) -> str:
|
||||
sha = hashlib.sha256()
|
||||
with path.open("rb") as handle:
|
||||
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
|
||||
sha.update(chunk)
|
||||
return sha.hexdigest()
|
||||
|
||||
|
||||
def parse_sha_file(path: pathlib.Path) -> Optional[str]:
|
||||
if not path.exists():
|
||||
return None
|
||||
content = path.read_text(encoding="utf-8").strip()
|
||||
if not content:
|
||||
return None
|
||||
return content.split()[0]
|
||||
|
||||
|
||||
def resolve_path(path_str: str, release_dir: pathlib.Path) -> pathlib.Path:
|
||||
candidate = pathlib.Path(path_str.replace("\\", "/"))
|
||||
if candidate.is_absolute():
|
||||
return candidate
|
||||
|
||||
for base in (release_dir, release_dir.parent, release_dir.parent.parent):
|
||||
resolved = (base / candidate).resolve()
|
||||
if resolved.exists():
|
||||
return resolved
|
||||
# Fall back to release_dir joined path even if missing to surface in caller.
|
||||
return (release_dir / candidate).resolve()
|
||||
|
||||
|
||||
def load_manifest(release_dir: pathlib.Path) -> OrderedDict[str, Any]:
|
||||
manifest_path = release_dir / "release.json"
|
||||
if not manifest_path.exists():
|
||||
raise VerificationError(f"Release manifest JSON missing at {manifest_path}")
|
||||
try:
|
||||
with manifest_path.open("r", encoding="utf-8") as handle:
|
||||
return json.load(handle, object_pairs_hook=OrderedDict)
|
||||
except json.JSONDecodeError as exc:
|
||||
raise VerificationError(f"Failed to parse {manifest_path}: {exc}") from exc
|
||||
|
||||
|
||||
def verify_manifest_hashes(
|
||||
manifest: Mapping[str, Any],
|
||||
release_dir: pathlib.Path,
|
||||
errors: list[str],
|
||||
) -> None:
|
||||
yaml_path = release_dir / "release.yaml"
|
||||
if not yaml_path.exists():
|
||||
errors.append(f"Missing release.yaml at {yaml_path}")
|
||||
return
|
||||
|
||||
recorded_yaml_sha = parse_sha_file(yaml_path.with_name(yaml_path.name + ".sha256"))
|
||||
actual_yaml_sha = compute_sha256(yaml_path)
|
||||
if recorded_yaml_sha and recorded_yaml_sha != actual_yaml_sha:
|
||||
errors.append(
|
||||
f"release.yaml.sha256 recorded {recorded_yaml_sha} but file hashes to {actual_yaml_sha}"
|
||||
)
|
||||
|
||||
json_path = release_dir / "release.json"
|
||||
recorded_json_sha = parse_sha_file(json_path.with_name(json_path.name + ".sha256"))
|
||||
actual_json_sha = compute_sha256(json_path)
|
||||
if recorded_json_sha and recorded_json_sha != actual_json_sha:
|
||||
errors.append(
|
||||
f"release.json.sha256 recorded {recorded_json_sha} but file hashes to {actual_json_sha}"
|
||||
)
|
||||
|
||||
checksums = manifest.get("checksums")
|
||||
if isinstance(checksums, Mapping):
|
||||
recorded_digest = checksums.get("sha256")
|
||||
base_manifest = OrderedDict(manifest)
|
||||
base_manifest.pop("checksums", None)
|
||||
yaml_without_checksums = dump_yaml(base_manifest)
|
||||
computed_digest = hashlib.sha256(yaml_without_checksums.encode("utf-8")).hexdigest()
|
||||
if recorded_digest != computed_digest:
|
||||
errors.append(
|
||||
"Manifest checksum mismatch: "
|
||||
f"recorded {recorded_digest}, computed {computed_digest}"
|
||||
)
|
||||
|
||||
|
||||
def verify_artifact_entry(
|
||||
entry: Mapping[str, Any],
|
||||
release_dir: pathlib.Path,
|
||||
label: str,
|
||||
component_name: str,
|
||||
errors: list[str],
|
||||
) -> None:
|
||||
path_str = entry.get("path")
|
||||
if not path_str:
|
||||
errors.append(f"{component_name}: {label} missing 'path' field.")
|
||||
return
|
||||
resolved = resolve_path(str(path_str), release_dir)
|
||||
if not resolved.exists():
|
||||
errors.append(f"{component_name}: {label} path does not exist → {resolved}")
|
||||
return
|
||||
recorded_sha = entry.get("sha256")
|
||||
if recorded_sha:
|
||||
actual_sha = compute_sha256(resolved)
|
||||
if actual_sha != recorded_sha:
|
||||
errors.append(
|
||||
f"{component_name}: {label} SHA mismatch for {resolved} "
|
||||
f"(recorded {recorded_sha}, computed {actual_sha})"
|
||||
)
|
||||
|
||||
|
||||
def verify_components(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
|
||||
for component in manifest.get("components", []):
|
||||
if not isinstance(component, Mapping):
|
||||
errors.append("Component entry is not a mapping.")
|
||||
continue
|
||||
name = str(component.get("name", "<unknown>"))
|
||||
for key, label in (
|
||||
("sbom", "SBOM"),
|
||||
("provenance", "provenance"),
|
||||
("signature", "signature"),
|
||||
("metadata", "metadata"),
|
||||
):
|
||||
entry = component.get(key)
|
||||
if not entry:
|
||||
continue
|
||||
if not isinstance(entry, Mapping):
|
||||
errors.append(f"{name}: {label} entry must be a mapping.")
|
||||
continue
|
||||
verify_artifact_entry(entry, release_dir, label, name, errors)
|
||||
|
||||
|
||||
def verify_collections(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
|
||||
for collection, label in (
|
||||
("charts", "chart"),
|
||||
("compose", "compose file"),
|
||||
):
|
||||
for item in manifest.get(collection, []):
|
||||
if not isinstance(item, Mapping):
|
||||
errors.append(f"{collection} entry is not a mapping.")
|
||||
continue
|
||||
path_value = item.get("path")
|
||||
if not path_value:
|
||||
errors.append(f"{collection} entry missing path.")
|
||||
continue
|
||||
resolved = resolve_path(str(path_value), release_dir)
|
||||
if not resolved.exists():
|
||||
errors.append(f"{label} missing file → {resolved}")
|
||||
continue
|
||||
recorded_sha = item.get("sha256")
|
||||
if recorded_sha:
|
||||
actual_sha = compute_sha256(resolved)
|
||||
if actual_sha != recorded_sha:
|
||||
errors.append(
|
||||
f"{label} SHA mismatch for {resolved} "
|
||||
f"(recorded {recorded_sha}, computed {actual_sha})"
|
||||
)
|
||||
|
||||
|
||||
def verify_debug_store(manifest: Mapping[str, Any], release_dir: pathlib.Path, errors: list[str]) -> None:
|
||||
debug = manifest.get("debugStore")
|
||||
if not isinstance(debug, Mapping):
|
||||
return
|
||||
manifest_path_str = debug.get("manifest")
|
||||
manifest_data: Optional[Mapping[str, Any]] = None
|
||||
if manifest_path_str:
|
||||
manifest_path = resolve_path(str(manifest_path_str), release_dir)
|
||||
if not manifest_path.exists():
|
||||
errors.append(f"Debug manifest missing → {manifest_path}")
|
||||
else:
|
||||
recorded_sha = debug.get("sha256")
|
||||
if recorded_sha:
|
||||
actual_sha = compute_sha256(manifest_path)
|
||||
if actual_sha != recorded_sha:
|
||||
errors.append(
|
||||
f"Debug manifest SHA mismatch (recorded {recorded_sha}, computed {actual_sha})"
|
||||
)
|
||||
sha_sidecar = manifest_path.with_suffix(manifest_path.suffix + ".sha256")
|
||||
sidecar_sha = parse_sha_file(sha_sidecar)
|
||||
if sidecar_sha and recorded_sha and sidecar_sha != recorded_sha:
|
||||
errors.append(
|
||||
f"Debug manifest sidecar digest {sidecar_sha} disagrees with recorded {recorded_sha}"
|
||||
)
|
||||
try:
|
||||
with manifest_path.open("r", encoding="utf-8") as handle:
|
||||
manifest_data = json.load(handle)
|
||||
except json.JSONDecodeError as exc:
|
||||
errors.append(f"Debug manifest JSON invalid: {exc}")
|
||||
directory = debug.get("directory")
|
||||
if directory:
|
||||
debug_dir = resolve_path(str(directory), release_dir)
|
||||
if not debug_dir.exists():
|
||||
errors.append(f"Debug directory missing → {debug_dir}")
|
||||
|
||||
if manifest_data:
|
||||
artifacts = manifest_data.get("artifacts")
|
||||
if not isinstance(artifacts, list) or not artifacts:
|
||||
errors.append("Debug manifest contains no artefacts.")
|
||||
return
|
||||
|
||||
declared_entries = debug.get("entries")
|
||||
if isinstance(declared_entries, int) and declared_entries != len(artifacts):
|
||||
errors.append(
|
||||
f"Debug manifest reports {declared_entries} entries but contains {len(artifacts)} artefacts."
|
||||
)
|
||||
|
||||
for artefact in artifacts:
|
||||
if not isinstance(artefact, Mapping):
|
||||
errors.append("Debug manifest artefact entry is not a mapping.")
|
||||
continue
|
||||
debug_path = artefact.get("debugPath")
|
||||
artefact_sha = artefact.get("sha256")
|
||||
if not debug_path or not artefact_sha:
|
||||
errors.append("Debug manifest artefact missing debugPath or sha256.")
|
||||
continue
|
||||
resolved_debug = resolve_path(str(debug_path), release_dir)
|
||||
if not resolved_debug.exists():
|
||||
errors.append(f"Debug artefact missing → {resolved_debug}")
|
||||
continue
|
||||
actual_sha = compute_sha256(resolved_debug)
|
||||
if actual_sha != artefact_sha:
|
||||
errors.append(
|
||||
f"Debug artefact SHA mismatch for {resolved_debug} "
|
||||
f"(recorded {artefact_sha}, computed {actual_sha})"
|
||||
)
|
||||
|
||||
|
||||
def verify_release(release_dir: pathlib.Path) -> None:
|
||||
if not release_dir.exists():
|
||||
raise VerificationError(f"Release directory not found: {release_dir}")
|
||||
manifest = load_manifest(release_dir)
|
||||
errors: list[str] = []
|
||||
verify_manifest_hashes(manifest, release_dir, errors)
|
||||
verify_components(manifest, release_dir, errors)
|
||||
verify_collections(manifest, release_dir, errors)
|
||||
verify_debug_store(manifest, release_dir, errors)
|
||||
if errors:
|
||||
bullet_list = "\n - ".join(errors)
|
||||
raise VerificationError(f"Release verification failed:\n - {bullet_list}")
|
||||
|
||||
|
||||
def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
|
||||
parser = argparse.ArgumentParser(description=__doc__)
|
||||
parser.add_argument(
|
||||
"--release-dir",
|
||||
type=pathlib.Path,
|
||||
default=pathlib.Path("out/release"),
|
||||
help="Path to the release artefact directory (default: %(default)s)",
|
||||
)
|
||||
return parser.parse_args(argv)
|
||||
|
||||
|
||||
def main(argv: list[str] | None = None) -> int:
|
||||
args = parse_args(argv)
|
||||
try:
|
||||
verify_release(args.release_dir.resolve())
|
||||
except VerificationError as exc:
|
||||
print(str(exc), file=sys.stderr)
|
||||
return 1
|
||||
print(f"✅ Release artefacts verified OK in {args.release_dir}")
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||
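Beyond the CLI entry point, the checker can be reused programmatically, for example as a gate ahead of offline-kit packaging. A minimal sketch, assuming `ops/devops/release` is on `sys.path`:

    from pathlib import Path

    from verify_release import VerificationError, verify_release

    def gate_release(release_dir: str = "out/release") -> bool:
        """Return True when all manifest hashes and artefacts check out."""
        try:
            verify_release(Path(release_dir).resolve())
        except VerificationError as exc:
            print(f"release gate failed:\n{exc}")
            return False
        return True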
Binary file not shown.
ops/devops/telemetry/generate_dev_tls.sh (new file, 77 lines)
@@ -0,0 +1,77 @@
#!/usr/bin/env bash

set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
CERT_DIR="${SCRIPT_DIR}/../../../deploy/telemetry/certs"

mkdir -p "${CERT_DIR}"

CA_KEY="${CERT_DIR}/ca.key"
CA_CRT="${CERT_DIR}/ca.crt"
COL_KEY="${CERT_DIR}/collector.key"
COL_CSR="${CERT_DIR}/collector.csr"
COL_CRT="${CERT_DIR}/collector.crt"
CLIENT_KEY="${CERT_DIR}/client.key"
CLIENT_CSR="${CERT_DIR}/client.csr"
CLIENT_CRT="${CERT_DIR}/client.crt"

echo "[*] Generating OpenTelemetry dev CA and certificates in ${CERT_DIR}"

# Root CA
if [[ ! -f "${CA_KEY}" ]]; then
  openssl genrsa -out "${CA_KEY}" 4096 >/dev/null 2>&1
fi
openssl req -x509 -new -key "${CA_KEY}" -days 365 -sha256 \
  -out "${CA_CRT}" -subj "/CN=StellaOps Dev Telemetry CA" \
  -config <(cat <<'EOF'
[req]
distinguished_name = req_distinguished_name
prompt = no
[req_distinguished_name]
EOF
) >/dev/null 2>&1

# Collector certificate (server + client auth)
openssl req -new -nodes -newkey rsa:4096 \
  -keyout "${COL_KEY}" \
  -out "${COL_CSR}" \
  -subj "/CN=stellaops-otel-collector" >/dev/null 2>&1

openssl x509 -req -in "${COL_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \
  -CAcreateserial -out "${COL_CRT}" -days 365 -sha256 \
  -extensions v3_req -extfile <(cat <<'EOF'
[v3_req]
subjectAltName = @alt_names
extendedKeyUsage = serverAuth, clientAuth
[alt_names]
DNS.1 = stellaops-otel-collector
DNS.2 = localhost
IP.1 = 127.0.0.1
EOF
) >/dev/null 2>&1

# Client certificate
openssl req -new -nodes -newkey rsa:4096 \
  -keyout "${CLIENT_KEY}" \
  -out "${CLIENT_CSR}" \
  -subj "/CN=stellaops-otel-client" >/dev/null 2>&1

openssl x509 -req -in "${CLIENT_CSR}" -CA "${CA_CRT}" -CAkey "${CA_KEY}" \
  -CAcreateserial -out "${CLIENT_CRT}" -days 365 -sha256 \
  -extensions v3_req -extfile <(cat <<'EOF'
[v3_req]
extendedKeyUsage = clientAuth
subjectAltName = @alt_names
[alt_names]
DNS.1 = stellaops-otel-client
DNS.2 = localhost
IP.1 = 127.0.0.1
EOF
) >/dev/null 2>&1

rm -f "${COL_CSR}" "${CLIENT_CSR}"
rm -f "${CERT_DIR}/ca.srl"

echo "[✓] Certificates ready:"
ls -1 "${CERT_DIR}"
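To sanity-check the script's output, the issued leaf certificates can be verified against the dev CA. A small sketch using `openssl verify` via Python's `subprocess`; the `CERT_DIR` value mirrors the repo-root directory the script populates:

    import subprocess
    from pathlib import Path

    CERT_DIR = Path("deploy/telemetry/certs")  # directory written by generate_dev_tls.sh

    def verify_chain() -> None:
        """Confirm collector and client certs chain to the dev CA (requires openssl on PATH)."""
        for leaf in ("collector.crt", "client.crt"):
            subprocess.run(
                ["openssl", "verify", "-CAfile", str(CERT_DIR / "ca.crt"), str(CERT_DIR / leaf)],
                check=True,
            )

    if __name__ == "__main__":
        verify_chain()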
ops/devops/telemetry/package_offline_bundle.py (new file, 136 lines)
@@ -0,0 +1,136 @@
#!/usr/bin/env python3
"""Package telemetry collector assets for offline/air-gapped installs.

Outputs a tarball containing the collector configuration, Compose overlay,
Helm defaults, and operator README. A SHA-256 checksum sidecar is emitted, and
optional Cosign signing can be enabled with --sign.
"""
from __future__ import annotations

import argparse
import hashlib
import os
import subprocess
import sys
import tarfile
from pathlib import Path
from typing import Iterable

REPO_ROOT = Path(__file__).resolve().parents[3]
DEFAULT_OUTPUT = REPO_ROOT / "out" / "telemetry" / "telemetry-offline-bundle.tar.gz"
BUNDLE_CONTENTS: tuple[Path, ...] = (
    Path("deploy/telemetry/README.md"),
    Path("deploy/telemetry/otel-collector-config.yaml"),
    Path("deploy/telemetry/storage/README.md"),
    Path("deploy/telemetry/storage/prometheus.yaml"),
    Path("deploy/telemetry/storage/tempo.yaml"),
    Path("deploy/telemetry/storage/loki.yaml"),
    Path("deploy/telemetry/storage/tenants/tempo-overrides.yaml"),
    Path("deploy/telemetry/storage/tenants/loki-overrides.yaml"),
    Path("deploy/helm/stellaops/files/otel-collector-config.yaml"),
    Path("deploy/helm/stellaops/values.yaml"),
    Path("deploy/helm/stellaops/templates/otel-collector.yaml"),
    Path("deploy/compose/docker-compose.telemetry.yaml"),
    Path("deploy/compose/docker-compose.telemetry-storage.yaml"),
    Path("docs/ops/telemetry-collector.md"),
    Path("docs/ops/telemetry-storage.md"),
)


def compute_sha256(path: Path) -> str:
    sha = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            sha.update(chunk)
    return sha.hexdigest()


def validate_files(paths: Iterable[Path]) -> None:
    missing = [str(p) for p in paths if not (REPO_ROOT / p).exists()]
    if missing:
        raise FileNotFoundError(f"Missing bundle artefacts: {', '.join(missing)}")


def create_bundle(output_path: Path) -> Path:
    output_path.parent.mkdir(parents=True, exist_ok=True)
    with tarfile.open(output_path, "w:gz") as tar:
        for rel_path in BUNDLE_CONTENTS:
            abs_path = REPO_ROOT / rel_path
            tar.add(abs_path, arcname=str(rel_path))
    return output_path


def write_checksum(bundle_path: Path) -> Path:
    digest = compute_sha256(bundle_path)
    sha_path = bundle_path.with_suffix(bundle_path.suffix + ".sha256")
    sha_path.write_text(f"{digest} {bundle_path.name}\n", encoding="utf-8")
    return sha_path


def cosign_sign(bundle_path: Path, key_ref: str | None, identity_token: str | None) -> None:
    cmd = ["cosign", "sign-blob", "--yes", str(bundle_path)]
    if key_ref:
        cmd.extend(["--key", key_ref])
    env = os.environ.copy()
    if identity_token:
        env["COSIGN_IDENTITY_TOKEN"] = identity_token
    try:
        subprocess.run(cmd, check=True, env=env)
    except FileNotFoundError as exc:
        raise RuntimeError("cosign not found on PATH; install cosign or omit --sign") from exc
    except subprocess.CalledProcessError as exc:
        raise RuntimeError(f"cosign sign-blob failed: {exc}") from exc


def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        "--output",
        type=Path,
        default=DEFAULT_OUTPUT,
        help=f"Output bundle path (default: {DEFAULT_OUTPUT})",
    )
    parser.add_argument(
        "--sign",
        action="store_true",
        help="Sign the bundle using cosign (requires cosign on PATH)",
    )
    parser.add_argument(
        "--cosign-key",
        type=str,
        default=os.environ.get("COSIGN_KEY_REF"),
        help="Cosign key reference (file:..., azurekms://..., etc.)",
    )
    parser.add_argument(
        "--identity-token",
        type=str,
        default=os.environ.get("COSIGN_IDENTITY_TOKEN"),
        help="OIDC identity token for keyless signing",
    )
    return parser.parse_args(argv)


def main(argv: list[str] | None = None) -> int:
    args = parse_args(argv)
    validate_files(BUNDLE_CONTENTS)

    bundle_path = args.output.resolve()
    print(f"[*] Creating telemetry bundle at {bundle_path}")
    create_bundle(bundle_path)
    sha_path = write_checksum(bundle_path)
    print(f"[✓] SHA-256 written to {sha_path}")

    if args.sign:
        print("[*] Signing bundle with cosign")
        cosign_sign(bundle_path, args.cosign_key, args.identity_token)
        sig_path = bundle_path.with_suffix(bundle_path.suffix + ".sig")
        if sig_path.exists():
            print(f"[✓] Cosign signature written to {sig_path}")
        else:
            print("[!] Cosign completed but signature file not found (ensure cosign version >= 2.2)")

    return 0


if __name__ == "__main__":
    sys.exit(main())
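Air-gapped consumers can re-check the sidecar before importing the bundle. A sketch of that check, assuming the `<digest> <name>` sidecar format written by `write_checksum` above:

    import hashlib
    from pathlib import Path

    def check_sidecar(bundle: Path) -> bool:
        """Recompute the bundle digest and compare it against the .sha256 sidecar."""
        recorded = bundle.with_suffix(bundle.suffix + ".sha256").read_text(encoding="utf-8").split()[0]
        sha = hashlib.sha256()
        with bundle.open("rb") as handle:
            for chunk in iter(lambda: handle.read(1024 * 1024), b""):
                sha.update(chunk)
        return sha.hexdigest() == recorded

    # Example: check_sidecar(Path("out/telemetry/telemetry-offline-bundle.tar.gz"))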
ops/devops/telemetry/smoke_otel_collector.py (new file, 197 lines)
@@ -0,0 +1,197 @@
|
||||
#!/usr/bin/env python3
|
||||
"""
|
||||
Smoke test for the StellaOps OpenTelemetry Collector deployment.
|
||||
|
||||
The script sends sample traces, metrics, and logs over OTLP/HTTP with mutual TLS
|
||||
and asserts that the collector accepted the payloads by checking its Prometheus
|
||||
metrics endpoint.
|
||||
"""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import json
|
||||
import ssl
|
||||
import sys
|
||||
import time
|
||||
import urllib.request
|
||||
from pathlib import Path
|
||||
|
||||
TRACE_PAYLOAD = {
|
||||
"resourceSpans": [
|
||||
{
|
||||
"resource": {
|
||||
"attributes": [
|
||||
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
|
||||
{"key": "tenant.id", "value": {"stringValue": "dev"}},
|
||||
]
|
||||
},
|
||||
"scopeSpans": [
|
||||
{
|
||||
"scope": {"name": "smoke-test"},
|
||||
"spans": [
|
||||
{
|
||||
"traceId": "00000000000000000000000000000001",
|
||||
"spanId": "0000000000000001",
|
||||
"name": "smoke-span",
|
||||
"kind": 1,
|
||||
"startTimeUnixNano": "1730000000000000000",
|
||||
"endTimeUnixNano": "1730000000500000000",
|
||||
"status": {"code": 0},
|
||||
}
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
METRIC_PAYLOAD = {
|
||||
"resourceMetrics": [
|
||||
{
|
||||
"resource": {
|
||||
"attributes": [
|
||||
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
|
||||
{"key": "tenant.id", "value": {"stringValue": "dev"}},
|
||||
]
|
||||
},
|
||||
"scopeMetrics": [
|
||||
{
|
||||
"scope": {"name": "smoke-test"},
|
||||
"metrics": [
|
||||
{
|
||||
"name": "smoke_gauge",
|
||||
"gauge": {
|
||||
"dataPoints": [
|
||||
{
|
||||
"asDouble": 1.0,
|
||||
"timeUnixNano": "1730000001000000000",
|
||||
"attributes": [
|
||||
{"key": "phase", "value": {"stringValue": "ingest"}}
|
||||
],
|
||||
}
|
||||
]
|
||||
},
|
||||
}
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
LOG_PAYLOAD = {
|
||||
"resourceLogs": [
|
||||
{
|
||||
"resource": {
|
||||
"attributes": [
|
||||
{"key": "service.name", "value": {"stringValue": "smoke-client"}},
|
||||
{"key": "tenant.id", "value": {"stringValue": "dev"}},
|
||||
]
|
||||
},
|
||||
"scopeLogs": [
|
||||
{
|
||||
"scope": {"name": "smoke-test"},
|
||||
"logRecords": [
|
||||
{
|
||||
"timeUnixNano": "1730000002000000000",
|
||||
"severityNumber": 9,
|
||||
"severityText": "Info",
|
||||
"body": {"stringValue": "StellaOps collector smoke log"},
|
||||
}
|
||||
],
|
||||
}
|
||||
],
|
||||
}
|
||||
]
|
||||
}
|
||||
|
||||
|
||||
def _load_context(ca: Path, cert: Path, key: Path) -> ssl.SSLContext:
|
||||
context = ssl.create_default_context(cafile=str(ca))
|
||||
context.check_hostname = False
|
||||
context.verify_mode = ssl.CERT_REQUIRED
|
||||
context.load_cert_chain(certfile=str(cert), keyfile=str(key))
|
||||
return context
|
||||
|
||||
|
||||
def _post_json(url: str, payload: dict, context: ssl.SSLContext) -> None:
|
||||
data = json.dumps(payload).encode("utf-8")
|
||||
request = urllib.request.Request(
|
||||
url,
|
||||
data=data,
|
||||
headers={
|
||||
"Content-Type": "application/json",
|
||||
"User-Agent": "stellaops-otel-smoke/1.0",
|
||||
},
|
||||
method="POST",
|
||||
)
|
||||
with urllib.request.urlopen(request, context=context, timeout=10) as response:
|
||||
if response.status // 100 != 2:
|
||||
raise RuntimeError(f"{url} returned HTTP {response.status}")
|
||||
|
||||
|
||||
def _fetch_metrics(url: str, context: ssl.SSLContext) -> str:
|
||||
request = urllib.request.Request(
|
||||
url,
|
||||
headers={
|
||||
"User-Agent": "stellaops-otel-smoke/1.0",
|
||||
},
|
||||
)
|
||||
with urllib.request.urlopen(request, context=context, timeout=10) as response:
|
||||
return response.read().decode("utf-8")
|
||||
|
||||
|
||||
def _assert_counter(metrics: str, metric_name: str) -> None:
|
||||
for line in metrics.splitlines():
|
||||
if line.startswith(metric_name):
|
||||
try:
|
||||
_, value = line.split(" ")
|
||||
if float(value) > 0:
|
||||
return
|
||||
except ValueError:
|
||||
continue
|
||||
raise AssertionError(f"{metric_name} not incremented")
|
||||
|
||||
|
||||
def main() -> int:
|
||||
parser = argparse.ArgumentParser(description=__doc__)
|
||||
parser.add_argument("--host", default="localhost", help="Collector host (default: %(default)s)")
|
||||
parser.add_argument("--otlp-port", type=int, default=4318, help="OTLP/HTTP port")
|
||||
parser.add_argument("--metrics-port", type=int, default=9464, help="Prometheus metrics port")
|
||||
parser.add_argument("--health-port", type=int, default=13133, help="Health check port")
|
||||
parser.add_argument("--ca", type=Path, default=Path("deploy/telemetry/certs/ca.crt"), help="CA certificate path")
|
||||
parser.add_argument("--cert", type=Path, default=Path("deploy/telemetry/certs/client.crt"), help="Client certificate path")
|
||||
parser.add_argument("--key", type=Path, default=Path("deploy/telemetry/certs/client.key"), help="Client key path")
|
||||
args = parser.parse_args()
|
||||
|
||||
for path in (args.ca, args.cert, args.key):
|
||||
if not path.exists():
|
||||
print(f"[!] missing TLS material: {path}", file=sys.stderr)
|
||||
return 1
|
||||
|
||||
context = _load_context(args.ca, args.cert, args.key)
|
||||
|
||||
otlp_base = f"https://{args.host}:{args.otlp_port}/v1"
|
||||
print(f"[*] Sending OTLP traffic to {otlp_base}")
|
||||
_post_json(f"{otlp_base}/traces", TRACE_PAYLOAD, context)
|
||||
_post_json(f"{otlp_base}/metrics", METRIC_PAYLOAD, context)
|
||||
_post_json(f"{otlp_base}/logs", LOG_PAYLOAD, context)
|
||||
|
||||
# Allow Prometheus exporter to update metrics
|
||||
time.sleep(2)
|
||||
|
||||
metrics_url = f"https://{args.host}:{args.metrics_port}/metrics"
|
||||
print(f"[*] Fetching collector metrics from {metrics_url}")
|
||||
metrics = _fetch_metrics(metrics_url, context)
|
||||
|
||||
_assert_counter(metrics, "otelcol_receiver_accepted_spans")
|
||||
_assert_counter(metrics, "otelcol_receiver_accepted_logs")
|
||||
_assert_counter(metrics, "otelcol_receiver_accepted_metric_points")
|
||||
|
||||
print("[✓] Collector accepted traces, logs, and metrics.")
|
||||
return 0
|
||||
|
||||
|
||||
if __name__ == "__main__":
|
||||
raise SystemExit(main())
|
||||
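Note that `--health-port` is parsed above but never exercised by `main()`. If the collector's `health_check` extension is enabled and serves plain HTTP on that port (an assumption, not something this commit configures), a poll could look like this sketch:

    import urllib.request

    def check_health(host: str = "localhost", port: int = 13133) -> bool:
        """Poll the collector health_check extension; assumes plain HTTP on the health port."""
        try:
            with urllib.request.urlopen(f"http://{host}:{port}/", timeout=5) as response:
                return response.status == 200
        except OSError:
            return False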
ops/devops/validate_restore_sources.py (new file, 183 lines)
@@ -0,0 +1,183 @@
#!/usr/bin/env python3

"""
Validate NuGet source ordering for StellaOps.

Ensures `local-nuget` is the highest priority feed in both NuGet.config and the
Directory.Build.props restore configuration. Fails fast with actionable errors
so CI/offline kit workflows can assert deterministic restore ordering.
"""

from __future__ import annotations

import argparse
import subprocess
import sys
import xml.etree.ElementTree as ET
from pathlib import Path


REPO_ROOT = Path(__file__).resolve().parents[2]
NUGET_CONFIG = REPO_ROOT / "NuGet.config"
ROOT_PROPS = REPO_ROOT / "Directory.Build.props"
EXPECTED_SOURCE_KEYS = ["local", "dotnet-public", "nuget.org"]


class ValidationError(Exception):
    """Raised when validation fails."""


def _fail(message: str) -> None:
    raise ValidationError(message)


def _parse_xml(path: Path) -> ET.ElementTree:
    try:
        return ET.parse(path)
    except FileNotFoundError as exc:
        _fail(f"Missing required file: {path}")
    except ET.ParseError as exc:
        _fail(f"Could not parse XML for {path}: {exc}")


def validate_nuget_config() -> None:
    tree = _parse_xml(NUGET_CONFIG)
    root = tree.getroot()

    package_sources = root.find("packageSources")
    if package_sources is None:
        _fail("NuGet.config must declare a <packageSources> section.")

    children = list(package_sources)
    if not children or children[0].tag != "clear":
        _fail("NuGet.config packageSources must begin with a <clear /> element.")

    adds = [child for child in children if child.tag == "add"]
    if not adds:
        _fail("NuGet.config packageSources must define at least one <add> entry.")

    keys = [add.attrib.get("key") for add in adds]
    if keys[: len(EXPECTED_SOURCE_KEYS)] != EXPECTED_SOURCE_KEYS:
        formatted = ", ".join(keys) or "<empty>"
        _fail(
            "NuGet.config packageSources must list feeds in the order "
            f"{EXPECTED_SOURCE_KEYS}. Found: {formatted}"
        )

    local_value = adds[0].attrib.get("value", "")
    if Path(local_value).name != "local-nuget":
        _fail(
            "NuGet.config local feed should point at the repo-local mirror "
            f"'local-nuget', found value '{local_value}'."
        )

    clear = package_sources.find("clear")
    if clear is None:
        _fail("NuGet.config packageSources must start with <clear /> to avoid inherited feeds.")


def validate_directory_build_props() -> None:
    tree = _parse_xml(ROOT_PROPS)
    root = tree.getroot()
    defaults = None
    for element in root.findall(".//_StellaOpsDefaultRestoreSources"):
        defaults = [fragment.strip() for fragment in element.text.split(";") if fragment.strip()]
        break

    if defaults is None:
        _fail("Directory.Build.props must define _StellaOpsDefaultRestoreSources.")

    expected_props = [
        "$(StellaOpsLocalNuGetSource)",
        "$(StellaOpsDotNetPublicSource)",
        "$(StellaOpsNuGetOrgSource)",
    ]
    if defaults != expected_props:
        _fail(
            "Directory.Build.props _StellaOpsDefaultRestoreSources must list feeds "
            f"in the order {expected_props}. Found: {defaults}"
        )

    restore_nodes = root.findall(".//RestoreSources")
    if not restore_nodes:
        _fail("Directory.Build.props must override RestoreSources to force deterministic ordering.")

    uses_default_first = any(
        node.text
        and node.text.strip().startswith("$(_StellaOpsDefaultRestoreSources)")
        for node in restore_nodes
    )
    if not uses_default_first:
        _fail(
            "Directory.Build.props RestoreSources override must place "
            "$(_StellaOpsDefaultRestoreSources) at the beginning."
        )


def assert_single_nuget_config() -> None:
    extra_configs: list[Path] = []
    configs: set[Path] = set()
    for glob in ("NuGet.config", "nuget.config"):
        try:
            result = subprocess.run(
                ["rg", "--files", f"-g{glob}"],
                check=False,
                capture_output=True,
                text=True,
                cwd=REPO_ROOT,
            )
        except FileNotFoundError as exc:
            _fail("ripgrep (rg) is required for validation but was not found on PATH.")
        if result.returncode not in (0, 1):
            _fail(
                f"ripgrep failed while searching for {glob}: {result.stderr.strip() or result.returncode}"
            )
        for line in result.stdout.splitlines():
            configs.add((REPO_ROOT / line).resolve())

    configs.discard(NUGET_CONFIG.resolve())
    extra_configs.extend(sorted(configs))
    if extra_configs:
        formatted = "\n ".join(str(path.relative_to(REPO_ROOT)) for path in extra_configs)
        _fail(
            "Unexpected additional NuGet.config files detected. "
            "Consolidate feed configuration in the repo root:\n "
            f"{formatted}"
        )


def parse_args(argv: list[str]) -> argparse.Namespace:
    parser = argparse.ArgumentParser(
        description="Verify StellaOps NuGet feeds prioritise the local mirror."
    )
    parser.add_argument(
        "--skip-rg",
        action="store_true",
        help="Skip ripgrep discovery of extra NuGet.config files (useful for focused runs).",
    )
    return parser.parse_args(argv)


def main(argv: list[str]) -> int:
    args = parse_args(argv)
    validations = [
        ("NuGet.config ordering", validate_nuget_config),
        ("Directory.Build.props restore override", validate_directory_build_props),
    ]
    if not args.skip_rg:
        validations.append(("single NuGet.config", assert_single_nuget_config))

    for label, check in validations:
        try:
            check()
        except ValidationError as exc:
            sys.stderr.write(f"[FAIL] {label}: {exc}\n")
            return 1
        else:
            sys.stdout.write(f"[OK] {label}\n")

    return 0


if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
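The ripgrep dependency means `assert_single_nuget_config` fails hard on hosts without `rg`. A pure-Python fallback for the discovery step could walk the tree instead — an illustrative alternative, not what the committed script does:

    from pathlib import Path

    def find_nuget_configs(repo_root: Path) -> list[Path]:
        """Pure-Python stand-in for the ripgrep discovery step; ignores the repo-root NuGet.config."""
        matches: set[Path] = set()
        for pattern in ("NuGet.config", "nuget.config"):
            matches.update(p.resolve() for p in repo_root.rglob(pattern))
        matches.discard((repo_root / "NuGet.config").resolve())
        return sorted(matches)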
@@ -2,4 +2,4 @@

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-LIC-14-004 | TODO | Licensing Guild | AUTH-MTLS-11-002 | Implement registry token service tied to Authority (DPoP/mTLS), plan gating, revocation handling, and monitoring per architecture. | Token service issues scoped tokens, revocation tested, monitoring dashboards in place, docs updated. |
| DEVOPS-LIC-14-004 | DONE (2025-10-26) | Licensing Guild | AUTH-MTLS-11-002 | Implement registry token service tied to Authority (DPoP/mTLS), plan gating, revocation handling, and monitoring per architecture. | Token service issues scoped tokens, revocation tested, monitoring dashboards in place, docs updated. |
@@ -2,10 +2,13 @@

| ID | Status | Owner(s) | Depends on | Description | Exit Criteria |
|----|--------|----------|------------|-------------|---------------|
| DEVOPS-OFFLINE-14-002 | TODO | Offline Kit Guild | DEVOPS-REL-14-001 | Build offline kit packaging workflow (artifact bundling, manifest generation, signature verification). | Offline tarball generated with manifest + checksums + signatures; import script verifies integrity; docs updated. |
| DEVOPS-OFFLINE-14-002 | DONE (2025-10-26) | Offline Kit Guild | DEVOPS-REL-14-001 | Build offline kit packaging workflow (artifact bundling, manifest generation, signature verification). | Offline tarball generated with manifest + checksums + signatures; `ops/offline-kit/run-python-analyzer-smoke.sh` invoked as part of packaging; `debug/.build-id` tree mirrored from release output; import script verifies integrity; docs updated. |
| DEVOPS-OFFLINE-18-004 | DONE (2025-10-22) | Offline Kit Guild, Scanner Guild | DEVOPS-OFFLINE-18-003, SCANNER-ANALYZERS-LANG-10-309G | Rebuild Offline Kit bundle with Go analyzer plug-in and updated manifest/signature set. | Kit tarball includes Go analyzer artifacts; manifest/signature refreshed; verification steps executed and logged; docs updated with new bundle version. |
| DEVOPS-OFFLINE-18-005 | TODO | Offline Kit Guild, Scanner Guild | DEVOPS-REL-14-004, SCANNER-ANALYZERS-LANG-10-309P | Repackage Offline Kit with Python analyzer plug-in artefacts and refreshed manifest/signature set. | Kit tarball includes Python analyzer DLL/PDB/manifest; signature + manifest updated; Offline Kit guide references Python coverage; smoke import validated. |
| DEVOPS-OFFLINE-18-005 | DONE (2025-10-26) | Offline Kit Guild, Scanner Guild | DEVOPS-REL-14-004, SCANNER-ANALYZERS-LANG-10-309P | Repackage Offline Kit with Python analyzer plug-in artefacts and refreshed manifest/signature set. | Kit tarball includes Python analyzer DLL/PDB/manifest; signature + manifest updated; Offline Kit guide references Python coverage; smoke import validated. |
| DEVOPS-OFFLINE-34-006 | TODO | Offline Kit Guild, Orchestrator Service Guild | ORCH-SVC-34-004, DEPLOY-ORCH-34-001 | Bundle orchestrator service container, worker SDK samples, Postgres snapshot, and dashboards into Offline Kit with manifest/signature updates. | Offline kit contains orchestrator assets; manifest/signature validated; docs updated with air-gapped install steps; smoke import executed. |
| DEVOPS-OFFLINE-37-001 | TODO | Offline Kit Guild, Exporter Service Guild | EXPORT-SVC-37-001..004, DEPLOY-EXPORT-36-001 | Package Export Center tooling, sample mirror bundles, verification CLI, and docs into Offline Kit with manifest/signature refresh and air-gap import script. | Offline kit includes export bundles/tools; verification script passes; manifest/signature updated; docs detail import workflow. |
| DEVOPS-OFFLINE-37-001 | TODO | Offline Kit Guild, Exporter Service Guild | EXPORT-SVC-37-001..004, DEPLOY-EXPORT-36-001 | Export Center offline bundles + verification tooling (mirror artefacts, verification CLI, manifest/signature refresh, air-gap import script). | Offline kit includes export bundles/tools; verification script passes; manifest/signature updated; docs detail import workflow. |
| DEVOPS-OFFLINE-37-002 | TODO | Offline Kit Guild, Notifications Service Guild | NOTIFY-SVC-40-001..004, WEB-NOTIFY-40-001 | Notifier offline packs (sample configs, template/digest packs, dry-run harness) with integrity checks and operator docs. | Offline kit ships notifier assets with checksums; dry-run harness validated; docs outline sealed/connected install steps. |
| CLI-PACKS-43-002 | TODO | Offline Kit Guild, Packs Registry Guild | PACKS-REG-42-001, DEPLOY-PACKS-43-001 | Bundle Task Pack samples, registry mirror seeds, Task Runner configs, and CLI binaries with checksums into Offline Kit. | Offline kit includes packs registry mirror, Task Runner configs, CLI binaries; manifest/signature updated; docs describe air-gapped execution. |
| OFFLINE-CONTAINERS-46-001 | TODO | Offline Kit Guild, Deployment Guild | DEVOPS-CONTAINERS-46-001, DEPLOY-AIRGAP-46-001 | Include container air-gap bundle, verification docs, and mirrored registry instructions inside Offline Kit. | Offline kit ships bundle + how-to; verification steps validated; manifest/signature updated; imposed rule noted. |
| DEVOPS-OFFLINE-17-003 | DONE (2025-10-26) | Offline Kit Guild, DevOps Guild | DEVOPS-REL-17-002 | Mirror release debug-store artefacts (`.build-id/` tree and `debug-manifest.json`) into Offline Kit packaging and document import validation. | Offline kit archives `debug/.build-id/` with manifest/sha256, docs cover symbol lookup workflow, smoke job confirms build-id lookup succeeds on air-gapped install. |
| DEVOPS-OFFLINE-17-004 | BLOCKED (2025-10-26) | Offline Kit Guild, DevOps Guild | DEVOPS-REL-17-002 | Execute `mirror_debug_store.py` after the next release pipeline emits `out/release/debug`, verify manifest hashes, and archive `metadata/debug-store.json` with the kit. | Debug store mirrored post-release, manifest SHA validated, summary committed alongside Offline Kit bundle evidence. ⏳ Blocked until the release pipeline publishes the next `out/release/debug` tree; rerun the mirroring script as part of that pipeline. |
BIN ops/offline-kit/__pycache__/build_offline_kit.cpython-312.pyc (new file)
Binary file not shown.
BIN ops/offline-kit/__pycache__/mirror_debug_store.cpython-312.pyc (new file)
Binary file not shown.
ops/offline-kit/build_offline_kit.py (new file, 445 lines)
@@ -0,0 +1,445 @@
|
||||
#!/usr/bin/env python3
|
||||
"""Package the StellaOps Offline Kit with deterministic artefacts and manifest."""
|
||||
|
||||
from __future__ import annotations
|
||||
|
||||
import argparse
|
||||
import datetime as dt
|
||||
import hashlib
|
||||
import json
|
||||
import os
|
||||
import re
|
||||
import shutil
|
||||
import subprocess
|
||||
import sys
|
||||
import tarfile
|
||||
from collections import OrderedDict
|
||||
from pathlib import Path
|
||||
from typing import Any, Iterable, Mapping, MutableMapping, Optional
|
||||
|
||||
REPO_ROOT = Path(__file__).resolve().parents[2]
|
||||
RELEASE_TOOLS_DIR = REPO_ROOT / "ops" / "devops" / "release"
|
||||
TELEMETRY_TOOLS_DIR = REPO_ROOT / "ops" / "devops" / "telemetry"
|
||||
TELEMETRY_BUNDLE_PATH = REPO_ROOT / "out" / "telemetry" / "telemetry-offline-bundle.tar.gz"
|
||||
|
||||
if str(RELEASE_TOOLS_DIR) not in sys.path:
|
||||
sys.path.insert(0, str(RELEASE_TOOLS_DIR))
|
||||
|
||||
from verify_release import ( # type: ignore import-not-found
|
||||
load_manifest,
|
||||
resolve_path,
|
||||
verify_release,
|
||||
)
|
||||
|
||||
import mirror_debug_store # type: ignore import-not-found
|
||||
|
||||
DEFAULT_RELEASE_DIR = REPO_ROOT / "out" / "release"
|
||||
DEFAULT_STAGING_DIR = REPO_ROOT / "out" / "offline-kit" / "staging"
|
||||
DEFAULT_OUTPUT_DIR = REPO_ROOT / "out" / "offline-kit" / "dist"
|
||||
|
||||
ARTIFACT_TARGETS = {
|
||||
"sbom": Path("sboms"),
|
||||
"provenance": Path("attest"),
|
||||
"signature": Path("signatures"),
|
||||
"metadata": Path("metadata/docker"),
|
||||
}
|
||||
|
||||
|
||||
class CommandError(RuntimeError):
|
||||
"""Raised when an external command fails."""
|
||||
|
||||
|
||||
def run(cmd: Iterable[str], *, cwd: Optional[Path] = None, env: Optional[Mapping[str, str]] = None) -> str:
|
||||
process_env = dict(os.environ)
|
||||
if env:
|
||||
process_env.update(env)
|
||||
result = subprocess.run(
|
||||
list(cmd),
|
||||
cwd=str(cwd) if cwd else None,
|
||||
env=process_env,
|
||||
check=False,
|
||||
capture_output=True,
|
||||
text=True,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
raise CommandError(
|
||||
f"Command failed ({result.returncode}): {' '.join(cmd)}\nSTDOUT:\n{result.stdout}\nSTDERR:\n{result.stderr}"
|
||||
)
|
||||
return result.stdout
|
||||
|
||||
|
||||
def compute_sha256(path: Path) -> str:
|
||||
sha = hashlib.sha256()
|
||||
with path.open("rb") as handle:
|
||||
for chunk in iter(lambda: handle.read(1024 * 1024), b""):
|
||||
sha.update(chunk)
|
||||
return sha.hexdigest()
|
||||
|
||||
|
||||
def utc_now_iso() -> str:
|
||||
return dt.datetime.now(tz=dt.timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")
|
||||
|
||||
|
||||
def safe_component_name(name: str) -> str:
|
||||
return re.sub(r"[^A-Za-z0-9_.-]", "-", name.strip().lower())
|
||||
|
||||
|
||||
def clean_directory(path: Path) -> None:
|
||||
if path.exists():
|
||||
shutil.rmtree(path)
|
||||
path.mkdir(parents=True, exist_ok=True)
|
||||
|
||||
|
||||
def run_python_analyzer_smoke() -> None:
|
||||
script = REPO_ROOT / "ops" / "offline-kit" / "run-python-analyzer-smoke.sh"
|
||||
run(["bash", str(script)], cwd=REPO_ROOT)
|
||||
|
||||
|
||||
def copy_if_exists(source: Path, target: Path) -> None:
|
||||
if source.is_dir():
|
||||
shutil.copytree(source, target, dirs_exist_ok=True)
|
||||
elif source.is_file():
|
||||
target.parent.mkdir(parents=True, exist_ok=True)
|
||||
shutil.copy2(source, target)
|
||||
|
||||
|
||||
def copy_release_manifests(release_dir: Path, staging_dir: Path) -> None:
|
||||
manifest_dir = staging_dir / "manifest"
|
||||
manifest_dir.mkdir(parents=True, exist_ok=True)
|
||||
for name in ("release.yaml", "release.yaml.sha256", "release.json", "release.json.sha256"):
|
||||
source = release_dir / name
|
||||
if source.exists():
|
||||
shutil.copy2(source, manifest_dir / source.name)
|
||||
|
||||
|
||||
def copy_component_artifacts(
|
||||
manifest: Mapping[str, Any],
|
||||
release_dir: Path,
|
||||
staging_dir: Path,
|
||||
) -> None:
|
||||
components = manifest.get("components") or []
|
||||
for component in sorted(components, key=lambda entry: str(entry.get("name", ""))):
|
||||
if not isinstance(component, Mapping):
|
||||
continue
|
||||
component_name = safe_component_name(str(component.get("name", "component")))
|
||||
for key, target_root in ARTIFACT_TARGETS.items():
|
||||
entry = component.get(key)
|
||||
if not entry or not isinstance(entry, Mapping):
|
||||
continue
|
||||
path_str = entry.get("path")
|
||||
if not path_str:
|
||||
continue
|
||||
resolved = resolve_path(str(path_str), release_dir)
|
||||
if not resolved.exists():
|
||||
raise FileNotFoundError(f"Component '{component_name}' {key} artefact not found: {resolved}")
|
||||
target_dir = staging_dir / target_root
|
||||
target_dir.mkdir(parents=True, exist_ok=True)
|
||||
target_name = f"{component_name}-{resolved.name}" if resolved.name else component_name
|
||||
shutil.copy2(resolved, target_dir / target_name)
|
||||
|
||||
|
||||
def copy_collections(
|
||||
manifest: Mapping[str, Any],
|
||||
release_dir: Path,
|
||||
staging_dir: Path,
|
||||
) -> None:
|
||||
for collection, subdir in (("charts", Path("charts")), ("compose", Path("compose"))):
|
||||
entries = manifest.get(collection) or []
|
||||
for entry in entries:
|
||||
if not isinstance(entry, Mapping):
|
||||
continue
|
||||
path_str = entry.get("path")
|
||||
if not path_str:
|
||||
continue
|
||||
resolved = resolve_path(str(path_str), release_dir)
|
||||
if not resolved.exists():
|
||||
raise FileNotFoundError(f"{collection} artefact not found: {resolved}")
|
||||
target_dir = staging_dir / subdir
|
||||
target_dir.mkdir(parents=True, exist_ok=True)
|
||||
shutil.copy2(resolved, target_dir / resolved.name)
|
||||
|
||||
|
||||
def copy_debug_store(release_dir: Path, staging_dir: Path) -> None:
|
||||
mirror_debug_store.main(
|
||||
[
|
||||
"--release-dir",
|
||||
str(release_dir),
|
||||
"--offline-kit-dir",
|
||||
str(staging_dir),
|
||||
]
|
||||
)
|
||||
|
||||
|
||||
def copy_plugins_and_assets(staging_dir: Path) -> None:
|
||||
copy_if_exists(REPO_ROOT / "plugins" / "scanner", staging_dir / "plugins" / "scanner")
|
||||
copy_if_exists(REPO_ROOT / "certificates", staging_dir / "certificates")
|
||||
copy_if_exists(REPO_ROOT / "seed-data", staging_dir / "seed-data")
|
||||
docs_dir = staging_dir / "docs"
|
||||
docs_dir.mkdir(parents=True, exist_ok=True)
|
||||
copy_if_exists(REPO_ROOT / "docs" / "24_OFFLINE_KIT.md", docs_dir / "24_OFFLINE_KIT.md")
|
||||
copy_if_exists(REPO_ROOT / "docs" / "ops" / "telemetry-collector.md", docs_dir / "telemetry-collector.md")
|
||||
copy_if_exists(REPO_ROOT / "docs" / "ops" / "telemetry-storage.md", docs_dir / "telemetry-storage.md")
|
||||
|
||||
|
||||
def package_telemetry_bundle(staging_dir: Path) -> None:
|
||||
script = TELEMETRY_TOOLS_DIR / "package_offline_bundle.py"
|
||||
if not script.exists():
|
||||
return
|
||||
TELEMETRY_BUNDLE_PATH.parent.mkdir(parents=True, exist_ok=True)
|
||||
run(["python", str(script), "--output", str(TELEMETRY_BUNDLE_PATH)], cwd=REPO_ROOT)
|
||||
telemetry_dir = staging_dir / "telemetry"
|
||||
telemetry_dir.mkdir(parents=True, exist_ok=True)
|
||||
shutil.copy2(TELEMETRY_BUNDLE_PATH, telemetry_dir / TELEMETRY_BUNDLE_PATH.name)
|
||||
sha_path = TELEMETRY_BUNDLE_PATH.with_suffix(TELEMETRY_BUNDLE_PATH.suffix + ".sha256")
|
||||
if sha_path.exists():
|
||||
shutil.copy2(sha_path, telemetry_dir / sha_path.name)
|
||||
|
||||
|
||||
def scan_files(staging_dir: Path, exclude: Optional[set[str]] = None) -> list[OrderedDict[str, Any]]:
|
||||
entries: list[OrderedDict[str, Any]] = []
|
||||
exclude = exclude or set()
|
||||
for path in sorted(staging_dir.rglob("*")):
|
||||
if not path.is_file():
|
||||
continue
|
||||
rel = path.relative_to(staging_dir).as_posix()
|
||||
if rel in exclude:
|
||||
continue
|
||||
entries.append(
|
||||
OrderedDict(
|
||||
(
|
||||
("name", rel),
|
||||
("sha256", compute_sha256(path)),
|
||||
("size", path.stat().st_size),
|
||||
)
|
||||
)
|
||||
)
|
||||
return entries
|
||||
|
||||
|
||||
def write_offline_manifest(
|
||||
staging_dir: Path,
|
||||
version: str,
|
||||
channel: str,
|
||||
release_manifest_sha: Optional[str],
|
||||
) -> tuple[Path, str]:
|
||||
manifest_dir = staging_dir / "manifest"
|
||||
manifest_dir.mkdir(parents=True, exist_ok=True)
|
||||
offline_manifest_path = manifest_dir / "offline-manifest.json"
|
||||
files = scan_files(staging_dir, exclude={"manifest/offline-manifest.json", "manifest/offline-manifest.json.sha256"})
|
||||
manifest_data = OrderedDict(
|
||||
(
|
||||
(
|
||||
"bundle",
|
||||
OrderedDict(
|
||||
(
|
||||
("version", version),
|
||||
("channel", channel),
|
||||
("capturedAt", utc_now_iso()),
|
||||
("releaseManifestSha256", release_manifest_sha),
|
||||
)
|
||||
),
|
||||
),
|
||||
("artifacts", files),
|
||||
)
|
||||
)
|
||||
with offline_manifest_path.open("w", encoding="utf-8") as handle:
|
||||
json.dump(manifest_data, handle, indent=2)
|
||||
handle.write("\n")
|
||||
manifest_sha = compute_sha256(offline_manifest_path)
|
||||
(offline_manifest_path.with_suffix(".json.sha256")).write_text(
|
||||
f"{manifest_sha} {offline_manifest_path.name}\n",
|
||||
encoding="utf-8",
|
||||
)
|
||||
return offline_manifest_path, manifest_sha
|
||||
|
||||
|
||||
def tarinfo_filter(tarinfo: tarfile.TarInfo) -> tarfile.TarInfo:
|
||||
tarinfo.uid = 0
|
||||
tarinfo.gid = 0
|
||||
tarinfo.uname = ""
|
||||
tarinfo.gname = ""
|
||||
tarinfo.mtime = 0
|
||||
return tarinfo
|
||||
|
||||
|
||||
def create_tarball(staging_dir: Path, output_dir: Path, bundle_name: str) -> Path:
|
||||
output_dir.mkdir(parents=True, exist_ok=True)
|
||||
bundle_path = output_dir / f"{bundle_name}.tar.gz"
|
||||
if bundle_path.exists():
|
||||
bundle_path.unlink()
|
||||
with tarfile.open(bundle_path, "w:gz", compresslevel=9) as tar:
|
||||
for path in sorted(staging_dir.rglob("*")):
|
||||
if path.is_file():
|
||||
arcname = path.relative_to(staging_dir).as_posix()
|
||||
tar.add(path, arcname=arcname, filter=tarinfo_filter)
|
||||
return bundle_path
|
||||
|
||||
|
||||
def sign_blob(
|
||||
path: Path,
|
||||
*,
|
||||
key_ref: Optional[str],
|
||||
identity_token: Optional[str],
|
||||
password: Optional[str],
|
||||
tlog_upload: bool,
|
||||
) -> Optional[Path]:
|
||||
if not key_ref and not identity_token:
|
||||
return None
|
||||
cmd = ["cosign", "sign-blob", "--yes", str(path)]
|
||||
if key_ref:
|
||||
cmd.extend(["--key", key_ref])
|
||||
if identity_token:
|
||||
cmd.extend(["--identity-token", identity_token])
|
||||
if not tlog_upload:
|
||||
cmd.append("--tlog-upload=false")
|
||||
env = {"COSIGN_PASSWORD": password or ""}
|
||||
signature = run(cmd, env=env)
|
||||
sig_path = path.with_suffix(path.suffix + ".sig")
|
||||
sig_path.write_text(signature, encoding="utf-8")
|
||||
return sig_path
|
||||
|
||||
|
||||
def build_offline_kit(args: argparse.Namespace) -> MutableMapping[str, Any]:
|
||||
release_dir = args.release_dir.resolve()
|
||||
staging_dir = args.staging_dir.resolve()
|
||||
output_dir = args.output_dir.resolve()
|
||||
|
||||
verify_release(release_dir)
|
||||
if not args.skip_smoke:
|
||||
run_python_analyzer_smoke()
|
||||
clean_directory(staging_dir)
|
||||
copy_debug_store(release_dir, staging_dir)
|
||||
|
||||
manifest_data = load_manifest(release_dir)
|
||||
release_manifest_sha = None
|
||||
checksums = manifest_data.get("checksums")
|
||||
if isinstance(checksums, Mapping):
|
||||
release_manifest_sha = checksums.get("sha256")
|
||||
|
||||
copy_release_manifests(release_dir, staging_dir)
|
||||
copy_component_artifacts(manifest_data, release_dir, staging_dir)
|
||||
copy_collections(manifest_data, release_dir, staging_dir)
|
||||
copy_plugins_and_assets(staging_dir)
|
||||
package_telemetry_bundle(staging_dir)
|
||||
|
||||
offline_manifest_path, offline_manifest_sha = write_offline_manifest(
|
||||
staging_dir,
|
||||
args.version,
|
||||
args.channel,
|
||||
release_manifest_sha,
|
||||
)
|
||||
bundle_name = f"stella-ops-offline-kit-{args.version}-{args.channel}"
|
||||
bundle_path = create_tarball(staging_dir, output_dir, bundle_name)
|
||||
bundle_sha = compute_sha256(bundle_path)
|
||||
bundle_sha_prefixed = f"sha256:{bundle_sha}"
|
||||
(bundle_path.with_suffix(".tar.gz.sha256")).write_text(
|
||||
f"{bundle_sha} {bundle_path.name}\n",
|
||||
encoding="utf-8",
|
||||
)
|
||||
|
||||
signature_paths: dict[str, str] = {}
|
||||
sig = sign_blob(
|
||||
bundle_path,
|
||||
key_ref=args.cosign_key,
|
||||
identity_token=args.cosign_identity_token,
|
||||
password=args.cosign_password,
|
||||
tlog_upload=not args.no_transparency,
|
||||
)
|
||||
if sig:
|
||||
signature_paths["bundleSignature"] = str(sig)
|
||||
manifest_sig = sign_blob(
|
||||
offline_manifest_path,
|
||||
key_ref=args.cosign_key,
|
||||
identity_token=args.cosign_identity_token,
|
||||
password=args.cosign_password,
|
||||
tlog_upload=not args.no_transparency,
|
||||
)
|
||||
if manifest_sig:
|
||||
signature_paths["manifestSignature"] = str(manifest_sig)
|
||||
|
||||
metadata = OrderedDict(
|
||||
(
|
||||
("bundleId", args.bundle_id or f"{args.version}-{args.channel}-{utc_now_iso()}"),
|
||||
("bundleName", bundle_path.name),
|
||||
("bundleSha256", bundle_sha_prefixed),
|
||||
("bundleSize", bundle_path.stat().st_size),
|
||||
("manifestName", offline_manifest_path.name),
|
||||
("manifestSha256", f"sha256:{offline_manifest_sha}"),
|
||||
("manifestSize", offline_manifest_path.stat().st_size),
|
||||
("channel", args.channel),
|
||||
("version", args.version),
|
||||
("capturedAt", utc_now_iso()),
|
||||
)
|
||||
)
|
||||
|
||||
if sig:
|
||||
metadata["bundleSignatureName"] = Path(sig).name
|
||||
if manifest_sig:
|
||||
metadata["manifestSignatureName"] = Path(manifest_sig).name
|
||||
|
||||
metadata_path = output_dir / f"{bundle_name}.metadata.json"
|
||||
with metadata_path.open("w", encoding="utf-8") as handle:
|
||||
json.dump(metadata, handle, indent=2)
|
||||
handle.write("\n")
|
||||
|
||||
return OrderedDict(
|
||||
(
|
||||
("bundlePath", str(bundle_path)),
|
||||
("bundleSha256", bundle_sha),
|
||||
("manifestPath", str(offline_manifest_path)),
|
||||
("metadataPath", str(metadata_path)),
|
||||
("signatures", signature_paths),
|
||||
)
|
||||
)
|
||||
|
||||
|
||||
def parse_args(argv: Optional[list[str]] = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument("--version", required=True, help="Bundle version (e.g. 2025.10.0)")
    parser.add_argument("--channel", default="edge", help="Release channel (default: %(default)s)")
    parser.add_argument("--bundle-id", help="Optional explicit bundle identifier")
    parser.add_argument(
        "--release-dir",
        type=Path,
        default=DEFAULT_RELEASE_DIR,
        help="Release artefact directory (default: %(default)s)",
    )
    parser.add_argument(
        "--staging-dir",
        type=Path,
        default=DEFAULT_STAGING_DIR,
        help="Temporary staging directory (default: %(default)s)",
    )
    parser.add_argument(
        "--output-dir",
        type=Path,
        default=DEFAULT_OUTPUT_DIR,
        help="Destination directory for packaged bundles (default: %(default)s)",
    )
    parser.add_argument("--cosign-key", dest="cosign_key", help="Cosign key reference for signing")
    parser.add_argument("--cosign-password", dest="cosign_password", help="Cosign key password (if applicable)")
    parser.add_argument("--cosign-identity-token", dest="cosign_identity_token", help="Cosign identity token")
    parser.add_argument("--no-transparency", action="store_true", help="Disable Rekor transparency log uploads")
    parser.add_argument("--skip-smoke", action="store_true", help="Skip analyzer smoke execution (testing only)")
    return parser.parse_args(argv)
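
# Example invocation (a sketch; --release-dir/--staging-dir/--output-dir fall back to
# the DEFAULT_* constants referenced above, and the cosign options may be omitted when
# no signing material is available):
#
#   python ops/offline-kit/build_offline_kit.py \
#       --version 2025.10.0 \
#       --channel edge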


def main(argv: Optional[list[str]] = None) -> int:
    args = parse_args(argv)
    try:
        result = build_offline_kit(args)
    except Exception as exc:  # pylint: disable=broad-except
        print(f"offline-kit packaging failed: {exc}", file=sys.stderr)
        return 1
    print("✅ Offline kit packaged")
    for key, value in result.items():
        if isinstance(value, dict):
            for sub_key, sub_val in value.items():
                print(f" - {key}.{sub_key}: {sub_val}")
        else:
            print(f" - {key}: {value}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

ops/offline-kit/mirror_debug_store.py (new file, 221 lines)
@@ -0,0 +1,221 @@
#!/usr/bin/env python3
"""Mirror release debug-store artefacts into the Offline Kit staging tree.

This helper copies the release `debug/` directory (including `.build-id/`,
`debug-manifest.json`, and the `.sha256` companion) into the Offline Kit
output directory and verifies the manifest hashes after the copy. A summary
document is written under `metadata/debug-store.json` so packaging jobs can
surface the available build-ids and validation status.
"""

from __future__ import annotations

import argparse
import datetime as dt
import json
import pathlib
import shutil
import sys
from typing import Iterable, Tuple

REPO_ROOT = pathlib.Path(__file__).resolve().parents[2]


def compute_sha256(path: pathlib.Path) -> str:
    import hashlib

    sha = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(1024 * 1024), b""):
            sha.update(chunk)
    return sha.hexdigest()


def load_manifest(manifest_path: pathlib.Path) -> dict:
    with manifest_path.open("r", encoding="utf-8") as handle:
        return json.load(handle)


def parse_manifest_sha(sha_path: pathlib.Path) -> str | None:
    if not sha_path.exists():
        return None
    text = sha_path.read_text(encoding="utf-8").strip()
    if not text:
        return None
    # Allow either "<sha>" or "<sha> filename" formats.
    return text.split()[0]


def iter_debug_files(base_dir: pathlib.Path) -> Iterable[pathlib.Path]:
    for path in base_dir.rglob("*"):
        if path.is_file():
            yield path


def copy_debug_store(source_root: pathlib.Path, target_root: pathlib.Path, *, dry_run: bool) -> None:
    if dry_run:
        print(f"[dry-run] Would copy '{source_root}' -> '{target_root}'")
        return

    if target_root.exists():
        shutil.rmtree(target_root)
    shutil.copytree(source_root, target_root)


def verify_debug_store(manifest: dict, offline_root: pathlib.Path) -> Tuple[int, int]:
    """Return (verified_count, total_entries)."""

    artifacts = manifest.get("artifacts", [])
    verified = 0
    for entry in artifacts:
        debug_path = entry.get("debugPath")
        expected_sha = entry.get("sha256")
        expected_size = entry.get("size")

        if not debug_path or not expected_sha:
            continue

        relative = pathlib.PurePosixPath(debug_path)
        resolved = (offline_root.parent / relative).resolve()

        if not resolved.exists():
            raise FileNotFoundError(f"Debug artefact missing after mirror: {relative}")

        actual_sha = compute_sha256(resolved)
        if actual_sha != expected_sha:
            raise ValueError(
                f"Digest mismatch for {relative}: expected {expected_sha}, found {actual_sha}"
            )

        if expected_size is not None:
            actual_size = resolved.stat().st_size
            if actual_size != expected_size:
                raise ValueError(
                    f"Size mismatch for {relative}: expected {expected_size}, found {actual_size}"
                )

        verified += 1

    return verified, len(artifacts)


def summarize_store(manifest: dict, manifest_sha: str | None, offline_root: pathlib.Path, summary_path: pathlib.Path) -> None:
    debug_files = [
        path
        for path in iter_debug_files(offline_root)
        if path.suffix == ".debug"
    ]

    total_size = sum(path.stat().st_size for path in debug_files)
    build_ids = sorted(
        {entry.get("buildId") for entry in manifest.get("artifacts", []) if entry.get("buildId")}
    )

    summary = {
        "generatedAt": dt.datetime.now(tz=dt.timezone.utc)
        .replace(microsecond=0)
        .isoformat()
        .replace("+00:00", "Z"),
        "manifestGeneratedAt": manifest.get("generatedAt"),
        "manifestSha256": manifest_sha,
        "platforms": manifest.get("platforms")
        or sorted({entry.get("platform") for entry in manifest.get("artifacts", []) if entry.get("platform")}),
        "artifactCount": len(manifest.get("artifacts", [])),
        "buildIds": {
            "total": len(build_ids),
            "samples": build_ids[:10],
        },
        "debugFiles": {
            "count": len(debug_files),
            "totalSizeBytes": total_size,
        },
    }

    summary_path.parent.mkdir(parents=True, exist_ok=True)
    with summary_path.open("w", encoding="utf-8") as handle:
        json.dump(summary, handle, indent=2)
        handle.write("\n")


def resolve_release_debug_dir(base: pathlib.Path) -> pathlib.Path:
    debug_dir = base / "debug"
    if debug_dir.exists():
        return debug_dir

    # Allow specifying the channel directory directly (e.g. out/release/stable)
    if base.name == "debug":
        return base

    raise FileNotFoundError(f"Debug directory not found under '{base}'")


def parse_args(argv: list[str] | None = None) -> argparse.Namespace:
    parser = argparse.ArgumentParser(description=__doc__)
    parser.add_argument(
        "--release-dir",
        type=pathlib.Path,
        default=REPO_ROOT / "out" / "release",
        help="Release output directory containing the debug store (default: %(default)s)",
    )
    parser.add_argument(
        "--offline-kit-dir",
        type=pathlib.Path,
        default=REPO_ROOT / "out" / "offline-kit",
        help="Offline Kit staging directory (default: %(default)s)",
    )
    parser.add_argument(
        "--verify-only",
        action="store_true",
        help="Skip copying and only verify the existing offline kit debug store",
    )
    parser.add_argument(
        "--dry-run",
        action="store_true",
        help="Print actions without copying files",
    )
    return parser.parse_args(argv)


def main(argv: list[str] | None = None) -> int:
    args = parse_args(argv)

    try:
        source_debug = resolve_release_debug_dir(args.release_dir.resolve())
    except FileNotFoundError as exc:
        print(f"error: {exc}", file=sys.stderr)
        return 2

    target_root = (args.offline_kit_dir / "debug").resolve()

    if not args.verify_only:
        copy_debug_store(source_debug, target_root, dry_run=args.dry_run)
        if args.dry_run:
            return 0

    manifest_path = target_root / "debug-manifest.json"
    if not manifest_path.exists():
        print(f"error: offline kit manifest missing at {manifest_path}", file=sys.stderr)
        return 3

    manifest = load_manifest(manifest_path)
    manifest_sha_path = manifest_path.with_suffix(manifest_path.suffix + ".sha256")
    recorded_sha = parse_manifest_sha(manifest_sha_path)
    recomputed_sha = compute_sha256(manifest_path)
    if recorded_sha and recorded_sha != recomputed_sha:
        print(
            f"warning: manifest SHA mismatch (recorded {recorded_sha}, recomputed {recomputed_sha}); updating checksum",
            file=sys.stderr,
        )
    manifest_sha_path.write_text(f"{recomputed_sha} {manifest_path.name}\n", encoding="utf-8")

    verified, total = verify_debug_store(manifest, target_root)
    print(f"✔ verified {verified}/{total} debug artefacts (manifest SHA {recomputed_sha})")

    summary_path = args.offline_kit_dir / "metadata" / "debug-store.json"
    summarize_store(manifest, recomputed_sha, target_root, summary_path)
    print(f"ℹ summary written to {summary_path}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())

ops/offline-kit/run-python-analyzer-smoke.sh (new file, 36 lines)
@@ -0,0 +1,36 @@
#!/usr/bin/env bash
set -euo pipefail

repo_root="$(git -C "${BASH_SOURCE%/*}/.." rev-parse --show-toplevel 2>/dev/null || pwd)"
project_path="${repo_root}/src/StellaOps.Scanner.Analyzers.Lang.Python/StellaOps.Scanner.Analyzers.Lang.Python.csproj"
output_dir="${repo_root}/out/analyzers/python"
plugin_dir="${repo_root}/plugins/scanner/analyzers/lang/StellaOps.Scanner.Analyzers.Lang.Python"

# Convert a path to Windows form when wslpath is available (WSL with a Windows
# dotnet host); otherwise return it unchanged.
to_win_path() {
  if command -v wslpath >/dev/null 2>&1; then
    wslpath -w "$1"
  else
    printf '%s\n' "$1"
  fi
}

rm -rf "${output_dir}"
project_path_win="$(to_win_path "$project_path")"
output_dir_win="$(to_win_path "$output_dir")"

# Publish the Python language analyzer and stage it into the plug-in directory.
dotnet publish "$project_path_win" \
  --configuration Release \
  --output "$output_dir_win" \
  --self-contained false

mkdir -p "${plugin_dir}"
cp "${output_dir}/StellaOps.Scanner.Analyzers.Lang.Python.dll" "${plugin_dir}/"
if [[ -f "${output_dir}/StellaOps.Scanner.Analyzers.Lang.Python.pdb" ]]; then
  cp "${output_dir}/StellaOps.Scanner.Analyzers.Lang.Python.pdb" "${plugin_dir}/"
fi

# Run the language analyzer smoke harness against the freshly staged plug-in.
repo_root_win="$(to_win_path "$repo_root")"
exec dotnet run \
  --project "${repo_root_win}/tools/LanguageAnalyzerSmoke/LanguageAnalyzerSmoke.csproj" \
  --configuration Release \
  -- --repo-root "${repo_root_win}"

ops/offline-kit/test_build_offline_kit.py (new file, 256 lines)
@@ -0,0 +1,256 @@
from __future__ import annotations

import json
import tarfile
import tempfile
import unittest
import argparse
import sys
from collections import OrderedDict
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parent))

from build_release import write_manifest  # type: ignore[import-not-found]

from build_offline_kit import build_offline_kit, compute_sha256  # type: ignore[import-not-found]


class OfflineKitBuilderTests(unittest.TestCase):
    def setUp(self) -> None:
        self._temp = tempfile.TemporaryDirectory()
        self.base_path = Path(self._temp.name)
        self.out_dir = self.base_path / "out"
        self.release_dir = self.out_dir / "release"
        self.staging_dir = self.base_path / "staging"
        self.output_dir = self.base_path / "dist"
        self._create_sample_release()

    def tearDown(self) -> None:
        self._temp.cleanup()

    def _relative_to_out(self, path: Path) -> str:
        return path.relative_to(self.out_dir).as_posix()

    def _write_json(self, path: Path, payload: dict[str, object]) -> None:
        path.parent.mkdir(parents=True, exist_ok=True)
        with path.open("w", encoding="utf-8") as handle:
            json.dump(payload, handle, indent=2)
            handle.write("\n")

    def _create_sample_release(self) -> None:
        self.release_dir.mkdir(parents=True, exist_ok=True)

        sbom_path = self.release_dir / "artifacts/sboms/sample.cyclonedx.json"
        sbom_path.parent.mkdir(parents=True, exist_ok=True)
        sbom_path.write_text('{"bomFormat":"CycloneDX","specVersion":"1.5"}\n', encoding="utf-8")
        sbom_sha = compute_sha256(sbom_path)

        provenance_path = self.release_dir / "artifacts/provenance/sample.provenance.json"
        self._write_json(
            provenance_path,
            {
                "buildDefinition": {"buildType": "https://example/build"},
                "runDetails": {"builder": {"id": "https://example/ci"}},
            },
        )
        provenance_sha = compute_sha256(provenance_path)

        signature_path = self.release_dir / "artifacts/signatures/sample.signature"
        signature_path.parent.mkdir(parents=True, exist_ok=True)
        signature_path.write_text("signature-data\n", encoding="utf-8")
        signature_sha = compute_sha256(signature_path)

        metadata_path = self.release_dir / "artifacts/metadata/sample.metadata.json"
        self._write_json(metadata_path, {"digest": "sha256:1234"})
        metadata_sha = compute_sha256(metadata_path)

        chart_path = self.release_dir / "helm/stellaops-1.0.0.tgz"
        chart_path.parent.mkdir(parents=True, exist_ok=True)
        chart_path.write_bytes(b"helm-chart-data")
        chart_sha = compute_sha256(chart_path)

        compose_path = self.release_dir.parent / "deploy/compose/docker-compose.dev.yaml"
        compose_path.parent.mkdir(parents=True, exist_ok=True)
        compose_path.write_text("services: {}\n", encoding="utf-8")
        compose_sha = compute_sha256(compose_path)

        debug_file = self.release_dir / "debug/.build-id/ab/cdef.debug"
        debug_file.parent.mkdir(parents=True, exist_ok=True)
        debug_file.write_bytes(b"\x7fELFDEBUGDATA")
        debug_sha = compute_sha256(debug_file)

        debug_manifest_path = self.release_dir / "debug/debug-manifest.json"
        debug_manifest = OrderedDict(
            (
                ("generatedAt", "2025-10-26T00:00:00Z"),
                ("version", "1.0.0"),
                ("channel", "edge"),
                (
                    "artifacts",
                    [
                        OrderedDict(
                            (
                                ("buildId", "abcdef1234"),
                                ("platform", "linux/amd64"),
                                ("debugPath", "debug/.build-id/ab/cdef.debug"),
                                ("sha256", debug_sha),
                                ("size", debug_file.stat().st_size),
                                ("components", ["sample"]),
                                ("images", ["registry.example/sample@sha256:feedface"]),
                                ("sources", ["app/sample.dll"]),
                            )
                        )
                    ],
                ),
            )
        )
        self._write_json(debug_manifest_path, debug_manifest)
        debug_manifest_sha = compute_sha256(debug_manifest_path)
        (debug_manifest_path.with_suffix(debug_manifest_path.suffix + ".sha256")).write_text(
            f"{debug_manifest_sha} {debug_manifest_path.name}\n",
            encoding="utf-8",
        )

        manifest = OrderedDict(
            (
                (
                    "release",
                    OrderedDict(
                        (
                            ("version", "1.0.0"),
                            ("channel", "edge"),
                            ("date", "2025-10-26T00:00:00Z"),
                            ("calendar", "2025.10"),
                        )
                    ),
                ),
                (
                    "components",
                    [
                        OrderedDict(
                            (
                                ("name", "sample"),
                                ("image", "registry.example/sample@sha256:feedface"),
                                ("tags", ["registry.example/sample:1.0.0"]),
                                (
                                    "sbom",
                                    OrderedDict(
                                        (
                                            ("path", self._relative_to_out(sbom_path)),
                                            ("sha256", sbom_sha),
                                        )
                                    ),
                                ),
                                (
                                    "provenance",
                                    OrderedDict(
                                        (
                                            ("path", self._relative_to_out(provenance_path)),
                                            ("sha256", provenance_sha),
                                        )
                                    ),
                                ),
                                (
                                    "signature",
                                    OrderedDict(
                                        (
                                            ("path", self._relative_to_out(signature_path)),
                                            ("sha256", signature_sha),
                                            ("ref", "sigstore://example"),
                                            ("tlogUploaded", True),
                                        )
                                    ),
                                ),
                                (
                                    "metadata",
                                    OrderedDict(
                                        (
                                            ("path", self._relative_to_out(metadata_path)),
                                            ("sha256", metadata_sha),
                                        )
                                    ),
                                ),
                            )
                        )
                    ],
                ),
                (
                    "charts",
                    [
                        OrderedDict(
                            (
                                ("name", "stellaops"),
                                ("version", "1.0.0"),
                                ("path", self._relative_to_out(chart_path)),
                                ("sha256", chart_sha),
                            )
                        )
                    ],
                ),
                (
                    "compose",
                    [
                        OrderedDict(
                            (
                                ("name", "docker-compose.dev.yaml"),
                                ("path", compose_path.relative_to(self.out_dir).as_posix()),
                                ("sha256", compose_sha),
                            )
                        )
                    ],
                ),
                (
                    "debugStore",
                    OrderedDict(
                        (
                            ("manifest", "debug/debug-manifest.json"),
                            ("sha256", debug_manifest_sha),
                            ("entries", 1),
                            ("platforms", ["linux/amd64"]),
                            ("directory", "debug/.build-id"),
                        )
                    ),
                ),
            )
        )
        write_manifest(manifest, self.release_dir)

    def test_build_offline_kit(self) -> None:
        args = argparse.Namespace(
            version="2025.10.0",
            channel="edge",
            bundle_id="bundle-001",
            release_dir=self.release_dir,
            staging_dir=self.staging_dir,
            output_dir=self.output_dir,
            cosign_key=None,
            cosign_password=None,
            cosign_identity_token=None,
            no_transparency=False,
            skip_smoke=True,
        )
        result = build_offline_kit(args)
        bundle_path = Path(result["bundlePath"])
        self.assertTrue(bundle_path.exists())
        offline_manifest = self.output_dir.parent / "staging" / "manifest" / "offline-manifest.json"
        self.assertTrue(offline_manifest.exists())

        with offline_manifest.open("r", encoding="utf-8") as handle:
            manifest_data = json.load(handle)
        artifacts = manifest_data["artifacts"]
        self.assertTrue(any(item["name"].startswith("sboms/") for item in artifacts))

        metadata_path = Path(result["metadataPath"])
        data = json.loads(metadata_path.read_text(encoding="utf-8"))
        self.assertTrue(data["bundleSha256"].startswith("sha256:"))
        self.assertTrue(data["manifestSha256"].startswith("sha256:"))

        with tarfile.open(bundle_path, "r:gz") as tar:
            members = tar.getnames()
            self.assertIn("manifest/release.yaml", members)
            self.assertTrue(any(name.startswith("sboms/sample-") for name in members))


if __name__ == "__main__":
    unittest.main()
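
# A minimal sketch of how this suite can be run from the repository root
# (the __main__ guard above also allows executing the file directly):
#
#   python ops/offline-kit/test_build_offline_kit.py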