Installation Guide
How to run Stella Ops from this repository using Docker Compose.
Prerequisites
- Docker Engine with Compose v2 (`docker compose version`)
- Enough disk for container images plus scan artifacts (SBOMs, logs, caches)
- For production-style installs, plan for persistent volumes (PostgreSQL + object storage) and a secrets provider
Runtime data assets (read before first deploy)
Stella Ops services depend on runtime data assets that are not produced by
dotnet publish — ML model weights for semantic search, JDK/Ghidra for binary
analysis, certificates, and more. Without them, services start but operate in
degraded mode.
```shell
# Download and verify all runtime assets
./devops/runtime-assets/acquire.sh --all

# Or just the embedding model (required for semantic search)
./devops/runtime-assets/acquire.sh --models

# Verify existing assets
./devops/runtime-assets/acquire.sh --verify
```
See `devops/runtime-assets/README.md` for the complete inventory, Docker volume mount instructions, and air-gap packaging.
Quick path (automated setup scripts)
The fastest way to get running. The setup scripts validate prerequisites, configure the environment, start infrastructure, build solutions, build Docker images, and launch the full platform.
Windows (PowerShell 7):
```powershell
.\scripts\setup.ps1                          # full setup
.\scripts\setup.ps1 -InfraOnly               # infrastructure only (PostgreSQL, Valkey, RustFS, Rekor, Zot)
.\scripts\setup.ps1 -QaIntegrationFixtures   # full setup plus Harbor/GitHub App QA fixtures
```
Linux / macOS:
```shell
./scripts/setup.sh                             # full setup
./scripts/setup.sh --infra-only                # infrastructure only
./scripts/setup.sh --qa-integration-fixtures   # full setup plus Harbor/GitHub App QA fixtures
```
The scripts will:
- Check prerequisites (dotnet 10.x, node 20+, docker, git)
- Offer to install hosts file entries automatically
- Copy `env/stellaops.env.example` to `.env` if needed (works out of the box)
- Start infrastructure and wait for healthy containers
- Create or reuse the external frontdoor Docker network from `.env` (`FRONTDOOR_NETWORK`, default `stellaops_frontdoor`)
- Stop repo-local host-run Stella services that would lock build outputs, then build repo-owned .NET solutions and publish backend services locally into small Docker contexts before building hardened runtime images (vendored or generated trees such as `node_modules`, `dist`, `coverage`, and `output` are excluded)
- Launch the full platform with health checks, perform one bounded restart pass for services that stay unhealthy after first boot, wait for the first-user frontdoor bootstrap path (`/welcome`, `/envsettings.json`, OIDC discovery, `/connect/authorize`), then complete an authenticated convergence gate that proves topology inventory, notifications administration overrides, and promotion bootstrap flows load cleanly before reporting success
- If `-QaIntegrationFixtures` / `--qa-integration-fixtures` is enabled, start deterministic Harbor and GitHub App fixtures and verify them so the local Integrations Hub can be exercised with successful UI onboarding
Open https://stella-ops.local when setup completes.
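A rough shell stand-in for the prerequisite probe those scripts run (tool list from the bullets above; version pinning is omitted, so treat this as a sketch rather than the scripts' actual logic):

```shell
# Check that each required tool is on PATH; versions are not validated here.
need() { command -v "$1" >/dev/null 2>&1; }

missing=""
for tool in dotnet node docker git; do
  need "$tool" || missing="$missing $tool"
done

[ -z "$missing" ] && echo "prerequisites present" || echo "install first:$missing"
```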
The automated setup path does not start the real third-party integration compose lane: `devops/compose/docker-compose.testing.yml` is the CI/testing lane, and the optional real providers live in `devops/compose/docker-compose.integrations.yml`. GitLab and Consul are opt-in there because they add noticeable idle CPU overhead.
For targeted backend rebuilds after a scoped code change on Windows:
```powershell
.\devops\docker\build-all.ps1 -Services notify-web,orchestrator
```
This path avoids re-sending the full monorepo to Docker for every .NET service image.
Manual path (step by step)
1. Environment file
```shell
cd devops/compose
cp env/stellaops.env.example .env
```
The example file ships with working local-dev defaults. For production, change `POSTGRES_PASSWORD` and review all values.
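For example, one way to rotate that value in place — a sketch assuming `openssl` and `sed` are available (`rotate_pg_password` is a hypothetical helper name, not a repo script):

```shell
# Swap POSTGRES_PASSWORD in an env file for a fresh random value.
# A .bak copy of the file is kept next to it.
rotate_pg_password() {
  env_file="$1"
  pw="$(openssl rand -base64 32)"   # 32 random bytes, base64-encoded
  sed -i.bak "s|^POSTGRES_PASSWORD=.*|POSTGRES_PASSWORD=${pw}|" "$env_file"
}

# Usage, from devops/compose after copying the example file:
# rotate_pg_password .env
```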
2. Hosts file
Stella Ops services bind to unique loopback IPs so all can use port 443 without collisions. Add the entries from devops/compose/hosts.stellaops.local to your hosts file:
Runtime URL convention remains *.stella-ops.local; hosts.stellaops.local is the template file name only.
The same template also carries the optional harbor-fixture.stella-ops.local and github-app-fixture.stella-ops.local aliases used by the fixture-backed integrations QA lane.
- Windows: edit `C:\Windows\System32\drivers\etc\hosts` (run the editor as Administrator)
- Linux / macOS: `sudo sh -c 'cat devops/compose/hosts.stellaops.local >> /etc/hosts'`
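To confirm the entries actually landed, a small POSIX-shell sketch (the `check_hosts` helper is illustrative, not a repo script):

```shell
# Report any hostname from the template that is missing from the hosts file.
check_hosts() {
  template="$1" hosts_file="$2" missing=0
  while read -r ip host _; do
    case "$ip" in ''|\#*) continue ;; esac   # skip blanks and comments
    grep -qwF "$host" "$hosts_file" || { echo "missing: $host"; missing=1; }
  done < "$template"
  return "$missing"
}

if [ -f devops/compose/hosts.stellaops.local ]; then
  check_hosts devops/compose/hosts.stellaops.local /etc/hosts && echo "hosts OK"
fi
```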
3. Start infrastructure
```shell
cd devops/compose
docker compose -f docker-compose.dev.yml up -d
docker compose -f docker-compose.dev.yml ps   # verify all healthy
```
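The `ps` check is a point-in-time snapshot. A generic retry helper of this shape (a sketch, not one of the repo scripts) can block until a health probe passes:

```shell
# Re-run a command every 2s until it succeeds or the attempt budget runs out.
wait_for() {
  attempts="$1"; shift
  while [ "$attempts" -gt 0 ]; do
    "$@" && return 0
    attempts=$((attempts - 1))
    sleep 2
  done
  return 1
}

# Example: wait for a healthy postgres container. The container name is an
# assumption; take the real one from `docker compose -f docker-compose.dev.yml ps`.
# wait_for 30 sh -c \
#   'docker inspect -f "{{.State.Health.Status}}" postgres | grep -q healthy'
```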
4. Start the full platform
Create or reuse the external frontdoor network first:
```shell
docker network inspect "${FRONTDOOR_NETWORK:-stellaops_frontdoor}" >/dev/null 2>&1 || \
  docker network create "${FRONTDOOR_NETWORK:-stellaops_frontdoor}"

docker compose -f docker-compose.stella-ops.yml up -d
```
Optional overlays:
```shell
# With Sigstore transparency log
docker compose -f docker-compose.stella-ops.yml --profile sigstore up -d

# With telemetry stack (Prometheus, Tempo, Loki)
docker compose -f docker-compose.stella-ops.yml -f docker-compose.telemetry.yml up -d
```
4a. Migration preflight and execution
Run a migration preflight after bringing up the stack:
```shell
# Check migration status for currently registered CLI modules
stella system migrations-status --module all

# Validate checksums for currently registered CLI modules
stella system migrations-verify --module all

# Optional: preview release migrations before any execution
stella system migrations-run --module all --category release --dry-run
```
If release migrations must be executed:
```shell
stella system migrations-run --module all --category release --force
stella system migrations-status --module all
```
Canonical policy for upgradeable on-prem installs:
- Use this CLI sequence as the required migration gate before rollouts and cutovers.
- Do not rely on Postgres init scripts for release upgrades.
- Use `docs/db/MIGRATION_CONSOLIDATION_PLAN.md` and `docs/db/MIGRATION_INVENTORY.md` to confirm module coverage and cutover wave state.
- On empty migration history, CLI/API paths synthesize one per-service consolidated migration (`100_consolidated_<service>.sql`) and then backfill legacy migration history rows to preserve incremental upgrade compatibility.
- If consolidated history exists with partial legacy backfill, CLI/API paths auto-backfill missing legacy rows before source-set execution.
- UI-driven migration operations must call Platform WebService admin endpoints (`/api/v1/admin/migrations/*`) with `platform.setup.admin`; do not connect the browser directly to PostgreSQL.
- The Platform migration API implementation is in `src/Platform/StellaOps.Platform.WebService/Endpoints/MigrationAdminEndpoints.cs` and uses `src/Platform/__Libraries/StellaOps.Platform.Database/MigrationModuleRegistry.cs`.
Notes:
- Compose PostgreSQL bootstrap scripts in `devops/compose/postgres-init` run only on first database initialization.
- `devops/compose/postgres-init/14-platform-environment-settings.sql` now leaves `platform.environment_settings` empty on fresh local databases so the setup wizard owns first-run completion truth. Older local volumes with the legacy `(tenant_id, key)` table shape are converged by Platform release migration `064_EnvironmentSettingsInstallationScopeConvergence.sql`.
- Startup-hosted migrations are currently wired only for selected modules; CLI coverage is also module-limited.
- For the authoritative current-state module matrix, use `docs/db/MIGRATION_INVENTORY.md`.
5. Verify
```shell
docker compose -f docker-compose.stella-ops.yml ps
curl -k https://stella-ops.local   # should return the Angular UI
```
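For scripting, the same check can assert on the status code instead of dumping HTML (`ui_healthy` is an illustrative helper, not part of the repo):

```shell
# Return success only when the URL answers HTTP 200; -k accepts the local
# self-signed certificate.
ui_healthy() {
  code="$(curl -k -s -o /dev/null --max-time 10 -w '%{http_code}' "$1")"
  [ "$code" = "200" ]
}

ui_healthy https://stella-ops.local && echo "UI reachable" || echo "UI not reachable yet"
```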
After the Angular UI is reachable, the supported local operator lanes are:
Browser-driven operator lane
Use the live browser UI at https://stella-ops.local and open the
Integrations Hub at /setup/integrations.
For a repeatable browser-driven run against the live frontdoor:
```shell
node src/Web/StellaOps.Web/scripts/live-integrations-ui-bootstrap.mjs
```
This harness signs in through the same frontdoor flow, drives the `/setup/integrations/onboarding/*` routes in a real browser, and writes evidence to `src/Web/StellaOps.Web/output/playwright/live-integrations-ui-bootstrap.json`.
For a repeatable browser-driven proof of the setup wizard’s truthful state model:
```shell
node src/Web/StellaOps.Web/scripts/live-setup-wizard-state-truth-check.mjs
```
This harness signs in through the frontdoor, forces a fresh installation-scoped setup session, proves that the database probe does not complete the step, proves that apply advances the backend state to the cache step, and proves that a page reload resumes the same persisted session. Evidence is written to `src/Web/StellaOps.Web/output/playwright/live-setup-wizard-state-truth-check.json`.
Verified current UI boundary on 2026-04-14:
- The browser flow can create the full 16-entry local integration catalog.
- GitLab-class providers can now be created from the UI without a manual Vault write because the Integrations Hub stages credentials through Secret Authority before binding the returned `authref://...`.
- The setup wizard now persists authoritative installation-scoped progress in `platform.setup_sessions` and owns only the five control-plane steps the running control plane can truthfully converge: PostgreSQL, Valkey, schema migrations, admin bootstrap, and crypto profile.
- The Admin step depends on Platform reaching Authority's internal bootstrap endpoint with the shared bootstrap API key. In local compose, this is wired by forwarding `AUTHORITY_BOOTSTRAP_APIKEY` into Platform as `STELLAOPS_BOOTSTRAP_KEY`.
- Tenant-scoped onboarding stays on `/setup/*` and other authenticated module surfaces instead of being duplicated inside the bootstrap wizard.
- The inline GitLab path still needs real credential input from the operator. For repeatable automation, the Playwright harness reads those values from `STELLAOPS_UI_BOOTSTRAP_GITLAB_ACCESS_TOKEN` and `STELLAOPS_UI_BOOTSTRAP_GITLAB_REGISTRY_BASIC`. `scripts/bootstrap-local-gitlab-secrets.ps1` remains the scripted fallback when you want to pre-stage the local GitLab authrefs without using the UI.
Scripted convergence lane
For a fresh local developer install, populate the live integration catalog with:
```powershell
powershell -ExecutionPolicy Bypass -File scripts/register-local-integrations.ps1 `
  -Tenant demo-prod
```
This converges the default local-ready lane to 13 healthy providers:
Harbor fixture, Docker Registry, Nexus, GitHub App fixture, Gitea, Jenkins,
Vault, Consul, eBPF runtime-host fixture, MinIO, and the three feed mirror
providers (StellaOpsMirror, NvdMirror, OsvMirror).
GitLab server/CI and the GitLab registry remain opt-in because they require Vault-backed credentials. The scripted local path is:
```powershell
powershell -ExecutionPolicy Bypass -File scripts/bootstrap-local-gitlab-secrets.ps1 `
  -VerifyRegistry

powershell -ExecutionPolicy Bypass -File scripts/register-local-integrations.ps1 `
  -Tenant demo-prod `
  -IncludeGitLab

powershell -ExecutionPolicy Bypass -File scripts/register-local-integrations.ps1 `
  -Tenant demo-prod `
  -IncludeGitLab `
  -IncludeGitLabRegistry
```
Or run the GitLab-backed registration in one step:
```powershell
powershell -ExecutionPolicy Bypass -File scripts/register-local-integrations.ps1 `
  -Tenant demo-prod `
  -IncludeGitLab `
  -IncludeGitLabRegistry `
  -BootstrapGitLabSecrets
```
`scripts/bootstrap-local-gitlab-secrets.ps1` reuses a valid `secret/gitlab` secret when possible and otherwise rotates the local `stella-local-integration` PAT, then writes both `authref://vault/gitlab#access-token` and `authref://vault/gitlab#registry-basic` into the dev Vault.
Air-gapped deployments
For offline/air-gapped environments, use the sealed CI compose file and offline telemetry overlay:
```shell
# Sealed CI environment (authority, signer, attestor in isolation)
docker compose -f docker-compose.sealed-ci.yml up -d

# Offline observability (no external endpoints)
docker compose -f docker-compose.stella-ops.yml -f docker-compose.telemetry-offline.yml up -d

# Tile proxy for air-gapped Sigstore verification
docker compose -f docker-compose.stella-ops.yml -f docker-compose.tile-proxy.yml up -d
```
For offline bundles, imports, and update workflows, see:
- `docs/OFFLINE_KIT.md`
- `docs/modules/airgap/guides/overview.md`
Regional compliance overlays
| Region | Testing | Production |
|---|---|---|
| China (SM2/SM3/SM4) | `docker-compose.compliance-china.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-china.yml` + `docker-compose.crypto-provider.smremote.yml` |
| Russia (GOST) | `docker-compose.compliance-russia.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-russia.yml` + `docker-compose.crypto-provider.cryptopro.yml` |
| EU (eIDAS) | `docker-compose.compliance-eu.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-eu.yml` |
See devops/compose/README.md for detailed compliance deployment instructions.
Hardening: require Authority for Concelier job triggers
If Concelier is exposed to untrusted networks, require Authority-issued tokens for /jobs* endpoints:
```shell
CONCELIER_AUTHORITY__ENABLED=true
CONCELIER_AUTHORITY__ALLOWANONYMOUSFALLBACK=false
```
Store the client secret outside source control (Docker secrets, mounted file, or Kubernetes Secret). For audit fields and alerting guidance, see docs/modules/concelier/operations/authority-audit-runbook.md.
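A minimal sketch of the mounted-file option (assumes `openssl`; the file path is illustrative, and how Concelier consumes the file is deployment-specific):

```shell
# Generate a client secret into a file readable only by its owner; point a
# Docker secret or read-only bind mount at this file instead of putting the
# value in .env.
SECRET_FILE="${SECRET_FILE:-./concelier-client-secret}"
umask 077                            # files created below are owner-only
openssl rand -base64 32 > "$SECRET_FILE"
```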
Next steps
- Quickstart: `docs/quickstart.md`
- Developer setup details: `docs/dev/DEV_ENVIRONMENT_SETUP.md`
- Architecture overview: `docs/ARCHITECTURE_OVERVIEW.md`
- Compose profiles reference: `devops/compose/README.md`