Stella Ops Docker Compose Profiles

Consolidated Docker Compose configuration for the StellaOps platform. All profiles use immutable image digests from deploy/releases/*.yaml and are validated via docker compose config in CI.

Quick Reference

I want to... Command
Run the full platform docker compose -f docker-compose.stella-ops.yml up -d
Add observability docker compose -f docker-compose.stella-ops.yml -f docker-compose.telemetry.yml up -d
Start QA integration fixtures docker compose -f docker-compose.integration-fixtures.yml up -d
Start the default low-idle 3rd-party integration lane docker compose -f docker-compose.integrations.yml up -d
Start Consul KV only when needed docker compose -f docker-compose.integrations.yml --profile consul up -d consul
Start GitLab CE (heavy, low-idle defaults) docker compose -f docker-compose.integrations.yml --profile heavy up -d gitlab
Run integration E2E test suite See Integration Test Suite
Run CI/testing infrastructure docker compose -f docker-compose.testing.yml --profile ci up -d
Deploy with China compliance See China Compliance
Deploy with Russia compliance See Russia Compliance
Deploy with EU compliance See EU Compliance

File Structure

Core Stack Files

File Purpose
docker-compose.stella-ops.yml Main stack: PostgreSQL 18.1, Valkey 9.0.1, RustFS, Rekor v2, all StellaOps services
docker-compose.integration-fixtures.yml QA success fixtures: Harbor and GitHub App API fixtures for retained onboarding verification
docker-compose.telemetry.yml Observability: OpenTelemetry collector, Prometheus, Tempo, Loki
docker-compose.testing.yml CI/Testing: Test databases, mock services, Gitea for integration tests
docker-compose.dev.yml Minimal dev infrastructure: PostgreSQL, Valkey, RustFS only
docker-compose.integrations.yml Integration services: Gitea, Jenkins, Nexus, Vault, Docker Registry, MinIO, plus opt-in Consul and GitLab

Specialized Infrastructure

File Purpose
docker-compose.bsim.yml BSim analysis: PostgreSQL for Ghidra binary similarity corpus
docker-compose.corpus.yml Function corpus: PostgreSQL for function behavior database
docker-compose.sealed-ci.yml Air-gapped CI: Sealed testing environment with authority, signer, attestor
docker-compose.telemetry-offline.yml Offline observability: Air-gapped Loki, Promtail, OTEL collector, Tempo, Prometheus

Regional Compliance Overlays

File Purpose Jurisdiction
docker-compose.compliance-china.yml SM2/SM3/SM4 ShangMi crypto configuration China (OSCCA)
docker-compose.compliance-russia.yml GOST R 34.10-2012 crypto configuration Russia (FSB)
docker-compose.compliance-eu.yml eIDAS qualified trust services configuration EU

Crypto Provider Overlays

File Purpose Use Case
docker-compose.crypto-sim.yml Universal crypto simulation Testing without licensed crypto
docker-compose.cryptopro.yml CryptoPro CSP (real GOST) Production Russia deployments
docker-compose.sm-remote.yml SM Remote service (real SM2) Production China deployments

Additional Overlays

File Purpose Use Case
docker-compose.gpu.yaml NVIDIA GPU acceleration Advisory AI inference with GPU
docker-compose.cas.yaml Content Addressable Storage Dedicated CAS with retention policies
docker-compose.tile-proxy.yml Rekor tile caching proxy Air-gapped Sigstore deployments

Supporting Files

Path Purpose
env/*.env.example Environment variable templates per profile
scripts/backup.sh Create deterministic volume snapshots
scripts/reset.sh Stop stack and remove volumes (with confirmation)

Usage Patterns

Migration Workflow (Compose)

Use this sequence for deterministic migration handling in compose-based deployments:

# 1) Start stack (or restart after release image update)
docker compose -f docker-compose.stella-ops.yml up -d

# 2) Check migration status for CLI-registered modules
stella system migrations-status --module all

# 3) Verify checksums
stella system migrations-verify --module all

# 4) Preview release migrations
stella system migrations-run --module all --category release --dry-run

# 5) Execute release migrations when approved
stella system migrations-run --module all --category release --force

# 6) Re-check status
stella system migrations-status --module all

This sequence is the canonical migration gate for on-prem upgradeable deployments.

Current behavior details:

  • ./postgres-init scripts execute only during first PostgreSQL initialization (/docker-entrypoint-initdb.d mount).
  • Some services run startup migrations via hosted services; others are currently CLI-only or not wired yet.
  • Use docs/db/MIGRATION_INVENTORY.md as the authoritative current-state matrix before production upgrades.
  • Consolidation target policy and module cutover waves are defined in docs/db/MIGRATION_CONSOLIDATION_PLAN.md.
  • On empty migration history, CLI/API paths synthesize one per-service consolidated migration (100_consolidated_<service>.sql) and backfill legacy migration history rows for future update compatibility.
  • If consolidated history exists with partial legacy backfill, CLI/API paths auto-backfill missing legacy rows before source-set execution.
  • UI-driven migration execution must use Platform admin endpoints (/api/v1/admin/migrations/*) and never direct browser-to-PostgreSQL access.
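
For repeatable upgrades, the six steps above can be captured as one helper. A minimal sketch, assuming the stella CLI shown above is on PATH; the STELLA override and the function name are illustrative, not part of the repo:

```shell
#!/usr/bin/env bash
# Sketch of the canonical migration gate as a single function.
# STELLA defaults to the real CLI; set STELLA="echo stella" to preview the sequence.
STELLA="${STELLA:-stella}"

migration_gate() {
  $STELLA system migrations-status --module all &&
  $STELLA system migrations-verify --module all &&
  $STELLA system migrations-run --module all --category release --dry-run &&
  $STELLA system migrations-run --module all --category release --force &&
  $STELLA system migrations-status --module all
}
```

Running STELLA="echo stella" migration_gate prints the exact command sequence, which is a convenient way to review the gate before executing it for real.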

Basic Development

# Copy environment template
cp env/stellaops.env.example .env

# Validate configuration
docker compose -f docker-compose.stella-ops.yml config

# Start the platform
docker compose -f docker-compose.stella-ops.yml up -d

# RustFS health probe (S3 mode)
curl -fsS http://127.1.1.3:8080/status

# View logs
docker compose -f docker-compose.stella-ops.yml logs -f scanner-web
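
Services take a few seconds to pass health checks after up -d, so one-shot probes like the RustFS check above can fail transiently right after startup. A small retry helper (a sketch; wait_for is not one of the repo scripts):

```shell
# wait_for URL [TRIES]: poll an HTTP endpoint until it returns 2xx, retrying every 2s.
wait_for() {
  url="$1"; tries="${2:-30}"; i=0
  until curl -fsS "$url" >/dev/null 2>&1; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 2
  done
}

# Example: block until RustFS answers, then tail the scanner logs
# wait_for http://127.1.1.3:8080/status && \
#   docker compose -f docker-compose.stella-ops.yml logs -f scanner-web
```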

Router Frontdoor Configuration

router-gateway uses the microservice-first route table in router-gateway-local.json. First-party Stella APIs are expected to flow through router transport; reverse proxy remains only for external/bootstrap surfaces that cannot participate in router registration yet (for example OIDC browser flows, Rekor, and static/platform bootstrap assets).

The local route table also carries a small set of explicit precedence rules that must stay ahead of the generic ^/api/v1/{service} and ^/api/v2/{service} matchers. Platform-owned surfaces such as /api/v1/aoc/*, /api/v1/administration/*, and the aggregated v2 read models /api/v2/context/*, /api/v2/releases/*, /api/v2/security/*, /api/v2/topology/*, /api/v2/evidence/*, and /api/v2/integrations/* resolve directly to platform. Browser compatibility prefixes such as /doctor/* and /scheduler/* are matched segment-bound before the frontdoor prefix is stripped for dispatch, which keeps the target services on their canonical /api/v1/doctor/* and /api/v1/scheduler/* paths without hijacking frontend assets like doctor.routes-*.js or scheduler-ops.routes-*.js.
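
As an illustration of that ordering only, a route table honoring those precedence rules might look like the fragment below. This is a sketch: the field names (prefix, target, rewrite, segmentBound) are invented for illustration and are not the actual router-gateway-local.json schema.

```jsonc
{
  "routes": [
    // Explicit platform-owned surfaces resolve first...
    { "prefix": "/api/v1/aoc/", "target": "platform" },
    { "prefix": "/api/v2/context/", "target": "platform" },
    // ...then segment-bound browser prefixes that rewrite to canonical API paths
    // (segment-bound: "/doctor/..." matches, "doctor.routes-abc.js" does not)...
    { "prefix": "/doctor/", "segmentBound": true, "rewrite": "/api/v1/doctor/", "target": "doctor" },
    { "prefix": "/scheduler/", "segmentBound": true, "rewrite": "/api/v1/scheduler/", "target": "scheduler" },
    // ...and only then the generic per-service matchers.
    { "pattern": "^/api/v1/(?<service>[^/]+)/", "target": "{service}" },
    { "pattern": "^/api/v2/(?<service>[^/]+)/", "target": "{service}" }
  ]
}
```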

# Default frontdoor route table
ROUTER_GATEWAY_CONFIG=./router-gateway-local.json \
docker compose -f docker-compose.stella-ops.yml up -d

# Optional: scratch redeploy helper with health recovery + header-search smoke checks
pwsh ./scripts/router-mode-redeploy.ps1 -Mode microservice

The local compose defaults intentionally keep router control traffic calm: ROUTER_MESSAGING_HEARTBEAT_INTERVAL defaults to 30s so the stack does not churn small heartbeat traffic every 10 seconds across the full service fleet. Messaging endpoint/schema/OpenAPI replay is no longer periodic; it now happens on service startup, gateway-state recovery, or explicit administration resync. ROUTER_REGISTRATION_REFRESH_INTERVAL_SECONDS remains exposed only as a compatibility knob for older assumptions and non-messaging experiments.

Validation endpoints:

# Aggregated OpenAPI
curl -k https://stella-ops.local/openapi.json

# Timeline API schema (through router-gateway)
curl -k https://stella-ops.local/openapi.json | jq '.paths["/api/v1/timeline"]'

# Header search routing smoke (fails on missing /api/v1/search* or /api/v1/advisory-ai/search* routes)
pwsh ./scripts/header-search-smoke.ps1

With Observability

# Generate TLS certificates for telemetry
./ops/devops/telemetry/generate_dev_tls.sh

# Start platform with telemetry
docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.telemetry.yml up -d

CI/Testing Infrastructure

# Start CI infrastructure only (different ports to avoid conflicts)
docker compose -f docker-compose.testing.yml --profile ci up -d

# Start mock services for integration testing
docker compose -f docker-compose.testing.yml --profile mock up -d

# Start Gitea for SCM integration tests
docker compose -f docker-compose.testing.yml --profile gitea up -d

# Start everything
docker compose -f docker-compose.testing.yml --profile all up -d

Test Infrastructure Ports:

Service Port Purpose
postgres-test 5433 PostgreSQL 18 for tests
valkey-test 6380 Valkey for cache/queue tests
rustfs-test 8180 S3-compatible storage
mock-registry 5001 Container registry mock
gitea 3000 Git hosting for SCM tests
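
To confirm the lane is actually listening before pointing tests at it, the ports from the table can be probed with bash's built-in /dev/tcp, so no extra tools are needed (a sketch; check_ports is not a repo script):

```shell
# check_ports PORT...: report open/closed for each given port on localhost.
check_ports() {
  for p in "$@"; do
    if (exec 3<>"/dev/tcp/127.0.0.1/$p") 2>/dev/null; then
      echo "port $p: open"
    else
      echo "port $p: closed"
    fi
  done
}

# Example: probe the test-infrastructure ports from the table above
# check_ports 5433 6380 8180 5001 3000
```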

QA Integration Success Fixtures

Use the fixture compose lane when you need a scratch stack to prove successful Integrations Hub onboarding for the providers currently exposed in the local UI (Harbor, GitHub App).

# Start the fixture services after the main stack is up
docker compose -f docker-compose.integration-fixtures.yml up -d

# Harbor success-path endpoint
curl http://harbor-fixture.stella-ops.local/api/v2.0/health

# GitHub App success-path endpoints
curl http://github-app-fixture.stella-ops.local/api/v3/app
curl http://github-app-fixture.stella-ops.local/api/v3/rate_limit

Fixture endpoints:

Service Hostname Port Contract
Harbor fixture harbor-fixture.stella-ops.local 80 GET /api/v2.0/health -> healthy JSON + X-Harbor-Version
GitHub App fixture github-app-fixture.stella-ops.local 80 GET /api/v3/app, GET /api/v3/rate_limit

These fixtures are deterministic QA aids only; they are not production dependencies and remain opt-in.


Third-Party Integration Services

Real 3rd-party services for local integration testing. Unlike the QA fixtures above (which are nginx mocks), these are fully functional instances that exercise actual connector plugin code paths.

# Start the default low-idle integration lane (after the main stack is up)
docker compose -f docker-compose.integrations.yml up -d

# Start specific services only
docker compose -f docker-compose.integrations.yml up -d gitea vault jenkins

# Start Consul only when you need the Consul connector
docker compose -f docker-compose.integrations.yml --profile consul up -d consul

# Start GitLab CE (heavy — requires ~4 GB RAM, ~3 min startup)
# Default GitLab tuning keeps SCM/API coverage and disables registry extras.
docker compose -f docker-compose.integrations.yml --profile heavy up -d gitlab

# Re-enable GitLab registry/package surfaces for dedicated registry tests
GITLAB_ENABLE_REGISTRY=true GITLAB_ENABLE_PACKAGES=true \
  docker compose -f docker-compose.integrations.yml --profile heavy up -d gitlab

# Combine with mock fixtures for full coverage
docker compose \
  -f docker-compose.integrations.yml \
  -f docker-compose.integration-fixtures.yml \
  up -d

# Confirm the deterministic Gitea bootstrap completed
docker compose -f docker-compose.integrations.yml ps gitea

Hosts file entries (Windows: add to C:\Windows\System32\drivers\etc\hosts; Linux/macOS: /etc/hosts):

127.1.2.1  gitea.stella-ops.local
127.1.2.2  jenkins.stella-ops.local
127.1.2.3  nexus.stella-ops.local
127.1.2.4  vault.stella-ops.local
127.1.2.5  registry.stella-ops.local
127.1.2.6  minio.stella-ops.local
127.1.2.7  gitlab.stella-ops.local
127.1.2.8  consul.stella-ops.local
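
A sketch that appends the entries above to a hosts file idempotently (the HOSTS_FILE variable and function name are illustrative, not repo scripts):

```shell
# Append each integration host entry unless its hostname is already present.
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

add_hosts_entries() {
  while read -r ip name; do
    grep -q "[[:space:]]$name" "$HOSTS_FILE" || printf '%s  %s\n' "$ip" "$name" >> "$HOSTS_FILE"
  done <<'EOF'
127.1.2.1 gitea.stella-ops.local
127.1.2.2 jenkins.stella-ops.local
127.1.2.3 nexus.stella-ops.local
127.1.2.4 vault.stella-ops.local
127.1.2.5 registry.stella-ops.local
127.1.2.6 minio.stella-ops.local
127.1.2.7 gitlab.stella-ops.local
127.1.2.8 consul.stella-ops.local
EOF
}
```

Writing to the real hosts file requires elevated privileges (sudo on Linux/macOS, an administrator shell on Windows).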

Service reference:

Service Type Address Credentials Integration Provider
Gitea SCM http://gitea.stella-ops.local:3000 stellaops / Stella2026! on fresh volumes Gitea
Jenkins CI/CD http://jenkins.stella-ops.local:8080 Setup wizard disabled Jenkins
Nexus Registry http://nexus.stella-ops.local:8081 admin / see admin.password Nexus
Vault Secrets http://vault.stella-ops.local:8200 Token: stellaops-dev-root-token-2026 —
Consul Settings/KV http://consul.stella-ops.local:8500 none (single-node local server, opt-in profile) Consul
Docker Registry Registry http://registry.stella-ops.local:5000 None (open dev) DockerHub
MinIO S3 Storage http://minio.stella-ops.local:9001 stellaops / Stella2026! —
GitLab CE SCM+CI(+Registry opt-in) http://gitlab.stella-ops.local:8929 root / Stella2026! GitLabServer

Credential resolution: Integration connectors resolve secrets via authref://vault/{path}#{key} URIs. The Integrations service resolves these from Vault automatically in dev mode. Store credentials with:

export VAULT_ADDR=http://vault.stella-ops.local:8200
export VAULT_TOKEN=stellaops-dev-root-token-2026

vault kv put secret/harbor robot-account="harbor-robot-token"
vault kv put secret/github app-private-key="your-key"
vault kv put secret/gitea api-token="your-gitea-token"
vault kv put secret/gitlab access-token="glpat-your-token"
vault kv put secret/jenkins api-token="user:token"
vault kv put secret/nexus admin-password="your-password"
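
For scripting around those URIs, the authref://vault/{path}#{key} shape splits cleanly with POSIX parameter expansion. A sketch (the helper name and example path are illustrative; the Integrations service owns the canonical mapping to Vault mounts):

```shell
# parse_authref: split an authref://vault/{path}#{key} URI into Vault path and key name.
parse_authref() {
  uri="$1"
  rest="${uri#authref://vault/}"   # drop the scheme and backend prefix
  path="${rest%%#*}"               # text before '#' is the KV path
  key="${rest##*#}"                # text after '#' is the key within the secret
  printf '%s %s\n' "$path" "$key"
}

# Example:
# parse_authref "authref://vault/secret/gitea#api-token"
# prints: secret/gitea api-token
```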

Gitea is now bootstrapped by the compose service itself: on a fresh stellaops-gitea-data volume, the entrypoint script creates the default local admin user and the repository root before the container reports healthy. Personal access tokens remain a manual step because Gitea only reveals a token's value at creation time.

docker-compose.testing.yml is a separate infrastructure-test lane. It starts postgres-test, valkey-test, mocks, and an isolated Gitea profile on different ports; it does not start Consul or GitLab. Use docker-compose.integrations.yml only when you need real third-party providers for connector validation.

Backend connector plugins (8 total, loaded in Integrations service):

Plugin Type Provider Health Endpoint
Harbor Registry Harbor /api/v2.0/health
Docker Registry Registry DockerHub /v2/
Nexus Registry Nexus /service/rest/v1/status
GitHub App SCM GitHubApp /api/v3/app
Gitea SCM Gitea /api/v1/version
GitLab SCM GitLabServer /api/v4/version
Jenkins CI/CD Jenkins /api/json
InMemory Testing InMemory — (hidden)

Advisory fixture endpoints (for advisory sources that are unreachable from Docker):

Service Hostname Port Mocked Sources
Advisory fixture advisory-fixture.stella-ops.local 80 CERT-In, FSTEC BDU, VEX Hub, StellaOps Mirror, Exploit-DB, AMD, Siemens, Ruby Advisory DB

IP address map:

IP Service Port(s)
127.1.1.6 harbor-fixture 80
127.1.1.7 github-app-fixture 80
127.1.1.8 advisory-fixture 80
127.1.2.1 gitea 3000, 2222
127.1.2.2 jenkins 8080, 50000
127.1.2.3 nexus 8081, 8082, 8083
127.1.2.4 vault 8200
127.1.2.5 docker-registry 5000
127.1.2.6 minio 9000, 9001
127.1.2.7 gitlab (heavy) 8929, 2224, 5050
127.1.2.8 consul (optional) 8500

For detailed setup instructions per service, see docs/integrations/LOCAL_SERVICES.md.

Integration Test Suite

A Playwright-based E2E test suite validates the full integration lifecycle against the live stack. It covers 5 areas:

Area What it tests
Compose Health All fixture + service containers are running and healthy
Endpoint Probes Direct HTTP to each 3rd-party service (Harbor, Gitea, Jenkins, Nexus, Vault, Registry, MinIO)
Connector Lifecycle Create integrations via API, verify auto-activation, test-connection, health-check, cleanup
Advisory Sources All 74 advisory & VEX sources report healthy
UI Verification Hub counts, per-tab list views, tab switching

Prerequisites:

# 1. Main stack must be running
docker compose -f docker-compose.stella-ops.yml up -d

# 2. Start integration fixtures (mock endpoints)
docker compose -f docker-compose.integration-fixtures.yml up -d

# 3. Start real 3rd-party services
docker compose -f docker-compose.integrations.yml up -d

# 3a. (Optional) Start Consul only when validating the Consul connector
docker compose -f docker-compose.integrations.yml --profile consul up -d consul

# 4. (Optional) Start GitLab for full SCM coverage
docker compose -f docker-compose.integrations.yml --profile heavy up -d gitlab

Run the test suite:

cd src/Web/StellaOps.Web

# Run all integration tests
E2E_RUN_ID=$(date +%s) \
PLAYWRIGHT_BASE_URL=https://stella-ops.local \
  npx playwright test --config=playwright.integrations.config.ts

# Run a specific test group
E2E_RUN_ID=$(date +%s) \
PLAYWRIGHT_BASE_URL=https://stella-ops.local \
  npx playwright test --config=playwright.integrations.config.ts \
  --grep "Compose Health"

# Run with verbose output
E2E_RUN_ID=$(date +%s) \
PLAYWRIGHT_BASE_URL=https://stella-ops.local \
  npx playwright test --config=playwright.integrations.config.ts \
  --reporter=list

Environment variables:

Variable Default Purpose
PLAYWRIGHT_BASE_URL https://stella-ops.local Target Stella Ops instance
E2E_RUN_ID run1 Unique suffix for test integration names (avoids duplicates across runs)
STELLAOPS_ADMIN_USER admin Login username
STELLAOPS_ADMIN_PASS Admin@Stella2026! Login password

Key files:

File Purpose
src/Web/StellaOps.Web/playwright.integrations.config.ts Playwright config (no dev server, live stack)
src/Web/StellaOps.Web/tests/e2e/integrations/integrations.e2e.spec.ts Test suite (35 tests)
src/Web/StellaOps.Web/tests/e2e/integrations/live-auth.fixture.ts Real OIDC login fixture
src/Web/StellaOps.Web/e2e/screenshots/integrations/ Test screenshots

Note: Unlike the mocked E2E tests in tests/e2e/ and e2e/, this suite performs real OIDC login and hits the live API. It requires all services to be running and healthy.


Regional Compliance Deployments

China Compliance (SM2/SM3/SM4)

For Testing (simulation):

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-china.yml \
  -f docker-compose.crypto-sim.yml up -d

For Production (real SM crypto):

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-china.yml \
  -f docker-compose.sm-remote.yml up -d

With OSCCA-certified HSM:

# Set HSM connection details in environment
export SM_REMOTE_HSM_URL="https://sm-hsm.example.com:8900"
export SM_REMOTE_HSM_API_KEY="your-api-key"

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-china.yml \
  -f docker-compose.sm-remote.yml up -d

Algorithms:

  • SM2: Public key cryptography (GM/T 0003-2012)
  • SM3: Hash function, 256-bit (GM/T 0004-2012)
  • SM4: Block cipher, 128-bit (GM/T 0002-2012)

Russia Compliance (GOST)

For Testing (simulation):

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-russia.yml \
  -f docker-compose.crypto-sim.yml up -d

For Production (CryptoPro CSP):

# CryptoPro requires EULA acceptance
CRYPTOPRO_ACCEPT_EULA=1 docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-russia.yml \
  -f docker-compose.cryptopro.yml up -d

Requirements for CryptoPro:

  • CryptoPro CSP license files in opt/cryptopro/downloads/
  • CRYPTOPRO_ACCEPT_EULA=1 environment variable
  • Valid CryptoPro container images

Algorithms:

  • GOST R 34.10-2012: Digital signature (256/512-bit)
  • GOST R 34.11-2012: Hash function (Streebog, 256/512-bit)
  • GOST R 34.12-2015: Block cipher (Kuznyechik, Magma)

EU Compliance (eIDAS)

For Testing (simulation):

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-eu.yml \
  -f docker-compose.crypto-sim.yml up -d

For Production: EU eIDAS deployments typically integrate with external Qualified Trust Service Providers (QTSPs) rather than hosting crypto locally. Configure your QTSP integration in the application settings.

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.compliance-eu.yml up -d

Standards:

  • ETSI TS 119 312 compliant algorithms
  • Qualified electronic signatures
  • QTSP integration for qualified trust services

Crypto Simulation Details

The docker-compose.crypto-sim.yml overlay provides a unified simulation service for all sovereign crypto profiles:

Algorithm ID Simulation Use Case
SM2, sm.sim HMAC-SHA256 China testing
GOST12-256, GOST12-512 HMAC-SHA256 Russia testing
ru.magma.sim, ru.kuznyechik.sim HMAC-SHA256 Russia testing
DILITHIUM3, FALCON512, pq.sim HMAC-SHA256 Post-quantum testing
fips.sim, eidas.sim, kcmvp.sim ECDSA P-256 FIPS/EU/Korea testing

Important: Simulation is for testing only. Uses deterministic HMAC or static ECDSA keys—not suitable for production or compliance certification.


Configuration Reference

Infrastructure Services

Service Default Port Purpose
PostgreSQL 5432 Primary database
Valkey 6379 Cache, queues, events
RustFS 8080 S3-compatible artifact storage
Rekor v2 (internal) Sigstore transparency log

Application Services

Service Default Port Purpose
Authority 8440 OAuth2/OIDC identity provider
Signer 8441 Cryptographic signing
Attestor 8442 SLSA attestation
Scanner Web 8444 SBOM/vulnerability scanning API
Concelier 8445 Advisory aggregation
Notify Web 8446 Notification service
Issuer Directory 8447 CSAF publisher registry
Advisory AI Web 8448 AI-powered advisory analysis
Web UI 8443 Angular frontend

Environment Variables

Key variables (see env/*.env.example for complete list):

# Database
POSTGRES_USER=stellaops
POSTGRES_PASSWORD=<secret>
POSTGRES_DB=stellaops_platform

# Authority
AUTHORITY_ISSUER=https://authority.example.com

# Scanner
SCANNER_EVENTS_ENABLED=false
SCANNER_OFFLINEKIT_ENABLED=false

# Crypto (for compliance overlays)
STELLAOPS_CRYPTO_PROFILE=default  # or: china, russia, eu
STELLAOPS_CRYPTO_ENABLE_SIM=0     # set to 1 for simulation

# CryptoPro (Russia only)
CRYPTOPRO_ACCEPT_EULA=0  # must be 1 to use CryptoPro

# SM Remote (China only)
SM_SOFT_ALLOWED=1  # software-only SM2
SM_REMOTE_HSM_URL= # optional: OSCCA-certified HSM

Networking

All profiles use a shared stellaops Docker network. Production deployments can attach a frontdoor network for reverse proxy integration:

# Create external network for load balancer
docker network create stellaops_frontdoor

# Set in environment
export FRONTDOOR_NETWORK=stellaops_frontdoor

Only externally-reachable services (Authority, Signer, Attestor, Concelier, Scanner Web, Notify Web, UI) attach to the frontdoor network. Infrastructure services (PostgreSQL, Valkey, RustFS) remain on the private network.


Sigstore Tools

Enable Sigstore CLI tools (rekor-cli, cosign) with the sigstore profile:

docker compose -f docker-compose.stella-ops.yml --profile sigstore up -d

Enable self-hosted Rekor v2 with the sigstore-local profile:

docker compose -f docker-compose.stella-ops.yml --profile sigstore-local up -d rekor-v2

sigstore-local requires:

  • Rekor signer key mounted at ../../etc/authority/keys/signing-dev.pem
  • Tessera backend config: REKOR_GCP_BUCKET and REKOR_GCP_SPANNER
  • GCP ADC credentials available to the container runtime

GPU Support for Advisory AI

GPU is disabled by default. To enable NVIDIA GPU inference:

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.gpu.yaml up -d

Requirements:

  • NVIDIA GPU with CUDA support
  • nvidia-container-toolkit installed
  • Docker configured with nvidia runtime

Content Addressable Storage (CAS)

The CAS overlay provides dedicated RustFS instances with retention policies for different artifact types:

# Standalone CAS infrastructure
docker compose -f docker-compose.cas.yaml up -d

# Combined with main stack
docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.cas.yaml up -d

CAS Services:

Service Port Purpose
rustfs-cas 8180 Runtime facts, signals, replay artifacts
rustfs-evidence 8181 Merkle roots, hash chains, evidence bundles (immutable)
rustfs-attestation 8182 DSSE envelopes, in-toto attestations (immutable)

Retention Policies (configurable via env/cas.env.example):

  • Vulnerability DB: 7 days
  • SBOM artifacts: 365 days
  • Scan results: 90 days
  • Evidence bundles: Indefinite (immutable)
  • Attestations: Indefinite (immutable)

Tile Proxy (Air-Gapped Sigstore)

For air-gapped deployments, the tile-proxy caches Rekor transparency log tiles locally from public Sigstore:

docker compose -f docker-compose.stella-ops.yml \
  -f docker-compose.tile-proxy.yml up -d

Tile Proxy vs Rekor v2:

  • Use --profile sigstore-local when running your own Rekor transparency log (GCP Tessera backend required).
  • Use docker-compose.tile-proxy.yml when caching tiles from public Sigstore (rekor.sigstore.dev).

Configuration:

Variable Default Purpose
REKOR_SERVER_URL https://rekor.sigstore.dev Upstream Rekor to proxy
TILE_PROXY_SYNC_ENABLED true Enable periodic tile sync
TILE_PROXY_SYNC_SCHEDULE 0 */6 * * * Sync every 6 hours
TILE_PROXY_CACHE_MAX_SIZE_GB 10 Local cache size limit

The proxy syncs tiles on schedule and serves them to internal services for offline verification.


Maintenance

Backup

./scripts/backup.sh  # Creates timestamped tar.gz of volumes

Reset

./scripts/reset.sh  # Stops stack, removes volumes (requires confirmation)

Validate Configuration

docker compose -f docker-compose.stella-ops.yml config

Update to New Release

  1. Import new manifest to deploy/releases/
  2. Update image digests in compose files
  3. Run docker compose config to validate
  4. Run deploy/tools/validate-profiles.sh for audit

Troubleshooting

Port Conflicts

Override ports in your .env file:

POSTGRES_PORT=5433
VALKEY_PORT=6380
SCANNER_WEB_PORT=8544

Service Dependencies

Services declare depends_on with health checks. If a service fails to start, check its dependencies:

docker compose -f docker-compose.stella-ops.yml ps
docker compose -f docker-compose.stella-ops.yml logs postgres
docker compose -f docker-compose.stella-ops.yml logs valkey

Crypto Provider Issues

For crypto simulation issues:

# Check sim-crypto service
docker compose logs sim-crypto
curl http://localhost:18090/keys

For CryptoPro issues:

# Verify EULA acceptance
echo $CRYPTOPRO_ACCEPT_EULA  # must be 1

# Check CryptoPro service
docker compose logs cryptopro-csp