devops folders consolidate

This commit is contained in:
master
2026-01-25 23:27:41 +02:00
parent 6e687b523a
commit a743bb9a1d
613 changed files with 8611 additions and 41846 deletions

View File

@@ -1,76 +0,0 @@
# Docker hardening blueprint (DOCKER-44-001)
Use this template for core services (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AdvisoryAI).
The reusable multi-stage scaffold lives at `ops/devops/docker/Dockerfile.hardened.template` and expects:
- .NET 10 SDK/runtime images provided via offline mirror (`SDK_IMAGE` / `RUNTIME_IMAGE`).
- `APP_PROJECT` path to the service csproj.
- `healthcheck.sh` copied from `ops/devops/docker/` (already referenced by the template).
- Optional: `APP_BINARY` (assembly name, defaults to `StellaOps.Service`) and `APP_PORT`.
Copy the template next to the service and set build args in CI (per-service matrix) to avoid maintaining divergent Dockerfiles.
```Dockerfile
# syntax=docker/dockerfile:1.7
ARG SDK_IMAGE=mcr.microsoft.com/dotnet/sdk:10.0-bookworm-slim
ARG RUNTIME_IMAGE=mcr.microsoft.com/dotnet/aspnet:10.0-bookworm-slim

FROM ${SDK_IMAGE} AS build
# ARGs declared before the first FROM are only visible to FROM lines,
# so stage-scoped args are declared inside the stage that uses them.
ARG APP_PROJECT=src/Service/Service.csproj
ARG CONFIGURATION=Release
ENV DOTNET_CLI_TELEMETRY_OPTOUT=1 DOTNET_NOLOGO=1 SOURCE_DATE_EPOCH=1704067200
WORKDIR /src
COPY . .
RUN dotnet restore ${APP_PROJECT} --packages /.nuget/packages && \
    dotnet publish ${APP_PROJECT} -c ${CONFIGURATION} -o /app/publish /p:UseAppHost=true /p:PublishTrimmed=false

FROM ${RUNTIME_IMAGE} AS runtime
ARG APP_USER=stella
ARG APP_UID=10001
ARG APP_GID=10001
ARG APP_PORT=8080
ARG APP_BINARY=StellaOps.Service
RUN groupadd -r -g ${APP_GID} ${APP_USER} && \
    useradd -r -u ${APP_UID} -g ${APP_GID} -d /var/lib/${APP_USER} ${APP_USER}
WORKDIR /app
COPY --from=build --chown=${APP_UID}:${APP_GID} /app/publish/ ./
COPY --chown=${APP_UID}:${APP_GID} ops/devops/docker/healthcheck.sh /usr/local/bin/healthcheck.sh
# Lock down the app directory but keep the published apphost executable,
# otherwise the ENTRYPOINT cannot exec it.
RUN chmod 500 /app && \
    find /app -maxdepth 1 -type f -exec chmod 400 {} \; && \
    find /app -maxdepth 1 -type d -exec chmod 500 {} \; && \
    chmod 500 "/app/${APP_BINARY}"
ENV APP_BINARY=${APP_BINARY} \
    ASPNETCORE_URLS=http://+:${APP_PORT} \
    DOTNET_EnableDiagnostics=0 \
    DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=1 \
    COMPlus_EnableDiagnostics=0
USER ${APP_UID}:${APP_GID}
EXPOSE ${APP_PORT}
HEALTHCHECK --interval=30s --timeout=5s --start-period=15s --retries=3 CMD /usr/local/bin/healthcheck.sh
ENTRYPOINT ["sh","-c","exec ./\"$APP_BINARY\""]
```
Build stage (per service) should:
- Use `mcr.microsoft.com/dotnet/sdk:10.0-bookworm-slim` (or mirror) with `DOTNET_CLI_TELEMETRY_OPTOUT=1`.
- Restore from the offline `/.nuget/packages` cache and run `dotnet publish -c Release -o /app/publish` (matching the template).
- Set `SOURCE_DATE_EPOCH` to freeze timestamps.
Required checks:
- No `root` user in final image.
- `CAP_NET_RAW` dropped (default with non-root).
- Read-only rootfs enforced at deploy time (`securityContext.readOnlyRootFilesystem: true` in Helm/Compose).
- Health endpoints exposed: `/health/liveness`, `/health/readiness`, `/version`, `/metrics`.
- Image SBOM generated (syft) in pipeline; attach cosign attestations (see DOCKER-44-002).
Service matrix & helper:
- Build args for the core services are enumerated in `ops/devops/docker/services-matrix.env` (API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AdvisoryAI).
- `ops/devops/docker/build-all.sh` reads the matrix and builds/tag images from the shared template with consistent non-root/health defaults. Override `REGISTRY` and `TAG_SUFFIX` to publish.
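For reference, a hypothetical matrix entry and the shape of the build loop; the field layout and variable names below are illustrative, not the shipped `services-matrix.env` format:
```bash
# Illustrative only — assumes a pipe-delimited matrix file such as:
#   policy|src/Policy/StellaOps.Policy.csproj|StellaOps.Policy|8080
while IFS='|' read -r name project binary port; do
  docker build -f ops/devops/docker/Dockerfile.hardened.template \
    --build-arg APP_PROJECT="$project" \
    --build-arg APP_BINARY="$binary" \
    --build-arg APP_PORT="$port" \
    -t "${REGISTRY:-localhost}/stellaops-${name}:${TAG_SUFFIX:-dev}" .
done < ops/devops/docker/services-matrix.env
```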
Console (Angular) image:
- Use `ops/devops/docker/Dockerfile.console` for the UI (Angular v17). It builds with `node:20-bullseye-slim`, serves via `nginxinc/nginx-unprivileged`, includes `healthcheck-frontend.sh`, and runs as non-root UID 101. Build with `docker build -f ops/devops/docker/Dockerfile.console --build-arg APP_DIR=src/Web/StellaOps.Web .`.
SBOM & attestation helper (DOCKER-44-002):
- Script: `ops/devops/docker/sbom_attest.sh <image> [out-dir] [cosign-key]`
- Emits SPDX (`*.spdx.json`) and CycloneDX (`*.cdx.json`) with `SOURCE_DATE_EPOCH` pinned for reproducibility.
- Attaches both as cosign attestations (`--type spdx` / `--type cyclonedx`); supports keyless when `COSIGN_EXPERIMENTAL=1` or explicit PEM key.
- Integrate in CI after image build/push; keep registry creds offline-friendly (use local registry mirror during air-gapped builds).
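A typical invocation, assuming a local PEM key (image tag and paths are examples):
```bash
# Emits <out-dir>/*.spdx.json and *.cdx.json, then attaches both as
# cosign attestations to the pushed image.
ops/devops/docker/sbom_attest.sh \
  registry.local/stellaops/policy:dev \
  ./artifacts/sbom \
  /etc/stellaops/keys/cosign.key
```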
Health endpoint verification (DOCKER-44-003):
- Script: `ops/devops/docker/verify_health_endpoints.sh <image> [port]` spins container, checks `/health/liveness`, `/health/readiness`, `/version`, `/metrics`, and warns if `/capabilities.merge` is not `false` (for Concelier/Excititor).
- Run in CI after publishing the image; requires `docker` and `curl` (or `wget`).
- Endpoint contract and ASP.NET wiring examples live in `ops/devops/docker/health-endpoints.md`; service owners should copy the snippet and ensure readiness checks cover DB/cache/bus.
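Example CI step (image tag illustrative):
```bash
# Exits non-zero when a required endpoint is missing or unhealthy.
ops/devops/docker/verify_health_endpoints.sh registry.local/stellaops/policy:dev 8080
```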

View File

@@ -1,43 +0,0 @@
# Copyright (c) StellaOps. All rights reserved.
# Licensed under BUSL-1.1.
# Function Behavior Corpus PostgreSQL Database
#
# Usage:
# docker compose -f docker-compose.corpus.yml up -d
#
# Environment variables:
# CORPUS_DB_PASSWORD - PostgreSQL password for corpus database
services:
  corpus-postgres:
    image: postgres:18.1-alpine
    container_name: stellaops-corpus-db
    environment:
      POSTGRES_DB: stellaops_corpus
      POSTGRES_USER: corpus_user
      POSTGRES_PASSWORD: ${CORPUS_DB_PASSWORD:-stellaops_corpus_dev}
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    volumes:
      - corpus-data:/var/lib/postgresql/data
      - ../../../docs/db/schemas/corpus.sql:/docker-entrypoint-initdb.d/10-corpus-schema.sql:ro
      - ./scripts/init-test-data.sql:/docker-entrypoint-initdb.d/20-test-data.sql:ro
    ports:
      - "5435:5432"
    networks:
      - stellaops-corpus
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U corpus_user -d stellaops_corpus"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

volumes:
  corpus-data:
    driver: local

networks:
  stellaops-corpus:
    driver: bridge

View File

@@ -1,220 +0,0 @@
-- =============================================================================
-- CORPUS TEST DATA - Minimal corpus for integration testing
-- Copyright (c) StellaOps. All rights reserved.
-- Licensed under BUSL-1.1.
-- =============================================================================
-- Set tenant for test data
SET app.tenant_id = 'test-tenant';
-- =============================================================================
-- LIBRARIES
-- =============================================================================
INSERT INTO corpus.libraries (id, name, description, homepage_url, source_repo)
VALUES
('a0000001-0000-0000-0000-000000000001', 'glibc', 'GNU C Library', 'https://www.gnu.org/software/libc/', 'https://sourceware.org/git/glibc.git'),
('a0000001-0000-0000-0000-000000000002', 'openssl', 'OpenSSL cryptographic library', 'https://www.openssl.org/', 'https://github.com/openssl/openssl.git'),
('a0000001-0000-0000-0000-000000000003', 'zlib', 'zlib compression library', 'https://zlib.net/', 'https://github.com/madler/zlib.git'),
('a0000001-0000-0000-0000-000000000004', 'curl', 'libcurl transfer library', 'https://curl.se/', 'https://github.com/curl/curl.git'),
('a0000001-0000-0000-0000-000000000005', 'sqlite', 'SQLite database engine', 'https://sqlite.org/', 'https://sqlite.org/src')
ON CONFLICT (tenant_id, name) DO NOTHING;
-- =============================================================================
-- LIBRARY VERSIONS (glibc)
-- =============================================================================
INSERT INTO corpus.library_versions (id, library_id, version, release_date, is_security_release)
VALUES
-- glibc versions
('b0000001-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000001', '2.17', '2012-12-25', false),
('b0000001-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000001', '2.28', '2018-08-01', false),
('b0000001-0000-0000-0000-000000000003', 'a0000001-0000-0000-0000-000000000001', '2.31', '2020-02-01', false),
('b0000001-0000-0000-0000-000000000004', 'a0000001-0000-0000-0000-000000000001', '2.35', '2022-02-03', false),
('b0000001-0000-0000-0000-000000000005', 'a0000001-0000-0000-0000-000000000001', '2.38', '2023-07-31', false),
-- OpenSSL versions
('b0000002-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000002', '1.0.2u', '2019-12-20', true),
('b0000002-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000002', '1.1.1w', '2023-09-11', true),
('b0000002-0000-0000-0000-000000000003', 'a0000001-0000-0000-0000-000000000002', '3.0.12', '2023-10-24', true),
('b0000002-0000-0000-0000-000000000004', 'a0000001-0000-0000-0000-000000000002', '3.1.4', '2023-10-24', true),
-- zlib versions
('b0000003-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000003', '1.2.11', '2017-01-15', false),
('b0000003-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000003', '1.2.13', '2022-10-13', true),
('b0000003-0000-0000-0000-000000000003', 'a0000001-0000-0000-0000-000000000003', '1.3.1', '2024-01-22', false)
ON CONFLICT (tenant_id, library_id, version) DO NOTHING;
-- =============================================================================
-- BUILD VARIANTS
-- =============================================================================
INSERT INTO corpus.build_variants (id, library_version_id, architecture, abi, compiler, compiler_version, optimization_level, binary_sha256)
VALUES
-- glibc 2.31 variants
('c0000001-0000-0000-0000-000000000001', 'b0000001-0000-0000-0000-000000000003', 'x86_64', 'gnu', 'gcc', '9.3.0', 'O2', 'a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2'),
('c0000001-0000-0000-0000-000000000002', 'b0000001-0000-0000-0000-000000000003', 'aarch64', 'gnu', 'gcc', '9.3.0', 'O2', 'b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3'),
('c0000001-0000-0000-0000-000000000003', 'b0000001-0000-0000-0000-000000000003', 'armhf', 'gnu', 'gcc', '9.3.0', 'O2', 'c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4'),
-- glibc 2.35 variants
('c0000002-0000-0000-0000-000000000001', 'b0000001-0000-0000-0000-000000000004', 'x86_64', 'gnu', 'gcc', '11.2.0', 'O2', 'd4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5'),
('c0000002-0000-0000-0000-000000000002', 'b0000001-0000-0000-0000-000000000004', 'aarch64', 'gnu', 'gcc', '11.2.0', 'O2', 'e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6'),
-- OpenSSL 3.0.12 variants
('c0000003-0000-0000-0000-000000000001', 'b0000002-0000-0000-0000-000000000003', 'x86_64', 'gnu', 'gcc', '11.2.0', 'O2', 'f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1'),
('c0000003-0000-0000-0000-000000000002', 'b0000002-0000-0000-0000-000000000003', 'aarch64', 'gnu', 'gcc', '11.2.0', 'O2', 'a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b3')
ON CONFLICT (tenant_id, library_version_id, architecture, abi, compiler, optimization_level) DO NOTHING;
-- =============================================================================
-- FUNCTIONS (Sample functions from glibc)
-- =============================================================================
INSERT INTO corpus.functions (id, build_variant_id, name, demangled_name, address, size_bytes, is_exported)
VALUES
-- glibc 2.31 x86_64 functions
('d0000001-0000-0000-0000-000000000001', 'c0000001-0000-0000-0000-000000000001', 'memcpy', 'memcpy', 140000, 256, true),
('d0000001-0000-0000-0000-000000000002', 'c0000001-0000-0000-0000-000000000001', 'memset', 'memset', 140256, 192, true),
('d0000001-0000-0000-0000-000000000003', 'c0000001-0000-0000-0000-000000000001', 'strlen', 'strlen', 140448, 128, true),
('d0000001-0000-0000-0000-000000000004', 'c0000001-0000-0000-0000-000000000001', 'strcmp', 'strcmp', 140576, 160, true),
('d0000001-0000-0000-0000-000000000005', 'c0000001-0000-0000-0000-000000000001', 'strcpy', 'strcpy', 140736, 144, true),
('d0000001-0000-0000-0000-000000000006', 'c0000001-0000-0000-0000-000000000001', 'malloc', 'malloc', 150000, 512, true),
('d0000001-0000-0000-0000-000000000007', 'c0000001-0000-0000-0000-000000000001', 'free', 'free', 150512, 384, true),
('d0000001-0000-0000-0000-000000000008', 'c0000001-0000-0000-0000-000000000001', 'realloc', 'realloc', 150896, 448, true),
('d0000001-0000-0000-0000-000000000009', 'c0000001-0000-0000-0000-000000000001', 'printf', 'printf', 160000, 1024, true),
('d0000001-0000-0000-0000-000000000010', 'c0000001-0000-0000-0000-000000000001', 'sprintf', 'sprintf', 161024, 896, true),
-- glibc 2.35 x86_64 functions (same functions, different addresses/sizes due to optimization)
('d0000002-0000-0000-0000-000000000001', 'c0000002-0000-0000-0000-000000000001', 'memcpy', 'memcpy', 145000, 280, true),
('d0000002-0000-0000-0000-000000000002', 'c0000002-0000-0000-0000-000000000001', 'memset', 'memset', 145280, 208, true),
('d0000002-0000-0000-0000-000000000003', 'c0000002-0000-0000-0000-000000000001', 'strlen', 'strlen', 145488, 144, true),
('d0000002-0000-0000-0000-000000000004', 'c0000002-0000-0000-0000-000000000001', 'strcmp', 'strcmp', 145632, 176, true),
('d0000002-0000-0000-0000-000000000005', 'c0000002-0000-0000-0000-000000000001', 'strcpy', 'strcpy', 145808, 160, true),
('d0000002-0000-0000-0000-000000000006', 'c0000002-0000-0000-0000-000000000001', 'malloc', 'malloc', 155000, 544, true),
('d0000002-0000-0000-0000-000000000007', 'c0000002-0000-0000-0000-000000000001', 'free', 'free', 155544, 400, true),
-- OpenSSL 3.0.12 functions
('d0000003-0000-0000-0000-000000000001', 'c0000003-0000-0000-0000-000000000001', 'EVP_DigestInit_ex', 'EVP_DigestInit_ex', 200000, 320, true),
('d0000003-0000-0000-0000-000000000002', 'c0000003-0000-0000-0000-000000000001', 'EVP_DigestUpdate', 'EVP_DigestUpdate', 200320, 256, true),
('d0000003-0000-0000-0000-000000000003', 'c0000003-0000-0000-0000-000000000001', 'EVP_DigestFinal_ex', 'EVP_DigestFinal_ex', 200576, 288, true),
('d0000003-0000-0000-0000-000000000004', 'c0000003-0000-0000-0000-000000000001', 'EVP_EncryptInit_ex', 'EVP_EncryptInit_ex', 201000, 384, true),
('d0000003-0000-0000-0000-000000000005', 'c0000003-0000-0000-0000-000000000001', 'EVP_DecryptInit_ex', 'EVP_DecryptInit_ex', 201384, 384, true),
('d0000003-0000-0000-0000-000000000006', 'c0000003-0000-0000-0000-000000000001', 'SSL_CTX_new', 'SSL_CTX_new', 300000, 512, true),
('d0000003-0000-0000-0000-000000000007', 'c0000003-0000-0000-0000-000000000001', 'SSL_new', 'SSL_new', 300512, 384, true),
('d0000003-0000-0000-0000-000000000008', 'c0000003-0000-0000-0000-000000000001', 'SSL_connect', 'SSL_connect', 300896, 1024, true)
ON CONFLICT (tenant_id, build_variant_id, name, address) DO NOTHING;
-- =============================================================================
-- FINGERPRINTS (Simulated semantic fingerprints)
-- =============================================================================
INSERT INTO corpus.fingerprints (id, function_id, algorithm, fingerprint, metadata)
VALUES
-- memcpy fingerprints (semantic_ksg algorithm)
('e0000001-0000-0000-0000-000000000001', 'd0000001-0000-0000-0000-000000000001', 'semantic_ksg',
decode('a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f60001', 'hex'),
'{"node_count": 45, "edge_count": 72, "api_calls": ["memcpy_internal"], "complexity": 8}'::jsonb),
('e0000001-0000-0000-0000-000000000002', 'd0000001-0000-0000-0000-000000000001', 'instruction_bb',
decode('b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a10001', 'hex'),
'{"bb_count": 8, "instruction_count": 64}'::jsonb),
-- memcpy 2.35 (similar fingerprint, different version)
('e0000002-0000-0000-0000-000000000001', 'd0000002-0000-0000-0000-000000000001', 'semantic_ksg',
decode('a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f60002', 'hex'),
'{"node_count": 48, "edge_count": 76, "api_calls": ["memcpy_internal"], "complexity": 9}'::jsonb),
-- memset fingerprints
('e0000003-0000-0000-0000-000000000001', 'd0000001-0000-0000-0000-000000000002', 'semantic_ksg',
decode('c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b20001', 'hex'),
'{"node_count": 32, "edge_count": 48, "api_calls": [], "complexity": 5}'::jsonb),
-- strlen fingerprints
('e0000004-0000-0000-0000-000000000001', 'd0000001-0000-0000-0000-000000000003', 'semantic_ksg',
decode('d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c30001', 'hex'),
'{"node_count": 24, "edge_count": 32, "api_calls": [], "complexity": 4}'::jsonb),
-- malloc fingerprints
('e0000005-0000-0000-0000-000000000001', 'd0000001-0000-0000-0000-000000000006', 'semantic_ksg',
decode('e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d40001', 'hex'),
'{"node_count": 128, "edge_count": 256, "api_calls": ["sbrk", "mmap"], "complexity": 24}'::jsonb),
-- OpenSSL EVP_DigestInit_ex
('e0000006-0000-0000-0000-000000000001', 'd0000003-0000-0000-0000-000000000001', 'semantic_ksg',
decode('f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e50001', 'hex'),
'{"node_count": 56, "edge_count": 84, "api_calls": ["OPENSSL_init_crypto"], "complexity": 12}'::jsonb),
-- SSL_CTX_new
('e0000007-0000-0000-0000-000000000001', 'd0000003-0000-0000-0000-000000000006', 'semantic_ksg',
decode('a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f6a1b2c3d4e5f60003', 'hex'),
'{"node_count": 96, "edge_count": 144, "api_calls": ["CRYPTO_malloc", "SSL_CTX_set_options"], "complexity": 18}'::jsonb)
ON CONFLICT (tenant_id, function_id, algorithm) DO NOTHING;
-- =============================================================================
-- FUNCTION CLUSTERS
-- =============================================================================
INSERT INTO corpus.function_clusters (id, library_id, canonical_name, description)
VALUES
('f0000001-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000001', 'memcpy', 'Memory copy function across glibc versions'),
('f0000001-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000001', 'memset', 'Memory set function across glibc versions'),
('f0000001-0000-0000-0000-000000000003', 'a0000001-0000-0000-0000-000000000001', 'strlen', 'String length function across glibc versions'),
('f0000001-0000-0000-0000-000000000004', 'a0000001-0000-0000-0000-000000000001', 'malloc', 'Memory allocation function across glibc versions'),
('f0000002-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000002', 'EVP_DigestInit_ex', 'EVP digest initialization across OpenSSL versions'),
('f0000002-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000002', 'SSL_CTX_new', 'SSL context creation across OpenSSL versions')
ON CONFLICT (tenant_id, library_id, canonical_name) DO NOTHING;
-- =============================================================================
-- CLUSTER MEMBERS
-- =============================================================================
INSERT INTO corpus.cluster_members (cluster_id, function_id, similarity_to_centroid)
VALUES
-- memcpy cluster
('f0000001-0000-0000-0000-000000000001', 'd0000001-0000-0000-0000-000000000001', 1.0),
('f0000001-0000-0000-0000-000000000001', 'd0000002-0000-0000-0000-000000000001', 0.95),
-- memset cluster
('f0000001-0000-0000-0000-000000000002', 'd0000001-0000-0000-0000-000000000002', 1.0),
('f0000001-0000-0000-0000-000000000002', 'd0000002-0000-0000-0000-000000000002', 0.92),
-- strlen cluster
('f0000001-0000-0000-0000-000000000003', 'd0000001-0000-0000-0000-000000000003', 1.0),
('f0000001-0000-0000-0000-000000000003', 'd0000002-0000-0000-0000-000000000003', 0.94),
-- malloc cluster
('f0000001-0000-0000-0000-000000000004', 'd0000001-0000-0000-0000-000000000006', 1.0),
('f0000001-0000-0000-0000-000000000004', 'd0000002-0000-0000-0000-000000000006', 0.88)
ON CONFLICT DO NOTHING;
-- =============================================================================
-- CVE ASSOCIATIONS
-- =============================================================================
INSERT INTO corpus.function_cves (function_id, cve_id, affected_state, confidence, evidence_type)
VALUES
-- CVE-2021-3999 affects glibc getcwd
-- Note: We don't have getcwd in our test data, but this shows the structure
-- CVE-2022-0778 affects OpenSSL BN_mod_sqrt (infinite loop)
('d0000003-0000-0000-0000-000000000001', 'CVE-2022-0778', 'fixed', 0.95, 'advisory'),
('d0000003-0000-0000-0000-000000000002', 'CVE-2022-0778', 'fixed', 0.95, 'advisory'),
-- CVE-2023-0286 affects OpenSSL X509 certificate handling
('d0000003-0000-0000-0000-000000000006', 'CVE-2023-0286', 'fixed', 0.90, 'commit'),
('d0000003-0000-0000-0000-000000000007', 'CVE-2023-0286', 'fixed', 0.90, 'commit')
ON CONFLICT (tenant_id, function_id, cve_id) DO NOTHING;
-- =============================================================================
-- INGESTION LOG
-- =============================================================================
INSERT INTO corpus.ingestion_jobs (id, library_id, job_type, status, functions_indexed, started_at, completed_at)
VALUES
('99000001-0000-0000-0000-000000000001', 'a0000001-0000-0000-0000-000000000001', 'full_ingest', 'completed', 10, now() - interval '1 day', now() - interval '1 day' + interval '5 minutes'),
('99000001-0000-0000-0000-000000000002', 'a0000001-0000-0000-0000-000000000002', 'full_ingest', 'completed', 8, now() - interval '12 hours', now() - interval '12 hours' + interval '3 minutes')
ON CONFLICT DO NOTHING;
-- =============================================================================
-- SUMMARY
-- =============================================================================
DO $$
DECLARE
lib_count INT;
ver_count INT;
func_count INT;
fp_count INT;
BEGIN
SELECT COUNT(*) INTO lib_count FROM corpus.libraries;
SELECT COUNT(*) INTO ver_count FROM corpus.library_versions;
SELECT COUNT(*) INTO func_count FROM corpus.functions;
SELECT COUNT(*) INTO fp_count FROM corpus.fingerprints;
RAISE NOTICE 'Corpus test data initialized:';
RAISE NOTICE ' Libraries: %', lib_count;
RAISE NOTICE ' Versions: %', ver_count;
RAISE NOTICE ' Functions: %', func_count;
RAISE NOTICE ' Fingerprints: %', fp_count;
END $$;
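-- Optional follow-up check (illustrative; run manually once the container
-- is up, not from this seed script):
-- SELECT c.canonical_name, count(m.function_id) AS members
-- FROM corpus.function_clusters c
-- LEFT JOIN corpus.cluster_members m ON m.cluster_id = c.id
-- GROUP BY c.canonical_name
-- ORDER BY c.canonical_name;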

View File

@@ -1,84 +0,0 @@
# Copyright (c) StellaOps. All rights reserved.
# Licensed under BUSL-1.1.
# Ghidra Headless Analysis Server for BinaryIndex
#
# This image provides Ghidra headless analysis capabilities including:
# - Ghidra Headless Analyzer (analyzeHeadless)
# - ghidriff for automated binary diffing
# - Version Tracking and BSim support
#
# Build:
# docker build -f Dockerfile.headless -t stellaops/ghidra-headless:11.2 .
#
# Run:
# docker run --rm -v /path/to/binaries:/binaries stellaops/ghidra-headless:11.2 \
#   /projects GhidraProject -import /binaries/target.exe
# (analysis runs by default; pass -noanalysis to import only)
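#
# Binary diff example (illustrative paths; ghidriff takes the two binaries
# to compare as positional arguments):
# docker run --rm -v /path/to/binaries:/binaries --entrypoint ghidriff \
#   stellaops/ghidra-headless:11.2 /binaries/old.so /binaries/new.so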
FROM eclipse-temurin:17-jdk-jammy
ARG GHIDRA_VERSION=11.2
ARG GHIDRA_BUILD_DATE=20241105
ARG GHIDRA_SHA256
LABEL org.opencontainers.image.title="StellaOps Ghidra Headless"
LABEL org.opencontainers.image.description="Ghidra headless analysis server with ghidriff for BinaryIndex"
LABEL org.opencontainers.image.version="${GHIDRA_VERSION}"
LABEL org.opencontainers.image.licenses="BUSL-1.1"
LABEL org.opencontainers.image.source="https://github.com/stellaops/stellaops"
LABEL org.opencontainers.image.vendor="StellaOps"
# Install dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
python3 \
python3-pip \
python3-venv \
curl \
unzip \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
# Download and verify Ghidra
# Note: Set GHIDRA_SHA256 build arg for production builds
RUN curl -fsSL "https://github.com/NationalSecurityAgency/ghidra/releases/download/Ghidra_${GHIDRA_VERSION}_build/ghidra_${GHIDRA_VERSION}_PUBLIC_${GHIDRA_BUILD_DATE}.zip" \
-o /tmp/ghidra.zip \
&& if [ -n "${GHIDRA_SHA256}" ]; then \
echo "${GHIDRA_SHA256} /tmp/ghidra.zip" | sha256sum -c -; \
fi \
&& unzip -q /tmp/ghidra.zip -d /opt \
&& rm /tmp/ghidra.zip \
&& ln -s /opt/ghidra_${GHIDRA_VERSION}_PUBLIC /opt/ghidra \
&& chmod +x /opt/ghidra/support/analyzeHeadless
# Install ghidriff in isolated virtual environment
RUN python3 -m venv /opt/venv \
&& /opt/venv/bin/pip install --no-cache-dir --upgrade pip \
&& /opt/venv/bin/pip install --no-cache-dir ghidriff
# Set environment variables
ENV GHIDRA_HOME=/opt/ghidra
ENV GHIDRA_INSTALL_DIR=/opt/ghidra
ENV JAVA_HOME=/opt/java/openjdk
ENV PATH="${GHIDRA_HOME}/support:/opt/venv/bin:${PATH}"
ENV MAXMEM=4G
# Create working directories with proper permissions
RUN mkdir -p /projects /scripts /output \
&& chmod 755 /projects /scripts /output
# Create non-root user for security
RUN groupadd -r ghidra && useradd -r -g ghidra ghidra \
&& chown -R ghidra:ghidra /projects /scripts /output
WORKDIR /projects
# Healthcheck - verify Ghidra is functional
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD analyzeHeadless /tmp HealthCheck -help > /dev/null 2>&1 || exit 1
# Switch to non-root user
USER ghidra
# Default entrypoint is analyzeHeadless
ENTRYPOINT ["analyzeHeadless"]
CMD ["--help"]

View File

@@ -1,78 +0,0 @@
# Copyright (c) StellaOps. All rights reserved.
# Licensed under BUSL-1.1.
# BSim PostgreSQL Database and Ghidra Headless Services
#
# Usage:
# docker compose -f docker-compose.bsim.yml up -d
#
# Environment variables:
# BSIM_DB_PASSWORD - PostgreSQL password for BSim database
version: '3.8'
services:
  bsim-postgres:
    image: postgres:18.1-alpine
    container_name: stellaops-bsim-db
    environment:
      POSTGRES_DB: bsim_corpus
      POSTGRES_USER: bsim_user
      POSTGRES_PASSWORD: ${BSIM_DB_PASSWORD:-stellaops_bsim_dev}
      POSTGRES_INITDB_ARGS: "-E UTF8 --locale=C"
    volumes:
      - bsim-data:/var/lib/postgresql/data
      - ./scripts/init-bsim.sql:/docker-entrypoint-initdb.d/10-init-bsim.sql:ro
    ports:
      - "5433:5432"
    networks:
      - stellaops-bsim
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U bsim_user -d bsim_corpus"]
      interval: 10s
      timeout: 5s
      retries: 5
    restart: unless-stopped

  # Ghidra Headless service for BSim analysis
  ghidra-headless:
    build:
      context: .
      dockerfile: Dockerfile.headless
    image: stellaops/ghidra-headless:11.2
    container_name: stellaops-ghidra
    depends_on:
      bsim-postgres:
        condition: service_healthy
    environment:
      BSIM_DB_URL: "postgresql://bsim-postgres:5432/bsim_corpus"
      BSIM_DB_USER: bsim_user
      BSIM_DB_PASSWORD: ${BSIM_DB_PASSWORD:-stellaops_bsim_dev}
      JAVA_HOME: /opt/java/openjdk
      MAXMEM: 4G
    volumes:
      - ghidra-projects:/projects
      - ghidra-scripts:/scripts
      - ghidra-output:/output
    networks:
      - stellaops-bsim
    deploy:
      resources:
        limits:
          cpus: '4'
          memory: 8G
    # Keep container running for ad-hoc analysis
    entrypoint: ["tail", "-f", "/dev/null"]
    restart: unless-stopped

volumes:
  bsim-data:
    driver: local
  ghidra-projects:
  ghidra-scripts:
  ghidra-output:

networks:
  stellaops-bsim:
    driver: bridge

View File

@@ -1,140 +0,0 @@
-- BSim PostgreSQL Schema Initialization
-- Copyright (c) StellaOps. All rights reserved.
-- Licensed under BUSL-1.1.
--
-- This script creates the core BSim schema structure.
-- Note: Full Ghidra BSim schema is auto-created by Ghidra tools.
-- This provides a minimal functional schema for integration testing.
-- Create schema comment
COMMENT ON DATABASE bsim_corpus IS 'Ghidra BSim function signature database for StellaOps BinaryIndex';
-- Enable required extensions
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pg_trgm";
-- BSim executables table
CREATE TABLE IF NOT EXISTS bsim_executables (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name TEXT NOT NULL,
architecture TEXT NOT NULL,
library_name TEXT,
library_version TEXT,
md5_hash BYTEA,
sha256_hash BYTEA,
date_added TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (sha256_hash)
);
-- BSim functions table
CREATE TABLE IF NOT EXISTS bsim_functions (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
executable_id UUID NOT NULL REFERENCES bsim_executables(id) ON DELETE CASCADE,
name TEXT NOT NULL,
address BIGINT NOT NULL,
flags INTEGER DEFAULT 0,
UNIQUE (executable_id, address)
);
-- BSim function vectors (feature vectors for similarity)
CREATE TABLE IF NOT EXISTS bsim_vectors (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
function_id UUID NOT NULL REFERENCES bsim_functions(id) ON DELETE CASCADE,
lsh_hash BYTEA NOT NULL, -- Locality-sensitive hash
feature_count INTEGER NOT NULL,
vector_data BYTEA NOT NULL, -- Serialized feature vector
UNIQUE (function_id)
);
-- BSim function signatures (compact fingerprints)
CREATE TABLE IF NOT EXISTS bsim_signatures (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
function_id UUID NOT NULL REFERENCES bsim_functions(id) ON DELETE CASCADE,
signature_type TEXT NOT NULL, -- 'basic', 'weighted', 'full'
signature_hash BYTEA NOT NULL,
significance REAL NOT NULL DEFAULT 0.0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (function_id, signature_type)
);
-- BSim clusters (similar function groups)
CREATE TABLE IF NOT EXISTS bsim_clusters (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name TEXT,
function_count INTEGER NOT NULL DEFAULT 0,
centroid_vector BYTEA,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Cluster membership
CREATE TABLE IF NOT EXISTS bsim_cluster_members (
cluster_id UUID NOT NULL REFERENCES bsim_clusters(id) ON DELETE CASCADE,
function_id UUID NOT NULL REFERENCES bsim_functions(id) ON DELETE CASCADE,
similarity REAL NOT NULL,
PRIMARY KEY (cluster_id, function_id)
);
-- Ingestion tracking
CREATE TABLE IF NOT EXISTS bsim_ingest_log (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
executable_id UUID REFERENCES bsim_executables(id),
library_name TEXT NOT NULL,
library_version TEXT,
functions_ingested INTEGER NOT NULL DEFAULT 0,
status TEXT NOT NULL DEFAULT 'pending',
error_message TEXT,
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
ingested_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Indexes for efficient querying
CREATE INDEX IF NOT EXISTS idx_bsim_functions_executable ON bsim_functions(executable_id);
CREATE INDEX IF NOT EXISTS idx_bsim_functions_name ON bsim_functions(name);
CREATE INDEX IF NOT EXISTS idx_bsim_vectors_lsh ON bsim_vectors USING hash (lsh_hash);
CREATE INDEX IF NOT EXISTS idx_bsim_signatures_hash ON bsim_signatures USING hash (signature_hash);
CREATE INDEX IF NOT EXISTS idx_bsim_executables_library ON bsim_executables(library_name, library_version);
CREATE INDEX IF NOT EXISTS idx_bsim_ingest_log_status ON bsim_ingest_log(status);
-- Views for common queries
CREATE OR REPLACE VIEW bsim_function_summary AS
SELECT
f.id AS function_id,
f.name AS function_name,
f.address,
e.name AS executable_name,
e.library_name,
e.library_version,
e.architecture,
s.significance
FROM bsim_functions f
JOIN bsim_executables e ON f.executable_id = e.id
LEFT JOIN bsim_signatures s ON f.id = s.function_id AND s.signature_type = 'basic';
CREATE OR REPLACE VIEW bsim_library_stats AS
SELECT
e.library_name,
e.library_version,
COUNT(DISTINCT e.id) AS executable_count,
COUNT(DISTINCT f.id) AS function_count,
MAX(l.ingested_at) AS last_ingested
FROM bsim_executables e
LEFT JOIN bsim_functions f ON e.id = f.executable_id
LEFT JOIN bsim_ingest_log l ON e.id = l.executable_id
WHERE e.library_name IS NOT NULL
GROUP BY e.library_name, e.library_version
ORDER BY e.library_name, e.library_version;
-- Grant permissions
GRANT ALL ON ALL TABLES IN SCHEMA public TO bsim_user;
GRANT ALL ON ALL SEQUENCES IN SCHEMA public TO bsim_user;
-- Insert schema version marker
INSERT INTO bsim_ingest_log (library_name, functions_ingested, status, completed_at)
VALUES ('_schema_init', 0, 'completed', now());
-- Log successful initialization
DO $$
BEGIN
RAISE NOTICE 'BSim schema initialized successfully';
END $$;
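-- Optional sanity checks (illustrative; run manually after init):
-- SELECT * FROM bsim_library_stats;
-- SELECT count(*) AS functions FROM bsim_functions;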

View File

@@ -1,44 +0,0 @@
# Health & capability endpoint contract (DOCKER-44-003)
Target services: API, Console, Orchestrator, Task Runner, Concelier, Excititor, Policy, Notify, Export, AdvisoryAI.
## HTTP paths
- `GET /health/liveness` — fast, dependency-free check; returns `200` and minimal body.
- `GET /health/readiness` — may hit critical deps (DB, bus, cache); returns `503` when not ready.
- `GET /version` — static payload with `service`, `version`, `commit`, `buildTimestamp` (ISO-8601 UTC), `source` (channel).
- `GET /metrics` — Prometheus text exposition; reuse existing instrumentation.
- `GET /capabilities` — if present for Concelier/Excititor, must include `"merge": false`.
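A quick local smoke test of the contract (host and port are illustrative; `jq` assumed available):
```bash
for p in /health/liveness /health/readiness /version /metrics; do
  curl -fsS "http://localhost:8080$p" > /dev/null && echo "OK $p"
done
# Concelier/Excititor only: merge capability must be disabled
curl -fsS http://localhost:8080/capabilities | jq -e '.merge == false'
```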
## Minimal ASP.NET 10 wiring (per service)
```csharp
var builder = WebApplication.CreateBuilder(args);

// health checks; add real checks as needed
builder.Services.AddHealthChecks();

var app = builder.Build();
app.MapHealthChecks("/health/liveness", new() { Predicate = _ => false });
app.MapHealthChecks("/health/readiness");
app.MapGet("/version", () => Results.Json(new {
    service = "StellaOps.Policy", // override per service
    version = ThisAssembly.AssemblyInformationalVersion,
    commit = ThisAssembly.Git.Commit,
    buildTimestamp = ThisAssembly.Git.CommitDate.UtcDateTime,
    source = Environment.GetEnvironmentVariable("STELLA_CHANNEL") ?? "edge"
}));
app.UseHttpMetrics();
app.MapMetrics();
app.Run();
```
- Ensure `ThisAssembly.*` source generators are enabled or substitute build vars.
- Keep `/health/liveness` lightweight; `/health/readiness` should test critical dependencies (Mongo, Redis, message bus) with timeouts.
- When adding `/capabilities`, explicitly emit `merge = false` for Concelier/Excititor.
## CI verification
- After publishing an image, run `ops/devops/docker/verify_health_endpoints.sh <image> [port]`.
- CI should fail if any required endpoint is missing or non-200.
## Deployment
- Helm/Compose should set `readOnlyRootFilesystem: true` and wire readiness/liveness probes to these paths/port.

View File

@@ -1,318 +0,0 @@
# Reproducible Build Environment Requirements
**Sprint:** SPRINT_1227_0002_0001_LB_reproducible_builders
**Task:** T12 — Document build environment requirements
---
## Overview
This document describes the environment requirements for running reproducible distro package builds. The build system supports Alpine, Debian, and RHEL package ecosystems.
---
## Hardware Requirements
### Minimum Requirements
| Resource | Minimum | Recommended |
|----------|---------|-------------|
| CPU | 4 cores | 8+ cores |
| RAM | 8 GB | 16+ GB |
| Disk | 50 GB SSD | 200+ GB NVMe |
| Network | 10 Mbps | 100+ Mbps |
### Storage Breakdown
| Directory | Purpose | Estimated Size |
|-----------|---------|----------------|
| `/var/lib/docker` | Docker images and containers | 30 GB |
| `/var/cache/stellaops/builds` | Build cache | 50 GB |
| `/var/cache/stellaops/sources` | Source package cache | 20 GB |
| `/var/cache/stellaops/artifacts` | Output artifacts | 50 GB |
---
## Software Requirements
### Host System
| Component | Version | Purpose |
|-----------|---------|---------|
| Docker | 24.0+ | Container runtime |
| Docker Compose | 2.20+ | Multi-container orchestration |
| .NET SDK | 10.0 | Worker service runtime |
| objdump | binutils 2.40+ | Binary analysis |
| readelf | binutils 2.40+ | ELF parsing |
### Container Images
The build system uses the following base images:
| Builder | Base Image | Tag |
|---------|------------|-----|
| Alpine | `alpine` | `3.19`, `3.18` |
| Debian | `debian` | `bookworm`, `bullseye` |
| RHEL | `almalinux` | `9`, `8` |
---
## Environment Variables
### Required Variables
```bash
# Build configuration
export STELLAOPS_BUILD_CACHE=/var/cache/stellaops/builds
export STELLAOPS_SOURCE_CACHE=/var/cache/stellaops/sources
export STELLAOPS_ARTIFACT_DIR=/var/cache/stellaops/artifacts
# Reproducibility settings
export TZ=UTC
export LC_ALL=C.UTF-8
# Pin SOURCE_DATE_EPOCH per package/build input (the builders derive it
# from the changelog or APKBUILD); using the current time would make
# every run differ and defeat reproducibility.
export SOURCE_DATE_EPOCH=${SOURCE_DATE_EPOCH:-1704067200}
# Docker settings
export DOCKER_BUILDKIT=1
export COMPOSE_DOCKER_CLI_BUILD=1
```
### Optional Variables
```bash
# Parallel build settings
export STELLAOPS_MAX_CONCURRENT_BUILDS=2
export STELLAOPS_BUILD_TIMEOUT=1800 # 30 minutes
# Proxy settings (if behind corporate firewall)
export HTTP_PROXY=http://proxy:8080
export HTTPS_PROXY=http://proxy:8080
export NO_PROXY=localhost,127.0.0.1
```
---
## Builder-Specific Requirements
### Alpine Builder
```dockerfile
# Required packages in builder image
apk add --no-cache \
alpine-sdk \
abuild \
sudo \
binutils \
elfutils \
build-base
```
**Normalization requirements:**
- `SOURCE_DATE_EPOCH` must be set
- Use `abuild -r` with reproducible flags
- Archive ordering: `--sort=name`
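A minimal sketch of an aports build with these settings (package path illustrative; assumes a git checkout of aports):
```bash
cd aports/main/openssl
# Pin timestamps to the APKBUILD's last change rather than "now"
export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct -- APKBUILD)
abuild -r
```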
### Debian Builder
```dockerfile
# Required packages in builder image
apt-get install -y \
build-essential \
devscripts \
dpkg-dev \
fakeroot \
binutils \
elfutils \
debhelper
```
**Normalization requirements:**
- Use `dpkg-buildpackage -b` with reproducible flags
- Set `DEB_BUILD_OPTIONS=reproducible=+all`
- Apply `dh_strip_nondeterminism` post-build
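A minimal sketch of a Debian rebuild with these settings (run from the unpacked source directory):
```bash
export DEB_BUILD_OPTIONS="reproducible=+all"
# Derive the timestamp from the package changelog
export SOURCE_DATE_EPOCH=$(dpkg-parsechangelog -S Timestamp)
dpkg-buildpackage -b -us -uc
```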
### RHEL Builder
```dockerfile
# Required packages in builder image (AlmaLinux 9)
dnf install -y \
mock \
rpm-build \
rpmdevtools \
binutils \
elfutils
```
**Normalization requirements:**
- Run mock with networking disabled (the default; do not pass `--enable-network`)
- Configure mock for deterministic builds
- Set `%_buildhost stellaops.build`
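A minimal sketch of a mock rebuild with these settings (SRPM name illustrative; mock keeps networking disabled by default):
```bash
mock -r almalinux-9-x86_64 \
  --define "_buildhost stellaops.build" \
  --define "source_date_epoch_from_changelog 1" \
  --define "clamp_mtime_to_source_date_epoch 1" \
  --rebuild openssl-3.0.7-24.el9.src.rpm
```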
---
## Compiler Flags for Reproducibility
### C/C++ Flags
```bash
CFLAGS="-fno-record-gcc-switches -fdebug-prefix-map=$(pwd)=/build -grecord-gcc-switches=off"
CXXFLAGS="${CFLAGS}"
LDFLAGS="-Wl,--build-id=sha1"
```
### Additional Flags
```bash
# Disable date/time macros
-Wdate-time -Werror=date-time
# Normalize paths
-fmacro-prefix-map=$(pwd)=/build
-ffile-prefix-map=$(pwd)=/build
```
---
## Archive Determinism
### ar (Static Libraries)
```bash
# Use deterministic mode (the D modifier zeroes member timestamps/owners)
ar crsD libfoo.a *.o
# Or pass it via the archiver flags picked up by make/automake
export AR_FLAGS=crsD
```
### tar (Package Archives)
```bash
# Deterministic tar creation
tar --sort=name \
--mtime="@${SOURCE_DATE_EPOCH}" \
--owner=0 \
--group=0 \
--numeric-owner \
-cf archive.tar directory/
```
### zip/gzip
```bash
# Use gzip -n to avoid timestamp
gzip -n file
# Use mtime for consistent timestamps
touch -d "@${SOURCE_DATE_EPOCH}" file
```
---
## Network Requirements
### Outbound Access Required
| Destination | Port | Purpose |
|-------------|------|---------|
| `dl-cdn.alpinelinux.org` | 443 | Alpine packages |
| `deb.debian.org` | 443 | Debian packages |
| `vault.centos.org` | 443 | CentOS/RHEL sources |
| `mirror.almalinux.org` | 443 | AlmaLinux packages |
| `git.*.org` | 443 | Upstream source repos |
### Air-Gapped Operation
For air-gapped environments:
1. Pre-download source packages
2. Configure local mirrors
3. Set `STELLAOPS_OFFLINE_MODE=true`
4. Use cached build artifacts
---
## Security Considerations
### Container Isolation
- Builders run in unprivileged containers
- No host network access
- Read-only source mounts
- Ephemeral containers (destroyed after build)
### Signing Keys
- Build outputs are unsigned by default
- DSSE signing requires configured key material
- Keys stored in `/etc/stellaops/keys/` or HSM
### Build Verification
```bash
# Verify reproducibility
sha256sum build1/output/* > checksums1.txt
sha256sum build2/output/* > checksums2.txt
diff checksums1.txt checksums2.txt
```
---
## Troubleshooting
### Common Issues
| Issue | Cause | Resolution |
|-------|-------|------------|
| Build timestamp differs | `SOURCE_DATE_EPOCH` not set | Export variable before build |
| Path in debug info | Missing `-fdebug-prefix-map` | Add to CFLAGS |
| ar archive differs | Deterministic mode disabled | Use `--enable-deterministic-archives` |
| tar ordering differs | Random file order | Use `--sort=name` |
### Debugging Reproducibility
```bash
# Compare two builds byte-by-byte
diffoscope build1/output/libfoo.so build2/output/libfoo.so
# Check for timestamp differences
objdump -t binary | grep -i time
# Verify no random UUIDs
strings binary | grep -E '[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}'
```
---
## Monitoring and Metrics
### Key Metrics
| Metric | Description | Target |
|--------|-------------|--------|
| `build_reproducibility_rate` | % of reproducible builds | > 95% |
| `build_duration_seconds` | Time to complete build | < 1800 |
| `fingerprint_extraction_rate` | Functions per second | > 1000 |
| `build_cache_hit_rate` | Cache effectiveness | > 80% |
### Health Checks
```bash
# Verify builder containers are ready
docker ps --filter "name=repro-builder"
# Check cache disk usage
df -h /var/cache/stellaops/
# Verify build queue
curl -s http://localhost:9090/metrics | grep stellaops_build
```
---
## References
- [Reproducible Builds](https://reproducible-builds.org/)
- [Debian Reproducible Builds](https://wiki.debian.org/ReproducibleBuilds)
- [Alpine Reproducibility](https://wiki.alpinelinux.org/wiki/Reproducible_Builds)
- [RPM Reproducibility](https://rpm-software-management.github.io/rpm/manual/reproducibility.html)

View File

@@ -1,62 +0,0 @@
# Alpine Reproducible Builder
# Creates deterministic builds of Alpine packages for fingerprint diffing
#
# Usage:
# docker build -t repro-builder-alpine:3.20 --build-arg RELEASE=3.20 .
# docker run -v ./output:/output repro-builder-alpine:3.20 build openssl 3.0.7-r0
ARG RELEASE=3.20
FROM alpine:${RELEASE}
ARG RELEASE
ENV ALPINE_RELEASE=${RELEASE}
# Install build tools and dependencies
RUN apk add --no-cache \
alpine-sdk \
abuild \
sudo \
git \
curl \
binutils \
elfutils \
coreutils \
tar \
gzip \
xz \
patch \
diffutils \
file \
&& rm -rf /var/cache/apk/*
# Create build user (abuild requires non-root)
RUN adduser -D -G abuild builder \
&& echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers \
&& mkdir -p /var/cache/distfiles \
&& chown -R builder:abuild /var/cache/distfiles
# Setup abuild
USER builder
WORKDIR /home/builder
# Generate abuild keys
RUN abuild-keygen -a -i -n
# Copy normalization and build scripts
COPY --chown=builder:abuild scripts/normalize.sh /usr/local/bin/normalize.sh
COPY --chown=builder:abuild scripts/build.sh /usr/local/bin/build.sh
COPY --chown=builder:abuild scripts/extract-functions.sh /usr/local/bin/extract-functions.sh
RUN chmod +x /usr/local/bin/*.sh
# Environment for reproducibility
ENV TZ=UTC
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
# Build output directory
VOLUME /output
WORKDIR /build
ENTRYPOINT ["/usr/local/bin/build.sh"]
CMD ["--help"]

View File

@@ -1,226 +0,0 @@
#!/bin/sh
# Alpine Reproducible Build Script
# Builds packages with deterministic settings for fingerprint generation
#
# Usage: build.sh [build|diff] <package> <version> [patch_url...]
#
# Examples:
# build.sh build openssl 3.0.7-r0
# build.sh diff openssl 3.0.7-r0 3.0.8-r0
# build.sh build openssl 3.0.7-r0 https://patch.url/CVE-2023-1234.patch
set -eu
COMMAND="${1:-help}"
PACKAGE="${2:-}"
VERSION="${3:-}"
OUTPUT_DIR="${OUTPUT_DIR:-/output}"
log() {
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*" >&2
}
show_help() {
cat <<EOF
Alpine Reproducible Builder
Usage:
build.sh build <package> <version> [patch_urls...]
Build a package with reproducible settings
build.sh diff <package> <vuln_version> <patched_version>
Build two versions and compute fingerprint diff
build.sh --help
Show this help message
Environment:
SOURCE_DATE_EPOCH Override timestamp (extracted from APKBUILD if not set)
OUTPUT_DIR Output directory (default: /output)
CFLAGS Additional compiler flags
LDFLAGS Additional linker flags
Examples:
build.sh build openssl 3.0.7-r0
build.sh build curl 8.1.0-r0 https://patch/CVE-2023-1234.patch
build.sh diff openssl 3.0.7-r0 3.0.8-r0
EOF
}
setup_reproducible_env() {
local pkg="$1"
local ver="$2"
# Extract SOURCE_DATE_EPOCH from APKBUILD if not set
if [ -z "${SOURCE_DATE_EPOCH:-}" ]; then
if [ -f "aports/main/$pkg/APKBUILD" ]; then
# Use pkgrel date or fallback to current
SOURCE_DATE_EPOCH=$(stat -c %Y "aports/main/$pkg/APKBUILD" 2>/dev/null || date +%s)
else
SOURCE_DATE_EPOCH=$(date +%s)
fi
export SOURCE_DATE_EPOCH
fi
log "SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
# Reproducible compiler flags
export CFLAGS="${CFLAGS:-} -fno-record-gcc-switches -fdebug-prefix-map=$(pwd)=/build"
export CXXFLAGS="${CXXFLAGS:-} ${CFLAGS}"
export LDFLAGS="${LDFLAGS:-}"
# Locale for deterministic sorting
export LC_ALL=C.UTF-8
export TZ=UTC
}
fetch_source() {
local pkg="$1"
local ver="$2"
log "Fetching source for $pkg-$ver"
# Clone aports if needed
if [ ! -d "aports" ]; then
git clone --depth 1 https://gitlab.alpinelinux.org/alpine/aports.git
fi
# Find package
local pkg_dir=""
for repo in main community testing; do
if [ -d "aports/$repo/$pkg" ]; then
pkg_dir="aports/$repo/$pkg"
break
fi
done
if [ -z "$pkg_dir" ]; then
log "ERROR: Package $pkg not found in aports"
return 1
fi
# Checkout specific version if needed
cd "$pkg_dir"
abuild fetch
abuild unpack
}
apply_patches() {
local src_dir="$1"
shift
for patch_url in "$@"; do
log "Applying patch: $patch_url"
curl -sSL "$patch_url" | patch -d "$src_dir" -p1
done
}
build_package() {
local pkg="$1"
local ver="$2"
shift 2
local patches="$@"
log "Building $pkg-$ver"
setup_reproducible_env "$pkg" "$ver"
cd /build
fetch_source "$pkg" "$ver"
if [ -n "$patches" ]; then
apply_patches "src/$pkg-*" $patches
fi
# Build with reproducible settings
abuild -r
# Copy output
local out_dir="$OUTPUT_DIR/$pkg-$ver"
mkdir -p "$out_dir"
cp -r ~/packages/*/*.apk "$out_dir/" 2>/dev/null || true
# Extract binaries and fingerprints
for apk in "$out_dir"/*.apk; do
[ -f "$apk" ] || continue
local apk_name=$(basename "$apk" .apk)
mkdir -p "$out_dir/extracted/$apk_name"
tar -xzf "$apk" -C "$out_dir/extracted/$apk_name"
# Extract function fingerprints
/usr/local/bin/extract-functions.sh "$out_dir/extracted/$apk_name" > "$out_dir/$apk_name.functions.json"
done
log "Build complete: $out_dir"
}
diff_versions() {
local pkg="$1"
local vuln_ver="$2"
local patched_ver="$3"
log "Building and diffing $pkg: $vuln_ver vs $patched_ver"
# Build vulnerable version
build_package "$pkg" "$vuln_ver"
# Build patched version
build_package "$pkg" "$patched_ver"
# Compute diff
local diff_out="$OUTPUT_DIR/$pkg-diff-$vuln_ver-vs-$patched_ver.json"
# Simple diff of function fingerprints
jq -s '
.[0] as $vuln |
.[1] as $patched |
{
package: "'"$pkg"'",
vulnerable_version: "'"$vuln_ver"'",
patched_version: "'"$patched_ver"'",
vulnerable_functions: ($vuln | length),
patched_functions: ($patched | length),
added: [($patched[] | select(.name as $n | ($vuln | map(.name) | index($n)) == null))],
removed: [($vuln[] | select(.name as $n | ($patched | map(.name) | index($n)) == null))],
modified: [
$vuln[] | .name as $n | .hash as $h |
($patched[] | select(.name == $n and .hash != $h)) |
{name: $n, vuln_hash: $h, patched_hash: .hash}
]
}
' \
"$OUTPUT_DIR/$pkg-$vuln_ver"/*.functions.json \
"$OUTPUT_DIR/$pkg-$patched_ver"/*.functions.json \
> "$diff_out"
log "Diff complete: $diff_out"
}
case "$COMMAND" in
build)
if [ -z "$PACKAGE" ] || [ -z "$VERSION" ]; then
log "ERROR: Package and version required"
show_help
exit 1
fi
shift 3 # Remove command, package, version so only patch URLs remain
build_package "$PACKAGE" "$VERSION" "$@"
;;
diff)
PATCHED_VERSION="${4:-}"
if [ -z "$PACKAGE" ] || [ -z "$VERSION" ] || [ -z "$PATCHED_VERSION" ]; then
log "ERROR: Package, vulnerable version, and patched version required"
show_help
exit 1
fi
diff_versions "$PACKAGE" "$VERSION" "$PATCHED_VERSION"
;;
--help|help)
show_help
;;
*)
log "ERROR: Unknown command: $COMMAND"
show_help
exit 1
;;
esac

View File

@@ -1,71 +0,0 @@
#!/bin/sh
# Extract function fingerprints from ELF binaries
# Outputs JSON array with function name, offset, size, and hashes
#
# Usage: extract-functions.sh <directory>
#
# Dependencies: objdump, readelf, sha256sum, jq
set -eu
DIR="${1:-.}"
extract_functions_from_binary() {
local binary="$1"
# Skip non-ELF files
file "$binary" | grep -q "ELF" || return 0
# Get function symbols. objdump -t prints: addr flags section size name;
# the flags column may be one or two tokens, so take the size and name
# from the end of the line instead of fixed field positions.
objdump -t "$binary" 2>/dev/null | \
awk '/ F \.text| DF \.text/ {
if (NF >= 5 && $(NF-1) ~ /^[0-9a-f]+$/ && $(NF-1) !~ /^0+$/) {
printf "%s %s %s\n", $1, $(NF-1), $NF
}
}' | while read -r offset size name; do
# Skip compiler-generated symbols
case "$name" in
__*|_GLOBAL_*|.plt*|.text*|frame_dummy|register_tm_clones|deregister_tm_clones)
continue
;;
esac
# Convert hex size to decimal
dec_size=$((16#$size))
# Skip tiny functions (likely padding)
[ "$dec_size" -lt 16 ] && continue
# Extract function bytes and compute hash
# Using objdump to get disassembly and hash the opcodes
local hash=$(objdump -d --start-address="0x$offset" --stop-address="$((16#$offset + dec_size))" "$binary" 2>/dev/null | \
grep "^[[:space:]]*[0-9a-f]*:" | \
awk '{for(i=2;i<=NF;i++){if($i~/^[0-9a-f]{2}$/){printf "%s", $i}}}' | \
sha256sum | cut -d' ' -f1)
# Output JSON object
printf '{"name":"%s","offset":"0x%s","size":%d,"hash":"%s"}\n' \
"$name" "$offset" "$dec_size" "${hash:-unknown}"
done
}
# Find all ELF binaries and join their per-function JSON objects into a
# single array. The comma bookkeeping lives in one awk process because
# variables set inside piped while-loops run in subshells and are lost,
# which previously dropped commas between binaries.
{
find "$DIR" -type f -executable 2>/dev/null | while read -r binary; do
file "$binary" 2>/dev/null | grep -q "ELF" || continue
extract_functions_from_binary "$binary"
done
} | awk 'BEGIN { print "[" } NF { if (n++) printf ",\n"; printf "%s", $0 } END { if (n) print ""; print "]" }'

View File

@@ -1,65 +0,0 @@
#!/bin/sh
# Normalization scripts for reproducible builds
# Strips non-deterministic content from build artifacts
#
# Usage: normalize.sh <directory>
set -eu
DIR="${1:-.}"
log() {
echo "[normalize] $*" >&2
}
# Strip timestamps from __DATE__ and __TIME__ macros
strip_date_time() {
log "Stripping date/time macros..."
# Already handled by SOURCE_DATE_EPOCH in modern GCC
}
# Normalize build paths
normalize_paths() {
log "Normalizing build paths..."
# Handled by -fdebug-prefix-map
}
# Normalize ar archives for deterministic ordering
normalize_archives() {
log "Normalizing ar archives..."
find "$DIR" -name "*.a" -type f | while read -r archive; do
if ar --version 2>&1 | grep -q "GNU ar"; then
# "ar rcs new.a old.a" would add the old archive as a member; instead
# rewrite members in place with zeroed timestamps/owners
objcopy --enable-deterministic-archives "$archive" 2>/dev/null || true
fi
done
}
# Strip debug sections that contain non-deterministic info
strip_debug_timestamps() {
log "Stripping debug timestamps..."
find "$DIR" -type f \( -name "*.o" -o -name "*.so" -o -name "*.so.*" -o -executable \) | while read -r obj; do
# Check if ELF
file "$obj" 2>/dev/null | grep -q "ELF" || continue
# Strip build-id if not needed (we regenerate it)
# objcopy --remove-section=.note.gnu.build-id "$obj" 2>/dev/null || true
# Remove timestamps from DWARF debug info
# This is typically handled by SOURCE_DATE_EPOCH
done
}
# Normalize tar archives
normalize_tars() {
log "Normalizing tar archives..."
# When creating tars, use:
# tar --sort=name --mtime="@${SOURCE_DATE_EPOCH}" --owner=0 --group=0 --numeric-owner
}
# Run all normalizations
normalize_paths
normalize_archives
strip_debug_timestamps
log "Normalization complete"

View File

@@ -1,59 +0,0 @@
# Debian Reproducible Builder
# Creates deterministic builds of Debian packages for fingerprint diffing
#
# Usage:
# docker build -t repro-builder-debian:bookworm --build-arg RELEASE=bookworm .
# docker run -v ./output:/output repro-builder-debian:bookworm build openssl 3.0.7-1
ARG RELEASE=bookworm
FROM debian:${RELEASE}
ARG RELEASE
ENV DEBIAN_RELEASE=${RELEASE}
ENV DEBIAN_FRONTEND=noninteractive
# Install build tools
RUN apt-get update && apt-get install -y --no-install-recommends \
build-essential \
devscripts \
dpkg-dev \
equivs \
fakeroot \
sudo \
git \
curl \
ca-certificates \
binutils \
elfutils \
coreutils \
patch \
diffutils \
file \
jq \
&& rm -rf /var/lib/apt/lists/*
# Create build user
RUN useradd -m -s /bin/bash builder \
&& echo "builder ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
USER builder
WORKDIR /home/builder
# Copy scripts
COPY --chown=builder:builder scripts/build.sh /usr/local/bin/build.sh
COPY --chown=builder:builder scripts/extract-functions.sh /usr/local/bin/extract-functions.sh
COPY --chown=builder:builder scripts/normalize.sh /usr/local/bin/normalize.sh
USER root
RUN chmod +x /usr/local/bin/*.sh
USER builder
# Environment for reproducibility
ENV TZ=UTC
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
VOLUME /output
WORKDIR /build
ENTRYPOINT ["/usr/local/bin/build.sh"]
CMD ["--help"]

View File

@@ -1,233 +0,0 @@
#!/bin/bash
# Debian Reproducible Build Script
# Builds packages with deterministic settings for fingerprint generation
#
# Usage: build.sh [build|diff] <package> <version> [patch_url...]
set -euo pipefail
COMMAND="${1:-help}"
PACKAGE="${2:-}"
VERSION="${3:-}"
OUTPUT_DIR="${OUTPUT_DIR:-/output}"
log() {
echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*" >&2
}
show_help() {
cat <<EOF
Debian Reproducible Builder
Usage:
build.sh build <package> <version> [patch_urls...]
Build a package with reproducible settings
build.sh diff <package> <vuln_version> <patched_version>
Build two versions and compute fingerprint diff
build.sh --help
Show this help message
Environment:
SOURCE_DATE_EPOCH Override timestamp (extracted from changelog if not set)
OUTPUT_DIR Output directory (default: /output)
DEB_BUILD_OPTIONS Additional build options
Examples:
build.sh build openssl 3.0.7-1
build.sh diff curl 8.1.0-1 8.1.0-2
EOF
}
setup_reproducible_env() {
local pkg="$1"
# Reproducible build flags
export DEB_BUILD_OPTIONS="${DEB_BUILD_OPTIONS:-} reproducible=+all"
export SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-$(date +%s)}"
# Compiler flags for reproducibility
export CFLAGS="${CFLAGS:-} -fno-record-gcc-switches -fdebug-prefix-map=$(pwd)=/build"
export CXXFLAGS="${CXXFLAGS:-} ${CFLAGS}"
export LC_ALL=C.UTF-8
export TZ=UTC
log "SOURCE_DATE_EPOCH=$SOURCE_DATE_EPOCH"
}
fetch_source() {
local pkg="$1"
local ver="$2"
log "Fetching source for $pkg=$ver"
mkdir -p /build/src
cd /build/src
# Enable source repositories
sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list.d/*.sources 2>/dev/null || \
sudo sed -i 's/^# deb-src/deb-src/' /etc/apt/sources.list 2>/dev/null || true
# Send tool chatter to stderr: callers capture this function's stdout,
# which must contain only the source directory path echoed at the end
sudo apt-get update >&2
# Fetch source
if [ -n "$ver" ]; then
apt-get source "${pkg}=${ver}" >&2 || apt-get source "$pkg" >&2
else
apt-get source "$pkg" >&2
fi
# Find extracted directory
local src_dir=$(ls -d "${pkg}"*/ 2>/dev/null | head -1)
if [ -z "$src_dir" ]; then
log "ERROR: Could not find source directory for $pkg"
return 1
fi
echo "$src_dir"
}
install_build_deps() {
local src_dir="$1"
log "Installing build dependencies"
cd "$src_dir"
sudo apt-get build-dep -y . || true
}
apply_patches() {
local src_dir="$1"
shift
cd "$src_dir"
for patch_url in "$@"; do
log "Applying patch: $patch_url"
curl -sSL "$patch_url" | patch -p1
done
}
build_package() {
local pkg="$1"
local ver="$2"
shift 2
local patches="${@:-}"
log "Building $pkg version $ver"
setup_reproducible_env "$pkg"
cd /build
local src_dir=$(fetch_source "$pkg" "$ver")
install_build_deps "$src_dir"
if [ -n "$patches" ]; then
apply_patches "$src_dir" $patches
fi
cd "$src_dir"
# Build with reproducible settings
dpkg-buildpackage -b -us -uc
# Copy output
local out_dir="$OUTPUT_DIR/$pkg-$ver"
mkdir -p "$out_dir"
cp -r /build/src/*.deb "$out_dir/" 2>/dev/null || true
# Extract and fingerprint
for deb in "$out_dir"/*.deb; do
[ -f "$deb" ] || continue
local deb_name=$(basename "$deb" .deb)
mkdir -p "$out_dir/extracted/$deb_name"
dpkg-deb -x "$deb" "$out_dir/extracted/$deb_name"
# Extract function fingerprints
/usr/local/bin/extract-functions.sh "$out_dir/extracted/$deb_name" > "$out_dir/$deb_name.functions.json"
done
log "Build complete: $out_dir"
}
diff_versions() {
local pkg="$1"
local vuln_ver="$2"
local patched_ver="$3"
log "Building and diffing $pkg: $vuln_ver vs $patched_ver"
# Build vulnerable version
build_package "$pkg" "$vuln_ver"
# Clean build environment
rm -rf /build/src/*
# Build patched version
build_package "$pkg" "$patched_ver"
# Compute diff
local diff_out="$OUTPUT_DIR/$pkg-diff-$vuln_ver-vs-$patched_ver.json"
jq -s '
.[0] as $vuln |
.[1] as $patched |
{
package: "'"$pkg"'",
vulnerable_version: "'"$vuln_ver"'",
patched_version: "'"$patched_ver"'",
vulnerable_functions: ($vuln | length),
patched_functions: ($patched | length),
added: [($patched[] | select(.name as $n | ($vuln | map(.name) | index($n)) == null))],
removed: [($vuln[] | select(.name as $n | ($patched | map(.name) | index($n)) == null))],
modified: [
$vuln[] | .name as $n | .hash as $h |
($patched[] | select(.name == $n and .hash != $h)) |
{name: $n, vuln_hash: $h, patched_hash: .hash}
]
}
' \
"$OUTPUT_DIR/$pkg-$vuln_ver"/*.functions.json \
"$OUTPUT_DIR/$pkg-$patched_ver"/*.functions.json \
> "$diff_out" 2>/dev/null || log "Warning: Could not compute diff"
log "Diff complete: $diff_out"
}
case "$COMMAND" in
build)
if [ -z "$PACKAGE" ]; then
log "ERROR: Package required"
show_help
exit 1
fi
shift 2 # Remove command, package
[ -n "${VERSION:-}" ] && shift # Remove version if present
build_package "$PACKAGE" "${VERSION:-}" "$@"
;;
diff)
PATCHED_VERSION="${4:-}"
if [ -z "$PACKAGE" ] || [ -z "$VERSION" ] || [ -z "$PATCHED_VERSION" ]; then
log "ERROR: Package, vulnerable version, and patched version required"
show_help
exit 1
fi
diff_versions "$PACKAGE" "$VERSION" "$PATCHED_VERSION"
;;
--help|help)
show_help
;;
*)
log "ERROR: Unknown command: $COMMAND"
show_help
exit 1
;;
esac

View File

@@ -1,67 +0,0 @@
#!/bin/bash
# Extract function fingerprints from ELF binaries
# Outputs JSON array with function name, offset, size, and hashes
set -euo pipefail
DIR="${1:-.}"
extract_functions_from_binary() {
local binary="$1"
# Skip non-ELF files
file "$binary" 2>/dev/null | grep -q "ELF" || return 0
# Get function symbols with objdump. A typical symbol-table line is:
#   0000000000001139 g     F .text  000000000000002a  main
# so $1 = offset, $3 = "F" (function), $4 = section, $5 = size, $NF = name
objdump -t "$binary" 2>/dev/null | \
awk '$3 == "F" && $4 == ".text" && $5 ~ /^[0-9a-f]+$/ {
print $1, $5, $NF
}' | while read -r offset size name; do
# Skip compiler-generated symbols
case "$name" in
__*|_GLOBAL_*|.plt*|.text*|frame_dummy|register_tm_clones|deregister_tm_clones|_start|_init|_fini)
continue
;;
esac
# Convert hex size and skip tiny stubs (< 16 bytes)
dec_size=$((16#$size))
[ "$dec_size" -ge 16 ] || continue
# Compute hash of function bytes
local hash=$(objdump -d --start-address="0x$offset" --stop-address="$((16#$offset + dec_size))" "$binary" 2>/dev/null | \
grep -E "^[[:space:]]*[0-9a-f]+:" | \
awk '{for(i=2;i<=NF;i++){if($i~/^[0-9a-f]{2}$/){printf "%s", $i}}}' | \
sha256sum | cut -d' ' -f1)
[ -n "$hash" ] || hash="unknown"
printf '{"name":"%s","offset":"0x%s","size":%d,"hash":"%s"}\n' \
"$name" "$offset" "$dec_size" "$hash"
done
}
# Emit one JSON object per function and let jq assemble the array;
# hand-built comma logic breaks because each while loop runs in its own
# subshell, so the "first" flag never persists across iterations
find "$DIR" -type f \( -executable -o -name "*.so" -o -name "*.so.*" \) 2>/dev/null | \
while read -r binary; do
extract_functions_from_binary "$binary"
done | jq -s '.'

View File

@@ -1,29 +0,0 @@
#!/bin/bash
# Normalization scripts for Debian reproducible builds
set -euo pipefail
DIR="${1:-.}"
log() {
echo "[normalize] $*" >&2
}
normalize_archives() {
log "Normalizing ar archives..."
# objcopy --enable-deterministic-archives rewrites an archive in place
# with zeroed member timestamps/uids/gids; "ar rcsD new.a old.a" would
# wrongly add the old archive as a single member of the new one
find "$DIR" -name "*.a" -type f | while read -r archive; do
objcopy --enable-deterministic-archives "$archive" 2>/dev/null || true
done
}
strip_debug_timestamps() {
log "Stripping debug timestamps..."
# Handled by SOURCE_DATE_EPOCH and DEB_BUILD_OPTIONS
}
normalize_archives
strip_debug_timestamps
log "Normalization complete"

View File

@@ -1,85 +0,0 @@
# RHEL-compatible Reproducible Build Container
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
# Task: T3 - RHEL builder with mock-based package building
#
# Uses AlmaLinux 9 as RHEL-compatible base for open source builds.
# Production RHEL builds require valid subscription.
ARG BASE_IMAGE=almalinux:9
FROM ${BASE_IMAGE} AS builder
LABEL org.opencontainers.image.title="StellaOps RHEL Reproducible Builder"
LABEL org.opencontainers.image.description="RHEL-compatible reproducible build environment for security patching"
LABEL org.opencontainers.image.vendor="StellaOps"
LABEL org.opencontainers.image.source="https://github.com/stellaops/stellaops"
# Install build dependencies (mock and diffoscope come from EPEL)
RUN dnf -y install epel-release && \
dnf -y update && \
dnf -y install \
# Core build tools
rpm-build \
rpmdevtools \
rpmlint \
mock \
# Compiler toolchain
gcc \
gcc-c++ \
make \
cmake \
autoconf \
automake \
libtool \
# Package management
dnf-plugins-core \
yum-utils \
createrepo_c \
# Binary analysis
binutils \
elfutils \
gdb \
# Reproducibility
diffoscope \
# Source control
git \
patch \
# Utilities
wget \
curl \
jq \
python3 \
python3-pip && \
dnf clean all
# Create mock user (mock requires non-root)
RUN useradd -m mockbuild && \
usermod -a -G mock mockbuild
# Set up rpmbuild directories
RUN mkdir -p /build/{BUILD,RPMS,SOURCES,SPECS,SRPMS} && \
chown -R mockbuild:mockbuild /build
# Copy build scripts
COPY scripts/build.sh /usr/local/bin/build.sh
COPY scripts/extract-functions.sh /usr/local/bin/extract-functions.sh
COPY scripts/normalize.sh /usr/local/bin/normalize.sh
COPY scripts/mock-build.sh /usr/local/bin/mock-build.sh
RUN chmod +x /usr/local/bin/*.sh
# Set reproducibility environment
ENV TZ=UTC
ENV LC_ALL=C.UTF-8
ENV LANG=C.UTF-8
# Deterministic compiler flags
ENV CFLAGS="-fno-record-gcc-switches -fdebug-prefix-map=/build=/buildroot -O2 -g"
ENV CXXFLAGS="${CFLAGS}"
# Mock configuration for reproducible builds
COPY mock/stellaops-repro.cfg /etc/mock/stellaops-repro.cfg
WORKDIR /build
USER mockbuild
ENTRYPOINT ["/usr/local/bin/build.sh"]
CMD ["--help"]

View File

@@ -1,71 +0,0 @@
# StellaOps Reproducible Build Mock Configuration
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
#
# Mock configuration optimized for reproducible RHEL/AlmaLinux builds
config_opts['root'] = 'stellaops-repro'
config_opts['target_arch'] = 'x86_64'
config_opts['legal_host_arches'] = ('x86_64',)
config_opts['chroot_setup_cmd'] = 'install @buildsys-build'
config_opts['dist'] = 'el9'
config_opts['releasever'] = '9'
# Reproducibility settings
config_opts['use_host_resolv'] = False
config_opts['rpmbuild_networking'] = False
config_opts['cleanup_on_success'] = True
config_opts['cleanup_on_failure'] = True
# Deterministic build settings (mock macro keys carry the leading %)
config_opts['macros']['%SOURCE_DATE_EPOCH'] = '%{getenv:SOURCE_DATE_EPOCH}'
config_opts['macros']['%_buildhost'] = 'stellaops.build'
config_opts['macros']['%debug_package'] = '%{nil}'
config_opts['macros']['%_default_patch_fuzz'] = '0'
# Compiler flags for reproducibility
config_opts['macros']['%optflags'] = '-O2 -g -fno-record-gcc-switches -fdebug-prefix-map=%{_builddir}=/buildroot'
# Environment normalization
config_opts['environment']['TZ'] = 'UTC'
config_opts['environment']['LC_ALL'] = 'C.UTF-8'
config_opts['environment']['LANG'] = 'C.UTF-8'
# Use AlmaLinux as RHEL-compatible base
config_opts['dnf.conf'] = """
[main]
keepcache=1
debuglevel=2
reposdir=/dev/null
logfile=/var/log/yum.log
retries=20
obsoletes=1
gpgcheck=0
assumeyes=1
syslog_ident=mock
syslog_device=
metadata_expire=0
mdpolicy=group:primary
best=1
install_weak_deps=0
protected_packages=
module_platform_id=platform:el9
user_agent={{ user_agent }}
[baseos]
name=AlmaLinux $releasever - BaseOS
mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/baseos
enabled=1
gpgcheck=0
[appstream]
name=AlmaLinux $releasever - AppStream
mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/appstream
enabled=1
gpgcheck=0
[crb]
name=AlmaLinux $releasever - CRB
mirrorlist=https://mirrors.almalinux.org/mirrorlist/$releasever/crb
enabled=1
gpgcheck=0
"""

View File

@@ -1,213 +0,0 @@
#!/bin/bash
# RHEL Reproducible Build Script
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
#
# Usage: build.sh --srpm <url_or_path> [--patch <patch_file>] [--output <dir>]
set -euo pipefail
# Default values
OUTPUT_DIR="/build/output"
WORK_DIR="/build/work"
SRPM=""
PATCH_FILE=""
SOURCE_DATE_EPOCH="${SOURCE_DATE_EPOCH:-}"
usage() {
cat <<EOF
RHEL Reproducible Build Script
Usage: $0 [OPTIONS]
Options:
--srpm <path> Path or URL to SRPM file (required)
--patch <path> Path to security patch file (optional)
--output <dir> Output directory (default: /build/output)
--epoch <timestamp> SOURCE_DATE_EPOCH value (default: from changelog)
--help Show this help message
Examples:
$0 --srpm openssl-3.0.7-1.el9.src.rpm --patch CVE-2023-0286.patch
$0 --srpm https://mirror/srpms/curl-8.0.1-1.el9.src.rpm
EOF
exit 0
}
log() {
echo "[$(date -u '+%Y-%m-%dT%H:%M:%SZ')] $*"
}
error() {
log "ERROR: $*" >&2
exit 1
}
# Parse arguments
while [[ $# -gt 0 ]]; do
case $1 in
--srpm)
SRPM="$2"
shift 2
;;
--patch)
PATCH_FILE="$2"
shift 2
;;
--output)
OUTPUT_DIR="$2"
shift 2
;;
--epoch)
SOURCE_DATE_EPOCH="$2"
shift 2
;;
--help)
usage
;;
*)
error "Unknown option: $1"
;;
esac
done
[[ -z "${SRPM}" ]] && error "SRPM path required. Use --srpm <path>"
# Create directories
mkdir -p "${OUTPUT_DIR}" "${WORK_DIR}"
cd "${WORK_DIR}"
log "Starting RHEL reproducible build"
log "SRPM: ${SRPM}"
# Download or copy SRPM
if [[ "${SRPM}" =~ ^https?:// ]]; then
log "Downloading SRPM..."
curl -fsSL -o source.src.rpm "${SRPM}"
SRPM="source.src.rpm"
elif [[ ! -f "${SRPM}" ]]; then
error "SRPM file not found: ${SRPM}"
fi
# Install SRPM (extract into SOURCES/ so the later SOURCES/ copies work;
# plain cpio extraction would leave spec and sources flat in the cwd)
log "Installing SRPM..."
SRPM=$(readlink -f "${SRPM}")
mkdir -p SOURCES
(cd SOURCES && rpm2cpio "${SRPM}" | cpio -idmv)
mv SOURCES/*.spec . 2>/dev/null || true
# Extract SOURCE_DATE_EPOCH from changelog if not provided
if [[ -z "${SOURCE_DATE_EPOCH}" ]]; then
SPEC_FILE=$(find . -name "*.spec" | head -1)
if [[ -n "${SPEC_FILE}" ]]; then
# Extract date (day, month, day-of-month, year) from the first changelog
# entry, e.g. "* Tue Mar 07 2023 Jane Doe <jd@example> - 3.0.7-1"
CHANGELOG_DATE=$(grep -m1 '^\*' "${SPEC_FILE}" | sed 's/^\* //' | cut -d' ' -f1-4)
if [[ -n "${CHANGELOG_DATE}" ]]; then
SOURCE_DATE_EPOCH=$(date -d "${CHANGELOG_DATE}" +%s 2>/dev/null || echo "")
fi
fi
if [[ -z "${SOURCE_DATE_EPOCH}" ]]; then
SOURCE_DATE_EPOCH=$(date +%s)
log "Warning: Using current time for SOURCE_DATE_EPOCH"
fi
fi
export SOURCE_DATE_EPOCH
log "SOURCE_DATE_EPOCH: ${SOURCE_DATE_EPOCH}"
# Apply security patch if provided
if [[ -n "${PATCH_FILE}" ]]; then
if [[ ! -f "${PATCH_FILE}" ]]; then
error "Patch file not found: ${PATCH_FILE}"
fi
log "Applying security patch: ${PATCH_FILE}"
# Copy patch to SOURCES
PATCH_NAME=$(basename "${PATCH_FILE}")
cp "${PATCH_FILE}" SOURCES/
# Add patch to spec file
SPEC_FILE=$(find . -name "*.spec" | head -1)
if [[ -n "${SPEC_FILE}" ]]; then
# Find last Patch line or Source line
LAST_PATCH=$(grep -n '^Patch[0-9]*:' "${SPEC_FILE}" | tail -1 | cut -d: -f1)
if [[ -z "${LAST_PATCH}" ]]; then
LAST_PATCH=$(grep -n '^Source[0-9]*:' "${SPEC_FILE}" | tail -1 | cut -d: -f1)
fi
# Calculate next patch number (grep -c prints 0 itself on no match, so
# appending "|| echo 0" would yield two lines and break the arithmetic)
PATCH_NUM=$(grep -c '^Patch[0-9]*:' "${SPEC_FILE}" || true)
PATCH_NUM=$((PATCH_NUM + 100)) # Use 100+ for security patches
# Insert patch declaration (skip if the spec has no Source/Patch lines)
if [[ -n "${LAST_PATCH}" ]]; then
sed -i "${LAST_PATCH}a Patch${PATCH_NUM}: ${PATCH_NAME}" "${SPEC_FILE}"
fi
# Add %patch to %prep if not using autosetup
if ! grep -q '%autosetup' "${SPEC_FILE}"; then
PREP_LINE=$(grep -n '^%prep' "${SPEC_FILE}" | head -1 | cut -d: -f1)
if [[ -n "${PREP_LINE}" ]]; then
# Find last %patch line in %prep
LAST_PATCH_LINE=$(sed -n "${PREP_LINE},\$p" "${SPEC_FILE}" | grep -n '^%patch' | tail -1 | cut -d: -f1)
if [[ -n "${LAST_PATCH_LINE}" ]]; then
INSERT_LINE=$((PREP_LINE + LAST_PATCH_LINE))
else
INSERT_LINE=$((PREP_LINE + 1))
fi
sed -i "${INSERT_LINE}a %patch${PATCH_NUM} -p1" "${SPEC_FILE}"
fi
fi
fi
fi
# Set up rpmbuild tree
log "Setting up rpmbuild tree..."
rpmdev-setuptree || true
# Copy sources and spec
cp -r SOURCES/* ~/rpmbuild/SOURCES/ 2>/dev/null || true
cp *.spec ~/rpmbuild/SPECS/ 2>/dev/null || true
# Build using mock for isolation and reproducibility
log "Building with mock (stellaops-repro config)..."
SPEC_FILE=$(find ~/rpmbuild/SPECS -name "*.spec" | head -1)
if [[ -n "${SPEC_FILE}" ]]; then
# Build SRPM first
rpmbuild -bs "${SPEC_FILE}"
BUILT_SRPM=$(find ~/rpmbuild/SRPMS -name "*.src.rpm" | head -1)
if [[ -n "${BUILT_SRPM}" ]]; then
# Build with mock
mock -r stellaops-repro --rebuild "${BUILT_SRPM}" --resultdir="${OUTPUT_DIR}/rpms"
else
error "SRPM build failed"
fi
else
error "No spec file found"
fi
# Extract function fingerprints from built RPMs
log "Extracting function fingerprints..."
for rpm in "${OUTPUT_DIR}/rpms"/*.rpm; do
if [[ -f "${rpm}" ]] && [[ ! "${rpm}" =~ \.src\.rpm$ ]]; then
/usr/local/bin/extract-functions.sh "${rpm}" "${OUTPUT_DIR}/fingerprints"
fi
done
# Generate build manifest
log "Generating build manifest..."
cat > "${OUTPUT_DIR}/manifest.json" <<EOF
{
"builder": "rhel",
"base_image": "${BASE_IMAGE:-almalinux:9}",
"source_date_epoch": ${SOURCE_DATE_EPOCH},
"build_timestamp": "$(date -u '+%Y-%m-%dT%H:%M:%SZ')",
"srpm": "${SRPM}",
"patch_applied": $(if [[ -n "${PATCH_FILE}" ]]; then echo "\"${PATCH_FILE}\""; else echo "null"; fi),
"rpm_outputs": $(find "${OUTPUT_DIR}/rpms" -name "*.rpm" ! -name "*.src.rpm" -printf '"%f",' 2>/dev/null | sed 's/,$//' | sed 's/^/[/' | sed 's/$/]/'),
"fingerprint_files": $(find "${OUTPUT_DIR}/fingerprints" -name "*.json" -printf '"%f",' 2>/dev/null | sed 's/,$//' | sed 's/^/[/' | sed 's/$/]/')
}
EOF
log "Build complete. Output in: ${OUTPUT_DIR}"
log "Manifest: ${OUTPUT_DIR}/manifest.json"

View File

@@ -1,73 +0,0 @@
#!/bin/bash
# RHEL Function Extraction Script
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
#
# Extracts function-level fingerprints from RPM packages
set -euo pipefail
RPM_PATH="${1:-}"
OUTPUT_DIR="${2:-/build/fingerprints}"
[[ -z "${RPM_PATH}" ]] && { echo "Usage: $0 <rpm_path> [output_dir]"; exit 1; }
[[ ! -f "${RPM_PATH}" ]] && { echo "RPM not found: ${RPM_PATH}"; exit 1; }
mkdir -p "${OUTPUT_DIR}"
RPM_NAME=$(rpm -qp --qf '%{NAME}' "${RPM_PATH}" 2>/dev/null)
RPM_VERSION=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "${RPM_PATH}" 2>/dev/null)
WORK_DIR=$(mktemp -d)
trap "rm -rf ${WORK_DIR}" EXIT
cd "${WORK_DIR}"
# Extract RPM contents
rpm2cpio "${RPM_PATH}" | cpio -idmv 2>/dev/null
# Find ELF binaries
find . -type f -exec file {} \; | grep -E 'ELF.*(executable|shared object)' | cut -d: -f1 | while read -r binary; do
BINARY_NAME=$(basename "${binary}")
BINARY_PATH="${binary#./}"
# Get build-id if present
BUILD_ID=$(readelf -n "${binary}" 2>/dev/null | grep 'Build ID:' | awk '{print $3}' || echo "")
# Extract function symbols
OUTPUT_FILE="${OUTPUT_DIR}/${RPM_NAME}_${BINARY_NAME}.json"
{
echo "{"
echo " \"package\": \"${RPM_NAME}\","
echo " \"version\": \"${RPM_VERSION}\","
echo " \"binary\": \"${BINARY_PATH}\","
echo " \"build_id\": \"${BUILD_ID}\","
echo " \"extracted_at\": \"$(date -u '+%Y-%m-%dT%H:%M:%SZ')\","
echo " \"functions\": ["
# Extract function addresses and sizes using nm and objdump
FIRST=true
nm -S --defined-only "${binary}" 2>/dev/null | grep -E '^[0-9a-f]+ [0-9a-f]+ [Tt]' | while read -r addr size type name; do
if [[ "${FIRST}" == "true" ]]; then
FIRST=false
else
echo ","
fi
# Calculate function hash from disassembly
FUNC_HASH=$(objdump -d --start-address=0x${addr} --stop-address=$((0x${addr} + 0x${size})) "${binary}" 2>/dev/null | \
grep -E '^\s+[0-9a-f]+:' | awk '{$1=""; print}' | sha256sum | cut -d' ' -f1)
printf ' {"name": "%s", "address": "0x%s", "size": %d, "hash": "%s"}' \
"${name}" "${addr}" "$((0x${size}))" "${FUNC_HASH}"
done || true
echo ""
echo " ]"
echo "}"
} > "${OUTPUT_FILE}"
echo "Extracted: ${OUTPUT_FILE}"
done
echo "Function extraction complete for: ${RPM_NAME}"

View File

@@ -1,34 +0,0 @@
#!/bin/bash
# RHEL Mock Build Script
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
#
# Builds SRPMs using mock for isolation and reproducibility
set -euo pipefail
SRPM="${1:-}"
RESULT_DIR="${2:-/build/output}"
CONFIG="${3:-stellaops-repro}"
[[ -z "${SRPM}" ]] && { echo "Usage: $0 <srpm> [result_dir] [mock_config]"; exit 1; }
[[ ! -f "${SRPM}" ]] && { echo "SRPM not found: ${SRPM}"; exit 1; }
mkdir -p "${RESULT_DIR}"
echo "Building SRPM with mock: ${SRPM}"
echo "Config: ${CONFIG}"
echo "Output: ${RESULT_DIR}"
# Initialize mock if needed
mock -r "${CONFIG}" --init
# Build with reproducibility settings
mock -r "${CONFIG}" \
--rebuild "${SRPM}" \
--resultdir="${RESULT_DIR}" \
--define "SOURCE_DATE_EPOCH ${SOURCE_DATE_EPOCH:-$(date +%s)}" \
--define "_buildhost stellaops.build" \
--define "debug_package %{nil}"
echo "Build complete. Results in: ${RESULT_DIR}"
ls -la "${RESULT_DIR}"
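# Example (illustrative): pin the epoch so two runs are byte-comparable
#   SOURCE_DATE_EPOCH=1704067200 mock-build.sh curl-8.0.1-1.el9.src.rpm /build/out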

View File

@@ -1,83 +0,0 @@
#!/bin/bash
# RHEL Build Normalization Script
# Sprint: SPRINT_1227_0002_0001 (Reproducible Builders)
#
# Normalizes RPM build environment for reproducibility
set -euo pipefail
# Normalize environment
export TZ=UTC
export LC_ALL=C.UTF-8
export LANG=C.UTF-8
# Deterministic compiler flags
export CFLAGS="${CFLAGS:--fno-record-gcc-switches -fdebug-prefix-map=$(pwd)=/buildroot -O2 -g}"
export CXXFLAGS="${CXXFLAGS:-${CFLAGS}}"
# Debug-info variance is handled by the mock macros (%debug_package,
# %optflags); DEB_BUILD_OPTIONS is a Debian-ism with no effect on rpmbuild
# RPM-specific reproducibility
export RPM_BUILD_NCPUS=1
# Normalize timestamps in archives
normalize_ar() {
local archive="$1"
# objcopy rewrites the archive in place with zeroed timestamps/uids/gids;
# "ar rcs new.a old.a" would wrongly add the old archive as a member
if command -v objcopy &>/dev/null; then
objcopy --enable-deterministic-archives "${archive}" || true
fi
}
# Normalize timestamps in tar archives
normalize_tar() {
local archive="$1"
local mtime="${SOURCE_DATE_EPOCH:-0}"
# Repack with deterministic settings
local tmp_dir=$(mktemp -d)
tar -xf "${archive}" -C "${tmp_dir}"
tar --sort=name \
--mtime="@${mtime}" \
--owner=0 --group=0 \
--numeric-owner \
-cf "${archive}.new" -C "${tmp_dir}" .
mv "${archive}.new" "${archive}"
rm -rf "${tmp_dir}"
}
# Normalize __pycache__ timestamps
normalize_python() {
find . -name '__pycache__' -type d -exec rm -rf {} + 2>/dev/null || true
find . -name '*.pyc' -delete 2>/dev/null || true
}
# Strip build paths from binaries
strip_build_paths() {
local binary="$1"
if command -v objcopy &>/dev/null; then
# Drop the build-id note; it is derived from the (path-dependent) build inputs
objcopy --remove-section=.note.gnu.build-id "${binary}" 2>/dev/null || true
fi
}
# Main normalization
normalize_build() {
echo "Normalizing build environment..."
# Normalize Python bytecode
normalize_python
# Find and normalize archives
find . -name '*.a' -type f | while read -r ar; do
normalize_ar "${ar}"
done
echo "Normalization complete"
}
# If sourced, export functions; if executed, run normalization
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
normalize_build
else
export -f normalize_ar normalize_tar normalize_python strip_build_paths normalize_build
fi
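# Example (illustrative): source the helpers from another build step
#   . /usr/local/bin/normalize.sh
#   normalize_tar SOURCES/vendor.tar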

View File

@@ -1,49 +0,0 @@
# devops/docker/schema-versions/Dockerfile
# Versioned PostgreSQL container for schema evolution testing
# Sprint: SPRINT_20260105_002_005_TEST_cross_cutting
# Task: CCUT-008
#
# USAGE:
# ======
# Build for specific module and version:
# docker build --build-arg MODULE=scanner --build-arg SCHEMA_VERSION=v1.2.0 \
# -t stellaops/schema-test:scanner-v1.2.0 .
#
# Run for testing:
# docker run -d -p 5432:5432 stellaops/schema-test:scanner-v1.2.0
ARG POSTGRES_VERSION=16
FROM postgres:${POSTGRES_VERSION}-alpine
# Build arguments
ARG MODULE=scanner
ARG SCHEMA_VERSION=latest
ARG SCHEMA_DATE=""
# Labels for identification
LABEL org.opencontainers.image.title="StellaOps Schema Test - ${MODULE}"
LABEL org.opencontainers.image.description="PostgreSQL with ${MODULE} schema version ${SCHEMA_VERSION}"
LABEL org.opencontainers.image.version="${SCHEMA_VERSION}"
LABEL org.stellaops.module="${MODULE}"
LABEL org.stellaops.schema.version="${SCHEMA_VERSION}"
LABEL org.stellaops.schema.date="${SCHEMA_DATE}"
# Environment variables
ENV POSTGRES_USER=stellaops_test
ENV POSTGRES_PASSWORD=test_password
ENV POSTGRES_DB=stellaops_schema_test
ENV STELLAOPS_MODULE=${MODULE}
ENV STELLAOPS_SCHEMA_VERSION=${SCHEMA_VERSION}
# Copy initialization scripts
COPY docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/
# Copy module-specific schema
COPY schemas/${MODULE}/ /schemas/${MODULE}/
# Health check
HEALTHCHECK --interval=10s --timeout=5s --start-period=30s --retries=3 \
CMD pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB} || exit 1
# Expose PostgreSQL port
EXPOSE 5432
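# Quick check after "docker run" (illustrative; the _schema_metadata table
# is created by the init script):
#   psql postgresql://stellaops_test:test_password@localhost:5432/stellaops_schema_test \
#     -c "SELECT * FROM _schema_metadata;"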

View File

@@ -1,179 +0,0 @@
#!/bin/bash
# build-schema-images.sh
# Build versioned PostgreSQL images for schema evolution testing
# Sprint: SPRINT_20260105_002_005_TEST_cross_cutting
# Task: CCUT-008
#
# USAGE:
# ======
# Build all versions for a module:
# ./build-schema-images.sh scanner
#
# Build specific version:
# ./build-schema-images.sh scanner v1.2.0
#
# Build all modules:
# ./build-schema-images.sh --all
set -e
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
REPO_ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
REGISTRY="${SCHEMA_REGISTRY:-ghcr.io/stellaops}"
POSTGRES_VERSION="${POSTGRES_VERSION:-16}"
# Modules with schema evolution support
MODULES=("scanner" "concelier" "evidencelocker" "authority" "sbomservice" "policy")
usage() {
echo "Usage: $0 <module|--all> [version]"
echo ""
echo "Arguments:"
echo " module Module name (scanner, concelier, evidencelocker, authority, sbomservice, policy)"
echo " --all Build all modules"
echo " version Optional specific version to build (default: all versions)"
echo ""
echo "Environment variables:"
echo " SCHEMA_REGISTRY Container registry (default: ghcr.io/stellaops)"
echo " POSTGRES_VERSION PostgreSQL version (default: 16)"
echo " PUSH_IMAGES Set to 'true' to push images after build"
exit "${1:-1}"
}
# Get schema versions from git tags or migration files
get_schema_versions() {
local module=$1
local versions=()
# Check for version tags
local tags=$(git tag -l "${module}-schema-v*" 2>/dev/null | sed "s/${module}-schema-//" | sort -V)
if [ -n "$tags" ]; then
versions=($tags)
else
# Fall back to migration file count
local migration_dir="$REPO_ROOT/docs/db/migrations/${module}"
if [ -d "$migration_dir" ]; then
local count=$(ls -1 "$migration_dir"/*.sql 2>/dev/null | wc -l)
for i in $(seq 1 $count); do
versions+=("v1.0.$i")
done
fi
fi
# Always include 'latest'
versions+=("latest")
echo "${versions[@]}"
}
# Copy schema files to build context
prepare_schema_context() {
local module=$1
local version=$2
local build_dir="$SCRIPT_DIR/.build/${module}/${version}"
mkdir -p "$build_dir/schemas/${module}"
mkdir -p "$build_dir/docker-entrypoint-initdb.d"
# Copy entrypoint scripts
cp "$SCRIPT_DIR/docker-entrypoint-initdb.d/"*.sh "$build_dir/docker-entrypoint-initdb.d/"
# Copy base schema
local base_schema="$REPO_ROOT/docs/db/schemas/${module}.sql"
if [ -f "$base_schema" ]; then
cp "$base_schema" "$build_dir/schemas/${module}/base.sql"
fi
# Copy migrations directory
local migrations_dir="$REPO_ROOT/docs/db/migrations/${module}"
if [ -d "$migrations_dir" ]; then
mkdir -p "$build_dir/schemas/${module}/migrations"
cp "$migrations_dir"/*.sql "$build_dir/schemas/${module}/migrations/" 2>/dev/null || true
fi
echo "$build_dir"
}
# Build image for module and version
build_image() {
local module=$1
local version=$2
echo "Building ${module} schema version ${version}..."
local build_dir=$(prepare_schema_context "$module" "$version")
local image_tag="${REGISTRY}/schema-test:${module}-${version}"
local schema_date=$(date -u +%Y-%m-%dT%H:%M:%SZ)
# Copy Dockerfile to build context
cp "$SCRIPT_DIR/Dockerfile" "$build_dir/"
# Build the image
docker build \
--build-arg MODULE="$module" \
--build-arg SCHEMA_VERSION="$version" \
--build-arg SCHEMA_DATE="$schema_date" \
--build-arg POSTGRES_VERSION="$POSTGRES_VERSION" \
-t "$image_tag" \
"$build_dir"
echo "Built: $image_tag"
# Push if requested
if [ "$PUSH_IMAGES" = "true" ]; then
echo "Pushing: $image_tag"
docker push "$image_tag"
fi
# Cleanup build directory
rm -rf "$build_dir"
}
# Build all versions for a module
build_module() {
local module=$1
local target_version=$2
echo "========================================"
echo "Building schema images for: $module"
echo "========================================"
if [ -n "$target_version" ]; then
build_image "$module" "$target_version"
else
local versions=$(get_schema_versions "$module")
for version in $versions; do
build_image "$module" "$version"
done
fi
}
# Main
if [ $# -lt 1 ]; then
usage
fi
case "$1" in
--all)
for module in "${MODULES[@]}"; do
build_module "$module" "$2"
done
;;
--help|-h)
usage 0
;;
*)
if [[ " ${MODULES[*]} " =~ " $1 " ]]; then
build_module "$1" "$2"
else
echo "Error: Unknown module '$1'"
echo "Valid modules: ${MODULES[*]}"
exit 1
fi
;;
esac
echo ""
echo "Build complete!"
echo "To push images, run with PUSH_IMAGES=true"

View File

@@ -1,70 +0,0 @@
#!/bin/bash
# 00-init-schema.sh
# Initialize PostgreSQL with module schema for testing
# Sprint: SPRINT_20260105_002_005_TEST_cross_cutting
# Task: CCUT-008
set -e
echo "Initializing schema for module: ${STELLAOPS_MODULE}"
echo "Schema version: ${STELLAOPS_SCHEMA_VERSION}"
# Create extensions
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
CREATE EXTENSION IF NOT EXISTS "btree_gist";
EOSQL
# Apply base schema if exists
BASE_SCHEMA="/schemas/${STELLAOPS_MODULE}/base.sql"
if [ -f "$BASE_SCHEMA" ]; then
echo "Applying base schema: $BASE_SCHEMA"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$BASE_SCHEMA"
fi
# Apply versioned schema if exists
VERSION_SCHEMA="/schemas/${STELLAOPS_MODULE}/${STELLAOPS_SCHEMA_VERSION}.sql"
if [ -f "$VERSION_SCHEMA" ]; then
echo "Applying version schema: $VERSION_SCHEMA"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$VERSION_SCHEMA"
fi
# Apply all migrations up to version
MIGRATIONS_DIR="/schemas/${STELLAOPS_MODULE}/migrations"
if [ -d "$MIGRATIONS_DIR" ]; then
echo "Applying migrations from: $MIGRATIONS_DIR"
# Versions follow v1.0.<n> (see build-schema-images.sh), so compare on the
# patch component only; squashing every digit (v1.0.2 -> "102") would
# mis-compare against migration sequence numbers
VERSION_NUM=$(echo "$STELLAOPS_SCHEMA_VERSION" | sed 's/^v//' | awk -F. '{print $NF}' | sed 's/[^0-9]//g')
for migration in $(ls -1 "$MIGRATIONS_DIR"/*.sql 2>/dev/null | sort -V); do
MIGRATION_VERSION=$(basename "$migration" .sql | sed 's/[^0-9]//g')
if [ -n "$VERSION_NUM" ] && [ "$MIGRATION_VERSION" -gt "$VERSION_NUM" ]; then
echo "Skipping migration $migration (version $MIGRATION_VERSION > $VERSION_NUM)"
continue
fi
echo "Applying migration: $migration"
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$migration"
done
fi
# Record schema version in metadata table
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
CREATE TABLE IF NOT EXISTS _schema_metadata (
key TEXT PRIMARY KEY,
value TEXT NOT NULL,
updated_at TIMESTAMPTZ DEFAULT NOW()
);
INSERT INTO _schema_metadata (key, value)
VALUES
('module', '${STELLAOPS_MODULE}'),
('schema_version', '${STELLAOPS_SCHEMA_VERSION}'),
('initialized_at', NOW()::TEXT)
ON CONFLICT (key) DO UPDATE SET value = EXCLUDED.value, updated_at = NOW();
EOSQL
echo "Schema initialization complete for ${STELLAOPS_MODULE} version ${STELLAOPS_SCHEMA_VERSION}"

View File

@@ -1,63 +0,0 @@
# StellaOps Timeline Service
# Multi-stage build for optimized production image
FROM mcr.microsoft.com/dotnet/sdk:10.0-preview AS build
WORKDIR /src
# Copy solution and project files for restore
COPY ["src/Timeline/StellaOps.Timeline.WebService/StellaOps.Timeline.WebService.csproj", "src/Timeline/StellaOps.Timeline.WebService/"]
COPY ["src/Timeline/__Libraries/StellaOps.Timeline.Core/StellaOps.Timeline.Core.csproj", "src/Timeline/__Libraries/StellaOps.Timeline.Core/"]
COPY ["src/__Libraries/StellaOps.Eventing/StellaOps.Eventing.csproj", "src/__Libraries/StellaOps.Eventing/"]
COPY ["src/__Libraries/StellaOps.HybridLogicalClock/StellaOps.HybridLogicalClock.csproj", "src/__Libraries/StellaOps.HybridLogicalClock/"]
COPY ["src/__Libraries/StellaOps.Microservice/StellaOps.Microservice.csproj", "src/__Libraries/StellaOps.Microservice/"]
COPY ["src/__Libraries/StellaOps.Replay.Core/StellaOps.Replay.Core.csproj", "src/__Libraries/StellaOps.Replay.Core/"]
COPY ["nuget.config", "."]
COPY ["Directory.Build.props", "."]
COPY ["Directory.Packages.props", "."]
# Restore dependencies
RUN dotnet restore "src/Timeline/StellaOps.Timeline.WebService/StellaOps.Timeline.WebService.csproj"
# Copy source code
COPY ["src/", "src/"]
# Build
WORKDIR /src/src/Timeline/StellaOps.Timeline.WebService
RUN dotnet build -c Release -o /app/build --no-restore
# Publish
FROM build AS publish
RUN dotnet publish -c Release -o /app/publish --no-build /p:UseAppHost=false
# Runtime image
FROM mcr.microsoft.com/dotnet/aspnet:10.0-preview AS runtime
WORKDIR /app
# Create non-root user
RUN addgroup --system --gid 1000 stellaops && \
adduser --system --uid 1000 --ingroup stellaops stellaops
# Copy published files
COPY --from=publish /app/publish .
# Set ownership
RUN chown -R stellaops:stellaops /app
# The aspnet base image may not include curl; install it so HEALTHCHECK works
RUN apt-get update && \
apt-get install -y --no-install-recommends curl && \
rm -rf /var/lib/apt/lists/*
# Switch to non-root user
USER stellaops
# Environment configuration
ENV ASPNETCORE_URLS=http://+:8080 \
ASPNETCORE_ENVIRONMENT=Production \
DOTNET_EnableDiagnostics=0 \
DOTNET_SYSTEM_GLOBALIZATION_INVARIANT=false
# Health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
CMD curl -f http://localhost:8080/health || exit 1
# Expose port
EXPOSE 8080
# Entry point
ENTRYPOINT ["dotnet", "StellaOps.Timeline.WebService.dll"]