refactor: JobEngine cleanup + crypto compose refactor + sprint plans + timeline merge prep

- Remove zombie JobEngine WebService (no container runs it)
- Remove dangling STELLAOPS_JOBENGINE_URL; replace with STELLAOPS_RELEASE_ORCHESTRATOR_URL
- Update Timeline audit paths to release-orchestrator
- Extract smremote to docker-compose.crypto-provider.smremote.yml
- Rename crypto compose files for consistent naming
- Add crypto provider health probe API (CP-001) + tenant preferences (CP-002)
- Create sprint plans: crypto picker, VulnExplorer merge, scheduler plugins
- Timeline merge prep: ingestion worker relocated to infrastructure lib

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in: master
2026-04-08 13:45:19 +03:00
parent 59e7f25d96
commit 886ff6f9d2
118 changed files with 1593 additions and 17761 deletions


@@ -52,11 +52,19 @@ Consolidated Docker Compose configuration for the StellaOps platform. All profil
### Crypto Provider Overlays
Each crypto provider is an optional compose overlay:
- `docker-compose.crypto-provider.smremote.yml` -- Chinese ShangMi (SM2/SM3/SM4) microservice (extracted from main stack)
- `docker-compose.crypto-provider.cryptopro.yml` -- Russian GOST via CryptoPro CSP
- `docker-compose.crypto-provider.crypto-sim.yml` -- Universal crypto simulator for dev/test
Usage: `docker compose -f docker-compose.stella-ops.yml -f docker-compose.crypto-provider.smremote.yml up -d`
| File | Purpose | Use Case |
|------|---------|----------|
| `docker-compose.crypto-sim.yml` | Universal crypto simulation | Testing without licensed crypto |
| `docker-compose.cryptopro.yml` | CryptoPro CSP (real GOST) | Production Russia deployments |
| `docker-compose.sm-remote.yml` | SM Remote service (real SM2) | Production China deployments |
| `docker-compose.crypto-provider.smremote.yml` | SmRemote microservice (SM2/SM3/SM4) | China deployments (router-integrated) |
| `docker-compose.crypto-provider.cryptopro.yml` | CryptoPro CSP (real GOST) | Production Russia deployments |
| `docker-compose.crypto-provider.crypto-sim.yml` | Universal crypto simulation | Testing without licensed crypto |
| `docker-compose.sm-remote.yml` | Standalone SM Remote with HSM support | China production with OSCCA-certified HSM |
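The overlay pattern above relies on Compose's merge behavior: `docker compose -f base.yml -f overlay.yml` combines service mappings key by key, with the later file winning. A minimal Python sketch of that last-wins recursive merge (the service slices are hypothetical, and real Compose has extra rules for lists such as `ports` and `volumes`):

```python
def merge(base: dict, overlay: dict) -> dict:
    """Last-wins recursive merge of mappings, roughly what `-f base.yml -f overlay.yml` does."""
    out = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

# Hypothetical slices of the base stack and a crypto overlay
base = {"services": {"signer": {
    "image": "stellaops/signer:dev",
    "environment": {"ASPNETCORE_URLS": "http://+:8080"},
}}}
overlay = {"services": {"signer": {
    "environment": {"STELLAOPS_CRYPTO_ENABLE_SIM": "1"},
}}}

merged = merge(base, overlay)
assert merged["services"]["signer"]["image"] == "stellaops/signer:dev"  # kept from base
assert merged["services"]["signer"]["environment"] == {
    "ASPNETCORE_URLS": "http://+:8080",
    "STELLAOPS_CRYPTO_ENABLE_SIM": "1",
}
```

This is why the crypto-provider overlays only need to declare the environment keys they add; everything else carries over from the base stack.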
### Additional Overlays
@@ -435,17 +443,17 @@ PLAYWRIGHT_BASE_URL=https://stella-ops.local \
```bash
docker compose -f docker-compose.stella-ops.yml \
-f docker-compose.compliance-china.yml \
-f docker-compose.crypto-sim.yml up -d
-f docker-compose.crypto-provider.crypto-sim.yml up -d
```
**For Production (real SM crypto):**
```bash
docker compose -f docker-compose.stella-ops.yml \
-f docker-compose.compliance-china.yml \
-f docker-compose.sm-remote.yml up -d
-f docker-compose.crypto-provider.smremote.yml up -d
```
**With OSCCA-certified HSM:**
**With standalone SM Remote + OSCCA-certified HSM:**
```bash
# Set HSM connection details in environment
export SM_REMOTE_HSM_URL="https://sm-hsm.example.com:8900"
@@ -469,7 +477,7 @@ docker compose -f docker-compose.stella-ops.yml \
```bash
docker compose -f docker-compose.stella-ops.yml \
-f docker-compose.compliance-russia.yml \
-f docker-compose.crypto-sim.yml up -d
-f docker-compose.crypto-provider.crypto-sim.yml up -d
```
**For Production (CryptoPro CSP):**
@@ -477,7 +485,7 @@ docker compose -f docker-compose.stella-ops.yml \
# CryptoPro requires EULA acceptance
CRYPTOPRO_ACCEPT_EULA=1 docker compose -f docker-compose.stella-ops.yml \
-f docker-compose.compliance-russia.yml \
-f docker-compose.cryptopro.yml up -d
-f docker-compose.crypto-provider.cryptopro.yml up -d
```
**Requirements for CryptoPro:**
@@ -498,7 +506,7 @@ CRYPTOPRO_ACCEPT_EULA=1 docker compose -f docker-compose.stella-ops.yml \
```bash
docker compose -f docker-compose.stella-ops.yml \
-f docker-compose.compliance-eu.yml \
-f docker-compose.crypto-sim.yml up -d
-f docker-compose.crypto-provider.crypto-sim.yml up -d
```
**For Production:**
@@ -518,7 +526,7 @@ docker compose -f docker-compose.stella-ops.yml \
## Crypto Simulation Details
The `docker-compose.crypto-sim.yml` overlay provides a unified simulation service for all sovereign crypto profiles:
The `docker-compose.crypto-provider.crypto-sim.yml` overlay provides a unified simulation service for all sovereign crypto profiles:
| Algorithm ID | Simulation | Use Case |
|--------------|------------|----------|


@@ -11,7 +11,7 @@
# With CryptoPro CSP:
# docker compose -f devops/compose/docker-compose.stella-ops.yml \
# -f devops/compose/docker-compose.compliance-russia.yml \
# -f devops/compose/docker-compose.cryptopro.yml up -d
# -f devops/compose/docker-compose.crypto-provider.cryptopro.yml up -d
#
# Cryptography:
# - GOST R 34.10-2012: Digital signature


@@ -1,119 +0,0 @@
# =============================================================================
# STELLA OPS - CRYPTO SIMULATION OVERLAY
# =============================================================================
# Universal crypto simulation service for testing sovereign crypto without
# licensed hardware or certified modules.
#
# This overlay provides the sim-crypto-service which simulates:
# - GOST R 34.10-2012 (Russia): GOST12-256, GOST12-512, ru.magma.sim, ru.kuznyechik.sim
# - SM2/SM3/SM4 (China): SM2, sm.sim, sm2.sim
# - Post-Quantum: DILITHIUM3, FALCON512, pq.sim
# - FIPS/eIDAS/KCMVP: fips.sim, eidas.sim, kcmvp.sim, world.sim
#
# Usage with China compliance:
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-china.yml \
# -f docker-compose.crypto-sim.yml up -d
#
# Usage with Russia compliance:
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-russia.yml \
# -f docker-compose.crypto-sim.yml up -d
#
# Usage with EU compliance:
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-eu.yml \
# -f docker-compose.crypto-sim.yml up -d
#
# IMPORTANT: This is for TESTING/DEVELOPMENT ONLY.
# - Uses deterministic HMAC-SHA256 for SM/GOST/PQ (not real algorithms)
# - Uses static ECDSA P-256 key for FIPS/eIDAS/KCMVP
# - NOT suitable for production or compliance certification
#
# =============================================================================
x-crypto-sim-labels: &crypto-sim-labels
com.stellaops.component: "crypto-sim"
com.stellaops.profile: "simulation"
com.stellaops.production: "false"
x-sim-crypto-env: &sim-crypto-env
STELLAOPS_CRYPTO_ENABLE_SIM: "1"
STELLAOPS_CRYPTO_SIM_URL: "http://sim-crypto:8080"
networks:
stellaops:
external: true
name: stellaops
services:
# ---------------------------------------------------------------------------
# Sim Crypto Service - Universal sovereign crypto simulator
# ---------------------------------------------------------------------------
sim-crypto:
build:
context: ../services/crypto/sim-crypto-service
dockerfile: Dockerfile
image: registry.stella-ops.org/stellaops/sim-crypto:dev
container_name: stellaops-sim-crypto
restart: unless-stopped
environment:
ASPNETCORE_URLS: "http://0.0.0.0:8080"
ASPNETCORE_ENVIRONMENT: "Development"
ports:
- "${SIM_CRYPTO_PORT:-18090}:8080"
networks:
- stellaops
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/keys"]
interval: 30s
timeout: 10s
retries: 3
start_period: 10s
labels: *crypto-sim-labels
# ---------------------------------------------------------------------------
# Override services to use sim-crypto
# ---------------------------------------------------------------------------
# Authority - Enable sim crypto
authority:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
# Signer - Enable sim crypto
signer:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
# Attestor - Enable sim crypto
attestor:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
# Scanner Web - Enable sim crypto
scanner-web:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
# Scanner Worker - Enable sim crypto
scanner-worker:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
# Excititor - Enable sim crypto
excititor:
environment:
<<: *sim-crypto-env
labels:
com.stellaops.crypto.simulator: "enabled"
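The overlay's warning notes that the simulator substitutes deterministic HMAC-SHA256 for the real SM/GOST/PQ algorithms. A hedged sketch of what such a stand-in signer could look like (illustrative only; `SIM_KEY` and the function shapes are assumptions, not the actual sim-crypto-service code):

```python
import hashlib
import hmac

SIM_KEY = b"stellaops-sim-only"  # hypothetical fixed key: deterministic for tests, worthless for security

def sim_sign(algorithm_id: str, payload: bytes) -> str:
    """Deterministic stand-in 'signature': HMAC-SHA256 keyed per algorithm ID."""
    return hmac.new(SIM_KEY + algorithm_id.encode(), payload, hashlib.sha256).hexdigest()

def sim_verify(algorithm_id: str, payload: bytes, signature: str) -> bool:
    return hmac.compare_digest(sim_sign(algorithm_id, payload), signature)

sig = sim_sign("sm2.sim", b"release-manifest")
assert sim_verify("sm2.sim", b"release-manifest", sig)
assert not sim_verify("GOST12-256", b"release-manifest", sig)  # algorithm ID is mixed into the key
```

Determinism is the point: tests get stable, repeatable "signatures" per algorithm ID, which is exactly why the output is useless for production or compliance certification.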


@@ -1,149 +0,0 @@
# =============================================================================
# STELLA OPS - CRYPTOPRO CSP OVERLAY (Russia)
# =============================================================================
# CryptoPro CSP licensed provider overlay for compliance-russia.yml.
# Adds real CryptoPro CSP service for certified GOST R 34.10-2012 operations.
#
# IMPORTANT: Requires EULA acceptance before use.
#
# Usage (MUST be combined with stella-ops AND compliance-russia):
# CRYPTOPRO_ACCEPT_EULA=1 docker compose \
# -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-russia.yml \
# -f docker-compose.cryptopro.yml up -d
#
# For development/testing without CryptoPro license, use crypto-sim.yml instead:
# docker compose \
# -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-russia.yml \
# -f docker-compose.crypto-sim.yml up -d
#
# Requirements:
# - CryptoPro CSP license files in opt/cryptopro/downloads/
# - CRYPTOPRO_ACCEPT_EULA=1 environment variable
# - CryptoPro container images with GOST engine
#
# GOST Algorithms Provided:
# - GOST R 34.10-2012: Digital signature (256/512-bit)
# - GOST R 34.11-2012: Hash function (Streebog, 256/512-bit)
# - GOST R 34.12-2015: Block cipher (Kuznyechik, Magma)
#
# =============================================================================
x-cryptopro-labels: &cryptopro-labels
com.stellaops.component: "cryptopro-csp"
com.stellaops.crypto.provider: "cryptopro"
com.stellaops.crypto.profile: "russia"
com.stellaops.crypto.certified: "true"
x-cryptopro-env: &cryptopro-env
STELLAOPS_CRYPTO_PROVIDERS: "cryptopro.gost"
STELLAOPS_CRYPTO_CRYPTOPRO_URL: "http://cryptopro-csp:8080"
STELLAOPS_CRYPTO_CRYPTOPRO_ENABLED: "true"
networks:
stellaops:
external: true
name: stellaops
services:
# ---------------------------------------------------------------------------
# CryptoPro CSP - Certified GOST cryptography provider
# ---------------------------------------------------------------------------
cryptopro-csp:
build:
context: ../..
dockerfile: devops/services/cryptopro/linux-csp-service/Dockerfile
args:
CRYPTOPRO_ACCEPT_EULA: "${CRYPTOPRO_ACCEPT_EULA:-0}"
image: registry.stella-ops.org/stellaops/cryptopro-csp:2025.10.0
container_name: stellaops-cryptopro-csp
restart: unless-stopped
environment:
ASPNETCORE_URLS: "http://0.0.0.0:8080"
CRYPTOPRO_ACCEPT_EULA: "${CRYPTOPRO_ACCEPT_EULA:-0}"
# GOST algorithm configuration
CRYPTOPRO_GOST_SIGNATURE_ALGORITHM: "GOST R 34.10-2012"
CRYPTOPRO_GOST_HASH_ALGORITHM: "GOST R 34.11-2012"
# Container and key store settings
CRYPTOPRO_CONTAINER_NAME: "${CRYPTOPRO_CONTAINER_NAME:-stellaops-signing}"
CRYPTOPRO_USE_MACHINE_STORE: "${CRYPTOPRO_USE_MACHINE_STORE:-true}"
CRYPTOPRO_PROVIDER_TYPE: "${CRYPTOPRO_PROVIDER_TYPE:-80}"
volumes:
- ../../opt/cryptopro/downloads:/opt/cryptopro/downloads:ro
- ../../etc/cryptopro:/app/etc/cryptopro:ro
# Optional: Mount key containers
- cryptopro-keys:/var/opt/cprocsp/keys
ports:
- "${CRYPTOPRO_PORT:-18080}:8080"
networks:
- stellaops
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
interval: 30s
timeout: 10s
retries: 3
start_period: 30s
labels: *cryptopro-labels
# ---------------------------------------------------------------------------
# Override services to use CryptoPro
# ---------------------------------------------------------------------------
# Authority - Use CryptoPro for GOST signatures
authority:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
# Signer - Use CryptoPro for GOST signatures
signer:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
# Attestor - Use CryptoPro for GOST signatures
attestor:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
# Scanner Web - Use CryptoPro for verification
scanner-web:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
# Scanner Worker - Use CryptoPro for verification
scanner-worker:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
# Excititor - Use CryptoPro for VEX signing
excititor:
environment:
<<: *cryptopro-env
depends_on:
- cryptopro-csp
labels:
com.stellaops.crypto.provider: "cryptopro"
volumes:
cryptopro-keys:
name: stellaops-cryptopro-keys
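Services consume this overlay purely through environment variables such as `STELLAOPS_CRYPTO_PROVIDERS`. A sketch of how a consumer might resolve its active provider from that configuration (only the variable names come from the overlays; the resolution order here is an assumption):

```python
def resolve_provider(env: dict) -> str:
    """Pick the active crypto provider for a service.

    Assumed order: an explicit provider list wins, then the simulator
    when enabled, then the platform default.
    """
    providers = [p.strip() for p in env.get("STELLAOPS_CRYPTO_PROVIDERS", "").split(",") if p.strip()]
    if providers:
        return providers[0]
    if env.get("STELLAOPS_CRYPTO_ENABLE_SIM") == "1":
        return "sim"
    return "default"

# Values as set by the CryptoPro and crypto-sim overlays above
assert resolve_provider({"STELLAOPS_CRYPTO_PROVIDERS": "cryptopro.gost"}) == "cryptopro.gost"
assert resolve_provider({"STELLAOPS_CRYPTO_ENABLE_SIM": "1"}) == "sim"
```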


@@ -433,7 +433,8 @@ services:
STELLAOPS_POLICY_ENGINE_URL: "http://policy-engine.stella-ops.local"
# STELLAOPS_POLICY_GATEWAY_URL removed: gateway merged into policy-engine
STELLAOPS_RISKENGINE_URL: "http://riskengine.stella-ops.local"
STELLAOPS_JOBENGINE_URL: "http://jobengine.stella-ops.local"
# STELLAOPS_JOBENGINE_URL removed: WebService retired; audit/first-signal now served by release-orchestrator
STELLAOPS_RELEASE_ORCHESTRATOR_URL: "http://release-orchestrator.stella-ops.local"
STELLAOPS_TASKRUNNER_URL: "http://taskrunner.stella-ops.local"
STELLAOPS_SCHEDULER_URL: "http://scheduler.stella-ops.local"
STELLAOPS_GRAPH_URL: "http://graph.stella-ops.local"
@@ -1414,57 +1415,26 @@ services:
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 23: Timeline Indexer ---------------------------------------------
timeline-indexer-web:
<<: *resources-light
image: stellaops/timeline-indexer-web:dev
container_name: stellaops-timeline-indexer-web
restart: unless-stopped
depends_on: *depends-infra
environment:
ASPNETCORE_URLS: "http://+:8080"
<<: [*kestrel-cert, *router-microservice-defaults, *gc-light]
ConnectionStrings__Default: *postgres-connection
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
TIMELINE_Postgres__Timeline__ConnectionString: *postgres-connection
Router__Enabled: "${TIMELINE_ROUTER_ENABLED:-true}"
Router__Messaging__ConsumerGroup: "timelineindexer"
volumes:
- *cert-volume
ports:
- "127.1.0.23:80:80"
networks:
stellaops:
aliases:
- timelineindexer.stella-ops.local
frontdoor: {}
healthcheck:
test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 23: Timeline Indexer (MERGED into timeline-web in Slot 24) --------
# timeline-indexer-web and timeline-indexer-worker have been merged into
# timeline-web. The indexer endpoints, DI services, and background ingestion
# worker now run inside the unified timeline-web container.
# Network alias timelineindexer.stella-ops.local is preserved on timeline-web
# for backwards compatibility.
timeline-indexer-worker:
<<: *resources-light
image: stellaops/timeline-indexer-worker:dev
container_name: stellaops-timeline-indexer-worker
restart: unless-stopped
depends_on: *depends-infra
environment:
<<: [*kestrel-cert, *gc-light]
ConnectionStrings__Default: *postgres-connection
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
TIMELINE_Postgres__Timeline__ConnectionString: *postgres-connection
volumes:
- *cert-volume
healthcheck:
<<: *healthcheck-worker
networks:
stellaops:
aliases:
- timeline-indexer-worker.stella-ops.local
labels: *release-labels
# timeline-indexer-web:
# <<: *resources-light
# image: stellaops/timeline-indexer-web:dev
# container_name: stellaops-timeline-indexer-web
# ...
# --- Slot 24: Timeline ----------------------------------------------------
# timeline-indexer-worker:
# <<: *resources-light
# image: stellaops/timeline-indexer-worker:dev
# container_name: stellaops-timeline-indexer-worker
# ...
# --- Slot 24: Timeline (unified: includes merged timeline-indexer) ----------
timeline-web:
<<: *resources-light
image: stellaops/timeline-web:dev
@@ -1481,6 +1451,7 @@ services:
Authority__ResourceServer__Audiences__0: ""
Authority__ResourceServer__BypassNetworks__0: "172.19.0.0/16"
Authority__ResourceServer__BypassNetworks__1: "172.20.0.0/16"
TIMELINE_Postgres__Timeline__ConnectionString: *postgres-connection
Router__Enabled: "${TIMELINE_SERVICE_ROUTER_ENABLED:-true}"
Router__Messaging__ConsumerGroup: "timeline"
volumes:
@@ -1491,6 +1462,7 @@ services:
stellaops:
aliases:
- timeline.stella-ops.local
- timelineindexer.stella-ops.local
frontdoor: {}
healthcheck:
test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]
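The `echo > /dev/tcp/...` healthcheck used above passes as soon as a TCP connect succeeds; no HTTP request is sent. The same probe expressed in Python, for reference:

```python
import socket

def tcp_healthy(host: str, port: int, timeout: float = 2.0) -> bool:
    """Equivalent of the compose healthcheck `bash -c 'echo > /dev/tcp/<host>/<port>'`:
    healthy means a TCP connect succeeds, nothing more."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Because the probe is connect-only, a container can pass this check while its HTTP endpoints are still failing; it is a liveness floor, not a readiness guarantee.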


@@ -263,11 +263,12 @@ services:
STELLAOPS_EXCITITOR_URL: "http://excititor.stella-ops.local"
STELLAOPS_VEXHUB_URL: "http://vexhub.stella-ops.local"
STELLAOPS_VEXLENS_URL: "http://vexlens.stella-ops.local"
STELLAOPS_VULNEXPLORER_URL: "http://vulnexplorer.stella-ops.local"
STELLAOPS_VULNEXPLORER_URL: "http://findings.stella-ops.local"
STELLAOPS_POLICY_ENGINE_URL: "http://policy-engine.stella-ops.local"
# STELLAOPS_POLICY_GATEWAY_URL removed: gateway merged into policy-engine
STELLAOPS_RISKENGINE_URL: "http://riskengine.stella-ops.local"
STELLAOPS_JOBENGINE_URL: "http://jobengine.stella-ops.local"
# STELLAOPS_JOBENGINE_URL removed: WebService retired; audit/first-signal now served by release-orchestrator
STELLAOPS_RELEASE_ORCHESTRATOR_URL: "http://release-orchestrator.stella-ops.local"
STELLAOPS_TASKRUNNER_URL: "http://taskrunner.stella-ops.local"
STELLAOPS_SCHEDULER_URL: "http://scheduler.stella-ops.local"
STELLAOPS_GRAPH_URL: "http://graph.stella-ops.local"
@@ -807,32 +808,33 @@ services:
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 13: VulnExplorer (api) [src/Findings/StellaOps.VulnExplorer.Api] ---
api:
<<: *resources-light
image: stellaops/api:dev
container_name: stellaops-api
restart: unless-stopped
environment:
ASPNETCORE_URLS: "http://+:8080"
<<: [*kestrel-cert, *router-microservice-defaults, *gc-light]
ConnectionStrings__Default: "${STELLAOPS_POSTGRES_CONNECTION}"
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
Router__Enabled: "${VULNEXPLORER_ROUTER_ENABLED:-true}"
Router__Messaging__ConsumerGroup: "vulnexplorer"
volumes:
- ${STELLAOPS_CERT_VOLUME}
ports:
- "127.1.0.13:80:80"
networks:
stellaops:
aliases:
- vulnexplorer.stella-ops.local
frontdoor: {}
healthcheck:
test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 13: VulnExplorer (api) - MERGED into findings-ledger-web (SPRINT_20260408_002) ---
# VulnExplorer endpoints are now served by the Findings Ledger WebService.
# api:
# <<: *resources-light
# image: stellaops/api:dev
# container_name: stellaops-api
# restart: unless-stopped
# environment:
# ASPNETCORE_URLS: "http://+:8080"
# <<: [*kestrel-cert, *router-microservice-defaults, *gc-light]
# ConnectionStrings__Default: "${STELLAOPS_POSTGRES_CONNECTION}"
# ConnectionStrings__Redis: "cache.stella-ops.local:6379"
# Router__Enabled: "${VULNEXPLORER_ROUTER_ENABLED:-true}"
# Router__Messaging__ConsumerGroup: "vulnexplorer"
# volumes:
# - ${STELLAOPS_CERT_VOLUME}
# ports:
# - "127.1.0.13:80:80"
# networks:
# stellaops:
# aliases:
# - vulnexplorer.stella-ops.local
# frontdoor: {}
# healthcheck:
# test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]
# <<: *healthcheck-tcp
# labels: *release-labels
# --- Slot 14: Policy Engine ------------------------------------------------
policy-engine:
@@ -1198,55 +1200,26 @@ services:
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 23: Timeline Indexer ---------------------------------------------
timeline-indexer-web:
<<: *resources-light
image: stellaops/timeline-indexer-web:dev
container_name: stellaops-timeline-indexer-web
restart: unless-stopped
environment:
ASPNETCORE_URLS: "http://+:8080"
<<: [*kestrel-cert, *router-microservice-defaults, *gc-light]
ConnectionStrings__Default: "${STELLAOPS_POSTGRES_CONNECTION}"
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
TIMELINE_Postgres__Timeline__ConnectionString: "${STELLAOPS_POSTGRES_CONNECTION}"
Router__Enabled: "${TIMELINE_ROUTER_ENABLED:-true}"
Router__Messaging__ConsumerGroup: "timelineindexer"
volumes:
- ${STELLAOPS_CERT_VOLUME}
ports:
- "127.1.0.23:80:80"
networks:
stellaops:
aliases:
- timelineindexer.stella-ops.local
frontdoor: {}
healthcheck:
test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]
<<: *healthcheck-tcp
labels: *release-labels
# --- Slot 23: Timeline Indexer (MERGED into timeline-web in Slot 24) --------
# timeline-indexer-web and timeline-indexer-worker have been merged into
# timeline-web. The indexer endpoints, DI services, and background ingestion
# worker now run inside the unified timeline-web container.
# Network alias timelineindexer.stella-ops.local is preserved on timeline-web
# for backwards compatibility.
timeline-indexer-worker:
<<: *resources-light
image: stellaops/timeline-indexer-worker:dev
container_name: stellaops-timeline-indexer-worker
restart: unless-stopped
environment:
<<: [*kestrel-cert, *gc-light]
ConnectionStrings__Default: "${STELLAOPS_POSTGRES_CONNECTION}"
ConnectionStrings__Redis: "cache.stella-ops.local:6379"
TIMELINE_Postgres__Timeline__ConnectionString: "${STELLAOPS_POSTGRES_CONNECTION}"
volumes:
- ${STELLAOPS_CERT_VOLUME}
healthcheck:
<<: *healthcheck-worker
networks:
stellaops:
aliases:
- timeline-indexer-worker.stella-ops.local
labels: *release-labels
# timeline-indexer-web:
# <<: *resources-light
# image: stellaops/timeline-indexer-web:dev
# container_name: stellaops-timeline-indexer-web
# ...
# --- Slot 24: Timeline ----------------------------------------------------
# timeline-indexer-worker:
# <<: *resources-light
# image: stellaops/timeline-indexer-worker:dev
# container_name: stellaops-timeline-indexer-worker
# ...
# --- Slot 24: Timeline (unified: includes merged timeline-indexer) ----------
timeline-web:
<<: *resources-light
image: stellaops/timeline-web:dev
@@ -1262,6 +1235,7 @@ services:
Authority__ResourceServer__Audiences__0: ""
Authority__ResourceServer__BypassNetworks__0: "172.19.0.0/16"
Authority__ResourceServer__BypassNetworks__1: "172.20.0.0/16"
TIMELINE_Postgres__Timeline__ConnectionString: "${STELLAOPS_POSTGRES_CONNECTION}"
Router__Enabled: "${TIMELINE_SERVICE_ROUTER_ENABLED:-true}"
Router__Messaging__ConsumerGroup: "timeline"
volumes:
@@ -1272,6 +1246,7 @@ services:
stellaops:
aliases:
- timeline.stella-ops.local
- timelineindexer.stella-ops.local
frontdoor: {}
healthcheck:
test: ["CMD-SHELL", "bash -c 'echo > /dev/tcp/$(hostname)/80'"]


@@ -7,12 +7,12 @@
# cp env/compliance-china.env.example .env
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-china.yml \
# -f docker-compose.crypto-sim.yml up -d
# -f docker-compose.crypto-provider.crypto-sim.yml up -d
#
# Usage with SM Remote (production):
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-china.yml \
# -f docker-compose.sm-remote.yml up -d
# -f docker-compose.crypto-provider.smremote.yml up -d
#
# =============================================================================


@@ -7,7 +7,7 @@
# cp env/compliance-eu.env.example .env
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-eu.yml \
# -f docker-compose.crypto-sim.yml up -d
# -f docker-compose.crypto-provider.crypto-sim.yml up -d
#
# Usage for production:
# docker compose -f docker-compose.stella-ops.yml \


@@ -7,12 +7,12 @@
# cp env/compliance-russia.env.example .env
# docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-russia.yml \
# -f docker-compose.crypto-sim.yml up -d
# -f docker-compose.crypto-provider.crypto-sim.yml up -d
#
# Usage with CryptoPro CSP (production):
# CRYPTOPRO_ACCEPT_EULA=1 docker compose -f docker-compose.stella-ops.yml \
# -f docker-compose.compliance-russia.yml \
# -f docker-compose.cryptopro.yml up -d
# -f docker-compose.crypto-provider.cryptopro.yml up -d
#
# =============================================================================


@@ -16,8 +16,8 @@
"Microservice","^/api/v1/lineage(.*)","http://sbomservice.stella-ops.local/api/v1/lineage$1",,
"Microservice","^/api/v1/resolve(.*)","http://binaryindex.stella-ops.local/api/v1/resolve$1",,
"Microservice","^/api/v1/ops/binaryindex(.*)","http://binaryindex.stella-ops.local/api/v1/ops/binaryindex$1",,
"Microservice","^/api/v1/policy(.*)","http://policy-gateway.stella-ops.local/api/v1/policy$1",,
"Microservice","^/api/v1/governance(.*)","http://policy-gateway.stella-ops.local/api/v1/governance$1",,
"Microservice","^/api/v1/policy(.*)","http://policy-engine.stella-ops.local/api/v1/policy$1",,
"Microservice","^/api/v1/governance(.*)","http://policy-engine.stella-ops.local/api/v1/governance$1",,
"Microservice","^/api/v1/determinization(.*)","http://policy-engine.stella-ops.local/api/v1/determinization$1",,
"Microservice","^/api/v1/workflows(.*)","http://orchestrator.stella-ops.local/api/v1/workflows$1",,
"Microservice","^/api/v1/authority/quotas(.*)","http://platform.stella-ops.local/api/v1/authority/quotas$1",,
@@ -28,7 +28,7 @@
"Microservice","^/api/v1/audit(.*)","http://timeline.stella-ops.local/api/v1/audit$1",,
"Microservice","^/api/v1/export(.*)","https://exportcenter.stella-ops.local/api/v1/export$1",,
"Microservice","^/api/v1/advisory-sources(.*)","http://concelier.stella-ops.local/api/v1/advisory-sources$1",,
"Microservice","^/api/v1/notifier/delivery(.*)","http://notifier.stella-ops.local/api/v2/notify/deliveries$1",,
"Microservice","^/api/v1/notifier/delivery(.*)","http://notify.stella-ops.local/api/v2/notify/deliveries$1",,
"Microservice","^/api/v1/search(.*)","http://advisoryai.stella-ops.local/v1/search$1",,
"Microservice","^/api/v1/advisory-ai(.*)","http://advisoryai.stella-ops.local/v1/advisory-ai$1",,
"Microservice","^/api/v1/advisory(.*)","http://advisoryai.stella-ops.local/api/v1/advisory$1",,
@@ -41,7 +41,7 @@
"Microservice","^/api/v2/integrations(.*)","http://platform.stella-ops.local/api/v2/integrations$1",,
"Microservice","^/api/v1/([^/]+)(.*)","http://$1.stella-ops.local/api/v1/$1$2",,
"Microservice","^/api/v2/([^/]+)(.*)","http://$1.stella-ops.local/api/v2/$1$2",,
"Microservice","^/api/(cvss|gate|exceptions|policy)(.*)","http://policy-gateway.stella-ops.local/api/$1$2",,
"Microservice","^/api/(cvss|gate|exceptions|policy)(.*)","http://policy-engine.stella-ops.local/api/$1$2",,
"Microservice","^/api/(risk|risk-budget)(.*)","http://policy-engine.stella-ops.local/api/$1$2",,
"Microservice","^/api/(release-orchestrator|releases|approvals)(.*)","http://jobengine.stella-ops.local/api/$1$2",,
"Microservice","^/api/(compare|change-traces|sbomservice)(.*)","http://sbomservice.stella-ops.local/api/$1$2",,
@@ -56,7 +56,7 @@
"Microservice","^/api/jobengine(.*)","http://orchestrator.stella-ops.local/api/jobengine$1",,
"Microservice","^/api/scheduler(.*)","http://scheduler.stella-ops.local/api/scheduler$1",,
"Microservice","^/api/doctor(.*)","http://doctor.stella-ops.local/api/doctor$1",,
"Microservice","^/policy(.*)","http://policy-gateway.stella-ops.local/policy$1",,
"Microservice","^/policy(.*)","http://policy-engine.stella-ops.local/policy$1",,
"Microservice","^/v1/evidence-packs(.*)","http://advisoryai.stella-ops.local/v1/evidence-packs$1",,
"Microservice","^/v1/runs(.*)","http://orchestrator.stella-ops.local/v1/runs$1",,
"Microservice","^/v1/advisory-ai(.*)","http://advisoryai.stella-ops.local/v1/advisory-ai$1",,
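Each route row is a regex rewrite: capture groups in the route path are substituted into the target (`$1` in the CSV corresponds to `\1` in Python). A sketch with three of the rows above, assuming first-match-wins ordering so the `([^/]+)` catch-all stays last:

```python
import re

# Three rows from the table above; `$1` in the CSV becomes `\1` here.
ROUTES = [
    (r"^/api/v1/policy(.*)", r"http://policy-engine.stella-ops.local/api/v1/policy\1"),
    (r"^/api/v1/notifier/delivery(.*)", r"http://notify.stella-ops.local/api/v2/notify/deliveries\1"),
    (r"^/api/v1/([^/]+)(.*)", r"http://\1.stella-ops.local/api/v1/\1\2"),  # catch-all: must stay last
]

def route(path: str):
    """Return the rewritten upstream URL for the first matching route, else None."""
    for pattern, target in ROUTES:
        if re.match(pattern, path):
            return re.sub(pattern, target, path, count=1)
    return None

assert route("/api/v1/policy/packs") == "http://policy-engine.stella-ops.local/api/v1/policy/packs"
assert route("/api/v1/findings/summaries") == "http://findings.stella-ops.local/api/v1/findings/summaries"
```

Note how the catch-all sends `/api/v1/<service>/...` to `<service>.stella-ops.local`, which is why specific rewrites like the policy-gateway-to-policy-engine change must be listed before it.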


@@ -3,13 +3,13 @@
"Microservice","/api/v1/vex","https://vexhub.stella-ops.local/api/v1/vex","/api/v1/vex/index","200"
"Microservice","/api/v1/vexlens","http://vexlens.stella-ops.local/api/v1/vexlens","/api/v1/vexlens/stats","200"
"Microservice","/api/v1/notify","http://notify.stella-ops.local/api/v1/notify","/api/v1/notify/audit","400"
"Microservice","/api/v1/notifier","http://notifier.stella-ops.local/api/v1/notifier",,
"Microservice","/api/v1/notifier","http://notify.stella-ops.local/api/v1/notifier",,
"Microservice","/api/v1/concelier","http://concelier.stella-ops.local/api/v1/concelier","/api/v1/concelier/bundles","200"
"Microservice","/api/v1/platform","http://platform.stella-ops.local/api/v1/platform","/api/v1/platform/search","400"
"Microservice","/api/v1/scanner","http://scanner.stella-ops.local/api/v1/scanner",,
"Microservice","/api/v1/findings","http://findings.stella-ops.local/api/v1/findings","/api/v1/findings/summaries","200"
"Microservice","/api/v1/integrations","http://integrations.stella-ops.local/api/v1/integrations","/api/v1/integrations","401"
"Microservice","/api/v1/policy","http://policy-gateway.stella-ops.local/api/v1/policy","/api/v1/policy/gate/health","200"
"Microservice","/api/v1/policy","http://policy-engine.stella-ops.local/api/v1/policy","/api/v1/policy/gate/health","200"
"Microservice","/api/v1/reachability","http://reachgraph.stella-ops.local/api/v1/reachability",,
"Microservice","/api/v1/attestor","http://attestor.stella-ops.local/api/v1/attestor","/api/v1/attestor/predicates","200"
"Microservice","/api/v1/attestations","http://attestor.stella-ops.local/api/v1/attestations","/api/v1/attestations","200"
@@ -33,7 +33,7 @@
"Microservice","/api/v1/lineage","http://sbomservice.stella-ops.local/api/v1/lineage","/api/v1/lineage/diff","400"
"Microservice","/api/v1/export","https://exportcenter.stella-ops.local/api/v1/export",,
"Microservice","/api/v1/triage","http://scanner.stella-ops.local/api/v1/triage","/api/v1/triage/inbox","401"
"Microservice","/api/v1/governance","http://policy-gateway.stella-ops.local/api/v1/governance","/api/v1/governance/audit/events","400"
"Microservice","/api/v1/governance","http://policy-engine.stella-ops.local/api/v1/governance","/api/v1/governance/audit/events","400"
"Microservice","/api/v1/determinization","http://policy-engine.stella-ops.local/api/v1/determinization",,
"Microservice","/api/v1/opsmemory","http://opsmemory.stella-ops.local/api/v1/opsmemory","/api/v1/opsmemory/stats","400"
"Microservice","/api/v1/secrets","http://scanner.stella-ops.local/api/v1/secrets","/api/v1/secrets/config/rules/categories","401"
@@ -45,20 +45,20 @@
"Microservice","/v1/advisory-ai/adapters","http://advisoryai.stella-ops.local/v1/advisory-ai/adapters","/","200"
"Microservice","/v1/advisory-ai","http://advisoryai.stella-ops.local/v1/advisory-ai","/v1/advisory-ai/consent","200"
"Microservice","/v1/audit-bundles","https://exportcenter.stella-ops.local/v1/audit-bundles","/v1/audit-bundles","200"
"Microservice","/policy","http://policy-gateway.stella-ops.local","/policyEngine","302"
"Microservice","/api/cvss","http://policy-gateway.stella-ops.local/api/cvss","/api/cvss/policies","401"
"Microservice","/api/policy","http://policy-gateway.stella-ops.local/api/policy","/api/policy/packs","401"
"Microservice","/policy","http://policy-engine.stella-ops.local","/policyEngine","302"
"Microservice","/api/cvss","http://policy-engine.stella-ops.local/api/cvss","/api/cvss/policies","401"
"Microservice","/api/policy","http://policy-engine.stella-ops.local/api/policy","/api/policy/packs","401"
"Microservice","/api/risk","http://policy-engine.stella-ops.local/api/risk","/api/risk/events","400"
"Microservice","/api/analytics","http://platform.stella-ops.local/api/analytics","/api/analytics/backlog","400"
"Microservice","/api/release-orchestrator","http://orchestrator.stella-ops.local/api/release-orchestrator","/api/release-orchestrator/releases","200"
"Microservice","/api/releases","http://orchestrator.stella-ops.local/api/releases",,
"Microservice","/api/approvals","http://orchestrator.stella-ops.local/api/approvals",,
"Microservice","/api/gate","http://policy-gateway.stella-ops.local/api/gate",,
"Microservice","/api/gate","http://policy-engine.stella-ops.local/api/gate",,
"Microservice","/api/risk-budget","http://policy-engine.stella-ops.local/api/risk-budget",,
"Microservice","/api/fix-verification","http://scanner.stella-ops.local/api/fix-verification",,
"Microservice","/api/compare","http://sbomservice.stella-ops.local/api/compare",,
"Microservice","/api/change-traces","http://sbomservice.stella-ops.local/api/change-traces",,
"Microservice","/api/exceptions","http://policy-gateway.stella-ops.local/api/exceptions",,
"Microservice","/api/exceptions","http://policy-engine.stella-ops.local/api/exceptions",,
"Microservice","/api/verdicts","https://evidencelocker.stella-ops.local/api/verdicts",,
"Microservice","/api/orchestrator","http://orchestrator.stella-ops.local/api/orchestrator",,
"Microservice","/api/v1/gateway/rate-limits","http://platform.stella-ops.local/api/v1/gateway/rate-limits","/api/v1/gateway/rate-limits","400"
@@ -76,12 +76,12 @@
"Microservice","/authority","https://authority.stella-ops.local/authority","/authority/audit/airgap","401"
"Microservice","/console","https://authority.stella-ops.local/console","/console/filters","401"
"Microservice","/scanner","http://scanner.stella-ops.local","/scanner/api/v1/agents","401"
"Microservice","/policyGateway","http://policy-gateway.stella-ops.local","/policyGateway","302"
"Microservice","/policyGateway","http://policy-engine.stella-ops.local","/policyGateway","302"
"Microservice","/policyEngine","http://policy-engine.stella-ops.local","/policyEngine","302"
"Microservice","/concelier","http://concelier.stella-ops.local","/concelier/jobs","200"
"Microservice","/attestor","http://attestor.stella-ops.local","/attestor/api/v1/bundles","400"
"Microservice","/notify","http://notify.stella-ops.local","/notify/api/v1/notify/audit","400"
"Microservice","/notifier","http://notifier.stella-ops.local","/notifier/api/v2/ack","400"
"Microservice","/notifier","http://notify.stella-ops.local","/notifier/api/v2/ack","400"
"Microservice","/scheduler","http://scheduler.stella-ops.local","/scheduler/graphs/jobs","401"
"Microservice","/signals","http://signals.stella-ops.local","/signals/signals/ping","403"
"Microservice","/excititor","http://excititor.stella-ops.local","/excititor/vex/raw","400"

View File

@@ -3,13 +3,13 @@
"ReverseProxy","/api/v1/vex","https://vexhub.stella-ops.local/api/v1/vex","/api/v1/vex/index","200"
"ReverseProxy","/api/v1/vexlens","http://vexlens.stella-ops.local/api/v1/vexlens","/api/v1/vexlens/stats","200"
"ReverseProxy","/api/v1/notify","http://notify.stella-ops.local/api/v1/notify","/api/v1/notify/audit","400"
"ReverseProxy","/api/v1/notifier","http://notifier.stella-ops.local/api/v1/notifier",,
"ReverseProxy","/api/v1/notifier","http://notify.stella-ops.local/api/v1/notifier",,
"ReverseProxy","/api/v1/concelier","http://concelier.stella-ops.local/api/v1/concelier","/api/v1/concelier/bundles","200"
"ReverseProxy","/api/v1/platform","http://platform.stella-ops.local/api/v1/platform","/api/v1/platform/search","401"
"ReverseProxy","/api/v1/scanner","http://scanner.stella-ops.local/api/v1/scanner",,
"ReverseProxy","/api/v1/findings","http://findings.stella-ops.local/api/v1/findings","/api/v1/findings/summaries","401"
"ReverseProxy","/api/v1/integrations","http://integrations.stella-ops.local/api/v1/integrations","/api/v1/integrations","200"
"ReverseProxy","/api/v1/policy","http://policy-gateway.stella-ops.local/api/v1/policy","/api/v1/policy/schema","404"
"ReverseProxy","/api/v1/policy","http://policy-engine.stella-ops.local/api/v1/policy","/api/v1/policy/schema","404"
"ReverseProxy","/api/v1/reachability","http://reachgraph.stella-ops.local/api/v1/reachability",,
"ReverseProxy","/api/v1/attestor","http://attestor.stella-ops.local/api/v1/attestor","/api/v1/attestor/policies","404"
"ReverseProxy","/api/v1/attestations","http://attestor.stella-ops.local/api/v1/attestations","/api/v1/attestations","401"
@@ -33,7 +33,7 @@
"ReverseProxy","/api/v1/lineage","http://sbomservice.stella-ops.local/api/v1/lineage","/api/v1/lineage/diff","400"
"ReverseProxy","/api/v1/export","https://exportcenter.stella-ops.local/api/v1/export","/api/v1/export/jobs","401"
"ReverseProxy","/api/v1/triage","http://scanner.stella-ops.local/api/v1/triage","/api/v1/triage/inbox","400"
"ReverseProxy","/api/v1/governance","http://policy-gateway.stella-ops.local/api/v1/governance","/api/v1/governance/audit/events","400"
"ReverseProxy","/api/v1/governance","http://policy-engine.stella-ops.local/api/v1/governance","/api/v1/governance/audit/events","400"
"ReverseProxy","/api/v1/determinization","http://policy-engine.stella-ops.local/api/v1/determinization",,
"ReverseProxy","/api/v1/opsmemory","http://opsmemory.stella-ops.local/api/v1/opsmemory","/api/v1/opsmemory/stats","400"
"ReverseProxy","/api/v1/secrets","http://scanner.stella-ops.local/api/v1/secrets","/api/v1/secrets/config/rules/categories","200"
@@ -45,20 +45,20 @@
"ReverseProxy","/v1/advisory-ai/adapters","http://advisoryai.stella-ops.local/v1/advisory-ai/adapters","/","200"
"ReverseProxy","/v1/advisory-ai","http://advisoryai.stella-ops.local/v1/advisory-ai","/v1/advisory-ai/consent","200"
"ReverseProxy","/v1/audit-bundles","https://exportcenter.stella-ops.local/v1/audit-bundles","/v1/audit-bundles","200"
"ReverseProxy","/policy","http://policy-gateway.stella-ops.local","/policy/snapshots","404"
"ReverseProxy","/api/cvss","http://policy-gateway.stella-ops.local/api/cvss","/api/cvss/policies","401"
"ReverseProxy","/api/policy","http://policy-gateway.stella-ops.local/api/policy","/api/policy/packs","401"
"ReverseProxy","/policy","http://policy-engine.stella-ops.local","/policy/snapshots","404"
"ReverseProxy","/api/cvss","http://policy-engine.stella-ops.local/api/cvss","/api/cvss/policies","401"
"ReverseProxy","/api/policy","http://policy-engine.stella-ops.local/api/policy","/api/policy/packs","401"
"ReverseProxy","/api/risk","http://policy-engine.stella-ops.local/api/risk","/api/risk/events","401"
"ReverseProxy","/api/analytics","http://platform.stella-ops.local/api/analytics","/api/analytics/backlog","401"
"ReverseProxy","/api/release-orchestrator","http://orchestrator.stella-ops.local/api/release-orchestrator","/api/release-orchestrator/releases","200"
"ReverseProxy","/api/releases","http://orchestrator.stella-ops.local/api/releases",,
"ReverseProxy","/api/approvals","http://orchestrator.stella-ops.local/api/approvals",,
"ReverseProxy","/api/gate","http://policy-gateway.stella-ops.local/api/gate",,
"ReverseProxy","/api/gate","http://policy-engine.stella-ops.local/api/gate",,
"ReverseProxy","/api/risk-budget","http://policy-engine.stella-ops.local/api/risk-budget",,
"ReverseProxy","/api/fix-verification","http://scanner.stella-ops.local/api/fix-verification",,
"ReverseProxy","/api/compare","http://sbomservice.stella-ops.local/api/compare",,
"ReverseProxy","/api/change-traces","http://sbomservice.stella-ops.local/api/change-traces",,
"ReverseProxy","/api/exceptions","http://policy-gateway.stella-ops.local/api/exceptions",,
"ReverseProxy","/api/exceptions","http://policy-engine.stella-ops.local/api/exceptions",,
"ReverseProxy","/api/verdicts","https://evidencelocker.stella-ops.local/api/verdicts",,
"ReverseProxy","/api/orchestrator","http://orchestrator.stella-ops.local/api/orchestrator",,
"ReverseProxy","/api/v1/gateway/rate-limits","http://platform.stella-ops.local/api/v1/gateway/rate-limits","/api/v1/gateway/rate-limits","401"
@@ -79,12 +79,12 @@
"ReverseProxy","/rekor","http://rekor.stella-ops.local:3322",,
"ReverseProxy","/envsettings.json","http://platform.stella-ops.local/platform/envsettings.json","/","200"
"ReverseProxy","/scanner","http://scanner.stella-ops.local",,
"ReverseProxy","/policyGateway","http://policy-gateway.stella-ops.local",,
"ReverseProxy","/policyGateway","http://policy-engine.stella-ops.local",,
"ReverseProxy","/policyEngine","http://policy-engine.stella-ops.local",,
"ReverseProxy","/concelier","http://concelier.stella-ops.local","/concelier/observations","404"
"ReverseProxy","/attestor","http://attestor.stella-ops.local",,
"ReverseProxy","/notify","http://notify.stella-ops.local",,
"ReverseProxy","/notifier","http://notifier.stella-ops.local",,
"ReverseProxy","/notifier","http://notify.stella-ops.local",,
"ReverseProxy","/scheduler","http://scheduler.stella-ops.local",,
"ReverseProxy","/signals","http://signals.stella-ops.local","/signals/ping","404"
"ReverseProxy","/excititor","http://excititor.stella-ops.local","/excititor/status","404"

View File

@@ -86,7 +86,7 @@
{ "Type": "Microservice", "Path": "^/api/v1/gateway/rate-limits(.*)", "IsRegex": true, "TranslatesTo": "http://platform.stella-ops.local/api/v1/gateway/rate-limits$1" },
{ "Type": "Microservice", "Path": "^/api/v1/jobengine/quotas(.*)", "IsRegex": true, "TranslatesTo": "http://platform.stella-ops.local/api/v1/jobengine/quotas$1" },
{ "Type": "Microservice", "Path": "^/api/v1/reachability(.*)", "IsRegex": true, "TranslatesTo": "http://reachgraph.stella-ops.local/api/v1/reachability$1" },
{ "Type": "Microservice", "Path": "^/api/v1/timeline(.*)", "IsRegex": true, "TranslatesTo": "http://timelineindexer.stella-ops.local/api/v1/timeline$1" },
{ "Type": "Microservice", "Path": "^/api/v1/timeline(.*)", "IsRegex": true, "TranslatesTo": "http://timeline.stella-ops.local/api/v1/timeline$1" },
{ "Type": "Microservice", "Path": "^/api/v1/audit(.*)", "IsRegex": true, "TranslatesTo": "http://timeline.stella-ops.local/api/v1/audit$1" },
{ "Type": "Microservice", "Path": "^/api/v1/export(.*)", "IsRegex": true, "TranslatesTo": "https://exportcenter.stella-ops.local/api/v1/export$1" },
{ "Type": "Microservice", "Path": "^/api/v1/advisory-sources(.*)", "IsRegex": true, "TranslatesTo": "http://concelier.stella-ops.local/api/v1/advisory-sources$1" },

View File

@@ -188,7 +188,7 @@ server {
# Policy gateway (strips /policy/ prefix, regex avoids colliding with
# Angular /policy/exceptions, /policy/packs SPA routes)
location ~ ^/policy/(api|v[0-9]+|shadow)/ {
-set \$policy_upstream http://policy-gateway.stella-ops.local;
+set \$policy_upstream http://policy-engine.stella-ops.local;
rewrite ^/policy/(.*)\$ /\$1 break;
proxy_pass \$policy_upstream;
proxy_set_header Host \$host;
@@ -314,7 +314,7 @@ server {
sub_filter '"http://platform.stella-ops.local"' '"/platform"';
sub_filter '"http://authority.stella-ops.local"' '"/authority"';
sub_filter '"http://scanner.stella-ops.local"' '"/scanner"';
sub_filter '"http://policy-gateway.stella-ops.local"' '"/policy"';
sub_filter '"http://policy-engine.stella-ops.local"' '"/policy"';
sub_filter '"http://concelier.stella-ops.local"' '"/concelier"';
sub_filter '"http://attestor.stella-ops.local"' '"/attestor"';
sub_filter '"http://notify.stella-ops.local"' '"/notify"';
@@ -371,7 +371,7 @@ server {
sub_filter '"http://platform.stella-ops.local"' '"/platform"';
sub_filter '"http://authority.stella-ops.local"' '"/authority"';
sub_filter '"http://scanner.stella-ops.local"' '"/scanner"';
sub_filter '"http://policy-gateway.stella-ops.local"' '"/policy"';
sub_filter '"http://policy-engine.stella-ops.local"' '"/policy"';
sub_filter '"http://concelier.stella-ops.local"' '"/concelier"';
sub_filter '"http://attestor.stella-ops.local"' '"/attestor"';
sub_filter '"http://notify.stella-ops.local"' '"/notify"';

View File

@@ -37,7 +37,7 @@ server {
sub_filter '"http://platform.stella-ops.local"' '"/platform"';
sub_filter '"http://authority.stella-ops.local"' '"/authority"';
sub_filter '"http://scanner.stella-ops.local"' '"/scanner"';
sub_filter '"http://policy-gateway.stella-ops.local"' '"/policy"';
sub_filter '"http://policy-engine.stella-ops.local"' '"/policy"';
sub_filter '"http://concelier.stella-ops.local"' '"/concelier"';
sub_filter '"http://attestor.stella-ops.local"' '"/attestor"';
sub_filter '"http://notify.stella-ops.local"' '"/notify"';
@@ -144,7 +144,7 @@ server {
# Policy gateway
location ~ ^/policy/(api|v[0-9]+)/ {
-set $policy_upstream http://policy-gateway.stella-ops.local;
+set $policy_upstream http://policy-engine.stella-ops.local;
rewrite ^/policy/(.*)$ /$1 break;
proxy_pass $policy_upstream;
}
@@ -408,7 +408,7 @@ server {
sub_filter '"http://platform.stella-ops.local"' '"/platform"';
sub_filter '"http://authority.stella-ops.local"' '"/authority"';
sub_filter '"http://scanner.stella-ops.local"' '"/scanner"';
sub_filter '"http://policy-gateway.stella-ops.local"' '"/policy"';
sub_filter '"http://policy-engine.stella-ops.local"' '"/policy"';
sub_filter '"http://concelier.stella-ops.local"' '"/concelier"';
sub_filter '"http://attestor.stella-ops.local"' '"/attestor"';
sub_filter '"http://notify.stella-ops.local"' '"/notify"';

View File

@@ -98,7 +98,7 @@ server {
# Policy gateway (strips /policy/ prefix, regex avoids colliding with
# Angular /policy/exceptions, /policy/packs SPA routes)
location ~ ^/policy/(api|v[0-9]+)/ {
-set $policy_upstream http://policy-gateway.stella-ops.local;
+set $policy_upstream http://policy-engine.stella-ops.local;
rewrite ^/policy/(.*)$ /$1 break;
proxy_pass $policy_upstream;
proxy_set_header Host $host;
@@ -208,7 +208,7 @@ server {
sub_filter '"http://platform.stella-ops.local"' '"/platform"';
sub_filter '"http://authority.stella-ops.local"' '"/authority"';
sub_filter '"http://scanner.stella-ops.local"' '"/scanner"';
sub_filter '"http://policy-gateway.stella-ops.local"' '"/policy"';
sub_filter '"http://policy-engine.stella-ops.local"' '"/policy"';
sub_filter '"http://concelier.stella-ops.local"' '"/concelier"';
sub_filter '"http://attestor.stella-ops.local"' '"/attestor"';
sub_filter '"http://notify.stella-ops.local"' '"/notify"';

View File

@@ -52,10 +52,10 @@ graph-api|devops/docker/Dockerfile.hardened.template|src/Graph/StellaOps.Graph.A
cartographer|devops/docker/Dockerfile.hardened.template|src/Scanner/StellaOps.Scanner.Cartographer/StellaOps.Scanner.Cartographer.csproj|StellaOps.Scanner.Cartographer|8080
# ── Slot 22: ReachGraph ─────────────────────────────────────────────────────────
reachgraph-web|devops/docker/Dockerfile.hardened.template|src/ReachGraph/StellaOps.ReachGraph.WebService/StellaOps.ReachGraph.WebService.csproj|StellaOps.ReachGraph.WebService|8080
-# ── Slot 23: Timeline Indexer ───────────────────────────────────────────────────
-timeline-indexer-web|devops/docker/Dockerfile.hardened.template|src/Timeline/StellaOps.TimelineIndexer.WebService/StellaOps.TimelineIndexer.WebService.csproj|StellaOps.TimelineIndexer.WebService|8080
-timeline-indexer-worker|devops/docker/Dockerfile.hardened.template|src/Timeline/StellaOps.TimelineIndexer.Worker/StellaOps.TimelineIndexer.Worker.csproj|StellaOps.TimelineIndexer.Worker|8080
-# ── Slot 24: Timeline ───────────────────────────────────────────────────────────
+# ── Slot 23: Timeline Indexer (MERGED into timeline-web in Slot 24) ────────────
+# timeline-indexer-web|devops/docker/Dockerfile.hardened.template|src/Timeline/StellaOps.TimelineIndexer.WebService/StellaOps.TimelineIndexer.WebService.csproj|StellaOps.TimelineIndexer.WebService|8080
+# timeline-indexer-worker|devops/docker/Dockerfile.hardened.template|src/Timeline/StellaOps.TimelineIndexer.Worker/StellaOps.TimelineIndexer.Worker.csproj|StellaOps.TimelineIndexer.Worker|8080
+# ── Slot 24: Timeline (unified: includes merged timeline-indexer) ──────────────
timeline-web|devops/docker/Dockerfile.hardened.template|src/Timeline/StellaOps.Timeline.WebService/StellaOps.Timeline.WebService.csproj|StellaOps.Timeline.WebService|8080
# ── Slot 25: Findings Ledger ────────────────────────────────────────────────────
findings-ledger-web|devops/docker/Dockerfile.hardened.template|src/Findings/StellaOps.Findings.Ledger.WebService/StellaOps.Findings.Ledger.WebService.csproj|StellaOps.Findings.Ledger.WebService|8080

View File

@@ -194,9 +194,9 @@ For offline bundles, imports, and update workflows, see:
| Region | Testing | Production |
|--------|---------|------------|
-| China (SM2/SM3/SM4) | `docker-compose.compliance-china.yml` + `docker-compose.crypto-sim.yml` | `docker-compose.compliance-china.yml` + `docker-compose.sm-remote.yml` |
-| Russia (GOST) | `docker-compose.compliance-russia.yml` + `docker-compose.crypto-sim.yml` | `docker-compose.compliance-russia.yml` + `docker-compose.cryptopro.yml` |
-| EU (eIDAS) | `docker-compose.compliance-eu.yml` + `docker-compose.crypto-sim.yml` | `docker-compose.compliance-eu.yml` |
+| China (SM2/SM3/SM4) | `docker-compose.compliance-china.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-china.yml` + `docker-compose.crypto-provider.smremote.yml` |
+| Russia (GOST) | `docker-compose.compliance-russia.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-russia.yml` + `docker-compose.crypto-provider.cryptopro.yml` |
+| EU (eIDAS) | `docker-compose.compliance-eu.yml` + `docker-compose.crypto-provider.crypto-sim.yml` | `docker-compose.compliance-eu.yml` |
See `devops/compose/README.md` for detailed compliance deployment instructions.
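The region-to-overlay mapping in the table above can be captured in a small helper when scripting deployments. This is an illustrative sketch only: the `crypto_overlay` function and its region/mode arguments are assumptions, not repo tooling.

```shell
# Illustrative overlay picker for the compliance matrix above.
# Usage: crypto_overlay <region> <test|prod>  (function name is an assumption)
crypto_overlay() {
  case "$1:$2" in
    china:prod)  echo docker-compose.crypto-provider.smremote.yml ;;
    russia:prod) echo docker-compose.crypto-provider.cryptopro.yml ;;
    eu:prod)     echo "" ;;  # EU production runs without a crypto overlay
    *:test)      echo docker-compose.crypto-provider.crypto-sim.yml ;;
  esac
}
crypto_overlay china prod    # -> docker-compose.crypto-provider.smremote.yml
crypto_overlay russia test   # -> docker-compose.crypto-provider.crypto-sim.yml
```

The returned filename would then be passed as an extra `-f` argument alongside the matching `docker-compose.compliance-*.yml` base file.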

View File

@@ -12,7 +12,7 @@ Dedicated remote service for Chinese SM2/SM3/SM4 cryptographic operations, runni
## Implementation Details
- **Service Entry Point**: `src/SmRemote/StellaOps.SmRemote.Service/Program.cs` -- ASP.NET Core minimal API service exposing `/status`, `/health`, `/sign`, `/verify`, `/hash`, `/encrypt`, and `/decrypt`.
- **SmRemote Integration Tests**: `src/SmRemote/__Tests/StellaOps.SmRemote.Service.Tests/SmRemoteServiceApiTests.cs` -- endpoint-level integration coverage for positive and negative paths.
-- **Docker Compose Overlay**: `devops/compose/docker-compose.sm-remote.yml` -- overlay configuration for running SM Remote alongside the base platform compose stack.
+- **Docker Compose Overlay**: `devops/compose/docker-compose.crypto-provider.smremote.yml` -- overlay configuration for running SM Remote alongside the base platform compose stack.
## E2E Test Plan
- [x] Start the SM Remote service and verify `/health` and `/status` return success responses.
@@ -20,7 +20,7 @@ Dedicated remote service for Chinese SM2/SM3/SM4 cryptographic operations, runni
- [x] Submit an SM2 signing request and verify the returned signature via `/verify`.
- [x] Submit an SM4 encryption request, then decrypt the ciphertext via `/decrypt`, and verify the round-trip matches the original plaintext.
- [x] Verify negative-path validation for invalid hash payloads, invalid SM4 key lengths, and invalid sign input (HTTP 400 responses).
-- [x] Confirm compose overlay contract remains documented for alongside-platform deployment (`devops/compose/docker-compose.sm-remote.yml`).
+- [x] Confirm compose overlay contract remains documented for alongside-platform deployment (`devops/compose/docker-compose.crypto-provider.smremote.yml`).
## Verification
- Verified on 2026-02-11 via FLOW Tier 0/1/2 replay in `run-005`.


@@ -0,0 +1,569 @@
# Sprint 20260408-002 - VulnExplorer Persistence Migration + Merge into Findings Ledger
## Topic & Scope
- Two-phase plan: first migrate VulnExplorer from in-memory ConcurrentDictionary stores to Postgres, then merge it into the Findings Ledger WebService.
- Phase 1 (Sprint 1) eliminates all in-memory data stores and SampleData in VulnExplorer by introducing a persistence layer with SQL migrations, while VulnExplorer continues to run as its own service. This makes the data durable and tests the schema before the merge.
- Phase 2 (Sprint 2) moves VulnExplorer's endpoint surface into Ledger WebService as projections, wires VEX decisions and fix verifications as Ledger event types, removes the VulnExplorer container, and updates all consumers.
- Working directory: `src/Findings/`
- Expected evidence: all VulnExplorer endpoints backed by Postgres (Phase 1), then accessible via Ledger WebService with no separate container (Phase 2), existing tests pass, new integration tests cover persistence and merged endpoints.
## Analysis Summary (Decision Record)
### Why two phases instead of one
The original single-sprint plan assumed VulnExplorer's in-memory stores could be directly replaced by Ledger projections in one step. However:
1. VulnExplorer has five distinct in-memory stores (`SampleData`, `VexDecisionStore`, `FixVerificationStore`, `AuditBundleStore`, `EvidenceSubgraphStore`) with ConcurrentDictionary-based state and complex business logic (VEX override attestation flow, fix verification state machine, audit bundle aggregation).
2. Migrating persistence and merging service boundaries simultaneously creates too many failure modes -- schema issues mask merge issues and vice versa.
3. Phase 1 gives us a working VulnExplorer with real Postgres persistence that can be validated independently before the merge destabilizes routing and API contracts.
4. Phase 1 also validates the data model against the Ledger schema, ensuring the Phase 2 projection mapping is sound.
### Store-to-persistence mapping
| VulnExplorer Store | Phase 1 (Own Tables) | Phase 2 (Ledger Equivalent) |
|---|---|---|
| `SampleData` (VulnSummary/VulnDetail) | `vulnexplorer.vulnerabilities` table | `findings_projection` table + `VulnerabilityDetailService` + `FindingSummaryService` |
| `VexDecisionStore` | `vulnexplorer.vex_decisions` table | Ledger events (`finding.vex_decision_created/updated`) + `observations` table + `ledger_attestation_pointers` |
| `FixVerificationStore` | `vulnexplorer.fix_verifications` table | Ledger events (`finding.fix_verification_created/updated`) + `observations` table |
| `AuditBundleStore` | `vulnexplorer.audit_bundles` table | `EvidenceBundleService` + `OrchestratorExportService` |
| `EvidenceSubgraphStore` | Delegates to `EvidenceGraphBuilder` via HTTP/internal call | `EvidenceGraphBuilder` + `EvidenceGraphEndpoints` (real persistence-backed graph) |
### Key codebase facts informing this plan
**In-memory stores identified (all `ConcurrentDictionary`):**
- `VexDecisionStore` (`src/Findings/StellaOps.VulnExplorer.Api/Data/VexDecisionStore.cs`) -- 244 lines, includes `CreateWithAttestationAsync`/`UpdateWithAttestationAsync` with `IVexOverrideAttestorClient` integration
- `FixVerificationStore` (`src/Findings/StellaOps.VulnExplorer.Api/Data/TriageWorkflowStores.cs`) -- state machine with transitions
- `AuditBundleStore` (same file) -- sequential ID generation, evidence ref aggregation
- `EvidenceSubgraphStore` (same file) -- returns a hardcoded graph structure
- `SampleData` (`src/Findings/StellaOps.VulnExplorer.Api/Data/SampleData.cs`) -- two hardcoded VulnSummary/VulnDetail records
**UI consumers (must preserve API shape):**
- `src/Web/StellaOps.Web/src/app/core/api/vex-decisions.client.ts` -- calls `GET/POST/PATCH /v1/vex-decisions` via `VEX_DECISIONS_API_BASE_URL`
- `src/Web/StellaOps.Web/src/app/core/api/audit-bundles.client.ts` -- calls `GET/POST /v1/audit-bundles` via `AUDIT_BUNDLES_API_BASE_URL`
- `src/Web/StellaOps.Web/src/app/features/vuln-explorer/services/evidence-subgraph.service.ts` -- calls `/api/vuln-explorer/findings/{id}/evidence-subgraph`
- `src/Web/StellaOps.Web/src/app/features/triage/services/vulnerability-list.service.ts` -- calls `/api/v1/vulnerabilities`
- `src/Web/StellaOps.Web/src/app/features/vulnerabilities/vulnerability-detail.component.ts` -- consumes VulnExplorer data
- `src/Web/StellaOps.Web/src/tests/vuln_explorer/` -- behavioral specs for evidence tree and filter presets
- `src/Web/StellaOps.Web/tests/e2e/triage-explainability-workspace.spec.ts` -- E2E test
**Cross-service consumers:**
- `src/VexLens/StellaOps.VexLens/Integration/IVulnExplorerIntegration` + `VulnExplorerIntegration` -- VexLens enriches vulnerabilities with VEX consensus data via this interface
- `src/Concelier/StellaOps.Concelier.Core/Diagnostics/VulnExplorerTelemetry.cs` -- telemetry meter `StellaOps.Concelier.VulnExplorer` for advisory processing metrics
- `src/Concelier/StellaOps.Concelier.WebService/Program.cs` -- calls `VulnExplorerTelemetry` methods during advisory ingest
- `src/Authority/StellaOps.Auth.Abstractions/StellaOpsServiceIdentities.cs` -- defines `VulnExplorer = "vuln-explorer"` service identity
**Infrastructure references (Phase 2 removal scope):**
- `devops/compose/docker-compose.stella-ops.yml` -- vulnexplorer container with alias `vulnexplorer.stella-ops.local`
- `devops/compose/docker-compose.stella-services.yml` -- vulnexplorer service definition, `Router__Messaging__ConsumerGroup: "vulnexplorer"`
- `devops/compose/router-gateway-local.json` -- route `^/api/vuln-explorer(.*)` -> `http://vulnexplorer.stella-ops.local/api/vuln-explorer$1`
- `devops/compose/envsettings-override.json` -- `apiBaseUrls.vulnexplorer`
- `devops/compose/hosts.stellaops.local` -- hostname entry
- `devops/helm/stellaops/values.yaml` -- no vulnexplorer entry found (Helm clean)
- `devops/helm/stellaops/templates/vuln-mock.yaml` -- mock deployment template
**Documentation references:**
- `docs/technical/architecture/webservice-catalog.md`
- `docs/technical/architecture/port-registry.md`
- `docs/technical/architecture/component-map.md`
- `docs/technical/architecture/module-matrix.md`
- `docs/modules/findings-ledger/README.md`
- `docs/modules/web/README.md`
- `docs/modules/ui/README.md`
- `docs/modules/ui/architecture.md`
- `docs/modules/ui/component-preservation-map/` (dead components under `vuln-explorer/`)
- `docs/modules/vex-lens/guides/explorer-integration.md`
- `docs/modules/authority/AUTHORITY.md`
- `docs/API_CLI_REFERENCE.md`
- `docs/features/checked/vulnexplorer/vulnexplorer-triage-api.md`
- `docs/features/checked/web/vuln-explorer-with-evidence-tree-and-citation-links.md`
- `docs/features/checked/web/filter-preset-pills-with-url-synchronization.md`
- `docs/operations/runbooks/vuln-ops.md`
- `docs/qa/feature-checks/state/vulnexplorer.json`
- `docs/dev/DEV_ENVIRONMENT_SETUP.md`
- `docs/dev/SOLUTION_BUILD_GUIDE.md`
**Existing test projects:**
- `src/Findings/__Tests/StellaOps.VulnExplorer.Api.Tests/` -- `VulnApiTests.cs` (4 unit tests), `VulnExplorerTriageApiE2ETests.cs` (5 integration tests covering VEX decisions, attestation, evidence subgraph, fix verification, audit bundles)
## Dependencies & Concurrency
- No upstream sprint dependencies.
- The VEX override attestation flow depends on `IVexOverrideAttestorClient` which calls the Attestor service -- this integration is preserved as-is in both phases.
- Phase 1 tasks (VXPM-*) can run in parallel: VXPM-001/002/003 are independent. VXPM-004 depends on all three. VXPM-005 depends on VXPM-004.
- Phase 2 tasks (VXLM-*) depend on Phase 1 completion. VXLM-001/002 are independent. VXLM-003 depends on both. VXLM-004/005 depend on VXLM-003.
## Documentation Prerequisites
- `docs/modules/findings-ledger/schema.md` (Ledger schema and Merkle invariants)
- `docs/modules/findings-ledger/workflow-inference.md` (projection rules)
- `src/Findings/AGENTS.md` (module working rules)
- `docs/modules/vex-lens/guides/explorer-integration.md` (VexLens integration contract)
---
# Phase 1 -- In-Memory to Postgres Migration
Goal: Replace all ConcurrentDictionary stores with Postgres-backed repositories while VulnExplorer remains its own service. Validate data model and API contract preservation.
## Delivery Tracker (Phase 1)
### VXPM-001 - Create VulnExplorer Postgres schema and SQL migrations
Status: TODO
Dependency: none
Owners: Backend engineer
Task description:
- Create a new persistence library `StellaOps.VulnExplorer.Persistence` (or add persistence to the existing `StellaOps.VulnExplorer.Api` project) following the pattern in `src/Findings/StellaOps.Findings.Ledger/Infrastructure/Postgres/`.
- Design tables under a `vulnexplorer` schema:
- `vulnexplorer.vex_decisions` -- stores VEX decision records with all fields from `VexDecisionDto`: id (PK, uuid), vulnerability_id, subject (JSONB), status, justification_type, justification_text, evidence_refs (JSONB), scope (JSONB), valid_for (JSONB), attestation_ref (JSONB), signed_override (JSONB), supersedes_decision_id, created_by (JSONB), tenant_id, created_at, updated_at.
- `vulnexplorer.fix_verifications` -- stores fix verification records: cve_id (PK), component_purl, artifact_digest, verdict, transitions (JSONB array), tenant_id, created_at, updated_at.
- `vulnexplorer.audit_bundles` -- stores audit bundle records: bundle_id (PK), tenant_id, decision_ids (JSONB array), evidence_refs (JSONB array), created_at.
- Write SQL migration files as embedded resources:
- `001_initial_vulnexplorer_schema.sql` -- create schema and tables
- Include RLS policies for tenant isolation (follow pattern from `src/Findings/StellaOps.Findings.Ledger/migrations/007_enable_rls.sql`)
- Wire `AddStartupMigrations("vulnexplorer", "VulnExplorer", migrationsAssembly)` in VulnExplorer's `Program.cs` per the auto-migration requirement (CLAUDE.md section 2.7).
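The core of `001_initial_vulnexplorer_schema.sql` could look like the sketch below. Table and column names come from the task description above; `audit_bundles` follows the same pattern and is omitted for brevity. The `app.current_tenant` GUC name in the RLS policy is an assumption -- mirror whatever the Ledger's `007_enable_rls.sql` actually uses.

```sql
-- 001_initial_vulnexplorer_schema.sql (sketch; IF NOT EXISTS keeps re-runs safe)
CREATE SCHEMA IF NOT EXISTS vulnexplorer;

CREATE TABLE IF NOT EXISTS vulnexplorer.vex_decisions (
    id                     uuid PRIMARY KEY,
    vulnerability_id       text        NOT NULL,
    subject                jsonb       NOT NULL,
    status                 text        NOT NULL,
    justification_type     text,
    justification_text     text,
    evidence_refs          jsonb       NOT NULL DEFAULT '[]'::jsonb,
    scope                  jsonb,
    valid_for              jsonb,
    attestation_ref        jsonb,
    signed_override        jsonb,
    supersedes_decision_id uuid,
    created_by             jsonb       NOT NULL,
    tenant_id              text        NOT NULL,
    created_at             timestamptz NOT NULL DEFAULT now(),
    updated_at             timestamptz NOT NULL DEFAULT now()
);

CREATE TABLE IF NOT EXISTS vulnexplorer.fix_verifications (
    cve_id          text PRIMARY KEY,
    component_purl  text,
    artifact_digest text,
    verdict         text        NOT NULL,
    transitions     jsonb       NOT NULL DEFAULT '[]'::jsonb,
    tenant_id       text        NOT NULL,
    created_at      timestamptz NOT NULL DEFAULT now(),
    updated_at      timestamptz NOT NULL DEFAULT now()
);

-- Tenant isolation via RLS; 'app.current_tenant' is an assumed GUC name.
ALTER TABLE vulnexplorer.vex_decisions ENABLE ROW LEVEL SECURITY;
DROP POLICY IF EXISTS vex_decisions_tenant ON vulnexplorer.vex_decisions;
CREATE POLICY vex_decisions_tenant ON vulnexplorer.vex_decisions
    USING (tenant_id = current_setting('app.current_tenant', true));
```

Note that `CREATE POLICY` has no `IF NOT EXISTS` form, hence the `DROP POLICY IF EXISTS` guard to keep the migration idempotent.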
Tests:
- Unit test that migration SQL is valid and can be parsed
- Integration test that migrations apply cleanly to a fresh database
- Integration test that migrations are idempotent (re-run does not fail)
Users:
- No user-facing changes -- this is infrastructure-only
Documentation:
- Add schema documentation to `src/Findings/StellaOps.VulnExplorer.Api/AGENTS.md` describing the new tables
- Document migration file naming convention in the module AGENTS.md
Completion criteria:
- [ ] SQL migration files exist and are embedded resources in the project
- [ ] Schema creates cleanly on a fresh database
- [ ] Auto-migration wired in Program.cs and runs on startup
- [ ] RLS policies enforce tenant isolation
- [ ] No manual init scripts required
### VXPM-002 - Implement Postgres repository for VEX decisions
Status: TODO
Dependency: none (can start before VXPM-001 with interface-first approach)
Owners: Backend engineer
Task description:
- Create `IVexDecisionRepository` interface mirroring the `VexDecisionStore` API surface:
- `CreateAsync(VexDecisionDto)` -> `VexDecisionDto`
- `UpdateAsync(Guid, UpdateVexDecisionRequest)` -> `VexDecisionDto?`
- `GetAsync(Guid)` -> `VexDecisionDto?`
- `QueryAsync(vulnerabilityId?, subjectName?, status?, skip, take)` -> `IReadOnlyList<VexDecisionDto>`
- `CountAsync()` -> `int`
- Implement `PostgresVexDecisionRepository` using EF Core or raw Npgsql (follow the pattern in `src/Findings/StellaOps.Findings.Ledger/Infrastructure/Postgres/PostgresLedgerEventRepository.cs`).
- Create `IFixVerificationRepository` and `PostgresFixVerificationRepository`:
- `CreateAsync(CreateFixVerificationRequest)` -> `FixVerificationRecord`
- `UpdateAsync(cveId, verdict)` -> `FixVerificationRecord?`
- Create `IAuditBundleRepository` and `PostgresAuditBundleRepository`:
- `CreateAsync(tenant, decisions)` -> `AuditBundleResponse`
- Preserve the `IVexOverrideAttestorClient` integration: `CreateWithAttestationAsync` and `UpdateWithAttestationAsync` logic moves into a service layer that wraps the repository.
Tests:
- Unit tests for each repository method with an in-memory database or test containers
- Test that VEX decision CRUD preserves all fields (especially JSONB: subject, scope, evidence_refs, signed_override)
- Test that fix verification state transitions are correctly persisted and reconstructed
- Test that audit bundle creation aggregates evidence refs from persisted decisions
- Test deterministic ordering (createdAt desc, id asc) matches current in-memory behavior
Users:
- No user-facing API changes -- same endpoints, same request/response shapes
- `VexDecisionStore.CreateWithAttestationAsync` behavior preserved for `IVexOverrideAttestorClient`
Documentation:
- Document repository interfaces in module AGENTS.md
Completion criteria:
- [ ] All repository interfaces defined
- [ ] Postgres implementations for all three repositories
- [ ] Business logic (attestation flow, state machine, bundle aggregation) preserved in service layer
- [ ] All JSONB fields round-trip correctly
### VXPM-003 - Replace SampleData with seeded Postgres data
Status: TODO
Dependency: none
Owners: Backend engineer
Task description:
- Remove `SampleData.cs` (hardcoded VulnSummary/VulnDetail records).
- Replace the vuln list/detail endpoints (`GET /v1/vulns`, `GET /v1/vulns/{id}`) with queries against a new `IVulnerabilityQueryService` that reads from `findings_projection` (the Ledger table, accessed via cross-schema query or a shared connection) or a VulnExplorer-owned view/table.
- Decision needed: whether VulnExplorer reads from `findings_ledger.findings_projection` directly (simpler, couples to Ledger schema) or maintains its own materialized view. Recommendation: read from Ledger projection directly via the shared Postgres connection, since VulnExplorer will be merged into Ledger in Phase 2 anyway.
- If Ledger projection is used: wire the Ledger's `IFindingProjectionRepository` or create a read-only query service that maps `FindingProjection` rows to `VulnSummary`/`VulnDetail`.
- If VulnExplorer-owned table is used: create `vulnexplorer.vulnerability_summaries` table and a sync mechanism from Ledger events.
- Replace `EvidenceSubgraphStore.Build()` (which returns hardcoded graph) with either:
- A call to Ledger's `IEvidenceGraphBuilder.BuildAsync()` (if accessible via shared library reference)
- An HTTP call to Ledger's `/evidence-graph/{findingId}` endpoint
- Recommendation: use shared library reference since both are in `src/Findings/`
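If the recommended option (reading the Ledger projection directly) is taken, the query service reduces to a thin mapping layer. A sketch follows; `IFindingProjectionRepository.QueryScoredAsync` is referenced elsewhere in this plan, but the `FindingProjection` member names and the `ToProjectionQuery()` helper are assumptions to be checked against the Ledger code:

```csharp
// Sketch: read-only query service mapping Ledger projections to VulnExplorer DTOs.
// FindingProjection member names and ToProjectionQuery() are hypothetical.
public sealed class VulnerabilityQueryService : IVulnerabilityQueryService
{
    private readonly IFindingProjectionRepository _projections;

    public VulnerabilityQueryService(IFindingProjectionRepository projections)
        => _projections = projections;

    public async Task<VulnListResponse> ListAsync(VulnFilter filter, CancellationToken ct)
    {
        var rows = await _projections.QueryScoredAsync(filter.ToProjectionQuery(), ct);
        return new VulnListResponse { Items = rows.Select(MapSummary).ToList() };
    }

    private static VulnSummary MapSummary(FindingProjection row) => new()
    {
        Id = row.FindingId,
        Cve = row.VulnerabilityId,
        Severity = row.Severity,
        // exploitability / fixAvailable come from the labels JSONB, reusing the
        // extraction logic already present in VulnerabilityDetailService.
    };
}
```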
Tests:
- Test that `GET /v1/vulns` returns findings from database (not hardcoded data)
- Test that `GET /v1/vulns/{id}` returns finding detail from database
- Test filtering (CVE, PURL, severity, exploitability, fixAvailable) works against real data
- Test pagination (pageToken/pageSize) works
- Test evidence subgraph returns real graph data (not the hardcoded stub)
- Regression test: verify that the 4 existing `VulnApiTests` (List_ReturnsDeterministicOrder, List_FiltersByCve, Detail_ReturnsNotFoundWhenMissing, etc.) still pass with the new persistence layer -- they will need seed data in the test database
Users:
- `VulnerabilityListService` (UI) calls `/api/v1/vulnerabilities` -- verify response shape unchanged
- `EvidenceSubgraphService` (UI) calls `/api/vuln-explorer/findings/{id}/evidence-subgraph` -- verify response shape unchanged
Documentation:
- Update `docs/features/checked/vulnexplorer/vulnexplorer-triage-api.md` to note that data is now persisted (not in-memory)
Completion criteria:
- [ ] `SampleData.cs` deleted
- [ ] `EvidenceSubgraphStore` hardcoded data removed
- [ ] Vuln list/detail endpoints return data from Postgres
- [ ] Evidence subgraph endpoint returns real graph data
- [ ] All existing filters and pagination work against Postgres queries
- [ ] Existing test assertions updated and passing
### VXPM-004 - Wire repositories into VulnExplorer Program.cs and replace in-memory singletons
Status: TODO
Dependency: VXPM-001, VXPM-002, VXPM-003
Owners: Backend engineer
Task description:
- Update `Program.cs` to replace all in-memory `AddSingleton` registrations:
- Remove `builder.Services.AddSingleton<VexDecisionStore>(...)` -> register `IVexDecisionRepository` (scoped)
- Remove `builder.Services.AddSingleton<FixVerificationStore>()` -> register `IFixVerificationRepository` (scoped)
- Remove `builder.Services.AddSingleton<AuditBundleStore>()` -> register `IAuditBundleRepository` (scoped)
- Remove `builder.Services.AddSingleton<EvidenceSubgraphStore>()` -> register `IEvidenceGraphBuilder` or equivalent
- Update all endpoint handlers in `Program.cs` to use the repository/service interfaces instead of the concrete stores.
- Wire the Postgres connection string from `ConnectionStrings__Default` (already in compose environment).
- Ensure the `StubVexOverrideAttestorClient` remains wired for dev/test, with `HttpVexOverrideAttestorClient` available for production.
- Verify all 10 endpoints continue to work:
- `GET /v1/vulns` (list)
- `GET /v1/vulns/{id}` (detail)
- `POST /v1/vex-decisions` (create, with optional attestation)
- `PATCH /v1/vex-decisions/{id:guid}` (update)
- `GET /v1/vex-decisions` (list)
- `GET /v1/vex-decisions/{id:guid}` (get)
- `GET /v1/evidence-subgraph/{vulnId}` (subgraph)
- `POST /v1/fix-verifications` (create)
- `PATCH /v1/fix-verifications/{cveId}` (update)
- `POST /v1/audit-bundles` (create)
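The `Program.cs` change is mostly a registration swap. A sketch, assuming the repository types from VXPM-002; `NpgsqlDataSource.Create` is the real Npgsql API, while the environment-based attestor switch is illustrative:

```csharp
// Before: in-memory singletons (removed)
// builder.Services.AddSingleton<VexDecisionStore>(...);
// builder.Services.AddSingleton<FixVerificationStore>();

// After: Postgres-backed, scoped per request; one shared NpgsqlDataSource.
var connectionString = builder.Configuration.GetConnectionString("Default")
    ?? throw new InvalidOperationException("ConnectionStrings__Default is required.");
builder.Services.AddSingleton(NpgsqlDataSource.Create(connectionString));
builder.Services.AddScoped<IVexDecisionRepository, PostgresVexDecisionRepository>();
builder.Services.AddScoped<IFixVerificationRepository, PostgresFixVerificationRepository>();
builder.Services.AddScoped<IAuditBundleRepository, PostgresAuditBundleRepository>();

// Attestation client: stub for dev/test, HTTP client for production.
if (builder.Environment.IsProduction())
    builder.Services.AddHttpClient<IVexOverrideAttestorClient, HttpVexOverrideAttestorClient>();
else
    builder.Services.AddSingleton<IVexOverrideAttestorClient, StubVexOverrideAttestorClient>();
```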
Tests:
- Full integration test suite against Postgres: run the existing `VulnExplorerTriageApiE2ETests` (5 tests) against the Postgres-backed service
- Run the existing `VulnApiTests` (4 tests) against the Postgres-backed service
- Verify no 500 errors on cold start (fresh DB with auto-migration)
- Verify service starts and registers with Valkey router successfully
Users:
- All UI consumers should see zero behavioral change
- Gateway routing unchanged (`/api/vuln-explorer(.*) -> vulnexplorer.stella-ops.local`)
Documentation:
- Update `src/Findings/StellaOps.VulnExplorer.Api/AGENTS.md` to reflect persistence architecture
Completion criteria:
- [ ] Zero `ConcurrentDictionary` or in-memory store references in VulnExplorer
- [ ] All 10 endpoints return data from Postgres
- [ ] `VulnExplorerTriageApiE2ETests` (5 tests) pass
- [ ] `VulnApiTests` (4 tests) pass with seeded data
- [ ] Cold-start works: auto-migration creates schema, service starts, responds to health check
- [ ] Docker compose: vulnexplorer container starts cleanly with Postgres
### VXPM-005 - Phase 1 integration validation
Status: TODO
Dependency: VXPM-004
Owners: QA, Backend engineer
Task description:
- Full system test: bring up the complete compose stack and verify:
- VulnExplorer starts, auto-migrates, and registers with Valkey
- UI flows that consume VulnExplorer work end-to-end (navigate to vuln explorer page, view findings, create VEX decision, view evidence subgraph)
- VexLens `IVulnExplorerIntegration` continues to enrich vulnerabilities (this is an in-process integration in VexLens, not an HTTP call to VulnExplorer -- verify it still works)
- Concelier `VulnExplorerTelemetry` metrics still emit (this is just a meter, no runtime dependency on VulnExplorer service)
- Run all existing test suites:
- `src/Findings/__Tests/StellaOps.VulnExplorer.Api.Tests/` (9 tests)
- `src/Findings/__Tests/StellaOps.Findings.Ledger.Tests/` (verify no regressions)
- `src/Web/StellaOps.Web/src/tests/vuln_explorer/` (2 behavioral specs)
Tests:
- All tests listed above pass
- Manual or Playwright verification of UI vuln explorer page
Users:
- End-to-end user flow validated
Documentation:
- Record test results in Execution Log
Completion criteria:
- [ ] All 9 VulnExplorer API tests pass
- [ ] All Ledger tests pass (no regression)
- [ ] UI behavioral specs pass
- [ ] VulnExplorer container starts and responds in full compose stack
- [ ] Data survives container restart (persistence verified)
---
# Phase 2 -- Merge VulnExplorer into Findings Ledger
Goal: Eliminate VulnExplorer as a separate service. Move all endpoints into Ledger WebService. VEX decisions and fix verifications become Ledger events. Remove VulnExplorer container from compose.
## Delivery Tracker (Phase 2)
### VXLM-001 - Migrate VulnExplorer endpoint DTOs into Ledger WebService
Status: DONE
Dependency: VXPM-005 (Phase 1 complete)
Owners: Backend engineer
Task description:
- Move VulnExplorer contract types into the Ledger WebService `Contracts/` namespace:
- `VulnModels.cs` (VulnSummary, VulnDetail, VulnListResponse, VulnFilter, EvidenceProvenance, PolicyRationale, PackageAffect, AdvisoryRef, EvidenceRef)
- `VexDecisionModels.cs` (VexDecisionDto, CreateVexDecisionRequest, UpdateVexDecisionRequest, VexDecisionListResponse, SubjectRefDto, EvidenceRefDto, VexScopeDto, ValidForDto, AttestationRefDto, ActorRefDto, VexOverrideAttestationDto, AttestationVerificationStatusDto, AttestationRequestOptions, and all enums: VexStatus, SubjectType, EvidenceType, VexJustificationType)
- `FixVerificationModels.cs` (FixVerificationResponse, FixVerificationGoldenSetRef, FixVerificationAnalysis, FunctionChangeResult, FunctionChangeChild, ReachabilityChangeResult, FixVerificationRiskImpact, FixVerificationEvidenceChain, EvidenceChainItem, FixVerificationRequest)
- `AttestationModels.cs` (VulnScanAttestationDto, AttestationSubjectDto, VulnScanPredicateDto, ScannerInfoDto, ScannerDbInfoDto, SeverityCountsDto, FindingReportDto, AttestationMetaDto, AttestationSignerDto, AttestationListResponse, AttestationSummaryDto, AttestationType)
- `TriageWorkflowModels.cs` (CreateFixVerificationRequest, UpdateFixVerificationRequest, CreateAuditBundleRequest, AuditBundleResponse, FixVerificationTransition, FixVerificationRecord)
- Contracts from `StellaOps.VulnExplorer.WebService.Contracts.EvidenceSubgraphContracts` already exist conceptually in the Ledger's `EvidenceGraphContracts.cs` -- create thin adapter types or type aliases where the frontend expects the VulnExplorer shape.
- Keep the VulnExplorer API path prefix (`/v1/vulns`, `/v1/vex-decisions`, `/v1/evidence-subgraph`, `/v1/fix-verifications`, `/v1/audit-bundles`) as route groups in the Ledger WebService to avoid frontend breaking changes.
Tests:
- Compilation test: all contract types compile within Ledger WebService
- Verify no duplicate type definitions between the two projects
- Verify existing Ledger tests still pass after adding new contracts
Users:
- No UI changes needed at this stage -- endpoints return 501 initially
- Frontend API clients (`vex-decisions.client.ts`, `audit-bundles.client.ts`, `evidence-subgraph.service.ts`) will be retargeted in VXLM-004
Documentation:
- Update `docs/API_CLI_REFERENCE.md` to note VulnExplorer endpoints are now served by Findings Ledger
Completion criteria:
- [ ] All VulnExplorer contract types compile within Ledger WebService
- [ ] No duplicate type definitions between the two projects
- [ ] VulnExplorer API paths registered in Ledger WebService (can return 501 initially)
- [ ] Existing Ledger tests still pass
### VXLM-002 - Wire VulnExplorer read endpoints to Ledger projection queries
Status: DONE
Dependency: VXPM-005 (Phase 1 complete)
Owners: Backend engineer
Task description:
- Implement `/v1/vulns` (list) by querying `IFindingProjectionRepository.QueryScoredAsync()` and mapping `FindingProjection` to `VulnSummary`. The Ledger's `VulnerabilityDetailService` already does the field extraction from `labels` JSONB -- reuse that logic.
- Implement `/v1/vulns/{id}` (detail) by calling `VulnerabilityDetailService.GetAsync()` and mapping to `VulnDetail`. The existing `VulnerabilityDetailResponse` is a superset of VulnDetail.
- Implement `/v1/evidence-subgraph/{vulnId}` by calling `IEvidenceGraphBuilder.BuildAsync()` and mapping `EvidenceGraphResponse` to `EvidenceSubgraphResponse`. The Ledger's graph model (verdict node, VEX nodes, reachability, runtime, SBOM, provenance) covers all VulnExplorer subgraph node types.
Tests:
- Integration test: create a finding via Ledger event, then query via `/v1/vulns` and verify it appears in the response
- Integration test: `GET /v1/vulns/{id}` returns correct detail for a known finding
- Integration test: evidence subgraph returns graph with correct node types
- Test filtering (CVE, PURL, severity, exploitability, fixAvailable) works against Ledger projection fields
- Test pagination (pageToken/pageSize) works
Users:
- `VulnerabilityListService` (UI) at `/api/v1/vulnerabilities` -- ensure response shape unchanged
- `EvidenceSubgraphService` (UI) at `/api/vuln-explorer/findings/{id}/evidence-subgraph` -- ensure response shape unchanged
- `vulnerability-detail.component.ts` (UI) -- verify data binding unchanged
Documentation:
- Update `docs/modules/findings-ledger/README.md` with new endpoint groups
Completion criteria:
- [ ] `/v1/vulns` returns findings from Ledger DB (not hardcoded data)
- [ ] `/v1/vulns/{id}` returns finding detail from Ledger projections
- [ ] `/v1/evidence-subgraph/{vulnId}` returns real evidence graph data
- [ ] Filtering (CVE, PURL, severity, exploitability, fixAvailable) works against Ledger projection fields
- [ ] Pagination (pageToken/pageSize) works
### VXLM-003 - Migrate VEX decision and fix verification endpoints to Ledger event persistence
Status: DONE
Dependency: VXLM-001, VXLM-002
Owners: Backend engineer
Task description:
- **New Ledger event types**: Add to `LedgerEventConstants` (`src/Findings/StellaOps.Findings.Ledger/Domain/LedgerEventConstants.cs`):
- `EventFindingVexDecisionCreated = "finding.vex_decision_created"`
- `EventFindingVexDecisionUpdated = "finding.vex_decision_updated"`
- `EventFindingFixVerificationCreated = "finding.fix_verification_created"`
- `EventFindingFixVerificationUpdated = "finding.fix_verification_updated"`
- Add all four to `SupportedEventTypes` and `FindingEventTypes`
- **VEX Decisions**: Wire `POST /v1/vex-decisions` to emit a `finding.vex_decision_created` Ledger event with the VEX decision payload in the event body JSONB. The VEX override attestation flow (`IVexOverrideAttestorClient`) is preserved and produces a `ledger_attestation_pointers` record when attestation succeeds.
- Wire `PATCH /v1/vex-decisions/{id}` to emit a `finding.vex_decision_updated` event (append-only update).
- Wire `GET /v1/vex-decisions` to query the `observations` table filtered by action type, or introduce a new Ledger projection for VEX decisions.
- Wire `GET /v1/vex-decisions/{id:guid}` to reconstruct from Ledger events.
- **Fix Verification**: Wire `POST /v1/fix-verifications` to emit `finding.fix_verification_created` event. Store verdict, transitions, and evidence chain in event body. Wire `PATCH /v1/fix-verifications/{cveId}` to emit `finding.fix_verification_updated` event with state transition.
- **Audit Bundle**: Wire `POST /v1/audit-bundles` to delegate to `EvidenceBundleService` or `OrchestratorExportService`, packaging the referenced VEX decisions from the Ledger chain.
- **Data migration**: Migrate VulnExplorer's `vulnexplorer.*` tables into Ledger events. Write a one-time migration that:
- Reads all VEX decisions from `vulnexplorer.vex_decisions` and emits corresponding Ledger events
- Reads all fix verifications from `vulnexplorer.fix_verifications` and emits corresponding events
- Records the migration in the Execution Log
- Add new SQL migration `010_vex_fix_verification_events.sql` to add the event types to the Ledger's `ledger_event_type` enum (if using enum) or document the new type strings.
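The constants addition is mechanical; a sketch against `LedgerEventConstants`, using the event-type strings specified above (the actual shapes of the collections in the file may differ):

```csharp
public static class LedgerEventConstants
{
    // ...existing event types...

    public const string EventFindingVexDecisionCreated = "finding.vex_decision_created";
    public const string EventFindingVexDecisionUpdated = "finding.vex_decision_updated";
    public const string EventFindingFixVerificationCreated = "finding.fix_verification_created";
    public const string EventFindingFixVerificationUpdated = "finding.fix_verification_updated";

    // All four must also be added to SupportedEventTypes and FindingEventTypes
    // so that both validation and projection accept them.
}
```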
Tests:
- Integration test: create VEX decision via `POST /v1/vex-decisions`, verify it persists as Ledger event, query back via `GET`
- Integration test: VEX decision with attestation produces both Ledger event and `ledger_attestation_pointers` record
- Integration test: fix verification create and update produce state transitions as Ledger events
- Integration test: audit bundle aggregates from Ledger data, not in-memory store
- Test Merkle chain integrity: new VEX/fix events participate in the append-only hash chain
- Test data migration script: verify it correctly converts existing records
- Test backward compatibility: old VEX decisions created before migration are still queryable
Users:
- `VexDecisionsHttpClient` (UI) -- verify create/list/get/patch all work with Ledger persistence
- `AuditBundlesHttpClient` (UI) -- verify bundle creation aggregates from Ledger events
- `triage-explainability-workspace.spec.ts` (E2E) -- verify full triage workflow
Documentation:
- Update `docs/modules/findings-ledger/schema.md` with new event types
- Update `docs/modules/findings-ledger/workflow-inference.md` if projection rules change
- Update `docs/features/checked/vulnexplorer/vulnexplorer-triage-api.md` to document Ledger-backed persistence
Completion criteria:
- [ ] VEX decisions are persisted as Ledger events (append-only, with Merkle chain integrity)
- [ ] VEX override attestations produce `ledger_attestation_pointers` records
- [ ] Fix verifications are persisted as Ledger events with state transitions
- [ ] Audit bundles aggregate from Ledger data (not in-memory store)
- [ ] New SQL migration `010_vex_fix_verification_events.sql` adds event types
- [ ] All ConcurrentDictionary stores eliminated
- [ ] Data migration from `vulnexplorer.*` tables to Ledger events complete
### VXLM-004 - Remove VulnExplorer service and update compose/routing/consumers
Status: DONE
Dependency: VXLM-003
Owners: Backend engineer, DevOps
Task description:
- Remove `StellaOps.VulnExplorer.Api/` project from the solution.
- Remove `StellaOps.VulnExplorer.WebService/` project from the solution (inline `EvidenceSubgraphContracts` into Ledger if still referenced).
- Remove `StellaOps.VulnExplorer.Persistence/` (Phase 1 persistence library) -- its tables are superseded by Ledger events.
- Update `docker-compose.stella-ops.yml`:
- Remove the vulnexplorer service container
- Remove `STELLAOPS_VULNEXPLORER_URL` from the gateway's environment variables
- Update `docker-compose.stella-services.yml`:
- Remove vulnexplorer service definition
- Remove `STELLAOPS_VULNEXPLORER_URL` from shared environment
- Update `devops/compose/router-gateway-local.json`:
- Change route `^/api/vuln-explorer(.*)` to target `http://findings-ledger.stella-ops.local/api/vuln-explorer$1`
- Or add new routes for `/v1/vulns*`, `/v1/vex-decisions*`, etc. targeting findings-ledger
- Update `devops/compose/hosts.stellaops.local` -- remove vulnexplorer hostname
- Update `devops/compose/envsettings-override.json` -- change `apiBaseUrls.vulnexplorer` to point to findings-ledger or remove if the gateway handles routing
- Update `devops/docker/services-matrix.env` -- remove vulnexplorer project path if present
- Update `devops/helm/stellaops/templates/vuln-mock.yaml` -- remove or repurpose
- Update cross-service references:
- `src/Authority/StellaOps.Auth.Abstractions/StellaOpsServiceIdentities.cs` -- deprecate or remove `VulnExplorer` identity (or redirect to findings-ledger)
- `src/VexLens/StellaOps.VexLens/Integration/` -- `IVulnExplorerIntegration`/`VulnExplorerIntegration` remain as-is (they use `IConsensusProjectionStore`, not HTTP to VulnExplorer)
- `src/Concelier/StellaOps.Concelier.Core/Diagnostics/VulnExplorerTelemetry.cs` -- rename meter to `StellaOps.Findings.VulnExplorer` or leave as-is for telemetry continuity
- `src/Router/__Tests/StellaOps.Gateway.WebService.Tests/Middleware/RouteDispatchMiddlewareMicroserviceTests.cs` -- update test expectations for route target
- `src/Router/__Tests/StellaOps.Router.Gateway.Tests/OpenApi/OpenApiDocumentGeneratorTests.cs` -- update if it references vulnexplorer routes
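For `router-gateway-local.json`, the retarget can be as small as changing the upstream host. A sketch of both options -- the JSON property names here are guessed from the route description above and must be matched to the actual file schema:

```json
{
  "routes": [
    {
      "match": "^/api/vuln-explorer(.*)",
      "target": "http://findings-ledger.stella-ops.local/api/vuln-explorer$1"
    },
    {
      "match": "^/v1/(vulns|vex-decisions|evidence-subgraph|fix-verifications|audit-bundles)(.*)",
      "target": "http://findings-ledger.stella-ops.local/v1/$1$2"
    }
  ]
}
```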
Tests:
- Verify solution builds without VulnExplorer projects
- Verify the full compose stack starts cleanly without the vulnexplorer container (61+ containers, down from 62+)
- Verify gateway routes `/v1/vulns*`, `/v1/vex-decisions*`, `/v1/evidence-subgraph*`, `/v1/fix-verifications*`, `/v1/audit-bundles*` to findings-ledger service
- Verify VexLens integration still works (no runtime dependency on VulnExplorer service)
- Verify Concelier telemetry still emits (no runtime dependency on VulnExplorer service)
- Run gateway routing tests and verify they pass with updated route targets
Users:
- UI: `vex-decisions.client.ts` `VEX_DECISIONS_API_BASE_URL` -- verify it resolves to the gateway which now routes to findings-ledger
- UI: `audit-bundles.client.ts` `AUDIT_BUNDLES_API_BASE_URL` -- same verification
- UI: `evidence-subgraph.service.ts` base URL `/api/vuln-explorer` -- verify gateway route rewrite works
- UI: `vulnerability-list.service.ts` base URL `/api/v1/vulnerabilities` -- verify routing
- `envsettings-override.json` apiBaseUrls update consumed by UI at runtime
Documentation:
- Update `docs/technical/architecture/webservice-catalog.md` -- remove VulnExplorer entry, note merged into Findings Ledger
- Update `docs/technical/architecture/port-registry.md` -- remove VulnExplorer port allocation
- Update `docs/technical/architecture/component-map.md` -- update diagram
- Update `docs/technical/architecture/module-matrix.md` -- remove VulnExplorer row
- Update `docs/dev/DEV_ENVIRONMENT_SETUP.md` -- remove VulnExplorer references
- Update `docs/dev/SOLUTION_BUILD_GUIDE.md` -- remove VulnExplorer project
- Update `docs/technical/cicd/path-filters.md` -- remove VulnExplorer paths
Completion criteria:
- [ ] No vulnexplorer container in compose
- [ ] Gateway routes VulnExplorer API paths to findings-ledger service
- [ ] Solution builds without VulnExplorer projects
- [ ] All containers start cleanly
- [ ] Cross-service references updated (VexLens, Concelier, Authority, Router tests)
- [ ] UI `envsettings-override.json` updated
### VXLM-005 - Integration tests, UI validation, and documentation update
Status: TODO
Dependency: VXLM-004
Owners: Backend engineer, QA
Task description:
- Port VulnExplorer test assertions to Ledger test project (`src/Findings/__Tests/StellaOps.Findings.Ledger.Tests/`). Add integration tests that:
- Create a finding via Ledger event, then query via `/v1/vulns` and `/v1/vulns/{id}`.
- Create a VEX decision via `POST /v1/vex-decisions`, verify it persists as Ledger event, query back via `GET`.
- Create a VEX decision with attestation, verify `ledger_attestation_pointers` record.
- Create a fix verification, verify state transitions persist as Ledger events.
- Create an audit bundle from persisted decisions.
- Retrieve evidence subgraph for a finding with real evidence data.
- Full triage workflow: create finding -> create VEX decision -> create fix verification -> create audit bundle -> verify all queryable.
- Run UI behavioral specs:
- `src/Web/StellaOps.Web/src/tests/vuln_explorer/vuln-explorer-with-evidence-tree-and-citation-links.behavior.spec.ts`
- `src/Web/StellaOps.Web/src/tests/vuln_explorer/filter-preset-pills-with-url-synchronization.component.spec.ts`
- `src/Web/StellaOps.Web/tests/e2e/triage-explainability-workspace.spec.ts`
- Remove or archive old VulnExplorer test project (`src/Findings/__Tests/StellaOps.VulnExplorer.Api.Tests/`).
- Update documentation:
- `src/Findings/AGENTS.md` -- document the merged endpoint surface and note VulnExplorer is now part of Findings Ledger
- `docs/modules/findings-ledger/schema.md` -- add new event types (vex_decision_created/updated, fix_verification_created/updated)
- `docs/modules/findings-ledger/README.md` -- note VulnExplorer endpoints merged in
- `docs/modules/web/README.md` -- update service dependency list
- `docs/modules/ui/architecture.md` -- update service dependency list
- `docs/modules/ui/component-preservation-map/README.md` -- update VulnExplorer component status
- `docs/modules/vex-lens/guides/explorer-integration.md` -- note VulnExplorer merged into Ledger
- `docs/modules/authority/AUTHORITY.md` -- note service identity change
- `docs/operations/runbooks/vuln-ops.md` -- update operational procedures
- `docs/qa/feature-checks/state/vulnexplorer.json` -- update state to reflect merge
- `docs/INDEX.md` -- update if VulnExplorer is listed separately
- High-level architecture docs (`docs/07_HIGH_LEVEL_ARCHITECTURE.md`) if the service count changes
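As a shape for the ported assertions, a round-trip test might look like the following sketch (`_factory`, the request payload, and the response shape are illustrative assumptions, not the real test infrastructure):
```csharp
// Sketch only: _factory and payload fields are assumptions.
[Fact]
public async Task VexDecision_RoundTrips_Through_Ledger()
{
    using var client = _factory.CreateClient();

    var create = await client.PostAsJsonAsync("/v1/vex-decisions", new
    {
        findingId = "finding-123",
        status = "not_affected",
        justification = "vulnerable_code_not_present"
    });
    create.EnsureSuccessStatusCode();

    var created = await create.Content.ReadFromJsonAsync<JsonElement>();
    var id = created.GetProperty("id").GetString();

    // The decision must persist as a Ledger event and be queryable back.
    var fetched = await client.GetAsync($"/v1/vex-decisions/{id}");
    fetched.EnsureSuccessStatusCode();
}
```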
Tests:
- All 6+ new integration tests pass
- All existing Ledger tests pass (no regression)
- UI behavioral specs pass
- E2E triage workspace spec passes
- All ported VulnExplorer test assertions pass in Ledger test project
Users:
- End-to-end validation: all UI flows that previously hit VulnExplorer now work via Findings Ledger
- No user-visible behavior change
Documentation:
- All documentation updates listed above completed
Completion criteria:
- [ ] Integration tests cover all 6 merged endpoint groups
- [ ] Existing Ledger tests still pass
- [ ] UI behavioral specs pass
- [ ] E2E triage workspace spec passes
- [ ] Old VulnExplorer test project removed or archived
- [ ] Module AGENTS.md updated with merged endpoint list
- [ ] Schema docs updated with new event types
- [ ] All 13+ documentation files updated
- [ ] High-level architecture docs updated with new service count
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2026-04-08 | Sprint created from VulnExplorer/Ledger merge analysis. Option A (merge first, Ledger projections) selected. | Planning |
| 2026-04-08 | Sprint restructured into two phases: Phase 1 (in-memory to Postgres migration) and Phase 2 (merge into Ledger). Comprehensive consumer/dependency audit added. | Planning |
| 2026-04-08 | Phase 2 implemented (VXLM-001 through VXLM-004): DTOs moved to Ledger `Contracts/VulnExplorer/`, endpoints mounted via `VulnExplorerEndpoints.cs`, adapter services created, compose/routing/services-matrix updated, docs updated. Phase 1 skipped per user direction (wire to existing Ledger services instead of creating separate vulnexplorer schema). VXLM-005 (integration tests) remaining TODO. | Backend |
## Decisions & Risks
- **Decision**: Two-phase approach. Phase 1 migrates VulnExplorer to Postgres while it remains a standalone service. Phase 2 merges into Findings Ledger. Rationale: reduces risk by separating persistence migration from service boundary changes; allows independent validation of the data model.
- **Decision**: VulnExplorer's Phase 1 tables (`vulnexplorer.*` schema) are temporary. They serve as a stepping stone to validate the data model before the Ledger merge in Phase 2. Phase 2 will migrate their data into Ledger events and drop the tables.
- **Decision**: VulnExplorer API paths are preserved as-is in the Ledger WebService to avoid frontend breaking changes. They will be documented as aliases for the Ledger's native v2 endpoints.
- **Decision**: VulnExplorer reads from `findings_ledger.findings_projection` for vuln list/detail (Phase 1, VXPM-003) rather than creating its own vulnerability table. Rationale: avoids data duplication, and this is the same table that Ledger will serve in Phase 2.
- **Risk**: The VEX override attestation workflow (`IVexOverrideAttestorClient`) currently uses a stub in VulnExplorer. Merging preserves this stub but it must be connected to the real Attestor service for production. This is existing tech debt, not introduced by the migration.
- **Risk**: New Ledger event types (`finding.vex_decision_created`, `finding.fix_verification_created`) require a SQL migration to extend the event type set. Must ensure the migration runs before the new code deploys (auto-migration handles this).
- **Risk**: VexLens `IVulnExplorerIntegration` does not make HTTP calls to VulnExplorer -- it uses `IConsensusProjectionStore` in-process. No service dependency, but the interface name references VulnExplorer. Consider renaming in a follow-up sprint.
- **Risk**: Concelier `VulnExplorerTelemetry` meter name (`StellaOps.Concelier.VulnExplorer`) is baked into dashboards/alerts. Renaming would break observability continuity. Decision: leave meter name as-is, document the historical naming.
- **Risk**: `envsettings-override.json` has `apiBaseUrls.vulnexplorer` pointing to `https://stella-ops.local`. If the UI reads this to build API URLs, it must be updated in Phase 2. If the gateway handles all routing, this may be a no-op.
## Next Checkpoints
- **Phase 1**: VXPM-001/002/003 can proceed in parallel immediately. VXPM-004 integrates all three. VXPM-005 validates the complete Phase 1.
- **Phase 2 gate**: Phase 2 must not start until VXPM-005 passes. All VulnExplorer endpoints must be Postgres-backed and tested.
- **Phase 2**: VXLM-001 + VXLM-002 can proceed in parallel. VXLM-003 is the critical-path task. VXLM-004 (service removal) should be the last code change.
- **Demo (Phase 1)**: VulnExplorer with real Postgres persistence, zero hardcoded data, data survives restarts.
- **Demo (Phase 2)**: Merged service with Ledger-backed VulnExplorer endpoints, no VulnExplorer container, all UI flows working.

---
# Sprint 20260408-003 - Scheduler Plugin Architecture + Doctor Migration
## Topic & Scope
- Design and implement a generic job-plugin system for the Scheduler service, enabling non-scanning workloads (health checks, policy sweeps, graph builds, etc.) to be scheduled and executed as first-class Scheduler jobs.
- Migrate Doctor's thin scheduling layer (`StellaOps.Doctor.Scheduler`) to become the first Scheduler job plugin, eliminating a standalone service while preserving Doctor-specific UX and trending.
- Working directory: `src/JobEngine/` (primary), `src/Doctor/` (migration source), `src/Web/StellaOps.Web/src/app/features/doctor/` (UI adapter).
- Expected evidence: interface definitions compile, Doctor plugin builds, existing Scheduler tests pass, new plugin tests pass, Doctor UI still renders schedules and trends.
## Dependencies & Concurrency
- No upstream sprint blockers. The Scheduler WebService and Doctor Scheduler are both stable.
- Batch 1 (tasks 001-004) can proceed independently of Batch 2 (005-009).
- Batch 2 (Doctor plugin) depends on Batch 1 (plugin contracts).
- Batch 3 (UI + cleanup, tasks 010-012) depends on Batch 2.
- Safe to develop in parallel with any FE or Findings sprints since working directories do not overlap.
## Documentation Prerequisites
- `docs/modules/scheduler/architecture.md` (read before DOING)
- `src/JobEngine/AGENTS.Scheduler.md`
- `src/Doctor/AGENTS.md`
- `docs/doctor/doctor-capabilities.md`
---
## Architecture Design
### A. Current State Analysis
**Scheduler** (src/JobEngine/StellaOps.Scheduler.WebService):
- Manages `Schedule` entities with cron expressions, `ScheduleMode` (AnalysisOnly, ContentRefresh), `Selector` (image targeting), `ScheduleOnlyIf` preconditions, `ScheduleNotify` preferences, and `ScheduleLimits`.
- Creates `Run` entities with state machine: Planning -> Queued -> Running -> Completed/Error/Cancelled.
- The `Schedule.Mode` enum is hardcoded to scanning modes. The `Selector` model is image-centric (digests, namespaces, repositories, labels).
- Worker Host processes queue segments via `StellaOps.Scheduler.Queue` and `StellaOps.Scheduler.Worker.DependencyInjection`.
- Has an empty `StellaOps.Scheduler.plugins/scheduler/` directory and a working `PluginHostOptions` / `PluginHost.LoadPlugins()` assembly-loading pipeline via `StellaOps.Plugin.Hosting`.
- `SystemScheduleBootstrap` seeds 6 system schedules on startup.
- Already registers plugin assemblies via `RegisterPluginRoutines()` in Program.cs (line 189), which scans for `IDependencyInjectionRoutine` implementations.
**Doctor Scheduler** (src/Doctor/StellaOps.Doctor.Scheduler):
- Standalone slim WebApplication (~65 lines in Program.cs).
- `DoctorScheduleWorker` (BackgroundService): polls every N seconds, evaluates cron via Cronos, dispatches to `ScheduleExecutor`.
- `ScheduleExecutor`: makes HTTP POST to Doctor WebService `/api/v1/doctor/run`, polls for completion, stores trend data, evaluates alert rules.
- `DoctorSchedule` model: ScheduleId, Name, CronExpression, Mode (Quick/Full/Categories/Plugins), Categories[], Plugins[], Enabled, Alerts (AlertConfiguration), TimeZoneId, LastRunAt/Id/Status.
- All persistence is in-memory (`InMemoryScheduleRepository`, `InMemoryTrendRepository`). No Postgres implementation exists yet.
- Exposes REST endpoints at `/api/v1/doctor/scheduler/schedules` and `/api/v1/doctor/scheduler/trends`.
- 20 Doctor plugins across 18+ directories under `src/Doctor/__Plugins/`, each implementing `IDoctorPlugin` with `IDoctorCheck[]`.
**Doctor UI** (src/Web/StellaOps.Web/src/app/features/doctor):
- Calls Doctor WebService directly (`/doctor/api/v1/doctor/...`) for runs, checks, plugins, reports.
- Calls Doctor Scheduler at `/api/v1/doctor/scheduler/trends/categories/{category}` for trend sparklines.
- No schedule management UI exists yet (schedules are created via API or seed data).
### B. Plugin Architecture Design
#### B.1 The `ISchedulerJobPlugin` Contract
A new library `StellaOps.Scheduler.Plugin.Abstractions` defines the plugin contract:
```csharp
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

namespace StellaOps.Scheduler.Plugin;

/// <summary>
/// Contract for a schedulable job type. The plugin's <see cref="JobKind"/> is stored
/// in Schedule.JobKind and routes cron triggers to the correct plugin at execution time.
/// </summary>
public interface ISchedulerJobPlugin
{
/// <summary>
/// Unique, stable identifier for this job kind (e.g., "scan", "doctor", "policy-sweep").
/// Stored in the Schedule record; must be immutable once published.
/// </summary>
string JobKind { get; }
/// <summary>
/// Human-readable display name for the UI.
/// </summary>
string DisplayName { get; }
/// <summary>
/// Plugin version for compatibility checking.
/// </summary>
Version Version { get; }
/// <summary>
/// Creates a typed execution plan from a Schedule + Run.
/// Called when the cron fires or a manual run is created.
/// Returns a plan object that the Scheduler persists as the Run's plan payload.
/// </summary>
Task<JobPlan> CreatePlanAsync(JobPlanContext context, CancellationToken ct);
/// <summary>
/// Executes the plan. Called by the Worker Host.
/// Must be idempotent and support cancellation.
/// Updates Run state via the provided IRunProgressReporter.
/// </summary>
Task ExecuteAsync(JobExecutionContext context, CancellationToken ct);
/// <summary>
/// Optionally validates plugin-specific configuration stored in Schedule.PluginConfig.
/// Called on schedule create/update.
/// </summary>
Task<JobConfigValidationResult> ValidateConfigAsync(
IReadOnlyDictionary<string, object?> pluginConfig,
CancellationToken ct);
/// <summary>
/// Returns the JSON schema for plugin-specific configuration, enabling UI-driven forms.
/// </summary>
string? GetConfigJsonSchema();
/// <summary>
/// Registers plugin-specific services into DI.
/// Called once during host startup.
/// </summary>
void ConfigureServices(IServiceCollection services, IConfiguration configuration);
/// <summary>
/// Registers plugin-specific HTTP endpoints (optional).
/// Called during app.Map* phase.
/// </summary>
void MapEndpoints(IEndpointRouteBuilder routes);
}
```
#### B.2 Supporting Types
```csharp
/// <summary>
/// Immutable context passed to CreatePlanAsync.
/// </summary>
public sealed record JobPlanContext(
Schedule Schedule,
Run Run,
IServiceProvider Services,
TimeProvider TimeProvider);
/// <summary>
/// The plan produced by a plugin. Serialized to JSON and stored on the Run.
/// </summary>
public sealed record JobPlan(
string JobKind,
IReadOnlyDictionary<string, object?> Payload,
int EstimatedSteps = 1);
/// <summary>
/// Context passed to ExecuteAsync.
/// </summary>
public sealed record JobExecutionContext(
Schedule Schedule,
Run Run,
JobPlan Plan,
IRunProgressReporter Reporter,
IServiceProvider Services,
TimeProvider TimeProvider);
/// <summary>
/// Callback interface for plugins to report progress and update Run state.
/// </summary>
public interface IRunProgressReporter
{
Task ReportProgressAsync(int completed, int total, string? message = null, CancellationToken ct = default);
Task TransitionStateAsync(RunState newState, string? error = null, CancellationToken ct = default);
Task AppendLogAsync(string message, string level = "info", CancellationToken ct = default);
}
/// <summary>
/// Result of plugin config validation.
/// </summary>
public sealed record JobConfigValidationResult(
bool IsValid,
IReadOnlyList<string> Errors);
```
#### B.3 Schedule Model Extension
The existing `Schedule` record needs two new fields:
1. **`JobKind`** (string, default `"scan"`): routes to the correct `ISchedulerJobPlugin`. Existing schedules implicitly use `"scan"`.
2. **`PluginConfig`** (`ImmutableDictionary<string, object?>?`, optional): plugin-specific configuration stored as JSON. For scan jobs this is null (mode/selector cover everything). For Doctor jobs this contains `{ "doctorMode": "full", "categories": [...], "plugins": [...], "alerts": {...} }`.
The existing `ScheduleMode` and `Selector` remain valid for scan-type jobs. Plugins that don't target images can ignore `Selector` and set `Scope = AllImages` as a no-op.
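A minimal sketch of the extended record (existing fields are elided and the real record's positional shape may differ):
```csharp
using System.Collections.Immutable;

public sealed record Schedule(
    string Id,
    string Name,
    string CronExpression,
    ScheduleMode Mode,
    Selector Selector,
    // ...existing fields elided...
    string JobKind = "scan",                                   // new: routes to ISchedulerJobPlugin
    ImmutableDictionary<string, object?>? PluginConfig = null  // new: plugin-specific JSON config
);
```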
#### B.4 Plugin Registry and Discovery
```csharp
public interface ISchedulerPluginRegistry
{
    void Register(ISchedulerJobPlugin plugin);   // rejects duplicate JobKinds
    ISchedulerJobPlugin? Resolve(string jobKind);
    IReadOnlyList<(string JobKind, string DisplayName)> ListRegistered();
}

// SchedulerPluginRegistry implements this over a
// Dictionary<string, ISchedulerJobPlugin> keyed by JobKind.
```
Plugins are discovered in two ways:
1. **Built-in**: The existing scan logic is refactored into `ScanJobPlugin : ISchedulerJobPlugin` with `JobKind = "scan"`. Registered in DI unconditionally.
2. **Assembly-loaded**: The existing `PluginHost.LoadPlugins()` pipeline scans `plugins/scheduler/` for DLLs. Any type implementing `ISchedulerJobPlugin` is instantiated and registered. This uses the existing `PluginHostOptions` infrastructure already wired in the Scheduler.
#### B.5 Execution Flow
```
Cron fires for Schedule (jobKind="doctor")
-> SchedulerPluginRegistry.Resolve("doctor") -> DoctorJobPlugin
-> DoctorJobPlugin.CreatePlanAsync(schedule, run) -> JobPlan
-> Run persisted with state=Queued, plan payload
-> Worker dequeues Run
-> DoctorJobPlugin.ExecuteAsync(context)
-> Calls Doctor WebService HTTP API (same as current ScheduleExecutor)
-> Reports progress via IRunProgressReporter
-> Stores trend data
-> Evaluates alerts
-> Run transitions to Completed/Error
```
#### B.6 Backward Compatibility
- `Schedule.JobKind` defaults to `"scan"` for all existing schedules (migration adds column with default).
- `Schedule.PluginConfig` defaults to null for existing schedules.
- `ScanJobPlugin` wraps the current execution logic with no behavioral change.
- The `ScheduleMode` enum remains but is only meaningful for `jobKind="scan"`. Other plugins ignore it (or set a sentinel value).
- All existing API contracts (`/api/v1/scheduler/schedules`, `/api/v1/scheduler/runs`) are extended, not broken.
### C. Doctor Plugin Design
#### C.1 DoctorJobPlugin
```csharp
public sealed class DoctorJobPlugin : ISchedulerJobPlugin
{
public string JobKind => "doctor";
public string DisplayName => "Doctor Health Checks";
// CreatePlanAsync: reads DoctorScheduleConfig from Schedule.PluginConfig,
// resolves which checks to run, returns JobPlan with check list.
// ExecuteAsync: HTTP POST to Doctor WebService /api/v1/doctor/run,
// polls for completion (same logic as current ScheduleExecutor),
// stores trend data via ITrendRepository,
// evaluates alerts via IAlertService.
// MapEndpoints: registers /api/v1/scheduler/doctor/trends/* endpoints
// to serve trend data (proxied from Scheduler's database).
}
```
#### C.2 Doctor-Specific Config Schema
```json
{
"doctorMode": "full|quick|categories|plugins",
"categories": ["security", "platform"],
"plugins": ["stellaops.doctor.agent"],
"timeoutSeconds": 300,
"alerts": {
"enabled": true,
"alertOnFail": true,
"alertOnWarn": false,
"alertOnStatusChange": true,
"channels": ["email"],
"emailRecipients": [],
"webhookUrls": [],
"minSeverity": "Fail"
}
}
```
This replaces `DoctorSchedule.Mode`, `Categories`, `Plugins`, and `Alerts` with structured data inside `Schedule.PluginConfig`.
#### C.3 What Stays vs. What Moves
| Component | Current Location | After Migration |
|---|---|---|
| Doctor WebService | `src/Doctor/StellaOps.Doctor.WebService/` | **Stays unchanged** -- remains the execution engine |
| Doctor Scheduler (standalone service) | `src/Doctor/StellaOps.Doctor.Scheduler/` | **Deprecated** -- replaced by DoctorJobPlugin in Scheduler |
| Doctor checks (20 plugins) | `src/Doctor/__Plugins/` | **Stay unchanged** -- loaded by Doctor WebService |
| Doctor schedule CRUD | Doctor Scheduler endpoints | **Moves** to Scheduler schedule CRUD (with jobKind="doctor") |
| Doctor trend storage | `InMemoryTrendRepository` | **Moves** to Scheduler persistence (new table `scheduler.doctor_trends`) |
| Doctor trend endpoints | `/api/v1/doctor/scheduler/trends/*` | **Moves** to DoctorJobPlugin.MapEndpoints at same paths (or proxied) |
| Doctor UI | `src/Web/.../doctor/` | **Minor change** -- trend API base URL may change, schedule API uses Scheduler |
#### C.4 Doctor UI Continuity
The Doctor UI (`doctor.client.ts`) currently calls:
1. `/doctor/api/v1/doctor/...` (runs, checks, plugins, reports) -- **no change needed**, Doctor WebService stays.
2. `/api/v1/doctor/scheduler/trends/categories/{category}` (trends) -- **routed to DoctorJobPlugin endpoints registered in Scheduler**, or the existing Doctor Scheduler service can be kept running temporarily as a compatibility shim.
Strategy: DoctorJobPlugin registers the same trend endpoints under the Scheduler service. The gateway route for `doctor-scheduler.stella-ops.local` is remapped to the Scheduler service. UI code requires zero changes.
### D. What This Architecture Enables (Future)
After this sprint, adding a new scheduled job type requires:
1. Implement `ISchedulerJobPlugin` (one class + supporting types).
2. Drop the DLL into `plugins/scheduler/`.
3. Create schedules with `jobKind="your-kind"` and `pluginConfig={...}`.
4. No Scheduler core changes needed.
Future plugin candidates: `policy-sweep`, `graph-build`, `feed-refresh`, `evidence-export`, `compliance-audit`.
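To make the claim concrete, a future plugin could be as small as this sketch (the `policy-sweep` behavior shown is purely illustrative; nothing about that job is specified yet):
```csharp
using System.Collections.Immutable;
using Microsoft.AspNetCore.Routing;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

// Illustrative sketch of a hypothetical plugin; not a spec for policy-sweep.
public sealed class PolicySweepJobPlugin : ISchedulerJobPlugin
{
    public string JobKind => "policy-sweep";
    public string DisplayName => "Policy Sweep";
    public Version Version => new(1, 0, 0);

    public Task<JobPlan> CreatePlanAsync(JobPlanContext context, CancellationToken ct) =>
        Task.FromResult(new JobPlan(JobKind, ImmutableDictionary<string, object?>.Empty));

    public async Task ExecuteAsync(JobExecutionContext context, CancellationToken ct)
    {
        await context.Reporter.ReportProgressAsync(0, 1, "sweeping", ct);
        // ... evaluate policies here ...
        await context.Reporter.ReportProgressAsync(1, 1, "done", ct);
    }

    public Task<JobConfigValidationResult> ValidateConfigAsync(
        IReadOnlyDictionary<string, object?> pluginConfig, CancellationToken ct) =>
        Task.FromResult(new JobConfigValidationResult(true, Array.Empty<string>()));

    public string? GetConfigJsonSchema() => null;
    public void ConfigureServices(IServiceCollection services, IConfiguration configuration) { }
    public void MapEndpoints(IEndpointRouteBuilder routes) { }
}
```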
---
## Delivery Tracker
### TASK-001 - Create StellaOps.Scheduler.Plugin.Abstractions library
Status: TODO
Dependency: none
Owners: Developer (Backend)
Task description:
- Create new class library `src/JobEngine/StellaOps.Scheduler.__Libraries/StellaOps.Scheduler.Plugin.Abstractions/`.
- Define `ISchedulerJobPlugin`, `JobPlanContext`, `JobPlan`, `JobExecutionContext`, `IRunProgressReporter`, `JobConfigValidationResult`.
- Target net10.0. No external dependencies beyond `StellaOps.Scheduler.Models`.
- Add to `StellaOps.JobEngine.sln`.
Completion criteria:
- [ ] Library compiles with zero warnings
- [ ] All types documented with XML comments
- [ ] Added to solution and referenced by Scheduler.WebService and Scheduler.Worker.Host csproj files
### TASK-002 - Create SchedulerPluginRegistry
Status: TODO
Dependency: TASK-001
Owners: Developer (Backend)
Task description:
- Create `ISchedulerPluginRegistry` and `SchedulerPluginRegistry` in the Scheduler.WebService project (or a shared library).
- Registry stores `Dictionary<string, ISchedulerJobPlugin>` keyed by `JobKind`.
- Provides `Register()`, `Resolve(string jobKind)`, `ListRegistered()`.
- Wire into DI as singleton in Program.cs.
- Integrate with existing `PluginHost.LoadPlugins()` to discover and register `ISchedulerJobPlugin` implementations from plugin assemblies.
Completion criteria:
- [ ] Registry resolves built-in plugins
- [ ] Registry discovers plugins from assembly-loaded DLLs
- [ ] Unit tests verify registration, resolution, and duplicate-kind rejection
### TASK-003 - Extend Schedule model with JobKind and PluginConfig
Status: TODO
Dependency: TASK-001
Owners: Developer (Backend)
Task description:
- Add `JobKind` (`string`, default `"scan"`) and `PluginConfig` (`ImmutableDictionary<string, object?>?`) to the `Schedule` record.
- Update `ScheduleCreateRequest` and `ScheduleUpdateRequest` contracts to accept these fields.
- Update `ScheduleEndpoints` create/update handlers to validate `PluginConfig` via the resolved plugin's `ValidateConfigAsync()`.
- Add SQL migration to add `job_kind` (varchar, default 'scan') and `plugin_config` (jsonb, nullable) columns to the schedules table.
- Update EF Core entity mapping and compiled model.
- Update `SystemScheduleBootstrap` to set `JobKind = "scan"` explicitly.
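The migration could be as simple as the following sketch (the `scheduler.schedules` schema/table names and column sizes must be checked against the Scheduler's actual migrations):
```sql
-- Sketch; names must match the Scheduler's existing migration set.
ALTER TABLE scheduler.schedules
    ADD COLUMN IF NOT EXISTS job_kind      varchar(64) NOT NULL DEFAULT 'scan',
    ADD COLUMN IF NOT EXISTS plugin_config jsonb;
```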
Completion criteria:
- [ ] Existing schedule tests pass (backward compatible)
- [ ] New schedules can be created with jobKind and pluginConfig
- [ ] SQL migration is embedded resource and auto-applies
- [ ] Serialization round-trips correctly for pluginConfig
### TASK-004 - Refactor existing scan logic into ScanJobPlugin
Status: TODO
Dependency: TASK-001, TASK-002
Owners: Developer (Backend)
Task description:
- Create `ScanJobPlugin : ISchedulerJobPlugin` with `JobKind = "scan"`.
- `CreatePlanAsync`: reuse existing run-planning logic (impact resolution, selector evaluation, queue dispatch).
- `ExecuteAsync`: reuse existing worker segment processing.
- `ValidateConfigAsync`: validate ScheduleMode is valid.
- `ConfigureServices`: no-op (scan services already registered).
- `MapEndpoints`: no-op (scan endpoints already registered).
- Register as built-in plugin in `SchedulerPluginRegistry` during DI setup.
- This is a refactoring task. No behavioral change allowed.
Completion criteria:
- [ ] Existing scan schedules work identically through the plugin path
- [ ] All existing Scheduler tests pass without modification
- [ ] ScanJobPlugin is the default plugin when jobKind is "scan" or null
### TASK-005 - Create StellaOps.Scheduler.Plugin.Doctor library
Status: TODO
Dependency: TASK-001, TASK-003
Owners: Developer (Backend)
Task description:
- Create new class library `src/JobEngine/StellaOps.Scheduler.plugins/StellaOps.Scheduler.Plugin.Doctor/`.
- Implement `DoctorJobPlugin : ISchedulerJobPlugin` with `JobKind = "doctor"`.
- Port `ScheduleExecutor` logic: HTTP POST to Doctor WebService, poll for completion, map results.
- Port `DoctorScheduleConfig` deserialization from `Schedule.PluginConfig`.
- Port `AlertConfiguration` evaluation and `IAlertService` integration.
- `ConfigureServices`: register `HttpClient` for Doctor API, `IAlertService`, `ITrendRepository`.
- Use Scheduler's persistence layer for trend storage (new table via embedded SQL migration).
Completion criteria:
- [ ] Plugin compiles and loads via PluginHost
- [ ] Plugin can create a plan from a doctor-type schedule
- [ ] Plugin executes a doctor run via HTTP against Doctor WebService
- [ ] Trend data is stored in Scheduler's Postgres schema
### TASK-006 - Add Doctor trend persistence to Scheduler schema
Status: TODO
Dependency: TASK-005
Owners: Developer (Backend)
Task description:
- Add SQL migration creating `scheduler.doctor_trends` table (timestamp, check_id, plugin_id, category, run_id, status, health_score, duration_ms, evidence_values jsonb).
- Add `scheduler.doctor_trend_summaries` materialized view or summary query.
- Implement `PostgresDoctorTrendRepository : ITrendRepository` using Scheduler's DB connection.
- Implement data retention pruning (configurable, default 365 days).
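A hedged sketch of the table, a supporting index, and the retention prune (column names follow the task description; the index and exact types are assumptions):
```sql
-- Sketch; types/indexes are assumptions to be validated against query patterns.
CREATE TABLE IF NOT EXISTS scheduler.doctor_trends (
    "timestamp"     timestamptz NOT NULL,
    check_id        text        NOT NULL,
    plugin_id       text        NOT NULL,
    category        text        NOT NULL,
    run_id          uuid        NOT NULL,
    status          text        NOT NULL,
    health_score    numeric     NULL,
    duration_ms     bigint      NULL,
    evidence_values jsonb       NULL
);
CREATE INDEX IF NOT EXISTS ix_doctor_trends_check_time
    ON scheduler.doctor_trends (check_id, "timestamp" DESC);

-- Retention pruning (default 365 days), run periodically:
DELETE FROM scheduler.doctor_trends
WHERE "timestamp" < now() - interval '365 days';
```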
Completion criteria:
- [ ] Migration auto-applies on Scheduler startup
- [ ] Trend data round-trips correctly
- [ ] Pruning removes old data beyond retention period
- [ ] Query performance acceptable for 365-day windows
### TASK-007 - Register Doctor trend and schedule endpoints in DoctorJobPlugin
Status: TODO
Dependency: TASK-005, TASK-006
Owners: Developer (Backend)
Task description:
- Implement `DoctorJobPlugin.MapEndpoints()` to register:
- `GET /api/v1/scheduler/doctor/trends` (mirrors existing `/api/v1/doctor/scheduler/trends`)
- `GET /api/v1/scheduler/doctor/trends/checks/{checkId}`
- `GET /api/v1/scheduler/doctor/trends/categories/{category}`
- `GET /api/v1/scheduler/doctor/trends/degrading`
- Ensure response shapes match current Doctor Scheduler endpoint contracts for UI compatibility.
- Add gateway route alias so requests to `/api/v1/doctor/scheduler/trends/*` are forwarded to Scheduler service.
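A sketch of the alias entry (the key names and the `scheduler.stella-ops.local` hostname depend on the gateway's actual route schema and service naming):
```json
{
  "pattern": "^/api/v1/doctor/scheduler/trends(.*)",
  "target": "http://scheduler.stella-ops.local/api/v1/scheduler/doctor/trends$1"
}
```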
Completion criteria:
- [ ] All trend endpoints return correct data shapes
- [ ] Existing Doctor UI trend sparklines work without code changes
- [ ] Gateway routing verified
### TASK-008 - Seed default Doctor schedules via SystemScheduleBootstrap
Status: TODO
Dependency: TASK-003, TASK-005
Owners: Developer (Backend)
Task description:
- Add Doctor system schedules to `SystemScheduleBootstrap.SystemSchedules`:
- `doctor-full-daily` ("Daily Health Check", `0 4 * * *`, jobKind="doctor", pluginConfig for Full mode)
- `doctor-quick-hourly` ("Hourly Quick Check", `0 * * * *`, jobKind="doctor", pluginConfig for Quick mode)
- `doctor-compliance-weekly` ("Weekly Compliance Audit", `0 5 * * 0`, jobKind="doctor", pluginConfig for Categories=["compliance"])
- These replace the in-memory seeds from Doctor Scheduler's `InMemoryScheduleRepository`.
Completion criteria:
- [ ] Doctor schedules are created on fresh DB
- [ ] Existing scan schedules unaffected
- [ ] Schedules appear in Scheduler API with correct jobKind and pluginConfig
### TASK-009 - Integration tests for Doctor plugin lifecycle
Status: TODO
Dependency: TASK-005, TASK-006, TASK-007, TASK-008
Owners: Developer (Backend), Test Automation
Task description:
- Add integration tests in `src/JobEngine/StellaOps.Scheduler.__Tests/`:
- Plugin discovery and registration test
- Doctor schedule create/update with pluginConfig validation
- Doctor plan creation from schedule
- Doctor execution mock (mock HTTP to Doctor WebService)
- Trend storage and query
- Alert evaluation
- Use deterministic fixtures and `TimeProvider.System` replacement for time control.
Completion criteria:
- [ ] All new tests pass
- [ ] No flaky tests (deterministic time, no network)
- [ ] Coverage includes happy path, validation errors, execution errors, cancellation
### TASK-010 - Update Doctor UI trend API base URL
Status: TODO
Dependency: TASK-007
Owners: Developer (Frontend)
Task description:
- If gateway routing alias is set up correctly (TASK-007), this may be a no-op.
- If API path changes, update `doctor.client.ts` `getTrends()` method to use new endpoint path.
- Verify trend sparklines render correctly.
Completion criteria:
- [ ] Doctor dashboard trend sparklines display data
- [ ] No console errors related to trend API calls
### TASK-011 - Deprecate Doctor Scheduler standalone service
Status: TODO
Dependency: TASK-009 (all tests pass)
Owners: Developer (Backend), Project Manager
Task description:
- Add deprecation notice to `src/Doctor/StellaOps.Doctor.Scheduler/README.md`.
- Remove Doctor Scheduler from `docker-compose.stella-ops.yml` (or disable by default).
- Remove Doctor Scheduler from `devops/compose/services-matrix.env` if present.
- Keep source code intact for one release cycle before deletion.
- Update `docs/modules/doctor/` to reflect that scheduling is now handled by the Scheduler service.
Completion criteria:
- [ ] Doctor Scheduler container no longer starts in default compose
- [ ] All Doctor scheduling functionality verified via Scheduler service
- [ ] Deprecation documented
### TASK-012 - Update architecture documentation
Status: TODO
Dependency: TASK-004, TASK-005
Owners: Documentation Author
Task description:
- Update `docs/modules/scheduler/architecture.md` with plugin architecture section.
- Add `ISchedulerJobPlugin` contract reference.
- Update `docs/modules/doctor/` to document scheduler integration.
- Update `docs/07_HIGH_LEVEL_ARCHITECTURE.md` if Scheduler's role description needs updating.
- Create or update `src/JobEngine/StellaOps.Scheduler.plugins/AGENTS.md` with plugin development guide.
Completion criteria:
- [ ] Architecture docs reflect plugin system
- [ ] Doctor scheduling migration documented
- [ ] Plugin development guide exists for future plugin authors
## Execution Log
| Date (UTC) | Update | Owner |
| --- | --- | --- |
| 2026-04-08 | Sprint created with full architectural design after codebase analysis. 12 tasks defined across 3 batches. | Planning |
## Decisions & Risks
### Decisions
1. **Plugin interface vs. message-based dispatch**: Chose an in-process `ISchedulerJobPlugin` interface over a message queue dispatch model. Rationale: the Scheduler already has assembly-loading infrastructure (`PluginHost`), and in-process execution avoids adding another IPC layer. Plugins that need to call remote services (like Doctor) do so via HttpClient, which is already the pattern.
2. **Schedule model extension vs. separate table**: Chose to extend the existing `Schedule` record with `JobKind` + `PluginConfig` rather than creating a separate `PluginSchedule` table. Rationale: keeps the CRUD API unified, avoids join complexity, and the JSON pluginConfig column provides flexibility without schema changes per plugin.
3. **Doctor WebService stays**: Doctor WebService remains a standalone service. The plugin only replaces the scheduling/triggering layer (Doctor Scheduler). This preserves the existing Doctor engine, plugin loading, check execution, and report storage. The plugin communicates with Doctor WebService via HTTP, same as today.
4. **Trend data in Scheduler schema**: Doctor trend data moves to the Scheduler's Postgres schema rather than to Doctor, which has no Postgres store of its own. Rationale: Scheduler already has persistent storage; Doctor Scheduler was in-memory only. This gives trends durability without adding a new database dependency to Doctor.
5. **ScanJobPlugin as refactoring, not rewrite**: The existing scan logic is wrapped in `ScanJobPlugin` by extracting and delegating, not by rewriting. This minimizes regression risk.
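Decisions 1 and 2 can be sketched together as follows. Only `ISchedulerJobPlugin`, `JobKind`, and `PluginConfig` are named by the tasks above; every other type, member, and signature here is an illustrative assumption, not the final contract.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

// Minimal supporting types so the sketch compiles standalone (assumed shapes).
public sealed record PluginConfigValidation(bool IsValid, string? Error);
public sealed record ScheduleRunContext(Guid RunId, string TenantId, string? PluginConfigJson);

// Decision 1: in-process plugin contract dispatched by the Scheduler.
public interface ISchedulerJobPlugin
{
    // Matched against Schedule.JobKind to route a run to this plugin.
    string JobKind { get; }

    // Validates (and, per Risk 1, migrates) the JSON pluginConfig blob.
    Task<PluginConfigValidation> ValidateConfigAsync(string? pluginConfigJson, CancellationToken ct);

    // Executes one scheduled run; remote calls (e.g. to Doctor WebService)
    // go through HttpClient, per Decision 1.
    Task ExecuteAsync(ScheduleRunContext context, CancellationToken ct);
}

// Decision 2: the existing Schedule record extended in place,
// rather than introducing a separate PluginSchedule table.
public sealed record Schedule(
    Guid ScheduleId,
    string TenantId,
    string CronExpression,
    string JobKind,         // e.g. "scan" (existing behavior) or "doctor"
    string? PluginConfig);  // plugin-specific JSON blob, versioned by the plugin
```

The JSON `PluginConfig` column keeps the CRUD API unified: new plugins need no schema migration beyond their own config versioning.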
### Risks
1. **Schedule.PluginConfig schema evolution**: As plugin configs evolve, backward compatibility of the JSON blob must be maintained. Mitigation: plugins should version their config schema and handle migration in `ValidateConfigAsync`.
2. **Doctor WebService availability during scheduled runs**: If Doctor WebService is down, the DoctorJobPlugin's execution will fail. Mitigation: implement retry with backoff in the plugin, and use the Run state machine to record the Error state with meaningful messages.
3. **Gateway routing for trend endpoints**: The UI currently hits Doctor Scheduler directly. After migration, requests must be routed to the Scheduler service. Mitigation: TASK-007 explicitly addresses gateway configuration, and TASK-010 handles UI fallback.
4. **Compiled model regeneration**: Adding columns to Schedule requires regenerating EF Core compiled models. This is mechanical but must not be forgotten.
5. **Plugin isolation**: In-process plugins share the Scheduler's AppDomain. A misbehaving plugin (memory leak, thread starvation) affects all jobs. Mitigation: use `SemaphoreSlim` for concurrency limits (same pattern as current Doctor Scheduler), add plugin execution timeouts.
## Next Checkpoints
- **Batch 1 complete** (TASK-001 through TASK-004): Plugin abstractions + registry + scan refactoring. Demo: existing scan schedules work through plugin dispatch. Estimated: 3-4 days.
- **Batch 2 complete** (TASK-005 through TASK-009): Doctor plugin + trend storage + tests. Demo: doctor health checks triggered by Scheduler, trends visible. Estimated: 4-5 days.
- **Batch 3 complete** (TASK-010 through TASK-012): UI fix-up, deprecation, docs. Demo: full end-to-end. Estimated: 2 days.
- **Total estimated effort**: 9-11 working days for one backend developer, plus 1 day of frontend work.

View File

@@ -32,13 +32,13 @@ This page focuses on deterministic slot/port allocation and may include legacy o
| 14 | 10140 | 10141 | Policy Engine | `policy-engine.stella-ops.local` | `src/Policy/StellaOps.Policy.Engine` | `STELLAOPS_POLICY_ENGINE_URL` |
| 15 | 10150 | 10151 | ~~Policy Gateway~~ (merged into Policy Engine, Slot 14) | `policy-gateway.stella-ops.local` -> `policy-engine.stella-ops.local` | _removed_ | _removed_ |
| 16 | 10160 | 10161 | RiskEngine | `riskengine.stella-ops.local` | `src/Findings/StellaOps.RiskEngine.WebService` | `STELLAOPS_RISKENGINE_URL` |
| 17 | 10170 | 10171 | Orchestrator | `jobengine.stella-ops.local` | `src/JobEngine/StellaOps.JobEngine/StellaOps.JobEngine.WebService` | `STELLAOPS_JOBENGINE_URL` |
| 17 | 10170 | 10171 | ~~Orchestrator~~ (retired; audit/first-signal moved to Release Orchestrator, Slot 48) | `jobengine.stella-ops.local` | _removed_ | _removed_ |
| 18 | 10180 | 10181 | TaskRunner | `taskrunner.stella-ops.local` | `src/JobEngine/StellaOps.TaskRunner/StellaOps.TaskRunner.WebService` | `STELLAOPS_TASKRUNNER_URL` |
| 19 | 10190 | 10191 | Scheduler | `scheduler.stella-ops.local` | `src/JobEngine/StellaOps.Scheduler.WebService` | `STELLAOPS_SCHEDULER_URL` |
| 20 | 10200 | 10201 | Graph API | `graph.stella-ops.local` | `src/Graph/StellaOps.Graph.Api` | `STELLAOPS_GRAPH_URL` |
| 21 | 10210 | 10211 | Cartographer | `cartographer.stella-ops.local` | `src/Scanner/StellaOps.Scanner.Cartographer` | `STELLAOPS_CARTOGRAPHER_URL` |
| 22 | 10220 | 10221 | ReachGraph | `reachgraph.stella-ops.local` | `src/ReachGraph/StellaOps.ReachGraph.WebService` | `STELLAOPS_REACHGRAPH_URL` |
| 23 | 10230 | 10231 | Timeline Indexer | `timelineindexer.stella-ops.local` | `src/Timeline/StellaOps.TimelineIndexer.WebService` | `STELLAOPS_TIMELINEINDEXER_URL` |
| 23 | 10230 | 10231 | _(Timeline Indexer merged into Timeline)_ | `timelineindexer.stella-ops.local` (alias) | _(see Timeline)_ | `STELLAOPS_TIMELINEINDEXER_URL` |
| 24 | 10240 | 10241 | Timeline | `timeline.stella-ops.local` | `src/Timeline/StellaOps.Timeline.WebService` | `STELLAOPS_TIMELINE_URL` |
| 25 | 10250 | 10251 | Findings Ledger | `findings.stella-ops.local` | `src/Findings/StellaOps.Findings.Ledger.WebService` | `STELLAOPS_FINDINGS_LEDGER_URL` |
| 26 | 10260 | 10261 | Doctor | `doctor.stella-ops.local` | `src/Doctor/StellaOps.Doctor.WebService` | `STELLAOPS_DOCTOR_URL` |

View File

@@ -815,7 +815,7 @@ public static class ExceptionCommandGroup
if (client.BaseAddress is null)
{
var gatewayUrl = options.PolicyGateway?.BaseUrl
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_GATEWAY_URL")
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_ENGINE_URL")
?? "http://localhost:5080";
client.BaseAddress = new Uri(gatewayUrl);
}

View File

@@ -277,7 +277,7 @@ public static class GateCommandGroup
if (client.BaseAddress is null)
{
var gatewayUrl = options.PolicyGateway?.BaseUrl
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_GATEWAY_URL")
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_ENGINE_URL")
?? "http://localhost:5080";
client.BaseAddress = new Uri(gatewayUrl);
}

View File

@@ -1211,7 +1211,7 @@ public static class ScoreGateCommandGroup
if (client.BaseAddress is null)
{
var gatewayUrl = options.PolicyGateway?.BaseUrl
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_GATEWAY_URL")
?? Environment.GetEnvironmentVariable("STELLAOPS_POLICY_ENGINE_URL")
?? "http://localhost:5080";
client.BaseAddress = new Uri(gatewayUrl);
}

View File

@@ -19,10 +19,6 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "StellaOps.JobEngine.Tests",
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "StellaOps.JobEngine.WebService", "StellaOps.JobEngine.WebService", "{7B5EBFF9-DCD8-4C3E-52B7-33A01F59BD96}"
EndProject
Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "StellaOps.JobEngine.Worker", "StellaOps.JobEngine.Worker", "{EEE65590-0DA5-BAFD-3BFC-6492600454B6}"
EndProject
@@ -179,10 +175,6 @@ Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.JobEngine.Tests",
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.JobEngine.WebService", "StellaOps.JobEngine\StellaOps.JobEngine.WebService\StellaOps.JobEngine.WebService.csproj", "{B1C35286-4A4E-5677-A09F-4AD04ABB15D3}"
EndProject
Project("{FAE04EC0-301F-11D3-BF4B-00C04F79EFBC}") = "StellaOps.JobEngine.Worker", "StellaOps.JobEngine\StellaOps.JobEngine.Worker\StellaOps.JobEngine.Worker.csproj", "{D49617DE-10E1-78EF-0AE3-0E0EB1BCA01A}"
EndProject
@@ -331,14 +323,6 @@ Global
{E1413BFB-C320-E54C-14B3-4600AC5A5A70}.Release|Any CPU.Build.0 = Release|Any CPU
{B1C35286-4A4E-5677-A09F-4AD04ABB15D3}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{B1C35286-4A4E-5677-A09F-4AD04ABB15D3}.Debug|Any CPU.Build.0 = Debug|Any CPU
{B1C35286-4A4E-5677-A09F-4AD04ABB15D3}.Release|Any CPU.ActiveCfg = Release|Any CPU
{B1C35286-4A4E-5677-A09F-4AD04ABB15D3}.Release|Any CPU.Build.0 = Release|Any CPU
{D49617DE-10E1-78EF-0AE3-0E0EB1BCA01A}.Debug|Any CPU.ActiveCfg = Debug|Any CPU
{D49617DE-10E1-78EF-0AE3-0E0EB1BCA01A}.Debug|Any CPU.Build.0 = Debug|Any CPU
@@ -403,8 +387,6 @@ Global
{43BD7CCE-81F1-671A-02CF-7BDE295E6D15} = {0BD8BADA-1E00-7228-CA2D-F67E2A51EDC0}
{7B5EBFF9-DCD8-4C3E-52B7-33A01F59BD96} = {0BD8BADA-1E00-7228-CA2D-F67E2A51EDC0}
{EEE65590-0DA5-BAFD-3BFC-6492600454B6} = {0BD8BADA-1E00-7228-CA2D-F67E2A51EDC0}
{F310596E-88BB-9E54-885E-21C61971917E} = {5B52EF8A-3661-DCFF-797D-BC4D6AC60BDA}
@@ -481,8 +463,6 @@ Global
{E1413BFB-C320-E54C-14B3-4600AC5A5A70} = {43BD7CCE-81F1-671A-02CF-7BDE295E6D15}
{B1C35286-4A4E-5677-A09F-4AD04ABB15D3} = {7B5EBFF9-DCD8-4C3E-52B7-33A01F59BD96}
{D49617DE-10E1-78EF-0AE3-0E0EB1BCA01A} = {EEE65590-0DA5-BAFD-3BFC-6492600454B6}
{38A9EE9B-6FC8-93BC-0D43-2A906E678D66} = {772B02B5-6280-E1D4-3E2E-248D0455C2FB}

View File

@@ -116,7 +116,7 @@ CREATE TABLE IF NOT EXISTS jobs (
correlation_id TEXT,
lease_id UUID,
worker_id TEXT,
task_runner_id TEXT,
task_runner_id TEXT, -- nullable: TaskRunner is optional (release-orchestrator doesn't use it)
lease_until TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
scheduled_at TIMESTAMPTZ,

View File

@@ -32,7 +32,7 @@ CREATE TABLE IF NOT EXISTS pack_runs (
idempotency_key TEXT NOT NULL,
correlation_id TEXT,
lease_id UUID,
task_runner_id TEXT,
task_runner_id TEXT, -- nullable: TaskRunner is optional (release-orchestrator doesn't use it)
lease_until TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT NOW(),
scheduled_at TIMESTAMPTZ,

View File

@@ -1,13 +0,0 @@
# StellaOps.JobEngine.WebService Agent Charter
## Mission
Provide JobEngine control-plane APIs, streaming endpoints, and hosted service wiring.
## Required Reading
- docs/modules/jobengine/architecture.md
- docs/modules/platform/architecture-overview.md
## Working Agreement
- Update sprint status in docs/implplan/SPRINT_*.md and local TASKS.md.
- Preserve deterministic ordering and tenant scoping on all endpoints.
- Add or update endpoint and auth tests for API changes.

View File

@@ -1,338 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
namespace StellaOps.JobEngine.WebService.Contracts;
// ===== Audit Contracts =====
/// <summary>
/// Response for an audit entry.
/// </summary>
public sealed record AuditEntryResponse(
Guid EntryId,
string TenantId,
string EventType,
string ResourceType,
Guid ResourceId,
string ActorId,
string ActorType,
string? ActorIp,
string? UserAgent,
string? HttpMethod,
string? RequestPath,
string? OldState,
string? NewState,
string Description,
string? CorrelationId,
string? PreviousEntryHash,
string ContentHash,
long SequenceNumber,
DateTimeOffset OccurredAt,
string? Metadata)
{
public static AuditEntryResponse FromDomain(AuditEntry entry) => new(
EntryId: entry.EntryId,
TenantId: entry.TenantId,
EventType: entry.EventType.ToString(),
ResourceType: entry.ResourceType,
ResourceId: entry.ResourceId,
ActorId: entry.ActorId,
ActorType: entry.ActorType.ToString(),
ActorIp: entry.ActorIp,
UserAgent: entry.UserAgent,
HttpMethod: entry.HttpMethod,
RequestPath: entry.RequestPath,
OldState: entry.OldState,
NewState: entry.NewState,
Description: entry.Description,
CorrelationId: entry.CorrelationId,
PreviousEntryHash: entry.PreviousEntryHash,
ContentHash: entry.ContentHash,
SequenceNumber: entry.SequenceNumber,
OccurredAt: entry.OccurredAt,
Metadata: entry.Metadata);
}
/// <summary>
/// List response for audit entries.
/// </summary>
public sealed record AuditEntryListResponse(
IReadOnlyList<AuditEntryResponse> Entries,
string? NextCursor);
/// <summary>
/// Response for audit summary.
/// </summary>
public sealed record AuditSummaryResponse(
long TotalEntries,
long EntriesSince,
long EventTypes,
long UniqueActors,
long UniqueResources,
DateTimeOffset? EarliestEntry,
DateTimeOffset? LatestEntry)
{
public static AuditSummaryResponse FromDomain(AuditSummary summary) => new(
TotalEntries: summary.TotalEntries,
EntriesSince: summary.EntriesSince,
EventTypes: summary.EventTypes,
UniqueActors: summary.UniqueActors,
UniqueResources: summary.UniqueResources,
EarliestEntry: summary.EarliestEntry,
LatestEntry: summary.LatestEntry);
}
/// <summary>
/// Response for chain verification.
/// </summary>
public sealed record ChainVerificationResponse(
bool IsValid,
Guid? InvalidEntryId,
long? InvalidSequence,
string? ErrorMessage)
{
public static ChainVerificationResponse FromDomain(ChainVerificationResult result) => new(
IsValid: result.IsValid,
InvalidEntryId: result.InvalidEntryId,
InvalidSequence: result.InvalidSequence,
ErrorMessage: result.ErrorMessage);
}
// ===== Ledger Contracts =====
/// <summary>
/// Response for a ledger entry.
/// </summary>
public sealed record LedgerEntryResponse(
Guid LedgerId,
string TenantId,
Guid RunId,
Guid SourceId,
string RunType,
string FinalStatus,
int TotalJobs,
int SucceededJobs,
int FailedJobs,
DateTimeOffset RunCreatedAt,
DateTimeOffset? RunStartedAt,
DateTimeOffset RunCompletedAt,
long ExecutionDurationMs,
string InitiatedBy,
string InputDigest,
string OutputDigest,
long SequenceNumber,
string? PreviousEntryHash,
string ContentHash,
DateTimeOffset LedgerCreatedAt,
string? CorrelationId)
{
public static LedgerEntryResponse FromDomain(RunLedgerEntry entry) => new(
LedgerId: entry.LedgerId,
TenantId: entry.TenantId,
RunId: entry.RunId,
SourceId: entry.SourceId,
RunType: entry.RunType,
FinalStatus: entry.FinalStatus.ToString(),
TotalJobs: entry.TotalJobs,
SucceededJobs: entry.SucceededJobs,
FailedJobs: entry.FailedJobs,
RunCreatedAt: entry.RunCreatedAt,
RunStartedAt: entry.RunStartedAt,
RunCompletedAt: entry.RunCompletedAt,
ExecutionDurationMs: (long)entry.ExecutionDuration.TotalMilliseconds,
InitiatedBy: entry.InitiatedBy,
InputDigest: entry.InputDigest,
OutputDigest: entry.OutputDigest,
SequenceNumber: entry.SequenceNumber,
PreviousEntryHash: entry.PreviousEntryHash,
ContentHash: entry.ContentHash,
LedgerCreatedAt: entry.LedgerCreatedAt,
CorrelationId: entry.CorrelationId);
}
/// <summary>
/// List response for ledger entries.
/// </summary>
public sealed record LedgerEntryListResponse(
IReadOnlyList<LedgerEntryResponse> Entries,
string? NextCursor);
/// <summary>
/// Response for ledger summary.
/// </summary>
public sealed record LedgerSummaryResponse(
long TotalEntries,
long EntriesSince,
long TotalRuns,
long SuccessfulRuns,
long FailedRuns,
long TotalJobs,
long UniqueSources,
long UniqueRunTypes,
DateTimeOffset? EarliestEntry,
DateTimeOffset? LatestEntry)
{
public static LedgerSummaryResponse FromDomain(LedgerSummary summary) => new(
TotalEntries: summary.TotalEntries,
EntriesSince: summary.EntriesSince,
TotalRuns: summary.TotalRuns,
SuccessfulRuns: summary.SuccessfulRuns,
FailedRuns: summary.FailedRuns,
TotalJobs: summary.TotalJobs,
UniqueSources: summary.UniqueSources,
UniqueRunTypes: summary.UniqueRunTypes,
EarliestEntry: summary.EarliestEntry,
LatestEntry: summary.LatestEntry);
}
// ===== Export Contracts =====
/// <summary>
/// Request to create a ledger export.
/// </summary>
public sealed record CreateLedgerExportRequest(
string Format,
DateTimeOffset? StartTime,
DateTimeOffset? EndTime,
string? RunTypeFilter,
Guid? SourceIdFilter);
/// <summary>
/// Response for a ledger export.
/// </summary>
public sealed record LedgerExportResponse(
Guid ExportId,
string TenantId,
string Status,
string Format,
DateTimeOffset? StartTime,
DateTimeOffset? EndTime,
string? RunTypeFilter,
Guid? SourceIdFilter,
int EntryCount,
string? OutputUri,
string? OutputDigest,
long? OutputSizeBytes,
string RequestedBy,
DateTimeOffset RequestedAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
string? ErrorMessage)
{
public static LedgerExportResponse FromDomain(LedgerExport export) => new(
ExportId: export.ExportId,
TenantId: export.TenantId,
Status: export.Status.ToString(),
Format: export.Format,
StartTime: export.StartTime,
EndTime: export.EndTime,
RunTypeFilter: export.RunTypeFilter,
SourceIdFilter: export.SourceIdFilter,
EntryCount: export.EntryCount,
OutputUri: export.OutputUri,
OutputDigest: export.OutputDigest,
OutputSizeBytes: export.OutputSizeBytes,
RequestedBy: export.RequestedBy,
RequestedAt: export.RequestedAt,
StartedAt: export.StartedAt,
CompletedAt: export.CompletedAt,
ErrorMessage: export.ErrorMessage);
}
/// <summary>
/// List response for ledger exports.
/// </summary>
public sealed record LedgerExportListResponse(
IReadOnlyList<LedgerExportResponse> Exports,
string? NextCursor);
// ===== Manifest Contracts =====
/// <summary>
/// Response for a signed manifest.
/// </summary>
public sealed record ManifestResponse(
Guid ManifestId,
string SchemaVersion,
string TenantId,
string ProvenanceType,
Guid SubjectId,
string PayloadDigest,
string SignatureAlgorithm,
bool IsSigned,
bool IsExpired,
string KeyId,
DateTimeOffset CreatedAt,
DateTimeOffset? ExpiresAt)
{
public static ManifestResponse FromDomain(SignedManifest manifest) => new(
ManifestId: manifest.ManifestId,
SchemaVersion: manifest.SchemaVersion,
TenantId: manifest.TenantId,
ProvenanceType: manifest.ProvenanceType.ToString(),
SubjectId: manifest.SubjectId,
PayloadDigest: manifest.PayloadDigest,
SignatureAlgorithm: manifest.SignatureAlgorithm,
IsSigned: manifest.IsSigned,
IsExpired: manifest.IsExpired,
KeyId: manifest.KeyId,
CreatedAt: manifest.CreatedAt,
ExpiresAt: manifest.ExpiresAt);
}
/// <summary>
/// Response with full manifest details including statements and artifacts.
/// </summary>
public sealed record ManifestDetailResponse(
Guid ManifestId,
string SchemaVersion,
string TenantId,
string ProvenanceType,
Guid SubjectId,
string Statements,
string Artifacts,
string Materials,
string? BuildInfo,
string PayloadDigest,
string SignatureAlgorithm,
string Signature,
string KeyId,
DateTimeOffset CreatedAt,
DateTimeOffset? ExpiresAt,
string? Metadata)
{
public static ManifestDetailResponse FromDomain(SignedManifest manifest) => new(
ManifestId: manifest.ManifestId,
SchemaVersion: manifest.SchemaVersion,
TenantId: manifest.TenantId,
ProvenanceType: manifest.ProvenanceType.ToString(),
SubjectId: manifest.SubjectId,
Statements: manifest.Statements,
Artifacts: manifest.Artifacts,
Materials: manifest.Materials,
BuildInfo: manifest.BuildInfo,
PayloadDigest: manifest.PayloadDigest,
SignatureAlgorithm: manifest.SignatureAlgorithm,
Signature: manifest.Signature,
KeyId: manifest.KeyId,
CreatedAt: manifest.CreatedAt,
ExpiresAt: manifest.ExpiresAt,
Metadata: manifest.Metadata);
}
/// <summary>
/// List response for manifests.
/// </summary>
public sealed record ManifestListResponse(
IReadOnlyList<ManifestResponse> Manifests,
string? NextCursor);
/// <summary>
/// Response for manifest verification.
/// </summary>
public sealed record ManifestVerificationResponse(
Guid ManifestId,
bool PayloadIntegrityValid,
bool IsExpired,
bool IsSigned,
string? ValidationError);

View File

@@ -1,90 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
// ============================================================================
// Circuit Breaker Contracts
// ============================================================================
/// <summary>
/// Response for a circuit breaker.
/// </summary>
public sealed record CircuitBreakerResponse(
Guid CircuitBreakerId,
string TenantId,
string ServiceId,
string State,
int FailureCount,
int SuccessCount,
DateTimeOffset WindowStart,
double FailureThreshold,
TimeSpan WindowDuration,
int MinimumSamples,
DateTimeOffset? OpenedAt,
TimeSpan OpenDuration,
int HalfOpenTestCount,
int HalfOpenCurrentCount,
int HalfOpenSuccessCount,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string UpdatedBy)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static CircuitBreakerResponse FromDomain(CircuitBreaker cb) =>
new(
CircuitBreakerId: cb.CircuitBreakerId,
TenantId: cb.TenantId,
ServiceId: cb.ServiceId,
State: cb.State.ToString(),
FailureCount: cb.FailureCount,
SuccessCount: cb.SuccessCount,
WindowStart: cb.WindowStart,
FailureThreshold: cb.FailureThreshold,
WindowDuration: cb.WindowDuration,
MinimumSamples: cb.MinimumSamples,
OpenedAt: cb.OpenedAt,
OpenDuration: cb.OpenDuration,
HalfOpenTestCount: cb.HalfOpenTestCount,
HalfOpenCurrentCount: cb.HalfOpenCurrentCount,
HalfOpenSuccessCount: cb.HalfOpenSuccessCount,
CreatedAt: cb.CreatedAt,
UpdatedAt: cb.UpdatedAt,
UpdatedBy: cb.UpdatedBy);
}
/// <summary>
/// Response for a circuit breaker check.
/// </summary>
public sealed record CircuitBreakerCheckResponse(
bool IsAllowed,
string State,
double FailureRate,
TimeSpan? TimeUntilRetry,
string? BlockReason);
/// <summary>
/// Response for a circuit breaker list.
/// </summary>
public sealed record CircuitBreakerListResponse(
IReadOnlyList<CircuitBreakerResponse> Items,
string? NextCursor);
/// <summary>
/// Request to force open a circuit breaker.
/// </summary>
public sealed record ForceOpenCircuitBreakerRequest(
string Reason);
/// <summary>
/// Request to force close a circuit breaker.
/// </summary>
public sealed record ForceCloseCircuitBreakerRequest(
string? Reason);
/// <summary>
/// Request to record a failure.
/// </summary>
public sealed record RecordFailureRequest(
string? FailureReason);

View File

@@ -1,46 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Response representing a DAG edge (job dependency).
/// </summary>
public sealed record DagEdgeResponse(
Guid EdgeId,
Guid RunId,
Guid ParentJobId,
Guid ChildJobId,
string EdgeType,
DateTimeOffset CreatedAt)
{
public static DagEdgeResponse FromDomain(DagEdge edge) => new(
edge.EdgeId,
edge.RunId,
edge.ParentJobId,
edge.ChildJobId,
edge.EdgeType,
edge.CreatedAt);
}
/// <summary>
/// Response containing the DAG structure for a run.
/// </summary>
public sealed record DagResponse(
Guid RunId,
IReadOnlyList<DagEdgeResponse> Edges,
IReadOnlyList<Guid> TopologicalOrder,
IReadOnlyList<Guid> CriticalPath,
TimeSpan? EstimatedDuration);
/// <summary>
/// Response containing a list of edges.
/// </summary>
public sealed record DagEdgeListResponse(
IReadOnlyList<DagEdgeResponse> Edges);
/// <summary>
/// Response for blocked jobs (transitively affected by a failure).
/// </summary>
public sealed record BlockedJobsResponse(
Guid FailedJobId,
IReadOnlyList<Guid> BlockedJobIds);

View File

@@ -1,45 +0,0 @@
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// API response for first signal endpoint.
/// </summary>
public sealed record FirstSignalResponse
{
public required Guid RunId { get; init; }
public required FirstSignalDto? FirstSignal { get; init; }
public required string SummaryEtag { get; init; }
}
public sealed record FirstSignalDto
{
public required string Type { get; init; }
public string? Stage { get; init; }
public string? Step { get; init; }
public required string Message { get; init; }
public required DateTimeOffset At { get; init; }
public FirstSignalArtifactDto? Artifact { get; init; }
public FirstSignalLastKnownOutcomeDto? LastKnownOutcome { get; init; }
}
public sealed record FirstSignalArtifactDto
{
public required string Kind { get; init; }
public FirstSignalRangeDto? Range { get; init; }
}
public sealed record FirstSignalLastKnownOutcomeDto
{
public required string SignatureId { get; init; }
public string? ErrorCode { get; init; }
public required string Token { get; init; }
public string? Excerpt { get; init; }
public required string Confidence { get; init; }
public required DateTimeOffset FirstSeenAt { get; init; }
public required int HitCount { get; init; }
}
public sealed record FirstSignalRangeDto
{
public required int Start { get; init; }
public required int End { get; init; }
}

View File

@@ -1,129 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Response representing a job.
/// </summary>
public sealed record JobResponse(
string TenantId,
string? ProjectId,
Guid JobId,
Guid? RunId,
string JobType,
string Status,
int Priority,
int Attempt,
int MaxAttempts,
string? CorrelationId,
string? WorkerId,
string? TaskRunnerId,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? LeasedAt,
DateTimeOffset? CompletedAt,
DateTimeOffset? NotBefore,
string? Reason,
Guid? ReplayOf,
string CreatedBy)
{
public static JobResponse FromDomain(Job job) => new(
job.TenantId,
job.ProjectId,
job.JobId,
job.RunId,
job.JobType,
job.Status.ToString().ToLowerInvariant(),
job.Priority,
job.Attempt,
job.MaxAttempts,
job.CorrelationId,
job.WorkerId,
job.TaskRunnerId,
job.CreatedAt,
job.ScheduledAt,
job.LeasedAt,
job.CompletedAt,
job.NotBefore,
job.Reason,
job.ReplayOf,
job.CreatedBy);
}
/// <summary>
/// Response representing a job with its full payload.
/// </summary>
public sealed record JobDetailResponse(
string TenantId,
string? ProjectId,
Guid JobId,
Guid? RunId,
string JobType,
string Status,
int Priority,
int Attempt,
int MaxAttempts,
string PayloadDigest,
string Payload,
string IdempotencyKey,
string? CorrelationId,
Guid? LeaseId,
string? WorkerId,
string? TaskRunnerId,
DateTimeOffset? LeaseUntil,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? LeasedAt,
DateTimeOffset? CompletedAt,
DateTimeOffset? NotBefore,
string? Reason,
Guid? ReplayOf,
string CreatedBy)
{
public static JobDetailResponse FromDomain(Job job) => new(
job.TenantId,
job.ProjectId,
job.JobId,
job.RunId,
job.JobType,
job.Status.ToString().ToLowerInvariant(),
job.Priority,
job.Attempt,
job.MaxAttempts,
job.PayloadDigest,
job.Payload,
job.IdempotencyKey,
job.CorrelationId,
job.LeaseId,
job.WorkerId,
job.TaskRunnerId,
job.LeaseUntil,
job.CreatedAt,
job.ScheduledAt,
job.LeasedAt,
job.CompletedAt,
job.NotBefore,
job.Reason,
job.ReplayOf,
job.CreatedBy);
}
/// <summary>
/// Response containing a list of jobs.
/// </summary>
public sealed record JobListResponse(
IReadOnlyList<JobResponse> Jobs,
string? NextCursor);
/// <summary>
/// Summary statistics for jobs.
/// </summary>
public sealed record JobSummary(
int TotalJobs,
int PendingJobs,
int ScheduledJobs,
int LeasedJobs,
int SucceededJobs,
int FailedJobs,
int CanceledJobs,
int TimedOutJobs);

View File

@@ -1,760 +0,0 @@
using System.Reflection;
using System.Text.Json;
using System.Text.Json.Serialization;
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Factory for per-service OpenAPI discovery and specification documents.
/// </summary>
public static class OpenApiDocuments
{
public static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web)
{
WriteIndented = true
};
/// <summary>
/// Return the service build/version string based on the executing assembly.
/// </summary>
public static string GetServiceVersion()
=> Assembly.GetExecutingAssembly().GetName().Version?.ToString() ?? "0.0.0";
public static OpenApiDiscoveryDocument CreateDiscoveryDocument(string version)
{
return new OpenApiDiscoveryDocument(
Service: "jobengine",
SpecVersion: "3.1.0",
Version: version,
Format: "application/json",
Url: "/openapi/jobengine.json",
ErrorEnvelopeSchema: "#/components/schemas/Error",
Notifications: new Dictionary<string, string>
{
["topic"] = "orchestrator.contracts",
["event"] = "orchestrator.openapi.updated"
});
}
public static OpenApiSpecDocument CreateSpecification(string version)
{
var exampleJob = ExampleJob();
var exampleJobDetail = ExampleJobDetail();
var exampleClaimRequest = new
{
workerId = "worker-7f9",
jobType = "sbom.build",
idempotencyKey = "claim-12af",
leaseSeconds = 300,
taskRunnerId = "runner-01"
};
var exampleClaimResponse = new
{
jobId = Guid.Parse("11111111-2222-3333-4444-555555555555"),
leaseId = Guid.Parse("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
leaseUntil = "2025-11-30T12:05:00Z",
job = exampleJobDetail
};
var examplePackRunRequest = new
{
packId = "pack.advisory.sbom",
packVersion = "1.2.3",
parameters = @"{""image"":""registry.example/app:1.0.0""}",
projectId = "proj-17",
idempotencyKey = "packrun-123",
priority = 5,
maxAttempts = 3
};
var examplePackRunResponse = new
{
packRunId = Guid.Parse("99999999-0000-1111-2222-333333333333"),
packId = "pack.advisory.sbom",
packVersion = "1.2.3",
status = "scheduled",
idempotencyKey = "packrun-123",
createdAt = "2025-11-30T12:00:00Z",
wasAlreadyScheduled = false
};
var exampleRetryRequest = new
{
parameters = @"{""image"":""registry.example/app:1.0.1""}",
idempotencyKey = "retry-123"
};
var exampleRetryResponse = new
{
originalPackRunId = Guid.Parse("99999999-0000-1111-2222-333333333333"),
newPackRunId = Guid.Parse("aaaaaaaa-0000-1111-2222-bbbbbbbbbbbb"),
status = "scheduled",
createdAt = "2025-11-30T12:10:00Z"
};
var paths = new Dictionary<string, object>
{
["/api/v1/jobengine/jobs"] = new
{
get = new
{
summary = "List jobs",
description = "Paginated job listing with deterministic cursor ordering and idempotent retries.",
parameters = new object[]
{
QueryParameter("status", "query", "Job status filter (pending|scheduled|leased|succeeded|failed)", "string", "scheduled"),
QueryParameter("jobType", "query", "Filter by job type", "string", "sbom.build"),
QueryParameter("projectId", "query", "Filter by project identifier", "string", "proj-17"),
QueryParameter("createdAfter", "query", "RFC3339 timestamp for start of window", "string", "2025-11-01T00:00:00Z"),
QueryParameter("createdBefore", "query", "RFC3339 timestamp for end of window", "string", "2025-11-30T00:00:00Z"),
QueryParameter("limit", "query", "Results per page (max 200)", "integer", 50),
QueryParameter("cursor", "query", "Opaque pagination cursor", "string", "c3RhcnQ6NTA=")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Jobs page",
headers = new Dictionary<string, object>
{
["Link"] = new
{
description = "RFC 8288 pagination cursor links",
schema = new { type = "string" },
example = "</api/v1/jobengine/jobs?cursor=c3RhcnQ6NTA=>; rel=\"next\""
},
["X-StellaOps-Api-Version"] = new
{
description = "Service build version",
schema = new { type = "string" },
example = version
}
},
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobList" },
examples = new Dictionary<string, object>
{
["default"] = new
{
value = new
{
jobs = new[] { exampleJob },
nextCursor = "c3RhcnQ6NTA="
}
}
}
}
}
},
["400"] = ErrorResponse("Invalid filter")
}
}
},
["/api/v1/jobengine/jobs/{jobId}"] = new
{
get = new
{
summary = "Get job",
description = "Fetch job metadata by identifier.",
parameters = new object[]
{
RouteParameter("jobId", "Job identifier", "string")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job metadata",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/Job" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleJob }
}
}
}
},
["404"] = ErrorResponse("Not found")
}
}
},
["/api/v1/jobengine/jobs/{jobId}/detail"] = new
{
get = new
{
summary = "Legacy job detail (deprecated)",
description = "Legacy payload-inclusive job detail; prefer GET /api/v1/jobengine/jobs/{jobId} plus artifact lookup.",
deprecated = true,
parameters = new object[]
{
RouteParameter("jobId", "Job identifier", "string")
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job detail including payload (deprecated)",
headers = StandardDeprecationHeaders("/api/v1/jobengine/jobs/{jobId}"),
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobDetail" },
examples = new Dictionary<string, object>
{
["legacy"] = new { value = exampleJobDetail }
}
}
}
},
["404"] = ErrorResponse("Not found")
}
}
},
["/api/v1/jobengine/jobs/summary"] = new
{
get = new
{
summary = "Legacy job summary (deprecated)",
description = "Legacy summary endpoint; use pagination + counts or analytics feed.",
deprecated = true,
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Summary counts",
headers = StandardDeprecationHeaders("/api/v1/jobengine/jobs"),
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/JobSummary" },
examples = new Dictionary<string, object>
{
["summary"] = new
{
value = new { totalJobs = 120, pendingJobs = 12, scheduledJobs = 30, leasedJobs = 20, succeededJobs = 45, failedJobs = 8, canceledJobs = 3, timedOutJobs = 2 }
}
}
}
}
}
}
}
},
["/api/v1/jobengine/pack-runs"] = new
{
post = new
{
summary = "Schedule pack run",
description = "Schedule an orchestrated pack run with idempotency and quota enforcement.",
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/SchedulePackRunRequest" },
examples = new Dictionary<string, object> { ["default"] = new { value = examplePackRunRequest } }
}
}
},
responses = new Dictionary<string, object>
{
["201"] = new
{
description = "Pack run scheduled",
headers = new Dictionary<string, object>
{
["Location"] = new { description = "Pack run resource URL", schema = new { type = "string" }, example = "/api/v1/jobengine/pack-runs/99999999-0000-1111-2222-333333333333" }
},
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/SchedulePackRunResponse" },
examples = new Dictionary<string, object> { ["default"] = new { value = examplePackRunResponse } }
}
}
},
["429"] = new
{
description = "Quota exceeded",
headers = new Dictionary<string, object> { ["Retry-After"] = new { description = "Seconds until retry", schema = new { type = "integer" }, example = 60 } },
content = new Dictionary<string, object> { ["application/json"] = new { schema = new { @ref = "#/components/schemas/PackRunError" } } }
}
}
}
},
["/api/v1/jobengine/pack-runs/{packRunId}/retry"] = new
{
post = new
{
summary = "Retry failed pack run",
description = "Create a new pack run based on a failed one with optional parameter override.",
parameters = new object[] { RouteParameter("packRunId", "Pack run identifier", "string") },
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/RetryPackRunRequest" },
examples = new Dictionary<string, object> { ["default"] = new { value = exampleRetryRequest } }
}
}
},
responses = new Dictionary<string, object>
{
["201"] = new
{
description = "Retry scheduled",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/RetryPackRunResponse" },
examples = new Dictionary<string, object> { ["default"] = new { value = exampleRetryResponse } }
}
}
},
["404"] = ErrorResponse("Pack run not found"),
["409"] = new
{
description = "Retry not allowed",
content = new Dictionary<string, object> { ["application/json"] = new { schema = new { @ref = "#/components/schemas/PackRunError" } } }
}
}
}
},
["/api/v1/jobengine/worker/claim"] = new
{
post = new
{
summary = "Claim next job",
description = "Idempotent worker claim endpoint with optional idempotency key and task runner context.",
parameters = new object[]
{
HeaderParameter("Idempotency-Key", "Optional idempotency key for claim replay safety", "string", "claim-12af")
},
requestBody = new
{
required = true,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/WorkerClaimRequest" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleClaimRequest }
}
}
}
},
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Job claim response",
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/WorkerClaimResponse" },
examples = new Dictionary<string, object>
{
["default"] = new { value = exampleClaimResponse }
}
}
}
},
["204"] = new { description = "No jobs available" },
["400"] = ErrorResponse("Invalid claim request")
}
}
},
["/healthz"] = new
{
get = new
{
summary = "Health check",
description = "Basic service health probe.",
responses = new Dictionary<string, object>
{
["200"] = new
{
description = "Healthy",
content = new Dictionary<string, object>
{
["application/json"] = new
{
examples = new Dictionary<string, object>
{
["example"] = new
{
value = new { status = "ok", timestamp = "2025-11-30T00:00:00Z" }
}
}
}
}
}
}
}
}
};
var components = new OpenApiComponents(
Schemas: new Dictionary<string, object>
{
["Error"] = new
{
type = "object",
properties = new
{
error = new { type = "string" },
detail = new { type = "string" }
},
required = new[] { "error" }
},
["Job"] = new
{
type = "object",
properties = new
{
jobId = new { type = "string", format = "uuid" },
runId = new { type = "string", format = "uuid", nullable = true },
jobType = new { type = "string" },
status = new { type = "string" },
priority = new { type = "integer" },
attempt = new { type = "integer" },
maxAttempts = new { type = "integer" },
correlationId = new { type = "string", nullable = true },
workerId = new { type = "string", nullable = true },
taskRunnerId = new { type = "string", nullable = true },
createdAt = new { type = "string", format = "date-time" },
scheduledAt = new { type = "string", format = "date-time", nullable = true },
leasedAt = new { type = "string", format = "date-time", nullable = true },
completedAt = new { type = "string", format = "date-time", nullable = true },
notBefore = new { type = "string", format = "date-time", nullable = true },
reason = new { type = "string", nullable = true },
replayOf = new { type = "string", format = "uuid", nullable = true },
createdBy = new { type = "string" }
},
required = new[] { "jobId", "jobType", "status", "priority", "attempt", "maxAttempts", "createdAt", "createdBy" }
},
["JobDetail"] = new
{
allOf = new object[]
{
new { @ref = "#/components/schemas/Job" },
new
{
type = "object",
properties = new
{
payloadDigest = new { type = "string" },
payload = new { type = "string" },
idempotencyKey = new { type = "string" },
leaseId = new { type = "string", format = "uuid", nullable = true },
leaseUntil = new { type = "string", format = "date-time", nullable = true }
}
}
}
},
["JobList"] = new
{
type = "object",
properties = new
{
jobs = new
{
type = "array",
items = new { @ref = "#/components/schemas/Job" }
},
nextCursor = new { type = "string", nullable = true }
},
required = new[] { "jobs" }
},
["JobSummary"] = new
{
type = "object",
properties = new
{
totalJobs = new { type = "integer" },
pendingJobs = new { type = "integer" },
scheduledJobs = new { type = "integer" },
leasedJobs = new { type = "integer" },
succeededJobs = new { type = "integer" },
failedJobs = new { type = "integer" },
canceledJobs = new { type = "integer" },
timedOutJobs = new { type = "integer" }
}
},
["WorkerClaimRequest"] = new
{
type = "object",
properties = new
{
workerId = new { type = "string" },
jobType = new { type = "string" },
idempotencyKey = new { type = "string", nullable = true },
leaseSeconds = new { type = "integer", nullable = true },
taskRunnerId = new { type = "string", nullable = true }
},
required = new[] { "workerId" }
},
["WorkerClaimResponse"] = new
{
type = "object",
properties = new
{
jobId = new { type = "string", format = "uuid" },
leaseId = new { type = "string", format = "uuid" },
leaseUntil = new { type = "string", format = "date-time" },
job = new { @ref = "#/components/schemas/JobDetail" }
},
required = new[] { "jobId", "leaseId", "leaseUntil", "job" }
},
["SchedulePackRunRequest"] = new
{
type = "object",
properties = new
{
packId = new { type = "string" },
packVersion = new { type = "string" },
parameters = new { type = "string", nullable = true },
projectId = new { type = "string", nullable = true },
idempotencyKey = new { type = "string", nullable = true },
correlationId = new { type = "string", nullable = true },
priority = new { type = "integer", nullable = true },
maxAttempts = new { type = "integer", nullable = true },
metadata = new { type = "string", nullable = true }
},
required = new[] { "packId", "packVersion" }
},
["SchedulePackRunResponse"] = new
{
type = "object",
properties = new
{
packRunId = new { type = "string", format = "uuid" },
packId = new { type = "string" },
packVersion = new { type = "string" },
status = new { type = "string" },
idempotencyKey = new { type = "string" },
createdAt = new { type = "string", format = "date-time" },
wasAlreadyScheduled = new { type = "boolean" }
},
required = new[] { "packRunId", "packId", "packVersion", "status", "createdAt", "wasAlreadyScheduled" }
},
["RetryPackRunRequest"] = new
{
type = "object",
properties = new
{
parameters = new { type = "string", nullable = true },
idempotencyKey = new { type = "string", nullable = true }
}
},
["RetryPackRunResponse"] = new
{
type = "object",
properties = new
{
originalPackRunId = new { type = "string", format = "uuid" },
newPackRunId = new { type = "string", format = "uuid" },
status = new { type = "string" },
createdAt = new { type = "string", format = "date-time" }
},
required = new[] { "originalPackRunId", "newPackRunId", "status", "createdAt" }
},
["PackRunError"] = new
{
type = "object",
properties = new
{
code = new { type = "string" },
message = new { type = "string" },
packRunId = new { type = "string", format = "uuid", nullable = true },
retryAfterSeconds = new { type = "integer", nullable = true }
},
required = new[] { "code", "message" }
}
},
Headers: new Dictionary<string, object>
{
["Deprecation"] = new { description = "RFC 8594 deprecation marker", schema = new { type = "string" }, example = "true" },
["Sunset"] = new { description = "Target removal date", schema = new { type = "string" }, example = "Tue, 31 Mar 2026 00:00:00 GMT" },
["Link"] = new { description = "Alternate endpoint for deprecated operation", schema = new { type = "string" } }
});
return new OpenApiSpecDocument(
OpenApi: "3.1.0",
Info: new OpenApiInfo("StellaOps Orchestrator API", version, "Scheduling and automation control plane APIs with pagination, idempotency, and error envelopes."),
Paths: paths,
Components: components,
Servers: new List<object>
{
new { url = "https://api.stella-ops.local" },
new { url = "http://localhost:5201" }
});
// Local helper functions keep the anonymous object creation terse.
static object QueryParameter(string name, string @in, string description, string type, object? example = null)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = @in,
["description"] = description,
["required"] = false,
["schema"] = new { type },
["example"] = example
};
}
static object RouteParameter(string name, string description, string type)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = "path",
["description"] = description,
["required"] = true,
["schema"] = new { type }
};
}
static object HeaderParameter(string name, string description, string type, object? example = null)
{
return new Dictionary<string, object?>
{
["name"] = name,
["in"] = "header",
["description"] = description,
["required"] = false,
["schema"] = new { type },
["example"] = example
};
}
static object ErrorResponse(string description)
{
return new
{
description,
content = new Dictionary<string, object>
{
["application/json"] = new
{
schema = new { @ref = "#/components/schemas/Error" },
examples = new Dictionary<string, object>
{
["error"] = new { value = new { error = "invalid_request", detail = description } }
}
}
}
};
}
static Dictionary<string, object> StandardDeprecationHeaders(string alternate)
{
return new Dictionary<string, object>
{
["Deprecation"] = new { description = "This endpoint is deprecated", schema = new { type = "string" }, example = "true" },
["Link"] = new { description = "Alternate endpoint", schema = new { type = "string" }, example = $"<{alternate}>; rel=\"alternate\"" },
["Sunset"] = new { description = "Planned removal", schema = new { type = "string" }, example = "Tue, 31 Mar 2026 00:00:00 GMT" }
};
}
}
private static object ExampleJob()
{
return new
{
jobId = Guid.Parse("aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"),
runId = Guid.Parse("cccccccc-1111-2222-3333-dddddddddddd"),
jobType = "scan.image",
status = "scheduled",
priority = 5,
attempt = 0,
maxAttempts = 3,
correlationId = "corr-abc",
workerId = (string?)null,
taskRunnerId = "runner-01",
createdAt = "2025-11-30T12:00:00Z",
scheduledAt = "2025-11-30T12:05:00Z",
leasedAt = (string?)null,
completedAt = (string?)null,
notBefore = "2025-11-30T12:04:00Z",
reason = (string?)null,
replayOf = (string?)null,
createdBy = "scheduler"
};
}
private static object ExampleJobDetail()
{
return new
{
jobId = Guid.Parse("aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"),
runId = Guid.Parse("cccccccc-1111-2222-3333-dddddddddddd"),
jobType = "scan.image",
status = "leased",
priority = 5,
attempt = 1,
maxAttempts = 3,
payloadDigest = "sha256:abc123",
payload = "{\"image\":\"alpine:3.18\"}",
idempotencyKey = "claim-12af",
correlationId = "corr-abc",
leaseId = Guid.Parse("aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"),
leaseUntil = "2025-11-30T12:05:00Z",
workerId = "worker-7f9",
taskRunnerId = "runner-01",
createdAt = "2025-11-30T12:00:00Z",
scheduledAt = "2025-11-30T12:05:00Z",
leasedAt = "2025-11-30T12:00:15Z",
completedAt = (string?)null,
notBefore = "2025-11-30T12:04:00Z",
reason = (string?)null,
replayOf = (string?)null,
createdBy = "scheduler"
};
}
}
public sealed record OpenApiDiscoveryDocument(
[property: JsonPropertyName("service")] string Service,
[property: JsonPropertyName("specVersion")] string SpecVersion,
[property: JsonPropertyName("version")] string Version,
[property: JsonPropertyName("format")] string Format,
[property: JsonPropertyName("url")] string Url,
[property: JsonPropertyName("errorEnvelopeSchema")] string ErrorEnvelopeSchema,
[property: JsonPropertyName("notifications")] IReadOnlyDictionary<string, string> Notifications);
public sealed record OpenApiSpecDocument(
[property: JsonPropertyName("openapi")] string OpenApi,
[property: JsonPropertyName("info")] OpenApiInfo Info,
[property: JsonPropertyName("paths")] IReadOnlyDictionary<string, object> Paths,
[property: JsonPropertyName("components")] OpenApiComponents Components,
[property: JsonPropertyName("servers")] IReadOnlyList<object>? Servers = null);
public sealed record OpenApiInfo(
[property: JsonPropertyName("title")] string Title,
[property: JsonPropertyName("version")] string Version,
[property: JsonPropertyName("description")] string Description);
public sealed record OpenApiComponents(
[property: JsonPropertyName("schemas")] IReadOnlyDictionary<string, object> Schemas,
[property: JsonPropertyName("headers")] IReadOnlyDictionary<string, object>? Headers = null);
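In the `JobList` example above, `nextCursor` is an opaque base64 token. Assuming it wraps a plain `start:<offset>` marker, as the sample value `c3RhcnQ6NTA=` suggests (real deployments may encode cursors differently, and clients should always pass them back verbatim rather than parse them), the sample can be inspected from a shell:

```shell
# Decode the sample cursor from the JobList example above.
# Illustrative only: the API contract treats cursors as opaque tokens.
printf 'c3RhcnQ6NTA=' | base64 -d
# prints: start:50
```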


@@ -1,292 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
// ========== Pack CRUD Requests/Responses ==========
/// <summary>
/// Request to create a new pack in the registry.
/// </summary>
public sealed record CreatePackRequest(
/// <summary>Unique pack name (lowercase, URL-safe).</summary>
string Name,
/// <summary>Display name for the pack.</summary>
string DisplayName,
/// <summary>Optional pack description.</summary>
string? Description,
/// <summary>Optional project scope.</summary>
string? ProjectId,
/// <summary>Optional metadata JSON.</summary>
string? Metadata,
/// <summary>Optional comma-separated tags.</summary>
string? Tags,
/// <summary>Optional icon URI.</summary>
string? IconUri);
/// <summary>
/// Response representing a pack.
/// </summary>
public sealed record PackResponse(
Guid PackId,
string Name,
string DisplayName,
string? Description,
string? ProjectId,
string Status,
string CreatedBy,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string? UpdatedBy,
string? Metadata,
string? Tags,
string? IconUri,
int VersionCount,
string? LatestVersion,
DateTimeOffset? PublishedAt,
string? PublishedBy)
{
public static PackResponse FromDomain(Pack pack) => new(
pack.PackId,
pack.Name,
pack.DisplayName,
pack.Description,
pack.ProjectId,
pack.Status.ToString().ToLowerInvariant(),
pack.CreatedBy,
pack.CreatedAt,
pack.UpdatedAt,
pack.UpdatedBy,
pack.Metadata,
pack.Tags,
pack.IconUri,
pack.VersionCount,
pack.LatestVersion,
pack.PublishedAt,
pack.PublishedBy);
}
/// <summary>
/// Response containing a paginated list of packs.
/// </summary>
public sealed record PackListResponse(
IReadOnlyList<PackResponse> Packs,
int TotalCount,
string? NextCursor);
/// <summary>
/// Request to update a pack.
/// </summary>
public sealed record UpdatePackRequest(
/// <summary>Updated display name.</summary>
string? DisplayName,
/// <summary>Updated description.</summary>
string? Description,
/// <summary>Updated metadata JSON.</summary>
string? Metadata,
/// <summary>Updated comma-separated tags.</summary>
string? Tags,
/// <summary>Updated icon URI.</summary>
string? IconUri);
/// <summary>
/// Request to update pack status (publish, deprecate, archive).
/// </summary>
public sealed record UpdatePackStatusRequest(
/// <summary>New status: draft, published, deprecated, archived.</summary>
string Status);
// ========== Pack Version Requests/Responses ==========
/// <summary>
/// Request to create a new pack version.
/// </summary>
public sealed record CreatePackVersionRequest(
/// <summary>Version string (e.g., "1.0.0", "2.0.0-beta.1").</summary>
string Version,
/// <summary>Optional semantic version for sorting.</summary>
string? SemVer,
/// <summary>Artifact storage URI.</summary>
string ArtifactUri,
/// <summary>Artifact content digest (SHA-256).</summary>
string ArtifactDigest,
/// <summary>Artifact MIME type.</summary>
string? ArtifactMimeType,
/// <summary>Artifact size in bytes.</summary>
long? ArtifactSizeBytes,
/// <summary>Pack manifest JSON.</summary>
string? ManifestJson,
/// <summary>Manifest digest for verification.</summary>
string? ManifestDigest,
/// <summary>Release notes.</summary>
string? ReleaseNotes,
/// <summary>Minimum engine version required.</summary>
string? MinEngineVersion,
/// <summary>Dependencies JSON.</summary>
string? Dependencies,
/// <summary>Optional metadata JSON.</summary>
string? Metadata);
/// <summary>
/// Response representing a pack version.
/// </summary>
public sealed record PackVersionResponse(
Guid PackVersionId,
Guid PackId,
string Version,
string? SemVer,
string Status,
string ArtifactUri,
string ArtifactDigest,
string? ArtifactMimeType,
long? ArtifactSizeBytes,
string? ManifestDigest,
string? ReleaseNotes,
string? MinEngineVersion,
string? Dependencies,
string CreatedBy,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string? UpdatedBy,
DateTimeOffset? PublishedAt,
string? PublishedBy,
DateTimeOffset? DeprecatedAt,
string? DeprecatedBy,
string? DeprecationReason,
bool IsSigned,
string? SignatureAlgorithm,
DateTimeOffset? SignedAt,
string? Metadata,
int DownloadCount)
{
public static PackVersionResponse FromDomain(PackVersion version) => new(
version.PackVersionId,
version.PackId,
version.Version,
version.SemVer,
version.Status.ToString().ToLowerInvariant(),
version.ArtifactUri,
version.ArtifactDigest,
version.ArtifactMimeType,
version.ArtifactSizeBytes,
version.ManifestDigest,
version.ReleaseNotes,
version.MinEngineVersion,
version.Dependencies,
version.CreatedBy,
version.CreatedAt,
version.UpdatedAt,
version.UpdatedBy,
version.PublishedAt,
version.PublishedBy,
version.DeprecatedAt,
version.DeprecatedBy,
version.DeprecationReason,
version.IsSigned,
version.SignatureAlgorithm,
version.SignedAt,
version.Metadata,
version.DownloadCount);
}
/// <summary>
/// Response containing a paginated list of pack versions.
/// </summary>
public sealed record PackVersionListResponse(
IReadOnlyList<PackVersionResponse> Versions,
int TotalCount,
string? NextCursor);
/// <summary>
/// Request to update a pack version.
/// </summary>
public sealed record UpdatePackVersionRequest(
/// <summary>Updated release notes.</summary>
string? ReleaseNotes,
/// <summary>Updated metadata JSON.</summary>
string? Metadata);
/// <summary>
/// Request to update pack version status (publish, deprecate, archive).
/// </summary>
public sealed record UpdatePackVersionStatusRequest(
/// <summary>New status: draft, published, deprecated, archived.</summary>
string Status,
/// <summary>Deprecation reason (required when status is deprecated).</summary>
string? DeprecationReason);
/// <summary>
/// Request to sign a pack version.
/// </summary>
public sealed record SignPackVersionRequest(
/// <summary>Signature storage URI.</summary>
string SignatureUri,
/// <summary>Signature algorithm (e.g., "ecdsa-p256", "rsa-sha256").</summary>
string SignatureAlgorithm);
/// <summary>
/// Response for a download request (includes artifact URL).
/// </summary>
public sealed record PackVersionDownloadResponse(
Guid PackVersionId,
string Version,
string ArtifactUri,
string ArtifactDigest,
string? ArtifactMimeType,
long? ArtifactSizeBytes,
string? SignatureUri,
string? SignatureAlgorithm);
// ========== Search and Discovery ==========
/// <summary>
/// Response for pack search results.
/// </summary>
public sealed record PackSearchResponse(
IReadOnlyList<PackResponse> Packs,
string Query);
/// <summary>
/// Response for registry statistics.
/// </summary>
public sealed record PackRegistryStatsResponse(
int TotalPacks,
int PublishedPacks,
int TotalVersions,
int PublishedVersions,
long TotalDownloads,
DateTimeOffset? LastUpdatedAt);
// ========== Error Response ==========
/// <summary>
/// Error response for pack registry operations.
/// </summary>
public sealed record PackRegistryErrorResponse(
string Code,
string Message,
Guid? PackId,
Guid? PackVersionId);


@@ -1,360 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
// ========== Scheduling Requests/Responses ==========
/// <summary>
/// Request to schedule a new pack run.
/// </summary>
public sealed record SchedulePackRunRequest(
/// <summary>Authority pack ID to execute.</summary>
string PackId,
/// <summary>Pack version (e.g., "1.2.3", "latest").</summary>
string PackVersion,
/// <summary>Pack input parameters JSON.</summary>
string? Parameters,
/// <summary>Optional project scope.</summary>
string? ProjectId,
/// <summary>Idempotency key for deduplication.</summary>
string? IdempotencyKey,
/// <summary>Correlation ID for tracing.</summary>
string? CorrelationId,
/// <summary>Priority (higher = more urgent).</summary>
int? Priority,
/// <summary>Maximum retry attempts.</summary>
int? MaxAttempts,
/// <summary>Optional metadata JSON.</summary>
string? Metadata);
/// <summary>
/// Response for a scheduled pack run.
/// </summary>
public sealed record SchedulePackRunResponse(
Guid PackRunId,
string PackId,
string PackVersion,
string Status,
string IdempotencyKey,
DateTimeOffset CreatedAt,
bool WasAlreadyScheduled);
/// <summary>
/// Response representing a pack run.
/// </summary>
public sealed record PackRunResponse(
Guid PackRunId,
string PackId,
string PackVersion,
string Status,
int Priority,
int Attempt,
int MaxAttempts,
string? CorrelationId,
string? TaskRunnerId,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
string? Reason,
int? ExitCode,
long? DurationMs,
string CreatedBy)
{
public static PackRunResponse FromDomain(PackRun packRun) => new(
packRun.PackRunId,
packRun.PackId,
packRun.PackVersion,
packRun.Status.ToString().ToLowerInvariant(),
packRun.Priority,
packRun.Attempt,
packRun.MaxAttempts,
packRun.CorrelationId,
packRun.TaskRunnerId,
packRun.CreatedAt,
packRun.ScheduledAt,
packRun.StartedAt,
packRun.CompletedAt,
packRun.Reason,
packRun.ExitCode,
packRun.DurationMs,
packRun.CreatedBy);
}
/// <summary>
/// Response containing a list of pack runs.
/// </summary>
public sealed record PackRunListResponse(
IReadOnlyList<PackRunResponse> PackRuns,
int TotalCount,
string? NextCursor);
/// <summary>
/// Manifest response summarizing pack run state and log statistics.
/// </summary>
public sealed record PackRunManifestResponse(
Guid PackRunId,
string PackId,
string PackVersion,
string Status,
int Attempt,
int MaxAttempts,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
string? Reason,
long LogCount,
long LatestSequence);
// ========== Task Runner (Worker) Requests/Responses ==========
/// <summary>
/// Request to claim a pack run for execution.
/// </summary>
public sealed record ClaimPackRunRequest(
/// <summary>Task runner ID claiming the pack run.</summary>
string TaskRunnerId,
/// <summary>Optional pack ID filter (only claim runs for this pack).</summary>
string? PackId,
/// <summary>Requested lease duration in seconds.</summary>
int? LeaseSeconds,
/// <summary>Idempotency key for claim deduplication.</summary>
string? IdempotencyKey);
/// <summary>
/// Response for a claimed pack run.
/// </summary>
public sealed record ClaimPackRunResponse(
Guid PackRunId,
Guid LeaseId,
string PackId,
string PackVersion,
string Parameters,
string ParametersDigest,
int Attempt,
int MaxAttempts,
DateTimeOffset LeaseUntil,
string IdempotencyKey,
string? CorrelationId,
string? ProjectId,
string? Metadata);
/// <summary>
/// Request to extend a pack run lease (heartbeat).
/// </summary>
public sealed record PackRunHeartbeatRequest(
/// <summary>Current lease ID.</summary>
Guid LeaseId,
/// <summary>Lease extension in seconds.</summary>
int? ExtendSeconds);
/// <summary>
/// Response for a pack run heartbeat.
/// </summary>
public sealed record PackRunHeartbeatResponse(
Guid PackRunId,
Guid LeaseId,
DateTimeOffset LeaseUntil,
bool Acknowledged);
/// <summary>
/// Request to report pack run start.
/// </summary>
public sealed record PackRunStartRequest(
/// <summary>Current lease ID.</summary>
Guid LeaseId);
/// <summary>
/// Response for pack run start.
/// </summary>
public sealed record PackRunStartResponse(
Guid PackRunId,
bool Acknowledged,
DateTimeOffset StartedAt);
/// <summary>
/// Request to complete a pack run.
/// </summary>
public sealed record CompletePackRunRequest(
/// <summary>Current lease ID.</summary>
Guid LeaseId,
/// <summary>Whether the pack run succeeded (exit code 0).</summary>
bool Success,
/// <summary>Exit code from pack execution.</summary>
int ExitCode,
/// <summary>Reason for failure/success.</summary>
string? Reason,
/// <summary>Artifacts produced by the pack run.</summary>
IReadOnlyList<PackRunArtifactRequest>? Artifacts);
/// <summary>
/// Artifact metadata for pack run completion.
/// </summary>
public sealed record PackRunArtifactRequest(
/// <summary>Artifact type (e.g., "report", "log", "manifest").</summary>
string ArtifactType,
/// <summary>Storage URI.</summary>
string Uri,
/// <summary>Content digest (SHA-256).</summary>
string Digest,
/// <summary>MIME type.</summary>
string? MimeType,
/// <summary>Size in bytes.</summary>
long? SizeBytes,
/// <summary>Optional metadata JSON.</summary>
string? Metadata);
/// <summary>
/// Response for pack run completion.
/// </summary>
public sealed record CompletePackRunResponse(
Guid PackRunId,
string Status,
DateTimeOffset CompletedAt,
IReadOnlyList<Guid> ArtifactIds,
long DurationMs);
// ========== Log Requests/Responses ==========
/// <summary>
/// Request to append logs to a pack run.
/// </summary>
public sealed record AppendLogsRequest(
/// <summary>Current lease ID.</summary>
Guid LeaseId,
/// <summary>Log entries to append.</summary>
IReadOnlyList<LogEntryRequest> Logs);
/// <summary>
/// A single log entry to append.
/// </summary>
public sealed record LogEntryRequest(
/// <summary>Log level (trace, debug, info, warn, error, fatal).</summary>
string Level,
/// <summary>Log source (stdout, stderr, system, pack).</summary>
string Source,
/// <summary>Log message.</summary>
string Message,
/// <summary>Timestamp (defaults to server time if not provided).</summary>
DateTimeOffset? Timestamp,
/// <summary>Optional structured data JSON.</summary>
string? Data);
/// <summary>
/// Response for appending logs.
/// </summary>
public sealed record AppendLogsResponse(
Guid PackRunId,
int LogsAppended,
long LatestSequence);
/// <summary>
/// Response for a log entry.
/// </summary>
public sealed record LogEntryResponse(
Guid LogId,
long Sequence,
string Level,
string Source,
string Message,
string Digest,
long SizeBytes,
DateTimeOffset Timestamp,
string? Data)
{
public static LogEntryResponse FromDomain(PackRunLog log) => new(
log.LogId,
log.Sequence,
log.Level.ToString().ToLowerInvariant(),
log.Source,
log.Message,
log.Digest,
log.SizeBytes,
log.Timestamp,
log.Data);
}
/// <summary>
/// Response containing a batch of logs.
/// </summary>
public sealed record LogBatchResponse(
Guid PackRunId,
IReadOnlyList<LogEntryResponse> Logs,
long StartSequence,
long? NextSequence,
bool HasMore);
// ========== Cancel/Retry Requests ==========
/// <summary>
/// Request to cancel a pack run.
/// </summary>
public sealed record CancelPackRunRequest(
/// <summary>Reason for cancellation.</summary>
string Reason);
/// <summary>
/// Response for cancel operation.
/// </summary>
public sealed record CancelPackRunResponse(
Guid PackRunId,
string Status,
string Reason,
DateTimeOffset CanceledAt);
/// <summary>
/// Request to retry a failed pack run.
/// </summary>
public sealed record RetryPackRunRequest(
/// <summary>Override parameters for retry (optional).</summary>
string? Parameters,
/// <summary>New idempotency key for the retry.</summary>
string? IdempotencyKey);
/// <summary>
/// Response for retry operation.
/// </summary>
public sealed record RetryPackRunResponse(
Guid OriginalPackRunId,
Guid NewPackRunId,
string Status,
DateTimeOffset CreatedAt);
// ========== Error Response ==========
/// <summary>
/// Error response for pack run operations.
/// </summary>
public sealed record PackRunErrorResponse(
string Code,
string Message,
Guid? PackRunId,
int? RetryAfterSeconds);
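For reference, a quota-rejection body shaped like `PackRunErrorResponse` (and the `PackRunError` schema in the spec above, assuming the service's default camelCase JSON serialization) might look like the following; the field values are illustrative:

```json
{
  "code": "quota_exceeded",
  "message": "Tenant pack-run quota exhausted; retry later.",
  "packRunId": null,
  "retryAfterSeconds": 60
}
```

This is the payload a worker would pair with the `Retry-After` response header documented on the 429 response of `POST /api/v1/jobengine/pack-runs`.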


@@ -1,22 +0,0 @@
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Common query options for pagination.
/// </summary>
public sealed record QueryOptions
{
/// <summary>Maximum number of results to return. Default 50.</summary>
public int Limit { get; init; } = 50;
/// <summary>Cursor for pagination (opaque token).</summary>
public string? Cursor { get; init; }
/// <summary>Sort order: "asc" or "desc". Default "desc".</summary>
public string? Sort { get; init; }
/// <summary>Filter by created after date.</summary>
public DateTimeOffset? CreatedAfter { get; init; }
/// <summary>Filter by created before date.</summary>
public DateTimeOffset? CreatedBefore { get; init; }
}
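The `QueryOptions` record above maps one-to-one onto query-string parameters of the list endpoints. A minimal sketch of composing a jobs-list URL from those fields; the `RELEASE_ORCHESTRATOR_URL` variable and the localhost fallback are assumptions about the deployment, not part of the contract:

```shell
# Compose a jobs-list URL from QueryOptions-style fields
# (limit, sort, createdAfter map directly to query parameters).
base="${RELEASE_ORCHESTRATOR_URL:-http://localhost:5201}"
url="$base/api/v1/jobengine/jobs?limit=50&sort=desc&createdAfter=2025-11-30T00:00:00Z"
echo "$url"
```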


@@ -1,352 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
// ============================================================================
// Quota Contracts
// ============================================================================
/// <summary>
/// Request to create a quota.
/// </summary>
public sealed record CreateQuotaRequest(
string? JobType,
int MaxActive,
int MaxPerHour,
int BurstCapacity,
double RefillRate);
/// <summary>
/// Request to update a quota.
/// </summary>
public sealed record UpdateQuotaRequest(
int? MaxActive,
int? MaxPerHour,
int? BurstCapacity,
double? RefillRate);
/// <summary>
/// Request to pause a quota.
/// </summary>
public sealed record PauseQuotaRequest(
string Reason,
string? Ticket);
/// <summary>
/// Response for a quota.
/// </summary>
public sealed record QuotaResponse(
Guid QuotaId,
string TenantId,
string? JobType,
int MaxActive,
int MaxPerHour,
int BurstCapacity,
double RefillRate,
double CurrentTokens,
int CurrentActive,
int CurrentHourCount,
bool Paused,
string? PauseReason,
string? QuotaTicket,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string UpdatedBy)
{
public static QuotaResponse FromDomain(Quota quota) =>
new(
QuotaId: quota.QuotaId,
TenantId: quota.TenantId,
JobType: quota.JobType,
MaxActive: quota.MaxActive,
MaxPerHour: quota.MaxPerHour,
BurstCapacity: quota.BurstCapacity,
RefillRate: quota.RefillRate,
CurrentTokens: quota.CurrentTokens,
CurrentActive: quota.CurrentActive,
CurrentHourCount: quota.CurrentHourCount,
Paused: quota.Paused,
PauseReason: quota.PauseReason,
QuotaTicket: quota.QuotaTicket,
CreatedAt: quota.CreatedAt,
UpdatedAt: quota.UpdatedAt,
UpdatedBy: quota.UpdatedBy);
}
/// <summary>
/// Response for quota list.
/// </summary>
public sealed record QuotaListResponse(
IReadOnlyList<QuotaResponse> Items,
string? NextCursor);
// ============================================================================
// SLO Contracts
// ============================================================================
/// <summary>
/// Request to create an SLO.
/// </summary>
public sealed record CreateSloRequest(
string Name,
string? Description,
string Type,
string? JobType,
Guid? SourceId,
double Target,
string Window,
double? LatencyPercentile,
double? LatencyTargetSeconds,
int? ThroughputMinimum);
/// <summary>
/// Request to update an SLO.
/// </summary>
public sealed record UpdateSloRequest(
string? Name,
string? Description,
double? Target,
bool? Enabled);
/// <summary>
/// Response for an SLO.
/// </summary>
public sealed record SloResponse(
Guid SloId,
string TenantId,
string Name,
string? Description,
string Type,
string? JobType,
Guid? SourceId,
double Target,
string Window,
double ErrorBudget,
double? LatencyPercentile,
double? LatencyTargetSeconds,
int? ThroughputMinimum,
bool Enabled,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt)
{
public static SloResponse FromDomain(Slo slo) =>
new(
SloId: slo.SloId,
TenantId: slo.TenantId,
Name: slo.Name,
Description: slo.Description,
Type: slo.Type.ToString().ToLowerInvariant(),
JobType: slo.JobType,
SourceId: slo.SourceId,
Target: slo.Target,
Window: FormatWindow(slo.Window),
ErrorBudget: slo.ErrorBudget,
LatencyPercentile: slo.LatencyPercentile,
LatencyTargetSeconds: slo.LatencyTargetSeconds,
ThroughputMinimum: slo.ThroughputMinimum,
Enabled: slo.Enabled,
CreatedAt: slo.CreatedAt,
UpdatedAt: slo.UpdatedAt);
private static string FormatWindow(SloWindow window) => window switch
{
SloWindow.OneHour => "1h",
SloWindow.OneDay => "1d",
SloWindow.SevenDays => "7d",
SloWindow.ThirtyDays => "30d",
_ => window.ToString()
};
}
/// <summary>
/// Response for SLO list.
/// </summary>
public sealed record SloListResponse(
IReadOnlyList<SloResponse> Items,
string? NextCursor);
/// <summary>
/// Response for SLO state (current metrics).
/// </summary>
public sealed record SloStateResponse(
Guid SloId,
double CurrentSli,
long TotalEvents,
long GoodEvents,
long BadEvents,
double BudgetConsumed,
double BudgetRemaining,
double BurnRate,
double? TimeToExhaustionSeconds,
bool IsMet,
string AlertSeverity,
DateTimeOffset ComputedAt,
DateTimeOffset WindowStart,
DateTimeOffset WindowEnd)
{
public static SloStateResponse FromDomain(SloState state) =>
new(
SloId: state.SloId,
CurrentSli: state.CurrentSli,
TotalEvents: state.TotalEvents,
GoodEvents: state.GoodEvents,
BadEvents: state.BadEvents,
BudgetConsumed: state.BudgetConsumed,
BudgetRemaining: state.BudgetRemaining,
BurnRate: state.BurnRate,
TimeToExhaustionSeconds: state.TimeToExhaustion?.TotalSeconds,
IsMet: state.IsMet,
AlertSeverity: state.AlertSeverity.ToString().ToLowerInvariant(),
ComputedAt: state.ComputedAt,
WindowStart: state.WindowStart,
WindowEnd: state.WindowEnd);
}
/// <summary>
/// Response with SLO and its current state.
/// </summary>
public sealed record SloWithStateResponse(
SloResponse Slo,
SloStateResponse State);
// ============================================================================
// Alert Threshold Contracts
// ============================================================================
/// <summary>
/// Request to create an alert threshold.
/// </summary>
public sealed record CreateAlertThresholdRequest(
double BudgetConsumedThreshold,
double? BurnRateThreshold,
string Severity,
string? NotificationChannel,
string? NotificationEndpoint,
int? CooldownMinutes);
/// <summary>
/// Response for an alert threshold.
/// </summary>
public sealed record AlertThresholdResponse(
Guid ThresholdId,
Guid SloId,
double BudgetConsumedThreshold,
double? BurnRateThreshold,
string Severity,
bool Enabled,
string? NotificationChannel,
string? NotificationEndpoint,
int CooldownMinutes,
DateTimeOffset? LastTriggeredAt,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt)
{
public static AlertThresholdResponse FromDomain(AlertBudgetThreshold threshold) =>
new(
ThresholdId: threshold.ThresholdId,
SloId: threshold.SloId,
BudgetConsumedThreshold: threshold.BudgetConsumedThreshold,
BurnRateThreshold: threshold.BurnRateThreshold,
Severity: threshold.Severity.ToString().ToLowerInvariant(),
Enabled: threshold.Enabled,
NotificationChannel: threshold.NotificationChannel,
NotificationEndpoint: threshold.NotificationEndpoint,
CooldownMinutes: (int)threshold.Cooldown.TotalMinutes,
LastTriggeredAt: threshold.LastTriggeredAt,
CreatedAt: threshold.CreatedAt,
UpdatedAt: threshold.UpdatedAt);
}
// ============================================================================
// Alert Contracts
// ============================================================================
/// <summary>
/// Response for an SLO alert.
/// </summary>
public sealed record SloAlertResponse(
Guid AlertId,
Guid SloId,
Guid ThresholdId,
string Severity,
string Message,
double BudgetConsumed,
double BurnRate,
double CurrentSli,
DateTimeOffset TriggeredAt,
DateTimeOffset? AcknowledgedAt,
string? AcknowledgedBy,
DateTimeOffset? ResolvedAt,
string? ResolutionNotes)
{
public static SloAlertResponse FromDomain(SloAlert alert) =>
new(
AlertId: alert.AlertId,
SloId: alert.SloId,
ThresholdId: alert.ThresholdId,
Severity: alert.Severity.ToString().ToLowerInvariant(),
Message: alert.Message,
BudgetConsumed: alert.BudgetConsumed,
BurnRate: alert.BurnRate,
CurrentSli: alert.CurrentSli,
TriggeredAt: alert.TriggeredAt,
AcknowledgedAt: alert.AcknowledgedAt,
AcknowledgedBy: alert.AcknowledgedBy,
ResolvedAt: alert.ResolvedAt,
ResolutionNotes: alert.ResolutionNotes);
}
/// <summary>
/// Response for alert list.
/// </summary>
public sealed record SloAlertListResponse(
IReadOnlyList<SloAlertResponse> Items,
string? NextCursor);
/// <summary>
/// Request to acknowledge an alert.
/// </summary>
public sealed record AcknowledgeAlertRequest(
string AcknowledgedBy);
/// <summary>
/// Request to resolve an alert.
/// </summary>
public sealed record ResolveAlertRequest(
string ResolutionNotes);
// ============================================================================
// Summary Contracts
// ============================================================================
/// <summary>
/// Summary response for SLO health.
/// </summary>
public sealed record SloSummaryResponse(
long TotalSlos,
long EnabledSlos,
long ActiveAlerts,
long UnacknowledgedAlerts,
long CriticalAlerts,
IReadOnlyList<SloWithStateResponse> SlosAtRisk);
/// <summary>
/// Summary response for quota usage.
/// </summary>
public sealed record QuotaSummaryResponse(
long TotalQuotas,
long PausedQuotas,
double AverageTokenUtilization,
double AverageConcurrencyUtilization,
IReadOnlyList<QuotaUtilizationResponse> Quotas);
/// <summary>
/// Quota utilization response.
/// </summary>
public sealed record QuotaUtilizationResponse(
Guid QuotaId,
string? JobType,
double TokenUtilization,
double ConcurrencyUtilization,
double HourlyUtilization,
bool Paused);


@@ -1,253 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.Services;
namespace StellaOps.JobEngine.WebService.Contracts;
// ============================================================================
// Quota Governance Contracts
// ============================================================================
/// <summary>
/// Request to create a quota allocation policy.
/// </summary>
public sealed record CreateQuotaAllocationPolicyRequest(
string Name,
string? Description,
string Strategy,
int TotalCapacity,
int MinimumPerTenant,
int MaximumPerTenant,
int ReservedCapacity,
bool AllowBurst,
double BurstMultiplier,
int Priority,
bool Active,
string? JobType);
/// <summary>
/// Request to update a quota allocation policy.
/// </summary>
public sealed record UpdateQuotaAllocationPolicyRequest(
string? Name,
string? Description,
string? Strategy,
int? TotalCapacity,
int? MinimumPerTenant,
int? MaximumPerTenant,
int? ReservedCapacity,
bool? AllowBurst,
double? BurstMultiplier,
int? Priority,
bool? Active,
string? JobType);
/// <summary>
/// Response for a quota allocation policy.
/// </summary>
public sealed record QuotaAllocationPolicyResponse(
Guid PolicyId,
string Name,
string? Description,
string Strategy,
int TotalCapacity,
int MinimumPerTenant,
int MaximumPerTenant,
int ReservedCapacity,
bool AllowBurst,
double BurstMultiplier,
int Priority,
bool Active,
string? JobType,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string UpdatedBy)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static QuotaAllocationPolicyResponse FromDomain(QuotaAllocationPolicy policy) =>
new(
PolicyId: policy.PolicyId,
Name: policy.Name,
Description: policy.Description,
Strategy: policy.Strategy.ToString(),
TotalCapacity: policy.TotalCapacity,
MinimumPerTenant: policy.MinimumPerTenant,
MaximumPerTenant: policy.MaximumPerTenant,
ReservedCapacity: policy.ReservedCapacity,
AllowBurst: policy.AllowBurst,
BurstMultiplier: policy.BurstMultiplier,
Priority: policy.Priority,
Active: policy.Active,
JobType: policy.JobType,
CreatedAt: policy.CreatedAt,
UpdatedAt: policy.UpdatedAt,
UpdatedBy: policy.UpdatedBy);
}
/// <summary>
/// Response for a quota allocation policy list.
/// </summary>
public sealed record QuotaAllocationPolicyListResponse(
IReadOnlyList<QuotaAllocationPolicyResponse> Items,
string? NextCursor);
/// <summary>
/// Response for a quota allocation calculation.
/// </summary>
public sealed record QuotaAllocationResponse(
string TenantId,
int AllocatedQuota,
int BurstCapacity,
int ReservedCapacity,
bool WasConstrained,
string? ConstraintReason,
Guid PolicyId,
DateTimeOffset CalculatedAt)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static QuotaAllocationResponse FromDomain(QuotaAllocationResult result) =>
new(
TenantId: result.TenantId,
AllocatedQuota: result.AllocatedQuota,
BurstCapacity: result.BurstCapacity,
ReservedCapacity: result.ReservedCapacity,
WasConstrained: result.WasConstrained,
ConstraintReason: result.ConstraintReason,
PolicyId: result.PolicyId,
CalculatedAt: result.CalculatedAt);
}
/// <summary>
/// Response for a quota request.
/// </summary>
public sealed record QuotaRequestResponse(
bool IsGranted,
int GrantedAmount,
int RequestedAmount,
bool UsedBurst,
int RemainingQuota,
string? DenialReason,
TimeSpan? RetryAfter)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static QuotaRequestResponse FromDomain(QuotaRequestResult result) =>
new(
IsGranted: result.IsGranted,
GrantedAmount: result.GrantedAmount,
RequestedAmount: result.RequestedAmount,
UsedBurst: result.UsedBurst,
RemainingQuota: result.RemainingQuota,
DenialReason: result.DenialReason,
RetryAfter: result.RetryAfter);
}
/// <summary>
/// Request to acquire quota capacity.
/// </summary>
public sealed record RequestQuotaRequest(
string? JobType,
int RequestedAmount);
/// <summary>
/// Request to release quota.
/// </summary>
public sealed record ReleaseQuotaRequest(
string? JobType,
int ReleasedAmount);
/// <summary>
/// Response for tenant quota status.
/// </summary>
public sealed record TenantQuotaStatusResponse(
string TenantId,
int AllocatedQuota,
int UsedQuota,
int AvailableQuota,
int BurstAvailable,
int ReservedCapacity,
bool IsUsingBurst,
double UtilizationPercent,
Guid? PolicyId,
int PriorityTier,
DateTimeOffset CalculatedAt)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static TenantQuotaStatusResponse FromDomain(TenantQuotaStatus status) =>
new(
TenantId: status.TenantId,
AllocatedQuota: status.AllocatedQuota,
UsedQuota: status.UsedQuota,
AvailableQuota: status.AvailableQuota,
BurstAvailable: status.BurstAvailable,
ReservedCapacity: status.ReservedCapacity,
IsUsingBurst: status.IsUsingBurst,
UtilizationPercent: status.UtilizationPercent,
PolicyId: status.PolicyId,
PriorityTier: status.PriorityTier,
CalculatedAt: status.CalculatedAt);
}
/// <summary>
/// Response for quota governance summary.
/// </summary>
public sealed record QuotaGovernanceSummaryResponse(
int TotalCapacity,
int TotalAllocated,
int TotalUsed,
int TotalReserved,
int ActiveTenantCount,
int TenantsBursting,
int TenantsAtLimit,
double OverallUtilization,
int ActivePolicies,
DateTimeOffset CalculatedAt)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static QuotaGovernanceSummaryResponse FromDomain(QuotaGovernanceSummary summary) =>
new(
TotalCapacity: summary.TotalCapacity,
TotalAllocated: summary.TotalAllocated,
TotalUsed: summary.TotalUsed,
TotalReserved: summary.TotalReserved,
ActiveTenantCount: summary.ActiveTenantCount,
TenantsBursting: summary.TenantsBursting,
TenantsAtLimit: summary.TenantsAtLimit,
OverallUtilization: summary.OverallUtilization,
ActivePolicies: summary.ActivePolicies,
CalculatedAt: summary.CalculatedAt);
}
/// <summary>
/// Response for scheduling check.
/// </summary>
public sealed record SchedulingCheckResponse(
bool IsAllowed,
string? BlockReason,
TimeSpan? RetryAfter,
bool CircuitBreakerBlocking,
bool QuotaExhausted,
TenantQuotaStatusResponse? QuotaStatus)
{
/// <summary>
/// Creates a response from a domain object.
/// </summary>
public static SchedulingCheckResponse FromDomain(SchedulingCheckResult result) =>
new(
IsAllowed: result.IsAllowed,
BlockReason: result.BlockReason,
RetryAfter: result.RetryAfter,
CircuitBreakerBlocking: result.CircuitBreakerBlocking,
QuotaExhausted: result.QuotaExhausted,
QuotaStatus: result.QuotaStatus != null
? TenantQuotaStatusResponse.FromDomain(result.QuotaStatus)
: null);
}


@@ -1,41 +0,0 @@
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Risk snapshot surfaced in promotion/approval contracts (Pack 13/17).
/// </summary>
public sealed record PromotionRiskSnapshot(
string EnvironmentId,
int CriticalReachable,
int HighReachable,
int HighNotReachable,
decimal VexCoveragePercent,
string Severity);
/// <summary>
/// Hybrid reachability coverage (build/image/runtime) surfaced as confidence.
/// </summary>
public sealed record HybridReachabilityCoverage(
int BuildCoveragePercent,
int ImageCoveragePercent,
int RuntimeCoveragePercent,
int EvidenceAgeHours);
/// <summary>
/// Operations/data confidence summary consumed by approvals and promotions.
/// </summary>
public sealed record OpsDataConfidence(
string Status,
string Summary,
int TrustScore,
DateTimeOffset DataAsOf,
IReadOnlyList<string> Signals);
/// <summary>
/// Evidence packet summary for approval decision packets.
/// </summary>
public sealed record ApprovalEvidencePacket(
string DecisionDigest,
string PolicyDecisionDsse,
string SbomSnapshotId,
string ReachabilitySnapshotId,
string DataIntegritySnapshotId);


@@ -1,55 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Response representing a run (batch execution).
/// </summary>
public sealed record RunResponse(
Guid RunId,
Guid SourceId,
string RunType,
string Status,
string? CorrelationId,
int TotalJobs,
int CompletedJobs,
int SucceededJobs,
int FailedJobs,
DateTimeOffset CreatedAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
string CreatedBy)
{
public static RunResponse FromDomain(Run run) => new(
run.RunId,
run.SourceId,
run.RunType,
run.Status.ToString().ToLowerInvariant(),
run.CorrelationId,
run.TotalJobs,
run.CompletedJobs,
run.SucceededJobs,
run.FailedJobs,
run.CreatedAt,
run.StartedAt,
run.CompletedAt,
run.CreatedBy);
}
/// <summary>
/// Response containing a list of runs.
/// </summary>
public sealed record RunListResponse(
IReadOnlyList<RunResponse> Runs,
string? NextCursor);
/// <summary>
/// Summary statistics for runs.
/// </summary>
public sealed record RunSummary(
int TotalRuns,
int PendingRuns,
int RunningRuns,
int SucceededRuns,
int FailedRuns,
int CanceledRuns);


@@ -1,38 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Response representing a job source.
/// </summary>
public sealed record SourceResponse(
Guid SourceId,
string Name,
string SourceType,
bool Enabled,
bool Paused,
string? PauseReason,
string? PauseTicket,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
string UpdatedBy)
{
public static SourceResponse FromDomain(Source source) => new(
source.SourceId,
source.Name,
source.SourceType,
source.Enabled,
source.Paused,
source.PauseReason,
source.PauseTicket,
source.CreatedAt,
source.UpdatedAt,
source.UpdatedBy);
}
/// <summary>
/// Response containing a list of sources.
/// </summary>
public sealed record SourceListResponse(
IReadOnlyList<SourceResponse> Sources,
string? NextCursor);


@@ -1,157 +0,0 @@
namespace StellaOps.JobEngine.WebService.Contracts;
/// <summary>
/// Request to claim a job for execution.
/// </summary>
/// <param name="WorkerId">Unique identifier for the worker.</param>
/// <param name="TaskRunnerId">Optional task runner identifier.</param>
/// <param name="JobType">Optional job type filter to claim specific job types.</param>
/// <param name="LeaseSeconds">Requested lease duration in seconds (capped by server).</param>
/// <param name="IdempotencyKey">Optional idempotency key to prevent duplicate claims.</param>
public sealed record ClaimRequest(
string WorkerId,
string? TaskRunnerId,
string? JobType,
int? LeaseSeconds,
string? IdempotencyKey);
/// <summary>
/// Response after successfully claiming a job.
/// </summary>
/// <param name="JobId">Claimed job identifier.</param>
/// <param name="LeaseId">Lease token required for subsequent operations.</param>
/// <param name="JobType">Type of the claimed job.</param>
/// <param name="Payload">Job payload JSON.</param>
/// <param name="PayloadDigest">SHA-256 digest of the payload.</param>
/// <param name="Attempt">Current attempt number.</param>
/// <param name="MaxAttempts">Maximum allowed attempts.</param>
/// <param name="LeaseUntil">Lease expiration time (UTC).</param>
/// <param name="IdempotencyKey">Job's idempotency key.</param>
/// <param name="CorrelationId">Correlation ID for tracing.</param>
/// <param name="RunId">Parent run ID if applicable.</param>
/// <param name="ProjectId">Project scope if applicable.</param>
public sealed record ClaimResponse(
Guid JobId,
Guid LeaseId,
string JobType,
string Payload,
string PayloadDigest,
int Attempt,
int MaxAttempts,
DateTimeOffset LeaseUntil,
string IdempotencyKey,
string? CorrelationId,
Guid? RunId,
string? ProjectId);
/// <summary>
/// Request to extend a job lease (heartbeat).
/// </summary>
/// <param name="LeaseId">Current lease token.</param>
/// <param name="ExtendSeconds">Requested extension in seconds.</param>
/// <param name="IdempotencyKey">Idempotency key for the heartbeat request.</param>
public sealed record HeartbeatRequest(
Guid LeaseId,
int? ExtendSeconds,
string? IdempotencyKey);
/// <summary>
/// Response after successfully extending a lease.
/// </summary>
/// <param name="JobId">Job identifier.</param>
/// <param name="LeaseId">Lease token (unchanged).</param>
/// <param name="LeaseUntil">New lease expiration time (UTC).</param>
/// <param name="Acknowledged">Whether the heartbeat was acknowledged.</param>
public sealed record HeartbeatResponse(
Guid JobId,
Guid LeaseId,
DateTimeOffset LeaseUntil,
bool Acknowledged);
/// <summary>
/// Request to report job progress.
/// </summary>
/// <param name="LeaseId">Current lease token.</param>
/// <param name="ProgressPercent">Progress percentage (0-100).</param>
/// <param name="Message">Optional progress message.</param>
/// <param name="Metadata">Optional structured progress metadata JSON.</param>
/// <param name="IdempotencyKey">Idempotency key for the progress report.</param>
public sealed record ProgressRequest(
Guid LeaseId,
double? ProgressPercent,
string? Message,
string? Metadata,
string? IdempotencyKey);
/// <summary>
/// Response after reporting progress.
/// </summary>
/// <param name="JobId">Job identifier.</param>
/// <param name="Acknowledged">Whether the progress was recorded.</param>
/// <param name="LeaseUntil">Current lease expiration (informational).</param>
public sealed record ProgressResponse(
Guid JobId,
bool Acknowledged,
DateTimeOffset LeaseUntil);
/// <summary>
/// Request to complete a job (success or failure).
/// </summary>
/// <param name="LeaseId">Current lease token.</param>
/// <param name="Success">Whether the job succeeded.</param>
/// <param name="Reason">Completion reason (required for failures, optional for success).</param>
/// <param name="Artifacts">Artifacts produced by the job.</param>
/// <param name="ResultDigest">SHA-256 digest of the result for verification.</param>
/// <param name="IdempotencyKey">Idempotency key for the completion request.</param>
public sealed record CompleteRequest(
Guid LeaseId,
bool Success,
string? Reason,
IReadOnlyList<ArtifactInput>? Artifacts,
string? ResultDigest,
string? IdempotencyKey);
/// <summary>
/// Artifact metadata for job completion.
/// </summary>
/// <param name="ArtifactType">Type of artifact (e.g., "sbom", "scan-result", "log").</param>
/// <param name="Uri">Storage URI where artifact is stored.</param>
/// <param name="Digest">SHA-256 content digest for integrity.</param>
/// <param name="MimeType">MIME type of the artifact.</param>
/// <param name="SizeBytes">Size in bytes.</param>
/// <param name="Metadata">Optional structured metadata JSON.</param>
public sealed record ArtifactInput(
string ArtifactType,
string Uri,
string Digest,
string? MimeType,
long? SizeBytes,
string? Metadata);
/// <summary>
/// Response after completing a job.
/// </summary>
/// <param name="JobId">Job identifier.</param>
/// <param name="Status">Final job status.</param>
/// <param name="CompletedAt">Completion timestamp (UTC).</param>
/// <param name="ArtifactIds">IDs of created artifacts.</param>
/// <param name="DurationSeconds">Job execution duration.</param>
public sealed record CompleteResponse(
Guid JobId,
string Status,
DateTimeOffset CompletedAt,
IReadOnlyList<Guid> ArtifactIds,
double DurationSeconds);
/// <summary>
/// Error response for worker operations.
/// </summary>
/// <param name="Error">Error code.</param>
/// <param name="Message">Human-readable error message.</param>
/// <param name="JobId">Job ID if applicable.</param>
/// <param name="RetryAfterSeconds">Suggested retry delay for transient errors.</param>
public sealed record WorkerErrorResponse(
string Error,
string Message,
Guid? JobId,
int? RetryAfterSeconds);


@@ -1,501 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Approval endpoints for the release orchestrator.
/// Routes: /api/release-orchestrator/approvals
/// </summary>
public static class ApprovalEndpoints
{
public static IEndpointRouteBuilder MapApprovalEndpoints(this IEndpointRouteBuilder app)
{
MapApprovalGroup(app, "/api/release-orchestrator/approvals", includeRouteNames: true);
MapApprovalGroup(app, "/api/v1/release-orchestrator/approvals", includeRouteNames: false);
return app;
}
private static void MapApprovalGroup(
IEndpointRouteBuilder app,
string prefix,
bool includeRouteNames)
{
var group = app.MapGroup(prefix)
.WithTags("Approvals")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
var list = group.MapGet(string.Empty, ListApprovals)
.WithDescription(_t("orchestrator.approval.list_description"));
if (includeRouteNames)
{
list.WithName("Approval_List");
}
var detail = group.MapGet("/{id}", GetApproval)
.WithDescription(_t("orchestrator.approval.get_description"));
if (includeRouteNames)
{
detail.WithName("Approval_Get");
}
var approve = group.MapPost("/{id}/approve", Approve)
.WithDescription(_t("orchestrator.approval.approve_description"))
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
approve.WithName("Approval_Approve");
}
var reject = group.MapPost("/{id}/reject", Reject)
.WithDescription(_t("orchestrator.approval.reject_description"))
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
reject.WithName("Approval_Reject");
}
var batchApprove = group.MapPost("/batch-approve", BatchApprove)
.WithDescription(_t("orchestrator.approval.create_description"))
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
batchApprove.WithName("Approval_BatchApprove");
}
var batchReject = group.MapPost("/batch-reject", BatchReject)
.WithDescription(_t("orchestrator.approval.cancel_description"))
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
batchReject.WithName("Approval_BatchReject");
}
}
private static IResult ListApprovals(
[FromQuery] string? statuses,
[FromQuery] string? urgencies,
[FromQuery] string? environment)
{
var approvals = SeedData.Approvals
.Select(WithDerivedSignals)
.Select(ToSummary)
.AsEnumerable();
if (!string.IsNullOrWhiteSpace(statuses))
{
var statusList = statuses.Split(',', StringSplitOptions.RemoveEmptyEntries);
approvals = approvals.Where(a => statusList.Contains(a.Status, StringComparer.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(urgencies))
{
var urgencyList = urgencies.Split(',', StringSplitOptions.RemoveEmptyEntries);
approvals = approvals.Where(a => urgencyList.Contains(a.Urgency, StringComparer.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(environment))
{
approvals = approvals.Where(a =>
string.Equals(a.TargetEnvironment, environment, StringComparison.OrdinalIgnoreCase));
}
return Results.Ok(approvals.ToList());
}
private static IResult GetApproval(string id)
{
var approval = SeedData.Approvals.FirstOrDefault(a => a.Id == id);
return approval is not null
? Results.Ok(WithDerivedSignals(approval))
: Results.NotFound();
}
private static IResult Approve(string id, [FromBody] ApprovalActionDto request)
{
var approval = SeedData.Approvals.FirstOrDefault(a => a.Id == id);
if (approval is null) return Results.NotFound();
return Results.Ok(WithDerivedSignals(approval with
{
CurrentApprovals = approval.CurrentApprovals + 1,
Status = approval.CurrentApprovals + 1 >= approval.RequiredApprovals ? "approved" : approval.Status,
}));
}
private static IResult Reject(string id, [FromBody] ApprovalActionDto request)
{
var approval = SeedData.Approvals.FirstOrDefault(a => a.Id == id);
if (approval is null) return Results.NotFound();
return Results.Ok(WithDerivedSignals(approval with { Status = "rejected" }));
}
private static IResult BatchApprove([FromBody] BatchActionDto request)
{
return Results.NoContent();
}
private static IResult BatchReject([FromBody] BatchActionDto request)
{
return Results.NoContent();
}
public static ApprovalDto WithDerivedSignals(ApprovalDto approval)
{
var manifestDigest = approval.ManifestDigest
?? approval.ReleaseComponents.FirstOrDefault()?.Digest
?? $"sha256:{approval.ReleaseId.Replace("-", string.Empty, StringComparison.Ordinal)}";
var risk = approval.RiskSnapshot
?? ReleaseControlSignalCatalog.GetRiskSnapshot(approval.ReleaseId, approval.TargetEnvironment);
var coverage = approval.ReachabilityCoverage
?? ReleaseControlSignalCatalog.GetCoverage(approval.ReleaseId);
var opsConfidence = approval.OpsConfidence
?? ReleaseControlSignalCatalog.GetOpsConfidence(approval.TargetEnvironment);
var evidencePacket = approval.EvidencePacket
?? ReleaseControlSignalCatalog.BuildEvidencePacket(approval.Id, approval.ReleaseId);
return approval with
{
ManifestDigest = manifestDigest,
RiskSnapshot = risk,
ReachabilityCoverage = coverage,
OpsConfidence = opsConfidence,
EvidencePacket = evidencePacket,
DecisionDigest = approval.DecisionDigest ?? evidencePacket.DecisionDigest,
};
}
public static ApprovalSummaryDto ToSummary(ApprovalDto approval)
{
var enriched = WithDerivedSignals(approval);
return new ApprovalSummaryDto
{
Id = enriched.Id,
ReleaseId = enriched.ReleaseId,
ReleaseName = enriched.ReleaseName,
ReleaseVersion = enriched.ReleaseVersion,
SourceEnvironment = enriched.SourceEnvironment,
TargetEnvironment = enriched.TargetEnvironment,
RequestedBy = enriched.RequestedBy,
RequestedAt = enriched.RequestedAt,
Urgency = enriched.Urgency,
Justification = enriched.Justification,
Status = enriched.Status,
CurrentApprovals = enriched.CurrentApprovals,
RequiredApprovals = enriched.RequiredApprovals,
GatesPassed = enriched.GatesPassed,
ScheduledTime = enriched.ScheduledTime,
ExpiresAt = enriched.ExpiresAt,
ManifestDigest = enriched.ManifestDigest,
RiskSnapshot = enriched.RiskSnapshot,
ReachabilityCoverage = enriched.ReachabilityCoverage,
OpsConfidence = enriched.OpsConfidence,
DecisionDigest = enriched.DecisionDigest,
};
}
// ---- DTOs ----
public sealed record ApprovalSummaryDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string ReleaseName { get; init; }
public required string ReleaseVersion { get; init; }
public required string SourceEnvironment { get; init; }
public required string TargetEnvironment { get; init; }
public required string RequestedBy { get; init; }
public required string RequestedAt { get; init; }
public required string Urgency { get; init; }
public required string Justification { get; init; }
public required string Status { get; init; }
public int CurrentApprovals { get; init; }
public int RequiredApprovals { get; init; }
public bool GatesPassed { get; init; }
public string? ScheduledTime { get; init; }
public string? ExpiresAt { get; init; }
public string? ManifestDigest { get; init; }
public PromotionRiskSnapshot? RiskSnapshot { get; init; }
public HybridReachabilityCoverage? ReachabilityCoverage { get; init; }
public OpsDataConfidence? OpsConfidence { get; init; }
public string? DecisionDigest { get; init; }
}
public sealed record ApprovalDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string ReleaseName { get; init; }
public required string ReleaseVersion { get; init; }
public required string SourceEnvironment { get; init; }
public required string TargetEnvironment { get; init; }
public required string RequestedBy { get; init; }
public required string RequestedAt { get; init; }
public required string Urgency { get; init; }
public required string Justification { get; init; }
public required string Status { get; init; }
public int CurrentApprovals { get; init; }
public int RequiredApprovals { get; init; }
public bool GatesPassed { get; init; }
public string? ScheduledTime { get; init; }
public string? ExpiresAt { get; init; }
public List<GateResultDto> GateResults { get; init; } = new();
public List<ApprovalActionRecordDto> Actions { get; init; } = new();
public List<ApproverDto> Approvers { get; init; } = new();
public List<ReleaseComponentSummaryDto> ReleaseComponents { get; init; } = new();
public string? ManifestDigest { get; init; }
public PromotionRiskSnapshot? RiskSnapshot { get; init; }
public HybridReachabilityCoverage? ReachabilityCoverage { get; init; }
public OpsDataConfidence? OpsConfidence { get; init; }
public ApprovalEvidencePacket? EvidencePacket { get; init; }
public string? DecisionDigest { get; init; }
}
public sealed record GateResultDto
{
public required string GateId { get; init; }
public required string GateName { get; init; }
public required string Type { get; init; }
public required string Status { get; init; }
public required string Message { get; init; }
public Dictionary<string, object> Details { get; init; } = new();
public string? EvaluatedAt { get; init; }
}
public sealed record ApprovalActionRecordDto
{
public required string Id { get; init; }
public required string ApprovalId { get; init; }
public required string Action { get; init; }
public required string Actor { get; init; }
public required string Comment { get; init; }
public required string Timestamp { get; init; }
}
public sealed record ApproverDto
{
public required string Id { get; init; }
public required string Name { get; init; }
public required string Email { get; init; }
public bool HasApproved { get; init; }
public string? ApprovedAt { get; init; }
}
public sealed record ReleaseComponentSummaryDto
{
public required string Name { get; init; }
public required string Version { get; init; }
public required string Digest { get; init; }
}
public sealed record ApprovalActionDto
{
public string? Comment { get; init; }
}
public sealed record BatchActionDto
{
public string[]? Ids { get; init; }
public string? Comment { get; init; }
}
// ---- Seed Data ----
// Generates relative dates so approvals always look fresh regardless of when the service starts.
internal static class SeedData
{
private static string Ago(int hours) => DateTimeOffset.UtcNow.AddHours(-hours).ToString("o");
private static string FromNow(int hours) => DateTimeOffset.UtcNow.AddHours(hours).ToString("o");
public static readonly List<ApprovalDto> Approvals = new()
{
// ── Pending: 1/2 approved, gates OK, normal priority ──
new()
{
Id = "apr-001", ReleaseId = "rel-001", ReleaseName = "API Gateway", ReleaseVersion = "2.4.1",
SourceEnvironment = "staging", TargetEnvironment = "production",
RequestedBy = "alice.johnson", RequestedAt = Ago(3),
Urgency = "normal", Justification = "Scheduled release with new rate limiting feature and bug fixes.",
Status = "pending", CurrentApprovals = 1, RequiredApprovals = 2, GatesPassed = true,
ExpiresAt = FromNow(45),
GateResults = new()
{
new() { GateId = "g1", GateName = "Security Scan", Type = "security", Status = "passed", Message = "No vulnerabilities found", EvaluatedAt = Ago(3) },
new() { GateId = "g2", GateName = "Policy Compliance", Type = "policy", Status = "passed", Message = "All policies satisfied", EvaluatedAt = Ago(3) },
new() { GateId = "g3", GateName = "Quality Gates", Type = "quality", Status = "passed", Message = "Code coverage: 85%", EvaluatedAt = Ago(3) },
},
Actions = new()
{
new() { Id = "act-1", ApprovalId = "apr-001", Action = "approved", Actor = "bob.smith", Comment = "Looks good, tests are passing.", Timestamp = Ago(2) },
},
Approvers = new()
{
new() { Id = "u1", Name = "Bob Smith", Email = "bob.smith@example.com", HasApproved = true, ApprovedAt = Ago(2) },
new() { Id = "u2", Name = "Carol Davis", Email = "carol.davis@example.com" },
},
ReleaseComponents = new()
{
new() { Name = "api-gateway", Version = "2.4.1", Digest = "sha256:abc123def456" },
new() { Name = "rate-limiter", Version = "1.0.5", Digest = "sha256:789xyz012abc" },
},
},
// ── Pending: 0/2 approved, gates FAILING, high priority ──
new()
{
Id = "apr-002", ReleaseId = "rel-002", ReleaseName = "User Service", ReleaseVersion = "3.0.0-rc1",
SourceEnvironment = "staging", TargetEnvironment = "production",
RequestedBy = "david.wilson", RequestedAt = Ago(1),
Urgency = "high", Justification = "Critical fix for user authentication timeout issue.",
Status = "pending", CurrentApprovals = 0, RequiredApprovals = 2, GatesPassed = false,
ExpiresAt = FromNow(23),
GateResults = new()
{
new() { GateId = "g1", GateName = "Security Scan", Type = "security", Status = "warning", Message = "2 low severity vulnerabilities", EvaluatedAt = Ago(1) },
new() { GateId = "g2", GateName = "Policy Compliance", Type = "policy", Status = "passed", Message = "All policies satisfied", EvaluatedAt = Ago(1) },
new() { GateId = "g3", GateName = "Quality Gates", Type = "quality", Status = "failed", Message = "Code coverage: 72% (min 80%)", EvaluatedAt = Ago(1) },
},
Approvers = new()
{
new() { Id = "u1", Name = "Bob Smith", Email = "bob.smith@example.com" },
new() { Id = "u3", Name = "Emily Chen", Email = "emily.chen@example.com" },
},
ReleaseComponents = new()
{
new() { Name = "user-service", Version = "3.0.0-rc1", Digest = "sha256:user123def456" },
},
},
// ── Pending: 0/1 approved, gates OK, critical, expiring soon ──
new()
{
Id = "apr-005", ReleaseId = "rel-005", ReleaseName = "Auth Service", ReleaseVersion = "1.8.3-hotfix",
SourceEnvironment = "staging", TargetEnvironment = "production",
RequestedBy = "frank.miller", RequestedAt = Ago(6),
Urgency = "critical", Justification = "Hotfix: OAuth token refresh loop causing 503 cascade.",
Status = "pending", CurrentApprovals = 0, RequiredApprovals = 1, GatesPassed = true,
ExpiresAt = FromNow(2),
GateResults = new()
{
new() { GateId = "g1", GateName = "Security Scan", Type = "security", Status = "passed", Message = "No vulnerabilities", EvaluatedAt = Ago(6) },
new() { GateId = "g2", GateName = "Policy Compliance", Type = "policy", Status = "passed", Message = "Hotfix policy waiver applied", EvaluatedAt = Ago(6) },
},
Approvers = new()
{
new() { Id = "u4", Name = "Grace Lee", Email = "grace.lee@example.com" },
},
ReleaseComponents = new()
{
new() { Name = "auth-service", Version = "1.8.3-hotfix", Digest = "sha256:auth789ghi012" },
},
},
// ── Pending: dev → staging, gates OK, low priority ──
new()
{
Id = "apr-006", ReleaseId = "rel-006", ReleaseName = "Billing Dashboard", ReleaseVersion = "4.2.0",
SourceEnvironment = "dev", TargetEnvironment = "staging",
RequestedBy = "alice.johnson", RequestedAt = Ago(12),
Urgency = "low", Justification = "New billing analytics dashboard with chart components.",
Status = "pending", CurrentApprovals = 0, RequiredApprovals = 1, GatesPassed = true,
ExpiresAt = FromNow(60),
GateResults = new()
{
new() { GateId = "g1", GateName = "Security Scan", Type = "security", Status = "passed", Message = "Clean scan", EvaluatedAt = Ago(12) },
new() { GateId = "g2", GateName = "Quality Gates", Type = "quality", Status = "passed", Message = "Coverage 91%", EvaluatedAt = Ago(12) },
},
Approvers = new()
{
new() { Id = "u3", Name = "Emily Chen", Email = "emily.chen@example.com" },
},
ReleaseComponents = new()
{
new() { Name = "billing-dashboard", Version = "4.2.0", Digest = "sha256:bill456def789" },
},
},
// ── Approved (completed): critical hotfix ──
new()
{
Id = "apr-003", ReleaseId = "rel-003", ReleaseName = "Payment Gateway", ReleaseVersion = "1.5.2",
SourceEnvironment = "dev", TargetEnvironment = "staging",
RequestedBy = "frank.miller", RequestedAt = Ago(48),
Urgency = "critical", Justification = "Emergency fix for payment processing failure.",
Status = "approved", CurrentApprovals = 2, RequiredApprovals = 2, GatesPassed = true,
ScheduledTime = Ago(46), ExpiresAt = Ago(24),
Actions = new()
{
new() { Id = "act-2", ApprovalId = "apr-003", Action = "approved", Actor = "carol.davis", Comment = "Urgent fix approved.", Timestamp = Ago(47) },
new() { Id = "act-3", ApprovalId = "apr-003", Action = "approved", Actor = "grace.lee", Comment = "Confirmed, proceed.", Timestamp = Ago(46) },
},
Approvers = new()
{
new() { Id = "u2", Name = "Carol Davis", Email = "carol.davis@example.com", HasApproved = true, ApprovedAt = Ago(47) },
new() { Id = "u4", Name = "Grace Lee", Email = "grace.lee@example.com", HasApproved = true, ApprovedAt = Ago(46) },
},
ReleaseComponents = new()
{
new() { Name = "payment-gateway", Version = "1.5.2", Digest = "sha256:pay456abc789" },
},
},
// ── Rejected: missing tests ──
new()
{
Id = "apr-004", ReleaseId = "rel-004", ReleaseName = "Notification Service", ReleaseVersion = "2.0.0",
SourceEnvironment = "staging", TargetEnvironment = "production",
RequestedBy = "alice.johnson", RequestedAt = Ago(72),
Urgency = "low", Justification = "Feature release with new email templates.",
Status = "rejected", CurrentApprovals = 0, RequiredApprovals = 2, GatesPassed = true,
ExpiresAt = Ago(24),
Actions = new()
{
new() { Id = "act-4", ApprovalId = "apr-004", Action = "rejected", Actor = "bob.smith", Comment = "Missing integration tests for the email template renderer.", Timestamp = Ago(70) },
},
Approvers = new()
{
new() { Id = "u1", Name = "Bob Smith", Email = "bob.smith@example.com" },
},
ReleaseComponents = new()
{
new() { Name = "notification-service", Version = "2.0.0", Digest = "sha256:notify789abc" },
},
},
// ── Approved: routine promotion ──
new()
{
Id = "apr-007", ReleaseId = "rel-007", ReleaseName = "Config Service", ReleaseVersion = "1.12.0",
SourceEnvironment = "staging", TargetEnvironment = "production",
RequestedBy = "david.wilson", RequestedAt = Ago(96),
Urgency = "normal", Justification = "Routine config service update with new environment variable support.",
Status = "approved", CurrentApprovals = 2, RequiredApprovals = 2, GatesPassed = true,
ExpiresAt = Ago(48),
Actions = new()
{
new() { Id = "act-5", ApprovalId = "apr-007", Action = "approved", Actor = "emily.chen", Comment = "LGTM.", Timestamp = Ago(94) },
new() { Id = "act-6", ApprovalId = "apr-007", Action = "approved", Actor = "bob.smith", Comment = "Approved.", Timestamp = Ago(93) },
},
Approvers = new()
{
new() { Id = "u3", Name = "Emily Chen", Email = "emily.chen@example.com", HasApproved = true, ApprovedAt = Ago(94) },
new() { Id = "u1", Name = "Bob Smith", Email = "bob.smith@example.com", HasApproved = true, ApprovedAt = Ago(93) },
},
ReleaseComponents = new()
{
new() { Name = "config-service", Version = "1.12.0", Digest = "sha256:cfg012xyz345" },
},
},
};
}
}
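The `Ago`/`FromNow` helpers above re-age the seed set on every process start, so pending entries always look fresh. A minimal standalone sketch (hypothetical local copies of the helpers, not the real DTOs) showing how an "actionable" filter behaves against such relative timestamps:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical copies of the seed helpers: round-trip ("o") ISO-8601 strings
// relative to "now", so comparisons stay valid regardless of startup time.
static string Ago(int hours) => DateTimeOffset.UtcNow.AddHours(-hours).ToString("o");
static string FromNow(int hours) => DateTimeOffset.UtcNow.AddHours(hours).ToString("o");

var approvals = new List<(string Id, string Status, string ExpiresAt)>
{
    ("apr-001", "pending",  FromNow(45)),  // fresh, still actionable
    ("apr-004", "rejected", Ago(24)),      // terminal, expiry already in the past
};

// "Actionable" = still pending and not yet expired.
var actionable = approvals
    .Where(a => a.Status == "pending" && DateTimeOffset.Parse(a.ExpiresAt) > DateTimeOffset.UtcNow)
    .ToList();

Console.WriteLine(string.Join(",", actionable.Select(a => a.Id))); // apr-001
```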


@@ -1,261 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for audit log operations.
/// </summary>
public static class AuditEndpoints
{
/// <summary>
/// Maps audit endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapAuditEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/audit")
.WithTags("Orchestrator Audit")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// List and get operations
group.MapGet(string.Empty, ListAuditEntries)
.WithName("Orchestrator_ListAuditEntries")
.WithDescription(_t("orchestrator.audit.list_description"));
group.MapGet("{entryId:guid}", GetAuditEntry)
.WithName("Orchestrator_GetAuditEntry")
.WithDescription(_t("orchestrator.audit.get_description"));
group.MapGet("resource/{resourceType}/{resourceId:guid}", GetResourceHistory)
.WithName("Orchestrator_GetResourceHistory")
.WithDescription(_t("orchestrator.audit.get_resource_history_description"));
group.MapGet("latest", GetLatestEntry)
.WithName("Orchestrator_GetLatestAuditEntry")
.WithDescription(_t("orchestrator.audit.get_latest_description"));
group.MapGet("sequence/{startSeq:long}/{endSeq:long}", GetBySequenceRange)
.WithName("Orchestrator_GetAuditBySequence")
.WithDescription(_t("orchestrator.audit.get_by_sequence_description"));
// Summary and verification
group.MapGet("summary", GetAuditSummary)
.WithName("Orchestrator_GetAuditSummary")
.WithDescription(_t("orchestrator.audit.summary_description"));
group.MapGet("verify", VerifyAuditChain)
.WithName("Orchestrator_VerifyAuditChain")
.WithDescription(_t("orchestrator.audit.verify_description"));
return group;
}
private static async Task<IResult> ListAuditEntries(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
[FromQuery] string? eventType = null,
[FromQuery] string? resourceType = null,
[FromQuery] Guid? resourceId = null,
[FromQuery] string? actorId = null,
[FromQuery] DateTimeOffset? startTime = null,
[FromQuery] DateTimeOffset? endTime = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
AuditEventType? parsedEventType = null;
if (!string.IsNullOrEmpty(eventType) && Enum.TryParse<AuditEventType>(eventType, true, out var et))
{
parsedEventType = et;
}
var entries = await repository.ListAsync(
tenantId,
parsedEventType,
resourceType,
resourceId,
actorId,
startTime,
endTime,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(AuditEntryResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new AuditEntryListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetAuditEntry(
HttpContext context,
[FromRoute] Guid entryId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetByIdAsync(tenantId, entryId, cancellationToken).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(AuditEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetResourceHistory(
HttpContext context,
[FromRoute] string resourceType,
[FromRoute] Guid resourceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
[FromQuery] int? limit = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var entries = await repository.GetByResourceAsync(
tenantId,
resourceType,
resourceId,
effectiveLimit,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(AuditEntryResponse.FromDomain).ToList();
return Results.Ok(new AuditEntryListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetLatestEntry(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetLatestAsync(tenantId, cancellationToken).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(AuditEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetBySequenceRange(
HttpContext context,
[FromRoute] long startSeq,
[FromRoute] long endSeq,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
if (startSeq < 1 || endSeq < startSeq)
{
return Results.BadRequest(new { error = _t("orchestrator.audit.error.invalid_sequence_range") });
}
var entries = await repository.GetBySequenceRangeAsync(
tenantId,
startSeq,
endSeq,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(AuditEntryResponse.FromDomain).ToList();
return Results.Ok(new AuditEntryListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetAuditSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
[FromQuery] DateTimeOffset? since = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var summary = await repository.GetSummaryAsync(tenantId, since, cancellationToken).ConfigureAwait(false);
return Results.Ok(AuditSummaryResponse.FromDomain(summary));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> VerifyAuditChain(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IAuditRepository repository,
[FromQuery] long? startSeq = null,
[FromQuery] long? endSeq = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await repository.VerifyChainAsync(tenantId, startSeq, endSeq, cancellationToken).ConfigureAwait(false);
Infrastructure.JobEngineMetrics.AuditChainVerified(tenantId, result.IsValid);
return Results.Ok(ChainVerificationResponse.FromDomain(result));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
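`VerifyAuditChain` above delegates to `IAuditRepository.VerifyChainAsync`; the repository's actual scheme is not shown in this diff. A hypothetical sketch of the general technique a sequence-numbered, tamper-evident audit chain uses: each entry commits to its predecessor's hash, so altering any entry invalidates every later link.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

// Hypothetical link function: hash of (previous hash, sequence number, payload).
static string Link(string prevHash, long seq, string payload) =>
    Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes($"{prevHash}|{seq}|{payload}")));

var entries = new (long Seq, string Payload)[] { (1, "run created"), (2, "job started"), (3, "job finished") };

// Build the chain from a fixed genesis value.
var hashes = new string[entries.Length];
var prev = "GENESIS";
for (var i = 0; i < entries.Length; i++)
{
    hashes[i] = Link(prev, entries[i].Seq, entries[i].Payload);
    prev = hashes[i];
}

// Verification recomputes every link in sequence order.
static bool Verify((long Seq, string Payload)[] entries, string[] hashes)
{
    var prev = "GENESIS";
    for (var i = 0; i < entries.Length; i++)
    {
        if (Link(prev, entries[i].Seq, entries[i].Payload) != hashes[i]) return false;
        prev = hashes[i];
    }
    return true;
}

Console.WriteLine(Verify(entries, hashes)); // True
entries[1].Payload = "tampered";            // mutate one entry in place
Console.WriteLine(Verify(entries, hashes)); // False
```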


@@ -1,258 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.Services;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for circuit breaker management.
/// </summary>
public static class CircuitBreakerEndpoints
{
/// <summary>
/// Maps circuit breaker endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapCircuitBreakerEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/circuit-breakers")
.WithTags("Orchestrator Circuit Breakers")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// List circuit breakers
group.MapGet(string.Empty, ListCircuitBreakers)
.WithName("Orchestrator_ListCircuitBreakers")
.WithDescription(_t("orchestrator.circuit_breaker.list_description"));
// Get specific circuit breaker
group.MapGet("{serviceId}", GetCircuitBreaker)
.WithName("Orchestrator_GetCircuitBreaker")
.WithDescription(_t("orchestrator.circuit_breaker.get_description"));
// Check if request is allowed
group.MapGet("{serviceId}/check", CheckCircuitBreaker)
.WithName("Orchestrator_CheckCircuitBreaker")
.WithDescription(_t("orchestrator.circuit_breaker.check_description"));
// Record success
group.MapPost("{serviceId}/success", RecordSuccess)
.WithName("Orchestrator_RecordCircuitBreakerSuccess")
.WithDescription(_t("orchestrator.circuit_breaker.record_success_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
// Record failure
group.MapPost("{serviceId}/failure", RecordFailure)
.WithName("Orchestrator_RecordCircuitBreakerFailure")
.WithDescription(_t("orchestrator.circuit_breaker.record_failure_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
// Force open
group.MapPost("{serviceId}/force-open", ForceOpen)
.WithName("Orchestrator_ForceOpenCircuitBreaker")
.WithDescription(_t("orchestrator.circuit_breaker.force_open_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
// Force close
group.MapPost("{serviceId}/force-close", ForceClose)
.WithName("Orchestrator_ForceCloseCircuitBreaker")
.WithDescription(_t("orchestrator.circuit_breaker.force_close_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
return group;
}
private static async Task<IResult> ListCircuitBreakers(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
[FromQuery] string? state = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
CircuitState? filterState = null;
if (!string.IsNullOrEmpty(state) && Enum.TryParse<CircuitState>(state, ignoreCase: true, out var parsed))
{
filterState = parsed;
}
var circuitBreakers = await service.ListAsync(tenantId, filterState, cancellationToken)
.ConfigureAwait(false);
var responses = circuitBreakers.Select(CircuitBreakerResponse.FromDomain).ToList();
return Results.Ok(new CircuitBreakerListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetCircuitBreaker(
HttpContext context,
[FromRoute] string serviceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var state = await service.GetStateAsync(tenantId, serviceId, cancellationToken).ConfigureAwait(false);
if (state is null)
{
return Results.NotFound();
}
return Results.Ok(CircuitBreakerResponse.FromDomain(state));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CheckCircuitBreaker(
HttpContext context,
[FromRoute] string serviceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await service.CheckAsync(tenantId, serviceId, cancellationToken).ConfigureAwait(false);
return Results.Ok(new CircuitBreakerCheckResponse(
IsAllowed: result.IsAllowed,
State: result.State.ToString(),
FailureRate: result.FailureRate,
TimeUntilRetry: result.TimeUntilRetry,
BlockReason: result.BlockReason));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> RecordSuccess(
HttpContext context,
[FromRoute] string serviceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
await service.RecordSuccessAsync(tenantId, serviceId, cancellationToken).ConfigureAwait(false);
return Results.Ok(new { recorded = true });
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> RecordFailure(
HttpContext context,
[FromRoute] string serviceId,
[FromBody] RecordFailureRequest? request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var failureReason = request?.FailureReason ?? "Unspecified failure";
await service.RecordFailureAsync(tenantId, serviceId, failureReason, cancellationToken).ConfigureAwait(false);
return Results.Ok(new { recorded = true });
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ForceOpen(
HttpContext context,
[FromRoute] string serviceId,
[FromBody] ForceOpenCircuitBreakerRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
if (string.IsNullOrWhiteSpace(request.Reason))
{
return Results.BadRequest(new { error = _t("orchestrator.circuit_breaker.error.force_open_reason_required") });
}
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
await service.ForceOpenAsync(tenantId, serviceId, request.Reason, actorId, cancellationToken).ConfigureAwait(false);
return Results.Ok(new { opened = true });
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ForceClose(
HttpContext context,
[FromRoute] string serviceId,
[FromBody] ForceCloseCircuitBreakerRequest? request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ICircuitBreakerService service,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
await service.ForceCloseAsync(tenantId, serviceId, actorId, cancellationToken).ConfigureAwait(false);
return Results.Ok(new { closed = true });
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
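The endpoints above expose check/success/failure plus operator force-open/force-close over `ICircuitBreakerService`, whose per-tenant implementation (retry timers, half-open probing) is not shown here. A minimal count-based sketch of the core state transitions, with hypothetical names:

```csharp
using System;

// Consecutive-failure breaker: trips open at a threshold, resets on success.
var threshold = 3;
var failures = 0;
var state = "closed";

void RecordFailure() { if (++failures >= threshold) state = "open"; }
void RecordSuccess() { failures = 0; state = "closed"; }
bool IsAllowed() => state != "open";  // open circuit blocks requests

RecordFailure(); RecordFailure(); RecordFailure();
Console.WriteLine(state);       // open
Console.WriteLine(IsAllowed()); // False

// Mirrors the force-close operator action: clear counters and admit traffic.
RecordSuccess();
Console.WriteLine(state);       // closed
```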


@@ -1,246 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Scheduling;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for job DAG (dependency graph).
/// </summary>
public static class DagEndpoints
{
/// <summary>
/// Maps DAG endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapDagEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/dag")
.WithTags("Orchestrator DAG")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet("run/{runId:guid}", GetRunDag)
.WithName("Orchestrator_GetRunDag")
.WithDescription(_t("orchestrator.dag.get_run_description"));
group.MapGet("run/{runId:guid}/edges", GetRunEdges)
.WithName("Orchestrator_GetRunEdges")
.WithDescription(_t("orchestrator.dag.get_run_edges_description"));
group.MapGet("run/{runId:guid}/ready-jobs", GetReadyJobs)
.WithName("Orchestrator_GetReadyJobs")
.WithDescription(_t("orchestrator.dag.get_ready_jobs_description"));
group.MapGet("run/{runId:guid}/blocked/{jobId:guid}", GetBlockedJobs)
.WithName("Orchestrator_GetBlockedJobs")
.WithDescription(_t("orchestrator.dag.get_blocked_jobs_description"));
group.MapGet("job/{jobId:guid}/parents", GetJobParents)
.WithName("Orchestrator_GetJobParents")
.WithDescription(_t("orchestrator.dag.get_job_parents_description"));
group.MapGet("job/{jobId:guid}/children", GetJobChildren)
.WithName("Orchestrator_GetJobChildren")
.WithDescription(_t("orchestrator.dag.get_job_children_description"));
return group;
}
private static async Task<IResult> GetRunDag(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IJobRepository jobRepository,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Verify run exists
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
// Get all edges
var edges = await dagEdgeRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var edgeResponses = edges.Select(DagEdgeResponse.FromDomain).ToList();
// Get all jobs for topological sort and critical path
var jobs = await jobRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
// Compute topological order
IReadOnlyList<Guid> topologicalOrder;
try
{
topologicalOrder = DagPlanner.TopologicalSort(jobs.Select(j => j.JobId), edges);
}
catch (InvalidOperationException)
{
// Cycle detected - return empty order
topologicalOrder = [];
}
// Compute critical path (using a fixed five-minute duration estimate per job for simplicity)
var criticalPath = DagPlanner.CalculateCriticalPath(jobs, edges, _ => TimeSpan.FromMinutes(5));
return Results.Ok(new DagResponse(
runId,
edgeResponses,
topologicalOrder,
criticalPath.CriticalPathJobIds,
criticalPath.TotalDuration));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetRunEdges(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Verify run exists
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
var edges = await dagEdgeRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var responses = edges.Select(DagEdgeResponse.FromDomain).ToList();
return Results.Ok(new DagEdgeListResponse(responses));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetReadyJobs(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IJobRepository jobRepository,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Verify run exists
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
var jobs = await jobRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var edges = await dagEdgeRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var readyJobs = DagPlanner.GetReadyJobs(jobs, edges);
var responses = readyJobs.Select(JobResponse.FromDomain).ToList();
return Results.Ok(new JobListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetBlockedJobs(
HttpContext context,
[FromRoute] Guid runId,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Verify run exists
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
var edges = await dagEdgeRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var blockedJobs = DagPlanner.GetBlockedJobs(jobId, edges);
return Results.Ok(new BlockedJobsResponse(jobId, blockedJobs.ToList()));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJobParents(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var edges = await dagEdgeRepository.GetParentEdgesAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
var responses = edges.Select(DagEdgeResponse.FromDomain).ToList();
return Results.Ok(new DagEdgeListResponse(responses));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJobChildren(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDagEdgeRepository dagEdgeRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var edges = await dagEdgeRepository.GetChildEdgesAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
var responses = edges.Select(DagEdgeResponse.FromDomain).ToList();
return Results.Ok(new DagEdgeListResponse(responses));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
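`GetRunDag` above treats a detected cycle as "no valid order" and returns an empty topological order. A sketch of that behavior using Kahn's algorithm over integer node ids (the real `DagPlanner.TopologicalSort` works on `Guid` jobs and edge records, and may differ internally):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Kahn's algorithm: repeatedly emit zero in-degree nodes. If nodes remain
// with positive in-degree, the graph has a cycle and an empty list is returned,
// mirroring the "cycle detected - return empty order" branch above.
static IReadOnlyList<int> TopoSort(IEnumerable<int> nodes, IEnumerable<(int Src, int Dst)> edges)
{
    var nodeList = nodes.ToList();
    var inDegree = nodeList.ToDictionary(n => n, _ => 0);
    var adjacency = nodeList.ToDictionary(n => n, _ => new List<int>());
    foreach (var (src, dst) in edges)
    {
        adjacency[src].Add(dst);
        inDegree[dst]++;
    }

    var ready = new Queue<int>(nodeList.Where(n => inDegree[n] == 0));
    var order = new List<int>();
    while (ready.Count > 0)
    {
        var n = ready.Dequeue();
        order.Add(n);
        foreach (var child in adjacency[n])
            if (--inDegree[child] == 0) ready.Enqueue(child);
    }

    return order.Count == nodeList.Count ? order : Array.Empty<int>();
}

Console.WriteLine(string.Join(" ", TopoSort(new[] { 1, 2, 3 }, new[] { (1, 2), (2, 3) }))); // 1 2 3
Console.WriteLine(TopoSort(new[] { 1, 2 }, new[] { (1, 2), (2, 1) }).Count);                // 0
```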


@@ -1,817 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using Npgsql;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.DeadLetter;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.WebService.Services;
using System;
using System.Globalization;
using System.Text;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for dead-letter store.
/// </summary>
public static class DeadLetterEndpoints
{
/// <summary>
/// Maps dead-letter endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapDeadLetterEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/deadletter")
.WithTags("Orchestrator Dead-Letter")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// Entry management
group.MapGet(string.Empty, ListEntries)
.WithName("Orchestrator_ListDeadLetterEntries")
.WithDescription(_t("orchestrator.dead_letter.list_description"));
group.MapGet("{entryId:guid}", GetEntry)
.WithName("Orchestrator_GetDeadLetterEntry")
.WithDescription(_t("orchestrator.dead_letter.get_description"));
group.MapGet("by-job/{jobId:guid}", GetEntryByJobId)
.WithName("Orchestrator_GetDeadLetterEntryByJobId")
.WithDescription(_t("orchestrator.dead_letter.get_by_job_description"));
group.MapGet("stats", GetStats)
.WithName("Orchestrator_GetDeadLetterStats")
.WithDescription(_t("orchestrator.dead_letter.stats_description"));
group.MapGet("export", ExportEntries)
.WithName("Orchestrator_ExportDeadLetterEntries")
.WithDescription(_t("orchestrator.dead_letter.export_description"));
group.MapGet("summary", GetActionableSummary)
.WithName("Orchestrator_GetDeadLetterSummary")
.WithDescription(_t("orchestrator.dead_letter.summary_description"));
// Replay operations
group.MapPost("{entryId:guid}/replay", ReplayEntry)
.WithName("Orchestrator_ReplayDeadLetterEntry")
.WithDescription(_t("orchestrator.dead_letter.replay_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPost("replay/batch", ReplayBatch)
.WithName("Orchestrator_ReplayDeadLetterBatch")
.WithDescription(_t("orchestrator.dead_letter.replay_batch_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPost("replay/pending", ReplayPending)
.WithName("Orchestrator_ReplayPendingDeadLetters")
.WithDescription(_t("orchestrator.dead_letter.replay_pending_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
// Resolution
group.MapPost("{entryId:guid}/resolve", ResolveEntry)
.WithName("Orchestrator_ResolveDeadLetterEntry")
.WithDescription(_t("orchestrator.dead_letter.resolve_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPost("resolve/batch", ResolveBatch)
.WithName("Orchestrator_ResolveDeadLetterBatch")
.WithDescription(_t("orchestrator.dead_letter.resolve_batch_description"))
.RequireAuthorization(JobEnginePolicies.Operate);
// Error classification reference
group.MapGet("error-codes", ListErrorCodes)
.WithName("Orchestrator_ListDeadLetterErrorCodes")
.WithDescription(_t("orchestrator.dead_letter.error_codes_description"));
// Audit
group.MapGet("{entryId:guid}/audit", GetReplayAudit)
.WithName("Orchestrator_GetDeadLetterReplayAudit")
.WithDescription(_t("orchestrator.dead_letter.replay_audit_description"));
return group;
}
private static async Task<IResult> ListEntries(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
[FromQuery] string? status = null,
[FromQuery] string? category = null,
[FromQuery] string? jobType = null,
[FromQuery] string? errorCode = null,
[FromQuery] Guid? sourceId = null,
[FromQuery] Guid? runId = null,
[FromQuery] bool? isRetryable = null,
[FromQuery] string? createdAfter = null,
[FromQuery] string? createdBefore = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var options = new DeadLetterListOptions(
Status: TryParseDeadLetterStatus(status),
Category: TryParseErrorCategory(category),
JobType: jobType,
ErrorCode: errorCode,
SourceId: sourceId,
RunId: runId,
IsRetryable: isRetryable,
CreatedAfter: EndpointHelpers.TryParseDateTimeOffset(createdAfter),
CreatedBefore: EndpointHelpers.TryParseDateTimeOffset(createdBefore),
Cursor: cursor,
Limit: effectiveLimit);
var entries = await repository.ListAsync(tenantId, options, cancellationToken)
.ConfigureAwait(false);
var totalCount = await repository.CountAsync(tenantId, options, cancellationToken)
.ConfigureAwait(false);
var responses = entries.Select(DeadLetterEntryResponse.FromDomain).ToList();
// A full page implies more results may exist; the cursor is the
// last entry's CreatedAt in round-trip ("O") format.
var nextCursor = entries.Count >= effectiveLimit
? entries.Last().CreatedAt.ToString("O", CultureInfo.InvariantCulture)
: null;
return Results.Ok(new DeadLetterListResponse(responses, nextCursor, totalCount));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
return Results.Ok(new DeadLetterListResponse(new List<DeadLetterEntryResponse>(), null, 0));
}
}
private static async Task<IResult> GetEntry(
HttpContext context,
[FromRoute] Guid entryId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetByIdAsync(tenantId, entryId, cancellationToken)
.ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(DeadLetterEntryDetailResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
return Results.NotFound();
}
}
private static async Task<IResult> GetEntryByJobId(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetByOriginalJobIdAsync(tenantId, jobId, cancellationToken)
.ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(DeadLetterEntryDetailResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
return Results.NotFound();
}
}
private static async Task<IResult> GetStats(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var stats = await repository.GetStatsAsync(tenantId, cancellationToken)
.ConfigureAwait(false);
return Results.Ok(DeadLetterStatsResponse.FromDomain(stats));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
return Results.Ok(DeadLetterStatsResponse.FromDomain(CreateEmptyStats()));
}
}
private static async Task<IResult> ExportEntries(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
[FromQuery] string? status = null,
[FromQuery] string? category = null,
[FromQuery] string? jobType = null,
[FromQuery] string? errorCode = null,
[FromQuery] bool? isRetryable = null,
[FromQuery] int? limit = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Clamp(limit ?? 1000, 1, 10000);
var options = new DeadLetterListOptions(
Status: TryParseDeadLetterStatus(status),
Category: TryParseErrorCategory(category),
JobType: jobType,
ErrorCode: errorCode,
IsRetryable: isRetryable,
Limit: effectiveLimit);
var entries = await repository.ListAsync(tenantId, options, cancellationToken)
.ConfigureAwait(false);
var csv = BuildDeadLetterCsv(entries);
var payload = Encoding.UTF8.GetBytes(csv);
var fileName = $"deadletter-export-{DateTime.UtcNow:yyyyMMdd-HHmmss}.csv";
return Results.File(payload, "text/csv", fileName);
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
var payload = Encoding.UTF8.GetBytes(BuildDeadLetterCsv(Array.Empty<DeadLetterEntry>()));
var fileName = $"deadletter-export-{DateTime.UtcNow:yyyyMMdd-HHmmss}.csv";
return Results.File(payload, "text/csv", fileName);
}
}
private static async Task<IResult> GetActionableSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IDeadLetterRepository repository,
[FromQuery] int? limit = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Clamp(limit ?? 10, 1, 50);
var summaries = await repository.GetActionableSummaryAsync(tenantId, effectiveLimit, cancellationToken)
.ConfigureAwait(false);
return Results.Ok(new DeadLetterSummaryListResponse(
summaries.Select(s => new DeadLetterSummaryResponse(
s.ErrorCode,
s.Category.ToString(),
s.EntryCount,
s.RetryableCount,
s.OldestEntry,
s.SampleReason)).ToList()));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (PostgresException ex) when (IsMissingDeadLetterTable(ex))
{
return Results.Ok(new DeadLetterSummaryListResponse(new List<DeadLetterSummaryResponse>()));
}
}
private static async Task<IResult> ReplayEntry(
HttpContext context,
[FromRoute] Guid entryId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayManager replayManager,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var user = GetCurrentUser(context);
var result = await replayManager.ReplayAsync(tenantId, entryId, user, cancellationToken)
.ConfigureAwait(false);
if (!result.Success)
{
return Results.UnprocessableEntity(new { error = result.ErrorMessage });
}
return Results.Ok(new ReplayResultResponse(
result.Success,
result.NewJobId,
result.ErrorMessage,
DeadLetterEntryResponse.FromDomain(result.UpdatedEntry)));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ReplayBatch(
HttpContext context,
[FromBody] ReplayBatchRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayManager replayManager,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var user = GetCurrentUser(context);
var result = await replayManager.ReplayBatchAsync(tenantId, request.EntryIds, user, cancellationToken)
.ConfigureAwait(false);
return Results.Ok(new BatchReplayResultResponse(
result.Attempted,
result.Succeeded,
result.Failed,
result.Results.Select(r => new ReplayResultResponse(
r.Success,
r.NewJobId,
r.ErrorMessage,
r.UpdatedEntry is not null ? DeadLetterEntryResponse.FromDomain(r.UpdatedEntry) : null)).ToList()));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ReplayPending(
HttpContext context,
[FromBody] ReplayPendingRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayManager replayManager,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var user = GetCurrentUser(context);
var result = await replayManager.ReplayPendingAsync(
tenantId,
request.ErrorCode,
TryParseErrorCategory(request.Category),
request.MaxCount ?? 100,
user,
cancellationToken).ConfigureAwait(false);
return Results.Ok(new BatchReplayResultResponse(
result.Attempted,
result.Succeeded,
result.Failed,
result.Results.Select(r => new ReplayResultResponse(
r.Success,
r.NewJobId,
r.ErrorMessage,
r.UpdatedEntry is not null ? DeadLetterEntryResponse.FromDomain(r.UpdatedEntry) : null)).ToList()));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ResolveEntry(
HttpContext context,
[FromRoute] Guid entryId,
[FromBody] ResolveEntryRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayManager replayManager,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var user = GetCurrentUser(context);
var entry = await replayManager.ResolveAsync(tenantId, entryId, request.Notes, user, cancellationToken)
.ConfigureAwait(false);
return Results.Ok(DeadLetterEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ResolveBatch(
HttpContext context,
[FromBody] ResolveBatchRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayManager replayManager,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var user = GetCurrentUser(context);
var count = await replayManager.ResolveBatchAsync(
tenantId, request.EntryIds, request.Notes, user, cancellationToken)
.ConfigureAwait(false);
return Results.Ok(new { resolvedCount = count });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static Task<IResult> ListErrorCodes(
[FromServices] IErrorClassifier classifier,
CancellationToken cancellationToken = default)
{
// Return the known error codes with their classifications
var errorCodes = new[]
{
// Transient errors
DefaultErrorClassifier.ErrorCodes.NetworkTimeout,
DefaultErrorClassifier.ErrorCodes.ConnectionRefused,
DefaultErrorClassifier.ErrorCodes.DnsResolutionFailed,
DefaultErrorClassifier.ErrorCodes.ServiceUnavailable,
DefaultErrorClassifier.ErrorCodes.GatewayTimeout,
// Not found errors
DefaultErrorClassifier.ErrorCodes.ImageNotFound,
DefaultErrorClassifier.ErrorCodes.SourceNotFound,
DefaultErrorClassifier.ErrorCodes.RegistryNotFound,
// Auth errors
DefaultErrorClassifier.ErrorCodes.InvalidCredentials,
DefaultErrorClassifier.ErrorCodes.TokenExpired,
DefaultErrorClassifier.ErrorCodes.InsufficientPermissions,
// Rate limit errors
DefaultErrorClassifier.ErrorCodes.RateLimited,
DefaultErrorClassifier.ErrorCodes.QuotaExceeded,
// Validation errors
DefaultErrorClassifier.ErrorCodes.InvalidPayload,
DefaultErrorClassifier.ErrorCodes.InvalidConfiguration,
// Upstream errors
DefaultErrorClassifier.ErrorCodes.RegistryError,
DefaultErrorClassifier.ErrorCodes.AdvisoryFeedError,
// Internal errors
DefaultErrorClassifier.ErrorCodes.InternalError,
DefaultErrorClassifier.ErrorCodes.ProcessingError
};
var responses = errorCodes.Select(code =>
{
var classified = classifier.Classify(code, string.Empty);
return new ErrorCodeResponse(
classified.ErrorCode,
classified.Category.ToString(),
classified.Description,
classified.RemediationHint,
classified.IsRetryable,
classified.SuggestedRetryDelay?.TotalSeconds);
}).ToList();
return Task.FromResult(Results.Ok(new ErrorCodeListResponse(responses)));
}
private static async Task<IResult> GetReplayAudit(
HttpContext context,
[FromRoute] Guid entryId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IReplayAuditRepository auditRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var audits = await auditRepository.GetByEntryAsync(tenantId, entryId, cancellationToken)
.ConfigureAwait(false);
var responses = audits.Select(a => new ReplayAuditResponse(
a.AuditId,
a.EntryId,
a.AttemptNumber,
a.Success,
a.NewJobId,
a.ErrorMessage,
a.TriggeredBy,
a.TriggeredAt,
a.CompletedAt,
a.InitiatedBy)).ToList();
return Results.Ok(new ReplayAuditListResponse(responses));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static DeadLetterStatus? TryParseDeadLetterStatus(string? value) =>
string.IsNullOrWhiteSpace(value) ? null :
Enum.TryParse<DeadLetterStatus>(value, ignoreCase: true, out var status) ? status : null;
private static ErrorCategory? TryParseErrorCategory(string? value) =>
string.IsNullOrWhiteSpace(value) ? null :
Enum.TryParse<ErrorCategory>(value, ignoreCase: true, out var category) ? category : null;
private static string GetCurrentUser(HttpContext context) =>
context.User?.Identity?.Name ?? "anonymous";
// SqlState 42P01 = undefined_table; 25P02 = in_failed_sql_transaction
// (the follow-up state once the missing-table error has aborted the transaction).
private static bool IsMissingDeadLetterTable(PostgresException exception) =>
string.Equals(exception.SqlState, "42P01", StringComparison.Ordinal)
|| string.Equals(exception.SqlState, "25P02", StringComparison.Ordinal);
private static DeadLetterStats CreateEmptyStats() =>
new(
TotalEntries: 0,
PendingEntries: 0,
ReplayingEntries: 0,
ReplayedEntries: 0,
ResolvedEntries: 0,
ExhaustedEntries: 0,
ExpiredEntries: 0,
RetryableEntries: 0,
ByCategory: new Dictionary<ErrorCategory, long>(),
TopErrorCodes: new Dictionary<string, long>(),
TopJobTypes: new Dictionary<string, long>());
private static string BuildDeadLetterCsv(IReadOnlyList<DeadLetterEntry> entries)
{
var builder = new StringBuilder();
builder.AppendLine("entryId,jobId,status,errorCode,category,retryable,replayAttempts,maxReplayAttempts,failedAt,createdAt,resolvedAt,reason");
foreach (var entry in entries)
{
builder.Append(EscapeCsv(entry.EntryId.ToString())).Append(',');
builder.Append(EscapeCsv(entry.OriginalJobId.ToString())).Append(',');
builder.Append(EscapeCsv(entry.Status.ToString())).Append(',');
builder.Append(EscapeCsv(entry.ErrorCode)).Append(',');
builder.Append(EscapeCsv(entry.Category.ToString())).Append(',');
builder.Append(EscapeCsv(entry.IsRetryable.ToString(CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.ReplayAttempts.ToString(CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.MaxReplayAttempts.ToString(CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.FailedAt.ToString("O", CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.CreatedAt.ToString("O", CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.ResolvedAt?.ToString("O", CultureInfo.InvariantCulture))).Append(',');
builder.Append(EscapeCsv(entry.FailureReason));
builder.AppendLine();
}
return builder.ToString();
}
private static string EscapeCsv(string? value)
{
if (string.IsNullOrEmpty(value))
{
return string.Empty;
}
return "\"" + value.Replace("\"", "\"\"", StringComparison.Ordinal) + "\"";
}
}
// Response DTOs
public sealed record DeadLetterEntryResponse(
Guid EntryId,
Guid OriginalJobId,
Guid? RunId,
Guid? SourceId,
string JobType,
string Status,
string ErrorCode,
string FailureReason,
string? RemediationHint,
string Category,
bool IsRetryable,
int OriginalAttempts,
int ReplayAttempts,
int MaxReplayAttempts,
bool CanReplay,
DateTimeOffset FailedAt,
DateTimeOffset CreatedAt,
DateTimeOffset ExpiresAt,
DateTimeOffset? ResolvedAt)
{
public static DeadLetterEntryResponse FromDomain(DeadLetterEntry entry) =>
new(
entry.EntryId,
entry.OriginalJobId,
entry.RunId,
entry.SourceId,
entry.JobType,
entry.Status.ToString(),
entry.ErrorCode,
entry.FailureReason,
entry.RemediationHint,
entry.Category.ToString(),
entry.IsRetryable,
entry.OriginalAttempts,
entry.ReplayAttempts,
entry.MaxReplayAttempts,
entry.CanReplay,
entry.FailedAt,
entry.CreatedAt,
entry.ExpiresAt,
entry.ResolvedAt);
}
public sealed record DeadLetterEntryDetailResponse(
Guid EntryId,
Guid OriginalJobId,
Guid? RunId,
Guid? SourceId,
string JobType,
string Payload,
string PayloadDigest,
string IdempotencyKey,
string? CorrelationId,
string Status,
string ErrorCode,
string FailureReason,
string? RemediationHint,
string Category,
bool IsRetryable,
int OriginalAttempts,
int ReplayAttempts,
int MaxReplayAttempts,
bool CanReplay,
DateTimeOffset FailedAt,
DateTimeOffset CreatedAt,
DateTimeOffset UpdatedAt,
DateTimeOffset ExpiresAt,
DateTimeOffset? ResolvedAt,
string? ResolutionNotes,
string CreatedBy,
string UpdatedBy)
{
public static DeadLetterEntryDetailResponse FromDomain(DeadLetterEntry entry) =>
new(
entry.EntryId,
entry.OriginalJobId,
entry.RunId,
entry.SourceId,
entry.JobType,
entry.Payload,
entry.PayloadDigest,
entry.IdempotencyKey,
entry.CorrelationId,
entry.Status.ToString(),
entry.ErrorCode,
entry.FailureReason,
entry.RemediationHint,
entry.Category.ToString(),
entry.IsRetryable,
entry.OriginalAttempts,
entry.ReplayAttempts,
entry.MaxReplayAttempts,
entry.CanReplay,
entry.FailedAt,
entry.CreatedAt,
entry.UpdatedAt,
entry.ExpiresAt,
entry.ResolvedAt,
entry.ResolutionNotes,
entry.CreatedBy,
entry.UpdatedBy);
}
public sealed record DeadLetterListResponse(
IReadOnlyList<DeadLetterEntryResponse> Entries,
string? NextCursor,
long TotalCount);
public sealed record DeadLetterStatsResponse(
long TotalEntries,
long PendingEntries,
long ReplayingEntries,
long ReplayedEntries,
long ResolvedEntries,
long ExhaustedEntries,
long ExpiredEntries,
long RetryableEntries,
IDictionary<string, long> ByCategory,
IDictionary<string, long> TopErrorCodes,
IDictionary<string, long> TopJobTypes)
{
public static DeadLetterStatsResponse FromDomain(DeadLetterStats stats) =>
new(
stats.TotalEntries,
stats.PendingEntries,
stats.ReplayingEntries,
stats.ReplayedEntries,
stats.ResolvedEntries,
stats.ExhaustedEntries,
stats.ExpiredEntries,
stats.RetryableEntries,
stats.ByCategory.ToDictionary(kv => kv.Key.ToString(), kv => kv.Value),
new Dictionary<string, long>(stats.TopErrorCodes),
new Dictionary<string, long>(stats.TopJobTypes));
}
public sealed record DeadLetterSummaryResponse(
string ErrorCode,
string Category,
long EntryCount,
long RetryableCount,
DateTimeOffset OldestEntry,
string? SampleReason);
public sealed record DeadLetterSummaryListResponse(
IReadOnlyList<DeadLetterSummaryResponse> Summaries);
public sealed record ReplayResultResponse(
bool Success,
Guid? NewJobId,
string? ErrorMessage,
DeadLetterEntryResponse? UpdatedEntry);
public sealed record BatchReplayResultResponse(
int Attempted,
int Succeeded,
int Failed,
IReadOnlyList<ReplayResultResponse> Results);
public sealed record ReplayBatchRequest(
IReadOnlyList<Guid> EntryIds);
public sealed record ReplayPendingRequest(
string? ErrorCode,
string? Category,
int? MaxCount);
public sealed record ResolveEntryRequest(
string Notes);
public sealed record ResolveBatchRequest(
IReadOnlyList<Guid> EntryIds,
string Notes);
public sealed record ErrorCodeResponse(
string ErrorCode,
string Category,
string Description,
string RemediationHint,
bool IsRetryable,
double? SuggestedRetryDelaySeconds);
public sealed record ErrorCodeListResponse(
IReadOnlyList<ErrorCodeResponse> ErrorCodes);
public sealed record ReplayAuditResponse(
Guid AuditId,
Guid EntryId,
int AttemptNumber,
bool Success,
Guid? NewJobId,
string? ErrorMessage,
string TriggeredBy,
DateTimeOffset TriggeredAt,
DateTimeOffset? CompletedAt,
string InitiatedBy);
public sealed record ReplayAuditListResponse(
IReadOnlyList<ReplayAuditResponse> Audits);
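The CSV export above (`BuildDeadLetterCsv`/`EscapeCsv`) quotes every non-empty field and doubles any embedded double quotes, per RFC 4180. A minimal sketch of the same quoting rule, re-expressed in Python purely for illustration (`escape_csv` and `build_row` are hypothetical names, not part of the service):

```python
def escape_csv(value):
    """Mirror EscapeCsv: empty stays empty; otherwise wrap the field
    in double quotes and double any embedded double quotes."""
    if not value:
        return ""
    return '"' + value.replace('"', '""') + '"'

def build_row(fields):
    """Join already-escaped fields with commas, as BuildDeadLetterCsv does."""
    return ",".join(escape_csv(f) for f in fields)

# A field containing quotes and an empty field survive round-tripping.
print(build_row(["abc", 'say "hi"', ""]))  # → "abc","say ""hi""",
```

Note that, unlike many CSV writers, the original quotes every non-empty field unconditionally rather than only when a delimiter or quote is present; that keeps the output deterministic regardless of field content.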


@@ -1,431 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.WebService.Services;
using System.Security.Claims;
namespace StellaOps.JobEngine.WebService.Endpoints;
public static class DeploymentEndpoints
{
public static IEndpointRouteBuilder MapDeploymentEndpoints(this IEndpointRouteBuilder app)
{
Map(app, "/api/release-orchestrator/deployments", true);
Map(app, "/api/v1/release-orchestrator/deployments", false);
return app;
}
private static void Map(IEndpointRouteBuilder app, string prefix, bool named)
{
var group = app.MapGroup(prefix)
.WithTags("Deployments")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
var create = group.MapPost("", CreateAsync).RequireAuthorization(JobEnginePolicies.ReleaseWrite);
var list = group.MapGet("", ListAsync);
var detail = group.MapGet("/{id}", GetAsync);
var logs = group.MapGet("/{id}/logs", GetLogsAsync);
var targetLogs = group.MapGet("/{id}/targets/{targetId}/logs", GetTargetLogsAsync);
var events = group.MapGet("/{id}/events", GetEventsAsync);
var metrics = group.MapGet("/{id}/metrics", GetMetricsAsync);
var pause = group.MapPost("/{id}/pause", PauseAsync).RequireAuthorization(JobEnginePolicies.ReleaseWrite);
var resume = group.MapPost("/{id}/resume", ResumeAsync).RequireAuthorization(JobEnginePolicies.ReleaseWrite);
var cancel = group.MapPost("/{id}/cancel", CancelAsync).RequireAuthorization(JobEnginePolicies.ReleaseWrite);
var rollback = group.MapPost("/{id}/rollback", RollbackAsync).RequireAuthorization(JobEnginePolicies.ReleaseApprove);
var retry = group.MapPost("/{id}/targets/{targetId}/retry", RetryTargetAsync).RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (!named)
{
return;
}
create.WithName("Deployment_Create");
list.WithName("Deployment_List");
detail.WithName("Deployment_Get");
logs.WithName("Deployment_GetLogs");
targetLogs.WithName("Deployment_GetTargetLogs");
events.WithName("Deployment_GetEvents");
metrics.WithName("Deployment_GetMetrics");
pause.WithName("Deployment_Pause");
resume.WithName("Deployment_Resume");
cancel.WithName("Deployment_Cancel");
rollback.WithName("Deployment_Rollback");
retry.WithName("Deployment_RetryTarget");
}
private static async Task<IResult> CreateAsync(
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
ClaimsPrincipal user,
[FromBody] CreateDeploymentRequest request,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.ReleaseId))
{
return Results.BadRequest(new { message = "releaseId is required." });
}
if (string.IsNullOrWhiteSpace(request.EnvironmentId))
{
return Results.BadRequest(new { message = "environmentId is required." });
}
var strategy = NormalizeStrategy(request.Strategy);
if (strategy is null)
{
return Results.BadRequest(new { message = "strategy must be one of rolling, canary, blue_green, or all_at_once." });
}
var actor = user.FindFirstValue(ClaimTypes.NameIdentifier)
?? user.FindFirstValue(ClaimTypes.Name)
?? "release-operator";
var deployment = await store.CreateAsync(
ResolveTenant(tenantAccessor, context),
request with { Strategy = strategy },
actor,
cancellationToken).ConfigureAwait(false);
return Results.Created($"/api/v1/release-orchestrator/deployments/{deployment.Id}", deployment);
}
private static async Task<IResult> ListAsync(
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
[FromQuery] string? status,
[FromQuery] string? statuses,
[FromQuery] string? environment,
[FromQuery] string? environments,
[FromQuery] string? releaseId,
[FromQuery] string? releases,
[FromQuery] string? sortField,
[FromQuery] string? sortOrder,
[FromQuery] int? page,
[FromQuery] int? pageSize,
CancellationToken cancellationToken)
{
IEnumerable<DeploymentSummaryDto> items = (await store.ListAsync(
ResolveTenant(tenantAccessor, context),
cancellationToken).ConfigureAwait(false)).Select(ToSummary);
var statusSet = Csv(statuses, status);
if (statusSet.Count > 0)
{
items = items.Where(item => statusSet.Contains(item.Status, StringComparer.OrdinalIgnoreCase));
}
var environmentSet = Csv(environments, environment);
if (environmentSet.Count > 0)
{
items = items.Where(item =>
environmentSet.Contains(item.EnvironmentId, StringComparer.OrdinalIgnoreCase)
|| environmentSet.Contains(item.EnvironmentName, StringComparer.OrdinalIgnoreCase));
}
var releaseSet = Csv(releases, releaseId);
if (releaseSet.Count > 0)
{
items = items.Where(item => releaseSet.Contains(item.ReleaseId, StringComparer.OrdinalIgnoreCase));
}
items = (sortField?.ToLowerInvariant(), sortOrder?.ToLowerInvariant()) switch
{
("status", "asc") => items.OrderBy(item => item.Status, StringComparer.OrdinalIgnoreCase),
("status", _) => items.OrderByDescending(item => item.Status, StringComparer.OrdinalIgnoreCase),
("environment", "asc") => items.OrderBy(item => item.EnvironmentName, StringComparer.OrdinalIgnoreCase),
("environment", _) => items.OrderByDescending(item => item.EnvironmentName, StringComparer.OrdinalIgnoreCase),
(_, "asc") => items.OrderBy(item => item.StartedAt),
_ => items.OrderByDescending(item => item.StartedAt),
};
var list = items.ToList();
var resolvedPage = Math.Max(page ?? 1, 1);
var resolvedPageSize = Math.Clamp(pageSize ?? 20, 1, 100);
return Results.Ok(new
{
items = list.Skip((resolvedPage - 1) * resolvedPageSize).Take(resolvedPageSize).ToList(),
totalCount = list.Count,
page = resolvedPage,
pageSize = resolvedPageSize,
});
}
private static async Task<IResult> GetAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
{
var deployment = await store.GetAsync(
ResolveTenant(tenantAccessor, context),
id,
cancellationToken).ConfigureAwait(false);
return deployment is null ? Results.NotFound() : Results.Ok(deployment);
}
private static async Task<IResult> GetLogsAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
[FromQuery] string? level,
[FromQuery] int? limit,
CancellationToken cancellationToken)
{
var entries = await store.GetLogsAsync(
ResolveTenant(tenantAccessor, context),
id,
targetId: null,
level,
limit,
cancellationToken).ConfigureAwait(false);
return entries is null ? Results.NotFound() : Results.Ok(new { entries });
}
private static async Task<IResult> GetTargetLogsAsync(
string id,
string targetId,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
[FromQuery] string? level,
[FromQuery] int? limit,
CancellationToken cancellationToken)
{
var entries = await store.GetLogsAsync(
ResolveTenant(tenantAccessor, context),
id,
targetId,
level,
limit,
cancellationToken).ConfigureAwait(false);
return entries is null ? Results.NotFound() : Results.Ok(new { entries });
}
private static async Task<IResult> GetEventsAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
{
var events = await store.GetEventsAsync(
ResolveTenant(tenantAccessor, context),
id,
cancellationToken).ConfigureAwait(false);
return events is null ? Results.NotFound() : Results.Ok(new { events });
}
private static async Task<IResult> GetMetricsAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
{
var metrics = await store.GetMetricsAsync(
ResolveTenant(tenantAccessor, context),
id,
cancellationToken).ConfigureAwait(false);
return metrics is null ? Results.NotFound() : Results.Ok(new { metrics });
}
private static Task<IResult> PauseAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
=> TransitionAsync(
context,
tenantAccessor,
store,
id,
["running", "pending"],
"paused",
"paused",
$"Deployment {id} paused.",
complete: false,
cancellationToken);
private static Task<IResult> ResumeAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
=> TransitionAsync(
context,
tenantAccessor,
store,
id,
["paused"],
"running",
"resumed",
$"Deployment {id} resumed.",
complete: false,
cancellationToken);
private static Task<IResult> CancelAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
=> TransitionAsync(
context,
tenantAccessor,
store,
id,
["running", "pending", "paused"],
"cancelled",
"cancelled",
$"Deployment {id} cancelled.",
complete: true,
cancellationToken);
private static Task<IResult> RollbackAsync(
string id,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
=> TransitionAsync(
context,
tenantAccessor,
store,
id,
["completed", "failed", "running", "paused"],
"rolling_back",
"rollback_started",
$"Rollback initiated for deployment {id}.",
complete: false,
cancellationToken);
private static async Task<IResult> RetryTargetAsync(
string id,
string targetId,
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
CancellationToken cancellationToken)
{
var result = await store.RetryAsync(
ResolveTenant(tenantAccessor, context),
id,
targetId,
cancellationToken).ConfigureAwait(false);
return ToMutationResult(result);
}
private static async Task<IResult> TransitionAsync(
HttpContext context,
IStellaOpsTenantAccessor tenantAccessor,
IDeploymentCompatibilityStore store,
string deploymentId,
IReadOnlyCollection<string> allowedStatuses,
string nextStatus,
string eventType,
string message,
bool complete,
CancellationToken cancellationToken)
{
var result = await store.TransitionAsync(
ResolveTenant(tenantAccessor, context),
deploymentId,
allowedStatuses,
nextStatus,
eventType,
message,
complete,
cancellationToken).ConfigureAwait(false);
return ToMutationResult(result);
}
private static IResult ToMutationResult(DeploymentMutationResult result)
{
return result.Status switch
{
DeploymentMutationStatus.Success => Results.Ok(new
{
success = true,
message = result.Message,
deployment = result.Deployment,
}),
DeploymentMutationStatus.Conflict => Results.Conflict(new
{
success = false,
message = result.Message,
}),
_ => Results.NotFound(),
};
}
private static string ResolveTenant(IStellaOpsTenantAccessor tenantAccessor, HttpContext context)
{
if (!string.IsNullOrWhiteSpace(tenantAccessor.TenantId))
{
return tenantAccessor.TenantId;
}
throw new InvalidOperationException(
$"A tenant is required for deployment compatibility operations on route '{context.Request.Path}'.");
}
private static string? NormalizeStrategy(string? strategy)
{
return (strategy ?? string.Empty).Trim().ToLowerInvariant() switch
{
"rolling" => "rolling",
"canary" => "canary",
"blue_green" => "blue_green",
"all_at_once" => "all_at_once",
"recreate" => "all_at_once", // alias for all_at_once
"ab-release" => "blue_green", // alias for blue_green
_ => null,
};
}
private static HashSet<string> Csv(params string?[] values)
{
var set = new HashSet<string>(StringComparer.OrdinalIgnoreCase);
foreach (var value in values)
{
if (string.IsNullOrWhiteSpace(value))
{
continue;
}
foreach (var part in value.Split(',', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries))
{
set.Add(part);
}
}
return set;
}
private static DeploymentSummaryDto ToSummary(DeploymentDto deployment)
{
return new DeploymentSummaryDto
{
Id = deployment.Id,
ReleaseId = deployment.ReleaseId,
ReleaseName = deployment.ReleaseName,
ReleaseVersion = deployment.ReleaseVersion,
EnvironmentId = deployment.EnvironmentId,
EnvironmentName = deployment.EnvironmentName,
Status = deployment.Status,
Strategy = deployment.Strategy,
Progress = deployment.Progress,
StartedAt = deployment.StartedAt,
CompletedAt = deployment.CompletedAt,
InitiatedBy = deployment.InitiatedBy,
TargetCount = deployment.TargetCount,
CompletedTargets = deployment.CompletedTargets,
FailedTargets = deployment.FailedTargets,
};
}
}


@@ -1,329 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using System.Security.Cryptography;
using System.Text;
using System.Text.Json;
using StellaOps.Auth.ServerIntegration.Tenancy;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Evidence management endpoints for the release orchestrator.
/// Provides listing, inspection, verification, export, and timeline
/// operations for release evidence packets.
/// Routes: /api/release-orchestrator/evidence
/// </summary>
public static class EvidenceEndpoints
{
public static IEndpointRouteBuilder MapEvidenceEndpoints(this IEndpointRouteBuilder app)
{
MapEvidenceGroup(app, "/api/release-orchestrator/evidence", includeRouteNames: true);
MapEvidenceGroup(app, "/api/v1/release-orchestrator/evidence", includeRouteNames: false);
return app;
}
private static void MapEvidenceGroup(
IEndpointRouteBuilder app,
string prefix,
bool includeRouteNames)
{
var group = app.MapGroup(prefix)
.WithTags("Evidence")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
var list = group.MapGet(string.Empty, ListEvidence)
.WithDescription("Return a paginated list of evidence packets for the calling tenant, optionally filtered by release, type, and creation time window. Each packet includes its identifier, associated release, evidence type, content hash, and creation timestamp.");
if (includeRouteNames)
{
list.WithName("Evidence_List");
}
var detail = group.MapGet("/{id}", GetEvidence)
.WithDescription("Return the full evidence packet record for the specified ID including release association, evidence type, content hash, algorithm, size, and metadata. Returns 404 when the evidence packet does not exist in the tenant.");
if (includeRouteNames)
{
detail.WithName("Evidence_Get");
}
var verify = group.MapPost("/{id}/verify", VerifyEvidence)
.WithDescription("Verify the integrity of the specified evidence packet by recomputing and comparing its content hash. Returns the verification result including the computed hash, algorithm used, and whether the content matches the stored digest.");
if (includeRouteNames)
{
verify.WithName("Evidence_Verify");
}
var export = group.MapGet("/{id}/export", ExportEvidence)
.WithDescription("Export the specified evidence packet as a self-contained JSON bundle suitable for offline audit. The bundle includes the evidence metadata, content, and verification hashes.");
if (includeRouteNames)
{
export.WithName("Evidence_Export");
}
var raw = group.MapGet("/{id}/raw", DownloadRaw)
.WithDescription("Download the raw binary content of the specified evidence packet. Returns the unprocessed evidence payload with Content-Type application/octet-stream. Returns 404 when the evidence packet does not exist.");
if (includeRouteNames)
{
raw.WithName("Evidence_DownloadRaw");
}
var timeline = group.MapGet("/{id}/timeline", GetTimeline)
.WithDescription("Return the chronological event timeline for the specified evidence packet including creation, verification, export, and access events. Useful for audit trails and provenance tracking.");
if (includeRouteNames)
{
timeline.WithName("Evidence_Timeline");
}
}
// ---- Handlers ----
private static IResult ListEvidence(
[FromQuery] string? releaseId,
[FromQuery] string? type,
[FromQuery] string? search,
[FromQuery] string? sortField,
[FromQuery] string? sortOrder,
[FromQuery] int? page,
[FromQuery] int? pageSize)
{
var packets = SeedData.EvidencePackets.AsEnumerable();
if (!string.IsNullOrWhiteSpace(releaseId))
{
packets = packets.Where(e =>
string.Equals(e.ReleaseId, releaseId, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(type))
{
packets = packets.Where(e =>
string.Equals(e.Type, type, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(search))
{
var term = search.ToLowerInvariant();
packets = packets.Where(e =>
e.Id.Contains(term, StringComparison.OrdinalIgnoreCase) ||
e.ReleaseId.Contains(term, StringComparison.OrdinalIgnoreCase) ||
e.Type.Contains(term, StringComparison.OrdinalIgnoreCase) ||
e.Description.Contains(term, StringComparison.OrdinalIgnoreCase));
}
// Patterns must be lowercase to match the ToLowerInvariant() input;
// "releaseId" would otherwise never match and fall through to the default sort.
var sorted = (sortField?.ToLowerInvariant(), sortOrder?.ToLowerInvariant()) switch
{
("type", "asc") => packets.OrderBy(e => e.Type),
("type", _) => packets.OrderByDescending(e => e.Type),
("releaseid", "asc") => packets.OrderBy(e => e.ReleaseId),
("releaseid", _) => packets.OrderByDescending(e => e.ReleaseId),
(_, "asc") => packets.OrderBy(e => e.CreatedAt),
_ => packets.OrderByDescending(e => e.CreatedAt),
};
var all = sorted.ToList();
var effectivePage = Math.Max(page ?? 1, 1);
var effectivePageSize = Math.Clamp(pageSize ?? 20, 1, 100);
var items = all.Skip((effectivePage - 1) * effectivePageSize).Take(effectivePageSize).ToList();
return Results.Ok(new
{
items,
totalCount = all.Count,
page = effectivePage,
pageSize = effectivePageSize,
});
}
private static IResult GetEvidence(string id)
{
var packet = SeedData.EvidencePackets.FirstOrDefault(e => e.Id == id);
return packet is not null ? Results.Ok(packet) : Results.NotFound();
}
private static IResult VerifyEvidence(string id)
{
var packet = SeedData.EvidencePackets.FirstOrDefault(e => e.Id == id);
if (packet is null) return Results.NotFound();
var content = BuildRawContent(packet);
var computedHash = ComputeHash(content, packet.Algorithm);
var verified = string.Equals(packet.Hash, computedHash, StringComparison.OrdinalIgnoreCase);
return Results.Ok(new
{
evidenceId = packet.Id,
verified,
hash = packet.Hash,
computedHash,
algorithm = packet.Algorithm,
verifiedAt = packet.VerifiedAt ?? packet.CreatedAt,
message = verified
? "Evidence integrity verified successfully."
: "Evidence integrity verification failed.",
});
}
private static IResult ExportEvidence(string id)
{
var packet = SeedData.EvidencePackets.FirstOrDefault(e => e.Id == id);
if (packet is null) return Results.NotFound();
var content = BuildRawContent(packet);
var computedHash = ComputeHash(content, packet.Algorithm);
var exportedAt = packet.VerifiedAt ?? packet.CreatedAt;
var bundle = new
{
exportVersion = "1.0",
exportedAt,
evidence = packet,
contentBase64 = Convert.ToBase64String(content),
verification = new
{
hash = packet.Hash,
computedHash,
algorithm = packet.Algorithm,
verified = string.Equals(packet.Hash, computedHash, StringComparison.OrdinalIgnoreCase),
},
};
return Results.Json(bundle, contentType: "application/json");
}
private static IResult DownloadRaw(string id)
{
var packet = SeedData.EvidencePackets.FirstOrDefault(e => e.Id == id);
if (packet is null) return Results.NotFound();
var content = BuildRawContent(packet);
return Results.Bytes(content, contentType: "application/octet-stream",
fileDownloadName: $"{packet.Id}.bin");
}
private static IResult GetTimeline(string id)
{
var packet = SeedData.EvidencePackets.FirstOrDefault(e => e.Id == id);
if (packet is null) return Results.NotFound();
if (SeedData.Timelines.TryGetValue(id, out var events))
{
return Results.Ok(new { evidenceId = id, events });
}
return Results.Ok(new { evidenceId = id, events = Array.Empty<object>() });
}
private static byte[] BuildRawContent(EvidencePacketDto packet)
{
return JsonSerializer.SerializeToUtf8Bytes(new
{
evidenceId = packet.Id,
releaseId = packet.ReleaseId,
type = packet.Type,
description = packet.Description,
status = packet.Status,
createdBy = packet.CreatedBy,
createdAt = packet.CreatedAt,
});
}
private static string ComputeHash(byte[] content, string algorithm)
{
var normalized = algorithm.Trim().ToUpperInvariant();
return normalized switch
{
"SHA-256" => $"sha256:{Convert.ToHexString(SHA256.HashData(content)).ToLowerInvariant()}",
_ => throw new NotSupportedException($"Unsupported evidence hash algorithm '{algorithm}'."),
};
}
// ---- DTOs ----
public sealed record EvidencePacketDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string Type { get; init; }
public required string Description { get; init; }
public required string Hash { get; init; }
public required string Algorithm { get; init; }
public long SizeBytes { get; init; }
public required string Status { get; init; }
public required string CreatedBy { get; init; }
public DateTimeOffset CreatedAt { get; init; }
public DateTimeOffset? VerifiedAt { get; init; }
}
public sealed record EvidenceTimelineEventDto
{
public required string Id { get; init; }
public required string EvidenceId { get; init; }
public required string EventType { get; init; }
public required string Actor { get; init; }
public required string Message { get; init; }
public DateTimeOffset Timestamp { get; init; }
}
// ---- Seed Data ----
internal static class SeedData
{
public static readonly List<EvidencePacketDto> EvidencePackets = new()
{
CreatePacket("evi-001", "rel-001", "sbom", "Software Bill of Materials for Platform Release v1.2.3", 24576, "verified", "ci-pipeline", "2026-01-10T08:15:00Z", "2026-01-10T08:16:00Z"),
CreatePacket("evi-002", "rel-001", "attestation", "Build provenance attestation for Platform Release v1.2.3", 8192, "verified", "attestor-service", "2026-01-10T08:20:00Z", "2026-01-10T08:21:00Z"),
CreatePacket("evi-003", "rel-002", "scan-result", "Security scan results for Platform Release v1.3.0-rc1", 16384, "verified", "scanner-service", "2026-01-11T10:30:00Z", "2026-01-11T10:31:00Z"),
CreatePacket("evi-004", "rel-003", "policy-decision", "Policy gate evaluation for Hotfix v1.2.4", 4096, "pending", "policy-engine", "2026-01-12T06:15:00Z", null),
CreatePacket("evi-005", "rel-001", "deployment-log", "Production deployment log for Platform Release v1.2.3", 32768, "verified", "deploy-bot", "2026-01-11T14:35:00Z", "2026-01-11T14:36:00Z"),
};
public static readonly Dictionary<string, List<EvidenceTimelineEventDto>> Timelines = new()
{
["evi-001"] = new()
{
new() { Id = "evt-e001", EvidenceId = "evi-001", EventType = "created", Actor = "ci-pipeline", Message = "SBOM evidence packet created from build pipeline", Timestamp = DateTimeOffset.Parse("2026-01-10T08:15:00Z") },
new() { Id = "evt-e002", EvidenceId = "evi-001", EventType = "hashed", Actor = "evidence-locker", Message = "Content hash computed: SHA-256", Timestamp = DateTimeOffset.Parse("2026-01-10T08:15:30Z") },
new() { Id = "evt-e003", EvidenceId = "evi-001", EventType = "verified", Actor = "attestor-service", Message = "Integrity verification passed", Timestamp = DateTimeOffset.Parse("2026-01-10T08:16:00Z") },
new() { Id = "evt-e004", EvidenceId = "evi-001", EventType = "exported", Actor = "admin", Message = "Evidence bundle exported for audit", Timestamp = DateTimeOffset.Parse("2026-01-10T12:00:00Z") },
},
["evi-002"] = new()
{
new() { Id = "evt-e005", EvidenceId = "evi-002", EventType = "created", Actor = "attestor-service", Message = "Build provenance attestation generated", Timestamp = DateTimeOffset.Parse("2026-01-10T08:20:00Z") },
new() { Id = "evt-e006", EvidenceId = "evi-002", EventType = "verified", Actor = "attestor-service", Message = "Attestation signature verified", Timestamp = DateTimeOffset.Parse("2026-01-10T08:21:00Z") },
},
};
private static EvidencePacketDto CreatePacket(
string id,
string releaseId,
string type,
string description,
long sizeBytes,
string status,
string createdBy,
string createdAt,
string? verifiedAt)
{
var packet = new EvidencePacketDto
{
Id = id,
ReleaseId = releaseId,
Type = type,
Description = description,
Algorithm = "SHA-256",
SizeBytes = sizeBytes,
Status = status,
CreatedBy = createdBy,
CreatedAt = DateTimeOffset.Parse(createdAt),
VerifiedAt = verifiedAt is null ? null : DateTimeOffset.Parse(verifiedAt),
Hash = string.Empty,
};
return packet with
{
Hash = ComputeHash(BuildRawContent(packet), packet.Algorithm),
};
}
}
}


@@ -1,388 +0,0 @@
using Microsoft.AspNetCore.Http.HttpResults;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.Domain.Export;
using StellaOps.JobEngine.Core.Services;
using StellaOps.JobEngine.WebService.Contracts;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for export job management.
/// </summary>
public static class ExportJobEndpoints
{
/// <summary>
/// Maps export job endpoints to the route builder.
/// </summary>
public static void MapExportJobEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/export")
.WithTags("Export Jobs")
.RequireAuthorization(JobEnginePolicies.ExportViewer)
.RequireTenant();
group.MapPost("jobs", CreateExportJob)
.WithName("Orchestrator_CreateExportJob")
.WithDescription(_t("orchestrator.export_job.create_description"))
.RequireAuthorization(JobEnginePolicies.ExportOperator);
group.MapGet("jobs", ListExportJobs)
.WithName("Orchestrator_ListExportJobs")
.WithDescription(_t("orchestrator.export_job.list_description"));
group.MapGet("jobs/{jobId:guid}", GetExportJob)
.WithName("Orchestrator_GetExportJob")
.WithDescription(_t("orchestrator.export_job.get_description"));
group.MapPost("jobs/{jobId:guid}/cancel", CancelExportJob)
.WithName("Orchestrator_CancelExportJob")
.WithDescription(_t("orchestrator.export_job.cancel_description"))
.RequireAuthorization(JobEnginePolicies.ExportOperator);
group.MapGet("quota", GetQuotaStatus)
.WithName("Orchestrator_GetExportQuotaStatus")
.WithDescription(_t("orchestrator.export_job.quota_status_description"));
group.MapPost("quota", EnsureQuota)
.WithName("Orchestrator_EnsureExportQuota")
.WithDescription(_t("orchestrator.export_job.ensure_quota_description"))
.RequireAuthorization(JobEnginePolicies.ExportOperator);
group.MapGet("types", GetExportTypes)
.WithName("Orchestrator_GetExportTypes")
.WithDescription(_t("orchestrator.export_job.types_description"));
}
private static async Task<Results<Created<ExportJobResponse>, BadRequest<ErrorResponse>, Conflict<ErrorResponse>>> CreateExportJob(
CreateExportJobRequest request,
IExportJobService exportJobService,
HttpContext context,
CancellationToken cancellationToken)
{
var tenantId = GetTenantId(context);
if (string.IsNullOrWhiteSpace(request.ExportType))
{
return TypedResults.BadRequest(new ErrorResponse("invalid_export_type", _t("orchestrator.export_job.error.export_type_required")));
}
if (!ExportJobTypes.IsExportJob(request.ExportType) && !ExportJobTypes.All.Contains(request.ExportType))
{
return TypedResults.BadRequest(new ErrorResponse("invalid_export_type", _t("orchestrator.export_job.error.unknown_export_type", request.ExportType)));
}
var payload = new ExportJobPayload(
Format: request.Format ?? "json",
StartTime: request.StartTime,
EndTime: request.EndTime,
SourceId: request.SourceId,
ProjectId: request.ProjectId,
EntityIds: request.EntityIds,
MaxEntries: request.MaxEntries,
IncludeProvenance: request.IncludeProvenance ?? true,
SignOutput: request.SignOutput ?? true,
Compression: request.Compression,
DestinationUri: request.DestinationUri,
CallbackUrl: request.CallbackUrl,
Options: request.Options);
try
{
var job = await exportJobService.CreateExportJobAsync(
tenantId,
request.ExportType,
payload,
GetActorId(context),
request.ProjectId,
request.CorrelationId,
request.Priority,
cancellationToken);
var response = MapToResponse(job);
return TypedResults.Created($"/api/v1/jobengine/export/jobs/{job.JobId}", response);
}
catch (InvalidOperationException ex)
{
return TypedResults.Conflict(new ErrorResponse("quota_exceeded", ex.Message));
}
}
private static async Task<Ok<ExportJobListResponse>> ListExportJobs(
IExportJobService exportJobService,
HttpContext context,
string? exportType = null,
string? status = null,
string? projectId = null,
DateTimeOffset? createdAfter = null,
DateTimeOffset? createdBefore = null,
int limit = 50,
int offset = 0,
CancellationToken cancellationToken = default)
{
var tenantId = GetTenantId(context);
JobStatus? statusFilter = null;
// Unrecognized status values are silently ignored (no filter applied) rather than rejected.
if (!string.IsNullOrEmpty(status) && Enum.TryParse<JobStatus>(status, true, out var parsed))
{
statusFilter = parsed;
}
var jobs = await exportJobService.ListExportJobsAsync(
tenantId,
exportType,
statusFilter,
projectId,
createdAfter,
createdBefore,
limit,
offset,
cancellationToken);
var response = new ExportJobListResponse(
Items: jobs.Select(MapToResponse).ToList(),
Limit: limit,
Offset: offset,
HasMore: jobs.Count == limit);
return TypedResults.Ok(response);
}
private static async Task<Results<Ok<ExportJobResponse>, NotFound>> GetExportJob(
Guid jobId,
IExportJobService exportJobService,
HttpContext context,
CancellationToken cancellationToken)
{
var tenantId = GetTenantId(context);
var job = await exportJobService.GetExportJobAsync(tenantId, jobId, cancellationToken);
if (job is null)
{
return TypedResults.NotFound();
}
return TypedResults.Ok(MapToResponse(job));
}
private static async Task<Results<Ok<CancelExportJobResponse>, NotFound, BadRequest<ErrorResponse>>> CancelExportJob(
Guid jobId,
CancelExportJobRequest request,
IExportJobService exportJobService,
HttpContext context,
CancellationToken cancellationToken)
{
var tenantId = GetTenantId(context);
var success = await exportJobService.CancelExportJobAsync(
tenantId,
jobId,
request.Reason ?? "Canceled by user",
GetActorId(context),
cancellationToken);
if (!success)
{
var job = await exportJobService.GetExportJobAsync(tenantId, jobId, cancellationToken);
if (job is null)
{
return TypedResults.NotFound();
}
return TypedResults.BadRequest(new ErrorResponse(
"cannot_cancel",
_t("orchestrator.export_job.error.cannot_cancel", job.Status)));
}
return TypedResults.Ok(new CancelExportJobResponse(jobId, true, DateTimeOffset.UtcNow));
}
private static async Task<Ok<ExportQuotaStatusResponse>> GetQuotaStatus(
IExportJobService exportJobService,
HttpContext context,
string? exportType = null,
CancellationToken cancellationToken = default)
{
var tenantId = GetTenantId(context);
var status = await exportJobService.GetQuotaStatusAsync(tenantId, exportType, cancellationToken);
var response = new ExportQuotaStatusResponse(
MaxActive: status.MaxActive,
CurrentActive: status.CurrentActive,
MaxPerHour: status.MaxPerHour,
CurrentHourCount: status.CurrentHourCount,
AvailableTokens: status.AvailableTokens,
Paused: status.Paused,
PauseReason: status.PauseReason,
CanCreateJob: status.CanCreateJob,
EstimatedWaitSeconds: status.EstimatedWaitTime?.TotalSeconds);
return TypedResults.Ok(response);
}
private static async Task<Created<QuotaResponse>> EnsureQuota(
EnsureExportQuotaRequest request,
IExportJobService exportJobService,
HttpContext context,
CancellationToken cancellationToken)
{
var tenantId = GetTenantId(context);
var quota = await exportJobService.EnsureQuotaAsync(
tenantId,
request.ExportType,
GetActorId(context),
cancellationToken);
var response = QuotaResponse.FromDomain(quota);
return TypedResults.Created($"/api/v1/jobengine/quotas/{quota.QuotaId}", response);
}
private static Ok<ExportTypesResponse> GetExportTypes()
{
var types = ExportJobTypes.All.Select(jobType =>
{
var rateLimit = ExportJobPolicy.RateLimits.GetForJobType(jobType);
var target = ExportJobTypes.GetExportTarget(jobType) ?? "unknown";
return new ExportTypeInfo(
JobType: jobType,
Target: target,
MaxConcurrent: rateLimit.MaxConcurrent,
MaxPerHour: rateLimit.MaxPerHour,
EstimatedDurationSeconds: rateLimit.EstimatedDurationSeconds);
}).ToList();
return TypedResults.Ok(new ExportTypesResponse(
Types: types,
DefaultQuota: new DefaultQuotaInfo(
MaxActive: ExportJobPolicy.QuotaDefaults.MaxActive,
MaxPerHour: ExportJobPolicy.QuotaDefaults.MaxPerHour,
BurstCapacity: ExportJobPolicy.QuotaDefaults.BurstCapacity,
RefillRate: ExportJobPolicy.QuotaDefaults.RefillRate,
DefaultPriority: ExportJobPolicy.QuotaDefaults.DefaultPriority,
MaxAttempts: ExportJobPolicy.QuotaDefaults.MaxAttempts,
DefaultLeaseSeconds: ExportJobPolicy.QuotaDefaults.DefaultLeaseSeconds,
RecommendedHeartbeatInterval: ExportJobPolicy.QuotaDefaults.RecommendedHeartbeatInterval)));
}
// Resolved from the tenant header; falls back to "default" when the header is absent.
private static string GetTenantId(HttpContext context) =>
context.Request.Headers["X-StellaOps-Tenant"].FirstOrDefault() ?? "default";
private static string GetActorId(HttpContext context) =>
context.User.Identity?.Name ?? "anonymous";
private static ExportJobResponse MapToResponse(Job job) => new(
JobId: job.JobId,
TenantId: job.TenantId,
ProjectId: job.ProjectId,
ExportType: job.JobType,
Status: job.Status.ToString(),
Priority: job.Priority,
Attempt: job.Attempt,
MaxAttempts: job.MaxAttempts,
PayloadDigest: job.PayloadDigest,
IdempotencyKey: job.IdempotencyKey,
CorrelationId: job.CorrelationId,
WorkerId: job.WorkerId,
LeaseUntil: job.LeaseUntil,
CreatedAt: job.CreatedAt,
ScheduledAt: job.ScheduledAt,
LeasedAt: job.LeasedAt,
CompletedAt: job.CompletedAt,
Reason: job.Reason,
CreatedBy: job.CreatedBy);
}
// Request/Response records
public sealed record CreateExportJobRequest(
string ExportType,
string? Format,
DateTimeOffset? StartTime,
DateTimeOffset? EndTime,
Guid? SourceId,
string? ProjectId,
IReadOnlyList<Guid>? EntityIds,
int? MaxEntries,
bool? IncludeProvenance,
bool? SignOutput,
string? Compression,
string? DestinationUri,
string? CallbackUrl,
string? CorrelationId,
int? Priority,
IReadOnlyDictionary<string, string>? Options);
public sealed record ExportJobResponse(
Guid JobId,
string TenantId,
string? ProjectId,
string ExportType,
string Status,
int Priority,
int Attempt,
int MaxAttempts,
string PayloadDigest,
string IdempotencyKey,
string? CorrelationId,
string? WorkerId,
DateTimeOffset? LeaseUntil,
DateTimeOffset CreatedAt,
DateTimeOffset? ScheduledAt,
DateTimeOffset? LeasedAt,
DateTimeOffset? CompletedAt,
string? Reason,
string CreatedBy);
public sealed record ExportJobListResponse(
IReadOnlyList<ExportJobResponse> Items,
int Limit,
int Offset,
bool HasMore);
public sealed record CancelExportJobRequest(string? Reason);
public sealed record CancelExportJobResponse(
Guid JobId,
bool Canceled,
DateTimeOffset CanceledAt);
public sealed record ExportQuotaStatusResponse(
int MaxActive,
int CurrentActive,
int MaxPerHour,
int CurrentHourCount,
double AvailableTokens,
bool Paused,
string? PauseReason,
bool CanCreateJob,
double? EstimatedWaitSeconds);
public sealed record EnsureExportQuotaRequest(string ExportType);
public sealed record ExportTypesResponse(
IReadOnlyList<ExportTypeInfo> Types,
DefaultQuotaInfo DefaultQuota);
public sealed record ExportTypeInfo(
string JobType,
string Target,
int MaxConcurrent,
int MaxPerHour,
int EstimatedDurationSeconds);
public sealed record DefaultQuotaInfo(
int MaxActive,
int MaxPerHour,
int BurstCapacity,
double RefillRate,
int DefaultPriority,
int MaxAttempts,
int DefaultLeaseSeconds,
int RecommendedHeartbeatInterval);
public sealed record ErrorResponse(string Error, string Message);


@@ -1,120 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Services;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoint for the first signal (time to first signal, TTFS) of a run.
/// </summary>
public static class FirstSignalEndpoints
{
public static RouteGroupBuilder MapFirstSignalEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/runs")
.WithTags("Orchestrator Runs")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet("{runId:guid}/first-signal", GetFirstSignal)
.WithName("Orchestrator_GetFirstSignal")
.WithDescription(_t("orchestrator.first_signal.get_description"));
return group;
}
private static async Task<IResult> GetFirstSignal(
HttpContext context,
[FromRoute] Guid runId,
[FromHeader(Name = "If-None-Match")] string? ifNoneMatch,
[FromServices] TenantResolver tenantResolver,
[FromServices] IFirstSignalService firstSignalService,
CancellationToken cancellationToken)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await firstSignalService
.GetFirstSignalAsync(runId, tenantId, ifNoneMatch, cancellationToken)
.ConfigureAwait(false);
context.Response.Headers["Cache-Status"] = result.CacheHit ? "hit" : "miss";
if (!string.IsNullOrWhiteSpace(result.Source))
{
context.Response.Headers["X-FirstSignal-Source"] = result.Source;
}
if (!string.IsNullOrWhiteSpace(result.ETag))
{
context.Response.Headers.ETag = result.ETag;
context.Response.Headers.CacheControl = "private, max-age=60";
}
return result.Status switch
{
FirstSignalResultStatus.Found => Results.Ok(MapToResponse(runId, result)),
FirstSignalResultStatus.NotModified => Results.StatusCode(StatusCodes.Status304NotModified),
FirstSignalResultStatus.NotFound => Results.NotFound(),
FirstSignalResultStatus.NotAvailable => Results.NoContent(),
_ => Results.Problem(_t("orchestrator.first_signal.error.server_error"))
};
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static FirstSignalResponse MapToResponse(Guid runId, FirstSignalResult result)
{
if (result.Signal is null)
{
return new FirstSignalResponse
{
RunId = runId,
FirstSignal = null,
SummaryEtag = result.ETag ?? string.Empty
};
}
var signal = result.Signal;
return new FirstSignalResponse
{
RunId = runId,
SummaryEtag = result.ETag ?? string.Empty,
FirstSignal = new FirstSignalDto
{
Type = signal.Kind.ToString().ToLowerInvariant(),
Stage = signal.Phase.ToString().ToLowerInvariant(),
Step = null,
Message = signal.Summary,
At = signal.Timestamp,
Artifact = new FirstSignalArtifactDto
{
Kind = signal.Scope.Type,
Range = null
},
LastKnownOutcome = signal.LastKnownOutcome is null
? null
: new FirstSignalLastKnownOutcomeDto
{
SignatureId = signal.LastKnownOutcome.SignatureId,
ErrorCode = signal.LastKnownOutcome.ErrorCode,
Token = signal.LastKnownOutcome.Token,
Excerpt = signal.LastKnownOutcome.Excerpt,
Confidence = signal.LastKnownOutcome.Confidence,
FirstSeenAt = signal.LastKnownOutcome.FirstSeenAt,
HitCount = signal.LastKnownOutcome.HitCount
}
}
};
}
}


@@ -1,191 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.JobEngine.Infrastructure.Postgres;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Health and readiness probe endpoints.
/// </summary>
public static class HealthEndpoints
{
/// <summary>
/// Maps health endpoints to the route builder.
/// </summary>
public static IEndpointRouteBuilder MapHealthEndpoints(this IEndpointRouteBuilder app)
{
app.MapGet("/healthz", GetHealth)
.WithName("Orchestrator_Health")
.WithTags("Health")
.WithDescription(_t("orchestrator.health.liveness_description"))
.AllowAnonymous();
app.MapGet("/readyz", GetReadiness)
.WithName("Orchestrator_Readiness")
.WithTags("Health")
.WithDescription(_t("orchestrator.health.readiness_description"))
.AllowAnonymous();
app.MapGet("/livez", GetLiveness)
.WithName("Orchestrator_Liveness")
.WithTags("Health")
.WithDescription(_t("orchestrator.health.liveness_description"))
.AllowAnonymous();
app.MapGet("/health/details", GetHealthDetails)
.WithName("Orchestrator_HealthDetails")
.WithTags("Health")
.WithDescription(_t("orchestrator.health.deep_description"))
.AllowAnonymous();
return app;
}
private static IResult GetHealth([FromServices] TimeProvider timeProvider)
{
return Results.Ok(new HealthResponse("ok", timeProvider.GetUtcNow()));
}
private static async Task<IResult> GetReadiness(
[FromServices] JobEngineDataSource dataSource,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
try
{
// Check database connectivity
var dbHealthy = await CheckDatabaseAsync(dataSource, cancellationToken).ConfigureAwait(false);
if (!dbHealthy)
{
return Results.Json(
new ReadinessResponse("not_ready", timeProvider.GetUtcNow(), new Dictionary<string, string>
{
["database"] = "unhealthy"
}),
statusCode: StatusCodes.Status503ServiceUnavailable);
}
return Results.Ok(new ReadinessResponse("ready", timeProvider.GetUtcNow(), new Dictionary<string, string>
{
["database"] = "healthy"
}));
}
catch (Exception ex)
{
return Results.Json(
new ReadinessResponse("not_ready", timeProvider.GetUtcNow(), new Dictionary<string, string>
{
["database"] = $"error: {ex.Message}"
}),
statusCode: StatusCodes.Status503ServiceUnavailable);
}
}
private static IResult GetLiveness([FromServices] TimeProvider timeProvider)
{
// Liveness just checks the process is alive
return Results.Ok(new HealthResponse("alive", timeProvider.GetUtcNow()));
}
private static async Task<IResult> GetHealthDetails(
[FromServices] JobEngineDataSource dataSource,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var checks = new Dictionary<string, HealthCheckResult>();
var overallHealthy = true;
// Database check
try
{
var dbHealthy = await CheckDatabaseAsync(dataSource, cancellationToken).ConfigureAwait(false);
checks["database"] = new HealthCheckResult(
dbHealthy ? "healthy" : "unhealthy",
dbHealthy ? null : "Connection test failed",
timeProvider.GetUtcNow());
overallHealthy &= dbHealthy;
}
catch (Exception ex)
{
checks["database"] = new HealthCheckResult("unhealthy", ex.Message, timeProvider.GetUtcNow());
overallHealthy = false;
}
// Memory check
var memoryInfo = GC.GetGCMemoryInfo();
var memoryUsedMb = GC.GetTotalMemory(false) / (1024.0 * 1024.0);
var memoryLimitMb = memoryInfo.TotalAvailableMemoryBytes / (1024.0 * 1024.0);
var memoryHealthy = memoryUsedMb < memoryLimitMb * 0.9; // < 90% threshold
checks["memory"] = new HealthCheckResult(
memoryHealthy ? "healthy" : "degraded",
$"Used: {memoryUsedMb:F2} MB",
timeProvider.GetUtcNow());
// Thread pool check
ThreadPool.GetAvailableThreads(out var workerThreads, out var completionPortThreads);
ThreadPool.GetMaxThreads(out var maxWorkerThreads, out var maxCompletionPortThreads);
var threadPoolHealthy = workerThreads > maxWorkerThreads * 0.1; // > 10% available
checks["threadPool"] = new HealthCheckResult(
threadPoolHealthy ? "healthy" : "degraded",
$"Worker threads available: {workerThreads}/{maxWorkerThreads}",
timeProvider.GetUtcNow());
var response = new HealthDetailsResponse(
overallHealthy ? "healthy" : "unhealthy",
timeProvider.GetUtcNow(),
checks);
return overallHealthy
? Results.Ok(response)
: Results.Json(response, statusCode: StatusCodes.Status503ServiceUnavailable);
}
private static async Task<bool> CheckDatabaseAsync(JobEngineDataSource dataSource, CancellationToken cancellationToken)
{
try
{
// Use a system tenant for health checks
await using var connection = await dataSource.OpenConnectionAsync("_system", "health", cancellationToken).ConfigureAwait(false);
await using var command = connection.CreateCommand();
command.CommandText = "SELECT 1";
await command.ExecuteScalarAsync(cancellationToken).ConfigureAwait(false);
return true;
}
catch
{
return false;
}
}
}
/// <summary>
/// Basic health response.
/// </summary>
public sealed record HealthResponse(string Status, DateTimeOffset Timestamp);
/// <summary>
/// Readiness response with dependency status.
/// </summary>
public sealed record ReadinessResponse(
string Status,
DateTimeOffset Timestamp,
IReadOnlyDictionary<string, string> Dependencies);
/// <summary>
/// Individual health check result.
/// </summary>
public sealed record HealthCheckResult(
string Status,
string? Details,
DateTimeOffset CheckedAt);
/// <summary>
/// Detailed health response with all checks.
/// </summary>
public sealed record HealthDetailsResponse(
string Status,
DateTimeOffset Timestamp,
IReadOnlyDictionary<string, HealthCheckResult> Checks);
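For reference, the aggregation rule the removed `/health/details` handler implemented can be sketched compactly; note that only the database check gates the overall status (`overallHealthy &= dbHealthy`), while memory and thread-pool checks report "degraded" informationally. Function names below are illustrative, not part of the service:

```python
# Sketch of the /health/details aggregation (names are illustrative).
# Thresholds mirror the handler: memory is "healthy" below 90% of the limit,
# the thread pool is "healthy" with more than 10% of workers free.
def memory_status(used_mb: float, limit_mb: float) -> str:
    return "healthy" if used_mb < limit_mb * 0.9 else "degraded"

def thread_pool_status(available_workers: int, max_workers: int) -> str:
    return "healthy" if available_workers > max_workers * 0.1 else "degraded"

def overall(database_healthy: bool) -> str:
    # Only the database check flips the overall status (and the 503 code);
    # "degraded" memory or thread-pool results are reported but do not fail it.
    return "healthy" if database_healthy else "unhealthy"

print(overall(True), memory_status(512.0, 1024.0), thread_pool_status(200, 1000))
# healthy healthy healthy
```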


@@ -1,207 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for jobs.
/// </summary>
public static class JobEndpoints
{
/// <summary>
/// Maps job endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapJobEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/jobs")
.WithTags("Orchestrator Jobs")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet(string.Empty, ListJobs)
.WithName("Orchestrator_ListJobs")
.WithDescription(_t("orchestrator.job.list_description"));
group.MapGet("{jobId:guid}", GetJob)
.WithName("Orchestrator_GetJob")
.WithDescription(_t("orchestrator.job.get_description"));
group.MapGet("{jobId:guid}/detail", GetJobDetail)
.WithName("Orchestrator_GetJobDetail")
.WithDescription(_t("orchestrator.job.get_detail_description"));
group.MapGet("summary", GetJobSummary)
.WithName("Orchestrator_GetJobSummary")
.WithDescription(_t("orchestrator.job.get_summary_description"));
group.MapGet("by-idempotency-key/{key}", GetJobByIdempotencyKey)
.WithName("Orchestrator_GetJobByIdempotencyKey")
.WithDescription(_t("orchestrator.job.get_by_idempotency_key_description"));
return group;
}
private static async Task<IResult> ListJobs(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository repository,
[FromQuery] string? status = null,
[FromQuery] string? jobType = null,
[FromQuery] string? projectId = null,
[FromQuery] string? createdAfter = null,
[FromQuery] string? createdBefore = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var parsedStatus = EndpointHelpers.TryParseJobStatus(status);
var parsedCreatedAfter = EndpointHelpers.TryParseDateTimeOffset(createdAfter);
var parsedCreatedBefore = EndpointHelpers.TryParseDateTimeOffset(createdBefore);
var jobs = await repository.ListAsync(
tenantId,
parsedStatus,
jobType,
projectId,
parsedCreatedAfter,
parsedCreatedBefore,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = jobs.Select(JobResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new JobListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJob(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var job = await repository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound();
}
return Results.Ok(JobResponse.FromDomain(job));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJobDetail(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
DeprecationHeaders.Apply(context.Response, "/api/v1/jobengine/jobs/{jobId}");
var job = await repository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound();
}
return Results.Ok(JobDetailResponse.FromDomain(job));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJobSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository repository,
[FromQuery] string? jobType = null,
[FromQuery] string? projectId = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
DeprecationHeaders.Apply(context.Response, "/api/v1/jobengine/jobs");
// Single aggregate query using text comparison against enum labels.
// Replaces 7 individual COUNT round trips with one FILTER-based query.
var counts = await repository.GetStatusCountsAsync(tenantId, jobType, projectId, cancellationToken).ConfigureAwait(false);
var summary = new JobSummary(
TotalJobs: counts.Total,
PendingJobs: counts.Pending,
ScheduledJobs: counts.Scheduled,
LeasedJobs: counts.Leased,
SucceededJobs: counts.Succeeded,
FailedJobs: counts.Failed,
CanceledJobs: counts.Canceled,
TimedOutJobs: counts.TimedOut);
return Results.Ok(summary);
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetJobByIdempotencyKey(
HttpContext context,
[FromRoute] string key,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
if (string.IsNullOrWhiteSpace(key))
{
return Results.BadRequest(new { error = _t("orchestrator.job.error.idempotency_key_required") });
}
var job = await repository.GetByIdempotencyKeyAsync(tenantId, key, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound();
}
return Results.Ok(JobResponse.FromDomain(job));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
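The comment in `GetJobSummary` above refers to collapsing seven per-status `COUNT` round trips into one `FILTER`-based aggregate. The service's actual SQL is not shown in this diff; the snippet below is only an illustration of the technique against an in-memory SQLite table (schema and status labels assumed):

```python
import sqlite3

# Illustrative FILTER-based aggregate: one round trip returns the total plus
# a per-status count, instead of one COUNT query per status.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany(
    "INSERT INTO jobs (status) VALUES (?)",
    [("pending",), ("pending",), ("succeeded",), ("failed",)],
)

row = conn.execute(
    """
    SELECT
        COUNT(*)                                     AS total,
        COUNT(*) FILTER (WHERE status = 'pending')   AS pending,
        COUNT(*) FILTER (WHERE status = 'succeeded') AS succeeded,
        COUNT(*) FILTER (WHERE status = 'failed')    AS failed
    FROM jobs
    """
).fetchone()

print(row)  # (4, 2, 1, 1)
```

SQLite supports the `FILTER` clause on aggregates since 3.30; PostgreSQL (which the repository layer targets) has supported it since 9.4.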


@@ -1,150 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.Metrics.Kpi;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Quality KPI endpoints for explainable triage metrics.
/// </summary>
public static class KpiEndpoints
{
/// <summary>
/// Maps KPI endpoints to the route builder.
/// </summary>
public static IEndpointRouteBuilder MapKpiEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/metrics/kpis")
.WithTags("Quality KPIs")
.RequireAuthorization(JobEnginePolicies.ObservabilityRead)
.RequireTenant();
// GET /api/v1/metrics/kpis
group.MapGet("/", GetQualityKpis)
.WithName("Orchestrator_GetQualityKpis")
.WithDescription(_t("orchestrator.kpi.quality_description"));
// GET /api/v1/metrics/kpis/reachability
group.MapGet("/reachability", GetReachabilityKpis)
.WithName("Orchestrator_GetReachabilityKpis")
.WithDescription(_t("orchestrator.kpi.reachability_description"));
// GET /api/v1/metrics/kpis/explainability
group.MapGet("/explainability", GetExplainabilityKpis)
.WithName("Orchestrator_GetExplainabilityKpis")
.WithDescription(_t("orchestrator.kpi.explainability_description"));
// GET /api/v1/metrics/kpis/runtime
group.MapGet("/runtime", GetRuntimeKpis)
.WithName("Orchestrator_GetRuntimeKpis")
.WithDescription(_t("orchestrator.kpi.runtime_description"));
// GET /api/v1/metrics/kpis/replay
group.MapGet("/replay", GetReplayKpis)
.WithName("Orchestrator_GetReplayKpis")
.WithDescription(_t("orchestrator.kpi.replay_description"));
// GET /api/v1/metrics/kpis/trend
group.MapGet("/trend", GetKpiTrend)
.WithName("Orchestrator_GetKpiTrend")
.WithDescription(_t("orchestrator.kpi.trend_description"));
return app;
}
private static async Task<IResult> GetQualityKpis(
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
[FromServices] IKpiCollector collector,
[FromServices] TimeProvider timeProvider,
CancellationToken ct)
{
var now = timeProvider.GetUtcNow();
var start = from ?? now.AddDays(-7);
var end = to ?? now;
var kpis = await collector.CollectAsync(start, end, tenant, ct).ConfigureAwait(false);
return Results.Ok(kpis);
}
private static async Task<IResult> GetReachabilityKpis(
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
[FromServices] IKpiCollector collector,
[FromServices] TimeProvider timeProvider,
CancellationToken ct)
{
var now = timeProvider.GetUtcNow();
var kpis = await collector.CollectAsync(
from ?? now.AddDays(-7),
to ?? now,
tenant,
ct).ConfigureAwait(false);
return Results.Ok(kpis.Reachability);
}
private static async Task<IResult> GetExplainabilityKpis(
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
[FromServices] IKpiCollector collector,
[FromServices] TimeProvider timeProvider,
CancellationToken ct)
{
var now = timeProvider.GetUtcNow();
var kpis = await collector.CollectAsync(
from ?? now.AddDays(-7),
to ?? now,
tenant,
ct).ConfigureAwait(false);
return Results.Ok(kpis.Explainability);
}
private static async Task<IResult> GetRuntimeKpis(
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
[FromServices] IKpiCollector collector,
[FromServices] TimeProvider timeProvider,
CancellationToken ct)
{
var now = timeProvider.GetUtcNow();
var kpis = await collector.CollectAsync(
from ?? now.AddDays(-7),
to ?? now,
tenant,
ct).ConfigureAwait(false);
return Results.Ok(kpis.Runtime);
}
private static async Task<IResult> GetReplayKpis(
[FromQuery] DateTimeOffset? from,
[FromQuery] DateTimeOffset? to,
[FromQuery] string? tenant,
[FromServices] IKpiCollector collector,
[FromServices] TimeProvider timeProvider,
CancellationToken ct)
{
var now = timeProvider.GetUtcNow();
var kpis = await collector.CollectAsync(
from ?? now.AddDays(-7),
to ?? now,
tenant,
ct).ConfigureAwait(false);
return Results.Ok(kpis.Replay);
}
private static async Task<IResult> GetKpiTrend(
[FromServices] IKpiTrendService trendService,
CancellationToken ct,
[FromQuery] int days = 30,
[FromQuery] string? tenant = null)
{
var trend = await trendService.GetTrendAsync(days, tenant, ct).ConfigureAwait(false);
return Results.Ok(trend);
}
}


@@ -1,574 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for ledger operations.
/// </summary>
public static class LedgerEndpoints
{
/// <summary>
/// Maps ledger endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapLedgerEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/ledger")
.WithTags("Orchestrator Ledger")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// Ledger entry operations
group.MapGet(string.Empty, ListLedgerEntries)
.WithName("Orchestrator_ListLedgerEntries")
.WithDescription(_t("orchestrator.ledger.list_description"));
group.MapGet("{ledgerId:guid}", GetLedgerEntry)
.WithName("Orchestrator_GetLedgerEntry")
.WithDescription(_t("orchestrator.ledger.get_description"));
group.MapGet("run/{runId:guid}", GetByRunId)
.WithName("Orchestrator_GetLedgerByRunId")
.WithDescription(_t("orchestrator.ledger.get_by_run_description"));
group.MapGet("source/{sourceId:guid}", GetBySource)
.WithName("Orchestrator_GetLedgerBySource")
.WithDescription(_t("orchestrator.ledger.get_by_source_description"));
group.MapGet("latest", GetLatestEntry)
.WithName("Orchestrator_GetLatestLedgerEntry")
.WithDescription(_t("orchestrator.ledger.get_latest_description"));
group.MapGet("sequence/{startSeq:long}/{endSeq:long}", GetBySequenceRange)
.WithName("Orchestrator_GetLedgerBySequence")
.WithDescription(_t("orchestrator.ledger.get_by_sequence_description"));
// Summary and verification
group.MapGet("summary", GetLedgerSummary)
.WithName("Orchestrator_GetLedgerSummary")
.WithDescription(_t("orchestrator.ledger.summary_description"));
group.MapGet("verify", VerifyLedgerChain)
.WithName("Orchestrator_VerifyLedgerChain")
.WithDescription(_t("orchestrator.ledger.verify_chain_description"));
// Export operations
group.MapGet("exports", ListExports)
.WithName("Orchestrator_ListLedgerExports")
.WithDescription(_t("orchestrator.ledger.list_exports_description"));
group.MapGet("exports/{exportId:guid}", GetExport)
.WithName("Orchestrator_GetLedgerExport")
.WithDescription(_t("orchestrator.ledger.get_export_description"));
group.MapPost("exports", CreateExport)
.WithName("Orchestrator_CreateLedgerExport")
.WithDescription(_t("orchestrator.ledger.create_export_description"))
.RequireAuthorization(JobEnginePolicies.ExportOperator);
// Manifest operations
group.MapGet("manifests", ListManifests)
.WithName("Orchestrator_ListManifests")
.WithDescription(_t("orchestrator.ledger.list_manifests_description"));
group.MapGet("manifests/{manifestId:guid}", GetManifest)
.WithName("Orchestrator_GetManifest")
.WithDescription(_t("orchestrator.ledger.get_manifest_description"));
group.MapGet("manifests/subject/{subjectId:guid}", GetManifestBySubject)
.WithName("Orchestrator_GetManifestBySubject")
.WithDescription(_t("orchestrator.ledger.get_manifest_by_subject_description"));
group.MapGet("manifests/{manifestId:guid}/verify", VerifyManifest)
.WithName("Orchestrator_VerifyManifest")
.WithDescription(_t("orchestrator.ledger.verify_manifest_description"));
return group;
}
private static async Task<IResult> ListLedgerEntries(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
[FromQuery] string? runType = null,
[FromQuery] Guid? sourceId = null,
[FromQuery] string? finalStatus = null,
[FromQuery] DateTimeOffset? startTime = null,
[FromQuery] DateTimeOffset? endTime = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
RunStatus? parsedStatus = null;
if (!string.IsNullOrEmpty(finalStatus) && Enum.TryParse<RunStatus>(finalStatus, true, out var rs))
{
parsedStatus = rs;
}
var entries = await repository.ListAsync(
tenantId,
runType,
sourceId,
parsedStatus,
startTime,
endTime,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(LedgerEntryResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new LedgerEntryListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetLedgerEntry(
HttpContext context,
[FromRoute] Guid ledgerId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetByIdAsync(tenantId, ledgerId, cancellationToken).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(LedgerEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetByRunId(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(LedgerEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetBySource(
HttpContext context,
[FromRoute] Guid sourceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
[FromQuery] int? limit = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var entries = await repository.GetBySourceAsync(
tenantId,
sourceId,
effectiveLimit,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(LedgerEntryResponse.FromDomain).ToList();
return Results.Ok(new LedgerEntryListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetLatestEntry(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var entry = await repository.GetLatestAsync(tenantId, cancellationToken).ConfigureAwait(false);
if (entry is null)
{
return Results.NotFound();
}
return Results.Ok(LedgerEntryResponse.FromDomain(entry));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetBySequenceRange(
HttpContext context,
[FromRoute] long startSeq,
[FromRoute] long endSeq,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
if (startSeq < 1 || endSeq < startSeq)
{
return Results.BadRequest(new { error = _t("orchestrator.ledger.error.invalid_sequence_range") });
}
var entries = await repository.GetBySequenceRangeAsync(
tenantId,
startSeq,
endSeq,
cancellationToken).ConfigureAwait(false);
var responses = entries.Select(LedgerEntryResponse.FromDomain).ToList();
return Results.Ok(new LedgerEntryListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetLedgerSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
[FromQuery] DateTimeOffset? since = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var summary = await repository.GetSummaryAsync(tenantId, since, cancellationToken).ConfigureAwait(false);
return Results.Ok(LedgerSummaryResponse.FromDomain(summary));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> VerifyLedgerChain(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerRepository repository,
[FromQuery] long? startSeq = null,
[FromQuery] long? endSeq = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await repository.VerifyChainAsync(tenantId, startSeq, endSeq, cancellationToken).ConfigureAwait(false);
Infrastructure.JobEngineMetrics.LedgerChainVerified(tenantId, result.IsValid);
return Results.Ok(ChainVerificationResponse.FromDomain(result));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ListExports(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerExportRepository repository,
[FromQuery] string? status = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
LedgerExportStatus? parsedStatus = null;
if (!string.IsNullOrEmpty(status) && Enum.TryParse<LedgerExportStatus>(status, true, out var es))
{
parsedStatus = es;
}
var exports = await repository.ListAsync(
tenantId,
parsedStatus,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = exports.Select(LedgerExportResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new LedgerExportListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetExport(
HttpContext context,
[FromRoute] Guid exportId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerExportRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var export = await repository.GetByIdAsync(tenantId, exportId, cancellationToken).ConfigureAwait(false);
if (export is null)
{
return Results.NotFound();
}
return Results.Ok(LedgerExportResponse.FromDomain(export));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CreateExport(
HttpContext context,
[FromBody] CreateLedgerExportRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ILedgerExportRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
// Validate format
var validFormats = new[] { "json", "ndjson", "csv" };
if (!validFormats.Contains(request.Format?.ToLowerInvariant()))
{
return Results.BadRequest(new { error = _t("orchestrator.ledger.error.invalid_format", string.Join(", ", validFormats)) });
}
// Validate time range
if (request.StartTime.HasValue && request.EndTime.HasValue && request.StartTime > request.EndTime)
{
return Results.BadRequest(new { error = _t("orchestrator.ledger.error.start_before_end") });
}
var export = LedgerExport.CreateRequest(
tenantId: tenantId,
format: request.Format!,
requestedBy: actorId,
requestedAt: now,
startTime: request.StartTime,
endTime: request.EndTime,
runTypeFilter: request.RunTypeFilter,
sourceIdFilter: request.SourceIdFilter);
await repository.CreateAsync(export, cancellationToken).ConfigureAwait(false);
return Results.Created($"/api/v1/jobengine/ledger/exports/{export.ExportId}",
LedgerExportResponse.FromDomain(export));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ListManifests(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IManifestRepository repository,
[FromQuery] string? provenanceType = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
ProvenanceType? parsedType = null;
if (!string.IsNullOrEmpty(provenanceType) && Enum.TryParse<ProvenanceType>(provenanceType, true, out var pt))
{
parsedType = pt;
}
var manifests = await repository.ListAsync(
tenantId,
parsedType,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = manifests.Select(ManifestResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new ManifestListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetManifest(
HttpContext context,
[FromRoute] Guid manifestId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IManifestRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var manifest = await repository.GetByIdAsync(tenantId, manifestId, cancellationToken).ConfigureAwait(false);
if (manifest is null)
{
return Results.NotFound();
}
return Results.Ok(ManifestDetailResponse.FromDomain(manifest));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetManifestBySubject(
HttpContext context,
[FromRoute] Guid subjectId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IManifestRepository repository,
[FromQuery] string? provenanceType = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
ProvenanceType parsedType = ProvenanceType.Run;
if (!string.IsNullOrEmpty(provenanceType) && Enum.TryParse<ProvenanceType>(provenanceType, true, out var pt))
{
parsedType = pt;
}
var manifest = await repository.GetBySubjectAsync(tenantId, parsedType, subjectId, cancellationToken).ConfigureAwait(false);
if (manifest is null)
{
return Results.NotFound();
}
return Results.Ok(ManifestDetailResponse.FromDomain(manifest));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> VerifyManifest(
HttpContext context,
[FromRoute] Guid manifestId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IManifestRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var manifest = await repository.GetByIdAsync(tenantId, manifestId, cancellationToken).ConfigureAwait(false);
if (manifest is null)
{
return Results.NotFound();
}
var payloadValid = manifest.VerifyPayloadIntegrity();
string? validationError = null;
if (!payloadValid)
{
validationError = _t("orchestrator.ledger.error.payload_digest_mismatch");
}
else if (manifest.IsExpired)
{
validationError = _t("orchestrator.ledger.error.manifest_expired");
}
Infrastructure.JobEngineMetrics.ManifestVerified(tenantId, payloadValid && !manifest.IsExpired);
return Results.Ok(new ManifestVerificationResponse(
ManifestId: manifestId,
PayloadIntegrityValid: payloadValid,
IsExpired: manifest.IsExpired,
IsSigned: manifest.IsSigned,
ValidationError: validationError));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
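`VerifyChainAsync` above delegates the actual verification to the repository, so the hashing scheme is not visible in this diff. As a general illustration of hash-chained ledger verification (not the service's actual digest algorithm or field layout), each entry's hash is recomputed over its payload plus the previous entry's hash, so any tampered or reordered entry breaks every link after it:

```python
import hashlib

# Illustrative hash-chain: entry hash = SHA-256(prev_hash || payload).
def digest(payload: bytes, prev_hash: str) -> str:
    return hashlib.sha256(prev_hash.encode() + payload).hexdigest()

def build_chain(payloads):
    entries, prev = [], ""
    for p in payloads:
        h = digest(p, prev)
        entries.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return entries

def verify_chain(entries) -> bool:
    prev = ""
    for e in entries:
        # Both the back-pointer and the recomputed digest must match.
        if e["prev"] != prev or digest(e["payload"], prev) != e["hash"]:
            return False
        prev = e["hash"]
    return True

chain = build_chain([b"run-1", b"run-2", b"run-3"])
assert verify_chain(chain)
chain[1]["payload"] = b"tampered"
assert not verify_chain(chain)
```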


@@ -1,45 +0,0 @@
using StellaOps.JobEngine.WebService.Contracts;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// OpenAPI discovery and specification endpoints.
/// </summary>
public static class OpenApiEndpoints
{
/// <summary>
/// Maps OpenAPI discovery endpoints.
/// </summary>
public static IEndpointRouteBuilder MapOpenApiEndpoints(this IEndpointRouteBuilder app)
{
app.MapGet("/.well-known/openapi", (HttpContext context) =>
{
var version = OpenApiDocuments.GetServiceVersion();
var discovery = OpenApiDocuments.CreateDiscoveryDocument(version);
context.Response.Headers.CacheControl = "private, max-age=300";
context.Response.Headers.ETag = $"W/\"oas-{version}\"";
context.Response.Headers["X-StellaOps-Service"] = "jobengine";
context.Response.Headers["X-StellaOps-Api-Version"] = version;
return Results.Json(discovery, OpenApiDocuments.SerializerOptions);
})
.WithName("Orchestrator_OpenApiDiscovery")
.WithTags("OpenAPI")
.WithDescription("Return the OpenAPI discovery document for the Orchestrator service, including the service name, current version, and a link to the full OpenAPI specification. The response is cached for 5 minutes and includes ETag-based conditional caching support.")
.AllowAnonymous();
app.MapGet("/openapi/jobengine.json", () =>
{
var version = OpenApiDocuments.GetServiceVersion();
var spec = OpenApiDocuments.CreateSpecification(version);
return Results.Json(spec, OpenApiDocuments.SerializerOptions);
})
.WithName("Orchestrator_OpenApiSpec")
.WithTags("OpenAPI")
.WithDescription("Return the full OpenAPI 3.x specification for the Orchestrator service as a JSON document. Used by the Router to aggregate the service's endpoint metadata and by developer tooling to generate clients and documentation.")
.AllowAnonymous();
return app;
}
}


@@ -1,888 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Pack registry endpoints for pack management, versioning, and discovery.
/// Per 150.B-PacksRegistry: Registry API for pack CRUD operations.
/// </summary>
public static class PackRegistryEndpoints
{
private const int DefaultLimit = 50;
private const int MaxLimit = 100;
/// <summary>
/// Maps pack registry endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapPackRegistryEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/registry/packs")
.WithTags("Orchestrator Pack Registry")
.RequireAuthorization(JobEnginePolicies.PacksRead)
.RequireTenant();
// Pack CRUD endpoints
group.MapPost("", CreatePack)
.WithName("Registry_CreatePack")
.WithDescription(_t("orchestrator.pack_registry.create_pack_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapGet("{packId:guid}", GetPackById)
.WithName("Registry_GetPackById")
.WithDescription(_t("orchestrator.pack_registry.get_pack_by_id_description"));
group.MapGet("by-name/{name}", GetPackByName)
.WithName("Registry_GetPackByName")
.WithDescription(_t("orchestrator.pack_registry.get_pack_by_name_description"));
group.MapGet("", ListPacks)
.WithName("Registry_ListPacks")
.WithDescription(_t("orchestrator.pack_registry.list_packs_description"));
group.MapPatch("{packId:guid}", UpdatePack)
.WithName("Registry_UpdatePack")
.WithDescription(_t("orchestrator.pack_registry.update_pack_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapPost("{packId:guid}/status", UpdatePackStatus)
.WithName("Registry_UpdatePackStatus")
.WithDescription(_t("orchestrator.pack_registry.update_pack_status_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapDelete("{packId:guid}", DeletePack)
.WithName("Registry_DeletePack")
.WithDescription(_t("orchestrator.pack_registry.delete_pack_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
// Pack version endpoints
group.MapPost("{packId:guid}/versions", CreatePackVersion)
.WithName("Registry_CreatePackVersion")
.WithDescription(_t("orchestrator.pack_registry.create_version_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapGet("{packId:guid}/versions", ListVersions)
.WithName("Registry_ListVersions")
.WithDescription(_t("orchestrator.pack_registry.list_versions_description"));
group.MapGet("{packId:guid}/versions/{version}", GetVersion)
.WithName("Registry_GetVersion")
.WithDescription(_t("orchestrator.pack_registry.get_version_description"));
group.MapGet("{packId:guid}/versions/latest", GetLatestVersion)
.WithName("Registry_GetLatestVersion")
.WithDescription(_t("orchestrator.pack_registry.get_latest_version_description"));
group.MapPatch("{packId:guid}/versions/{packVersionId:guid}", UpdateVersion)
.WithName("Registry_UpdateVersion")
.WithDescription(_t("orchestrator.pack_registry.update_version_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapPost("{packId:guid}/versions/{packVersionId:guid}/status", UpdateVersionStatus)
.WithName("Registry_UpdateVersionStatus")
.WithDescription(_t("orchestrator.pack_registry.update_version_status_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
group.MapPost("{packId:guid}/versions/{packVersionId:guid}/sign", SignVersion)
.WithName("Registry_SignVersion")
.WithDescription(_t("orchestrator.pack_registry.sign_version_description"))
.RequireAuthorization(JobEnginePolicies.PacksApprove);
group.MapPost("{packId:guid}/versions/{packVersionId:guid}/download", DownloadVersion)
.WithName("Registry_DownloadVersion")
.WithDescription(_t("orchestrator.pack_registry.download_version_description"));
group.MapDelete("{packId:guid}/versions/{packVersionId:guid}", DeleteVersion)
.WithName("Registry_DeleteVersion")
.WithDescription(_t("orchestrator.pack_registry.delete_version_description"))
.RequireAuthorization(JobEnginePolicies.PacksWrite);
// Search and discovery endpoints
group.MapGet("search", SearchPacks)
.WithName("Registry_SearchPacks")
.WithDescription(_t("orchestrator.pack_registry.search_packs_description"));
group.MapGet("by-tag/{tag}", GetPacksByTag)
.WithName("Registry_GetPacksByTag")
.WithDescription(_t("orchestrator.pack_registry.get_packs_by_tag_description"));
group.MapGet("popular", GetPopularPacks)
.WithName("Registry_GetPopularPacks")
.WithDescription(_t("orchestrator.pack_registry.get_popular_packs_description"));
group.MapGet("recent", GetRecentPacks)
.WithName("Registry_GetRecentPacks")
.WithDescription(_t("orchestrator.pack_registry.get_recent_packs_description"));
// Statistics endpoint
group.MapGet("stats", GetStats)
.WithName("Registry_GetStats")
.WithDescription(_t("orchestrator.pack_registry.stats_description"));
return group;
}
// ========== Pack CRUD Endpoints ==========
private static async Task<IResult> CreatePack(
HttpContext context,
[FromBody] CreatePackRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.Name))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", _t("orchestrator.pack_registry.error.name_required"), null, null));
}
if (string.IsNullOrWhiteSpace(request.DisplayName))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", _t("orchestrator.pack_registry.error.display_name_required"), null, null));
}
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
// Check for existing pack with same name
var existing = await repository.GetPackByNameAsync(tenantId, request.Name.ToLowerInvariant(), cancellationToken);
if (existing is not null)
{
return Results.Conflict(new PackRegistryErrorResponse(
"duplicate_name", _t("orchestrator.pack_registry.error.pack_name_exists", request.Name), existing.PackId, null));
}
var pack = Pack.Create(
packId: Guid.NewGuid(),
tenantId: tenantId,
projectId: request.ProjectId,
name: request.Name,
displayName: request.DisplayName,
description: request.Description,
createdBy: actor,
metadata: request.Metadata,
tags: request.Tags,
iconUri: request.IconUri,
createdAt: now);
await repository.CreatePackAsync(pack, cancellationToken);
return Results.Created($"/api/v1/jobengine/registry/packs/{pack.PackId}", PackResponse.FromDomain(pack));
}
private static async Task<IResult> GetPackById(
HttpContext context,
[FromRoute] Guid packId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var pack = await repository.GetPackByIdAsync(tenantId, packId, cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", _t("orchestrator.pack_registry.error.pack_id_not_found", packId), packId, null));
}
return Results.Ok(PackResponse.FromDomain(pack));
}
private static async Task<IResult> GetPackByName(
HttpContext context,
[FromRoute] string name,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var pack = await repository.GetPackByNameAsync(tenantId, name.ToLowerInvariant(), cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Pack '{name}' not found", null, null));
}
return Results.Ok(PackResponse.FromDomain(pack));
}
private static async Task<IResult> ListPacks(
HttpContext context,
[FromQuery] string? projectId,
[FromQuery] string? status,
[FromQuery] string? search,
[FromQuery] string? tag,
[FromQuery] int? limit,
[FromQuery] int? offset,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? DefaultLimit, MaxLimit);
var effectiveOffset = offset ?? 0;
PackStatus? statusFilter = null;
if (!string.IsNullOrEmpty(status) && Enum.TryParse<PackStatus>(status, true, out var parsed))
{
statusFilter = parsed;
}
var packs = await repository.ListPacksAsync(
tenantId, projectId, statusFilter, search, tag,
effectiveLimit, effectiveOffset, cancellationToken);
var totalCount = await repository.CountPacksAsync(
tenantId, projectId, statusFilter, search, tag, cancellationToken);
var responses = packs.Select(PackResponse.FromDomain).ToList();
var nextCursor = responses.Count == effectiveLimit
? (effectiveOffset + effectiveLimit).ToString()
: null;
return Results.Ok(new PackListResponse(responses, totalCount, nextCursor));
}
private static async Task<IResult> UpdatePack(
HttpContext context,
[FromRoute] Guid packId,
[FromBody] UpdatePackRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var pack = await repository.GetPackByIdAsync(tenantId, packId, cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Pack {packId} not found", packId, null));
}
if (pack.IsTerminal)
{
return Results.Conflict(new PackRegistryErrorResponse(
"terminal_status", "Cannot update a pack in terminal status", packId, null));
}
var updated = pack with
{
DisplayName = request.DisplayName ?? pack.DisplayName,
Description = request.Description ?? pack.Description,
Metadata = request.Metadata ?? pack.Metadata,
Tags = request.Tags ?? pack.Tags,
IconUri = request.IconUri ?? pack.IconUri,
UpdatedAt = now,
UpdatedBy = actor
};
await repository.UpdatePackAsync(updated, cancellationToken);
return Results.Ok(PackResponse.FromDomain(updated));
}
private static async Task<IResult> UpdatePackStatus(
HttpContext context,
[FromRoute] Guid packId,
[FromBody] UpdatePackStatusRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.Status))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "Status is required", packId, null));
}
if (!Enum.TryParse<PackStatus>(request.Status, true, out var newStatus))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_status", $"Invalid status: {request.Status}", packId, null));
}
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var pack = await repository.GetPackByIdAsync(tenantId, packId, cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Pack {packId} not found", packId, null));
}
// Validate status transition
var canTransition = newStatus switch
{
PackStatus.Published => pack.CanPublish,
PackStatus.Deprecated => pack.CanDeprecate,
PackStatus.Archived => pack.CanArchive,
PackStatus.Draft => false, // Cannot go back to draft
_ => false
};
if (!canTransition)
{
return Results.Conflict(new PackRegistryErrorResponse(
"invalid_transition", $"Cannot transition from {pack.Status} to {newStatus}", packId, null));
}
DateTimeOffset? publishedAt = newStatus == PackStatus.Published ? now : pack.PublishedAt;
string? publishedBy = newStatus == PackStatus.Published ? actor : pack.PublishedBy;
await repository.UpdatePackStatusAsync(
tenantId, packId, newStatus, actor, publishedAt, publishedBy, cancellationToken);
var updated = pack.WithStatus(newStatus, actor, now);
return Results.Ok(PackResponse.FromDomain(updated));
}
private static async Task<IResult> DeletePack(
HttpContext context,
[FromRoute] Guid packId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var pack = await repository.GetPackByIdAsync(tenantId, packId, cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Pack {packId} not found", packId, null));
}
if (pack.Status != PackStatus.Draft)
{
return Results.Conflict(new PackRegistryErrorResponse(
"not_draft", "Only draft packs can be deleted", packId, null));
}
if (pack.VersionCount > 0)
{
return Results.Conflict(new PackRegistryErrorResponse(
"has_versions", "Cannot delete pack with versions", packId, null));
}
var deleted = await repository.DeletePackAsync(tenantId, packId, cancellationToken);
if (!deleted)
{
return Results.Conflict(new PackRegistryErrorResponse(
"delete_failed", "Failed to delete pack", packId, null));
}
return Results.NoContent();
}
// ========== Pack Version Endpoints ==========
private static async Task<IResult> CreatePackVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromBody] CreatePackVersionRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.Version))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "Version is required", packId, null));
}
if (string.IsNullOrWhiteSpace(request.ArtifactUri))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "ArtifactUri is required", packId, null));
}
if (string.IsNullOrWhiteSpace(request.ArtifactDigest))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "ArtifactDigest is required", packId, null));
}
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var pack = await repository.GetPackByIdAsync(tenantId, packId, cancellationToken);
if (pack is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Pack {packId} not found", packId, null));
}
if (!pack.CanAddVersion)
{
return Results.Conflict(new PackRegistryErrorResponse(
"cannot_add_version", $"Cannot add version to pack in {pack.Status} status", packId, null));
}
// Check for duplicate version
var existing = await repository.GetVersionAsync(tenantId, packId, request.Version, cancellationToken);
if (existing is not null)
{
return Results.Conflict(new PackRegistryErrorResponse(
"duplicate_version", $"Version {request.Version} already exists", packId, existing.PackVersionId));
}
var version = PackVersion.Create(
packVersionId: Guid.NewGuid(),
tenantId: tenantId,
packId: packId,
version: request.Version,
semVer: request.SemVer,
artifactUri: request.ArtifactUri,
artifactDigest: request.ArtifactDigest,
artifactMimeType: request.ArtifactMimeType,
artifactSizeBytes: request.ArtifactSizeBytes,
manifestJson: request.ManifestJson,
manifestDigest: request.ManifestDigest,
releaseNotes: request.ReleaseNotes,
minEngineVersion: request.MinEngineVersion,
dependencies: request.Dependencies,
createdBy: actor,
metadata: request.Metadata,
createdAt: now);
await repository.CreateVersionAsync(version, cancellationToken);
// Update pack version count
var updatedPack = pack.WithVersionAdded(request.Version, actor, now);
await repository.UpdatePackAsync(updatedPack, cancellationToken);
return Results.Created(
$"/api/v1/jobengine/registry/packs/{packId}/versions/{version.PackVersionId}",
PackVersionResponse.FromDomain(version));
}
private static async Task<IResult> ListVersions(
HttpContext context,
[FromRoute] Guid packId,
[FromQuery] string? status,
[FromQuery] int? limit,
[FromQuery] int? offset,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? DefaultLimit, MaxLimit);
var effectiveOffset = offset ?? 0;
PackVersionStatus? statusFilter = null;
if (!string.IsNullOrEmpty(status) && Enum.TryParse<PackVersionStatus>(status, true, out var parsed))
{
statusFilter = parsed;
}
var versions = await repository.ListVersionsAsync(
tenantId, packId, statusFilter, effectiveLimit, effectiveOffset, cancellationToken);
var totalCount = await repository.CountVersionsAsync(
tenantId, packId, statusFilter, cancellationToken);
var responses = versions.Select(PackVersionResponse.FromDomain).ToList();
var nextCursor = responses.Count == effectiveLimit
? (effectiveOffset + effectiveLimit).ToString()
: null;
return Results.Ok(new PackVersionListResponse(responses, totalCount, nextCursor));
}
private static async Task<IResult> GetVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] string version,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var packVersion = await repository.GetVersionAsync(tenantId, packId, version, cancellationToken);
if (packVersion is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {version} not found for pack {packId}", packId, null));
}
return Results.Ok(PackVersionResponse.FromDomain(packVersion));
}
private static async Task<IResult> GetLatestVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromQuery] bool? includePrerelease,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var version = await repository.GetLatestVersionAsync(
tenantId, packId, includePrerelease ?? false, cancellationToken);
if (version is null)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"No published versions found for pack {packId}", packId, null));
}
return Results.Ok(PackVersionResponse.FromDomain(version));
}
private static async Task<IResult> UpdateVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] Guid packVersionId,
[FromBody] UpdatePackVersionRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var version = await repository.GetVersionByIdAsync(tenantId, packVersionId, cancellationToken);
if (version is null || version.PackId != packId)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {packVersionId} not found", packId, packVersionId));
}
if (version.IsTerminal)
{
return Results.Conflict(new PackRegistryErrorResponse(
"terminal_status", "Cannot update version in terminal status", packId, packVersionId));
}
var updated = version with
{
ReleaseNotes = request.ReleaseNotes ?? version.ReleaseNotes,
Metadata = request.Metadata ?? version.Metadata,
UpdatedAt = now,
UpdatedBy = actor
};
await repository.UpdateVersionAsync(updated, cancellationToken);
return Results.Ok(PackVersionResponse.FromDomain(updated));
}
private static async Task<IResult> UpdateVersionStatus(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] Guid packVersionId,
[FromBody] UpdatePackVersionStatusRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.Status))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "Status is required", packId, packVersionId));
}
if (!Enum.TryParse<PackVersionStatus>(request.Status, true, out var newStatus))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_status", $"Invalid status: {request.Status}", packId, packVersionId));
}
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var version = await repository.GetVersionByIdAsync(tenantId, packVersionId, cancellationToken);
if (version is null || version.PackId != packId)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {packVersionId} not found", packId, packVersionId));
}
// Validate status transition
var canTransition = newStatus switch
{
PackVersionStatus.Published => version.CanPublish,
PackVersionStatus.Deprecated => version.CanDeprecate,
PackVersionStatus.Archived => version.CanArchive,
PackVersionStatus.Draft => false,
_ => false
};
if (!canTransition)
{
return Results.Conflict(new PackRegistryErrorResponse(
"invalid_transition", $"Cannot transition from {version.Status} to {newStatus}", packId, packVersionId));
}
if (newStatus == PackVersionStatus.Deprecated && string.IsNullOrWhiteSpace(request.DeprecationReason))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "DeprecationReason is required when deprecating", packId, packVersionId));
}
DateTimeOffset? publishedAt = newStatus == PackVersionStatus.Published ? now : version.PublishedAt;
string? publishedBy = newStatus == PackVersionStatus.Published ? actor : version.PublishedBy;
DateTimeOffset? deprecatedAt = newStatus == PackVersionStatus.Deprecated ? now : version.DeprecatedAt;
string? deprecatedBy = newStatus == PackVersionStatus.Deprecated ? actor : version.DeprecatedBy;
await repository.UpdateVersionStatusAsync(
tenantId, packVersionId, newStatus, actor,
publishedAt, publishedBy,
deprecatedAt, deprecatedBy, request.DeprecationReason,
cancellationToken);
var updated = newStatus == PackVersionStatus.Deprecated
? version.WithDeprecation(actor, request.DeprecationReason, now)
: version.WithStatus(newStatus, actor, now);
return Results.Ok(PackVersionResponse.FromDomain(updated));
}
private static async Task<IResult> SignVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] Guid packVersionId,
[FromBody] SignPackVersionRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(request.SignatureUri))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "SignatureUri is required", packId, packVersionId));
}
if (string.IsNullOrWhiteSpace(request.SignatureAlgorithm))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "SignatureAlgorithm is required", packId, packVersionId));
}
var tenantId = tenantResolver.Resolve(context);
var actor = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var version = await repository.GetVersionByIdAsync(tenantId, packVersionId, cancellationToken);
if (version is null || version.PackId != packId)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {packVersionId} not found", packId, packVersionId));
}
if (version.IsSigned)
{
return Results.Conflict(new PackRegistryErrorResponse(
"already_signed", "Version is already signed", packId, packVersionId));
}
await repository.UpdateVersionSignatureAsync(
tenantId, packVersionId,
request.SignatureUri, request.SignatureAlgorithm,
actor, now,
cancellationToken);
var signed = version.WithSignature(request.SignatureUri, request.SignatureAlgorithm, actor, now);
return Results.Ok(PackVersionResponse.FromDomain(signed));
}
private static async Task<IResult> DownloadVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] Guid packVersionId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var version = await repository.GetVersionByIdAsync(tenantId, packVersionId, cancellationToken);
if (version is null || version.PackId != packId)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {packVersionId} not found", packId, packVersionId));
}
if (version.Status != PackVersionStatus.Published)
{
return Results.Conflict(new PackRegistryErrorResponse(
"not_published", "Only published versions can be downloaded", packId, packVersionId));
}
// Increment download count
await repository.IncrementDownloadCountAsync(tenantId, packVersionId, cancellationToken);
return Results.Ok(new PackVersionDownloadResponse(
version.PackVersionId,
version.Version,
version.ArtifactUri,
version.ArtifactDigest,
version.ArtifactMimeType,
version.ArtifactSizeBytes,
version.SignatureUri,
version.SignatureAlgorithm));
}
private static async Task<IResult> DeleteVersion(
HttpContext context,
[FromRoute] Guid packId,
[FromRoute] Guid packVersionId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var version = await repository.GetVersionByIdAsync(tenantId, packVersionId, cancellationToken);
if (version is null || version.PackId != packId)
{
return Results.NotFound(new PackRegistryErrorResponse(
"not_found", $"Version {packVersionId} not found", packId, packVersionId));
}
if (version.Status != PackVersionStatus.Draft)
{
return Results.Conflict(new PackRegistryErrorResponse(
"not_draft", "Only draft versions can be deleted", packId, packVersionId));
}
var deleted = await repository.DeleteVersionAsync(tenantId, packVersionId, cancellationToken);
if (!deleted)
{
return Results.Conflict(new PackRegistryErrorResponse(
"delete_failed", "Failed to delete version", packId, packVersionId));
}
return Results.NoContent();
}
// ========== Search and Discovery Endpoints ==========
private static async Task<IResult> SearchPacks(
HttpContext context,
[FromQuery] string query,
[FromQuery] string? status,
[FromQuery] int? limit,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
if (string.IsNullOrWhiteSpace(query))
{
return Results.BadRequest(new PackRegistryErrorResponse(
"invalid_request", "Query is required", null, null));
}
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? DefaultLimit, MaxLimit);
PackStatus? statusFilter = null;
if (!string.IsNullOrEmpty(status) && Enum.TryParse<PackStatus>(status, true, out var parsed))
{
statusFilter = parsed;
}
var packs = await repository.SearchPacksAsync(
tenantId, query, statusFilter, effectiveLimit, cancellationToken);
var responses = packs.Select(PackResponse.FromDomain).ToList();
return Results.Ok(new PackSearchResponse(responses, query));
}
private static async Task<IResult> GetPacksByTag(
HttpContext context,
[FromRoute] string tag,
[FromQuery] int? limit,
[FromQuery] int? offset,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? DefaultLimit, MaxLimit);
var effectiveOffset = offset ?? 0;
var packs = await repository.GetPacksByTagAsync(
tenantId, tag, effectiveLimit, effectiveOffset, cancellationToken);
var responses = packs.Select(PackResponse.FromDomain).ToList();
var nextCursor = responses.Count == effectiveLimit
? (effectiveOffset + effectiveLimit).ToString()
: null;
return Results.Ok(new PackListResponse(responses, responses.Count, nextCursor));
}
private static async Task<IResult> GetPopularPacks(
HttpContext context,
[FromQuery] int? limit,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? 10, 50);
var packs = await repository.GetPopularPacksAsync(tenantId, effectiveLimit, cancellationToken);
var responses = packs.Select(PackResponse.FromDomain).ToList();
return Results.Ok(new PackListResponse(responses, responses.Count, null));
}
private static async Task<IResult> GetRecentPacks(
HttpContext context,
[FromQuery] int? limit,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = Math.Min(limit ?? 10, 50);
var packs = await repository.GetRecentPacksAsync(tenantId, effectiveLimit, cancellationToken);
var responses = packs.Select(PackResponse.FromDomain).ToList();
return Results.Ok(new PackListResponse(responses, responses.Count, null));
}
private static async Task<IResult> GetStats(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRegistryRepository repository,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
var stats = await repository.GetStatsAsync(tenantId, cancellationToken);
return Results.Ok(new PackRegistryStatsResponse(
stats.TotalPacks,
stats.PublishedPacks,
stats.TotalVersions,
stats.PublishedVersions,
stats.TotalDownloads,
stats.LastUpdatedAt));
}
}
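The list endpoints in this file page with an offset-encoded cursor: the cursor string is just the next offset, and it is emitted only when the returned page came back full. A minimal sketch of that scheme (Python, illustrative only):

```python
def page(items, limit, cursor=None):
    """Offset-cursor pagination mirroring ListPacks/ListVersions:
    the cursor is the stringified offset; a next cursor is emitted
    only when the page is full (meaning more rows may exist)."""
    offset = int(cursor) if cursor else 0
    window = items[offset:offset + limit]
    next_cursor = str(offset + limit) if len(window) == limit else None
    return window, next_cursor

data = list(range(7))
p1, c1 = page(data, 3)        # first page
p2, c2 = page(data, 3, c1)    # second page
p3, c3 = page(data, 3, c2)    # final partial page, no further cursor
```

Note the scheme's known trade-off: when the total count is an exact multiple of the page size, the client makes one extra request that returns an empty page before the cursor goes null.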


@@ -1,379 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Postgres;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for quota management.
/// </summary>
public static class QuotaEndpoints
{
/// <summary>
/// Maps quota endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapQuotaEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/quotas")
.WithTags("Orchestrator Quotas")
.RequireAuthorization(JobEnginePolicies.Quota)
.RequireTenant();
// Quota CRUD operations
group.MapGet(string.Empty, ListQuotas)
.WithName("Orchestrator_ListQuotas")
.WithDescription(_t("orchestrator.quota.list_description"));
group.MapGet("{quotaId:guid}", GetQuota)
.WithName("Orchestrator_GetQuota")
.WithDescription(_t("orchestrator.quota.get_description"));
group.MapPost(string.Empty, CreateQuota)
.WithName("Orchestrator_CreateQuota")
.WithDescription(_t("orchestrator.quota.create_description"));
group.MapPut("{quotaId:guid}", UpdateQuota)
.WithName("Orchestrator_UpdateQuota")
.WithDescription(_t("orchestrator.quota.update_description"));
group.MapDelete("{quotaId:guid}", DeleteQuota)
.WithName("Orchestrator_DeleteQuota")
.WithDescription(_t("orchestrator.quota.delete_description"));
// Quota control operations
group.MapPost("{quotaId:guid}/pause", PauseQuota)
.WithName("Orchestrator_PauseQuota")
.WithDescription(_t("orchestrator.quota.pause_description"));
group.MapPost("{quotaId:guid}/resume", ResumeQuota)
.WithName("Orchestrator_ResumeQuota")
.WithDescription(_t("orchestrator.quota.resume_description"));
// Quota summary
group.MapGet("summary", GetQuotaSummary)
.WithName("Orchestrator_GetQuotaSummary")
.WithDescription(_t("orchestrator.quota.summary_description"));
return group;
}
private static async Task<IResult> ListQuotas(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
[FromQuery] string? jobType = null,
[FromQuery] bool? paused = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var quotas = await repository.ListAsync(
tenantId,
jobType,
paused,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = quotas.Select(QuotaResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new QuotaListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetQuota(
HttpContext context,
[FromRoute] Guid quotaId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var quota = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
if (quota is null)
{
return Results.NotFound();
}
return Results.Ok(QuotaResponse.FromDomain(quota));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CreateQuota(
HttpContext context,
[FromBody] CreateQuotaRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
// Validate request
if (request.MaxActive <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.max_active_positive") });
if (request.MaxPerHour <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.max_per_hour_positive") });
if (request.BurstCapacity <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.burst_capacity_positive") });
if (request.RefillRate <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.refill_rate_positive") });
var now = DateTimeOffset.UtcNow;
var quota = new Quota(
QuotaId: Guid.NewGuid(),
TenantId: tenantId,
JobType: request.JobType,
MaxActive: request.MaxActive,
MaxPerHour: request.MaxPerHour,
BurstCapacity: request.BurstCapacity,
RefillRate: request.RefillRate,
CurrentTokens: request.BurstCapacity,
LastRefillAt: now,
CurrentActive: 0,
CurrentHourCount: 0,
CurrentHourStart: new DateTimeOffset(now.Year, now.Month, now.Day, now.Hour, 0, 0, now.Offset),
Paused: false,
PauseReason: null,
QuotaTicket: null,
CreatedAt: now,
UpdatedAt: now,
UpdatedBy: actorId);
await repository.CreateAsync(quota, cancellationToken).ConfigureAwait(false);
return Results.Created($"/api/v1/jobengine/quotas/{quota.QuotaId}", QuotaResponse.FromDomain(quota));
}
catch (DuplicateQuotaException ex)
{
return Results.Conflict(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> UpdateQuota(
HttpContext context,
[FromRoute] Guid quotaId,
[FromBody] UpdateQuotaRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var quota = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
if (quota is null)
{
return Results.NotFound();
}
// Validate request
if (request.MaxActive.HasValue && request.MaxActive <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.max_active_positive") });
if (request.MaxPerHour.HasValue && request.MaxPerHour <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.max_per_hour_positive") });
if (request.BurstCapacity.HasValue && request.BurstCapacity <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.burst_capacity_positive") });
if (request.RefillRate.HasValue && request.RefillRate <= 0)
return Results.BadRequest(new { error = _t("orchestrator.quota.error.refill_rate_positive") });
var updated = quota with
{
MaxActive = request.MaxActive ?? quota.MaxActive,
MaxPerHour = request.MaxPerHour ?? quota.MaxPerHour,
BurstCapacity = request.BurstCapacity ?? quota.BurstCapacity,
RefillRate = request.RefillRate ?? quota.RefillRate,
UpdatedAt = DateTimeOffset.UtcNow,
UpdatedBy = actorId
};
await repository.UpdateAsync(updated, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaResponse.FromDomain(updated));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> DeleteQuota(
HttpContext context,
[FromRoute] Guid quotaId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var deleted = await repository.DeleteAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
if (!deleted)
{
return Results.NotFound();
}
return Results.NoContent();
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> PauseQuota(
HttpContext context,
[FromRoute] Guid quotaId,
[FromBody] PauseQuotaRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var quota = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
if (quota is null)
{
return Results.NotFound();
}
if (string.IsNullOrWhiteSpace(request.Reason))
{
return Results.BadRequest(new { error = _t("orchestrator.quota.error.pause_reason_required") });
}
await repository.PauseAsync(tenantId, quotaId, request.Reason, request.Ticket, actorId, cancellationToken)
.ConfigureAwait(false);
var updated = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaResponse.FromDomain(updated!));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ResumeQuota(
HttpContext context,
[FromRoute] Guid quotaId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var quota = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
if (quota is null)
{
return Results.NotFound();
}
await repository.ResumeAsync(tenantId, quotaId, actorId, cancellationToken).ConfigureAwait(false);
var updated = await repository.GetByIdAsync(tenantId, quotaId, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaResponse.FromDomain(updated!));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetQuotaSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Get all quotas for the tenant
var quotas = await repository.ListAsync(tenantId, null, null, 1000, 0, cancellationToken)
.ConfigureAwait(false);
var totalQuotas = quotas.Count;
var pausedQuotas = quotas.Count(q => q.Paused);
// Calculate utilization for each quota
var utilizationItems = quotas.Select(q =>
{
var tokenUtilization = q.BurstCapacity > 0
? 1.0 - (q.CurrentTokens / q.BurstCapacity)
: 0.0;
var concurrencyUtilization = q.MaxActive > 0
? (double)q.CurrentActive / q.MaxActive
: 0.0;
var hourlyUtilization = q.MaxPerHour > 0
? (double)q.CurrentHourCount / q.MaxPerHour
: 0.0;
return new QuotaUtilizationResponse(
QuotaId: q.QuotaId,
JobType: q.JobType,
TokenUtilization: Math.Round(tokenUtilization, 4),
ConcurrencyUtilization: Math.Round(concurrencyUtilization, 4),
HourlyUtilization: Math.Round(hourlyUtilization, 4),
Paused: q.Paused);
}).ToList();
var avgTokenUtilization = utilizationItems.Count > 0
? utilizationItems.Average(u => u.TokenUtilization)
: 0.0;
var avgConcurrencyUtilization = utilizationItems.Count > 0
? utilizationItems.Average(u => u.ConcurrencyUtilization)
: 0.0;
return Results.Ok(new QuotaSummaryResponse(
TotalQuotas: totalQuotas,
PausedQuotas: pausedQuotas,
AverageTokenUtilization: Math.Round(avgTokenUtilization, 4),
AverageConcurrencyUtilization: Math.Round(avgConcurrencyUtilization, 4),
Quotas: utilizationItems));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
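The `Quota` record created above carries token-bucket state (`BurstCapacity`, `RefillRate`, `CurrentTokens`, `LastRefillAt`) alongside concurrency (`MaxActive`) and hourly (`MaxPerHour`) caps; these endpoints only validate and persist it, and the refill itself happens elsewhere in the engine. A minimal sketch of the refill math those fields imply, in Python for illustration only (it assumes `RefillRate` is tokens per second, which this file does not state):

```python
from dataclasses import dataclass

@dataclass
class Bucket:
    burst_capacity: float  # maps to Quota.BurstCapacity
    refill_rate: float     # maps to Quota.RefillRate (assumed tokens/second)
    tokens: float          # maps to Quota.CurrentTokens
    last_refill: float     # maps to Quota.LastRefillAt, as a Unix timestamp

def refill(b: Bucket, now: float) -> None:
    """Top the bucket up for the time elapsed since the last refill."""
    elapsed = max(0.0, now - b.last_refill)
    b.tokens = min(b.burst_capacity, b.tokens + elapsed * b.refill_rate)
    b.last_refill = now

def try_consume(b: Bucket, now: float, cost: float = 1.0) -> bool:
    """Refill, then take `cost` tokens if enough are available."""
    refill(b, now)
    if b.tokens >= cost:
        b.tokens -= cost
        return True
    return False

# CreateQuota starts a bucket full: CurrentTokens = BurstCapacity.
b = Bucket(burst_capacity=10, refill_rate=2.0, tokens=10, last_refill=0.0)
assert all(try_consume(b, now=0.0) for _ in range(10))  # burst drains
assert not try_consume(b, now=0.0)                      # empty: denied
assert try_consume(b, now=1.0)                          # 2 tokens back after 1s
```

This also explains the summary endpoint's token utilization: `1.0 - CurrentTokens / BurstCapacity` is the fraction of burst headroom currently spent.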


@@ -1,384 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.Services;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for quota governance management.
/// </summary>
public static class QuotaGovernanceEndpoints
{
/// <summary>
/// Maps quota governance endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapQuotaGovernanceEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/quota-governance")
.WithTags("Orchestrator Quota Governance")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// Policy management
group.MapGet("policies", ListPolicies)
.WithName("Orchestrator_ListQuotaAllocationPolicies")
.WithDescription(_t("orchestrator.quota_governance.list_description"));
group.MapGet("policies/{policyId:guid}", GetPolicy)
.WithName("Orchestrator_GetQuotaAllocationPolicy")
.WithDescription(_t("orchestrator.quota_governance.get_description"));
group.MapPost("policies", CreatePolicy)
.WithName("Orchestrator_CreateQuotaAllocationPolicy")
.WithDescription(_t("orchestrator.quota_governance.create_description"))
.RequireAuthorization(JobEnginePolicies.Quota);
group.MapPut("policies/{policyId:guid}", UpdatePolicy)
.WithName("Orchestrator_UpdateQuotaAllocationPolicy")
.WithDescription(_t("orchestrator.quota_governance.update_description"))
.RequireAuthorization(JobEnginePolicies.Quota);
group.MapDelete("policies/{policyId:guid}", DeletePolicy)
.WithName("Orchestrator_DeleteQuotaAllocationPolicy")
.WithDescription(_t("orchestrator.quota_governance.delete_description"))
.RequireAuthorization(JobEnginePolicies.Quota);
// Quota allocation calculations
group.MapGet("allocation", CalculateAllocation)
.WithName("Orchestrator_CalculateQuotaAllocation")
.WithDescription(_t("orchestrator.quota_governance.evaluate_description"));
// Quota requests
group.MapPost("request", RequestQuota)
.WithName("Orchestrator_RequestQuota")
.WithDescription(_t("orchestrator.quota_governance.snapshot_description"))
.RequireAuthorization(JobEnginePolicies.Quota);
group.MapPost("release", ReleaseQuota)
.WithName("Orchestrator_ReleaseQuota")
.WithDescription(_t("orchestrator.quota_governance.simulate_description"))
.RequireAuthorization(JobEnginePolicies.Quota);
// Status and summary
group.MapGet("status", GetTenantStatus)
.WithName("Orchestrator_GetTenantQuotaStatus")
.WithDescription(_t("orchestrator.quota_governance.priority_description"));
group.MapGet("summary", GetSummary)
.WithName("Orchestrator_GetQuotaGovernanceSummary")
.WithDescription(_t("orchestrator.quota_governance.audit_description"));
// Scheduling check
group.MapGet("can-schedule", CanSchedule)
.WithName("Orchestrator_CanScheduleJob")
.WithDescription(_t("orchestrator.quota_governance.reorder_description"));
return group;
}
private static async Task<IResult> ListPolicies(
HttpContext context,
[FromServices] IQuotaGovernanceService service,
[FromQuery] bool? enabled = null,
CancellationToken cancellationToken = default)
{
try
{
var policies = await service.ListPoliciesAsync(enabled, cancellationToken).ConfigureAwait(false);
var responses = policies.Select(QuotaAllocationPolicyResponse.FromDomain).ToList();
return Results.Ok(new QuotaAllocationPolicyListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetPolicy(
HttpContext context,
[FromRoute] Guid policyId,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
var policy = await service.GetPolicyAsync(policyId, cancellationToken).ConfigureAwait(false);
if (policy is null)
{
return Results.NotFound();
}
return Results.Ok(QuotaAllocationPolicyResponse.FromDomain(policy));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CreatePolicy(
HttpContext context,
[FromBody] CreateQuotaAllocationPolicyRequest request,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
if (!Enum.TryParse<AllocationStrategy>(request.Strategy, ignoreCase: true, out var strategy))
{
return Results.BadRequest(new { error = _t("orchestrator.quota_governance.error.invalid_strategy", request.Strategy) });
}
var actorId = context.User?.Identity?.Name ?? "system";
var now = DateTimeOffset.UtcNow;
var policy = new QuotaAllocationPolicy(
PolicyId: Guid.NewGuid(),
Name: request.Name,
Description: request.Description,
Strategy: strategy,
TotalCapacity: request.TotalCapacity,
MinimumPerTenant: request.MinimumPerTenant,
MaximumPerTenant: request.MaximumPerTenant,
ReservedCapacity: request.ReservedCapacity,
AllowBurst: request.AllowBurst,
BurstMultiplier: request.BurstMultiplier,
Priority: request.Priority,
Active: request.Active,
JobType: request.JobType,
CreatedAt: now,
UpdatedAt: now,
UpdatedBy: actorId);
var created = await service.CreatePolicyAsync(policy, cancellationToken).ConfigureAwait(false);
return Results.Created($"/api/v1/jobengine/quota-governance/policies/{created.PolicyId}",
QuotaAllocationPolicyResponse.FromDomain(created));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> UpdatePolicy(
HttpContext context,
[FromRoute] Guid policyId,
[FromBody] UpdateQuotaAllocationPolicyRequest request,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
var existing = await service.GetPolicyAsync(policyId, cancellationToken).ConfigureAwait(false);
if (existing is null)
{
return Results.NotFound();
}
AllocationStrategy? newStrategy = null;
if (!string.IsNullOrEmpty(request.Strategy))
{
if (!Enum.TryParse<AllocationStrategy>(request.Strategy, ignoreCase: true, out var parsed))
{
return Results.BadRequest(new { error = _t("orchestrator.quota_governance.error.invalid_strategy", request.Strategy) });
}
newStrategy = parsed;
}
var actorId = context.User?.Identity?.Name ?? "system";
var now = DateTimeOffset.UtcNow;
var updated = existing with
{
Name = request.Name ?? existing.Name,
Description = request.Description ?? existing.Description,
Strategy = newStrategy ?? existing.Strategy,
TotalCapacity = request.TotalCapacity ?? existing.TotalCapacity,
MinimumPerTenant = request.MinimumPerTenant ?? existing.MinimumPerTenant,
MaximumPerTenant = request.MaximumPerTenant ?? existing.MaximumPerTenant,
ReservedCapacity = request.ReservedCapacity ?? existing.ReservedCapacity,
AllowBurst = request.AllowBurst ?? existing.AllowBurst,
BurstMultiplier = request.BurstMultiplier ?? existing.BurstMultiplier,
Priority = request.Priority ?? existing.Priority,
Active = request.Active ?? existing.Active,
JobType = request.JobType ?? existing.JobType,
UpdatedAt = now,
UpdatedBy = actorId
};
var result = await service.UpdatePolicyAsync(updated, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaAllocationPolicyResponse.FromDomain(result));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> DeletePolicy(
HttpContext context,
[FromRoute] Guid policyId,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
var deleted = await service.DeletePolicyAsync(policyId, cancellationToken).ConfigureAwait(false);
if (!deleted)
{
return Results.NotFound();
}
return Results.NoContent();
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CalculateAllocation(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaGovernanceService service,
[FromQuery] string? jobType = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await service.CalculateAllocationAsync(tenantId, jobType, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaAllocationResponse.FromDomain(result));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> RequestQuota(
HttpContext context,
[FromBody] RequestQuotaRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
if (request.RequestedAmount <= 0)
{
return Results.BadRequest(new { error = _t("orchestrator.quota_governance.error.amount_positive") });
}
var tenantId = tenantResolver.Resolve(context);
var result = await service.RequestQuotaAsync(tenantId, request.JobType, request.RequestedAmount, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaRequestResponse.FromDomain(result));
}
catch (ArgumentOutOfRangeException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ReleaseQuota(
HttpContext context,
[FromBody] ReleaseQuotaRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaGovernanceService service,
CancellationToken cancellationToken = default)
{
try
{
if (request.ReleasedAmount <= 0)
{
return Results.BadRequest(new { error = _t("orchestrator.quota_governance.error.amount_positive") });
}
var tenantId = tenantResolver.Resolve(context);
await service.ReleaseQuotaAsync(tenantId, request.JobType, request.ReleasedAmount, cancellationToken).ConfigureAwait(false);
return Results.Ok(new { released = true, amount = request.ReleasedAmount });
}
catch (ArgumentOutOfRangeException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetTenantStatus(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaGovernanceService service,
[FromQuery] string? jobType = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var status = await service.GetTenantStatusAsync(tenantId, jobType, cancellationToken).ConfigureAwait(false);
return Results.Ok(TenantQuotaStatusResponse.FromDomain(status));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetSummary(
HttpContext context,
[FromServices] IQuotaGovernanceService service,
[FromQuery] Guid? policyId = null,
CancellationToken cancellationToken = default)
{
try
{
var summary = await service.GetSummaryAsync(policyId, cancellationToken).ConfigureAwait(false);
return Results.Ok(QuotaGovernanceSummaryResponse.FromDomain(summary));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CanSchedule(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IQuotaGovernanceService service,
[FromQuery] string? jobType = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var result = await service.CanScheduleAsync(tenantId, jobType, cancellationToken).ConfigureAwait(false);
return Results.Ok(SchedulingCheckResponse.FromDomain(result));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
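`IQuotaGovernanceService` is only consumed here; its allocation math lives behind the interface. To make the policy fields concrete, here is one plausible proportional-share allocation under such a policy, in Python for illustration only (the weighting and the clamping order are assumptions, not the service's actual `AllocationStrategy` implementations):

```python
def allocate(total: float, reserved: float, minimum: float, maximum: float,
             weights: dict[str, float]) -> dict[str, float]:
    """Split (total - reserved) across tenants in proportion to weight,
    then clamp each share into [minimum, maximum]."""
    pool = max(0.0, total - reserved)   # ReservedCapacity is held back
    wsum = sum(weights.values()) or 1.0
    shares = {}
    for tenant, weight in weights.items():
        raw = pool * (weight / wsum)
        shares[tenant] = min(maximum, max(minimum, raw))
    return shares

# TotalCapacity=100, ReservedCapacity=20 -> 80 to divide; tenant "a" is
# weighted 3:1 over "b", but MaximumPerTenant caps its share at 50.
shares = allocate(total=100, reserved=20, minimum=5, maximum=50,
                  weights={"a": 3.0, "b": 1.0})
assert shares["a"] == 50 and shares["b"] == 20.0
```

Note the clamp can overcommit: if the per-tenant minima sum past the pool, this naive split hands out more than `total - reserved`, so a real allocator still needs a capacity check at scheduling time.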


@@ -1,544 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// v2 contract adapters for Pack-driven release control routes.
/// </summary>
public static class ReleaseControlV2Endpoints
{
public static IEndpointRouteBuilder MapReleaseControlV2Endpoints(this IEndpointRouteBuilder app)
{
MapApprovalsV2(app);
MapRunsV2(app);
MapEnvironmentsV2(app);
return app;
}
private static void MapApprovalsV2(IEndpointRouteBuilder app)
{
var approvals = app.MapGroup("/api/v1/approvals")
.WithTags("Approvals v2")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
approvals.MapGet(string.Empty, ListApprovals)
.WithName("ApprovalsV2_List")
.WithDescription("Return the v2 approval queue for the calling tenant, including per-request digest confidence, reachability-weighted risk score, and ops-data integrity score. Optionally filtered by status and target environment. Designed for the enhanced approval UX.");
approvals.MapGet("/{id}", GetApprovalDetail)
.WithName("ApprovalsV2_Get")
.WithDescription("Return the v2 decision packet for the specified approval, including the full policy gate evaluation trace, reachability-adjusted finding counts, confidence bands, and all structured evidence references required to make an informed approval decision.");
approvals.MapGet("/{id}/gates", GetApprovalGates)
.WithName("ApprovalsV2_Gates")
.WithDescription("Return the detailed gate evaluation trace for the specified v2 approval, showing each policy gate's inputs, computed verdict, confidence weight, and any override history. Used by approvers to understand the basis for automated gate results.");
approvals.MapGet("/{id}/evidence", GetApprovalEvidence)
.WithName("ApprovalsV2_Evidence")
.WithDescription("Return the structured evidence reference set attached to the specified v2 approval decision packet, including SBOM digests, attestation references, scan results, and provenance records. Used to verify the completeness of the evidence chain before approving.");
approvals.MapGet("/{id}/security-snapshot", GetApprovalSecuritySnapshot)
.WithName("ApprovalsV2_SecuritySnapshot")
.WithDescription("Return the security snapshot computed for the specified approval context, including reachability-adjusted critical and high finding counts (CritR, HighR), SBOM coverage percentage, and the weighted risk score used in the approval decision packet.");
approvals.MapGet("/{id}/ops-health", GetApprovalOpsHealth)
.WithName("ApprovalsV2_OpsHealth")
.WithDescription("Return the operational data-integrity confidence indicators for the specified approval, including staleness of scan data, missing coverage gaps, and pipeline signal freshness. Low confidence scores reduce the defensibility of approval decisions.");
approvals.MapPost("/{id}/decision", PostApprovalDecision)
.WithName("ApprovalsV2_Decision")
.WithDescription("Apply a structured decision action (approve, reject, defer, escalate) to the specified v2 approval, attributing the decision to the calling principal with an optional comment. Returns 409 if the approval is not in a state that accepts decisions.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
}
private static void MapRunsV2(IEndpointRouteBuilder app)
{
static void MapRunGroup(RouteGroupBuilder runs)
{
runs.MapGet("/{id}", GetRunDetail)
.WithDescription("Return the promotion run detail timeline for the specified run ID, including each pipeline stage with status, duration, and attached evidence references. Provides the full chronological execution narrative for a release promotion run.");
runs.MapGet("/{id}/steps", GetRunSteps)
.WithDescription("Return the checkpoint-level step list for the specified promotion run, with per-step status, start/end timestamps, and whether the step produced captured evidence. Used to navigate individual steps in a long-running promotion pipeline.");
runs.MapGet("/{id}/steps/{stepId}", GetRunStepDetail)
.WithDescription("Return the detailed record for a single promotion run step including its structured log output, captured evidence references, policy gate results, and duration. Used for deep inspection of a specific checkpoint within a promotion run.");
runs.MapPost("/{id}/rollback", TriggerRollback)
.WithDescription("Initiate a rollback of the specified promotion run, computing a guard-state projection that identifies any post-deployment state that must be unwound before the rollback can proceed. Returns the rollback plan with an estimated blast radius assessment.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
}
var apiRuns = app.MapGroup("/api/v1/runs")
.WithTags("Runs v2")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
MapRunGroup(apiRuns);
apiRuns.WithGroupName("runs-v2");
var legacyV1Runs = app.MapGroup("/v1/runs")
.WithTags("Runs v2")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
MapRunGroup(legacyV1Runs);
legacyV1Runs.WithGroupName("runs-v1-compat");
}
private static void MapEnvironmentsV2(IEndpointRouteBuilder app)
{
var environments = app.MapGroup("/api/v1/environments")
.WithTags("Environments v2")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
environments.MapGet("/{id}", GetEnvironmentDetail)
.WithName("EnvironmentsV2_Get")
.WithDescription("Return the standardized environment detail header for the specified environment ID, including its name, tier (dev/stage/prod), current active release, and promotion pipeline position. Used to populate the environment context in release dashboards.");
environments.MapGet("/{id}/deployments", GetEnvironmentDeployments)
.WithName("EnvironmentsV2_Deployments")
.WithDescription("Return the deployment history for the specified environment ordered by deployment timestamp descending, including each release version, deployment status, and rollback availability. Used for environment-scoped audit and change management views.");
environments.MapGet("/{id}/security-snapshot", GetEnvironmentSecuritySnapshot)
.WithName("EnvironmentsV2_SecuritySnapshot")
.WithDescription("Return the current security posture snapshot for the specified environment, including reachability-adjusted critical and high finding counts, SBOM coverage, and the top-ranked risks by exploitability. Refreshed on each new deployment or scan cycle.");
environments.MapGet("/{id}/evidence", GetEnvironmentEvidence)
.WithName("EnvironmentsV2_Evidence")
.WithDescription("Return the evidence snapshot and export references for the specified environment, including the active attestation bundle, SBOM digest, scan result references, and the evidence locker ID for compliance archiving. Used for environment-level attestation workflows.");
environments.MapGet("/{id}/ops-health", GetEnvironmentOpsHealth)
.WithName("EnvironmentsV2_OpsHealth")
.WithDescription("Return the operational data-confidence and health signals for the specified environment, including scan data staleness, missing SBOM coverage, pipeline signal freshness, and any active incidents affecting the environment's reliability score.");
}
private static IResult ListApprovals(
[FromQuery] string? status,
[FromQuery] string? targetEnvironment)
{
var rows = ApprovalEndpoints.SeedData.Approvals
.Select(ApprovalEndpoints.WithDerivedSignals)
.Select(ApprovalEndpoints.ToSummary)
.OrderByDescending(row => row.RequestedAt, StringComparer.Ordinal)
.AsEnumerable();
if (!string.IsNullOrWhiteSpace(status))
{
rows = rows.Where(row => string.Equals(row.Status, status, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(targetEnvironment))
{
rows = rows.Where(row => string.Equals(row.TargetEnvironment, targetEnvironment, StringComparison.OrdinalIgnoreCase));
}
return Results.Ok(rows.ToList());
}
private static IResult GetApprovalDetail(string id)
{
var approval = FindApproval(id);
return approval is null ? Results.NotFound() : Results.Ok(approval);
}
private static IResult GetApprovalGates(string id)
{
var approval = FindApproval(id);
return approval is null ? Results.NotFound() : Results.Ok(new
{
approvalId = approval.Id,
decisionDigest = approval.DecisionDigest,
gates = approval.GateResults.OrderBy(g => g.GateName, StringComparer.Ordinal).ToList(),
});
}
private static IResult GetApprovalEvidence(string id)
{
var approval = FindApproval(id);
return approval is null ? Results.NotFound() : Results.Ok(new
{
approvalId = approval.Id,
packet = approval.EvidencePacket,
manifestDigest = approval.ManifestDigest,
decisionDigest = approval.DecisionDigest,
});
}
private static IResult GetApprovalSecuritySnapshot(string id)
{
var approval = FindApproval(id);
return approval is null ? Results.NotFound() : Results.Ok(new
{
approvalId = approval.Id,
manifestDigest = approval.ManifestDigest,
risk = approval.RiskSnapshot,
reachability = approval.ReachabilityCoverage,
topFindings = BuildTopFindings(approval),
});
}
private static IResult GetApprovalOpsHealth(string id)
{
var approval = FindApproval(id);
return approval is null ? Results.NotFound() : Results.Ok(new
{
approvalId = approval.Id,
opsConfidence = approval.OpsConfidence,
impactedJobs = BuildImpactedJobs(approval.TargetEnvironment),
});
}
private static IResult PostApprovalDecision(string id, [FromBody] ApprovalDecisionRequest request)
{
var idx = ApprovalEndpoints.SeedData.Approvals.FindIndex(approval =>
string.Equals(approval.Id, id, StringComparison.OrdinalIgnoreCase));
if (idx < 0)
{
return Results.NotFound();
}
var approval = ApprovalEndpoints.WithDerivedSignals(ApprovalEndpoints.SeedData.Approvals[idx]);
var normalizedAction = (request.Action ?? string.Empty).Trim().ToLowerInvariant();
var actor = string.IsNullOrWhiteSpace(request.Actor) ? "release-manager" : request.Actor.Trim();
var timestamp = DateTimeOffset.Parse("2026-02-19T03:20:00Z").ToString("O");
var nextStatus = normalizedAction switch
{
"approve" => approval.CurrentApprovals + 1 >= approval.RequiredApprovals ? "approved" : approval.Status,
"reject" => "rejected",
"defer" => "pending",
"escalate" => "pending",
_ => approval.Status,
};
var updated = approval with
{
Status = nextStatus,
CurrentApprovals = normalizedAction == "approve"
? Math.Min(approval.RequiredApprovals, approval.CurrentApprovals + 1)
: approval.CurrentApprovals,
Actions = approval.Actions
.Concat(new[]
{
new ApprovalEndpoints.ApprovalActionRecordDto
{
Id = $"act-{approval.Actions.Count + 1}",
ApprovalId = approval.Id,
Action = normalizedAction is "approve" or "reject" ? normalizedAction : "comment",
Actor = actor,
Comment = string.IsNullOrWhiteSpace(request.Comment)
? $"Decision action: {normalizedAction}"
: request.Comment.Trim(),
Timestamp = timestamp,
},
})
.ToList(),
};
ApprovalEndpoints.SeedData.Approvals[idx] = updated;
return Results.Ok(ApprovalEndpoints.WithDerivedSignals(updated));
}
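The `nextStatus` switch above implements a simple approval quorum: `approve` counts toward `RequiredApprovals` and flips the status once the count is reached, `reject` is immediate, and `defer`/`escalate` fall back to `pending`. The same transition, sketched in Python for illustration (the function and its shape are mine, not part of the service):

```python
def apply_decision(status: str, current: int, required: int, action: str):
    """Mirror the nextStatus switch: approvals accumulate to a quorum,
    reject is terminal, defer/escalate return to pending."""
    action = action.strip().lower()
    if action == "approve":
        current = min(required, current + 1)
        if current >= required:
            status = "approved"
    elif action == "reject":
        status = "rejected"
    elif action in ("defer", "escalate"):
        status = "pending"
    return status, current

assert apply_decision("pending", 1, 2, "approve") == ("approved", 2)
assert apply_decision("pending", 0, 2, "approve") == ("pending", 1)
assert apply_decision("pending", 1, 2, "reject") == ("rejected", 1)
```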
private static IResult GetRunDetail(string id)
{
if (!RunCatalog.TryGetValue(id, out var run))
{
return Results.NotFound();
}
return Results.Ok(run with
{
Steps = run.Steps.OrderBy(step => step.Order).ToList(),
});
}
private static IResult GetRunSteps(string id)
{
if (!RunCatalog.TryGetValue(id, out var run))
{
return Results.NotFound();
}
return Results.Ok(run.Steps.OrderBy(step => step.Order).ToList());
}
private static IResult GetRunStepDetail(string id, string stepId)
{
if (!RunCatalog.TryGetValue(id, out var run))
{
return Results.NotFound();
}
var step = run.Steps.FirstOrDefault(item => string.Equals(item.StepId, stepId, StringComparison.OrdinalIgnoreCase));
if (step is null)
{
return Results.NotFound();
}
return Results.Ok(step);
}
private static IResult TriggerRollback(string id, [FromBody] RollbackRequest? request)
{
if (!RunCatalog.TryGetValue(id, out var run))
{
return Results.NotFound();
}
var rollbackAllowed = string.Equals(run.Status, "failed", StringComparison.OrdinalIgnoreCase)
|| string.Equals(run.Status, "warning", StringComparison.OrdinalIgnoreCase)
|| string.Equals(run.Status, "degraded", StringComparison.OrdinalIgnoreCase);
if (!rollbackAllowed)
{
return Results.BadRequest(new
{
error = "rollback_guard_blocked",
reason = "Rollback is only allowed when run status is failed/warning/degraded.",
});
}
var rollbackRunId = $"rb-{id}";
return Results.Accepted($"/api/v1/runs/{rollbackRunId}", new
{
rollbackRunId,
sourceRunId = id,
scope = request?.Scope ?? "full",
status = "queued",
requestedAt = "2026-02-19T03:22:00Z",
preview = request?.Preview ?? true,
});
}
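The rollback guard above only admits runs in an unhealthy state, compared case-insensitively. Sketched in Python (illustrative; `rollback_allowed` is a hypothetical name):

```python
# States from which the guard permits a rollback.
ROLLBACKABLE_STATUSES = {"failed", "warning", "degraded"}

def rollback_allowed(run_status: str) -> bool:
    # Case-insensitive membership check, mirroring the OrdinalIgnoreCase
    # comparisons in the handler.
    return run_status.lower() in ROLLBACKABLE_STATUSES
```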
private static IResult GetEnvironmentDetail(string id)
{
if (!EnvironmentCatalog.TryGetValue(id, out var env))
{
return Results.NotFound();
}
return Results.Ok(env);
}
private static IResult GetEnvironmentDeployments(string id)
{
if (!EnvironmentCatalog.TryGetValue(id, out var env))
{
return Results.NotFound();
}
return Results.Ok(env.RecentDeployments.OrderByDescending(item => item.DeployedAt).ToList());
}
private static IResult GetEnvironmentSecuritySnapshot(string id)
{
if (!EnvironmentCatalog.TryGetValue(id, out var env))
{
return Results.NotFound();
}
return Results.Ok(new
{
environmentId = env.EnvironmentId,
manifestDigest = env.ManifestDigest,
risk = env.RiskSnapshot,
reachability = env.ReachabilityCoverage,
sbomStatus = env.SbomStatus,
topFindings = env.TopFindings,
});
}
private static IResult GetEnvironmentEvidence(string id)
{
if (!EnvironmentCatalog.TryGetValue(id, out var env))
{
return Results.NotFound();
}
return Results.Ok(new
{
environmentId = env.EnvironmentId,
evidence = env.Evidence,
});
}
private static IResult GetEnvironmentOpsHealth(string id)
{
if (!EnvironmentCatalog.TryGetValue(id, out var env))
{
return Results.NotFound();
}
return Results.Ok(new
{
environmentId = env.EnvironmentId,
opsConfidence = env.OpsConfidence,
impactedJobs = BuildImpactedJobs(env.EnvironmentName),
});
}
private static ApprovalEndpoints.ApprovalDto? FindApproval(string id)
{
var approval = ApprovalEndpoints.SeedData.Approvals
.FirstOrDefault(item => string.Equals(item.Id, id, StringComparison.OrdinalIgnoreCase));
return approval is null ? null : ApprovalEndpoints.WithDerivedSignals(approval);
}
private static IReadOnlyList<object> BuildTopFindings(ApprovalEndpoints.ApprovalDto approval)
{
return new[]
{
new
{
cve = "CVE-2026-1234",
component = approval.ReleaseComponents.FirstOrDefault()?.Name ?? "unknown-component",
severity = "critical",
reachability = "reachable",
},
new
{
cve = "CVE-2026-2088",
component = approval.ReleaseComponents.Skip(1).FirstOrDefault()?.Name ?? approval.ReleaseComponents.FirstOrDefault()?.Name ?? "unknown-component",
severity = "high",
reachability = "not_reachable",
},
};
}
private static IReadOnlyList<object> BuildImpactedJobs(string targetEnvironment)
{
var ops = ReleaseControlSignalCatalog.GetOpsConfidence(targetEnvironment);
return ops.Signals
.Select((signal, index) => new
{
job = $"ops-job-{index + 1}",
signal,
status = ops.Status,
})
.ToList();
}
private static readonly IReadOnlyDictionary<string, RunDetailDto> RunCatalog =
new Dictionary<string, RunDetailDto>(StringComparer.OrdinalIgnoreCase)
{
["run-001"] = new(
RunId: "run-001",
ReleaseId: "rel-002",
ManifestDigest: "sha256:beef000000000000000000000000000000000000000000000000000000000002",
Status: "warning",
StartedAt: "2026-02-19T02:10:00Z",
CompletedAt: "2026-02-19T02:19:00Z",
RollbackGuard: "armed",
Steps:
[
new RunStepDto("step-01", 1, "Materialize Inputs", "passed", "2026-02-19T02:10:00Z", "2026-02-19T02:11:00Z", "/api/v1/evidence/thread/sha256-materialize", "/logs/run-001/step-01.log"),
new RunStepDto("step-02", 2, "Policy Evaluation", "passed", "2026-02-19T02:11:00Z", "2026-02-19T02:13:00Z", "/api/v1/evidence/thread/sha256-policy", "/logs/run-001/step-02.log"),
new RunStepDto("step-03", 3, "Deploy Stage", "warning", "2026-02-19T02:13:00Z", "2026-02-19T02:19:00Z", "/api/v1/evidence/thread/sha256-deploy", "/logs/run-001/step-03.log"),
]),
};
private static readonly IReadOnlyDictionary<string, EnvironmentDetailDto> EnvironmentCatalog =
new Dictionary<string, EnvironmentDetailDto>(StringComparer.OrdinalIgnoreCase)
{
["env-production"] = new(
EnvironmentId: "env-production",
EnvironmentName: "production",
Region: "us-east",
DeployStatus: "degraded",
SbomStatus: "stale",
ManifestDigest: "sha256:beef000000000000000000000000000000000000000000000000000000000002",
RiskSnapshot: ReleaseControlSignalCatalog.GetRiskSnapshot("rel-002", "production"),
ReachabilityCoverage: ReleaseControlSignalCatalog.GetCoverage("rel-002"),
OpsConfidence: ReleaseControlSignalCatalog.GetOpsConfidence("production"),
TopFindings:
[
"CVE-2026-1234 reachable in user-service",
"Runtime ingest lag reduces confidence to WARN",
],
RecentDeployments:
[
new EnvironmentDeploymentDto("run-001", "rel-002", "1.3.0-rc1", "warning", "2026-02-19T02:19:00Z"),
new EnvironmentDeploymentDto("run-000", "rel-001", "1.2.3", "passed", "2026-02-18T08:30:00Z"),
],
Evidence: new EnvironmentEvidenceDto(
"env-snapshot-production-20260219",
"sha256:evidence-production-20260219",
"/api/v1/evidence/thread/sha256:evidence-production-20260219")),
["env-staging"] = new(
EnvironmentId: "env-staging",
EnvironmentName: "staging",
Region: "us-east",
DeployStatus: "healthy",
SbomStatus: "fresh",
ManifestDigest: "sha256:beef000000000000000000000000000000000000000000000000000000000001",
RiskSnapshot: ReleaseControlSignalCatalog.GetRiskSnapshot("rel-001", "staging"),
ReachabilityCoverage: ReleaseControlSignalCatalog.GetCoverage("rel-001"),
OpsConfidence: ReleaseControlSignalCatalog.GetOpsConfidence("staging"),
TopFindings:
[
"No critical reachable findings.",
],
RecentDeployments:
[
new EnvironmentDeploymentDto("run-000", "rel-001", "1.2.3", "passed", "2026-02-18T08:30:00Z"),
],
Evidence: new EnvironmentEvidenceDto(
"env-snapshot-staging-20260219",
"sha256:evidence-staging-20260219",
"/api/v1/evidence/thread/sha256:evidence-staging-20260219")),
};
}
public sealed record ApprovalDecisionRequest(string Action, string? Comment, string? Actor);
public sealed record RollbackRequest(string? Scope, bool? Preview);
public sealed record RunDetailDto(
string RunId,
string ReleaseId,
string ManifestDigest,
string Status,
string StartedAt,
string CompletedAt,
string RollbackGuard,
IReadOnlyList<RunStepDto> Steps);
public sealed record RunStepDto(
string StepId,
int Order,
string Name,
string Status,
string StartedAt,
string CompletedAt,
string EvidenceThreadLink,
string LogArtifactLink);
public sealed record EnvironmentDetailDto(
string EnvironmentId,
string EnvironmentName,
string Region,
string DeployStatus,
string SbomStatus,
string ManifestDigest,
PromotionRiskSnapshot RiskSnapshot,
HybridReachabilityCoverage ReachabilityCoverage,
OpsDataConfidence OpsConfidence,
IReadOnlyList<string> TopFindings,
IReadOnlyList<EnvironmentDeploymentDto> RecentDeployments,
EnvironmentEvidenceDto Evidence);
public sealed record EnvironmentDeploymentDto(
string RunId,
string ReleaseId,
string ReleaseVersion,
string Status,
string DeployedAt);
public sealed record EnvironmentEvidenceDto(
string SnapshotId,
string DecisionDigest,
string ThreadLink);


@@ -1,180 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.WebService.Services;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Release dashboard endpoints consumed by the Console control plane.
/// </summary>
public static class ReleaseDashboardEndpoints
{
public static IEndpointRouteBuilder MapReleaseDashboardEndpoints(this IEndpointRouteBuilder app)
{
MapForPrefix(app, "/api/v1/release-orchestrator", includeRouteNames: true);
MapForPrefix(app, "/api/release-orchestrator", includeRouteNames: false);
return app;
}
private static void MapForPrefix(IEndpointRouteBuilder app, string prefix, bool includeRouteNames)
{
var group = app.MapGroup(prefix)
.WithTags("ReleaseDashboard")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
var dashboard = group.MapGet("/dashboard", GetDashboard)
.WithDescription("Return a consolidated release dashboard snapshot for the Console control plane, including pending approvals, active promotions, recent deployments, and environment health indicators. Used by the UI to populate the main release management view.");
if (includeRouteNames)
{
dashboard.WithName("ReleaseDashboard_Get");
}
var approve = group.MapPost("/promotions/{id}/approve", ApprovePromotion)
.WithDescription("Record an approval decision on the specified pending promotion request, allowing the associated release to advance to the next environment. The calling principal must hold the release approval scope. Returns 404 when the promotion ID does not exist.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
approve.WithName("ReleaseDashboard_ApprovePromotion");
}
var reject = group.MapPost("/promotions/{id}/reject", RejectPromotion)
.WithDescription("Record a rejection decision on the specified pending promotion request with an optional rejection reason, blocking the release from advancing. The calling principal must hold the release approval scope. Returns 404 when the promotion ID does not exist.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
reject.WithName("ReleaseDashboard_RejectPromotion");
}
}
private static IResult GetDashboard(ReleasePromotionDecisionStore decisionStore)
{
var approvals = decisionStore.Apply(ApprovalEndpoints.SeedData.Approvals);
var snapshot = ReleaseDashboardSnapshotBuilder.Build(approvals: approvals);
var releases = ReleaseEndpoints.SeedData.Releases;
var byStatus = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
{
["draft"] = releases.Count(r => string.Equals(r.Status, "draft", StringComparison.OrdinalIgnoreCase)),
["ready"] = releases.Count(r => string.Equals(r.Status, "ready", StringComparison.OrdinalIgnoreCase)),
["deploying"] = releases.Count(r => string.Equals(r.Status, "deploying", StringComparison.OrdinalIgnoreCase)),
["deployed"] = releases.Count(r => string.Equals(r.Status, "deployed", StringComparison.OrdinalIgnoreCase)),
["failed"] = releases.Count(r => string.Equals(r.Status, "failed", StringComparison.OrdinalIgnoreCase)),
};
var allGates = approvals.SelectMany(a => a.GateResults).ToList();
var gatesSummary = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase)
{
["pass"] = allGates.Count(g => string.Equals(g.Status, "passed", StringComparison.OrdinalIgnoreCase)),
["warn"] = allGates.Count(g => string.Equals(g.Status, "warning", StringComparison.OrdinalIgnoreCase)),
["block"] = allGates.Count(g => string.Equals(g.Status, "failed", StringComparison.OrdinalIgnoreCase)),
};
var recentActivity = snapshot.RecentReleases
.Select(r => new
{
r.Id,
r.Name,
r.Version,
r.Status,
r.CurrentEnvironment,
r.CreatedAt,
r.CreatedBy,
})
.ToList();
return Results.Ok(new
{
totalReleases = releases.Count,
byStatus,
pendingApprovals = snapshot.PendingApprovals.Count,
activeDeployments = snapshot.ActiveDeployments.Count,
gatesSummary,
recentActivity,
pipeline = snapshot.PipelineData,
pendingApprovalDetails = snapshot.PendingApprovals,
activeDeploymentDetails = snapshot.ActiveDeployments,
});
}
private static IResult ApprovePromotion(
string id,
HttpContext context,
ReleasePromotionDecisionStore decisionStore)
{
if (!decisionStore.TryApprove(
id,
ResolveActor(context),
comment: null,
out var approval,
out var error))
{
if (string.Equals(error, "promotion_not_found", StringComparison.Ordinal))
{
return Results.NotFound(new { message = $"Promotion '{id}' was not found." });
}
return Results.Conflict(new { message = $"Promotion '{id}' is not pending." });
}
if (approval is null)
{
return Results.NotFound(new { message = $"Promotion '{id}' was not found." });
}
return Results.Ok(new
{
success = true,
promotionId = id,
action = "approved",
status = approval.Status,
currentApprovals = approval.CurrentApprovals,
});
}
private static IResult RejectPromotion(
string id,
HttpContext context,
ReleasePromotionDecisionStore decisionStore,
[FromBody] RejectPromotionRequest? request)
{
if (!decisionStore.TryReject(
id,
ResolveActor(context),
request?.Reason,
out var approval,
out var error))
{
if (string.Equals(error, "promotion_not_found", StringComparison.Ordinal))
{
return Results.NotFound(new { message = $"Promotion '{id}' was not found." });
}
return Results.Conflict(new { message = $"Promotion '{id}' is not pending." });
}
if (approval is null)
{
return Results.NotFound(new { message = $"Promotion '{id}' was not found." });
}
return Results.Ok(new
{
success = true,
promotionId = id,
action = "rejected",
status = approval.Status,
reason = request?.Reason,
});
}
private static string ResolveActor(HttpContext context)
{
return context.Request.Headers["X-StellaOps-Actor"].FirstOrDefault()
?? context.User.Identity?.Name
?? "system";
}
public sealed record RejectPromotionRequest(string? Reason);
}


@@ -1,749 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.WebService.Services;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Release management endpoints for the Orchestrator service.
/// Provides CRUD and lifecycle operations for managed releases.
/// Routes: /api/release-orchestrator/releases
/// </summary>
public static class ReleaseEndpoints
{
private static readonly DateTimeOffset PreviewEvaluatedAt = DateTimeOffset.Parse("2026-02-19T03:15:00Z");
public static IEndpointRouteBuilder MapReleaseEndpoints(this IEndpointRouteBuilder app)
{
MapReleaseGroup(app, "/api/release-orchestrator/releases", includeRouteNames: true);
MapReleaseGroup(app, "/api/v1/release-orchestrator/releases", includeRouteNames: false);
return app;
}
private static void MapReleaseGroup(
IEndpointRouteBuilder app,
string prefix,
bool includeRouteNames)
{
var group = app.MapGroup(prefix)
.WithTags("Releases")
.RequireAuthorization(JobEnginePolicies.ReleaseRead)
.RequireTenant();
var list = group.MapGet(string.Empty, ListReleases)
.WithDescription("Return a paginated list of releases for the calling tenant, optionally filtered by status, environment, project, and creation time window. Each release record includes its name, version, current status, component count, and lifecycle timestamps.");
if (includeRouteNames)
{
list.WithName("Release_List");
}
var detail = group.MapGet("/{id}", GetRelease)
.WithDescription("Return the full release record for the specified ID including name, version, status, component list, approval gate state, and event history summary. Returns 404 when the release does not exist in the tenant.");
if (includeRouteNames)
{
detail.WithName("Release_Get");
}
var create = group.MapPost(string.Empty, CreateRelease)
.WithDescription("Create a new release record in Draft state. The release captures an intent to promote a versioned set of components through defined environments. Returns 409 if a release with the same name and version already exists.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
create.WithName("Release_Create");
}
var update = group.MapPatch("/{id}", UpdateRelease)
.WithDescription("Update mutable metadata on the specified release including description, target environment, and custom labels. Status transitions must be performed through the dedicated lifecycle endpoints. Returns 404 when the release does not exist.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
update.WithName("Release_Update");
}
var remove = group.MapDelete("/{id}", DeleteRelease)
.WithDescription("Permanently remove the specified release record. Only releases in Draft or Failed status can be deleted; returns 409 for releases in other states. All associated components and events are removed with the release record.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
remove.WithName("Release_Delete");
}
var ready = group.MapPost("/{id}/ready", MarkReady)
.WithDescription("Transition the specified release from Draft to Ready state, signalling that all components are assembled and the release is eligible for promotion gate evaluation. Returns 409 if the release is not in Draft state or required components are missing.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
ready.WithName("Release_MarkReady");
}
var promote = group.MapPost("/{id}/promote", RequestPromotion)
.WithDescription("Initiate the promotion workflow to advance the specified release to its next target environment, triggering policy gate evaluation. The promotion runs asynchronously; poll the release record or subscribe to events for outcome updates.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
promote.WithName("Release_Promote");
}
var deploy = group.MapPost("/{id}/deploy", Deploy)
.WithDescription("Trigger deployment of the specified release to its current target environment. Deployment is orchestrated by the platform and may include pre-deployment checks, artifact staging, and post-deployment validation. Returns 409 if gates have not been satisfied.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
deploy.WithName("Release_Deploy");
}
var rollback = group.MapPost("/{id}/rollback", Rollback)
.WithDescription("Initiate a rollback of the specified deployed release to the previous stable version in the current environment. The rollback is audited and creates a new release event. Returns 409 if the release is not in Deployed state or no prior stable version exists.")
.RequireAuthorization(JobEnginePolicies.ReleaseApprove);
if (includeRouteNames)
{
rollback.WithName("Release_Rollback");
}
var clone = group.MapPost("/{id}/clone", CloneRelease)
.WithDescription("Create a new release by copying the components, labels, and target environment from the specified source release, applying a new name and version. The cloned release starts in Draft state and is independent of the source.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
clone.WithName("Release_Clone");
}
var components = group.MapGet("/{releaseId}/components", GetComponents)
.WithDescription("Return the list of components registered in the specified release including their artifact references, versions, content digests, and current deployment status. Returns 404 when the release does not exist.");
if (includeRouteNames)
{
components.WithName("Release_GetComponents");
}
var addComponent = group.MapPost("/{releaseId}/components", AddComponent)
.WithDescription("Register a new component in the specified release, supplying the artifact reference and content digest. Components must be added before the release is marked Ready. Returns 409 if a component with the same name is already registered.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
addComponent.WithName("Release_AddComponent");
}
var updateComponent = group.MapPatch("/{releaseId}/components/{componentId}", UpdateComponent)
.WithDescription("Update the artifact reference, version, or content digest of the specified release component. Returns 404 when the component does not exist within the release or the release itself does not exist in the tenant.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
updateComponent.WithName("Release_UpdateComponent");
}
var removeComponent = group.MapDelete("/{releaseId}/components/{componentId}", RemoveComponent)
.WithDescription("Remove the specified component from the release. Only permitted when the release is in Draft state; returns 409 for releases that are Ready or beyond. Returns 404 when the component or release does not exist in the tenant.")
.RequireAuthorization(JobEnginePolicies.ReleaseWrite);
if (includeRouteNames)
{
removeComponent.WithName("Release_RemoveComponent");
}
var events = group.MapGet("/{releaseId}/events", GetEvents)
.WithDescription("Return the chronological event log for the specified release including status transitions, gate evaluations, approval decisions, deployment actions, and rollback events. Useful for audit trails and post-incident analysis.");
if (includeRouteNames)
{
events.WithName("Release_GetEvents");
}
var preview = group.MapGet("/{releaseId}/promotion-preview", GetPromotionPreview)
.WithDescription("Evaluate and return the gate check results for the specified release's next promotion without committing any state change. Returns the verdict for each configured policy gate so operators can assess promotion eligibility before triggering it.");
if (includeRouteNames)
{
preview.WithName("Release_PromotionPreview");
}
var targets = group.MapGet("/{releaseId}/available-environments", GetAvailableEnvironments)
.WithDescription("Return the list of environment targets that the specified release can be promoted to from its current state, based on the configured promotion pipeline and the caller's access rights. Returns 404 when the release does not exist.");
if (includeRouteNames)
{
targets.WithName("Release_AvailableEnvironments");
}
var activity = group.MapGet("/activity", ListActivity)
.WithDescription("Return a paginated feed of release activities across all releases, optionally filtered by environment, outcome, and time window.");
if (includeRouteNames)
{
activity.WithName("Release_Activity");
}
var versions = group.MapGet("/versions", ListVersions)
.WithDescription("Return a filtered list of release versions, optionally filtered by gate status.");
if (includeRouteNames)
{
versions.WithName("Release_Versions");
}
}
// ---- Handlers ----
private static IResult ListReleases(
[FromQuery] string? search,
[FromQuery] string? statuses,
[FromQuery] string? environment,
[FromQuery] string? sortField,
[FromQuery] string? sortOrder,
[FromQuery] int? page,
[FromQuery] int? pageSize)
{
var releases = SeedData.Releases.AsEnumerable();
if (!string.IsNullOrWhiteSpace(search))
{
releases = releases.Where(r =>
r.Name.Contains(search, StringComparison.OrdinalIgnoreCase) ||
r.Version.Contains(search, StringComparison.OrdinalIgnoreCase) ||
r.Description.Contains(search, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(statuses))
{
var statusList = statuses.Split(',', StringSplitOptions.RemoveEmptyEntries);
releases = releases.Where(r => statusList.Contains(r.Status, StringComparer.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(environment))
{
releases = releases.Where(r =>
string.Equals(r.CurrentEnvironment, environment, StringComparison.OrdinalIgnoreCase) ||
string.Equals(r.TargetEnvironment, environment, StringComparison.OrdinalIgnoreCase));
}
var sorted = (sortField?.ToLowerInvariant(), sortOrder?.ToLowerInvariant()) switch
{
("name", "asc") => releases.OrderBy(r => r.Name),
("name", _) => releases.OrderByDescending(r => r.Name),
("version", "asc") => releases.OrderBy(r => r.Version),
("version", _) => releases.OrderByDescending(r => r.Version),
("status", "asc") => releases.OrderBy(r => r.Status),
("status", _) => releases.OrderByDescending(r => r.Status),
(_, "asc") => releases.OrderBy(r => r.CreatedAt),
_ => releases.OrderByDescending(r => r.CreatedAt),
};
var all = sorted.ToList();
var effectivePage = Math.Max(page ?? 1, 1);
var effectivePageSize = Math.Clamp(pageSize ?? 20, 1, 100);
var items = all.Skip((effectivePage - 1) * effectivePageSize).Take(effectivePageSize).ToList();
return Results.Ok(new
{
items,
total = all.Count,
page = effectivePage,
pageSize = effectivePageSize,
});
}
private static IResult GetRelease(string id)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
return release is not null ? Results.Ok(release) : Results.NotFound();
}
private static IResult CreateRelease([FromBody] CreateReleaseDto request, [FromServices] TimeProvider time)
{
var now = time.GetUtcNow();
// When versionId is provided, seed metadata (description, target environment, component count, strategy) from that existing version
ManagedReleaseDto? sourceVersion = null;
if (!string.IsNullOrEmpty(request.VersionId))
{
sourceVersion = SeedData.Releases.FirstOrDefault(r => r.Id == request.VersionId);
}
var release = new ManagedReleaseDto
{
Id = $"rel-{Guid.NewGuid():N}"[..11],
Name = request.Name,
Version = request.Version,
Description = request.Description ?? sourceVersion?.Description ?? "",
Status = "draft",
CurrentEnvironment = null,
TargetEnvironment = request.TargetEnvironment ?? sourceVersion?.TargetEnvironment,
ComponentCount = sourceVersion?.ComponentCount ?? 0,
CreatedAt = now,
CreatedBy = "api",
UpdatedAt = now,
DeployedAt = null,
DeploymentStrategy = request.DeploymentStrategy ?? sourceVersion?.DeploymentStrategy ?? "rolling",
};
// Add the new release to the in-memory store so it appears in list queries
SeedData.Releases.Add(release);
return Results.Created($"/api/release-orchestrator/releases/{release.Id}", release);
}
private static IResult UpdateRelease(string id, [FromBody] UpdateReleaseDto request)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
return Results.Ok(release with
{
Name = request.Name ?? release.Name,
Description = request.Description ?? release.Description,
TargetEnvironment = request.TargetEnvironment ?? release.TargetEnvironment,
DeploymentStrategy = request.DeploymentStrategy ?? release.DeploymentStrategy,
UpdatedAt = DateTimeOffset.UtcNow,
});
}
private static IResult DeleteRelease(string id)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
// Contract: only draft or failed releases may be deleted; others conflict.
if (release.Status is not ("draft" or "failed")) return Results.Conflict();
SeedData.Releases.Remove(release);
return Results.NoContent();
}
private static IResult MarkReady(string id)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
return Results.Ok(release with { Status = "ready", UpdatedAt = DateTimeOffset.UtcNow });
}
private static IResult RequestPromotion(
string id,
[FromBody] PromoteDto request,
[FromServices] TimeProvider time)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
var targetEnvironment = ResolveTargetEnvironment(request);
var existing = ApprovalEndpoints.SeedData.Approvals
.Select(ApprovalEndpoints.WithDerivedSignals)
.FirstOrDefault(a =>
string.Equals(a.ReleaseId, id, StringComparison.OrdinalIgnoreCase) &&
string.Equals(a.TargetEnvironment, targetEnvironment, StringComparison.OrdinalIgnoreCase) &&
string.Equals(a.Status, "pending", StringComparison.OrdinalIgnoreCase));
if (existing is not null)
{
return Results.Ok(ApprovalEndpoints.ToSummary(existing));
}
var nextId = $"apr-{ApprovalEndpoints.SeedData.Approvals.Count + 1:000}";
var now = time.GetUtcNow().ToString("O");
var approval = ApprovalEndpoints.WithDerivedSignals(new ApprovalEndpoints.ApprovalDto
{
Id = nextId,
ReleaseId = release.Id,
ReleaseName = release.Name,
ReleaseVersion = release.Version,
SourceEnvironment = release.CurrentEnvironment ?? "staging",
TargetEnvironment = targetEnvironment,
RequestedBy = "release-orchestrator",
RequestedAt = now,
Urgency = request.Urgency ?? "normal",
Justification = string.IsNullOrWhiteSpace(request.Justification)
? $"Promotion requested for {release.Name} {release.Version}."
: request.Justification.Trim(),
Status = "pending",
CurrentApprovals = 0,
RequiredApprovals = 2,
GatesPassed = true,
ScheduledTime = request.ScheduledTime,
ExpiresAt = time.GetUtcNow().AddHours(48).ToString("O"),
GateResults = new List<ApprovalEndpoints.GateResultDto>
{
new()
{
GateId = "g-security",
GateName = "Security Snapshot",
Type = "security",
Status = "passed",
Message = "Critical reachable findings within policy threshold.",
Details = new Dictionary<string, object>(),
EvaluatedAt = now,
},
new()
{
GateId = "g-ops",
GateName = "Data Integrity",
Type = "quality",
Status = "warning",
Message = "Runtime ingest lag reduces confidence for production decisions.",
Details = new Dictionary<string, object>(),
EvaluatedAt = now,
},
},
ReleaseComponents = BuildReleaseComponents(release.Id),
});
ApprovalEndpoints.SeedData.Approvals.Add(approval);
return Results.Ok(ApprovalEndpoints.ToSummary(approval));
}
private static IResult Deploy(string id)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
var now = DateTimeOffset.UtcNow;
return Results.Ok(release with
{
Status = "deployed",
CurrentEnvironment = release.TargetEnvironment,
TargetEnvironment = null,
DeployedAt = now,
UpdatedAt = now,
});
}
private static IResult Rollback(string id)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
return Results.Ok(release with
{
Status = "rolled_back",
CurrentEnvironment = null,
UpdatedAt = DateTimeOffset.UtcNow,
});
}
private static IResult CloneRelease(string id, [FromBody] CloneReleaseDto request)
{
var release = SeedData.Releases.FirstOrDefault(r => r.Id == id);
if (release is null) return Results.NotFound();
var now = DateTimeOffset.UtcNow;
return Results.Ok(release with
{
Id = $"rel-{Guid.NewGuid():N}"[..11],
Name = request.Name,
Version = request.Version,
Status = "draft",
CurrentEnvironment = null,
TargetEnvironment = null,
CreatedAt = now,
UpdatedAt = now,
DeployedAt = null,
CreatedBy = "api",
});
}
private static IResult GetComponents(string releaseId)
{
if (!SeedData.Components.TryGetValue(releaseId, out var components))
return Results.Ok(Array.Empty<object>());
return Results.Ok(components);
}
private static IResult AddComponent(string releaseId, [FromBody] AddComponentDto request)
{
var component = new ReleaseComponentDto
{
Id = $"comp-{Guid.NewGuid():N}"[..12],
ReleaseId = releaseId,
Name = request.Name,
ImageRef = request.ImageRef,
Digest = request.Digest,
Tag = request.Tag,
Version = request.Version,
Type = request.Type,
ConfigOverrides = request.ConfigOverrides ?? new Dictionary<string, string>(),
};
return Results.Created($"/api/release-orchestrator/releases/{releaseId}/components/{component.Id}", component);
}
private static IResult UpdateComponent(string releaseId, string componentId, [FromBody] UpdateComponentDto request)
{
if (!SeedData.Components.TryGetValue(releaseId, out var components))
return Results.NotFound();
var comp = components.FirstOrDefault(c => c.Id == componentId);
if (comp is null) return Results.NotFound();
return Results.Ok(comp with { ConfigOverrides = request.ConfigOverrides ?? comp.ConfigOverrides });
}
private static IResult RemoveComponent(string releaseId, string componentId)
{
// Contract promises 404 for unknown components; validate before acknowledging.
var known = SeedData.Components.TryGetValue(releaseId, out var components)
&& components.Any(c => c.Id == componentId);
return known ? Results.NoContent() : Results.NotFound();
}
private static IResult GetEvents(string releaseId)
{
if (!SeedData.Events.TryGetValue(releaseId, out var events))
return Results.Ok(Array.Empty<object>());
return Results.Ok(events);
}
private static IResult GetPromotionPreview(string releaseId, [FromQuery] string? targetEnvironmentId)
{
var targetEnvironment = targetEnvironmentId switch { "env-production" => "production", "env-canary" => "canary", _ => "staging" };
var risk = ReleaseControlSignalCatalog.GetRiskSnapshot(releaseId, targetEnvironment);
var coverage = ReleaseControlSignalCatalog.GetCoverage(releaseId);
var ops = ReleaseControlSignalCatalog.GetOpsConfidence(targetEnvironment);
var manifestDigest = ResolveManifestDigest(releaseId);
return Results.Ok(new
{
releaseId,
releaseName = "Platform Release",
sourceEnvironment = "staging",
targetEnvironment,
manifestDigest,
riskSnapshot = risk,
reachabilityCoverage = coverage,
opsConfidence = ops,
gateResults = new[]
{
new { gateId = "g1", gateName = "Security Scan", type = "security", status = "passed", message = "No blocking vulnerabilities found", details = new Dictionary<string, object>(), evaluatedAt = PreviewEvaluatedAt },
new { gateId = "g2", gateName = "Policy Compliance", type = "policy", status = "passed", message = "All policies satisfied", details = new Dictionary<string, object>(), evaluatedAt = PreviewEvaluatedAt },
new { gateId = "g3", gateName = "Ops Data Integrity", type = "quality", status = ops.Status == "healthy" ? "passed" : "warning", message = ops.Summary, details = new Dictionary<string, object>(), evaluatedAt = PreviewEvaluatedAt },
},
allGatesPassed = true,
requiredApprovers = 2,
estimatedDeployTime = 300,
warnings = ops.Status == "healthy"
? Array.Empty<string>()
: new[] { "Data-integrity confidence is degraded; decision remains auditable but requires explicit acknowledgment." },
});
}
private static IResult GetAvailableEnvironments(string releaseId)
{
return Results.Ok(new[]
{
new { id = "env-staging", name = "Staging", tier = "staging", opsConfidence = ReleaseControlSignalCatalog.GetOpsConfidence("staging") },
new { id = "env-production", name = "Production", tier = "production", opsConfidence = ReleaseControlSignalCatalog.GetOpsConfidence("production") },
new { id = "env-canary", name = "Canary", tier = "production", opsConfidence = ReleaseControlSignalCatalog.GetOpsConfidence("canary") },
});
}
private static string ResolveTargetEnvironment(PromoteDto request)
{
if (!string.IsNullOrWhiteSpace(request.TargetEnvironment))
{
return request.TargetEnvironment.Trim().ToLowerInvariant();
}
return request.TargetEnvironmentId switch
{
"env-production" => "production",
"env-canary" => "canary",
_ => "staging",
};
}
private static string ResolveManifestDigest(string releaseId)
{
if (SeedData.Components.TryGetValue(releaseId, out var components) && components.Count > 0)
{
var digestSeed = string.Join('|', components.Select(component => component.Digest));
return $"sha256:{Convert.ToHexString(System.Security.Cryptography.SHA256.HashData(System.Text.Encoding.UTF8.GetBytes(digestSeed))).ToLowerInvariant()[..64]}";
}
return $"sha256:{releaseId.Replace("-", string.Empty, StringComparison.Ordinal).PadRight(64, '0')[..64]}";
}
private static List<ApprovalEndpoints.ReleaseComponentSummaryDto> BuildReleaseComponents(string releaseId)
{
if (!SeedData.Components.TryGetValue(releaseId, out var components))
{
return new List<ApprovalEndpoints.ReleaseComponentSummaryDto>();
}
return components
.OrderBy(component => component.Name, StringComparer.Ordinal)
.Select(component => new ApprovalEndpoints.ReleaseComponentSummaryDto
{
Name = component.Name,
Version = component.Version,
Digest = component.Digest,
})
.ToList();
}
// ---- DTOs ----
public sealed record ManagedReleaseDto
{
public required string Id { get; init; }
public required string Name { get; init; }
public required string Version { get; init; }
public required string Description { get; init; }
public required string Status { get; init; }
public string? CurrentEnvironment { get; init; }
public string? TargetEnvironment { get; init; }
public int ComponentCount { get; init; }
public DateTimeOffset CreatedAt { get; init; }
public string? CreatedBy { get; init; }
public DateTimeOffset UpdatedAt { get; init; }
public DateTimeOffset? DeployedAt { get; init; }
public string DeploymentStrategy { get; init; } = "rolling";
}
public sealed record ReleaseComponentDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string Name { get; init; }
public required string ImageRef { get; init; }
public required string Digest { get; init; }
public string? Tag { get; init; }
public required string Version { get; init; }
public required string Type { get; init; }
public Dictionary<string, string> ConfigOverrides { get; init; } = new();
}
public sealed record ReleaseEventDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string Type { get; init; }
public string? Environment { get; init; }
public required string Actor { get; init; }
public required string Message { get; init; }
public DateTimeOffset Timestamp { get; init; }
public Dictionary<string, object> Metadata { get; init; } = new();
}
public sealed record CreateReleaseDto
{
public required string Name { get; init; }
public required string Version { get; init; }
public string? VersionId { get; init; }
public string? Description { get; init; }
public string? TargetEnvironment { get; init; }
public string? DeploymentStrategy { get; init; }
}
public sealed record UpdateReleaseDto
{
public string? Name { get; init; }
public string? Description { get; init; }
public string? TargetEnvironment { get; init; }
public string? DeploymentStrategy { get; init; }
}
public sealed record PromoteDto
{
public string? TargetEnvironment { get; init; }
public string? TargetEnvironmentId { get; init; }
public string? Urgency { get; init; }
public string? Justification { get; init; }
public string? ScheduledTime { get; init; }
}
public sealed record CloneReleaseDto
{
public required string Name { get; init; }
public required string Version { get; init; }
}
public sealed record AddComponentDto
{
public required string Name { get; init; }
public required string ImageRef { get; init; }
public required string Digest { get; init; }
public string? Tag { get; init; }
public required string Version { get; init; }
public required string Type { get; init; }
public Dictionary<string, string>? ConfigOverrides { get; init; }
}
public sealed record UpdateComponentDto
{
public Dictionary<string, string>? ConfigOverrides { get; init; }
}
private static IResult ListActivity(
[FromQuery] string? environment,
[FromQuery] string? outcome,
[FromQuery] int? limit,
[FromQuery] string? releaseId)
{
var events = SeedData.Events.Values.SelectMany(e => e).AsEnumerable();
if (!string.IsNullOrWhiteSpace(environment))
events = events.Where(e => string.Equals(e.Environment, environment, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(outcome))
events = events.Where(e => string.Equals(e.Type, outcome, StringComparison.OrdinalIgnoreCase));
if (!string.IsNullOrWhiteSpace(releaseId))
events = events.Where(e => string.Equals(e.ReleaseId, releaseId, StringComparison.OrdinalIgnoreCase));
var sorted = events.OrderByDescending(e => e.Timestamp).ToList();
var items = limit > 0 ? sorted.Take(limit.Value).ToList() : sorted;
return Results.Ok(new { items, total = sorted.Count });
}
private static IResult ListVersions(
[FromQuery] string? gateStatus,
[FromQuery] int? limit)
{
var releases = SeedData.Releases.AsEnumerable();
if (!string.IsNullOrWhiteSpace(gateStatus))
{
// Map gate status to release status for filtering
releases = gateStatus.ToLowerInvariant() switch
{
"block" => releases.Where(r => r.Status is "failed" or "rolled_back"),
"pass" => releases.Where(r => r.Status is "ready" or "deployed"),
"warn" => releases.Where(r => r.Status is "deploying"),
_ => releases,
};
}
var sorted = releases.OrderByDescending(r => r.CreatedAt).ToList();
var items = limit > 0 ? sorted.Take(limit.Value).ToList() : sorted;
return Results.Ok(new { items, total = sorted.Count });
}
// ---- Seed Data ----
internal static class SeedData
{
public static readonly List<ManagedReleaseDto> Releases = new()
{
new() { Id = "rel-001", Name = "Platform Release", Version = "1.2.3", Description = "Feature release with API improvements and bug fixes", Status = "deployed", CurrentEnvironment = "production", TargetEnvironment = null, ComponentCount = 3, CreatedAt = DateTimeOffset.Parse("2026-01-10T08:00:00Z"), CreatedBy = "deploy-bot", UpdatedAt = DateTimeOffset.Parse("2026-01-11T14:30:00Z"), DeployedAt = DateTimeOffset.Parse("2026-01-11T14:30:00Z"), DeploymentStrategy = "rolling" },
new() { Id = "rel-002", Name = "Platform Release", Version = "1.3.0-rc1", Description = "Release candidate for next major version", Status = "ready", CurrentEnvironment = "staging", TargetEnvironment = "production", ComponentCount = 4, CreatedAt = DateTimeOffset.Parse("2026-01-11T10:00:00Z"), CreatedBy = "ci-pipeline", UpdatedAt = DateTimeOffset.Parse("2026-01-12T09:00:00Z"), DeploymentStrategy = "blue_green" },
new() { Id = "rel-003", Name = "Hotfix", Version = "1.2.4", Description = "Critical security patch", Status = "deploying", CurrentEnvironment = "staging", TargetEnvironment = "production", ComponentCount = 1, CreatedAt = DateTimeOffset.Parse("2026-01-12T06:00:00Z"), CreatedBy = "security-team", UpdatedAt = DateTimeOffset.Parse("2026-01-12T10:00:00Z"), DeploymentStrategy = "rolling" },
new() { Id = "rel-004", Name = "Feature Branch", Version = "2.0.0-alpha", Description = "New architecture preview", Status = "draft", TargetEnvironment = "dev", ComponentCount = 5, CreatedAt = DateTimeOffset.Parse("2026-01-08T15:00:00Z"), CreatedBy = "dev-team", UpdatedAt = DateTimeOffset.Parse("2026-01-10T11:00:00Z"), DeploymentStrategy = "recreate" },
new() { Id = "rel-005", Name = "Platform Release", Version = "1.2.2", Description = "Previous stable release", Status = "rolled_back", ComponentCount = 3, CreatedAt = DateTimeOffset.Parse("2026-01-05T12:00:00Z"), CreatedBy = "deploy-bot", UpdatedAt = DateTimeOffset.Parse("2026-01-10T08:00:00Z"), DeployedAt = DateTimeOffset.Parse("2026-01-06T10:00:00Z"), DeploymentStrategy = "rolling" },
};
public static readonly Dictionary<string, List<ReleaseComponentDto>> Components = new()
{
["rel-001"] = new()
{
new() { Id = "comp-001", ReleaseId = "rel-001", Name = "api-service", ImageRef = "registry.example.com/api-service", Digest = "sha256:abc123def456", Tag = "v1.2.3", Version = "1.2.3", Type = "container" },
new() { Id = "comp-002", ReleaseId = "rel-001", Name = "worker-service", ImageRef = "registry.example.com/worker-service", Digest = "sha256:def456abc789", Tag = "v1.2.3", Version = "1.2.3", Type = "container" },
new() { Id = "comp-003", ReleaseId = "rel-001", Name = "web-app", ImageRef = "registry.example.com/web-app", Digest = "sha256:789abc123def", Tag = "v1.2.3", Version = "1.2.3", Type = "container" },
},
["rel-002"] = new()
{
new() { Id = "comp-004", ReleaseId = "rel-002", Name = "api-service", ImageRef = "registry.example.com/api-service", Digest = "sha256:new123new456", Tag = "v1.3.0-rc1", Version = "1.3.0-rc1", Type = "container" },
new() { Id = "comp-005", ReleaseId = "rel-002", Name = "worker-service", ImageRef = "registry.example.com/worker-service", Digest = "sha256:new456new789", Tag = "v1.3.0-rc1", Version = "1.3.0-rc1", Type = "container" },
new() { Id = "comp-006", ReleaseId = "rel-002", Name = "web-app", ImageRef = "registry.example.com/web-app", Digest = "sha256:new789newabc", Tag = "v1.3.0-rc1", Version = "1.3.0-rc1", Type = "container" },
new() { Id = "comp-007", ReleaseId = "rel-002", Name = "migration", ImageRef = "registry.example.com/migration", Digest = "sha256:mig123mig456", Tag = "v1.3.0-rc1", Version = "1.3.0-rc1", Type = "script" },
},
};
public static readonly Dictionary<string, List<ReleaseEventDto>> Events = new()
{
["rel-001"] = new()
{
new() { Id = "evt-001", ReleaseId = "rel-001", Type = "created", Environment = null, Actor = "deploy-bot", Message = "Release created", Timestamp = DateTimeOffset.Parse("2026-01-10T08:00:00Z") },
new() { Id = "evt-002", ReleaseId = "rel-001", Type = "promoted", Environment = "dev", Actor = "deploy-bot", Message = "Promoted to dev", Timestamp = DateTimeOffset.Parse("2026-01-10T09:00:00Z") },
new() { Id = "evt-003", ReleaseId = "rel-001", Type = "deployed", Environment = "dev", Actor = "deploy-bot", Message = "Successfully deployed to dev", Timestamp = DateTimeOffset.Parse("2026-01-10T09:30:00Z") },
new() { Id = "evt-004", ReleaseId = "rel-001", Type = "approved", Environment = "staging", Actor = "qa-team", Message = "Approved for staging", Timestamp = DateTimeOffset.Parse("2026-01-10T14:00:00Z") },
new() { Id = "evt-005", ReleaseId = "rel-001", Type = "deployed", Environment = "staging", Actor = "deploy-bot", Message = "Successfully deployed to staging", Timestamp = DateTimeOffset.Parse("2026-01-10T14:30:00Z") },
new() { Id = "evt-006", ReleaseId = "rel-001", Type = "approved", Environment = "production", Actor = "release-manager", Message = "Approved for production", Timestamp = DateTimeOffset.Parse("2026-01-11T10:00:00Z") },
new() { Id = "evt-007", ReleaseId = "rel-001", Type = "deployed", Environment = "production", Actor = "deploy-bot", Message = "Successfully deployed to production", Timestamp = DateTimeOffset.Parse("2026-01-11T14:30:00Z") },
},
["rel-002"] = new()
{
new() { Id = "evt-008", ReleaseId = "rel-002", Type = "created", Environment = null, Actor = "ci-pipeline", Message = "Release created from CI", Timestamp = DateTimeOffset.Parse("2026-01-11T10:00:00Z") },
new() { Id = "evt-009", ReleaseId = "rel-002", Type = "deployed", Environment = "staging", Actor = "deploy-bot", Message = "Deployed to staging for testing", Timestamp = DateTimeOffset.Parse("2026-01-11T12:00:00Z") },
},
};
}
}
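`ResolveManifestDigest` above derives a stable digest by joining the ordered component digests with `|` and hashing the result. A minimal sketch of the same derivation (Python for illustration; the component digests shown are the hypothetical seed values from `SeedData`):

```python
import hashlib

def resolve_manifest_digest(component_digests):
    # Join component digests with '|' and SHA-256 the seed,
    # mirroring the C# ResolveManifestDigest above.
    seed = "|".join(component_digests)
    digest = hashlib.sha256(seed.encode("utf-8")).hexdigest()
    return f"sha256:{digest}"

print(resolve_manifest_digest(["sha256:abc123def456", "sha256:def456abc789"]))
```

Because the input is the sorted component list, the same release contents always yield the same manifest digest.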


@@ -1,185 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for runs (batch executions).
/// </summary>
public static class RunEndpoints
{
/// <summary>
/// Maps run endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapRunEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/runs")
.WithTags("Orchestrator Runs")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet(string.Empty, ListRuns)
.WithName("Orchestrator_ListRuns")
.WithDescription(_t("orchestrator.run.list_description"));
group.MapGet("{runId:guid}", GetRun)
.WithName("Orchestrator_GetRun")
.WithDescription(_t("orchestrator.run.get_description"));
group.MapGet("{runId:guid}/jobs", GetRunJobs)
.WithName("Orchestrator_GetRunJobs")
.WithDescription(_t("orchestrator.run.get_jobs_description"));
group.MapGet("{runId:guid}/summary", GetRunSummary)
.WithName("Orchestrator_GetRunSummary")
.WithDescription(_t("orchestrator.run.get_summary_description"));
return group;
}
private static async Task<IResult> ListRuns(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository repository,
[FromQuery] Guid? sourceId = null,
[FromQuery] string? runType = null,
[FromQuery] string? status = null,
[FromQuery] string? projectId = null,
[FromQuery] string? createdAfter = null,
[FromQuery] string? createdBefore = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var parsedStatus = EndpointHelpers.TryParseRunStatus(status);
var parsedCreatedAfter = EndpointHelpers.TryParseDateTimeOffset(createdAfter);
var parsedCreatedBefore = EndpointHelpers.TryParseDateTimeOffset(createdBefore);
var runs = await repository.ListAsync(
tenantId,
sourceId,
runType,
parsedStatus,
projectId,
parsedCreatedAfter,
parsedCreatedBefore,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = runs.Select(RunResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new RunListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetRun(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var run = await repository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
return Results.Ok(RunResponse.FromDomain(run));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetRunJobs(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IJobRepository jobRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
// Verify run exists
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
var jobs = await jobRepository.GetByRunIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
var responses = jobs.Select(JobResponse.FromDomain).ToList();
return Results.Ok(new JobListResponse(responses, null));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetRunSummary(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
return Results.NotFound();
}
// Return the aggregate counts from the run itself
var summary = new
{
runId = run.RunId,
status = run.Status.ToString().ToLowerInvariant(),
totalJobs = run.TotalJobs,
completedJobs = run.CompletedJobs,
succeededJobs = run.SucceededJobs,
failedJobs = run.FailedJobs,
pendingJobs = run.TotalJobs - run.CompletedJobs,
createdAt = run.CreatedAt,
startedAt = run.StartedAt,
completedAt = run.CompletedAt
};
return Results.Ok(summary);
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
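`ListRuns` above paginates with an opaque cursor via `EndpointHelpers.ParseCursorOffset` and `CreateNextCursor`. The exact encoding lives in `EndpointHelpers` (not shown here); a minimal sketch of the offset-cursor pattern, assuming the cursor is simply the stringified next offset:

```python
def next_cursor(offset, limit, returned):
    # Emit a cursor only when the page came back full,
    # i.e. more items may still exist beyond this page.
    if returned < limit:
        return None
    return str(offset + limit)

def parse_cursor(cursor):
    # Missing or malformed cursors fall back to offset 0.
    try:
        return max(0, int(cursor))
    except (TypeError, ValueError):
        return 0
```

A short page (fewer rows than `limit`) terminates iteration by returning no cursor, so clients can loop until `next_cursor` is `None`.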


@@ -1,250 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.JobEngine.Core.Scale;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Endpoints for autoscaling metrics and load shedding status.
/// </summary>
public static class ScaleEndpoints
{
/// <summary>
/// Maps scale endpoints to the route builder.
/// </summary>
public static IEndpointRouteBuilder MapScaleEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/scale")
.WithTags("Scaling")
.AllowAnonymous();
// Autoscaling metrics for KEDA/HPA
group.MapGet("/metrics", GetAutoscaleMetrics)
.WithName("Orchestrator_AutoscaleMetrics")
.WithDescription(_t("orchestrator.scale.metrics_description"));
// Prometheus-compatible metrics endpoint
group.MapGet("/metrics/prometheus", GetPrometheusMetrics)
.WithName("Orchestrator_PrometheusScaleMetrics")
.WithDescription(_t("orchestrator.scale.prometheus_description"));
// Load shedding status
group.MapGet("/load", GetLoadStatus)
.WithName("Orchestrator_LoadStatus")
.WithDescription(_t("orchestrator.scale.load_description"));
// Scale snapshot for debugging
group.MapGet("/snapshot", GetScaleSnapshot)
.WithName("Orchestrator_ScaleSnapshot")
.WithDescription(_t("orchestrator.scale.snapshot_description"));
// Startup probe (slower to pass, includes warmup check)
app.MapGet("/startupz", GetStartupStatus)
.WithName("Orchestrator_StartupProbe")
.WithTags("Health")
.WithDescription(_t("orchestrator.scale.startupz_description"))
.AllowAnonymous();
return app;
}
private static IResult GetAutoscaleMetrics(
[FromServices] ScaleMetrics scaleMetrics)
{
var metrics = scaleMetrics.GetAutoscaleMetrics();
return Results.Ok(metrics);
}
private static IResult GetPrometheusMetrics(
[FromServices] ScaleMetrics scaleMetrics,
[FromServices] LoadShedder loadShedder)
{
var metrics = scaleMetrics.GetAutoscaleMetrics();
var loadStatus = loadShedder.GetStatus();
// Format as Prometheus text exposition
var lines = new List<string>
{
"# HELP orchestrator_queue_depth Current number of pending jobs",
"# TYPE orchestrator_queue_depth gauge",
$"orchestrator_queue_depth {metrics.QueueDepth}",
"",
"# HELP orchestrator_active_jobs Current number of active jobs",
"# TYPE orchestrator_active_jobs gauge",
$"orchestrator_active_jobs {metrics.ActiveJobs}",
"",
"# HELP orchestrator_dispatch_latency_p95_ms P95 dispatch latency in milliseconds",
"# TYPE orchestrator_dispatch_latency_p95_ms gauge",
$"orchestrator_dispatch_latency_p95_ms {metrics.DispatchLatencyP95Ms:F2}",
"",
"# HELP orchestrator_dispatch_latency_p99_ms P99 dispatch latency in milliseconds",
"# TYPE orchestrator_dispatch_latency_p99_ms gauge",
$"orchestrator_dispatch_latency_p99_ms {metrics.DispatchLatencyP99Ms:F2}",
"",
"# HELP orchestrator_recommended_replicas Recommended replica count for autoscaling",
"# TYPE orchestrator_recommended_replicas gauge",
$"orchestrator_recommended_replicas {metrics.RecommendedReplicas}",
"",
"# HELP orchestrator_under_pressure Whether the system is under pressure (1=yes, 0=no)",
"# TYPE orchestrator_under_pressure gauge",
$"orchestrator_under_pressure {(metrics.IsUnderPressure ? 1 : 0)}",
"",
"# HELP orchestrator_load_factor Current load factor (1.0 = at target)",
"# TYPE orchestrator_load_factor gauge",
$"orchestrator_load_factor {loadStatus.LoadFactor:F3}",
"",
"# HELP orchestrator_load_shedding_state Current load shedding state (0=normal, 1=warning, 2=critical, 3=emergency)",
"# TYPE orchestrator_load_shedding_state gauge",
$"orchestrator_load_shedding_state {(int)loadStatus.State}",
"",
"# HELP orchestrator_scale_samples Number of latency samples in measurement window",
"# TYPE orchestrator_scale_samples gauge",
$"orchestrator_scale_samples {metrics.SamplesInWindow}"
};
return Results.Text(string.Join("\n", lines) + "\n", "text/plain; version=0.0.4");
}
private static IResult GetLoadStatus(
[FromServices] LoadShedder loadShedder)
{
var status = loadShedder.GetStatus();
return Results.Ok(status);
}
private static IResult GetScaleSnapshot(
[FromServices] ScaleMetrics scaleMetrics,
[FromServices] LoadShedder loadShedder)
{
var snapshot = scaleMetrics.GetSnapshot();
var loadStatus = loadShedder.GetStatus();
return Results.Ok(new
{
snapshot.Timestamp,
snapshot.TotalQueueDepth,
snapshot.TotalActiveJobs,
DispatchLatency = new
{
snapshot.DispatchLatency.Count,
snapshot.DispatchLatency.Min,
snapshot.DispatchLatency.Max,
snapshot.DispatchLatency.Avg,
snapshot.DispatchLatency.P50,
snapshot.DispatchLatency.P95,
snapshot.DispatchLatency.P99
},
LoadShedding = new
{
loadStatus.State,
loadStatus.LoadFactor,
loadStatus.IsSheddingLoad,
loadStatus.AcceptingPriority,
loadStatus.RecommendedDelayMs
},
QueueDepthByKey = snapshot.QueueDepthByKey,
ActiveJobsByKey = snapshot.ActiveJobsByKey
});
}
private static IResult GetStartupStatus(
[FromServices] ScaleMetrics scaleMetrics,
[FromServices] StartupProbe startupProbe)
{
if (!startupProbe.IsReady)
{
return Results.Json(new StartupResponse(
Status: "starting",
Ready: false,
UptimeSeconds: startupProbe.UptimeSeconds,
WarmupComplete: startupProbe.WarmupComplete,
Message: startupProbe.StatusMessage),
statusCode: StatusCodes.Status503ServiceUnavailable);
}
return Results.Ok(new StartupResponse(
Status: "started",
Ready: true,
UptimeSeconds: startupProbe.UptimeSeconds,
WarmupComplete: startupProbe.WarmupComplete,
Message: "Service is ready"));
}
}
/// <summary>
/// Startup probe response.
/// </summary>
public sealed record StartupResponse(
string Status,
bool Ready,
double UptimeSeconds,
bool WarmupComplete,
string Message);
/// <summary>
/// Startup probe service that tracks warmup status.
/// </summary>
public sealed class StartupProbe
{
private readonly DateTimeOffset _startTime = DateTimeOffset.UtcNow;
private readonly TimeSpan _minWarmupTime;
private volatile bool _warmupComplete;
private string _statusMessage = "Starting up";
public StartupProbe(TimeSpan? minWarmupTime = null)
{
_minWarmupTime = minWarmupTime ?? TimeSpan.FromSeconds(5);
}
/// <summary>
/// Gets whether the service is ready.
/// </summary>
public bool IsReady => WarmupComplete;
/// <summary>
/// Gets whether warmup has completed.
/// </summary>
public bool WarmupComplete
{
get
{
if (_warmupComplete) return true;
// Auto-complete warmup after minimum time
if (UptimeSeconds >= _minWarmupTime.TotalSeconds)
{
_warmupComplete = true;
_statusMessage = "Warmup complete";
}
return _warmupComplete;
}
}
/// <summary>
/// Gets the uptime in seconds.
/// </summary>
public double UptimeSeconds => (DateTimeOffset.UtcNow - _startTime).TotalSeconds;
/// <summary>
/// Gets the current status message.
/// </summary>
public string StatusMessage => _statusMessage;
/// <summary>
/// Marks warmup as complete.
/// </summary>
public void MarkWarmupComplete()
{
_warmupComplete = true;
_statusMessage = "Warmup complete";
}
/// <summary>
/// Updates the status message.
/// </summary>
public void SetStatus(string message)
{
_statusMessage = message;
}
}
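The `StartupProbe` above gates `/startupz` on a warmup window: the probe reports not-ready until either `MarkWarmupComplete` is called or a minimum uptime elapses. A minimal sketch of the same auto-completing warmup gate (Python for illustration):

```python
import time

class StartupProbe:
    # Mirrors the C# probe: warmup auto-completes once uptime
    # reaches the configured minimum, or when marked explicitly.
    def __init__(self, min_warmup_seconds=5.0):
        self._start = time.monotonic()
        self._min_warmup = min_warmup_seconds
        self._complete = False

    @property
    def uptime_seconds(self):
        return time.monotonic() - self._start

    @property
    def warmup_complete(self):
        # Lazily flip to complete the first time uptime crosses the threshold.
        if not self._complete and self.uptime_seconds >= self._min_warmup:
            self._complete = True
        return self._complete

    def mark_warmup_complete(self):
        self._complete = True
```

As in the C# version, readiness is checked lazily on each probe call rather than by a background timer, which keeps the probe allocation-free and safe to poll at high frequency.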


@@ -1,759 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.SloManagement;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for SLO management.
/// </summary>
public static class SloEndpoints
{
/// <summary>
/// Maps SLO endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapSloEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/slos")
.WithTags("Orchestrator SLOs")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
// SLO CRUD operations
group.MapGet(string.Empty, ListSlos)
.WithName("Orchestrator_ListSlos")
.WithDescription("Return a cursor-paginated list of Service Level Objectives defined for the calling tenant, optionally filtered by enabled state and job type. Each SLO record includes its target metric, threshold, evaluation window, and current enabled state.");
group.MapGet("{sloId:guid}", GetSlo)
.WithName("Orchestrator_GetSlo")
.WithDescription("Return the full definition of the specified SLO including its target metric type (success rate, p95 latency, throughput), threshold value, evaluation window, job type scope, and enabled state. Returns 404 when the SLO does not exist in the tenant.");
group.MapPost(string.Empty, CreateSlo)
.WithName("Orchestrator_CreateSlo")
.WithDescription("Create a new Service Level Objective for the calling tenant. The SLO is disabled by default and must be explicitly enabled. Specify the metric type, threshold, evaluation window, and the job type it governs.")
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPut("{sloId:guid}", UpdateSlo)
.WithName("Orchestrator_UpdateSlo")
.WithDescription("Update the definition of the specified SLO including threshold, evaluation window, and description. The SLO must be disabled before structural changes can be applied. Returns 404 when the SLO does not exist in the tenant.")
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapDelete("{sloId:guid}", DeleteSlo)
.WithName("Orchestrator_DeleteSlo")
.WithDescription("Permanently remove the specified SLO definition and all associated alert thresholds. Active alerts linked to this SLO are automatically resolved. Returns 404 when the SLO does not exist in the tenant.")
.RequireAuthorization(JobEnginePolicies.Operate);
// SLO state
group.MapGet("{sloId:guid}/state", GetSloState)
.WithName("Orchestrator_GetSloState")
.WithDescription("Return the current evaluation state of the specified SLO including the measured metric value, the computed burn rate relative to the threshold, and whether the SLO is currently in breach. Updated on each evaluation cycle.");
group.MapGet("states", GetAllSloStates)
.WithName("Orchestrator_GetAllSloStates")
.WithDescription("Return the current evaluation state for all enabled SLOs in the calling tenant in a single response. Useful for operations dashboards that need a snapshot of overall SLO health without polling each SLO individually.");
// SLO control
group.MapPost("{sloId:guid}/enable", EnableSlo)
.WithName("Orchestrator_EnableSlo")
.WithDescription("Activate the specified SLO so that it is included in evaluation cycles and can generate alerts when its threshold is breached. The SLO must be in a disabled state; enabling an already-active SLO is a no-op.")
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPost("{sloId:guid}/disable", DisableSlo)
.WithName("Orchestrator_DisableSlo")
.WithDescription("Deactivate the specified SLO, pausing evaluation and suppressing new alerts. Any active alerts are automatically acknowledged. The SLO definition is retained and can be re-enabled without data loss.")
.RequireAuthorization(JobEnginePolicies.Operate);
// Alert thresholds
group.MapGet("{sloId:guid}/thresholds", ListThresholds)
.WithName("Orchestrator_ListAlertThresholds")
.WithDescription("Return all alert thresholds configured for the specified SLO including their severity level, burn rate multiplier trigger, and notification channel references. Thresholds define the graduated alerting behaviour as an SLO degrades.");
group.MapPost("{sloId:guid}/thresholds", CreateThreshold)
.WithName("Orchestrator_CreateAlertThreshold")
.WithDescription("Add a new alert threshold to the specified SLO. Each threshold specifies a severity level and the burn rate or metric value at which the alert fires. Multiple thresholds at different severities implement graduated alerting.")
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapDelete("{sloId:guid}/thresholds/{thresholdId:guid}", DeleteThreshold)
.WithName("Orchestrator_DeleteAlertThreshold")
.WithDescription("Remove the specified alert threshold from its parent SLO. In-flight alerts generated by this threshold are not automatically resolved. Returns 404 when the threshold ID does not belong to the SLO in the calling tenant.")
.RequireAuthorization(JobEnginePolicies.Operate);
// Alerts
group.MapGet("alerts", ListAlerts)
.WithName("Orchestrator_ListSloAlerts")
.WithDescription("Return a paginated list of SLO alerts for the calling tenant, optionally filtered by SLO ID, severity, status (firing, acknowledged, resolved), and time window. Each alert record includes the SLO reference, breach value, and lifecycle timestamps.");
group.MapGet("alerts/{alertId:guid}", GetAlert)
.WithName("Orchestrator_GetSloAlert")
.WithDescription("Return the full alert record for the specified ID including the SLO reference, fired-at timestamp, breach metric value, current status, and the acknowledge/resolve attribution if applicable. Returns 404 when the alert does not exist in the tenant.");
group.MapPost("alerts/{alertId:guid}/acknowledge", AcknowledgeAlert)
.WithName("Orchestrator_AcknowledgeAlert")
.WithDescription("Acknowledge the specified SLO alert, recording the calling principal and timestamp. Acknowledgment suppresses repeat notifications for the breach period but does not resolve the alert; the SLO violation must be corrected for resolution.")
.RequireAuthorization(JobEnginePolicies.Operate);
group.MapPost("alerts/{alertId:guid}/resolve", ResolveAlert)
.WithName("Orchestrator_ResolveAlert")
.WithDescription("Mark the specified SLO alert as resolved, attributing the resolution to the calling principal. Resolved alerts are archived and excluded from active alert counts. Use when the underlying SLO breach has been addressed and the system is within threshold.")
.RequireAuthorization(JobEnginePolicies.Operate);
// Summary
group.MapGet("summary", GetSloSummary)
.WithName("Orchestrator_GetSloSummary")
.WithDescription("Return a tenant-wide SLO health summary including total SLO count, count of SLOs currently in breach, count of enabled SLOs, and the number of active (unresolved) alerts grouped by severity. Used for high-level service health dashboards.");
return group;
}
private static async Task<IResult> ListSlos(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromQuery] bool? enabled = null,
[FromQuery] string? jobType = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var slos = await repository.ListAsync(
tenantId,
enabledOnly: enabled ?? false,
jobType: jobType,
cancellationToken: cancellationToken).ConfigureAwait(false);
// Apply pagination manually since ListAsync doesn't support it directly
var paged = slos.Skip(offset).Take(effectiveLimit).ToList();
var responses = paged.Select(SloResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new SloListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetSlo(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var slo = await repository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
return Results.Ok(SloResponse.FromDomain(slo));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CreateSlo(
HttpContext context,
[FromBody] CreateSloRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
// Parse and validate type
if (!TryParseSloType(request.Type, out var sloType))
{
return Results.BadRequest(new { error = "Invalid SLO type. Must be 'availability', 'latency', or 'throughput'" });
}
// Parse and validate window
if (!TryParseSloWindow(request.Window, out var window))
{
return Results.BadRequest(new { error = "Invalid window. Must be '1h', '1d', '7d', or '30d'" });
}
// Create SLO based on type
Slo slo = sloType switch
{
SloType.Availability => Slo.CreateAvailability(
tenantId, request.Name, request.Target, window, actorId, now,
request.Description, request.JobType, request.SourceId),
SloType.Latency => Slo.CreateLatency(
tenantId, request.Name,
request.LatencyPercentile ?? 0.95,
request.LatencyTargetSeconds ?? 1.0,
request.Target, window, actorId, now,
request.Description, request.JobType, request.SourceId),
SloType.Throughput => Slo.CreateThroughput(
tenantId, request.Name,
request.ThroughputMinimum ?? 1,
request.Target, window, actorId, now,
request.Description, request.JobType, request.SourceId),
_ => throw new InvalidOperationException($"Unknown SLO type: {sloType}")
};
await repository.CreateAsync(slo, cancellationToken).ConfigureAwait(false);
return Results.Created($"/api/v1/jobengine/slos/{slo.SloId}", SloResponse.FromDomain(slo));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> UpdateSlo(
HttpContext context,
[FromRoute] Guid sloId,
[FromBody] UpdateSloRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var slo = await repository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var updated = slo.Update(
updatedAt: now,
name: request.Name,
description: request.Description,
target: request.Target,
enabled: request.Enabled,
updatedBy: actorId);
await repository.UpdateAsync(updated, cancellationToken).ConfigureAwait(false);
return Results.Ok(SloResponse.FromDomain(updated));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> DeleteSlo(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var deleted = await repository.DeleteAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (!deleted)
{
return Results.NotFound();
}
return Results.NoContent();
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetSloState(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] IBurnRateEngine burnRateEngine,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var slo = await repository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var state = await burnRateEngine.ComputeStateAsync(slo, cancellationToken).ConfigureAwait(false);
return Results.Ok(new SloWithStateResponse(
Slo: SloResponse.FromDomain(slo),
State: SloStateResponse.FromDomain(state)));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetAllSloStates(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] IBurnRateEngine burnRateEngine,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var states = await burnRateEngine.ComputeAllStatesAsync(tenantId, cancellationToken).ConfigureAwait(false);
var slos = await repository.ListAsync(tenantId, enabledOnly: true, cancellationToken: cancellationToken)
.ConfigureAwait(false);
var sloMap = slos.ToDictionary(s => s.SloId);
var responses = states
.Where(s => sloMap.ContainsKey(s.SloId))
.Select(s => new SloWithStateResponse(
Slo: SloResponse.FromDomain(sloMap[s.SloId]),
State: SloStateResponse.FromDomain(s)))
.ToList();
return Results.Ok(responses);
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> EnableSlo(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var slo = await repository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var enabled = slo.Enable(actorId, now);
await repository.UpdateAsync(enabled, cancellationToken).ConfigureAwait(false);
return Results.Ok(SloResponse.FromDomain(enabled));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> DisableSlo(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var slo = await repository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var disabled = slo.Disable(actorId, now);
await repository.UpdateAsync(disabled, cancellationToken).ConfigureAwait(false);
return Results.Ok(SloResponse.FromDomain(disabled));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ListThresholds(
HttpContext context,
[FromRoute] Guid sloId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository sloRepository,
[FromServices] IAlertThresholdRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var slo = await sloRepository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var thresholds = await repository.ListBySloAsync(sloId, cancellationToken).ConfigureAwait(false);
var responses = thresholds.Select(AlertThresholdResponse.FromDomain).ToList();
return Results.Ok(responses);
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> CreateThreshold(
HttpContext context,
[FromRoute] Guid sloId,
[FromBody] CreateAlertThresholdRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository sloRepository,
[FromServices] IAlertThresholdRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var actorId = context.User?.Identity?.Name ?? "system";
var now = timeProvider.GetUtcNow();
var slo = await sloRepository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
if (!TryParseAlertSeverity(request.Severity, out var severity))
{
return Results.BadRequest(new { error = "Invalid severity. Must be 'info', 'warning', 'critical', or 'emergency'" });
}
var threshold = AlertBudgetThreshold.Create(
sloId: sloId,
tenantId: tenantId,
budgetConsumedThreshold: request.BudgetConsumedThreshold,
severity: severity,
createdBy: actorId,
createdAt: now,
burnRateThreshold: request.BurnRateThreshold,
notificationChannel: request.NotificationChannel,
notificationEndpoint: request.NotificationEndpoint,
cooldown: request.CooldownMinutes.HasValue
? TimeSpan.FromMinutes(request.CooldownMinutes.Value)
: null);
await repository.CreateAsync(threshold, cancellationToken).ConfigureAwait(false);
return Results.Created(
$"/api/v1/jobengine/slos/{sloId}/thresholds/{threshold.ThresholdId}",
AlertThresholdResponse.FromDomain(threshold));
}
catch (ArgumentException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> DeleteThreshold(
HttpContext context,
[FromRoute] Guid sloId,
[FromRoute] Guid thresholdId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository sloRepository,
[FromServices] IAlertThresholdRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var slo = await sloRepository.GetByIdAsync(tenantId, sloId, cancellationToken).ConfigureAwait(false);
if (slo is null)
{
return Results.NotFound();
}
var deleted = await repository.DeleteAsync(tenantId, thresholdId, cancellationToken).ConfigureAwait(false);
if (!deleted)
{
return Results.NotFound();
}
return Results.NoContent();
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ListAlerts(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloAlertRepository repository,
[FromQuery] Guid? sloId = null,
[FromQuery] bool? acknowledged = null,
[FromQuery] bool? resolved = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var alerts = await repository.ListAsync(
tenantId, sloId, acknowledged, resolved, effectiveLimit, offset, cancellationToken)
.ConfigureAwait(false);
var responses = alerts.Select(SloAlertResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new SloAlertListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetAlert(
HttpContext context,
[FromRoute] Guid alertId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloAlertRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var alert = await repository.GetByIdAsync(tenantId, alertId, cancellationToken).ConfigureAwait(false);
if (alert is null)
{
return Results.NotFound();
}
return Results.Ok(SloAlertResponse.FromDomain(alert));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> AcknowledgeAlert(
HttpContext context,
[FromRoute] Guid alertId,
[FromBody] AcknowledgeAlertRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloAlertRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var alert = await repository.GetByIdAsync(tenantId, alertId, cancellationToken).ConfigureAwait(false);
if (alert is null)
{
return Results.NotFound();
}
if (alert.IsAcknowledged)
{
return Results.BadRequest(new { error = "Alert is already acknowledged" });
}
var acknowledged = alert.Acknowledge(request.AcknowledgedBy, timeProvider.GetUtcNow());
await repository.UpdateAsync(acknowledged, cancellationToken).ConfigureAwait(false);
return Results.Ok(SloAlertResponse.FromDomain(acknowledged));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> ResolveAlert(
HttpContext context,
[FromRoute] Guid alertId,
[FromBody] ResolveAlertRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloAlertRepository repository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var alert = await repository.GetByIdAsync(tenantId, alertId, cancellationToken).ConfigureAwait(false);
if (alert is null)
{
return Results.NotFound();
}
if (alert.IsResolved)
{
return Results.BadRequest(new { error = "Alert is already resolved" });
}
var resolved = alert.Resolve(request.ResolutionNotes, timeProvider.GetUtcNow());
await repository.UpdateAsync(resolved, cancellationToken).ConfigureAwait(false);
return Results.Ok(SloAlertResponse.FromDomain(resolved));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetSloSummary(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISloRepository sloRepository,
[FromServices] ISloAlertRepository alertRepository,
[FromServices] IBurnRateEngine burnRateEngine,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var slos = await sloRepository.ListAsync(tenantId, enabledOnly: false, cancellationToken: cancellationToken)
.ConfigureAwait(false);
var enabledSlos = slos.Where(s => s.Enabled).ToList();
var states = await burnRateEngine.ComputeAllStatesAsync(tenantId, cancellationToken).ConfigureAwait(false);
var activeAlertCount = await alertRepository.GetActiveAlertCountAsync(tenantId, cancellationToken)
.ConfigureAwait(false);
var alerts = await alertRepository.ListAsync(tenantId, null, false, false, 100, 0, cancellationToken)
.ConfigureAwait(false);
var unacknowledgedAlerts = alerts.Count(a => !a.IsAcknowledged && !a.IsResolved);
var criticalAlerts = alerts.Count(a => !a.IsResolved &&
(a.Severity == AlertSeverity.Critical || a.Severity == AlertSeverity.Emergency));
// Find SLOs at risk (budget consumed > 50% or burn rate > 2x)
var sloMap = enabledSlos.ToDictionary(s => s.SloId);
var slosAtRisk = states
.Where(s => sloMap.ContainsKey(s.SloId) && (s.BudgetConsumed >= 0.5 || s.BurnRate >= 2.0))
.OrderByDescending(s => s.BudgetConsumed)
.Take(10)
.Select(s => new SloWithStateResponse(
Slo: SloResponse.FromDomain(sloMap[s.SloId]),
State: SloStateResponse.FromDomain(s)))
.ToList();
return Results.Ok(new SloSummaryResponse(
TotalSlos: slos.Count,
EnabledSlos: enabledSlos.Count,
ActiveAlerts: activeAlertCount,
UnacknowledgedAlerts: unacknowledgedAlerts,
CriticalAlerts: criticalAlerts,
SlosAtRisk: slosAtRisk));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static bool TryParseSloType(string value, out SloType type)
{
// Normalize once instead of lower-casing the input twice.
switch (value.ToLowerInvariant())
{
case "availability": type = SloType.Availability; return true;
case "latency": type = SloType.Latency; return true;
case "throughput": type = SloType.Throughput; return true;
default: type = default; return false;
}
}
private static bool TryParseSloWindow(string value, out SloWindow window)
{
switch (value.ToLowerInvariant())
{
case "1h" or "one_hour": window = SloWindow.OneHour; return true;
case "1d" or "one_day": window = SloWindow.OneDay; return true;
case "7d" or "seven_days": window = SloWindow.SevenDays; return true;
case "30d" or "thirty_days": window = SloWindow.ThirtyDays; return true;
default: window = default; return false;
}
}
private static bool TryParseAlertSeverity(string value, out AlertSeverity severity)
{
switch (value.ToLowerInvariant())
{
case "info": severity = AlertSeverity.Info; return true;
case "warning": severity = AlertSeverity.Warning; return true;
case "critical": severity = AlertSeverity.Critical; return true;
case "emergency": severity = AlertSeverity.Emergency; return true;
default: severity = default; return false;
}
}
}
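
GetSloSummary above flags SLOs "at risk" when budget consumed reaches 50% or the burn rate reaches 2x. A minimal sketch of the standard SRE error-budget arithmetic behind those thresholds (the actual IBurnRateEngine implementation is not shown in this diff, so these are the conventional definitions, not necessarily its exact internals):

```python
# Hedged sketch of SRE-style error-budget math behind the "at risk" filter
# in GetSloSummary (BudgetConsumed >= 0.5 or BurnRate >= 2.0). Field names
# and formulas are illustrative; the real engine is not part of this diff.

def budget_consumed(failed: int, total: int, target: float) -> float:
    """Fraction of the error budget spent over the SLO window."""
    if total == 0:
        return 0.0
    allowed_failures = (1.0 - target) * total  # budget in absolute failures
    if allowed_failures == 0:
        return 1.0 if failed > 0 else 0.0
    return failed / allowed_failures

def burn_rate(failed: int, total: int, target: float) -> float:
    """How many times faster than sustainable the budget is burning;
    1.0 means the budget lasts exactly one SLO window."""
    if total == 0:
        return 0.0
    allowed_error_rate = 1.0 - target
    if allowed_error_rate == 0:
        return float("inf") if failed > 0 else 0.0
    return (failed / total) / allowed_error_rate

def at_risk(failed: int, total: int, target: float) -> bool:
    # Mirrors the endpoint's thresholds: >= 50% budget spent or >= 2x burn.
    return (budget_consumed(failed, total, target) >= 0.5
            or burn_rate(failed, total, target) >= 2.0)
```

With a 50% availability target over 10 requests, 5 failures consume the whole budget (`budget_consumed == 1.0`) and burn at exactly the sustainable rate (`burn_rate == 1.0`), so the SLO is reported at risk.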


@@ -1,94 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// REST API endpoints for job sources.
/// </summary>
public static class SourceEndpoints
{
/// <summary>
/// Maps source endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapSourceEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/sources")
.WithTags("Orchestrator Sources")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet(string.Empty, ListSources)
.WithName("Orchestrator_ListSources")
.WithDescription("Return a cursor-paginated list of job sources registered for the calling tenant, optionally filtered by source type and enabled state. Sources represent the external integrations or internal triggers that produce jobs for the orchestrator.");
group.MapGet("{sourceId:guid}", GetSource)
.WithName("Orchestrator_GetSource")
.WithDescription("Return the configuration and status record for a single job source identified by its GUID. Returns 404 when no source with that ID exists in the tenant.");
return group;
}
private static async Task<IResult> ListSources(
HttpContext context,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISourceRepository repository,
[FromQuery] string? sourceType = null,
[FromQuery] bool? enabled = null,
[FromQuery] int? limit = null,
[FromQuery] string? cursor = null,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var effectiveLimit = EndpointHelpers.GetLimit(limit);
var offset = EndpointHelpers.ParseCursorOffset(cursor);
var sources = await repository.ListAsync(
tenantId,
sourceType,
enabled,
effectiveLimit,
offset,
cancellationToken).ConfigureAwait(false);
var responses = sources.Select(SourceResponse.FromDomain).ToList();
var nextCursor = EndpointHelpers.CreateNextCursor(offset, effectiveLimit, responses.Count);
return Results.Ok(new SourceListResponse(responses, nextCursor));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
private static async Task<IResult> GetSource(
HttpContext context,
[FromRoute] Guid sourceId,
[FromServices] TenantResolver tenantResolver,
[FromServices] ISourceRepository repository,
CancellationToken cancellationToken = default)
{
try
{
var tenantId = tenantResolver.Resolve(context);
var source = await repository.GetByIdAsync(tenantId, sourceId, cancellationToken).ConfigureAwait(false);
if (source is null)
{
return Results.NotFound();
}
return Results.Ok(SourceResponse.FromDomain(source));
}
catch (InvalidOperationException ex)
{
return Results.BadRequest(new { error = ex.Message });
}
}
}
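
The list endpoints above delegate paging to EndpointHelpers: the cursor carries an offset, and CreateNextCursor returns null once a page comes back short. That contract can be modeled as follows (the helper's real encoding is not shown in this diff; the base64 wrapping is purely illustrative):

```python
# Hedged model of the cursor contract visible in ListSources/ListSlos:
# ParseCursorOffset turns an opaque cursor back into a numeric offset, and
# CreateNextCursor only issues a cursor while full pages keep coming back.
# EndpointHelpers' actual encoding is not part of this diff.
import base64

def parse_cursor_offset(cursor):
    """Decode an opaque cursor to its offset; no cursor means offset 0."""
    if not cursor:
        return 0
    return int(base64.urlsafe_b64decode(cursor.encode()).decode())

def create_next_cursor(offset, limit, returned):
    """Issue the next page's cursor, or None when the listing is exhausted."""
    if returned < limit:
        return None  # short page: nothing left to fetch
    return base64.urlsafe_b64encode(str(offset + limit).encode()).decode()
```

A client simply echoes the returned cursor on the next request; a null nextCursor in the response body signals the final page.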


@@ -1,177 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Services;
using StellaOps.JobEngine.WebService.Streaming;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Server-Sent Events streaming endpoints for real-time updates.
/// </summary>
public static class StreamEndpoints
{
/// <summary>
/// Maps stream endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapStreamEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/stream")
.WithTags("Orchestrator Streams")
.RequireAuthorization(JobEnginePolicies.Read)
.RequireTenant();
group.MapGet("jobs/{jobId:guid}", StreamJob)
.WithName("Orchestrator_StreamJob")
.WithDescription("Open a Server-Sent Events (SSE) stream delivering real-time status change events for the specified job. The stream closes when the job reaches a terminal state (Succeeded, Failed, Canceled, TimedOut) or the client disconnects. Returns 404 if the job does not exist.");
group.MapGet("runs/{runId:guid}", StreamRun)
.WithName("Orchestrator_StreamRun")
.WithDescription("Open a Server-Sent Events (SSE) stream delivering real-time run progress events including individual job status changes and aggregate counters. The stream closes when all jobs in the run reach terminal states or the client disconnects.");
group.MapGet("pack-runs/{packRunId:guid}", StreamPackRun)
.WithName("Orchestrator_StreamPackRun")
.WithDescription("Open a Server-Sent Events (SSE) stream delivering real-time log lines and status transitions for the specified pack run. Log lines are emitted in append order; the stream closes when the pack run completes or is canceled.");
group.MapGet("pack-runs/{packRunId:guid}/ws", StreamPackRunWebSocket)
.WithName("Orchestrator_StreamPackRunWebSocket")
.WithDescription("Establish a WebSocket connection for real-time log and status streaming of the specified pack run. Functionally equivalent to the SSE endpoint but uses the WebSocket protocol for environments where SSE is not supported. Requires an HTTP upgrade handshake.");
return group;
}
private static async Task StreamJob(
HttpContext context,
[FromRoute] Guid jobId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository jobRepository,
[FromServices] IJobStreamCoordinator streamCoordinator,
CancellationToken cancellationToken)
{
try
{
var tenantId = tenantResolver.ResolveForStreaming(context);
var job = await jobRepository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
context.Response.StatusCode = StatusCodes.Status404NotFound;
await context.Response.WriteAsJsonAsync(new { error = "Job not found" }, cancellationToken).ConfigureAwait(false);
return;
}
await streamCoordinator.StreamAsync(context, tenantId, job, cancellationToken).ConfigureAwait(false);
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
// Client disconnected
}
catch (InvalidOperationException ex)
{
if (!context.Response.HasStarted)
{
context.Response.StatusCode = StatusCodes.Status400BadRequest;
await context.Response.WriteAsJsonAsync(new { error = ex.Message }, cancellationToken).ConfigureAwait(false);
}
}
}
private static async Task StreamRun(
HttpContext context,
[FromRoute] Guid runId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IRunRepository runRepository,
[FromServices] IRunStreamCoordinator streamCoordinator,
CancellationToken cancellationToken)
{
try
{
var tenantId = tenantResolver.ResolveForStreaming(context);
var run = await runRepository.GetByIdAsync(tenantId, runId, cancellationToken).ConfigureAwait(false);
if (run is null)
{
context.Response.StatusCode = StatusCodes.Status404NotFound;
await context.Response.WriteAsJsonAsync(new { error = "Run not found" }, cancellationToken).ConfigureAwait(false);
return;
}
await streamCoordinator.StreamAsync(context, tenantId, run, cancellationToken).ConfigureAwait(false);
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
// Client disconnected
}
catch (InvalidOperationException ex)
{
if (!context.Response.HasStarted)
{
context.Response.StatusCode = StatusCodes.Status400BadRequest;
await context.Response.WriteAsJsonAsync(new { error = ex.Message }, cancellationToken).ConfigureAwait(false);
}
}
}
private static async Task StreamPackRun(
HttpContext context,
[FromRoute] Guid packRunId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunStreamCoordinator streamCoordinator,
CancellationToken cancellationToken)
{
try
{
var tenantId = tenantResolver.ResolveForStreaming(context);
var packRun = await packRunRepository.GetByIdAsync(tenantId, packRunId, cancellationToken).ConfigureAwait(false);
if (packRun is null)
{
context.Response.StatusCode = StatusCodes.Status404NotFound;
await context.Response.WriteAsJsonAsync(new { error = "Pack run not found" }, cancellationToken).ConfigureAwait(false);
return;
}
await streamCoordinator.StreamAsync(context, tenantId, packRun, cancellationToken).ConfigureAwait(false);
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
// Client disconnected
}
catch (InvalidOperationException ex)
{
if (!context.Response.HasStarted)
{
context.Response.StatusCode = StatusCodes.Status400BadRequest;
await context.Response.WriteAsJsonAsync(new { error = ex.Message }, cancellationToken).ConfigureAwait(false);
}
}
}
private static async Task StreamPackRunWebSocket(
HttpContext context,
[FromRoute] Guid packRunId,
[FromServices] TenantResolver tenantResolver,
[FromServices] IPackRunRepository packRunRepository,
[FromServices] IPackRunStreamCoordinator streamCoordinator,
CancellationToken cancellationToken)
{
if (!context.WebSockets.IsWebSocketRequest)
{
context.Response.StatusCode = StatusCodes.Status400BadRequest;
await context.Response.WriteAsJsonAsync(new { error = "Expected WebSocket request" }, cancellationToken).ConfigureAwait(false);
return;
}
var tenantId = tenantResolver.ResolveForStreaming(context);
var packRun = await packRunRepository.GetByIdAsync(tenantId, packRunId, cancellationToken).ConfigureAwait(false);
if (packRun is null)
{
context.Response.StatusCode = StatusCodes.Status404NotFound;
await context.Response.WriteAsJsonAsync(new { error = "Pack run not found" }, cancellationToken).ConfigureAwait(false);
return;
}
using var socket = await context.WebSockets.AcceptWebSocketAsync().ConfigureAwait(false);
await streamCoordinator.StreamWebSocketAsync(socket, tenantId, packRun, cancellationToken).ConfigureAwait(false);
}
}
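
The SSE endpoints above ultimately write events in the standard text/event-stream framing. A small sketch of that wire format (the event names and payload fields are illustrative; the stream coordinators' actual schema is not shown in this diff):

```python
# Sketch of standard text/event-stream framing as emitted by SSE endpoints
# like StreamJob/StreamRun/StreamPackRun. Event names and payload fields are
# illustrative, not the coordinators' real schema.
import json

def format_sse_event(event, data, event_id=None):
    """Serialize one SSE event: optional id, event name, JSON data,
    terminated by a blank line per the text/event-stream format."""
    lines = []
    if event_id is not None:
        lines.append(f"id: {event_id}")
    lines.append(f"event: {event}")
    # Each payload line gets its own 'data:' field; compact JSON keeps it
    # to a single line.
    for chunk in json.dumps(data, separators=(",", ":")).splitlines():
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"  # blank line terminates the event
```

Supplying an `id:` field lets reconnecting clients resume via the `Last-Event-ID` request header, which is why streaming coordinators commonly number their events.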


@@ -1,374 +0,0 @@
using Microsoft.AspNetCore.Mvc;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure;
using StellaOps.JobEngine.Infrastructure.Repositories;
using StellaOps.JobEngine.WebService.Contracts;
using StellaOps.JobEngine.WebService.Services;
using static StellaOps.Localization.T;
namespace StellaOps.JobEngine.WebService.Endpoints;
/// <summary>
/// Worker endpoints for job claim, heartbeat, progress, and completion.
/// </summary>
public static class WorkerEndpoints
{
private const int DefaultLeaseSeconds = 300; // 5 minutes
private const int MaxLeaseSeconds = 3600; // 1 hour
private const int DefaultExtendSeconds = 300;
private const int MaxExtendSeconds = 1800; // 30 minutes
/// <summary>
/// Maps worker endpoints to the route builder.
/// </summary>
public static RouteGroupBuilder MapWorkerEndpoints(this IEndpointRouteBuilder app)
{
var group = app.MapGroup("/api/v1/jobengine/worker")
.WithTags("Orchestrator Workers")
.RequireAuthorization(JobEnginePolicies.Operate)
.RequireTenant();
group.MapPost("claim", ClaimJob)
.WithName("Orchestrator_ClaimJob")
.WithDescription(_t("orchestrator.worker.claim_description"));
group.MapPost("jobs/{jobId:guid}/heartbeat", Heartbeat)
.WithName("Orchestrator_Heartbeat")
.WithDescription(_t("orchestrator.worker.heartbeat_description"));
group.MapPost("jobs/{jobId:guid}/progress", ReportProgress)
.WithName("Orchestrator_ReportProgress")
.WithDescription(_t("orchestrator.worker.progress_description"));
group.MapPost("jobs/{jobId:guid}/complete", CompleteJob)
.WithName("Orchestrator_CompleteJob")
.WithDescription(_t("orchestrator.worker.complete_description"));
return group;
}
private static async Task<IResult> ClaimJob(
HttpContext context,
[FromBody] ClaimRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository jobRepository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
// Validate request
if (string.IsNullOrWhiteSpace(request.WorkerId))
{
return Results.BadRequest(new WorkerErrorResponse(
"invalid_request",
_t("orchestrator.worker.error.worker_id_required"),
null,
null));
}
var tenantId = tenantResolver.Resolve(context);
// Idempotency check - if idempotency key provided, check for existing claim
if (!string.IsNullOrEmpty(request.IdempotencyKey))
{
var existingJob = await jobRepository.GetByIdempotencyKeyAsync(
tenantId, $"claim:{request.IdempotencyKey}", cancellationToken).ConfigureAwait(false);
if (existingJob is not null && existingJob.Status == JobStatus.Leased &&
existingJob.WorkerId == request.WorkerId)
{
// Return the existing claim
return Results.Ok(CreateClaimResponse(existingJob));
}
}
// Calculate lease duration
var leaseSeconds = Math.Min(request.LeaseSeconds ?? DefaultLeaseSeconds, MaxLeaseSeconds);
var now = timeProvider.GetUtcNow();
var leaseUntil = now.AddSeconds(leaseSeconds);
var leaseId = Guid.NewGuid();
// Try to acquire a job
var job = await jobRepository.LeaseNextAsync(
tenantId,
request.JobType,
leaseId,
request.WorkerId,
leaseUntil,
cancellationToken).ConfigureAwait(false);
if (job is null)
{
// A 204 response must not carry a body (Kestrel rejects it); convey the
// retry hint via the Retry-After header instead.
context.Response.Headers["Retry-After"] = "5";
return Results.NoContent();
}
// Update task runner ID if provided
if (!string.IsNullOrEmpty(request.TaskRunnerId) && job.TaskRunnerId != request.TaskRunnerId)
{
await jobRepository.UpdateStatusAsync(
tenantId,
job.JobId,
job.Status,
job.Attempt,
job.LeaseId,
job.WorkerId,
request.TaskRunnerId,
job.LeaseUntil,
job.ScheduledAt,
job.LeasedAt,
job.CompletedAt,
job.NotBefore,
job.Reason,
cancellationToken).ConfigureAwait(false);
job = job with { TaskRunnerId = request.TaskRunnerId };
}
JobEngineMetrics.JobLeased(tenantId, job.JobType);
return Results.Ok(CreateClaimResponse(job));
}
private static async Task<IResult> Heartbeat(
HttpContext context,
[FromRoute] Guid jobId,
[FromBody] HeartbeatRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository jobRepository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
// Get current job
var job = await jobRepository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound(new WorkerErrorResponse(
"job_not_found",
$"Job {jobId} not found",
jobId,
null));
}
// Verify lease ownership
if (job.LeaseId != request.LeaseId)
{
return Results.Json(
new WorkerErrorResponse("invalid_lease", "Lease ID does not match", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
if (job.Status != JobStatus.Leased)
{
return Results.Json(
new WorkerErrorResponse("invalid_status", $"Job is not in leased status: {job.Status}", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
// Calculate extension
var extendSeconds = Math.Min(request.ExtendSeconds ?? DefaultExtendSeconds, MaxExtendSeconds);
var now = timeProvider.GetUtcNow();
var newLeaseUntil = now.AddSeconds(extendSeconds);
// Extend the lease
var extended = await jobRepository.ExtendLeaseAsync(
tenantId, jobId, request.LeaseId, newLeaseUntil, cancellationToken).ConfigureAwait(false);
if (!extended)
{
return Results.Json(
new WorkerErrorResponse("lease_expired", "Lease has expired and cannot be extended", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
JobEngineMetrics.LeaseExtended(tenantId, job.JobType);
JobEngineMetrics.HeartbeatReceived(tenantId, job.JobType);
return Results.Ok(new HeartbeatResponse(
jobId,
request.LeaseId,
newLeaseUntil,
Acknowledged: true));
}
private static async Task<IResult> ReportProgress(
HttpContext context,
[FromRoute] Guid jobId,
[FromBody] ProgressRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository jobRepository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
// Get current job
var job = await jobRepository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound(new WorkerErrorResponse(
"job_not_found",
$"Job {jobId} not found",
jobId,
null));
}
// Verify lease ownership
if (job.LeaseId != request.LeaseId)
{
return Results.Json(
new WorkerErrorResponse("invalid_lease", "Lease ID does not match", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
if (job.Status != JobStatus.Leased)
{
return Results.Json(
new WorkerErrorResponse("invalid_status", $"Job is not in leased status: {job.Status}", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
// Validate progress percentage
if (request.ProgressPercent.HasValue && (request.ProgressPercent.Value < 0 || request.ProgressPercent.Value > 100))
{
return Results.BadRequest(new WorkerErrorResponse(
"invalid_progress",
"Progress percentage must be between 0 and 100",
jobId,
null));
}
// Progress is currently surfaced only via metrics/events; per-job persistence is a future enhancement
JobEngineMetrics.ProgressReported(tenantId, job.JobType);
return Results.Ok(new ProgressResponse(
jobId,
Acknowledged: true,
LeaseUntil: job.LeaseUntil ?? timeProvider.GetUtcNow()));
}
private static async Task<IResult> CompleteJob(
HttpContext context,
[FromRoute] Guid jobId,
[FromBody] CompleteRequest request,
[FromServices] TenantResolver tenantResolver,
[FromServices] IJobRepository jobRepository,
[FromServices] IArtifactRepository artifactRepository,
[FromServices] IRunRepository runRepository,
[FromServices] TimeProvider timeProvider,
CancellationToken cancellationToken)
{
var tenantId = tenantResolver.Resolve(context);
// Get current job
var job = await jobRepository.GetByIdAsync(tenantId, jobId, cancellationToken).ConfigureAwait(false);
if (job is null)
{
return Results.NotFound(new WorkerErrorResponse(
"job_not_found",
$"Job {jobId} not found",
jobId,
null));
}
// Verify lease ownership
if (job.LeaseId != request.LeaseId)
{
return Results.Json(
new WorkerErrorResponse("invalid_lease", "Lease ID does not match", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
if (job.Status != JobStatus.Leased)
{
return Results.Json(
new WorkerErrorResponse("invalid_status", $"Job is not in leased status: {job.Status}", jobId, null),
statusCode: StatusCodes.Status409Conflict);
}
var now = timeProvider.GetUtcNow();
var newStatus = request.Success ? JobStatus.Succeeded : JobStatus.Failed;
// Create artifacts if provided
var artifactIds = new List<Guid>();
if (request.Artifacts is { Count: > 0 })
{
var artifacts = request.Artifacts.Select(a => new Artifact(
ArtifactId: Guid.NewGuid(),
TenantId: tenantId,
JobId: jobId,
RunId: job.RunId,
ArtifactType: a.ArtifactType,
Uri: a.Uri,
Digest: a.Digest,
MimeType: a.MimeType,
SizeBytes: a.SizeBytes,
CreatedAt: now,
Metadata: a.Metadata)).ToList();
await artifactRepository.CreateBatchAsync(artifacts, cancellationToken).ConfigureAwait(false);
artifactIds.AddRange(artifacts.Select(a => a.ArtifactId));
}
// Update job status
await jobRepository.UpdateStatusAsync(
tenantId,
jobId,
newStatus,
job.Attempt,
null, // Clear lease
null, // Clear worker
null, // Clear task runner
null, // Clear lease until
job.ScheduledAt,
job.LeasedAt,
now, // Set completed at
job.NotBefore,
request.Reason,
cancellationToken).ConfigureAwait(false);
// Update run counts if job belongs to a run
if (job.RunId.HasValue)
{
await runRepository.IncrementJobCountsAsync(
tenantId, job.RunId.Value, request.Success, cancellationToken).ConfigureAwait(false);
}
// Record metrics
var duration = job.LeasedAt.HasValue ? (now - job.LeasedAt.Value).TotalSeconds : 0;
JobEngineMetrics.JobCompleted(tenantId, job.JobType, newStatus.ToString().ToLowerInvariant());
JobEngineMetrics.RecordJobDuration(tenantId, job.JobType, duration);
if (!request.Success)
{
JobEngineMetrics.JobFailed(tenantId, job.JobType);
}
return Results.Ok(new CompleteResponse(
jobId,
newStatus.ToString().ToLowerInvariant(),
now,
artifactIds,
duration));
}
private static ClaimResponse CreateClaimResponse(Job job)
{
return new ClaimResponse(
job.JobId,
job.LeaseId!.Value,
job.JobType,
job.Payload,
job.PayloadDigest,
job.Attempt,
job.MaxAttempts,
job.LeaseUntil!.Value,
job.IdempotencyKey,
job.CorrelationId,
job.RunId,
job.ProjectId);
}
}
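The claim/heartbeat/complete endpoints above form a lease-based worker protocol: a worker claims a job, keeps the lease alive while working, then reports the terminal status. A minimal client-side loop might look like the following sketch — the base address, route paths, and the `ClaimedJob` projection are assumptions for illustration, not taken from the removed service:

```csharp
using System.Net;
using System.Net.Http.Json;

var http = new HttpClient { BaseAddress = new Uri("http://jobengine.local/") };

// 1. Claim: a 204 means no work is available; back off and poll again later.
var claim = await http.PostAsJsonAsync("api/worker/jobs:claim",
    new { workerId = "worker-01", jobType = "scan", leaseSeconds = 60 });
if (claim.StatusCode == HttpStatusCode.NoContent)
    return;
var job = await claim.Content.ReadFromJsonAsync<ClaimedJob>()
          ?? throw new InvalidOperationException("empty claim response");

// 2. Heartbeat while working; the server verifies the lease id before extending.
await http.PostAsJsonAsync($"api/worker/jobs/{job.JobId}/heartbeat",
    new { leaseId = job.LeaseId, extendSeconds = 60 });

// 3. Complete: clears the lease and moves the job to succeeded or failed.
await http.PostAsJsonAsync($"api/worker/jobs/{job.JobId}/complete",
    new { leaseId = job.LeaseId, success = true });

// Hypothetical projection of the ClaimResponse fields used above.
sealed record ClaimedJob(Guid JobId, Guid LeaseId);
```

Note that heartbeat and complete both quote the `leaseId` handed out at claim time; the endpoints reject a mismatched lease with 409, which is what fences off a worker whose lease has already been reassigned.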


@@ -1,135 +0,0 @@
using Microsoft.AspNetCore.Authorization;
using StellaOps.Auth.Abstractions;
using StellaOps.Auth.ServerIntegration;
namespace StellaOps.JobEngine.WebService;
/// <summary>
/// Named authorization policy constants for the Orchestrator service.
/// Each constant is the policy name used with <c>RequireAuthorization(policyName)</c>
/// and corresponds to one or more canonical StellaOps scopes.
/// </summary>
public static class JobEnginePolicies
{
// --- Orchestrator core policies ---
/// <summary>
/// Read-only access to orchestrator run and job state, telemetry, sources, DAG topology,
/// first-signal metrics, SLOs, and the immutable audit log.
/// Requires scope: <c>orch:read</c>.
/// </summary>
public const string Read = StellaOpsScopes.OrchRead;
/// <summary>
/// Operational control actions: cancel, retry, replay, force-close circuit breakers,
/// resolve dead-letter entries, and manage workers.
/// Requires scope: <c>orch:operate</c>.
/// </summary>
public const string Operate = StellaOpsScopes.OrchOperate;
/// <summary>
/// Manage orchestrator quotas, quota governance policies, allocation, and pause/resume lifecycle.
/// Requires scope: <c>orch:quota</c>.
/// </summary>
public const string Quota = StellaOpsScopes.OrchQuota;
// --- Pack registry and execution policies ---
/// <summary>
/// Read-only access to Task Pack registry catalogue, manifests, and pack run history.
/// Requires scope: <c>packs.read</c>.
/// </summary>
public const string PacksRead = StellaOpsScopes.PacksRead;
/// <summary>
/// Publish, update, sign, and delete Task Packs in the registry.
/// Requires scope: <c>packs.write</c>.
/// </summary>
public const string PacksWrite = StellaOpsScopes.PacksWrite;
/// <summary>
/// Schedule and execute Task Pack runs via the orchestrator.
/// Requires scope: <c>packs.run</c>.
/// </summary>
public const string PacksRun = StellaOpsScopes.PacksRun;
/// <summary>
/// Fulfil Task Pack approval gates (approve or reject pending pack steps).
/// Requires scope: <c>packs.approve</c>.
/// </summary>
public const string PacksApprove = StellaOpsScopes.PacksApprove;
// --- Release orchestration policies ---
/// <summary>
/// Read-only access to release records, promotion previews, release events, and dashboards.
/// Requires scope: <c>release:read</c>.
/// </summary>
public const string ReleaseRead = StellaOpsScopes.ReleaseRead;
/// <summary>
/// Create, update, and manage release lifecycle state (start, stop, fail, complete).
/// Requires scope: <c>release:write</c>.
/// </summary>
public const string ReleaseWrite = StellaOpsScopes.ReleaseWrite;
/// <summary>
/// Approve or reject release promotions and environment-level approval gates.
/// Requires scope: <c>release:publish</c>.
/// </summary>
public const string ReleaseApprove = StellaOpsScopes.ReleasePublish;
// --- Export job policies ---
/// <summary>
/// Read-only access to export job status, results, and quota information.
/// Requires scope: <c>export.viewer</c>.
/// </summary>
public const string ExportViewer = StellaOpsScopes.ExportViewer;
/// <summary>
/// Create, cancel, and manage export jobs; enforce export quotas.
/// Requires scope: <c>export.operator</c>.
/// </summary>
public const string ExportOperator = StellaOpsScopes.ExportOperator;
// --- Observability / KPI metrics policy ---
/// <summary>
/// Read-only access to KPI metrics, SLO dashboards, and observability data.
/// Requires scope: <c>obs:read</c>.
/// </summary>
public const string ObservabilityRead = StellaOpsScopes.ObservabilityRead;
/// <summary>
/// Registers all Orchestrator service authorization policies into the ASP.NET Core
/// authorization options. Call this from <c>Program.cs</c> inside <c>AddAuthorization</c>.
/// </summary>
public static void AddJobEnginePolicies(this AuthorizationOptions options)
{
ArgumentNullException.ThrowIfNull(options);
// Orchestrator core
options.AddStellaOpsScopePolicy(Read, StellaOpsScopes.OrchRead);
options.AddStellaOpsScopePolicy(Operate, StellaOpsScopes.OrchOperate);
options.AddStellaOpsScopePolicy(Quota, StellaOpsScopes.OrchQuota);
// Pack registry and execution
options.AddStellaOpsScopePolicy(PacksRead, StellaOpsScopes.PacksRead);
options.AddStellaOpsScopePolicy(PacksWrite, StellaOpsScopes.PacksWrite);
options.AddStellaOpsScopePolicy(PacksRun, StellaOpsScopes.PacksRun);
options.AddStellaOpsScopePolicy(PacksApprove, StellaOpsScopes.PacksApprove);
// Release orchestration
options.AddStellaOpsScopePolicy(ReleaseRead, StellaOpsScopes.ReleaseRead);
options.AddStellaOpsScopePolicy(ReleaseWrite, StellaOpsScopes.ReleaseWrite);
options.AddStellaOpsScopePolicy(ReleaseApprove, StellaOpsScopes.ReleasePublish);
// Export jobs
options.AddStellaOpsScopePolicy(ExportViewer, StellaOpsScopes.ExportViewer);
options.AddStellaOpsScopePolicy(ExportOperator, StellaOpsScopes.ExportOperator);
// Observability / KPI
options.AddStellaOpsScopePolicy(ObservabilityRead, StellaOpsScopes.ObservabilityRead);
}
}
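These policy names are intended to be attached per endpoint via `RequireAuthorization`. A hedged fragment showing the wiring (the routes and handlers below are illustrative, not from the removed service):

```csharp
// Register all policies once at startup.
builder.Services.AddAuthorization(options => options.AddJobEnginePolicies());

var app = builder.Build();

// Read-only endpoints require orch:read; control actions require orch:operate.
app.MapGet("/api/runs", () => Results.Ok())
   .RequireAuthorization(JobEnginePolicies.Read);
app.MapPost("/api/jobs/{id}/cancel", (Guid id) => Results.Accepted())
   .RequireAuthorization(JobEnginePolicies.Operate);
```

Because each constant maps to exactly one canonical scope, the policy name doubles as documentation of the scope an endpoint demands.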


@@ -1,219 +0,0 @@
using Microsoft.Extensions.Configuration;
using StellaOps.Localization;
using StellaOps.Messaging.DependencyInjection;
using StellaOps.JobEngine.Core.Scale;
using StellaOps.JobEngine.Infrastructure;
using StellaOps.JobEngine.Infrastructure.Services;
using StellaOps.JobEngine.WebService;
using StellaOps.JobEngine.WebService.Endpoints;
using StellaOps.JobEngine.WebService.Services;
using StellaOps.JobEngine.WebService.Streaming;
using StellaOps.Auth.ServerIntegration;
using StellaOps.Auth.ServerIntegration.Tenancy;
using StellaOps.Router.AspNet;
using StellaOps.Telemetry.Core;
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddStellaOpsTenantServices();
builder.Services.AddStellaOpsCors(builder.Environment, builder.Configuration);
builder.Services.AddRouting(options => options.LowercaseUrls = true);
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddOpenApi();
// Authentication (resource server JWT validation via Authority)
builder.Services.AddStellaOpsResourceServerAuthentication(builder.Configuration);
// Register jobengine authorization policies (scope-based, per RASD-03)
builder.Services.AddAuthorization(options =>
{
options.AddJobEnginePolicies();
});
// Register messaging transport (used for distributed caching primitives).
// Defaults to Valkey unless explicitly configured; set "messaging:transport"
// (or "FirstSignal:Cache:Backend") to "none" to disable.
var configuredCacheBackend = builder.Configuration["FirstSignal:Cache:Backend"]?.Trim().ToLowerInvariant();
var configuredTransport = builder.Configuration["messaging:transport"]?.Trim().ToLowerInvariant();
var transport = string.IsNullOrWhiteSpace(configuredCacheBackend) ? configuredTransport : configuredCacheBackend;
if (!string.Equals(transport, "none", StringComparison.OrdinalIgnoreCase))
{
var normalizedTransport = string.IsNullOrWhiteSpace(transport)
? "valkey"
: transport;
IConfiguration messagingConfiguration = builder.Configuration;
if (string.IsNullOrWhiteSpace(builder.Configuration["messaging:transport"]))
{
messagingConfiguration = new ConfigurationBuilder()
.AddConfiguration(builder.Configuration)
.AddInMemoryCollection(new Dictionary<string, string?>
{
["messaging:transport"] = normalizedTransport
})
.Build();
}
builder.Services.AddMessagingPlugins(messagingConfiguration, options =>
{
options.ConfigurationSection = "messaging";
options.RequireTransport = true;
var pluginDirectory = builder.Configuration["messaging:PluginDirectory"];
if (!string.IsNullOrWhiteSpace(pluginDirectory))
{
options.PluginDirectory = pluginDirectory;
}
var searchPattern = builder.Configuration["messaging:SearchPattern"];
if (!string.IsNullOrWhiteSpace(searchPattern))
{
options.SearchPattern = searchPattern;
}
});
}
// Register StellaOps telemetry with OpenTelemetry integration
// Per ORCH-OBS-50-001: Wire StellaOps.Telemetry.Core into jobengine host
builder.Services.AddStellaOpsTelemetry(
builder.Configuration,
serviceName: "StellaOps.JobEngine",
serviceVersion: "1.0.0",
configureMetrics: meterBuilder =>
{
// Include the existing jobengine metrics meter
meterBuilder.AddMeter("StellaOps.JobEngine");
meterBuilder.AddMeter("StellaOps.GoldenSignals");
},
configureTracing: tracerBuilder =>
{
// Add jobengine activity source for custom spans
tracerBuilder.AddSource("StellaOps.JobEngine");
});
// Register telemetry context propagation
builder.Services.AddTelemetryContextPropagation();
// Register golden signal metrics for scheduler instrumentation
builder.Services.AddGoldenSignalMetrics();
// Register TTFS metrics for first-signal endpoint/service
builder.Services.AddTimeToFirstSignalMetrics();
// Register incident mode for enhanced telemetry during incidents
builder.Services.AddIncidentMode(builder.Configuration);
// Register sealed-mode telemetry for air-gapped operation
builder.Services.AddSealedModeTelemetry(builder.Configuration);
// Register JobEngine infrastructure (Postgres repositories, data source)
builder.Services.AddJobEngineInfrastructure(builder.Configuration);
// Register WebService services
builder.Services.AddSingleton<TenantResolver>();
builder.Services.AddSingleton(TimeProvider.System);
builder.Services.AddSingleton<ReleasePromotionDecisionStore>();
builder.Services.AddDeploymentCompatibilityStore();
// Register streaming options and coordinators
builder.Services.Configure<StreamOptions>(builder.Configuration.GetSection(StreamOptions.SectionName));
builder.Services.AddScoped<IJobStreamCoordinator, JobStreamCoordinator>();
builder.Services.AddScoped<IRunStreamCoordinator, RunStreamCoordinator>();
builder.Services.AddScoped<IPackRunStreamCoordinator, PackRunStreamCoordinator>();
// Optional TTFS snapshot writer (disabled by default via config)
builder.Services.AddHostedService<FirstSignalSnapshotWriter>();
// Register scale metrics and load shedding services
builder.Services.AddSingleton<ScaleMetrics>();
builder.Services.AddSingleton<LoadShedder>(sp => new LoadShedder(sp.GetRequiredService<ScaleMetrics>()));
builder.Services.AddSingleton<StartupProbe>();
builder.Services.AddStellaOpsLocalization(builder.Configuration);
builder.Services.AddTranslationBundle(System.Reflection.Assembly.GetExecutingAssembly());
// Stella Router integration
var routerEnabled = builder.Services.AddRouterMicroservice(
builder.Configuration,
serviceName: "jobengine",
version: System.Reflection.CustomAttributeExtensions.GetCustomAttribute<System.Reflection.AssemblyInformationalVersionAttribute>(System.Reflection.Assembly.GetExecutingAssembly())?.InformationalVersion ?? "1.0.0",
routerOptionsSection: "Router");
builder.TryAddStellaOpsLocalBinding("jobengine");
var app = builder.Build();
app.LogStellaOpsLocalHostname("jobengine");
if (app.Environment.IsDevelopment())
{
app.MapOpenApi();
}
app.UseStellaOpsCors();
app.UseStellaOpsLocalization();
app.UseIdentityEnvelopeAuthentication();
app.UseAuthentication();
app.UseAuthorization();
app.UseStellaOpsTenantMiddleware();
// Enable telemetry context propagation (extracts tenant/actor/correlation from headers)
// Per ORCH-OBS-50-001
app.UseStellaOpsTelemetryContext();
// Enable WebSocket support for streaming endpoints
app.UseWebSockets();
app.TryUseStellaRouter(routerEnabled);
await app.LoadTranslationsAsync();
// OpenAPI discovery endpoints (available in all environments)
app.MapOpenApiEndpoints();
// Register health endpoints (replaces simple /healthz and /readyz)
app.MapHealthEndpoints();
// Register scale and autoscaling endpoints
app.MapScaleEndpoints();
// Register API endpoints
app.MapSourceEndpoints();
app.MapRunEndpoints();
app.MapFirstSignalEndpoints();
app.MapJobEndpoints();
app.MapDagEndpoints();
app.MapPackRunEndpoints();
app.MapPackRegistryEndpoints();
// Register streaming endpoints
app.MapStreamEndpoints();
// Register worker endpoints (claim, heartbeat, progress, complete)
app.MapWorkerEndpoints();
// Register quota governance and circuit breaker endpoints (per SPRINT_20260208_042)
app.MapCircuitBreakerEndpoints();
app.MapQuotaEndpoints();
app.MapQuotaGovernanceEndpoints();
// Register dead-letter queue management endpoints
app.MapDeadLetterEndpoints();
// Register release management, approval, and deployment monitoring endpoints
app.MapReleaseEndpoints();
app.MapApprovalEndpoints();
app.MapDeploymentEndpoints();
app.MapReleaseDashboardEndpoints();
app.MapReleaseControlV2Endpoints();
app.MapEvidenceEndpoints();
app.MapAuditEndpoints();
// Refresh Router endpoint cache
app.TryRefreshStellaRouterEndpoints(routerEnabled);
await app.RunAsync().ConfigureAwait(false);
// Make Program class file-scoped to prevent it from being exposed to referencing assemblies
file sealed partial class Program;


@@ -1,27 +0,0 @@
{
"$schema": "https://json.schemastore.org/launchsettings.json",
"profiles": {
"http": {
"commandName": "Project",
"dotnetRunMessages": true,
"launchBrowser": false,
"applicationUrl": "http://localhost:10171",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development",
"STELLAOPS_WEBSERVICES_CORS": "true",
"STELLAOPS_WEBSERVICES_CORS_ORIGIN": "https://stella-ops.local,https://stella-ops.local:10000,https://localhost:10000"
}
},
"https": {
"commandName": "Project",
"dotnetRunMessages": true,
"launchBrowser": false,
"applicationUrl": "https://localhost:10170;http://localhost:10171",
"environmentVariables": {
"ASPNETCORE_ENVIRONMENT": "Development",
"STELLAOPS_WEBSERVICES_CORS": "true",
"STELLAOPS_WEBSERVICES_CORS_ORIGIN": "https://stella-ops.local,https://stella-ops.local:10000,https://localhost:10000"
}
}
}
}


@@ -1,120 +0,0 @@
using System.Text.Json;
namespace StellaOps.JobEngine.WebService.Services;
public sealed record CreateDeploymentRequest
{
public string ReleaseId { get; init; } = string.Empty;
public string EnvironmentId { get; init; } = string.Empty;
public string? EnvironmentName { get; init; }
public string Strategy { get; init; } = "rolling";
public JsonElement? StrategyConfig { get; init; }
public string? PackageType { get; init; }
public string? PackageRefId { get; init; }
public string? PackageRefName { get; init; }
public IReadOnlyList<PromotionStageDto> PromotionStages { get; init; } = Array.Empty<PromotionStageDto>();
}
public sealed record PromotionStageDto
{
public string Name { get; init; } = string.Empty;
public string EnvironmentId { get; init; } = string.Empty;
}
public record class DeploymentSummaryDto
{
public required string Id { get; init; }
public required string ReleaseId { get; init; }
public required string ReleaseName { get; init; }
public required string ReleaseVersion { get; init; }
public required string EnvironmentId { get; init; }
public required string EnvironmentName { get; init; }
public required string Status { get; init; }
public required string Strategy { get; init; }
public int Progress { get; init; }
public DateTimeOffset StartedAt { get; init; }
public DateTimeOffset? CompletedAt { get; init; }
public string InitiatedBy { get; init; } = string.Empty;
public int TargetCount { get; init; }
public int CompletedTargets { get; init; }
public int FailedTargets { get; init; }
}
public sealed record DeploymentDto : DeploymentSummaryDto
{
public List<DeploymentTargetDto> Targets { get; init; } = [];
public string? CurrentStep { get; init; }
public bool CanPause { get; init; }
public bool CanResume { get; init; }
public bool CanCancel { get; init; }
public bool CanRollback { get; init; }
public JsonElement? StrategyConfig { get; init; }
public IReadOnlyList<PromotionStageDto> PromotionStages { get; init; } = Array.Empty<PromotionStageDto>();
public string? PackageType { get; init; }
public string? PackageRefId { get; init; }
public string? PackageRefName { get; init; }
}
public sealed record DeploymentTargetDto
{
public required string Id { get; init; }
public required string Name { get; init; }
public required string Type { get; init; }
public required string Status { get; init; }
public int Progress { get; init; }
public DateTimeOffset? StartedAt { get; init; }
public DateTimeOffset? CompletedAt { get; init; }
public int? Duration { get; init; }
public string AgentId { get; init; } = string.Empty;
public string? Error { get; init; }
public string? PreviousVersion { get; init; }
}
public sealed record DeploymentEventDto
{
public required string Id { get; init; }
public required string Type { get; init; }
public string? TargetId { get; init; }
public string? TargetName { get; init; }
public required string Message { get; init; }
public DateTimeOffset Timestamp { get; init; }
}
public sealed record DeploymentLogEntryDto
{
public DateTimeOffset Timestamp { get; init; }
public required string Level { get; init; }
public required string Source { get; init; }
public string? TargetId { get; init; }
public required string Message { get; init; }
}
public sealed record DeploymentMetricsDto
{
public int TotalDuration { get; init; }
public int AverageTargetDuration { get; init; }
public double SuccessRate { get; init; }
public int RollbackCount { get; init; }
public int ImagesPulled { get; init; }
public int ContainersStarted { get; init; }
public int ContainersRemoved { get; init; }
public int HealthChecksPerformed { get; init; }
}
public sealed record DeploymentCompatibilityState(
DeploymentDto Deployment,
List<DeploymentLogEntryDto> Logs,
List<DeploymentEventDto> Events,
DeploymentMetricsDto Metrics);
public enum DeploymentMutationStatus
{
Success,
NotFound,
Conflict,
}
public sealed record DeploymentMutationResult(
DeploymentMutationStatus Status,
string Message,
DeploymentDto? Deployment);


@@ -1,22 +0,0 @@
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Options;
using StellaOps.JobEngine.Infrastructure.Options;
namespace StellaOps.JobEngine.WebService.Services;
public static class DeploymentCompatibilityServiceCollectionExtensions
{
public static IServiceCollection AddDeploymentCompatibilityStore(this IServiceCollection services)
{
services.AddSingleton<InMemoryDeploymentCompatibilityStore>();
services.AddSingleton<IDeploymentCompatibilityStore>(sp =>
{
var options = sp.GetRequiredService<IOptions<JobEngineServiceOptions>>().Value;
return string.IsNullOrWhiteSpace(options.Database.ConnectionString)
? sp.GetRequiredService<InMemoryDeploymentCompatibilityStore>()
: ActivatorUtilities.CreateInstance<PostgresDeploymentCompatibilityStore>(sp);
});
return services;
}
}


@@ -1,358 +0,0 @@
namespace StellaOps.JobEngine.WebService.Services;
internal static class DeploymentCompatibilityStateFactory
{
public static IReadOnlyList<DeploymentCompatibilityState> CreateSeedStates()
=> [
CreateSeedState("dep-001", "rel-001", "platform-release", "2026.04.01", "env-prod", "Production", "completed", "rolling", DateTimeOffset.Parse("2026-04-01T09:00:00Z"), 3, null, 1),
CreateSeedState("dep-002", "rel-002", "checkout-api", "2026.04.02", "env-staging", "Staging", "running", "canary", DateTimeOffset.Parse("2026-04-02T12:15:00Z"), 3, null, 4),
CreateSeedState("dep-003", "rel-003", "worker-service", "2026.04.03", "env-dev", "Development", "failed", "all_at_once", DateTimeOffset.Parse("2026-04-03T08:30:00Z"), 4, 2, 7),
CreateSeedState("dep-004", "rel-004", "gateway-hotfix", "hf-2026.04.04", "env-stage-eu", "EU Stage", "paused", "blue_green", DateTimeOffset.Parse("2026-04-04T06:00:00Z"), 4, 0, 10),
];
public static DeploymentCompatibilityState CreateState(
CreateDeploymentRequest request,
string actor,
TimeProvider timeProvider)
{
var now = timeProvider.GetUtcNow();
var id = $"dep-{Guid.NewGuid():N}"[..16];
var envName = string.IsNullOrWhiteSpace(request.EnvironmentName)
? Pretty(request.EnvironmentId)
: request.EnvironmentName!;
var targets = CreateTargets(
request.EnvironmentId,
request.Strategy == "all_at_once" ? 4 : 3,
failedIndex: null,
offset: 20,
baseTime: now.AddMinutes(-4));
var deployment = Recalculate(new DeploymentDto
{
Id = id,
ReleaseId = request.ReleaseId,
ReleaseName = request.PackageRefName ?? request.ReleaseId,
ReleaseVersion = request.PackageRefName ?? request.PackageRefId ?? "version-1",
EnvironmentId = request.EnvironmentId,
EnvironmentName = envName,
Status = "pending",
Strategy = request.Strategy,
StartedAt = now,
InitiatedBy = actor,
Targets = targets,
CurrentStep = "Queued for rollout",
CanCancel = true,
StrategyConfig = request.StrategyConfig,
PromotionStages = request.PromotionStages,
PackageType = request.PackageType,
PackageRefId = request.PackageRefId,
PackageRefName = request.PackageRefName,
});
return new DeploymentCompatibilityState(
deployment,
[
new DeploymentLogEntryDto
{
Timestamp = now,
Level = "info",
Source = "jobengine",
Message = $"Deployment {id} created for {request.EnvironmentId}.",
},
],
[
new DeploymentEventDto
{
Id = $"evt-{Guid.NewGuid():N}"[..16],
Type = "started",
Message = $"Deployment {id} queued.",
Timestamp = now,
},
],
new DeploymentMetricsDto());
}
public static DeploymentCompatibilityState Transition(
DeploymentCompatibilityState current,
string nextStatus,
string eventType,
string message,
bool complete,
TimeProvider timeProvider)
{
var now = timeProvider.GetUtcNow();
var nextDeployment = Recalculate(current.Deployment with
{
Status = nextStatus,
CompletedAt = complete ? now : current.Deployment.CompletedAt,
CurrentStep = nextStatus switch
{
"paused" => "Deployment paused",
"running" => "Deployment resumed",
"cancelled" => "Deployment cancelled",
"rolling_back" => "Rollback started",
_ => current.Deployment.CurrentStep,
},
});
var nextMetrics = nextStatus == "rolling_back"
? current.Metrics with { RollbackCount = current.Metrics.RollbackCount + 1 }
: current.Metrics;
var logs = current.Logs
.Append(new DeploymentLogEntryDto
{
Timestamp = now,
Level = "info",
Source = "jobengine",
Message = message,
})
.ToList();
var events = current.Events
.Append(new DeploymentEventDto
{
Id = $"evt-{Guid.NewGuid():N}"[..16],
Type = eventType,
Message = message,
Timestamp = now,
})
.ToList();
return current with
{
Deployment = nextDeployment,
Logs = logs,
Events = events,
Metrics = nextMetrics,
};
}
public static DeploymentCompatibilityState Retry(
DeploymentCompatibilityState current,
string targetId,
TimeProvider timeProvider)
{
var target = current.Deployment.Targets.First(t => string.Equals(t.Id, targetId, StringComparison.OrdinalIgnoreCase));
var now = timeProvider.GetUtcNow();
var targets = current.Deployment.Targets
.Select(item => item.Id == targetId
? item with
{
Status = "pending",
Progress = 0,
StartedAt = null,
CompletedAt = null,
Duration = null,
Error = null,
}
: item)
.ToList();
var nextDeployment = Recalculate(current.Deployment with
{
Status = "running",
CompletedAt = null,
CurrentStep = $"Retrying {target.Name}",
Targets = targets,
});
var logs = current.Logs
.Append(new DeploymentLogEntryDto
{
Timestamp = now,
Level = "warn",
Source = "jobengine",
TargetId = targetId,
Message = $"Retry requested for {target.Name}.",
})
.ToList();
var events = current.Events
.Append(new DeploymentEventDto
{
Id = $"evt-{Guid.NewGuid():N}"[..16],
Type = "target_started",
TargetId = targetId,
TargetName = target.Name,
Message = $"Retry started for {target.Name}.",
Timestamp = now,
})
.ToList();
return current with
{
Deployment = nextDeployment,
Logs = logs,
Events = events,
};
}
private static DeploymentCompatibilityState CreateSeedState(
string id,
string releaseId,
string releaseName,
string releaseVersion,
string environmentId,
string environmentName,
string status,
string strategy,
DateTimeOffset startedAt,
int targetCount,
int? failedIndex,
int offset)
{
var targets = CreateTargets(environmentId, targetCount, failedIndex, offset, startedAt.AddMinutes(-targetCount * 4));
DateTimeOffset? completedAt = status is "completed" or "failed" ? startedAt.AddMinutes(18) : null;
var deployment = Recalculate(new DeploymentDto
{
Id = id,
ReleaseId = releaseId,
ReleaseName = releaseName,
ReleaseVersion = releaseVersion,
EnvironmentId = environmentId,
EnvironmentName = environmentName,
Status = status,
Strategy = strategy,
StartedAt = startedAt,
CompletedAt = completedAt,
InitiatedBy = "deploy-bot",
Targets = targets,
CurrentStep = status switch
{
"running" => $"Deploying {targets.First(t => t.Status == "running").Name}",
"paused" => "Awaiting operator resume",
"failed" => $"Target {targets.First(t => t.Status == "failed").Name} failed",
_ => null,
},
});
var logs = new List<DeploymentLogEntryDto>
{
new()
{
Timestamp = startedAt,
Level = "info",
Source = "jobengine",
Message = $"Deployment {id} started.",
},
};
logs.AddRange(targets.Select(target => new DeploymentLogEntryDto
{
Timestamp = target.StartedAt ?? startedAt,
Level = target.Status == "failed" ? "error" : "info",
Source = target.AgentId,
TargetId = target.Id,
Message = target.Status == "failed"
? $"{target.Name} failed health checks."
: $"{target.Name} progressed to {target.Status}.",
}));
var events = new List<DeploymentEventDto>
{
new()
{
Id = $"evt-{id}-start",
Type = "started",
Message = $"Deployment {id} started.",
Timestamp = startedAt,
},
};
events.AddRange(targets.Select(target => new DeploymentEventDto
{
Id = $"evt-{id}-{target.Id}",
Type = target.Status == "failed"
? "target_failed"
: target.Status == "running"
? "target_started"
: "target_completed",
TargetId = target.Id,
TargetName = target.Name,
Message = target.Status == "failed"
? $"{target.Name} failed."
: target.Status == "running"
? $"{target.Name} is running."
: $"{target.Name} completed.",
Timestamp = target.StartedAt ?? startedAt,
}));
var completedDurations = targets.Where(target => target.Duration.HasValue).Select(target => target.Duration!.Value).ToArray();
var metrics = new DeploymentMetricsDto
{
TotalDuration = completedAt.HasValue ? (int)(completedAt.Value - startedAt).TotalMilliseconds : 0,
AverageTargetDuration = completedDurations.Length == 0 ? 0 : (int)completedDurations.Average(),
SuccessRate = Math.Round(targets.Count(target => target.Status == "completed") / (double)targetCount * 100, 2),
ImagesPulled = targetCount,
ContainersStarted = targets.Count(target => target.Status is "completed" or "running"),
ContainersRemoved = targets.Count(target => target.Status == "completed"),
HealthChecksPerformed = targetCount * 2,
};
return new DeploymentCompatibilityState(deployment, logs, events, metrics);
}
private static List<DeploymentTargetDto> CreateTargets(
string environmentId,
int count,
int? failedIndex,
int offset,
DateTimeOffset baseTime)
{
var items = new List<DeploymentTargetDto>(count);
var prefix = environmentId.Contains("prod", StringComparison.OrdinalIgnoreCase) ? "prod" : "node";
for (var index = 0; index < count; index++)
{
var failed = failedIndex.HasValue && index == failedIndex.Value;
var running = !failedIndex.HasValue && index == count - 1;
var status = failed ? "failed" : running ? "running" : "completed";
var startedAt = baseTime.AddMinutes(index * 3);
DateTimeOffset? completedAt = status == "completed" ? startedAt.AddMinutes(2) : null;
items.Add(new DeploymentTargetDto
{
Id = $"tgt-{offset + index:000}",
Name = $"{prefix}-{offset + index:00}",
Type = index % 2 == 0 ? "docker_host" : "compose_host",
Status = status,
Progress = status == "completed" ? 100 : status == "running" ? 65 : 45,
StartedAt = startedAt,
CompletedAt = completedAt,
Duration = completedAt.HasValue ? (int)(completedAt.Value - startedAt).TotalMilliseconds : null,
AgentId = $"agent-{offset + index:000}",
Error = status == "failed" ? "Health check failed" : null,
PreviousVersion = "2026.03.31",
});
}
return items;
}
internal static DeploymentDto Recalculate(DeploymentDto deployment)
{
var totalTargets = deployment.Targets.Count;
var completedTargets = deployment.Targets.Count(target => target.Status == "completed");
var failedTargets = deployment.Targets.Count(target => target.Status == "failed");
var progress = totalTargets == 0
? 0
: (int)Math.Round(deployment.Targets.Sum(target => target.Progress) / (double)totalTargets);
return deployment with
{
TargetCount = totalTargets,
CompletedTargets = completedTargets,
FailedTargets = failedTargets,
Progress = progress,
CanPause = deployment.Status == "running",
CanResume = deployment.Status == "paused",
CanCancel = deployment.Status is "pending" or "running" or "paused",
CanRollback = deployment.Status is "completed" or "failed" or "running" or "paused",
};
}
private static string Pretty(string value)
{
return string.Join(
' ',
value.Split(['-', '_'], StringSplitOptions.RemoveEmptyEntries)
.Select(part => char.ToUpperInvariant(part[0]) + part[1..]));
}
}
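The `Recalculate` helper above rolls aggregate deployment progress up as the rounded average of per-target progress. A minimal language-neutral sketch of that rollup (Python purely for illustration; the function name is hypothetical):

```python
def rollup_progress(target_progress: list[int]) -> int:
    """Average per-target progress, rounded to a whole percent; 0 when there are no targets."""
    if not target_progress:
        return 0
    return round(sum(target_progress) / len(target_progress))
```

Like the C# `Math.Round` default, Python's `round` uses banker's rounding, so the two agree on midpoints.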

View File

@@ -1,37 +0,0 @@
using Microsoft.AspNetCore.Http;
using System.Globalization;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Helper for applying HTTP deprecation metadata to legacy endpoints.
/// </summary>
public static class DeprecationHeaders
{
/// <summary>
/// Apply standard deprecation headers and alternate link hint to the response.
/// </summary>
/// <param name="response">HTTP response to annotate.</param>
/// <param name="alternate">Alternate endpoint that supersedes the deprecated one.</param>
/// <param name="sunset">Optional sunset date (UTC).</param>
public static void Apply(HttpResponse response, string alternate, DateTimeOffset? sunset = null)
{
// RFC 8594 recommends HTTP-date for Sunset; default to a near-term horizon to prompt migrations.
var sunsetValue = (sunset ?? new DateTimeOffset(2026, 03, 31, 0, 0, 0, TimeSpan.Zero))
.UtcDateTime
.ToString("r", CultureInfo.InvariantCulture);
if (!response.Headers.ContainsKey("Deprecation"))
{
response.Headers.Append("Deprecation", "true");
}
// Link: <...>; rel="alternate"; title="Replacement"
var linkValue = $"<{alternate}>; rel=\"alternate\"; title=\"Replacement endpoint\"";
response.Headers.Append("Link", linkValue);
response.Headers.Append("Sunset", sunsetValue);
response.Headers.Append("X-StellaOps-Deprecated", "orchestrator:legacy-endpoint");
}
}
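The removed `DeprecationHeaders.Apply` emits the RFC 8594 `Sunset` value as an RFC 7231 HTTP-date (the .NET `"r"` round-trip format). The wire format itself is language-neutral; a small illustrative sketch (Python, hypothetical helper name):

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def sunset_value(dt: datetime) -> str:
    # RFC 8594 reuses the RFC 7231 HTTP-date format, e.g. "Tue, 31 Mar 2026 00:00:00 GMT".
    return format_datetime(dt.astimezone(timezone.utc), usegmt=True)
```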

View File

@@ -1,170 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
using System.Text;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Helper methods for endpoint operations.
/// </summary>
public static class EndpointHelpers
{
private const int DefaultLimit = 50;
private const int MaxLimit = 100;
/// <summary>
/// Parses a positive integer from a string, returning null if invalid.
/// </summary>
public static int? TryParsePositiveInt(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
if (int.TryParse(value, out var result) && result > 0)
{
return result;
}
return null;
}
/// <summary>
/// Parses a DateTimeOffset from a string, returning null if invalid.
/// </summary>
public static DateTimeOffset? TryParseDateTimeOffset(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
if (DateTimeOffset.TryParse(value, out var result))
{
return result;
}
return null;
}
/// <summary>
/// Parses a GUID from a string, returning null if invalid.
/// </summary>
public static Guid? TryParseGuid(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
if (Guid.TryParse(value, out var result))
{
return result;
}
return null;
}
/// <summary>
/// Gets limit value, clamped to valid range.
/// </summary>
public static int GetLimit(int? requestedLimit) =>
Math.Clamp(requestedLimit ?? DefaultLimit, 1, MaxLimit);
/// <summary>
/// Creates a cursor string from a job for pagination.
/// </summary>
public static string CreateJobCursor(Job job) =>
$"{job.CreatedAt:O}|{job.JobId}";
/// <summary>
/// Creates a cursor string from a run for pagination.
/// </summary>
public static string CreateRunCursor(Run run) =>
$"{run.CreatedAt:O}|{run.RunId}";
/// <summary>
/// Creates a cursor string from a source for pagination.
/// </summary>
public static string CreateSourceCursor(Source source) =>
$"{source.CreatedAt:O}|{source.SourceId}";
/// <summary>
/// Parses offset from cursor string.
/// </summary>
public static int ParseCursorOffset(string? cursor, int defaultOffset = 0)
{
// For simplicity, we use offset-based pagination
// Cursor format: base64(offset)
if (string.IsNullOrWhiteSpace(cursor))
{
return defaultOffset;
}
try
{
var decoded = Encoding.UTF8.GetString(Convert.FromBase64String(cursor));
if (int.TryParse(decoded, out var offset))
{
return offset;
}
}
catch
{
// Invalid cursor, return default
}
return defaultOffset;
}
/// <summary>
/// Creates a cursor for the next page.
/// </summary>
public static string? CreateNextCursor(int currentOffset, int limit, int returnedCount)
{
if (returnedCount < limit)
{
return null; // No more results
}
var nextOffset = currentOffset + limit;
return Convert.ToBase64String(Encoding.UTF8.GetBytes(nextOffset.ToString()));
}
/// <summary>
/// Parses a job status from a string.
/// </summary>
public static JobStatus? TryParseJobStatus(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
if (Enum.TryParse<JobStatus>(value, ignoreCase: true, out var status))
{
return status;
}
return null;
}
/// <summary>
/// Parses a run status from a string.
/// </summary>
public static RunStatus? TryParseRunStatus(string? value)
{
if (string.IsNullOrWhiteSpace(value))
{
return null;
}
if (Enum.TryParse<RunStatus>(value, ignoreCase: true, out var status))
{
return status;
}
return null;
}
}

View File

@@ -1,48 +0,0 @@
namespace StellaOps.JobEngine.WebService.Services;
public interface IDeploymentCompatibilityStore
{
Task<IReadOnlyList<DeploymentDto>> ListAsync(string tenantId, CancellationToken cancellationToken);
Task<DeploymentDto?> GetAsync(string tenantId, string deploymentId, CancellationToken cancellationToken);
Task<IReadOnlyList<DeploymentLogEntryDto>?> GetLogsAsync(
string tenantId,
string deploymentId,
string? targetId,
string? level,
int? limit,
CancellationToken cancellationToken);
Task<IReadOnlyList<DeploymentEventDto>?> GetEventsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken);
Task<DeploymentMetricsDto?> GetMetricsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken);
Task<DeploymentDto> CreateAsync(
string tenantId,
CreateDeploymentRequest request,
string actor,
CancellationToken cancellationToken);
Task<DeploymentMutationResult> TransitionAsync(
string tenantId,
string deploymentId,
IReadOnlyCollection<string> allowedStatuses,
string nextStatus,
string eventType,
string message,
bool complete,
CancellationToken cancellationToken);
Task<DeploymentMutationResult> RetryAsync(
string tenantId,
string deploymentId,
string targetId,
CancellationToken cancellationToken);
}

View File

@@ -1,169 +0,0 @@
using System.Collections.Concurrent;
namespace StellaOps.JobEngine.WebService.Services;
public sealed class InMemoryDeploymentCompatibilityStore : IDeploymentCompatibilityStore
{
private readonly ConcurrentDictionary<string, ConcurrentDictionary<string, DeploymentCompatibilityState>> _tenants = new(StringComparer.Ordinal);
private readonly TimeProvider _timeProvider;
public InMemoryDeploymentCompatibilityStore(TimeProvider timeProvider)
{
_timeProvider = timeProvider;
}
public Task<IReadOnlyList<DeploymentDto>> ListAsync(string tenantId, CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
return Task.FromResult<IReadOnlyList<DeploymentDto>>(states.Values
.Select(state => state.Deployment)
.OrderByDescending(item => item.StartedAt)
.ThenBy(item => item.Id, StringComparer.Ordinal)
.ToList());
}
public Task<DeploymentDto?> GetAsync(string tenantId, string deploymentId, CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
return Task.FromResult(states.TryGetValue(deploymentId, out var state) ? state.Deployment : null);
}
public Task<IReadOnlyList<DeploymentLogEntryDto>?> GetLogsAsync(
string tenantId,
string deploymentId,
string? targetId,
string? level,
int? limit,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
if (!states.TryGetValue(deploymentId, out var state))
{
return Task.FromResult<IReadOnlyList<DeploymentLogEntryDto>?>(null);
}
IEnumerable<DeploymentLogEntryDto> logs = state.Logs;
if (!string.IsNullOrWhiteSpace(targetId))
{
logs = logs.Where(item => string.Equals(item.TargetId, targetId, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(level))
{
logs = logs.Where(item => string.Equals(item.Level, level, StringComparison.OrdinalIgnoreCase));
}
return Task.FromResult<IReadOnlyList<DeploymentLogEntryDto>?>(logs
.TakeLast(Math.Clamp(limit ?? 500, 1, 5000))
.ToList());
}
public Task<IReadOnlyList<DeploymentEventDto>?> GetEventsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
return Task.FromResult<IReadOnlyList<DeploymentEventDto>?>(states.TryGetValue(deploymentId, out var state)
? state.Events.OrderBy(item => item.Timestamp).ToList()
: null);
}
public Task<DeploymentMetricsDto?> GetMetricsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
return Task.FromResult(states.TryGetValue(deploymentId, out var state) ? state.Metrics : null);
}
public Task<DeploymentDto> CreateAsync(
string tenantId,
CreateDeploymentRequest request,
string actor,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
var state = DeploymentCompatibilityStateFactory.CreateState(request, actor, _timeProvider);
states[state.Deployment.Id] = state;
return Task.FromResult(state.Deployment);
}
public Task<DeploymentMutationResult> TransitionAsync(
string tenantId,
string deploymentId,
IReadOnlyCollection<string> allowedStatuses,
string nextStatus,
string eventType,
string message,
bool complete,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
if (!states.TryGetValue(deploymentId, out var current))
{
return Task.FromResult(new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null));
}
if (!allowedStatuses.Contains(current.Deployment.Status, StringComparer.OrdinalIgnoreCase))
{
return Task.FromResult(new DeploymentMutationResult(
DeploymentMutationStatus.Conflict,
$"Deployment {deploymentId} cannot transition from '{current.Deployment.Status}' to '{nextStatus}'.",
null));
}
var next = DeploymentCompatibilityStateFactory.Transition(current, nextStatus, eventType, message, complete, _timeProvider);
states[deploymentId] = next;
return Task.FromResult(new DeploymentMutationResult(DeploymentMutationStatus.Success, message, next.Deployment));
}
public Task<DeploymentMutationResult> RetryAsync(
string tenantId,
string deploymentId,
string targetId,
CancellationToken cancellationToken)
{
var states = GetOrSeedTenantState(tenantId);
if (!states.TryGetValue(deploymentId, out var current))
{
return Task.FromResult(new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null));
}
var target = current.Deployment.Targets.FirstOrDefault(item => string.Equals(item.Id, targetId, StringComparison.OrdinalIgnoreCase));
if (target is null)
{
return Task.FromResult(new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null));
}
if (target.Status is not ("failed" or "skipped"))
{
return Task.FromResult(new DeploymentMutationResult(
DeploymentMutationStatus.Conflict,
$"Target {targetId} is not in a retryable state.",
null));
}
var next = DeploymentCompatibilityStateFactory.Retry(current, targetId, _timeProvider);
states[deploymentId] = next;
return Task.FromResult(new DeploymentMutationResult(
DeploymentMutationStatus.Success,
$"Retry initiated for {target.Name}.",
next.Deployment));
}
private ConcurrentDictionary<string, DeploymentCompatibilityState> GetOrSeedTenantState(string tenantId)
{
return _tenants.GetOrAdd(tenantId, _ =>
{
var states = new ConcurrentDictionary<string, DeploymentCompatibilityState>(StringComparer.OrdinalIgnoreCase);
foreach (var seed in DeploymentCompatibilityStateFactory.CreateSeedStates())
{
states[seed.Deployment.Id] = seed;
}
return states;
});
}
}
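`GetOrSeedTenantState` above lazily materialises a tenant's deployment map on first access via `ConcurrentDictionary.GetOrAdd`. The shape of that lazy-seed pattern, in a single-threaded Python sketch (the real store additionally tolerates the factory racing under concurrency):

```python
def get_or_seed(tenants: dict, tenant_id: str, seed_states) -> dict:
    # First access for a tenant materialises the seed data; later calls reuse it.
    if tenant_id not in tenants:
        tenants[tenant_id] = {state["id"]: state for state in seed_states()}
    return tenants[tenant_id]
```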

View File

@@ -1,329 +0,0 @@
using System.Text.Json;
using Microsoft.Extensions.Options;
using Npgsql;
using StellaOps.JobEngine.Infrastructure.Options;
namespace StellaOps.JobEngine.WebService.Services;
public sealed class PostgresDeploymentCompatibilityStore : IDeploymentCompatibilityStore, IAsyncDisposable
{
private const string QualifiedTableName = "\"orchestrator\".compatibility_deployments";
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly NpgsqlDataSource _dataSource;
private readonly int _commandTimeoutSeconds;
private readonly TimeProvider _timeProvider;
public PostgresDeploymentCompatibilityStore(
IOptions<JobEngineServiceOptions> options,
TimeProvider timeProvider)
{
ArgumentNullException.ThrowIfNull(options);
var databaseOptions = options.Value.Database;
var connectionStringBuilder = new NpgsqlConnectionStringBuilder(databaseOptions.ConnectionString)
{
ApplicationName = "stellaops-jobengine-compatibility-store",
Pooling = databaseOptions.EnablePooling,
MinPoolSize = databaseOptions.MinPoolSize,
MaxPoolSize = databaseOptions.MaxPoolSize,
};
_dataSource = new NpgsqlDataSourceBuilder(connectionStringBuilder.ConnectionString).Build();
_commandTimeoutSeconds = Math.Max(databaseOptions.CommandTimeoutSeconds, 1);
_timeProvider = timeProvider;
}
public ValueTask DisposeAsync()
=> _dataSource.DisposeAsync();
public async Task<IReadOnlyList<DeploymentDto>> ListAsync(string tenantId, CancellationToken cancellationToken)
{
await using var connection = await OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
await EnsureSeedDataAsync(connection, tenantId, cancellationToken).ConfigureAwait(false);
var sql =
$"""
SELECT deployment_json
FROM {QualifiedTableName}
WHERE tenant_id = @tenant
ORDER BY started_at DESC, deployment_id
""";
await using var command = CreateCommand(sql, connection);
command.Parameters.AddWithValue("tenant", tenantId);
var items = new List<DeploymentDto>();
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
while (await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
items.Add(Deserialize<DeploymentDto>(reader.GetString(0)));
}
return items;
}
public async Task<DeploymentDto?> GetAsync(string tenantId, string deploymentId, CancellationToken cancellationToken)
{
var state = await GetStateAsync(tenantId, deploymentId, cancellationToken).ConfigureAwait(false);
return state?.Deployment;
}
public async Task<IReadOnlyList<DeploymentLogEntryDto>?> GetLogsAsync(
string tenantId,
string deploymentId,
string? targetId,
string? level,
int? limit,
CancellationToken cancellationToken)
{
var state = await GetStateAsync(tenantId, deploymentId, cancellationToken).ConfigureAwait(false);
if (state is null)
{
return null;
}
IEnumerable<DeploymentLogEntryDto> logs = state.Logs;
if (!string.IsNullOrWhiteSpace(targetId))
{
logs = logs.Where(item => string.Equals(item.TargetId, targetId, StringComparison.OrdinalIgnoreCase));
}
if (!string.IsNullOrWhiteSpace(level))
{
logs = logs.Where(item => string.Equals(item.Level, level, StringComparison.OrdinalIgnoreCase));
}
return logs.TakeLast(Math.Clamp(limit ?? 500, 1, 5000)).ToList();
}
public async Task<IReadOnlyList<DeploymentEventDto>?> GetEventsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken)
{
var state = await GetStateAsync(tenantId, deploymentId, cancellationToken).ConfigureAwait(false);
return state?.Events.OrderBy(item => item.Timestamp).ToList();
}
public async Task<DeploymentMetricsDto?> GetMetricsAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken)
{
var state = await GetStateAsync(tenantId, deploymentId, cancellationToken).ConfigureAwait(false);
return state?.Metrics;
}
public async Task<DeploymentDto> CreateAsync(
string tenantId,
CreateDeploymentRequest request,
string actor,
CancellationToken cancellationToken)
{
await using var connection = await OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
await EnsureSeedDataAsync(connection, tenantId, cancellationToken).ConfigureAwait(false);
var state = DeploymentCompatibilityStateFactory.CreateState(request, actor, _timeProvider);
await UpsertStateAsync(connection, transaction: null, tenantId, state, cancellationToken).ConfigureAwait(false);
return state.Deployment;
}
public async Task<DeploymentMutationResult> TransitionAsync(
string tenantId,
string deploymentId,
IReadOnlyCollection<string> allowedStatuses,
string nextStatus,
string eventType,
string message,
bool complete,
CancellationToken cancellationToken)
{
await using var connection = await OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
await EnsureSeedDataAsync(connection, tenantId, cancellationToken).ConfigureAwait(false);
await using var transaction = await connection.BeginTransactionAsync(cancellationToken).ConfigureAwait(false);
var current = await LoadStateAsync(connection, transaction, tenantId, deploymentId, forUpdate: true, cancellationToken).ConfigureAwait(false);
if (current is null)
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null);
}
if (!allowedStatuses.Contains(current.Deployment.Status, StringComparer.OrdinalIgnoreCase))
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(
DeploymentMutationStatus.Conflict,
$"Deployment {deploymentId} cannot transition from '{current.Deployment.Status}' to '{nextStatus}'.",
null);
}
var next = DeploymentCompatibilityStateFactory.Transition(current, nextStatus, eventType, message, complete, _timeProvider);
await UpsertStateAsync(connection, transaction, tenantId, next, cancellationToken).ConfigureAwait(false);
await transaction.CommitAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(DeploymentMutationStatus.Success, message, next.Deployment);
}
public async Task<DeploymentMutationResult> RetryAsync(
string tenantId,
string deploymentId,
string targetId,
CancellationToken cancellationToken)
{
await using var connection = await OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
await EnsureSeedDataAsync(connection, tenantId, cancellationToken).ConfigureAwait(false);
await using var transaction = await connection.BeginTransactionAsync(cancellationToken).ConfigureAwait(false);
var current = await LoadStateAsync(connection, transaction, tenantId, deploymentId, forUpdate: true, cancellationToken).ConfigureAwait(false);
if (current is null)
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null);
}
var target = current.Deployment.Targets.FirstOrDefault(item => string.Equals(item.Id, targetId, StringComparison.OrdinalIgnoreCase));
if (target is null)
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(DeploymentMutationStatus.NotFound, string.Empty, null);
}
if (target.Status is not ("failed" or "skipped"))
{
await transaction.RollbackAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(
DeploymentMutationStatus.Conflict,
$"Target {targetId} is not in a retryable state.",
null);
}
var next = DeploymentCompatibilityStateFactory.Retry(current, targetId, _timeProvider);
await UpsertStateAsync(connection, transaction, tenantId, next, cancellationToken).ConfigureAwait(false);
await transaction.CommitAsync(cancellationToken).ConfigureAwait(false);
return new DeploymentMutationResult(
DeploymentMutationStatus.Success,
$"Retry initiated for {target.Name}.",
next.Deployment);
}
private async Task<DeploymentCompatibilityState?> GetStateAsync(
string tenantId,
string deploymentId,
CancellationToken cancellationToken)
{
await using var connection = await OpenConnectionAsync(tenantId, cancellationToken).ConfigureAwait(false);
await EnsureSeedDataAsync(connection, tenantId, cancellationToken).ConfigureAwait(false);
return await LoadStateAsync(connection, transaction: null, tenantId, deploymentId, forUpdate: false, cancellationToken).ConfigureAwait(false);
}
private async Task EnsureSeedDataAsync(
NpgsqlConnection connection,
string tenantId,
CancellationToken cancellationToken)
{
var countSql = $"SELECT COUNT(*) FROM {QualifiedTableName} WHERE tenant_id = @tenant";
await using var countCommand = CreateCommand(countSql, connection);
countCommand.Parameters.AddWithValue("tenant", tenantId);
var existing = (long)(await countCommand.ExecuteScalarAsync(cancellationToken).ConfigureAwait(false) ?? 0L);
if (existing > 0)
{
return;
}
await using var transaction = await connection.BeginTransactionAsync(cancellationToken).ConfigureAwait(false);
foreach (var seed in DeploymentCompatibilityStateFactory.CreateSeedStates())
{
await UpsertStateAsync(connection, transaction, tenantId, seed, cancellationToken).ConfigureAwait(false);
}
await transaction.CommitAsync(cancellationToken).ConfigureAwait(false);
}
private async Task<DeploymentCompatibilityState?> LoadStateAsync(
NpgsqlConnection connection,
NpgsqlTransaction? transaction,
string tenantId,
string deploymentId,
bool forUpdate,
CancellationToken cancellationToken)
{
var sql =
$"""
SELECT deployment_json, logs_json, events_json, metrics_json
FROM {QualifiedTableName}
WHERE tenant_id = @tenant AND deployment_id = @deploymentId
{(forUpdate ? "FOR UPDATE" : string.Empty)}
""";
await using var command = CreateCommand(sql, connection, transaction);
command.Parameters.AddWithValue("tenant", tenantId);
command.Parameters.AddWithValue("deploymentId", deploymentId);
await using var reader = await command.ExecuteReaderAsync(cancellationToken).ConfigureAwait(false);
if (!await reader.ReadAsync(cancellationToken).ConfigureAwait(false))
{
return null;
}
return new DeploymentCompatibilityState(
Deserialize<DeploymentDto>(reader.GetString(0)),
Deserialize<List<DeploymentLogEntryDto>>(reader.GetString(1)),
Deserialize<List<DeploymentEventDto>>(reader.GetString(2)),
Deserialize<DeploymentMetricsDto>(reader.GetString(3)));
}
private async Task UpsertStateAsync(
NpgsqlConnection connection,
NpgsqlTransaction? transaction,
string tenantId,
DeploymentCompatibilityState state,
CancellationToken cancellationToken)
{
var now = _timeProvider.GetUtcNow();
var sql =
$"""
INSERT INTO {QualifiedTableName}
(tenant_id, deployment_id, started_at, created_at, updated_at, deployment_json, logs_json, events_json, metrics_json)
VALUES
(@tenant, @deploymentId, @startedAt, @createdAt, @updatedAt, CAST(@deploymentJson AS jsonb), CAST(@logsJson AS jsonb), CAST(@eventsJson AS jsonb), CAST(@metricsJson AS jsonb))
ON CONFLICT (tenant_id, deployment_id) DO UPDATE
SET started_at = EXCLUDED.started_at,
updated_at = EXCLUDED.updated_at,
deployment_json = EXCLUDED.deployment_json,
logs_json = EXCLUDED.logs_json,
events_json = EXCLUDED.events_json,
metrics_json = EXCLUDED.metrics_json
""";
await using var command = CreateCommand(sql, connection, transaction);
command.Parameters.AddWithValue("tenant", tenantId);
command.Parameters.AddWithValue("deploymentId", state.Deployment.Id);
command.Parameters.AddWithValue("startedAt", state.Deployment.StartedAt);
command.Parameters.AddWithValue("createdAt", state.Deployment.StartedAt);
command.Parameters.AddWithValue("updatedAt", now);
command.Parameters.AddWithValue("deploymentJson", JsonSerializer.Serialize(state.Deployment, SerializerOptions));
command.Parameters.AddWithValue("logsJson", JsonSerializer.Serialize(state.Logs, SerializerOptions));
command.Parameters.AddWithValue("eventsJson", JsonSerializer.Serialize(state.Events, SerializerOptions));
command.Parameters.AddWithValue("metricsJson", JsonSerializer.Serialize(state.Metrics, SerializerOptions));
await command.ExecuteNonQueryAsync(cancellationToken).ConfigureAwait(false);
}
private Task<NpgsqlConnection> OpenConnectionAsync(string tenantId, CancellationToken cancellationToken)
{
return _dataSource.OpenConnectionAsync(cancellationToken).AsTask();
}
private NpgsqlCommand CreateCommand(string sql, NpgsqlConnection connection, NpgsqlTransaction? transaction = null)
{
return new NpgsqlCommand(sql, connection, transaction)
{
CommandTimeout = _commandTimeoutSeconds,
};
}
private static T Deserialize<T>(string json)
=> JsonSerializer.Deserialize<T>(json, SerializerOptions)
?? throw new InvalidOperationException($"Failed to deserialize deployment compatibility payload for {typeof(T).Name}.");
}
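`UpsertStateAsync` above relies on Postgres `INSERT ... ON CONFLICT (tenant_id, deployment_id) DO UPDATE` keyed on the composite primary key. The same upsert shape can be exercised against in-memory SQLite, which supports the identical clause since 3.24 (illustrative sketch with a trimmed-down schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE compatibility_deployments (
        tenant_id TEXT NOT NULL,
        deployment_id TEXT NOT NULL,
        deployment_json TEXT NOT NULL,
        PRIMARY KEY (tenant_id, deployment_id)
    )
""")

UPSERT = """
    INSERT INTO compatibility_deployments (tenant_id, deployment_id, deployment_json)
    VALUES (?, ?, ?)
    ON CONFLICT (tenant_id, deployment_id) DO UPDATE
    SET deployment_json = excluded.deployment_json
"""

conn.execute(UPSERT, ("tenant-a", "dep-001", '{"status":"pending"}'))
conn.execute(UPSERT, ("tenant-a", "dep-001", '{"status":"running"}'))  # second write updates in place

row = conn.execute(
    "SELECT deployment_json FROM compatibility_deployments WHERE tenant_id = ? AND deployment_id = ?",
    ("tenant-a", "dep-001"),
).fetchone()
```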

View File

@@ -1,121 +0,0 @@
using StellaOps.JobEngine.WebService.Contracts;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Deterministic signal projections used by release-control contract adapters.
/// </summary>
public static class ReleaseControlSignalCatalog
{
private static readonly IReadOnlyDictionary<string, PromotionRiskSnapshot> RiskByRelease =
new Dictionary<string, PromotionRiskSnapshot>(StringComparer.OrdinalIgnoreCase)
{
["rel-001"] = new("production", 0, 0, 1, 96.5m, "clean"),
["rel-002"] = new("production", 1, 1, 3, 62.0m, "warning"),
["rel-003"] = new("production", 2, 1, 2, 58.0m, "blocked"),
["rel-004"] = new("dev", 0, 1, 1, 88.0m, "warning"),
["rel-005"] = new("production", 0, 0, 0, 97.0m, "clean"),
};
private static readonly IReadOnlyDictionary<string, HybridReachabilityCoverage> CoverageByRelease =
new Dictionary<string, HybridReachabilityCoverage>(StringComparer.OrdinalIgnoreCase)
{
["rel-001"] = new(100, 100, 92, 2),
["rel-002"] = new(100, 86, 41, 26),
["rel-003"] = new(100, 80, 35, 31),
["rel-004"] = new(100, 72, 0, 48),
["rel-005"] = new(100, 100, 100, 1),
};
private static readonly IReadOnlyDictionary<string, OpsDataConfidence> OpsByEnvironment =
new Dictionary<string, OpsDataConfidence>(StringComparer.OrdinalIgnoreCase)
{
["production"] = new(
"warning",
"NVD freshness and runtime ingest lag reduce decision confidence.",
71,
DateTimeOffset.Parse("2026-02-19T03:15:00Z"),
new[]
{
"feeds:nvd=warn(3h stale)",
"sbom-rescan=fail(12 digests stale)",
"reach-runtime=warn(agent degraded)",
}),
["staging"] = new(
"healthy",
"All freshness and ingest checks are within policy threshold.",
94,
DateTimeOffset.Parse("2026-02-19T03:15:00Z"),
new[]
{
"feeds=ok",
"sbom-rescan=ok",
"reach-runtime=ok",
}),
["dev"] = new(
"warning",
"Runtime evidence coverage is limited for non-prod workloads.",
78,
DateTimeOffset.Parse("2026-02-19T03:15:00Z"),
new[]
{
"feeds=ok",
"sbom-rescan=ok",
"reach-runtime=warn(low coverage)",
}),
["canary"] = new(
"healthy",
"Canary telemetry and feed freshness are green.",
90,
DateTimeOffset.Parse("2026-02-19T03:15:00Z"),
new[]
{
"feeds=ok",
"sbom-rescan=ok",
"reach-runtime=ok",
}),
};
public static PromotionRiskSnapshot GetRiskSnapshot(string releaseId, string targetEnvironment)
{
if (RiskByRelease.TryGetValue(releaseId, out var risk))
{
return string.Equals(risk.EnvironmentId, targetEnvironment, StringComparison.OrdinalIgnoreCase)
? risk
: risk with { EnvironmentId = targetEnvironment };
}
return new PromotionRiskSnapshot(targetEnvironment, 0, 0, 0, 100m, "clean");
}
public static HybridReachabilityCoverage GetCoverage(string releaseId)
{
return CoverageByRelease.TryGetValue(releaseId, out var coverage)
? coverage
: new HybridReachabilityCoverage(100, 100, 100, 1);
}
public static OpsDataConfidence GetOpsConfidence(string targetEnvironment)
{
return OpsByEnvironment.TryGetValue(targetEnvironment, out var confidence)
? confidence
: new OpsDataConfidence(
"unknown",
"No platform data-integrity signal is available for this environment.",
0,
DateTimeOffset.Parse("2026-02-19T03:15:00Z"),
new[] { "platform-signal=missing" });
}
public static ApprovalEvidencePacket BuildEvidencePacket(string approvalId, string releaseId)
{
var suffix = $"{releaseId}-{approvalId}".Replace(":", string.Empty, StringComparison.Ordinal);
return new ApprovalEvidencePacket(
DecisionDigest: $"sha256:decision-{suffix}",
PolicyDecisionDsse: $"policy-decision-{approvalId}.dsse",
SbomSnapshotId: $"sbom-snapshot-{releaseId}",
ReachabilitySnapshotId: $"reachability-snapshot-{releaseId}",
DataIntegritySnapshotId: $"ops-snapshot-{releaseId}");
}
}
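`BuildEvidencePacket` above derives deterministic artifact names from the release and approval ids, stripping `:` so the digest suffix stays identifier-safe. Sketch of that naming scheme (Python, hypothetical function names):

```python
def evidence_suffix(release_id: str, approval_id: str) -> str:
    # Join the ids and drop ':' so the resulting suffix is identifier-safe.
    return f"{release_id}-{approval_id}".replace(":", "")

def decision_digest(release_id: str, approval_id: str) -> str:
    return f"sha256:decision-{evidence_suffix(release_id, approval_id)}"
```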

View File

@@ -1,250 +0,0 @@
using System.Globalization;
using StellaOps.JobEngine.WebService.Endpoints;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Builds deterministic release dashboard snapshots from in-memory seed data.
/// </summary>
public static class ReleaseDashboardSnapshotBuilder
{
private static readonly PipelineDefinition[] PipelineDefinitions =
{
new("dev", "development", "Development", 1),
new("staging", "staging", "Staging", 2),
new("uat", "uat", "UAT", 3),
new("production", "production", "Production", 4),
};
private static readonly HashSet<string> AllowedReleaseStatuses = new(StringComparer.OrdinalIgnoreCase)
{
"draft",
"ready",
"promoting",
"deployed",
"failed",
"deprecated",
"rolled_back",
};
public static ReleaseDashboardSnapshot Build(
IReadOnlyList<ApprovalEndpoints.ApprovalDto>? approvals = null,
IReadOnlyList<ReleaseEndpoints.ManagedReleaseDto>? releases = null)
{
var releaseItems = (releases ?? ReleaseEndpoints.SeedData.Releases)
.OrderByDescending(release => release.CreatedAt)
.ThenBy(release => release.Id, StringComparer.Ordinal)
.ToArray();
var approvalItems = (approvals ?? ApprovalEndpoints.SeedData.Approvals)
.OrderBy(approval => ParseTimestamp(approval.RequestedAt))
.ThenBy(approval => approval.Id, StringComparer.Ordinal)
.ToArray();
var pendingApprovals = approvalItems
.Where(approval => string.Equals(approval.Status, "pending", StringComparison.OrdinalIgnoreCase))
.Select(approval => new PendingApprovalItem(
approval.Id,
approval.ReleaseId,
approval.ReleaseName,
approval.ReleaseVersion,
ToDisplayEnvironment(approval.SourceEnvironment),
ToDisplayEnvironment(approval.TargetEnvironment),
approval.RequestedBy,
approval.RequestedAt,
NormalizeUrgency(approval.Urgency)))
.ToArray();
var activeDeployments = releaseItems
.Where(release => string.Equals(release.Status, "deploying", StringComparison.OrdinalIgnoreCase))
.OrderByDescending(release => release.UpdatedAt)
.ThenBy(release => release.Id, StringComparer.Ordinal)
.Select((release, index) =>
{
var progress = Math.Min(90, 45 + (index * 15));
var totalTargets = Math.Max(1, release.ComponentCount);
var completedTargets = Math.Clamp(
(int)Math.Round(totalTargets * (progress / 100d), MidpointRounding.AwayFromZero),
1,
totalTargets);
return new ActiveDeploymentItem(
Id: $"dep-{release.Id}",
ReleaseId: release.Id,
ReleaseName: release.Name,
ReleaseVersion: release.Version,
Environment: ToDisplayEnvironment(release.TargetEnvironment ?? release.CurrentEnvironment ?? "staging"),
Progress: progress,
Status: "running",
StartedAt: release.UpdatedAt.ToString("O"),
CompletedTargets: completedTargets,
TotalTargets: totalTargets);
})
.ToArray();
var pipelineEnvironments = PipelineDefinitions
.Select(definition =>
{
var releaseCount = releaseItems.Count(release =>
string.Equals(NormalizeEnvironment(release.CurrentEnvironment), definition.NormalizedName, StringComparison.OrdinalIgnoreCase));
var pendingCount = pendingApprovals.Count(approval =>
string.Equals(NormalizeEnvironment(approval.TargetEnvironment), definition.NormalizedName, StringComparison.OrdinalIgnoreCase));
var hasActiveDeployment = activeDeployments.Any(deployment =>
string.Equals(NormalizeEnvironment(deployment.Environment), definition.NormalizedName, StringComparison.OrdinalIgnoreCase));
var healthStatus = hasActiveDeployment || pendingCount > 0
? "degraded"
: releaseCount > 0
? "healthy"
: "unknown";
return new PipelineEnvironmentItem(
definition.Id,
definition.NormalizedName,
definition.DisplayName,
definition.Order,
releaseCount,
pendingCount,
healthStatus);
})
.ToArray();
var pipelineConnections = PipelineDefinitions
.Skip(1)
.Select((definition, index) => new PipelineConnectionItem(
PipelineDefinitions[index].Id,
definition.Id))
.ToArray();
var recentReleases = releaseItems
.Take(10)
.Select(release => new RecentReleaseItem(
release.Id,
release.Name,
release.Version,
NormalizeReleaseStatus(release.Status),
release.CurrentEnvironment is null ? null : ToDisplayEnvironment(release.CurrentEnvironment),
release.CreatedAt.ToString("O"),
string.IsNullOrWhiteSpace(release.CreatedBy) ? "system" : release.CreatedBy,
release.ComponentCount))
.ToArray();
return new ReleaseDashboardSnapshot(
new PipelineData(pipelineEnvironments, pipelineConnections),
pendingApprovals,
activeDeployments,
recentReleases);
}
private static DateTimeOffset ParseTimestamp(string value)
{
if (DateTimeOffset.TryParse(value, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal, out var parsed))
{
return parsed;
}
return DateTimeOffset.MinValue;
}
private static string NormalizeEnvironment(string? value)
{
var normalized = value?.Trim().ToLowerInvariant() ?? string.Empty;
return normalized switch
{
"dev" => "development",
"stage" => "staging",
"prod" => "production",
_ => normalized,
};
}
private static string ToDisplayEnvironment(string? value)
{
return NormalizeEnvironment(value) switch
{
"development" => "Development",
"staging" => "Staging",
"uat" => "UAT",
"production" => "Production",
var other when string.IsNullOrWhiteSpace(other) => "Unknown",
var other => CultureInfo.InvariantCulture.TextInfo.ToTitleCase(other),
};
}
private static string NormalizeReleaseStatus(string value)
{
var normalized = value.Trim().ToLowerInvariant();
if (string.Equals(normalized, "deploying", StringComparison.OrdinalIgnoreCase))
{
return "promoting";
}
return AllowedReleaseStatuses.Contains(normalized) ? normalized : "draft";
}
private static string NormalizeUrgency(string value)
{
var normalized = value.Trim().ToLowerInvariant();
return normalized switch
{
"low" or "normal" or "high" or "critical" => normalized,
_ => "normal",
};
}
private sealed record PipelineDefinition(string Id, string NormalizedName, string DisplayName, int Order);
}
public sealed record ReleaseDashboardSnapshot(
PipelineData PipelineData,
IReadOnlyList<PendingApprovalItem> PendingApprovals,
IReadOnlyList<ActiveDeploymentItem> ActiveDeployments,
IReadOnlyList<RecentReleaseItem> RecentReleases);
public sealed record PipelineData(
IReadOnlyList<PipelineEnvironmentItem> Environments,
IReadOnlyList<PipelineConnectionItem> Connections);
public sealed record PipelineEnvironmentItem(
string Id,
string Name,
string DisplayName,
int Order,
int ReleaseCount,
int PendingCount,
string HealthStatus);
public sealed record PipelineConnectionItem(string From, string To);
public sealed record PendingApprovalItem(
string Id,
string ReleaseId,
string ReleaseName,
string ReleaseVersion,
string SourceEnvironment,
string TargetEnvironment,
string RequestedBy,
string RequestedAt,
string Urgency);
public sealed record ActiveDeploymentItem(
string Id,
string ReleaseId,
string ReleaseName,
string ReleaseVersion,
string Environment,
int Progress,
string Status,
string StartedAt,
int CompletedTargets,
int TotalTargets);
public sealed record RecentReleaseItem(
string Id,
string Name,
string Version,
string Status,
string? CurrentEnvironment,
string CreatedAt,
string CreatedBy,
int ComponentCount);

View File

@@ -1,214 +0,0 @@
using System.Collections.Concurrent;
using System.Globalization;
using StellaOps.JobEngine.WebService.Endpoints;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Tracks in-memory promotion decisions for the dashboard compatibility endpoints
/// without mutating the shared seed catalog used by deterministic tests.
/// </summary>
public sealed class ReleasePromotionDecisionStore
{
private readonly ConcurrentDictionary<string, ApprovalEndpoints.ApprovalDto> overrides =
new(StringComparer.OrdinalIgnoreCase);
public IReadOnlyList<ApprovalEndpoints.ApprovalDto> Apply(IEnumerable<ApprovalEndpoints.ApprovalDto> approvals)
{
return approvals
.Select(Apply)
.ToArray();
}
public ApprovalEndpoints.ApprovalDto Apply(ApprovalEndpoints.ApprovalDto approval)
{
return overrides.TryGetValue(approval.Id, out var updated)
? updated
: approval;
}
public bool TryApprove(
string approvalId,
string actor,
string? comment,
out ApprovalEndpoints.ApprovalDto? approval,
out string? error)
{
lock (overrides)
{
var current = ResolveCurrentApproval(approvalId);
if (current is null)
{
approval = null;
error = "promotion_not_found";
return false;
}
if (string.Equals(current.Status, "rejected", StringComparison.OrdinalIgnoreCase))
{
approval = null;
error = "promotion_not_pending";
return false;
}
if (string.Equals(current.Status, "approved", StringComparison.OrdinalIgnoreCase))
{
approval = current;
error = null;
return true;
}
var approvedAt = NextTimestamp(current);
var currentApprovals = Math.Min(current.RequiredApprovals, current.CurrentApprovals + 1);
var status = currentApprovals >= current.RequiredApprovals ? "approved" : current.Status;
approval = current with
{
CurrentApprovals = currentApprovals,
Status = status,
Actions = AppendAction(current.Actions, new ApprovalEndpoints.ApprovalActionRecordDto
{
Id = BuildActionId(current.Id, current.Actions.Count + 1),
ApprovalId = current.Id,
Action = "approved",
Actor = actor,
Comment = comment ?? string.Empty,
Timestamp = approvedAt,
}),
Approvers = ApplyApprovalToApprovers(current.Approvers, actor, approvedAt),
};
overrides[approval.Id] = approval;
error = null;
return true;
}
}
public bool TryReject(
string approvalId,
string actor,
string? comment,
out ApprovalEndpoints.ApprovalDto? approval,
out string? error)
{
lock (overrides)
{
var current = ResolveCurrentApproval(approvalId);
if (current is null)
{
approval = null;
error = "promotion_not_found";
return false;
}
if (string.Equals(current.Status, "approved", StringComparison.OrdinalIgnoreCase))
{
approval = null;
error = "promotion_not_pending";
return false;
}
if (string.Equals(current.Status, "rejected", StringComparison.OrdinalIgnoreCase))
{
approval = current;
error = null;
return true;
}
var rejectedAt = NextTimestamp(current);
approval = current with
{
Status = "rejected",
Actions = AppendAction(current.Actions, new ApprovalEndpoints.ApprovalActionRecordDto
{
Id = BuildActionId(current.Id, current.Actions.Count + 1),
ApprovalId = current.Id,
Action = "rejected",
Actor = actor,
Comment = comment ?? string.Empty,
Timestamp = rejectedAt,
}),
};
overrides[approval.Id] = approval;
error = null;
return true;
}
}
private ApprovalEndpoints.ApprovalDto? ResolveCurrentApproval(string approvalId)
{
if (overrides.TryGetValue(approvalId, out var updated))
{
return updated;
}
return ApprovalEndpoints.SeedData.Approvals
.FirstOrDefault(item => string.Equals(item.Id, approvalId, StringComparison.OrdinalIgnoreCase));
}
private static List<ApprovalEndpoints.ApproverDto> ApplyApprovalToApprovers(
List<ApprovalEndpoints.ApproverDto> approvers,
string actor,
string approvedAt)
{
var updated = approvers
.Select(item =>
{
var matchesActor =
string.Equals(item.Id, actor, StringComparison.OrdinalIgnoreCase)
|| string.Equals(item.Email, actor, StringComparison.OrdinalIgnoreCase)
|| string.Equals(item.Name, actor, StringComparison.OrdinalIgnoreCase);
return matchesActor
? item with { HasApproved = true, ApprovedAt = approvedAt }
: item;
})
.ToList();
if (updated.Any(item => item.HasApproved && string.Equals(item.ApprovedAt, approvedAt, StringComparison.Ordinal)))
{
return updated;
}
updated.Add(new ApprovalEndpoints.ApproverDto
{
Id = actor,
Name = actor,
Email = actor.Contains('@', StringComparison.Ordinal) ? actor : $"{actor}@local",
HasApproved = true,
ApprovedAt = approvedAt,
});
return updated;
}
private static List<ApprovalEndpoints.ApprovalActionRecordDto> AppendAction(
List<ApprovalEndpoints.ApprovalActionRecordDto> actions,
ApprovalEndpoints.ApprovalActionRecordDto action)
{
var updated = actions.ToList();
updated.Add(action);
return updated;
}
private static string BuildActionId(string approvalId, int index)
=> $"{approvalId}-action-{index:D2}";
private static string NextTimestamp(ApprovalEndpoints.ApprovalDto approval)
{
var latestTimestamp = approval.Actions
.Select(action => ParseTimestamp(action.Timestamp))
.Append(ParseTimestamp(approval.RequestedAt))
.Max();
return latestTimestamp.AddMinutes(1).ToString("O", CultureInfo.InvariantCulture);
}
private static DateTimeOffset ParseTimestamp(string value)
{
return DateTimeOffset.TryParse(value, CultureInfo.InvariantCulture, DateTimeStyles.AssumeUniversal, out var parsed)
? parsed
: DateTimeOffset.UnixEpoch;
}
}
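A short usage sketch of the store above (the approval ID, actor, and comment are illustrative; the seed catalog comes from `ApprovalEndpoints.SeedData`):

```csharp
// Hypothetical usage; "approval-001" and the actor are illustrative values.
var store = new ReleasePromotionDecisionStore();

if (store.TryApprove("approval-001", "alice@example.com", "LGTM", out var approval, out var error))
{
    // CurrentApprovals is incremented; Status only flips to "approved"
    // once RequiredApprovals is reached. Approving an already-approved
    // item is idempotent, and rejecting an approved one fails with
    // "promotion_not_pending".
}

// Reads merge overrides over the immutable seed data, so deterministic
// tests that consume SeedData.Approvals directly are unaffected.
var merged = store.Apply(ApprovalEndpoints.SeedData.Approvals);
```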

View File

@@ -1,123 +0,0 @@
using Microsoft.Extensions.Options;
using StellaOps.JobEngine.Infrastructure.Options;
namespace StellaOps.JobEngine.WebService.Services;
/// <summary>
/// Resolves tenant context from HTTP request headers.
/// </summary>
public sealed class TenantResolver
{
private readonly JobEngineServiceOptions _options;
private const string DefaultTenantHeader = "X-Tenant-Id";
private const string DefaultTenantQueryParam = "tenant";
public TenantResolver(IOptions<JobEngineServiceOptions> options)
{
_options = options?.Value ?? throw new ArgumentNullException(nameof(options));
}
/// <summary>
/// Resolves the tenant ID from the request headers.
/// </summary>
/// <param name="context">HTTP context.</param>
/// <returns>Tenant ID.</returns>
/// <exception cref="InvalidOperationException">Thrown when tenant header is missing or empty.</exception>
public string Resolve(HttpContext context)
{
ArgumentNullException.ThrowIfNull(context);
var headerName = _options.TenantHeader ?? DefaultTenantHeader;
if (!context.Request.Headers.TryGetValue(headerName, out var values))
{
throw new InvalidOperationException(
$"Tenant header '{headerName}' is required for Orchestrator operations.");
}
var tenantId = values.ToString();
if (string.IsNullOrWhiteSpace(tenantId))
{
throw new InvalidOperationException(
$"Tenant header '{headerName}' must contain a value.");
}
return tenantId.Trim();
}
/// <summary>
/// Resolves the tenant ID for streaming endpoints.
/// EventSource cannot set custom headers, so we allow a query string fallback.
/// </summary>
/// <param name="context">HTTP context.</param>
/// <returns>Tenant ID.</returns>
public string ResolveForStreaming(HttpContext context)
{
ArgumentNullException.ThrowIfNull(context);
if (TryResolve(context, out var tenantId) && !string.IsNullOrWhiteSpace(tenantId))
{
return tenantId;
}
if (TryResolveFromQuery(context, out tenantId) && !string.IsNullOrWhiteSpace(tenantId))
{
return tenantId;
}
var headerName = _options.TenantHeader ?? DefaultTenantHeader;
throw new InvalidOperationException(
$"Tenant header '{headerName}' or query parameter '{DefaultTenantQueryParam}' is required for Orchestrator streaming operations.");
}
/// <summary>
/// Tries to resolve the tenant ID from the request headers.
/// </summary>
/// <param name="context">HTTP context.</param>
/// <param name="tenantId">Resolved tenant ID.</param>
/// <returns>True if tenant ID was resolved; otherwise false.</returns>
public bool TryResolve(HttpContext context, out string? tenantId)
{
tenantId = null;
if (context is null)
{
return false;
}
var headerName = _options.TenantHeader ?? DefaultTenantHeader;
if (!context.Request.Headers.TryGetValue(headerName, out var values))
{
return false;
}
var value = values.ToString();
if (string.IsNullOrWhiteSpace(value))
{
return false;
}
tenantId = value.Trim();
return true;
}
private static bool TryResolveFromQuery(HttpContext context, out string? tenantId)
{
tenantId = null;
if (context is null)
{
return false;
}
var value = context.Request.Query[DefaultTenantQueryParam].ToString();
if (string.IsNullOrWhiteSpace(value))
{
return false;
}
tenantId = value.Trim();
return true;
}
}
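A minimal wiring sketch (routes and response shape are illustrative) showing the header path versus the streaming fallback:

```csharp
// Hypothetical minimal-API wiring; the routes are illustrative.
app.MapGet("/jobs", (HttpContext ctx, TenantResolver tenants) =>
{
    // Throws InvalidOperationException when the configured tenant header
    // (default "X-Tenant-Id") is missing or blank.
    var tenantId = tenants.Resolve(ctx);
    return Results.Ok(new { tenantId });
});

app.MapGet("/jobs/stream", (HttpContext ctx, TenantResolver tenants) =>
{
    // EventSource cannot set custom headers, so "?tenant=acme" is also accepted.
    var tenantId = tenants.ResolveForStreaming(ctx);
    return Results.Ok(new { tenantId });
});
```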

View File

@@ -1,56 +0,0 @@
<?xml version="1.0" ?>
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>net10.0</TargetFramework>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
<LangVersion>preview</LangVersion>
<TreatWarningsAsErrors>true</TreatWarningsAsErrors>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="Microsoft.AspNetCore.OpenApi" />
</ItemGroup>
<ItemGroup>
<ProjectReference Include="..\StellaOps.JobEngine.Core\StellaOps.JobEngine.Core.csproj" />
<ProjectReference Include="..\StellaOps.JobEngine.Infrastructure\StellaOps.JobEngine.Infrastructure.csproj" />
<ProjectReference Include="..\..\..\Telemetry\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core\StellaOps.Telemetry.Core.csproj" />
<ProjectReference Include="..\..\..\Router\__Libraries\StellaOps.Messaging\StellaOps.Messaging.csproj" />
<ProjectReference Include="..\..\..\Router\__Libraries\StellaOps.Messaging.Transport.InMemory\StellaOps.Messaging.Transport.InMemory.csproj" />
<ProjectReference Include="..\..\..\Router\__Libraries\StellaOps.Messaging.Transport.Postgres\StellaOps.Messaging.Transport.Postgres.csproj" />
<ProjectReference Include="..\..\..\Router\__Libraries\StellaOps.Messaging.Transport.Valkey\StellaOps.Messaging.Transport.Valkey.csproj" />
<ProjectReference Include="..\..\..\__Libraries\StellaOps.Metrics\StellaOps.Metrics.csproj" />
<ProjectReference Include="..\..\..\Router\__Libraries\StellaOps.Router.AspNet\StellaOps.Router.AspNet.csproj" />
<ProjectReference Include="..\..\..\Authority\StellaOps.Authority\StellaOps.Auth.ServerIntegration\StellaOps.Auth.ServerIntegration.csproj" />
<ProjectReference Include="..\..\..\__Libraries\StellaOps.Localization\StellaOps.Localization.csproj" />
</ItemGroup>
<ItemGroup>
<EmbeddedResource Include="Translations\*.json" />
</ItemGroup>
<PropertyGroup Label="StellaOpsReleaseVersion">
<Version>1.0.0-alpha1</Version>
<InformationalVersion>1.0.0-alpha1</InformationalVersion>
</PropertyGroup>
</Project>

View File

@@ -1,6 +0,0 @@
@StellaOps.Orchestrator.WebService_HostAddress = http://localhost:5151
GET {{StellaOps.Orchestrator.WebService_HostAddress}}/weatherforecast/
Accept: application/json
###

View File

@@ -1,144 +0,0 @@
using Microsoft.Extensions.Options;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
using System.Text.Json;
namespace StellaOps.JobEngine.WebService.Streaming;
/// <summary>
/// Interface for coordinating job SSE streams.
/// </summary>
public interface IJobStreamCoordinator
{
/// <summary>
/// Streams job updates via SSE until the job reaches a terminal state or timeout.
/// </summary>
Task StreamAsync(HttpContext context, string tenantId, Job initialJob, CancellationToken cancellationToken);
}
/// <summary>
/// Coordinates streaming of job state changes via Server-Sent Events.
/// </summary>
public sealed class JobStreamCoordinator : IJobStreamCoordinator
{
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly IJobRepository _jobRepository;
private readonly TimeProvider _timeProvider;
private readonly ILogger<JobStreamCoordinator> _logger;
private readonly StreamOptions _options;
public JobStreamCoordinator(
IJobRepository jobRepository,
IOptions<StreamOptions> options,
TimeProvider? timeProvider,
ILogger<JobStreamCoordinator> logger)
{
_jobRepository = jobRepository ?? throw new ArgumentNullException(nameof(jobRepository));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value.Validate();
}
public async Task StreamAsync(HttpContext context, string tenantId, Job initialJob, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(initialJob);
var response = context.Response;
SseWriter.ConfigureSseHeaders(response);
await SseWriter.WriteRetryAsync(response, _options.ReconnectDelay, cancellationToken).ConfigureAwait(false);
var lastJob = initialJob;
await SseWriter.WriteEventAsync(response, "initial", JobSnapshotPayload.FromJob(lastJob), SerializerOptions, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), lastJob.JobId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
// If already terminal, send completed and exit
if (IsTerminal(lastJob.Status))
{
await SseWriter.WriteEventAsync(response, "completed", JobSnapshotPayload.FromJob(lastJob), SerializerOptions, cancellationToken).ConfigureAwait(false);
return;
}
var startTime = _timeProvider.GetUtcNow();
using var pollTimer = new PeriodicTimer(_options.PollInterval);
using var heartbeatTimer = new PeriodicTimer(_options.HeartbeatInterval);
try
{
while (!cancellationToken.IsCancellationRequested)
{
// Check max stream duration
if (_timeProvider.GetUtcNow() - startTime > _options.MaxStreamDuration)
{
_logger.LogInformation("Job stream for {JobId} reached max duration; closing.", lastJob.JobId);
await SseWriter.WriteEventAsync(response, "timeout", new { jobId = lastJob.JobId, reason = "Max stream duration reached" }, SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
var pollTask = pollTimer.WaitForNextTickAsync(cancellationToken).AsTask();
var heartbeatTask = heartbeatTimer.WaitForNextTickAsync(cancellationToken).AsTask();
var completed = await Task.WhenAny(pollTask, heartbeatTask).ConfigureAwait(false);
if (completed == pollTask && await pollTask.ConfigureAwait(false))
{
var current = await _jobRepository.GetByIdAsync(tenantId, lastJob.JobId, cancellationToken).ConfigureAwait(false);
if (current is null)
{
_logger.LogWarning("Job {JobId} disappeared while streaming; signalling notFound event.", lastJob.JobId);
await SseWriter.WriteEventAsync(response, "notFound", new NotFoundPayload(lastJob.JobId.ToString(), "job"), SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
if (HasChanged(lastJob, current))
{
await EmitJobChangeAsync(response, lastJob, current, cancellationToken).ConfigureAwait(false);
lastJob = current;
if (IsTerminal(lastJob.Status))
{
await SseWriter.WriteEventAsync(response, "completed", JobSnapshotPayload.FromJob(lastJob), SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
}
}
else if (completed == heartbeatTask && await heartbeatTask.ConfigureAwait(false))
{
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), lastJob.JobId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
}
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
_logger.LogDebug("Job stream cancelled for job {JobId}.", lastJob.JobId);
}
}
private static bool HasChanged(Job previous, Job current)
{
return previous.Status != current.Status ||
previous.Attempt != current.Attempt ||
previous.WorkerId != current.WorkerId ||
previous.LeaseId != current.LeaseId ||
previous.Reason != current.Reason;
}
private async Task EmitJobChangeAsync(HttpResponse response, Job previous, Job current, CancellationToken cancellationToken)
{
var payload = new JobStateChangedPayload(
current.JobId,
previous.Status.ToString().ToLowerInvariant(),
current.Status.ToString().ToLowerInvariant(),
current.Attempt,
current.WorkerId,
current.Reason,
_timeProvider.GetUtcNow());
await SseWriter.WriteEventAsync(response, "stateChanged", payload, SerializerOptions, cancellationToken).ConfigureAwait(false);
}
private static bool IsTerminal(JobStatus status) =>
status is JobStatus.Succeeded or JobStatus.Failed or JobStatus.Canceled or JobStatus.TimedOut;
}
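For reference, the coordinator's output follows standard Server-Sent Events framing. A session for a job that finishes after one state change might look like the sample below (the retry value reflects `ReconnectDelay` in milliseconds; payload fields are abbreviated, and exact property names come from the payload records under `JsonSerializerDefaults.Web` camel-casing):

```
retry: 3000

event: initial
data: {"jobId":"…","status":"…",…}

event: heartbeat
data: {…}

event: stateChanged
data: {…,"…":"running","…":"succeeded",…}

event: completed
data: {"jobId":"…","status":"…",…}
```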

View File

@@ -1,308 +0,0 @@
using Microsoft.Extensions.Options;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Infrastructure.Repositories;
using System.Net.WebSockets;
using System.Text;
using System.Text.Json;
namespace StellaOps.JobEngine.WebService.Streaming;
public interface IPackRunStreamCoordinator
{
Task StreamAsync(HttpContext context, string tenantId, PackRun packRun, CancellationToken cancellationToken);
Task StreamWebSocketAsync(WebSocket socket, string tenantId, PackRun packRun, CancellationToken cancellationToken);
}
/// <summary>
/// Streams pack run status/log updates over SSE.
/// </summary>
public sealed class PackRunStreamCoordinator : IPackRunStreamCoordinator
{
private const int DefaultBatchSize = 200;
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly IPackRunRepository _packRunRepository;
private readonly IPackRunLogRepository _logRepository;
private readonly TimeProvider _timeProvider;
private readonly StreamOptions _options;
private readonly ILogger<PackRunStreamCoordinator> _logger;
public PackRunStreamCoordinator(
IPackRunRepository packRunRepository,
IPackRunLogRepository logRepository,
IOptions<StreamOptions> options,
TimeProvider? timeProvider,
ILogger<PackRunStreamCoordinator> logger)
{
_packRunRepository = packRunRepository ?? throw new ArgumentNullException(nameof(packRunRepository));
_logRepository = logRepository ?? throw new ArgumentNullException(nameof(logRepository));
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value.Validate();
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
}
public async Task StreamAsync(HttpContext context, string tenantId, PackRun packRun, CancellationToken cancellationToken)
{
var response = context.Response;
SseWriter.ConfigureSseHeaders(response);
await SseWriter.WriteRetryAsync(response, _options.ReconnectDelay, cancellationToken).ConfigureAwait(false);
var (logCount, latestSeq) = await _logRepository.GetLogStatsAsync(tenantId, packRun.PackRunId, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "initial", PackRunSnapshotPayload.From(packRun, logCount, latestSeq), SerializerOptions, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), packRun.PackRunId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
if (IsTerminal(packRun.Status))
{
await EmitCompletedAsync(response, packRun, logCount, latestSeq, cancellationToken).ConfigureAwait(false);
return;
}
var last = packRun;
var lastSeq = latestSeq;
var start = _timeProvider.GetUtcNow();
using var poll = new PeriodicTimer(_options.PollInterval);
using var heartbeat = new PeriodicTimer(_options.HeartbeatInterval);
try
{
while (!cancellationToken.IsCancellationRequested)
{
if (_timeProvider.GetUtcNow() - start > _options.MaxStreamDuration)
{
await SseWriter.WriteEventAsync(response, "timeout", new { packRunId = last.PackRunId, reason = "Max stream duration reached" }, SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
var pollTask = poll.WaitForNextTickAsync(cancellationToken).AsTask();
var hbTask = heartbeat.WaitForNextTickAsync(cancellationToken).AsTask();
var completed = await Task.WhenAny(pollTask, hbTask).ConfigureAwait(false);
if (completed == hbTask && await hbTask.ConfigureAwait(false))
{
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), packRun.PackRunId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
continue;
}
if (completed == pollTask && await pollTask.ConfigureAwait(false))
{
var current = await _packRunRepository.GetByIdAsync(tenantId, last.PackRunId, cancellationToken).ConfigureAwait(false);
if (current is null)
{
await SseWriter.WriteEventAsync(response, "notFound", new NotFoundPayload(last.PackRunId.ToString(), "pack-run"), SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
// Send new logs
var batch = await _logRepository.GetLogsAsync(tenantId, current.PackRunId, lastSeq, DefaultBatchSize, cancellationToken).ConfigureAwait(false);
if (batch.Logs.Count > 0)
{
lastSeq = batch.Logs[^1].Sequence;
// Keep a running total so snapshots and the completed event report
// the overall log count, not just the size of the latest batch.
logCount += batch.Logs.Count;
await SseWriter.WriteEventAsync(response, "logs", batch.Logs.Select(PackRunLogPayload.FromDomain), SerializerOptions, cancellationToken).ConfigureAwait(false);
}
if (HasStatusChanged(last, current))
{
await SseWriter.WriteEventAsync(response, "statusChanged", PackRunSnapshotPayload.From(current, logCount, lastSeq), SerializerOptions, cancellationToken).ConfigureAwait(false);
last = current;
if (IsTerminal(current.Status))
{
await EmitCompletedAsync(response, current, logCount, lastSeq, cancellationToken).ConfigureAwait(false);
break;
}
}
}
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
_logger.LogDebug("Pack run stream cancelled for {PackRunId}.", last.PackRunId);
}
}
public async Task StreamWebSocketAsync(WebSocket socket, string tenantId, PackRun packRun, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(socket);
var (logCount, latestSeq) = await _logRepository.GetLogStatsAsync(tenantId, packRun.PackRunId, cancellationToken).ConfigureAwait(false);
await SendAsync(socket, "initial", PackRunSnapshotPayload.From(packRun, logCount, latestSeq), cancellationToken).ConfigureAwait(false);
await SendAsync(socket, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), packRun.PackRunId.ToString()), cancellationToken).ConfigureAwait(false);
if (IsTerminal(packRun.Status))
{
await SendCompletedAsync(socket, packRun, logCount, latestSeq, cancellationToken).ConfigureAwait(false);
return;
}
var last = packRun;
var lastSeq = latestSeq;
var start = _timeProvider.GetUtcNow();
using var poll = new PeriodicTimer(_options.PollInterval);
using var heartbeat = new PeriodicTimer(_options.HeartbeatInterval);
try
{
while (!cancellationToken.IsCancellationRequested && socket.State == WebSocketState.Open)
{
if (_timeProvider.GetUtcNow() - start > _options.MaxStreamDuration)
{
await SendAsync(socket, "timeout", new { packRunId = last.PackRunId, reason = "Max stream duration reached" }, cancellationToken).ConfigureAwait(false);
break;
}
var pollTask = poll.WaitForNextTickAsync(cancellationToken).AsTask();
var hbTask = heartbeat.WaitForNextTickAsync(cancellationToken).AsTask();
var completed = await Task.WhenAny(pollTask, hbTask).ConfigureAwait(false);
if (completed == hbTask && await hbTask.ConfigureAwait(false))
{
await SendAsync(socket, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), packRun.PackRunId.ToString()), cancellationToken).ConfigureAwait(false);
continue;
}
if (completed == pollTask && await pollTask.ConfigureAwait(false))
{
var current = await _packRunRepository.GetByIdAsync(tenantId, last.PackRunId, cancellationToken).ConfigureAwait(false);
if (current is null)
{
await SendAsync(socket, "notFound", new NotFoundPayload(last.PackRunId.ToString(), "pack-run"), cancellationToken).ConfigureAwait(false);
break;
}
var batch = await _logRepository.GetLogsAsync(tenantId, current.PackRunId, lastSeq, DefaultBatchSize, cancellationToken).ConfigureAwait(false);
if (batch.Logs.Count > 0)
{
lastSeq = batch.Logs[^1].Sequence;
// Accumulate the total so status/completed payloads carry the overall
// log count rather than only the most recent batch size.
logCount += batch.Logs.Count;
await SendAsync(socket, "logs", batch.Logs.Select(PackRunLogPayload.FromDomain), cancellationToken).ConfigureAwait(false);
}
if (HasStatusChanged(last, current))
{
await SendAsync(socket, "statusChanged", PackRunSnapshotPayload.From(current, logCount, lastSeq), cancellationToken).ConfigureAwait(false);
last = current;
if (IsTerminal(current.Status))
{
await SendCompletedAsync(socket, current, logCount, lastSeq, cancellationToken).ConfigureAwait(false);
break;
}
}
}
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
_logger.LogDebug("Pack run websocket stream cancelled for {PackRunId}.", packRun.PackRunId);
}
}
private static bool HasStatusChanged(PackRun previous, PackRun current)
{
return previous.Status != current.Status || previous.Attempt != current.Attempt || previous.LeaseId != current.LeaseId;
}
private async Task EmitCompletedAsync(HttpResponse response, PackRun packRun, long logCount, long latestSequence, CancellationToken cancellationToken)
{
var durationSeconds = packRun.CompletedAt is { } completedAt
? (completedAt - (packRun.StartedAt ?? packRun.CreatedAt)).TotalSeconds
: 0;
var payload = new PackRunCompletedPayload(
PackRunId: packRun.PackRunId,
Status: packRun.Status.ToString().ToLowerInvariant(),
CompletedAt: packRun.CompletedAt ?? _timeProvider.GetUtcNow(),
DurationSeconds: durationSeconds,
LogCount: logCount,
LatestSequence: latestSequence);
await SseWriter.WriteEventAsync(response, "completed", payload, SerializerOptions, cancellationToken).ConfigureAwait(false);
}
private async Task SendCompletedAsync(WebSocket socket, PackRun packRun, long logCount, long latestSequence, CancellationToken cancellationToken)
{
var durationSeconds = packRun.CompletedAt is { } completedAt
? (completedAt - (packRun.StartedAt ?? packRun.CreatedAt)).TotalSeconds
: 0;
var payload = new PackRunCompletedPayload(
PackRunId: packRun.PackRunId,
Status: packRun.Status.ToString().ToLowerInvariant(),
CompletedAt: packRun.CompletedAt ?? _timeProvider.GetUtcNow(),
DurationSeconds: durationSeconds,
LogCount: logCount,
LatestSequence: latestSequence);
await SendAsync(socket, "completed", payload, cancellationToken).ConfigureAwait(false);
}
private static bool IsTerminal(PackRunStatus status) =>
status is PackRunStatus.Succeeded or PackRunStatus.Failed or PackRunStatus.Canceled or PackRunStatus.TimedOut;
private static async Task SendAsync(WebSocket socket, string type, object payload, CancellationToken cancellationToken)
{
var json = JsonSerializer.Serialize(new { type, data = payload }, SerializerOptions);
var buffer = Encoding.UTF8.GetBytes(json);
await socket.SendAsync(buffer, WebSocketMessageType.Text, true, cancellationToken).ConfigureAwait(false);
}
}
internal sealed record PackRunSnapshotPayload(
Guid PackRunId,
string Status,
string PackId,
string PackVersion,
int Attempt,
int MaxAttempts,
string? TaskRunnerId,
Guid? LeaseId,
DateTimeOffset CreatedAt,
DateTimeOffset? StartedAt,
DateTimeOffset? CompletedAt,
long LogCount,
long LatestSequence)
{
public static PackRunSnapshotPayload From(PackRun packRun, long logCount, long latestSequence) => new(
packRun.PackRunId,
packRun.Status.ToString().ToLowerInvariant(),
packRun.PackId,
packRun.PackVersion,
packRun.Attempt,
packRun.MaxAttempts,
packRun.TaskRunnerId,
packRun.LeaseId,
packRun.CreatedAt,
packRun.StartedAt,
packRun.CompletedAt,
logCount,
latestSequence);
}
internal sealed record PackRunLogPayload(
long Sequence,
string Level,
string Source,
string Message,
string Digest,
long SizeBytes,
DateTimeOffset Timestamp,
string? Data)
{
public static PackRunLogPayload FromDomain(PackRunLog log) => new(
log.Sequence,
log.Level.ToString().ToLowerInvariant(),
log.Source,
log.Message,
log.Digest,
log.SizeBytes,
log.Timestamp,
log.Data);
}
internal sealed record PackRunCompletedPayload(
Guid PackRunId,
string Status,
DateTimeOffset CompletedAt,
double DurationSeconds,
long LogCount,
long LatestSequence);


@@ -1,216 +0,0 @@
using Microsoft.Extensions.Options;
using StellaOps.JobEngine.Core.Domain;
using StellaOps.JobEngine.Core.Services;
using StellaOps.JobEngine.Infrastructure.Repositories;
using System.Text.Json;
namespace StellaOps.JobEngine.WebService.Streaming;
/// <summary>
/// Interface for coordinating run SSE streams.
/// </summary>
public interface IRunStreamCoordinator
{
/// <summary>
/// Streams run updates via SSE until the run completes or timeout.
/// </summary>
Task StreamAsync(HttpContext context, string tenantId, Run initialRun, CancellationToken cancellationToken);
}
/// <summary>
/// Coordinates streaming of run state changes via Server-Sent Events.
/// </summary>
public sealed class RunStreamCoordinator : IRunStreamCoordinator
{
private static readonly JsonSerializerOptions SerializerOptions = new(JsonSerializerDefaults.Web);
private readonly IRunRepository _runRepository;
private readonly IFirstSignalService _firstSignalService;
private readonly TimeProvider _timeProvider;
private readonly ILogger<RunStreamCoordinator> _logger;
private readonly StreamOptions _options;
public RunStreamCoordinator(
IRunRepository runRepository,
IFirstSignalService firstSignalService,
IOptions<StreamOptions> options,
TimeProvider? timeProvider,
ILogger<RunStreamCoordinator> logger)
{
_runRepository = runRepository ?? throw new ArgumentNullException(nameof(runRepository));
_firstSignalService = firstSignalService ?? throw new ArgumentNullException(nameof(firstSignalService));
_timeProvider = timeProvider ?? TimeProvider.System;
_logger = logger ?? throw new ArgumentNullException(nameof(logger));
_options = (options ?? throw new ArgumentNullException(nameof(options))).Value.Validate();
}
public async Task StreamAsync(HttpContext context, string tenantId, Run initialRun, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(context);
ArgumentNullException.ThrowIfNull(initialRun);
var response = context.Response;
SseWriter.ConfigureSseHeaders(response);
await SseWriter.WriteRetryAsync(response, _options.ReconnectDelay, cancellationToken).ConfigureAwait(false);
string? lastFirstSignalEtag = null;
var lastRun = initialRun;
await SseWriter.WriteEventAsync(response, "initial", RunSnapshotPayload.FromRun(lastRun), SerializerOptions, cancellationToken).ConfigureAwait(false);
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), lastRun.RunId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
lastFirstSignalEtag = await EmitFirstSignalIfUpdatedAsync(response, tenantId, lastRun.RunId, lastFirstSignalEtag, cancellationToken).ConfigureAwait(false);
// If already terminal, send completed and exit
if (IsTerminal(lastRun.Status))
{
await EmitCompletedAsync(response, lastRun, cancellationToken).ConfigureAwait(false);
return;
}
var startTime = _timeProvider.GetUtcNow();
using var pollTimer = new PeriodicTimer(_options.PollInterval);
using var heartbeatTimer = new PeriodicTimer(_options.HeartbeatInterval);
try
{
while (!cancellationToken.IsCancellationRequested)
{
// Check max stream duration
if (_timeProvider.GetUtcNow() - startTime > _options.MaxStreamDuration)
{
_logger.LogInformation("Run stream for {RunId} reached max duration; closing.", lastRun.RunId);
await SseWriter.WriteEventAsync(response, "timeout", new { runId = lastRun.RunId, reason = "Max stream duration reached" }, SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
var pollTask = pollTimer.WaitForNextTickAsync(cancellationToken).AsTask();
var heartbeatTask = heartbeatTimer.WaitForNextTickAsync(cancellationToken).AsTask();
var completed = await Task.WhenAny(pollTask, heartbeatTask).ConfigureAwait(false);
if (completed == pollTask && await pollTask.ConfigureAwait(false))
{
var current = await _runRepository.GetByIdAsync(tenantId, lastRun.RunId, cancellationToken).ConfigureAwait(false);
if (current is null)
{
_logger.LogWarning("Run {RunId} disappeared while streaming; signalling notFound event.", lastRun.RunId);
await SseWriter.WriteEventAsync(response, "notFound", new NotFoundPayload(lastRun.RunId.ToString(), "run"), SerializerOptions, cancellationToken).ConfigureAwait(false);
break;
}
lastFirstSignalEtag = await EmitFirstSignalIfUpdatedAsync(response, tenantId, current.RunId, lastFirstSignalEtag, cancellationToken).ConfigureAwait(false);
if (HasChanged(lastRun, current))
{
await EmitProgressAsync(response, current, cancellationToken).ConfigureAwait(false);
lastRun = current;
if (IsTerminal(lastRun.Status))
{
await EmitCompletedAsync(response, lastRun, cancellationToken).ConfigureAwait(false);
break;
}
}
}
else if (completed == heartbeatTask && await heartbeatTask.ConfigureAwait(false))
{
await SseWriter.WriteEventAsync(response, "heartbeat", HeartbeatPayload.Create(_timeProvider.GetUtcNow(), lastRun.RunId.ToString()), SerializerOptions, cancellationToken).ConfigureAwait(false);
}
}
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
_logger.LogDebug("Run stream cancelled for run {RunId}.", lastRun.RunId);
}
}
private static bool HasChanged(Run previous, Run current)
{
return previous.Status != current.Status ||
previous.CompletedJobs != current.CompletedJobs ||
previous.SucceededJobs != current.SucceededJobs ||
previous.FailedJobs != current.FailedJobs ||
previous.TotalJobs != current.TotalJobs;
}
private async Task EmitProgressAsync(HttpResponse response, Run run, CancellationToken cancellationToken)
{
var progressPercent = run.TotalJobs > 0
? Math.Round((double)run.CompletedJobs / run.TotalJobs * 100, 2)
: 0;
var payload = new RunProgressPayload(
run.RunId,
run.Status.ToString().ToLowerInvariant(),
run.TotalJobs,
run.CompletedJobs,
run.SucceededJobs,
run.FailedJobs,
progressPercent);
await SseWriter.WriteEventAsync(response, "progress", payload, SerializerOptions, cancellationToken).ConfigureAwait(false);
}
private async Task EmitCompletedAsync(HttpResponse response, Run run, CancellationToken cancellationToken)
{
var durationSeconds = run.CompletedAt.HasValue && run.StartedAt.HasValue
? (run.CompletedAt.Value - run.StartedAt.Value).TotalSeconds
: run.CompletedAt.HasValue
? (run.CompletedAt.Value - run.CreatedAt).TotalSeconds
: 0;
var payload = new RunCompletedPayload(
run.RunId,
run.Status.ToString().ToLowerInvariant(),
run.TotalJobs,
run.SucceededJobs,
run.FailedJobs,
run.CompletedAt ?? _timeProvider.GetUtcNow(),
durationSeconds);
await SseWriter.WriteEventAsync(response, "completed", payload, SerializerOptions, cancellationToken).ConfigureAwait(false);
}
private async Task<string?> EmitFirstSignalIfUpdatedAsync(
HttpResponse response,
string tenantId,
Guid runId,
string? lastFirstSignalEtag,
CancellationToken cancellationToken)
{
try
{
var result = await _firstSignalService
.GetFirstSignalAsync(runId, tenantId, lastFirstSignalEtag, cancellationToken)
.ConfigureAwait(false);
if (result.Status != FirstSignalResultStatus.Found || result.Signal is null || string.IsNullOrWhiteSpace(result.ETag))
{
return lastFirstSignalEtag;
}
await SseWriter.WriteEventAsync(
response,
"first_signal",
new { runId, signal = result.Signal, etag = result.ETag },
SerializerOptions,
cancellationToken)
.ConfigureAwait(false);
return result.ETag;
}
catch (OperationCanceledException) when (cancellationToken.IsCancellationRequested)
{
return lastFirstSignalEtag;
}
catch (Exception ex)
{
_logger.LogWarning(ex, "Failed to emit first_signal event for run {RunId}.", runId);
return lastFirstSignalEtag;
}
}
private static bool IsTerminal(RunStatus status) =>
status is RunStatus.Succeeded or RunStatus.PartiallySucceeded or RunStatus.Failed or RunStatus.Canceled;
}
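The coordinator's polling loop only emits a `progress` event when `HasChanged` detects a counter or status delta, and it guards the percent computation against runs with zero jobs. The two checks can be sketched in Python (dict-based stand-ins for the `Run` record; names are illustrative):

```python
def progress_percent(total_jobs, completed_jobs):
    """Mirror the progress computation: percent complete rounded to
    2 places, returning 0 when the run has no jobs to avoid div-by-zero."""
    if total_jobs <= 0:
        return 0.0
    return round(completed_jobs / total_jobs * 100, 2)

def has_changed(prev, cur):
    """Mirror the change detection: any delta in status or a job
    counter makes the run worth re-emitting to stream clients."""
    fields = ("status", "completed", "succeeded", "failed", "total")
    return any(prev[f] != cur[f] for f in fields)

a = {"status": "running", "completed": 1, "succeeded": 1, "failed": 0, "total": 3}
b = dict(a, completed=2, succeeded=2)
print(progress_percent(3, 1))  # 33.33
print(has_changed(a, b))       # True
```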


@@ -1,86 +0,0 @@
using System.Text.Json;
namespace StellaOps.JobEngine.WebService.Streaming;
/// <summary>
/// Helper for writing Server-Sent Events to HTTP responses.
/// </summary>
internal static class SseWriter
{
/// <summary>
/// Writes the retry directive to the SSE stream.
/// </summary>
public static async Task WriteRetryAsync(HttpResponse response, TimeSpan reconnectDelay, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(response);
var milliseconds = (int)Math.Clamp(reconnectDelay.TotalMilliseconds, 1, int.MaxValue);
await response.WriteAsync($"retry: {milliseconds}\r\n\r\n", cancellationToken).ConfigureAwait(false);
await response.Body.FlushAsync(cancellationToken).ConfigureAwait(false);
}
/// <summary>
/// Writes a named event with JSON payload to the SSE stream.
/// </summary>
public static async Task WriteEventAsync(
HttpResponse response,
string eventName,
object payload,
JsonSerializerOptions serializerOptions,
CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(response);
ArgumentNullException.ThrowIfNull(payload);
ArgumentNullException.ThrowIfNull(serializerOptions);
if (string.IsNullOrWhiteSpace(eventName))
{
throw new ArgumentException("Event name must be provided.", nameof(eventName));
}
await response.WriteAsync($"event: {eventName}\r\n", cancellationToken).ConfigureAwait(false);
var json = JsonSerializer.Serialize(payload, serializerOptions);
using var reader = new StringReader(json);
string? line;
while ((line = reader.ReadLine()) is not null)
{
await response.WriteAsync($"data: {line}\r\n", cancellationToken).ConfigureAwait(false);
}
await response.WriteAsync("\r\n", cancellationToken).ConfigureAwait(false);
await response.Body.FlushAsync(cancellationToken).ConfigureAwait(false);
}
/// <summary>
/// Writes a comment to the SSE stream (useful for keep-alives).
/// </summary>
public static async Task WriteCommentAsync(HttpResponse response, string comment, CancellationToken cancellationToken)
{
ArgumentNullException.ThrowIfNull(response);
if (!string.IsNullOrEmpty(comment))
{
await response.WriteAsync($": {comment}\r\n\r\n", cancellationToken).ConfigureAwait(false);
}
else
{
await response.WriteAsync(":\r\n\r\n", cancellationToken).ConfigureAwait(false);
}
await response.Body.FlushAsync(cancellationToken).ConfigureAwait(false);
}
/// <summary>
/// Configures HTTP response headers for SSE streaming.
/// </summary>
public static void ConfigureSseHeaders(HttpResponse response)
{
response.StatusCode = StatusCodes.Status200OK;
response.Headers.CacheControl = "no-store";
response.Headers["X-Accel-Buffering"] = "no";
response.Headers["Connection"] = "keep-alive";
response.ContentType = "text/event-stream";
}
}
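On the wire, `WriteEventAsync` frames each event as an `event:` line, one `data:` line per line of serialized JSON, and a blank-line terminator. A self-contained Python sketch of that framing (single-purpose helper, not part of the service):

```python
import json

def sse_event(event_name, payload):
    """Frame a named Server-Sent Event: an `event:` line, one `data:`
    line per line of the JSON payload, then a blank-line terminator."""
    lines = [f"event: {event_name}\r\n"]
    for line in json.dumps(payload).splitlines():
        lines.append(f"data: {line}\r\n")
    lines.append("\r\n")
    return "".join(lines)

frame = sse_event("heartbeat", {"ts": "2026-04-08T12:00:00Z", "id": "run-1"})
print(frame)
```

The SSE specification permits `\r\n` line endings, so clients built on `EventSource` consume this framing unchanged.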


@@ -1,67 +0,0 @@
namespace StellaOps.JobEngine.WebService.Streaming;
/// <summary>
/// Configuration options for SSE streaming.
/// </summary>
public sealed class StreamOptions
{
/// <summary>
/// Configuration section name.
/// </summary>
public const string SectionName = "Orchestrator:Stream";
private static readonly TimeSpan MinimumInterval = TimeSpan.FromMilliseconds(100);
private static readonly TimeSpan MinimumReconnectDelay = TimeSpan.FromMilliseconds(500);
/// <summary>
/// How often to poll for state changes.
/// </summary>
public TimeSpan PollInterval { get; set; } = TimeSpan.FromSeconds(2);
/// <summary>
/// How often to send heartbeat events.
/// </summary>
public TimeSpan HeartbeatInterval { get; set; } = TimeSpan.FromSeconds(15);
/// <summary>
/// Recommended reconnect delay for clients.
/// </summary>
public TimeSpan ReconnectDelay { get; set; } = TimeSpan.FromSeconds(5);
/// <summary>
/// Maximum duration for a single stream session.
/// </summary>
public TimeSpan MaxStreamDuration { get; set; } = TimeSpan.FromMinutes(30);
/// <summary>
/// Validates the options and returns this instance.
/// </summary>
public StreamOptions Validate()
{
if (PollInterval < MinimumInterval)
{
throw new ArgumentOutOfRangeException(nameof(PollInterval), PollInterval,
"Poll interval must be at least 100ms.");
}
if (HeartbeatInterval < MinimumInterval)
{
throw new ArgumentOutOfRangeException(nameof(HeartbeatInterval), HeartbeatInterval,
"Heartbeat interval must be at least 100ms.");
}
if (ReconnectDelay < MinimumReconnectDelay)
{
throw new ArgumentOutOfRangeException(nameof(ReconnectDelay), ReconnectDelay,
"Reconnect delay must be at least 500ms.");
}
if (MaxStreamDuration < TimeSpan.FromMinutes(1))
{
throw new ArgumentOutOfRangeException(nameof(MaxStreamDuration), MaxStreamDuration,
"Max stream duration must be at least 1 minute.");
}
return this;
}
}
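`Validate` enforces four lower bounds: 100 ms for the poll and heartbeat intervals, 500 ms for the reconnect delay, and 1 minute for the max stream duration. A compact Python sketch of the same bounds check (timedelta-based, names hypothetical):

```python
from datetime import timedelta

# Lower bounds mirroring StreamOptions.Validate.
BOUNDS = {
    "PollInterval": timedelta(milliseconds=100),
    "HeartbeatInterval": timedelta(milliseconds=100),
    "ReconnectDelay": timedelta(milliseconds=500),
    "MaxStreamDuration": timedelta(minutes=1),
}

def validate(options):
    """Raise when any configured interval falls below its minimum."""
    for name, minimum in BOUNDS.items():
        if options[name] < minimum:
            raise ValueError(f"{name} must be at least {minimum}.")
    return options

defaults = {
    "PollInterval": timedelta(seconds=2),
    "HeartbeatInterval": timedelta(seconds=15),
    "ReconnectDelay": timedelta(seconds=5),
    "MaxStreamDuration": timedelta(minutes=30),
}
validate(defaults)  # the shipped defaults satisfy every bound
```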


@@ -1,125 +0,0 @@
using StellaOps.JobEngine.Core.Domain;
using System.Text.Json.Serialization;
namespace StellaOps.JobEngine.WebService.Streaming;
/// <summary>
/// Heartbeat event payload.
/// </summary>
public sealed record HeartbeatPayload(
[property: JsonPropertyName("ts")] DateTimeOffset Timestamp,
[property: JsonPropertyName("id")] string? Id)
{
public static HeartbeatPayload Create(DateTimeOffset timestamp, string? id = null) => new(timestamp, id);
}
/// <summary>
/// Job snapshot event payload.
/// </summary>
public sealed record JobSnapshotPayload(
[property: JsonPropertyName("jobId")] Guid JobId,
[property: JsonPropertyName("runId")] Guid? RunId,
[property: JsonPropertyName("jobType")] string JobType,
[property: JsonPropertyName("status")] string Status,
[property: JsonPropertyName("attempt")] int Attempt,
[property: JsonPropertyName("workerId")] string? WorkerId,
[property: JsonPropertyName("createdAt")] DateTimeOffset CreatedAt,
[property: JsonPropertyName("scheduledAt")] DateTimeOffset? ScheduledAt,
[property: JsonPropertyName("leasedAt")] DateTimeOffset? LeasedAt,
[property: JsonPropertyName("completedAt")] DateTimeOffset? CompletedAt,
[property: JsonPropertyName("reason")] string? Reason)
{
public static JobSnapshotPayload FromJob(Job job) => new(
job.JobId,
job.RunId,
job.JobType,
job.Status.ToString().ToLowerInvariant(),
job.Attempt,
job.WorkerId,
job.CreatedAt,
job.ScheduledAt,
job.LeasedAt,
job.CompletedAt,
job.Reason);
}
/// <summary>
/// Job state change event payload.
/// </summary>
public sealed record JobStateChangedPayload(
[property: JsonPropertyName("jobId")] Guid JobId,
[property: JsonPropertyName("previousStatus")] string PreviousStatus,
[property: JsonPropertyName("currentStatus")] string CurrentStatus,
[property: JsonPropertyName("attempt")] int Attempt,
[property: JsonPropertyName("workerId")] string? WorkerId,
[property: JsonPropertyName("reason")] string? Reason,
[property: JsonPropertyName("changedAt")] DateTimeOffset ChangedAt);
/// <summary>
/// Run snapshot event payload.
/// </summary>
public sealed record RunSnapshotPayload(
[property: JsonPropertyName("runId")] Guid RunId,
[property: JsonPropertyName("sourceId")] Guid SourceId,
[property: JsonPropertyName("runType")] string RunType,
[property: JsonPropertyName("status")] string Status,
[property: JsonPropertyName("totalJobs")] int TotalJobs,
[property: JsonPropertyName("completedJobs")] int CompletedJobs,
[property: JsonPropertyName("succeededJobs")] int SucceededJobs,
[property: JsonPropertyName("failedJobs")] int FailedJobs,
[property: JsonPropertyName("createdAt")] DateTimeOffset CreatedAt,
[property: JsonPropertyName("startedAt")] DateTimeOffset? StartedAt,
[property: JsonPropertyName("completedAt")] DateTimeOffset? CompletedAt)
{
public static RunSnapshotPayload FromRun(Run run) => new(
run.RunId,
run.SourceId,
run.RunType,
run.Status.ToString().ToLowerInvariant(),
run.TotalJobs,
run.CompletedJobs,
run.SucceededJobs,
run.FailedJobs,
run.CreatedAt,
run.StartedAt,
run.CompletedAt);
}
/// <summary>
/// Run progress update event payload.
/// </summary>
public sealed record RunProgressPayload(
[property: JsonPropertyName("runId")] Guid RunId,
[property: JsonPropertyName("status")] string Status,
[property: JsonPropertyName("totalJobs")] int TotalJobs,
[property: JsonPropertyName("completedJobs")] int CompletedJobs,
[property: JsonPropertyName("succeededJobs")] int SucceededJobs,
[property: JsonPropertyName("failedJobs")] int FailedJobs,
[property: JsonPropertyName("progressPercent")] double ProgressPercent);
/// <summary>
/// Run completed event payload.
/// </summary>
public sealed record RunCompletedPayload(
[property: JsonPropertyName("runId")] Guid RunId,
[property: JsonPropertyName("status")] string Status,
[property: JsonPropertyName("totalJobs")] int TotalJobs,
[property: JsonPropertyName("succeededJobs")] int SucceededJobs,
[property: JsonPropertyName("failedJobs")] int FailedJobs,
[property: JsonPropertyName("completedAt")] DateTimeOffset CompletedAt,
[property: JsonPropertyName("durationSeconds")] double DurationSeconds);
/// <summary>
/// Not found event payload.
/// </summary>
public sealed record NotFoundPayload(
[property: JsonPropertyName("id")] string Id,
[property: JsonPropertyName("type")] string Type);
/// <summary>
/// Error event payload.
/// </summary>
public sealed record ErrorPayload(
[property: JsonPropertyName("code")] string Code,
[property: JsonPropertyName("message")] string Message);
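Because every payload field above carries a `JsonPropertyName` attribute, clients see camelCase keys (`runId`, `progressPercent`, and so on). A small client-side sketch that parses the service's event/data framing back into `(event, payload)` pairs, skipping `retry:` directives (helper names are illustrative):

```python
import json

def parse_sse(stream_text):
    """Parse event/data framed SSE text into (event_name, payload)
    pairs; retry directives and comment lines are ignored."""
    events, name, data = [], None, []
    for raw in stream_text.splitlines():
        line = raw.rstrip("\r")
        if line.startswith("event: "):
            name = line[len("event: "):]
        elif line.startswith("data: "):
            data.append(line[len("data: "):])
        elif line == "" and name is not None:
            events.append((name, json.loads("\n".join(data))))
            name, data = None, []
    return events

sample = (
    "retry: 5000\r\n\r\n"
    'event: progress\r\ndata: {"runId": "r1", "progressPercent": 50.0}\r\n\r\n'
)
print(parse_sse(sample))  # [('progress', {'runId': 'r1', 'progressPercent': 50.0})]
```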


@@ -1,16 +0,0 @@
# StellaOps.JobEngine.WebService Task Board
This board mirrors active sprint tasks for this module.
Source of truth: `docs-archived/implplan/2025-12-29-csproj-audit/SPRINT_20251229_049_BE_csproj_audit_maint_tests.md`.
| Task ID | Status | Notes |
| --- | --- | --- |
| SPRINT_20260405_011-XPORT | DONE | `docs/implplan/SPRINT_20260405_011___Libraries_transport_pooling_and_attribution_hardening.md`: named the deployment compatibility PostgreSQL datasource path for runtime attribution and pooling. |
| SPRINT_20260323_001-TASK-002 | DONE | Sprint `docs-archived/implplan/2026-03-31-completed-sprints/SPRINT_20260323_001_BE_release_api_proxy_and_endpoints.md`: deployment monitoring compatibility endpoints under `/api/v1/release-orchestrator/deployments/*` were verified as implemented and reachable. |
| SPRINT_20260323_001-TASK-003 | DONE | Sprint `docs-archived/implplan/2026-03-31-completed-sprints/SPRINT_20260323_001_BE_release_api_proxy_and_endpoints.md`: evidence compatibility endpoints now verify hashes against deterministic raw payloads and export stable offline bundles. |
| SPRINT_20260323_001-TASK-005 | DONE | Sprint `docs-archived/implplan/2026-03-31-completed-sprints/SPRINT_20260323_001_BE_release_api_proxy_and_endpoints.md`: dashboard approval/rejection endpoints now persist in-memory promotion decisions per app instance for Console compatibility flows. |
| U-002-ORCH-DEADLETTER | DOING | Sprint `docs/implplan/SPRINT_20260218_004_Platform_local_setup_usability_hardening.md`: add/fix deadletter API behavior used by console actions (including export route) and validate local setup usability paths. |
| AUDIT-0425-M | DONE | Revalidated 2026-01-07; maintainability audit for StellaOps.JobEngine.WebService. |
| AUDIT-0425-T | DONE | Revalidated 2026-01-07; test coverage audit for StellaOps.JobEngine.WebService. |
| AUDIT-0425-A | TODO | Revalidated 2026-01-07 (open findings). |
| REMED-06 | DONE | SOLID review notes captured for SPRINT_20260130_002. |


@@ -1,282 +0,0 @@
{
"_meta": { "locale": "en-US", "namespace": "orchestrator", "version": "1.0" },
"orchestrator.job.list_description": "Return a paginated list of orchestration jobs for the calling tenant, optionally filtered by job type, status, run ID, or project ID. Each record includes the job type, current status, attempt count, payload digest, and scheduling metadata.",
"orchestrator.job.get_description": "Return the full state record for the specified job including current status, payload, lease information, and scheduling timestamps. Returns 404 when the job does not exist in the tenant.",
"orchestrator.job.get_detail_description": "Return the full payload and output artifacts for the specified job, including the raw JSON payload and all artifacts produced during execution. Returns 404 when the job does not exist in the tenant.",
"orchestrator.job.get_summary_description": "Return an aggregate summary of job counts grouped by status for the calling tenant, optionally scoped to a specific run ID. Used by dashboards to render job status breakdowns without fetching individual job records.",
"orchestrator.job.get_by_idempotency_key_description": "Look up a job by its idempotency key, returning the full job record if found. Returns 404 when no job with the given key exists for the tenant. Used by producers to check for duplicate submissions before scheduling new work.",
"orchestrator.job.error.idempotency_key_required": "Idempotency key is required.",
"orchestrator.run.list_description": "Return a paginated list of orchestration runs for the calling tenant, optionally filtered by project ID or status. Each record includes the run type, current aggregate status, job counts, and scheduling metadata.",
"orchestrator.run.get_description": "Return the full state record for the specified run including current aggregate status, job counts by status, duration, and associated project. Returns 404 when the run does not exist in the tenant.",
"orchestrator.run.get_jobs_description": "Return a paginated list of all jobs associated with the specified run, including their current status, job type, and scheduling timestamps. Returns 404 when the run does not exist in the tenant.",
"orchestrator.run.get_summary_description": "Return an aggregate summary of job counts by status for the specified run. Used by dashboards to render job status breakdowns. Returns 404 when the run does not exist in the tenant.",
"orchestrator.approval.list_description": "Return a paginated list of manual approval requests for the calling tenant, optionally filtered by status, run ID, or project ID. Each record includes the approval type, current status, requestor ID, and lifecycle timestamps.",
"orchestrator.approval.get_description": "Return the full details of the specified approval request including current status, approver history, associated run and job context, and any attached justification. Returns 404 when the approval does not exist in the tenant.",
"orchestrator.approval.create_description": "Create a new manual approval gate request for the specified run or job, blocking execution until the approval is either granted or rejected. The request captures the requesting actor, optional justification, and approver requirements from the configured gate policy.",
"orchestrator.approval.approve_description": "Grant approval for a pending approval request, optionally providing a comment. Transitions the request to the approved state and unblocks the associated run or job for continued execution. Returns 409 when the request is not in a pending state.",
"orchestrator.approval.reject_description": "Reject a pending approval request, providing a mandatory reason that is persisted in the audit trail. Transitions the request to the rejected terminal state and blocks the associated run from continuing.",
"orchestrator.approval.cancel_description": "Cancel an open approval request. Only the original requestor or a tenant administrator may cancel. Returns 403 when called by an unauthorized actor and 400 when the request is already in a terminal state.",
"orchestrator.release.list_description": "Return a cursor-paginated list of release orchestration runs for the calling tenant, optionally filtered by project, environment, or status. Each record includes the release version, target environment, current status, and lifecycle timestamps.",
"orchestrator.release.get_description": "Return the full state record for a release run including current status, associated jobs, environment targets, promotion chain, and audit trail of lifecycle actions. Returns 404 when the release run does not exist.",
"orchestrator.release.create_description": "Create a new release orchestration run for the specified project and target environment. The run is created in Pending state and becomes eligible for job scheduling once environment promotion policy is evaluated.",
"orchestrator.release.approve_description": "Grant approval for a pending release gate, optionally with a comment. Transitions the gate to approved state and unblocks the release for promotion to the next environment.",
"orchestrator.release.reject_description": "Reject a pending release gate, providing a mandatory reason. Transitions the gate to rejected terminal state and prevents promotion of the associated release.",
"orchestrator.release.promote_description": "Trigger promotion of an approved release to its next configured environment, recording the promoting actor and timestamp.",
"orchestrator.release.rollback_description": "Initiate rollback of a failed or degraded release, scheduling recovery jobs for the affected environment. Only allowed when the release run is in failed, warning, or degraded status.",
"orchestrator.release.cancel_description": "Cancel an in-progress release run, sending a cancellation signal to active workers and transitioning the run to the canceled terminal state.",
"orchestrator.release.list_gates_description": "Return the list of configured release gates for the specified run, including their current evaluation status, required approvers, and any recorded decisions.",
"orchestrator.release.list_actions_description": "Return the ordered list of lifecycle actions recorded against the specified release run, including actor ID, action type, and timestamp.",
"orchestrator.release.list_events_description": "Return the event stream for the specified release run, including all system and user-generated events with their payloads and timestamps.",
"orchestrator.release.get_dashboard_description": "Return a dashboard-optimised aggregate view of the specified release run, including current status, environment promotion progress, gate evaluation counts, and SLO metrics.",
"orchestrator.release.get_summary_description": "Return a concise summary of the specified release run for use in list views and notifications, including current status, target environment, and key timestamps.",
"orchestrator.release.export_description": "Export the full audit record of the specified release run as a structured JSON document suitable for compliance reporting and external archiving.",
"orchestrator.release.error.rollback_only_on_failure": "Rollback is only allowed when run status is failed/warning/degraded.",
"orchestrator.pack_run.schedule_description": "Schedule a new pack run by enqueuing the specified pack version for execution. The run is created in Pending state and becomes claimable once the scheduler evaluates its priority and quota constraints. Returns 409 if quota is exhausted.",
"orchestrator.pack_run.get_description": "Return the full state record for the specified pack run including current status, pack version reference, scheduled and started timestamps, worker assignment, and lease expiry. Returns 404 when the pack run does not exist in the tenant.",
"orchestrator.pack_run.list_description": "Return a cursor-paginated list of pack runs for the calling tenant, optionally filtered by pack name, version, status, and creation time window. Each record includes scheduling metadata and current lifecycle state.",
"orchestrator.pack_run.get_manifest_description": "Return the manifest for the specified pack run including log line counts by severity, execution duration, exit code, and final status. Used by CI and audit systems to assess run outcomes without retrieving individual log lines.",
"orchestrator.pack_run.claim_description": "Atomically claim the next available pack run for the calling task runner identity, acquiring an exclusive time-limited lease. Returns 204 when no pack runs are available. Must be called by task runner workers, not by human principals.",
"orchestrator.pack_run.heartbeat_description": "Extend the execution lease on a claimed pack run to prevent it from being reclaimed due to timeout. Must be called before the current lease expiry; returns 409 if the lease ID does not match or has expired.",
"orchestrator.pack_run.start_description": "Transition the specified pack run from Claimed to Running state, recording the actual start timestamp and worker identity. Must be called after claiming but before appending log output. Returns 409 on lease mismatch.",
"orchestrator.pack_run.complete_description": "Mark the specified pack run as succeeded or failed, releasing the lease and recording the exit code, duration, and final log statistics. Artifact references produced by the run may be included in the completion payload.",
"orchestrator.pack_run.append_logs_description": "Append a batch of log lines to the specified pack run. Log lines are stored with sequence numbers for ordered replay and are streamed in real time to connected SSE/WebSocket clients. Returns 409 on lease mismatch.",
"orchestrator.pack_run.get_logs_description": "Return a cursor-paginated slice of log lines for the specified pack run, optionally filtered by minimum severity level. Log lines are returned in emission order. The cursor allows efficient incremental polling without re-fetching prior lines.",
"orchestrator.pack_run.cancel_description": "Request cancellation of the specified pack run. A cancellation signal is sent to the active worker via the lease mechanism; the run transitions to Canceled state once the worker acknowledges or the lease expires. Returns 400 for terminal-state runs.",
"orchestrator.pack_run.retry_description": "Schedule a new pack run using the same pack version and input as the specified failed or canceled run. Returns the new pack run ID. The original run record is retained and linked to the retry via correlation ID.",
"orchestrator.pack_run.error.pack_id_required": "PackId is required.",
"orchestrator.pack_run.error.pack_version_required": "PackVersion is required.",
"orchestrator.pack_run.error.project_id_required": "ProjectId is required.",
"orchestrator.pack_run.error.quota_exceeded": "Pack run quota exceeded.",
"orchestrator.pack_run.error.task_runner_id_required": "TaskRunnerId is required.",
"orchestrator.quota.list_description": "Return the list of quota configurations for the calling tenant, including current token bucket state, active run counts, and hourly usage metrics. Used by operators to monitor rate-limiting status and plan capacity.",
"orchestrator.quota.get_description": "Return the full quota configuration for the specified quota identifier, including limits, current token state, refill rate, and usage history. Returns 404 when the quota does not exist.",
"orchestrator.quota.create_description": "Create a new quota rule for the specified job type, configuring the maximum active count, hourly limit, burst capacity, and token refill rate. Quota rules control the rate at which jobs of the given type are admitted for execution.",
"orchestrator.quota.update_description": "Update the limits and configuration of an existing quota rule. Changes take effect immediately. The token bucket state is not reset when limits are changed.",
"orchestrator.quota.pause_description": "Pause a quota, preventing new jobs of the associated type from being admitted. Requires a mandatory pause reason that is recorded in the audit trail.",
"orchestrator.quota.resume_description": "Resume a paused quota, re-enabling job admission for the associated job type. The token bucket state is restored to its pre-pause level.",
"orchestrator.quota.reset_description": "Reset the token bucket state for the specified quota, restoring it to full capacity. Used by operators after clearing a burst condition or resolving a queue backlog.",
"orchestrator.quota.delete_description": "Delete the specified quota configuration. If no quota exists for a job type, the system applies the global default limits.",
"orchestrator.quota.error.max_active_positive": "MaxActive must be positive.",
"orchestrator.quota.error.max_per_hour_positive": "MaxPerHour must be positive.",
"orchestrator.quota.error.burst_capacity_positive": "BurstCapacity must be positive.",
"orchestrator.quota.error.refill_rate_positive": "RefillRate must be positive.",
"orchestrator.quota.error.pause_reason_required": "Reason is required when pausing a quota.",
"orchestrator.quota_governance.list_description": "Return all quota governance rules for the calling tenant, including priority ordering, condition expressions, limit overrides, and activation schedules. Used by capacity planning tools to audit current rate-limiting policies.",
"orchestrator.quota_governance.get_description": "Return the full configuration of the specified quota governance rule, including condition expression, limit overrides, and effective period. Returns 404 when the rule does not exist.",
"orchestrator.quota_governance.create_description": "Create a new quota governance rule that applies limit overrides when the specified condition is satisfied. Rules are evaluated in priority order; the first matching rule wins.",
"orchestrator.quota_governance.update_description": "Update the condition expression, limit overrides, or activation schedule of an existing quota governance rule. Changes take effect on the next evaluation cycle.",
"orchestrator.quota_governance.delete_description": "Delete the specified quota governance rule. The deletion takes effect immediately; any active override from this rule is withdrawn.",
"orchestrator.quota_governance.evaluate_description": "Evaluate all governance rules for the specified tenant and job type, returning the winning rule and resulting limit overrides. Used to preview the effect of governance rules before committing changes.",
"orchestrator.quota_governance.priority_description": "Return the current priority ordering of all governance rules for the calling tenant. Rules are evaluated in this order on every job admission.",
"orchestrator.quota_governance.reorder_description": "Update the priority ordering of governance rules. Accepts a complete ordered list of rule identifiers; the provided order replaces the existing priority sequence.",
"orchestrator.quota_governance.snapshot_description": "Return a point-in-time snapshot of the current governance state for the calling tenant, including active rule evaluations, effective limits, and a list of any overrides currently in force.",
"orchestrator.quota_governance.simulate_description": "Simulate the governance rule evaluation for a hypothetical scenario without affecting live state. Used by policy authors to validate rule conditions and preview limit overrides before deployment.",
"orchestrator.quota_governance.audit_description": "Return the governance audit log for the calling tenant, listing all rule creation, update, deletion, and evaluation events with actor IDs and timestamps.",
"orchestrator.quota_governance.error.amount_positive": "Amount must be positive.",
"orchestrator.quota_governance.error.invalid_strategy": "Invalid strategy: {0}. Valid values are: increment, decrement, set.",
"orchestrator.health.liveness_description": "Liveness probe for the Orchestrator service. Returns HTTP 200 when the process is alive. Used by container orchestrators to determine when to restart the service.",
"orchestrator.health.readiness_description": "Readiness probe for the Orchestrator service. Verifies that the database connection is available before reporting ready. Returns HTTP 503 when the database is unreachable.",
"orchestrator.health.deep_description": "Deep health check that verifies all runtime dependencies are operational, including the database, event bus, and quota subsystem. Returns a structured report with per-dependency status and latencies.",
"orchestrator.health.info_description": "Return service metadata including the assembly version, build timestamp, and environment configuration. Used by monitoring systems to correlate deployed versions with runtime behaviour.",
"orchestrator.scale.metrics_description": "Return the current autoscaling metrics for KEDA/HPA consumption, including queue depth, active job count, dispatch latency percentiles, recommended replica count, and pressure state.",
"orchestrator.scale.prometheus_description": "Return scale metrics in Prometheus text exposition format (text/plain), suitable for scraping by Prometheus or compatible monitoring systems. Includes queue depth, active jobs, dispatch latency percentiles, load factor, and load shedding state gauges.",
"orchestrator.scale.load_description": "Return the current load shedding status including the state (normal, warning, critical, emergency), load factor relative to target, whether shedding is active, the minimum accepted job priority, and the recommended dispatch delay in milliseconds.",
"orchestrator.scale.snapshot_description": "Return a detailed scale metrics snapshot for debugging and capacity analysis, including per-job-type queue depth and active job counts, the full dispatch latency distribution (min, max, avg, P50, P95, P99), and the current load shedding state.",
"orchestrator.scale.startupz_description": "Return the startup readiness verdict for Kubernetes startup probes. Returns 503 until the service has completed its minimum warmup period (default 5 seconds). Kubernetes will not route traffic or start liveness checks until this probe passes.",
"orchestrator.audit.list_description": "Return a cursor-paginated list of immutable audit log entries for the calling tenant, optionally filtered by event type, resource type, resource ID, actor ID, and creation time window. Audit entries are append-only and hash-chained for tamper detection.",
"orchestrator.audit.get_description": "Return the full audit log entry for the specified ID, including the event type, actor identity, resource reference, before/after state digest, and the chained hash linking it to the prior entry. Returns 404 when the entry does not exist in the tenant.",
"orchestrator.audit.get_resource_history_description": "Return the complete chronological audit history for a specific resource identified by type and ID. Use this endpoint to reconstruct the full lifecycle of a run, job, quota, or circuit breaker from creation through terminal state.",
"orchestrator.audit.get_latest_description": "Return the most recent audit log entry recorded for the calling tenant. Used by monitoring systems to confirm that audit logging is active and to track the highest written sequence number. Returns 404 when no entries exist.",
"orchestrator.audit.get_by_sequence_description": "Return audit log entries with sequence numbers in the inclusive range [startSeq, endSeq]. Sequence numbers are monotonically increasing per tenant and are used for deterministic replay and gap detection during compliance audits. Returns 400 for invalid ranges.",
"orchestrator.audit.summary_description": "Return aggregate audit log statistics for the calling tenant including total entry count, breakdown by event type, and the sequence range of persisted entries. Optionally scoped to a time window via the 'since' query parameter.",
"orchestrator.audit.verify_description": "Verify the cryptographic hash chain integrity of the audit log for the calling tenant, optionally scoped to a sequence range. Returns a verification result indicating whether the chain is intact or identifies the first sequence number where a break was detected.",
"orchestrator.audit.error.invalid_sequence_range": "Invalid sequence range.",
"orchestrator.circuit_breaker.list_description": "Return all circuit breaker instances for the calling tenant, optionally filtered by current state (Closed, Open, HalfOpen). Circuit breakers protect downstream service dependencies from cascading failures.",
"orchestrator.circuit_breaker.get_description": "Return the full state record for the circuit breaker protecting the specified downstream service, including current state, failure rate, trip timestamp, and time-until-retry. Returns 404 if no circuit breaker has been initialized for that service ID.",
"orchestrator.circuit_breaker.check_description": "Evaluate whether a call to the specified downstream service is currently permitted by the circuit breaker. Returns the allowed flag, current state, measured failure rate, and the reason for blocking when requests are denied.",
"orchestrator.circuit_breaker.record_success_description": "Record a successful interaction with the specified downstream service, contributing to the rolling success window used to transition the circuit breaker from HalfOpen to Closed state.",
    "orchestrator.circuit_breaker.record_failure_description": "Record a failed interaction with the specified downstream service, incrementing the failure count and potentially tripping the circuit breaker to Open state. A failure reason should be supplied for audit purposes.",
"orchestrator.circuit_breaker.force_open_description": "Manually trip the circuit breaker to Open state, immediately blocking all requests to the specified downstream service regardless of the current failure rate. A non-empty reason is required and the action is attributed to the calling principal.",
"orchestrator.circuit_breaker.force_close_description": "Manually reset the circuit breaker to Closed state, allowing requests to flow to the specified downstream service immediately. Use with caution during incident recovery; the action is attributed to the calling principal.",
"orchestrator.circuit_breaker.error.force_open_reason_required": "Reason is required when manually opening a circuit breaker.",
"orchestrator.dag.get_run_description": "Return the full directed acyclic graph (DAG) structure for a run, including all dependency edges, the computed topological execution order, and the critical path with estimated total duration. Returns 400 if a cycle is detected in the dependency graph.",
"orchestrator.dag.get_run_edges_description": "Return all directed dependency edges for the specified run as a flat list of (fromJob, toJob) pairs. Use this endpoint when you need the raw edge set without the topological sort or critical path computation overhead.",
"orchestrator.dag.get_ready_jobs_description": "Return the set of jobs within the run whose upstream dependencies have all reached a terminal succeeded state and are therefore eligible for scheduling. This endpoint is used by scheduler components to determine the next dispatch frontier.",
"orchestrator.dag.get_blocked_jobs_description": "Return the set of job IDs that are transitively blocked because the specified job is in a failed or canceled state. Used during incident triage to identify the blast radius of a failing job within the run DAG.",
"orchestrator.dag.get_job_parents_description": "Return the direct upstream dependency edges for the specified job, identifying all jobs that must complete before this job can be scheduled. Useful for tracing why a job remains in a blocked or pending state.",
"orchestrator.dag.get_job_children_description": "Return the direct downstream dependency edges for the specified job, identifying all jobs that will be unblocked once this job succeeds. Used to assess the downstream impact of a job failure or delay.",
"orchestrator.dead_letter.list_description": "Return a cursor-paginated list of dead-letter entries for the calling tenant, optionally filtered by job type, error code, retry eligibility, and creation time window. Dead-letter entries represent jobs that exhausted all retry attempts or were explicitly moved to the dead-letter store.",
"orchestrator.dead_letter.get_description": "Return the full dead-letter entry record including the original job payload digest, error classification, retry history, and current resolution state. Returns 404 when the entry ID does not belong to the calling tenant.",
"orchestrator.dead_letter.get_by_job_description": "Locate the dead-letter entry corresponding to the specified original job ID. Useful for tracing from a known failed job to its dead-letter record without querying the full list.",
"orchestrator.dead_letter.stats_description": "Return aggregate dead-letter statistics for the calling tenant including total entry count, breakdown by status (pending, resolved, replaying), and failure counts grouped by error code.",
"orchestrator.dead_letter.export_description": "Stream a CSV export of dead-letter entries matching the specified filters. The response uses content-type text/csv and is suitable for offline analysis and incident reporting.",
    "orchestrator.dead_letter.summary_description": "Return an actionable summary of dead-letter entries grouped by error code, showing entry counts and recommended triage actions per error group. Designed for operator dashboards where bulk replay or resolution decisions are made.",
"orchestrator.dead_letter.replay_description": "Enqueue a new job from the payload of the specified dead-letter entry, resetting the attempt counter and applying the original job type and priority. The dead-letter entry transitions to Replaying state and is linked to the new job ID.",
"orchestrator.dead_letter.replay_batch_description": "Enqueue new jobs for a set of dead-letter entry IDs in a single transactional batch. Each eligible entry transitions to Replaying state; entries that are not retryable or are already resolved are skipped and reported in the response.",
"orchestrator.dead_letter.replay_pending_description": "Enqueue new jobs for all pending retryable dead-letter entries matching the specified job type and error code filters. Returns the count of entries submitted for replay; use for bulk recovery after a downstream service outage.",
"orchestrator.dead_letter.resolve_description": "Mark the specified dead-letter entry as manually resolved, recording the resolution reason and the calling principal. Resolved entries are excluded from replay and summary counts. The action is immutable once applied.",
"orchestrator.dead_letter.resolve_batch_description": "Mark a set of dead-letter entries as manually resolved in a single operation. Each eligible entry is attributed to the calling principal with the supplied resolution reason; already-resolved entries are reported but not re-processed.",
"orchestrator.dead_letter.error_codes_description": "Return the catalogue of known dead-letter error codes with their human-readable descriptions, severity classifications (transient, permanent, policy), and recommended remediation actions. Used by tooling and UIs to annotate dead-letter entries.",
"orchestrator.dead_letter.replay_audit_description": "Return the complete replay audit trail for the specified dead-letter entry, including each replay attempt, the resulting job ID, the actor who initiated replay, and the outcome. Used during incident post-mortems to trace retry history.",
"orchestrator.export_job.create_description": "Submit a new export job to the orchestrator queue. The job is created with the specified export type, output format, time window, and optional signing and provenance flags. Returns 409 if the tenant's quota is exhausted for the requested export type.",
"orchestrator.export_job.list_description": "Return a paginated list of export jobs for the calling tenant, optionally filtered by export type, status, project, and creation time window. Each record includes scheduling metadata, current status, and worker lease information.",
"orchestrator.export_job.get_description": "Return the full export job record for the specified ID, including current status, attempt count, lease state, and completion timestamp. Returns 404 when the job does not exist in the tenant.",
"orchestrator.export_job.cancel_description": "Request cancellation of a pending or actively running export job. Returns 400 if the job is already in a terminal state (succeeded, failed, canceled). The cancellation reason is recorded for audit purposes.",
"orchestrator.export_job.quota_status_description": "Return the current export quota status for the calling tenant including active job count, hourly rate consumption, available token balance, and whether new jobs can be created. Optionally scoped to a specific export type.",
    "orchestrator.export_job.ensure_quota_description": "Ensure a quota record exists for the specified export type, creating one with platform defaults if it does not already exist. Idempotent; safe to call on every tenant initialization. Returns the quota record regardless of whether it was created or already existed.",
"orchestrator.export_job.types_description": "Return the catalogue of supported export job types with their associated rate limits (max concurrent, max per hour, estimated duration), export target descriptions, and default quota parameters. Used by clients to validate export type values before submission.",
"orchestrator.export_job.error.export_type_required": "Export type is required.",
"orchestrator.export_job.error.unknown_export_type": "Unknown export type: {0}.",
"orchestrator.export_job.error.cannot_cancel": "Cannot cancel job in status: {0}.",
"orchestrator.first_signal.get_description": "Return the first meaningful signal produced by the specified run, supporting ETag-based conditional polling via If-None-Match. Returns 200 with the signal when available, 204 when the run has not yet emitted a signal, 304 when the signal is unchanged, or 404 when the run does not exist.",
"orchestrator.first_signal.error.server_error": "An internal error occurred. Please try again.",
"orchestrator.kpi.quality_description": "Return the composite quality KPI bundle for the specified tenant and time window, including reachability, explainability, runtime, and replay sub-categories. Defaults to the trailing 7 days when no time window is supplied.",
"orchestrator.kpi.reachability_description": "Return the reachability sub-category KPIs measuring how effectively the platform identifies actually-reachable vulnerabilities within the specified time window. Useful for tracking the signal-quality impact of reachability-aware triage.",
"orchestrator.kpi.explainability_description": "Return the explainability sub-category KPIs measuring the proportion of findings that include human-readable rationale, decision trails, and AI-generated summaries within the specified time window.",
"orchestrator.kpi.runtime_description": "Return the runtime corroboration sub-category KPIs measuring how well static findings are cross-validated against live runtime signals (e.g., eBPF, flame-graph traces) within the specified time window.",
    "orchestrator.kpi.replay_description": "Return the replay and determinism sub-category KPIs measuring how consistently the platform reproduces prior analysis results from the same input artifacts within the specified time window. This serves as a proxy for pipeline determinism.",
"orchestrator.kpi.trend_description": "Return the rolling trend of composite quality KPI scores over the specified number of days, bucketed by day. Used to detect regressions or improvements in platform quality over time. Defaults to 30 days.",
"orchestrator.ledger.list_description": "Return a cursor-paginated list of immutable ledger entries for the calling tenant, optionally filtered by run type, source, final status, and time window. Ledger entries record the finalized outcome of every run for compliance and replay purposes.",
"orchestrator.ledger.get_description": "Return the full ledger entry for the specified ID, including the run summary, job counts, duration, final status, and the hash-chain link to the prior entry. Returns 404 when the ledger ID does not exist in the tenant.",
"orchestrator.ledger.get_by_run_description": "Return the ledger entry associated with the specified run ID. Each completed run produces exactly one ledger entry. Returns 404 if the run has not yet been ledgered or does not exist in the tenant.",
"orchestrator.ledger.get_by_source_description": "Return ledger entries produced by runs initiated from the specified source, in reverse chronological order. Useful for auditing the history of a particular integration or trigger.",
"orchestrator.ledger.get_latest_description": "Return the most recently written ledger entry for the calling tenant. Used by compliance tooling to track the highest written sequence and confirm that ledgering is active.",
"orchestrator.ledger.get_by_sequence_description": "Return ledger entries with sequence numbers in the inclusive range [startSeq, endSeq]. Sequence numbers are monotonically increasing per tenant and enable deterministic replay and gap detection during compliance audits. Returns 400 for invalid or inverted ranges.",
"orchestrator.ledger.summary_description": "Return aggregate ledger statistics for the calling tenant including total entry count, success/failure breakdown, and the current sequence range. Useful for compliance dashboards tracking ledger coverage against total run volume.",
"orchestrator.ledger.verify_chain_description": "Verify the cryptographic hash chain integrity of the ledger, optionally scoped to a sequence range. Returns a verification result indicating whether the chain is intact or identifies the first sequence number where tampering was detected.",
"orchestrator.ledger.list_exports_description": "Return a list of ledger export operations for the calling tenant including their status, requested time window, output format, and completion timestamps. Exports produce signed, portable bundles for offline compliance review.",
"orchestrator.ledger.get_export_description": "Return the full record for a specific ledger export including its status, artifact URI, content digest, and signing metadata. Returns 404 when the export ID does not belong to the calling tenant.",
"orchestrator.ledger.create_export_description": "Submit a new ledger export request for the calling tenant. The export is queued as a background job and produces a signed, content-addressed bundle of ledger entries covering the specified time window and entry types.",
"orchestrator.ledger.list_manifests_description": "Return the list of signed ledger manifests for the calling tenant. Manifests provide cryptographically attested summaries of ledger segments and are used for compliance archiving and cross-environment verification.",
"orchestrator.ledger.get_manifest_description": "Return the full signed manifest record for the specified ID, including the subject reference, signing key ID, signature, and the ledger entry range it covers. Returns 404 when the manifest does not exist in the tenant.",
"orchestrator.ledger.get_manifest_by_subject_description": "Return the manifest associated with the specified subject (typically a run or export artifact ID). Returns 404 when no manifest has been issued for that subject in the calling tenant.",
"orchestrator.ledger.verify_manifest_description": "Verify the cryptographic signature and payload integrity of the specified manifest against the current signing key. Returns a verification result with the verification status, key ID used, and any detected anomalies.",
"orchestrator.ledger.error.invalid_sequence_range": "Invalid sequence range.",
"orchestrator.ledger.error.start_before_end": "Start time must be before end time.",
"orchestrator.ledger.error.invalid_format": "Invalid format. Must be one of: {0}.",
"orchestrator.ledger.error.payload_digest_mismatch": "Payload digest does not match computed digest.",
"orchestrator.ledger.error.manifest_expired": "Manifest has expired.",
"orchestrator.pack_registry.list_description": "Return a paginated list of registered packs for the calling tenant, optionally filtered by status or tag. Each record includes the pack name, version, description, and lifecycle status.",
"orchestrator.pack_registry.get_description": "Return the full registration record for the specified pack, including all versions, tags, metadata, and lifecycle history. Returns 404 when the pack does not exist.",
"orchestrator.pack_registry.create_description": "Register a new pack definition. The pack is validated before being persisted. Duplicate pack names within the same tenant return 409.",
"orchestrator.pack_registry.update_description": "Update the mutable fields of an existing pack registration, including display name, description, and tags.",
"orchestrator.pack_registry.delete_description": "Delete the specified pack registration. Returns 409 when the pack has active or scheduled runs.",
"orchestrator.pack_registry.publish_version_description": "Publish a new version of the specified pack, adding it to the version history and optionally promoting it to stable.",
"orchestrator.pack_registry.get_version_description": "Return the full details of the specified pack version, including the manifest, parameter schema, and artifact digests.",
"orchestrator.pack_registry.list_versions_description": "Return the version history for the specified pack, ordered by publication date. Each entry includes the version string, status, and publication timestamp.",
"orchestrator.pack_registry.deprecate_version_description": "Mark the specified pack version as deprecated, preventing it from being scheduled for new runs while allowing existing runs to complete.",
    "orchestrator.pack_registry.yank_version_description": "Permanently withdraw a pack version from use, blocking new scheduling and preventing in-flight runs of this version from completing.",
"orchestrator.pack_registry.add_tag_description": "Add one or more tags to the specified pack version, enabling version discovery by semantic label.",
"orchestrator.pack_registry.remove_tag_description": "Remove the specified tag from the pack version.",
"orchestrator.pack_registry.get_schema_description": "Return the parameter input schema for the specified pack version as a JSON Schema document.",
"orchestrator.pack_registry.validate_schema_description": "Validate a candidate parameter document against the input schema for the specified pack version, returning validation errors and warnings.",
"orchestrator.pack_registry.list_permissions_description": "Return the access control entries for the specified pack, listing which principals have read, run, and admin permissions.",
"orchestrator.pack_registry.update_permissions_description": "Update the access control list for the specified pack, granting or revoking permissions for the specified principals.",
"orchestrator.pack_registry.stats_description": "Return aggregate statistics for the pack registry, including total pack counts by status, run counts, and most-used packs.",
"orchestrator.pack_registry.search_description": "Search the pack registry by name fragment, tag, or metadata, returning paginated matching entries.",
"orchestrator.pack_registry.error.name_required": "Name is required.",
"orchestrator.pack_registry.error.display_name_required": "DisplayName is required.",
"orchestrator.pack_registry.error.pack_already_exists": "Pack with name '{0}' already exists.",
"orchestrator.pack_registry.error.pack_not_found": "Pack {0} not found.",
"orchestrator.pack_registry.error.pack_name_not_found": "Pack '{0}' not found.",
"orchestrator.pack_registry.error.cannot_update_terminal": "Cannot update a pack in terminal status.",
"orchestrator.pack_registry.error.status_required": "Status is required.",
"orchestrator.pack_registry.error.invalid_pack_status": "Invalid status: {0}.",
"orchestrator.pack_registry.error.cannot_transition_pack": "Cannot transition from {0} to {1}.",
"orchestrator.pack_registry.error.only_draft_packs_deleted": "Only draft packs can be deleted.",
"orchestrator.pack_registry.error.cannot_delete_with_versions": "Cannot delete pack with versions.",
"orchestrator.pack_registry.error.delete_pack_failed": "Failed to delete pack.",
"orchestrator.pack_registry.error.version_required": "Version is required.",
"orchestrator.pack_registry.error.artifact_uri_required": "ArtifactUri is required.",
"orchestrator.pack_registry.error.artifact_digest_required": "ArtifactDigest is required.",
"orchestrator.pack_registry.error.cannot_add_version": "Cannot add version to pack in {0} status.",
"orchestrator.pack_registry.error.version_already_exists": "Version {0} already exists.",
"orchestrator.pack_registry.error.version_not_found": "Version {0} not found for pack {1}.",
"orchestrator.pack_registry.error.version_id_not_found": "Version {0} not found.",
"orchestrator.pack_registry.error.no_published_versions": "No published versions found for pack {0}.",
"orchestrator.pack_registry.error.cannot_update_version_terminal": "Cannot update version in terminal status.",
"orchestrator.pack_registry.error.invalid_version_status": "Invalid status: {0}.",
"orchestrator.pack_registry.error.cannot_transition_version": "Cannot transition from {0} to {1}.",
"orchestrator.pack_registry.error.deprecation_reason_required": "DeprecationReason is required when deprecating.",
"orchestrator.pack_registry.error.signature_uri_required": "SignatureUri is required.",
"orchestrator.pack_registry.error.signature_algorithm_required": "SignatureAlgorithm is required.",
"orchestrator.pack_registry.error.already_signed": "Version is already signed.",
"orchestrator.pack_registry.error.only_published_can_download": "Only published versions can be downloaded.",
"orchestrator.pack_registry.error.only_draft_versions_deleted": "Only draft versions can be deleted.",
"orchestrator.pack_registry.error.delete_version_failed": "Failed to delete version.",
"orchestrator.pack_registry.error.query_required": "Query is required.",
"orchestrator.release_control.list_description": "Return a paginated list of release control records for the calling tenant, optionally filtered by project, environment, or status.",
"orchestrator.release_control.get_description": "Return the full detail of the specified release control record, including approval state, gate evaluations, and promotion history.",
"orchestrator.release_control.create_description": "Create a new release control record for the specified project and target environment.",
"orchestrator.release_control.approve_description": "Grant approval for the specified release gate, unblocking promotion to the next environment.",
"orchestrator.release_control.reject_description": "Reject the specified release gate, blocking the associated promotion.",
"orchestrator.release_control.promote_description": "Trigger environment promotion for an approved release.",
    "orchestrator.release_control.rollback_description": "Initiate rollback of a release. Only permitted when the run status is failed, warning, or degraded.",
"orchestrator.release_control.cancel_description": "Cancel an in-progress release.",
"orchestrator.release_control.list_actions_description": "Return the ordered list of lifecycle actions recorded for the specified release.",
"orchestrator.release_control.list_gates_description": "Return the configured gates and their current evaluation status for the specified release.",
"orchestrator.release_control.get_summary_description": "Return a concise summary of the specified release for use in list views and notifications.",
"orchestrator.release_dashboard.get_description": "Return a dashboard-optimised aggregate view of the specified release run, including current status, environment promotion progress, gate evaluation counts, and SLO metrics.",
"orchestrator.release_dashboard.list_description": "Return a paginated list of release dashboard entries for the calling tenant, each including current status, environment, and summary metrics.",
"orchestrator.release_dashboard.get_promotion_description": "Return the promotion progress details for the specified release, including completed and pending environment targets.",
"orchestrator.slo.list_description": "Return the list of SLO definitions for the calling tenant, optionally filtered by SLO type or status.",
"orchestrator.slo.get_description": "Return the full configuration of the specified SLO, including objectives, measurement window, and current compliance status.",
"orchestrator.slo.create_description": "Create a new SLO definition for the calling tenant, specifying the SLO type, objective percentage, measurement window, and alerting thresholds.",
"orchestrator.slo.update_description": "Update the objective, window, or alerting configuration of the specified SLO.",
"orchestrator.slo.delete_description": "Delete the specified SLO definition.",
"orchestrator.slo.get_compliance_description": "Return the current compliance status for the specified SLO, including error budget remaining and burn rate.",
"orchestrator.slo.list_alerts_description": "Return the list of active and historical SLO alerts for the calling tenant.",
"orchestrator.slo.get_alert_description": "Return the full detail of the specified SLO alert, including trigger conditions and current status.",
"orchestrator.slo.acknowledge_alert_description": "Acknowledge an active SLO alert, suppressing further notifications for the configured snooze duration.",
"orchestrator.slo.resolve_alert_description": "Resolve an active SLO alert, recording the resolution timestamp and actor.",
"orchestrator.slo.history_description": "Return the compliance history for the specified SLO over the requested time window, bucketed by the configured granularity.",
"orchestrator.slo.burn_rate_description": "Return the current and projected error budget burn rate for the specified SLO.",
"orchestrator.slo.report_description": "Generate a compliance report for the specified SLO over the requested time window, suitable for sharing with stakeholders.",
"orchestrator.slo.forecast_description": "Return a forecast of SLO compliance for the next configured window based on current burn rate trends.",
"orchestrator.slo.test_description": "Evaluate a candidate SLO configuration against historical data without persisting it, returning expected compliance metrics.",
"orchestrator.slo.bulk_status_description": "Return the current compliance status for all SLOs in a single batched response, optimised for dashboard rendering.",
"orchestrator.slo.error.invalid_type": "Invalid SLO type. Must be 'availability', 'latency', or 'throughput'.",
"orchestrator.slo.error.invalid_window": "Invalid window. Must be '1h', '1d', '7d', or '30d'.",
"orchestrator.slo.error.invalid_severity": "Invalid severity. Must be 'info', 'warning', 'critical', or 'emergency'.",
"orchestrator.slo.error.alert_already_acknowledged": "Alert is already acknowledged.",
"orchestrator.slo.error.alert_already_resolved": "Alert is already resolved.",
"orchestrator.source.list_description": "Return the list of source integrations registered for the calling tenant, including their connection status and last sync timestamps.",
"orchestrator.source.get_description": "Return the full configuration and connection state of the specified source integration. Returns 404 when the source does not exist.",
"orchestrator.stream.job_logs_description": "Stream log lines for the specified job as a WebSocket connection. Log lines are pushed in real time as they are appended by the executing worker. The connection is closed when the job reaches a terminal state.",
"orchestrator.stream.run_events_description": "Stream lifecycle events for the specified run as a WebSocket connection. Events are pushed in real time as the run progresses through scheduling, execution, approval, and completion states.",
"orchestrator.stream.pack_run_logs_description": "Stream log lines for the specified pack run as a WebSocket connection, pushed in real time as the task runner appends them.",
"orchestrator.stream.metrics_description": "Stream live orchestrator metrics as a WebSocket connection, including queue depth, lease counts, and throughput gauges, updated every few seconds.",
"orchestrator.stream.error.websocket_required": "Expected WebSocket request.",
"orchestrator.worker.claim_description": "Atomically claim the next available job of the requested type for the calling worker identity, acquiring an exclusive time-limited lease. Returns 204 when no jobs are available. Idempotency-key support prevents duplicate claims on retry.",
"orchestrator.worker.heartbeat_description": "Extend the execution lease on a currently leased job to prevent it from being reclaimed by another worker. Must be called before the current lease expiry; returns 409 if the lease ID does not match or has already expired.",
"orchestrator.worker.progress_description": "Report incremental execution progress (0-100%) for a leased job. Progress is recorded for telemetry and dashboard display. Must be called with a valid lease ID; returns 409 on lease mismatch or expired lease.",
"orchestrator.worker.complete_description": "Mark a leased job as succeeded or failed, release the lease, persist output artifacts, and update the parent run's aggregate job counts. Artifacts are stored with content-addressable digests. Returns 409 on lease mismatch.",
"orchestrator.worker.error.worker_id_required": "WorkerId is required.",
"orchestrator.openapi.discovery_description": "Return the OpenAPI discovery document for the Orchestrator service, including the service name, current version, and a link to the full OpenAPI specification. The response is cached for 5 minutes and includes ETag-based conditional caching support.",
"orchestrator.openapi.spec_description": "Return the full OpenAPI 3.x specification for the Orchestrator service as a JSON document. Used by the Router to aggregate the service's endpoint metadata and by developer tooling to generate clients and documentation."
}
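The SLO strings above refer to error budget remaining and burn rate (`orchestrator.slo.get_compliance_description`, `orchestrator.slo.burn_rate_description`). As a rough illustration of the arithmetic behind those fields — a generic SRE-style sketch, not the orchestrator's actual implementation, with hypothetical function names:

```python
def error_budget_remaining(objective_pct: float, total: int, failed: int) -> float:
    """Fraction of the window's error budget still unspent."""
    allowed_failures = total * (1.0 - objective_pct / 100.0)
    if allowed_failures == 0:
        return 0.0 if failed > 0 else 1.0
    return max(0.0, 1.0 - failed / allowed_failures)


def burn_rate(objective_pct: float, total: int, failed: int) -> float:
    """How fast the budget is being spent: 1.0 means exactly on budget."""
    allowed_error_rate = 1.0 - objective_pct / 100.0
    observed_error_rate = failed / total if total else 0.0
    return observed_error_rate / allowed_error_rate


# Example: a 99.9% objective over 100,000 requests allows 100 failures.
# With 50 failures, half the budget remains and the burn rate is 0.5.
```

A burn rate above 1.0 sustained over the measurement window is the usual trigger for the kind of alerts described by `orchestrator.slo.list_alerts_description`.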


@@ -1,8 +0,0 @@
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
}
}


@@ -1,33 +0,0 @@
{
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*",
"Orchestrator": {
"Database": {
"ConnectionString": "Host=localhost;Port=5432;Database=stellaops_orchestrator;Username=stellaops;Password=stellaops",
"CommandTimeoutSeconds": 30,
"EnablePooling": true,
"MinPoolSize": 1,
"MaxPoolSize": 100
},
"Lease": {
"DefaultLeaseDurationSeconds": 300,
"MaxLeaseDurationSeconds": 3600,
"RenewalThreshold": 0.5,
"ExpiryCheckIntervalSeconds": 30
},
"RateLimit": {
"DefaultMaxActive": 10,
"DefaultMaxPerHour": 1000,
"DefaultBurstCapacity": 50,
"DefaultRefillRate": 1.0,
"CircuitBreakerThreshold": 0.5,
"CircuitBreakerWindowMinutes": 5,
"CircuitBreakerMinSamples": 10
}
}
}
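The `Lease` block in the deleted appsettings above (`DefaultLeaseDurationSeconds: 300`, `RenewalThreshold: 0.5`) implies a worker should heartbeat once half its lease has elapsed, well before expiry. A hypothetical Python sketch of that decision — the real service is C#, and these names are illustrative only:

```python
from dataclasses import dataclass


@dataclass
class Lease:
    duration_s: float      # e.g. DefaultLeaseDurationSeconds = 300
    elapsed_s: float = 0.0

    def tick(self, dt_s: float) -> None:
        self.elapsed_s += dt_s

    def expired(self) -> bool:
        return self.elapsed_s >= self.duration_s

    def should_renew(self, threshold: float = 0.5) -> bool:
        # RenewalThreshold 0.5: heartbeat once half the lease has elapsed
        return not self.expired() and self.elapsed_s >= self.duration_s * threshold
```

With the defaults, a worker holding a 300-second lease would call the heartbeat endpoint at the 150-second mark; waiting past expiry risks the 409 lease-mismatch described in `orchestrator.worker.heartbeat_description`.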


@@ -22,9 +22,8 @@
"STELLAOPS_VEXLENS_URL": "https://vexlens.stella-ops.local",
"STELLAOPS_VULNEXPLORER_URL": "https://vulnexplorer.stella-ops.local",
"STELLAOPS_POLICY_ENGINE_URL": "https://policy-engine.stella-ops.local",
"STELLAOPS_POLICY_GATEWAY_URL": "https://policy-gateway.stella-ops.local",
"STELLAOPS_RISKENGINE_URL": "https://riskengine.stella-ops.local",
"STELLAOPS_JOBENGINE_URL": "https://jobengine.stella-ops.local",
"STELLAOPS_RELEASE_ORCHESTRATOR_URL": "https://release-orchestrator.stella-ops.local",
"STELLAOPS_TASKRUNNER_URL": "https://taskrunner.stella-ops.local",
"STELLAOPS_SCHEDULER_URL": "https://scheduler.stella-ops.local",
"STELLAOPS_GRAPH_URL": "https://graph.stella-ops.local",


@@ -111,7 +111,7 @@
{ "Type": "Microservice", "Path": "^/api/v1/release-control(.*)", "IsRegex": true, "TranslatesTo": "http://platform.stella-ops.local/api/v1/release-control$1" },
{ "Type": "Microservice", "Path": "^/api/v1/gateway/rate-limits(.*)", "IsRegex": true, "TranslatesTo": "http://platform.stella-ops.local/api/v1/gateway/rate-limits$1" },
{ "Type": "Microservice", "Path": "^/api/v1/reachability(.*)", "IsRegex": true, "TranslatesTo": "http://reachgraph.stella-ops.local/api/v1/reachability$1" },
{ "Type": "Microservice", "Path": "^/api/v1/timeline(.*)", "IsRegex": true, "TranslatesTo": "http://timelineindexer.stella-ops.local/api/v1/timeline$1" },
{ "Type": "Microservice", "Path": "^/api/v1/timeline(.*)", "IsRegex": true, "TranslatesTo": "http://timeline.stella-ops.local/api/v1/timeline$1" },
{ "Type": "Microservice", "Path": "^/api/v1/audit(.*)", "IsRegex": true, "TranslatesTo": "http://timeline.stella-ops.local/api/v1/audit$1" },
{ "Type": "Microservice", "Path": "^/api/v1/export(.*)", "IsRegex": true, "TranslatesTo": "http://exportcenter.stella-ops.local/api/v1/export$1" },
{ "Type": "Microservice", "Path": "^/api/v1/advisory-sources(.*)", "IsRegex": true, "TranslatesTo": "http://concelier.stella-ops.local/api/v1/advisory-sources$1" },

Some files were not shown because too many files have changed in this diff.