docs: add service README.md files + update AGENTS.md decisions

- Create README.md for 25+ service modules with container info, API surface, storage
- Document attestor-tileproxy separation rationale (air-gap network isolation)
- Document opsmemory-advisoryai separation rationale (resource isolation, blast radius)
- Update Timeline AGENTS.md with merged indexer info

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
This commit is contained in: master
Date: 2026-04-08 13:45:03 +03:00
Parent: 59ba757eaa
Commit: 59e7f25d96
31 changed files with 553 additions and 8 deletions


@@ -97,6 +97,15 @@ npx playwright install && npm run test:e2e # E2E tests (requi
**Stubs for WebApplicationFactory tests:** Replace `IKnowledgeSearchService`, `IKnowledgeIndexer`, `IUnifiedSearchService`, `IUnifiedSearchIndexer`, `ISynthesisEngine`, and `IVectorEncoder` via `services.RemoveAll<T>()` + `services.AddSingleton<T, StubT>()`. See `UnifiedSearchSprintIntegrationTests.cs` for the canonical pattern.
## OpsMemory (decision ledger -- separate service, co-located in this directory)
- **StellaOps.OpsMemory.WebService**: Decision ledger and playbook suggestion engine. Uses cosine similarity for matching -- no LLM dependency.
- **Advisory-AI**: LLM-powered advisory interface (the primary service in this directory).
- **Why OpsMemory is a separate service from Advisory-AI:**
- **Resource isolation** -- OpsMemory runs at 512 MB; Advisory-AI requires 1 GB+. Separate containers prevent OpsMemory from being starved by LLM workloads.
- **Blast radius** -- An Advisory-AI crash or OOM does not take down the decision ledger.
- **Independent deployability** -- OpsMemory can be updated, scaled, or disabled without redeploying Advisory-AI.
- **Known gap:** `NullOpsMemoryClient` exists in Advisory-AI (`Chat/Assembly/Providers/OpsMemoryDataProvider.cs`, `Chat/DependencyInjection/AdvisoryChatServiceCollectionExtensions.cs`). Future work should extract shared models to a `StellaOps.OpsMemory.Abstractions` library to remove the direct stub and decouple the contract.
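OpsMemory's similarity-based matching (cosine similarity, no LLM) can be pictured with a small sketch. `suggest_playbooks` and the `(playbook, vector)` ledger shape are hypothetical names for illustration, not the actual OpsMemory API:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def suggest_playbooks(decision_vec, ledger, top_k=3):
    # Rank recorded decisions by similarity to the incoming decision vector
    # and return the top-k matching playbooks.
    scored = [(cosine(decision_vec, vec), playbook) for playbook, vec in ledger]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [playbook for score, playbook in scored[:top_k] if score > 0]
```

Because the matcher is plain vector math, it fits comfortably in OpsMemory's 512 MB budget — one concrete reason the no-LLM constraint matters for the resource-isolation argument above.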
## Docs & Change Sync
- When changing behaviors or contracts, update relevant docs under `docs/modules/advisory-ai`, `docs/modules/policy/guides/assistant-parameters.md`, or sprint-linked docs; mirror decisions in sprint **Decisions & Risks**.
- If new advisories/platform decisions occur, notify sprint log and link updated docs.

src/AdvisoryAI/README.md

@@ -0,0 +1,19 @@
# AdvisoryAI
**Container(s):** stellaops-advisory-ai-web, stellaops-advisory-ai-worker, stellaops-opsmemory-web
**Slot:** 45 (advisory-ai), 27 (opsmemory) | **Port:** 8080 | **Consumer Group:** advisoryai, opsmemory
**Resource Tier:** medium (advisory-ai), light (opsmemory)
## Purpose
AdvisoryAI provides AI-assisted security advisory analysis, remediation planning, and knowledge search across the platform. It supports local and remote LLM inference, chat-based interaction, policy studio integration, unified search across findings/VEX/policy, and evidence pack generation. OpsMemory is a decision ledger and playbook suggestion service that records operational decisions and uses similarity vectors to recommend playbooks.
## API Surface
- `advisoryai` (via Router) — advisory analysis, chat sessions, remediation plans, knowledge search, explanation generation, attestation, rate-limited inference, unified search, policy studio
- `opsmemory` (via Router) — decision recording, playbook suggestions, similarity-based recall
## Storage
PostgreSQL (OpsMemory: `ConnectionStrings:Default`; AdvisoryAI knowledge search: `KnowledgeSearch:ConnectionString`); file-system queue/plans/outputs for AdvisoryAI
## Background Workers
- `KnowledgeSearchStartupRebuildService` — rebuilds knowledge search index on startup
- Advisory AI worker — processes queued advisory analysis jobs

src/AirGap/README.md

@@ -0,0 +1,18 @@
# AirGap
**Container(s):** stellaops-airgap-controller, stellaops-airgap-time
**Slot:** 32 (controller), 33 (time) | **Port:** 8080 | **Consumer Group:** airgap-controller, airgap-time
**Resource Tier:** light
## Purpose
The AirGap module supports fully disconnected (air-gapped) deployments. The Controller manages seal/unseal operations, import verification, and air-gap status tracking. The Time service provides trusted time anchoring for environments without NTP — it validates time-anchor tokens, enforces staleness policies, and ensures cryptographic operations have verifiable timestamps even offline.
## API Surface
- `airgap-controller` (via Router) — seal/unseal, status queries, import verification, bundle validation
- `airgap-time` (via Router) — time anchor status, verification, staleness checks, health (`/healthz/ready`)
## Storage
PostgreSQL (controller: `ConnectionStrings:Default`); Valkey for cache (controller); in-memory time anchor store (time service)
## Background Workers
- `SealedStartupHostedService` (time) — validates time anchors and trust roots on startup


@@ -35,6 +35,14 @@ Manage the attestation and proof chain infrastructure for StellaOps:
- **StellaOps.Provenance.Attestation**: SLSA/DSSE attestation generation library
- **StellaOps.Provenance.Attestation.Tool**: Forensic verification CLI tool
### TileProxy (caching reverse proxy for Sigstore Rekor tiles -- separate service)
- **StellaOps.Attestor.TileProxy**: Caching reverse proxy for Rekor transparency log tiles (~1,000 lines of application code)
- Pre-warms tile cache every 6 hours from `rekor.sigstore.dev` (or a local Rekor instance in air-gap deployments)
- **Why TileProxy is a separate service from Attestor:**
- **Network isolation** -- Attestor has no outbound internet access by design; TileProxy requires outbound access to fetch tiles. Merging the two would violate the air-gap network policy.
- **Zero runtime coupling** -- Attestor does not call TileProxy at runtime. They share a trust domain but operate independently.
- Key components: `TileProxyService`, `ContentAddressedTileStore`, `TileSyncJob`, `TileEndpoints`
### Tests
- **__Tests**: Integration tests with Testcontainers for PostgreSQL

src/Attestor/README.md

@@ -0,0 +1,19 @@
# Attestor
**Container(s):** stellaops-attestor, stellaops-signer, stellaops-attestor-tileproxy
**Slot:** 4 (attestor), 30 (signer), 5 (tileproxy) | **Port:** 8442 (attestor), 8441 (signer) | **Consumer Group:** attestor, signer
**Resource Tier:** light
## Purpose
The Attestor module provides cryptographic attestation, DSSE signing, and transparency log integration for release artifacts. The Signer service manages key lifecycle (rotation, trust anchors, ceremonies) and provides signing/verification endpoints. The TileProxy caches Rekor transparency log tiles for offline/air-gap scenarios.
## API Surface
- `attestor` (via Router) — attestation creation, verification, DSSE envelope operations
- `signer` (via Router) — sign, verify, key rotation, ceremony management (create/approve/execute/cancel)
- `tileproxy` — cached proxy for Rekor tile endpoints
## Storage
PostgreSQL (attestor: `ConnectionStrings:Default`; signer: `ConnectionStrings:KeyManagement` for EF Core key management); TileProxy uses tmpfs cache
## Background Workers
- TileProxy: `TilePrefetchJob` (background tile prefetching)

src/Authority/README.md

@@ -0,0 +1,19 @@
# Authority
**Container(s):** stellaops-authority, stellaops-issuer-directory
**Slot:** 2 (authority), 37 (issuer-directory) | **Port:** 8440 (authority) | **Consumer Group:** authority, issuerdirectory
**Resource Tier:** heavy (authority), light (issuer-directory)
## Purpose
The Authority service is the OAuth2/OIDC identity provider for the entire Stella Ops platform. It handles user authentication, token issuance, tenant management, client registration, RBAC, plugin-based identity providers, rate limiting, notifications, and audit logging. The Issuer Directory service manages trusted CSAF publisher and issuer discovery.
## API Surface
- `authority` (via Router) — OpenID Connect endpoints (`/connect/authorize`, `/connect/token`, `.well-known/openid-configuration`), tenant CRUD, user management, role management, client management, branding, audit, notifications, vulnerability workflow
- `issuerdirectory` (via Router) — issuer lookup, CSAF publisher seed, trust chain queries
## Storage
PostgreSQL database `stellaops_authority` (dedicated DB); Valkey for session/cache; Issuer Directory uses PostgreSQL (`ConnectionStrings:Default`)
## Background Workers
- `AuthoritySecretHasherInitializer` — crypto secret initialization on startup
- Plugin hosting via `IPluginHost` (standard identity plugin with bootstrap user/client seeding)

src/BinaryIndex/README.md

@@ -0,0 +1,18 @@
# BinaryIndex
**Container(s):** stellaops-binaryindex-web, stellaops-symbols
**Slot:** 36 (binaryindex), 38 (symbols) | **Port:** 8080 | **Consumer Group:** binaryindex, symbols
**Resource Tier:** light
## Purpose
The BinaryIndex service provides binary vulnerability detection through build-ID and binary signature resolution, golden set management, delta signature tracking, and VEX bridge integration. It includes a caching layer (Valkey) for resolution results. The Symbols server provides symbol recovery and debug information lookup for binary analysis.
## API Surface
- `binaryindex` (via Router) — binary resolution (build-ID/signature lookup), golden set queries, delta signature management, VEX bridge queries, telemetry
- `symbols` (via Router) — symbol file upload/download, debug info queries
## Storage
PostgreSQL (via `ConnectionStrings:Default`); Valkey for resolution cache; in-memory golden set and vulnerability stores
## Background Workers
None

src/Concelier/README.md

@@ -0,0 +1,20 @@
# Concelier
**Container(s):** stellaops-concelier, stellaops-excititor, stellaops-excititor-worker
**Slot:** 9 (concelier), 10 (excititor) | **Port:** 8080 | **Consumer Group:** concelier, excititor
**Resource Tier:** medium
## Purpose
Concelier is the advisory feed aggregator and SBOM correlation engine. It ingests, normalizes, and merges security advisories from multiple sources, manages advisory linksets, and supports air-gap mirror exports/imports. Excititor is the VEX (Vulnerability Exploitability eXchange) processing engine that normalizes CSAF, CycloneDX, and OpenVEX documents, verifies signatures and attestations, and maintains consensus projections across providers.
## API Surface
- `concelier` (via Router) — advisory queries, SBOM correlation, federation, observation management, canonical advisory views, mirror export/import, AoC (Attestation of Conformity) endpoints
- `excititor` (via Router) — VEX document ingestion, normalization, provider management, signature verification, graph queries, policy integration, export
## Storage
PostgreSQL (`concelier` schema via `PostgresStorage:ConnectionString`; `vex` schema for Excititor via `Postgres:Excititor`); RustFS/S3 for artifact storage; Valkey for cache
## Background Workers
- `VexWorkerHostedService` (excititor-worker) — background VEX provider polling and document ingestion
- `VexConsensusRefreshService` (excititor-worker) — periodic consensus recalculation
- `VexWorkerHeartbeatService` (excititor-worker) — orchestrator heartbeat
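Excititor's consensus projection can be pictured as weighted voting across provider statements for one (vulnerability, product) pair. `consensus_status` and the trust-weight map below are purely illustrative — the actual consensus rules are richer than a single weighted vote:

```python
from collections import defaultdict

def consensus_status(statements, trust_weights):
    # statements: list of (provider, status) pairs for one (vuln, product).
    # trust_weights: provider -> weight; unknown providers default to 1.0.
    totals = defaultdict(float)
    for provider, status in statements:
        totals[status] += trust_weights.get(provider, 1.0)
    if not totals:
        return "unknown"
    # The status carrying the largest total trust weight wins.
    return max(totals.items(), key=lambda kv: kv[1])[0]
```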

src/Doctor/README.md

@@ -0,0 +1,18 @@
# Doctor
**Container(s):** stellaops-doctor-web, stellaops-doctor-scheduler
**Slot:** 26 (web), scheduler | **Port:** 8080 | **Consumer Group:** doctor, doctor-scheduler
**Resource Tier:** light
## Purpose
The Doctor service runs diagnostic health checks across the entire Stella Ops platform. It uses a plugin architecture covering core services, databases, service graphs, integrations, security, observability, Docker, attestation (Rekor/Cosign), verification (SBOM/VEX/signature/policy), release pipelines, environment health, scanner/reachability, compliance/evidence, binary analysis, and timestamping (eIDAS). The Doctor Scheduler automates periodic diagnostic runs with trend analysis and alerting.
## API Surface
- `doctor` (via Router) — diagnostic run execution, report retrieval, timestamping dashboard
- `doctor-scheduler` (via Router) — schedule management for periodic doctor runs, trend queries
## Storage
In-memory (report storage, schedule/trend repositories); PostgreSQL connection available via `ConnectionStrings:Default`
## Background Workers
- `DoctorScheduleWorker` (scheduler service) — executes scheduled diagnostic runs via HTTP calls to Doctor API


@@ -0,0 +1,17 @@
# EvidenceLocker
**Container(s):** stellaops-evidence-locker-web, stellaops-evidence-locker-worker
**Slot:** 6 | **Port:** 8080 | **Consumer Group:** evidencelocker
**Resource Tier:** light
## Purpose
The Evidence Locker provides write-once, tamper-evident storage for release evidence artifacts (scan results, attestations, policy verdicts, approval records). It supports optional cryptographic signing (ES256), quota enforcement, snapshot queries, and multi-material evidence bundles. The worker handles background evidence processing tasks.
## API Surface
- `evidencelocker` (via Router) — evidence material upload (write-once), retrieval, snapshot queries, health checks, observability endpoints
## Storage
PostgreSQL (via `EvidenceLocker:Database:ConnectionString`); file-system object store (`/data/evidence`) or configurable backend; Valkey for cache
## Background Workers
- Evidence Locker worker — background evidence processing, integrity verification

src/Findings/README.md

@@ -0,0 +1,19 @@
# Findings
**Container(s):** stellaops-findings-ledger-web, stellaops-riskengine-web, stellaops-riskengine-worker, stellaops-api (VulnExplorer)
**Slot:** 25 (ledger), 16 (riskengine), 13 (vulnexplorer) | **Port:** 8080 | **Consumer Group:** findings-ledger, riskengine, vulnexplorer
**Resource Tier:** medium (ledger, riskengine), light (vulnexplorer, riskengine-worker)
## Purpose
The Findings module provides an append-only event ledger for security findings, a risk scoring engine with pluggable providers (CVSS/KEV/EPSS/VEX/fix-exposure), and a vulnerability explorer API. The Ledger tracks finding lifecycle with Merkle-tree integrity, incident management, and scoring APIs. The RiskEngine computes risk scores via job queue. VulnExplorer provides the UI-facing query API.
## API Surface
- `findings-ledger` (via Router) — finding event ingestion, queries, export, incident management, EWS scoring, Merkle proofs, attachment management
- `riskengine` (via Router) — risk score providers listing, job submission, simulation, exploit maturity
- `vulnexplorer` (via Router) — vulnerability search and investigation queries
## Storage
PostgreSQL (`ConnectionStrings:Default` / `ConnectionStrings:FindingsLedger`); RiskEngine supports PostgreSQL or in-memory
## Background Workers
- `riskengine-worker` — background risk score computation (`Worker` hosted service)
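The Ledger's Merkle-tree integrity means that altering any recorded finding event changes the tree's root, so tampering is detectable from a single hash. A minimal root computation (illustrative only — the Ledger's actual hashing scheme and leaf encoding are not shown here):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Hash each leaf, then repeatedly pair-and-hash up to a single root,
    # duplicating the last node when a level has an odd count.
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]
```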

src/Graph/README.md

@@ -0,0 +1,17 @@
# Graph
**Container(s):** stellaops-graph-api
**Slot:** 20 | **Port:** 8080 | **Consumer Group:** graph
**Resource Tier:** medium
## Purpose
The Graph API service provides a dependency and service graph for the Stella Ops platform. It supports graph search, path queries, diff computation, lineage tracking, overlay projections, saved views, and export functionality. It serves as the central topology store for understanding relationships between components, images, and services.
## API Surface
- `graph` (via Router) — graph search, path queries, diff, lineage, overlay, saved views, export (GEXF/DOT/JSON), edge metadata, audit log, rate-limited access
## Storage
PostgreSQL (via `Postgres:Graph` for saved views); in-memory graph repository for core graph data
## Background Workers
- `GraphSavedViewsMigrationHostedService` — migrates saved views on startup


@@ -0,0 +1,17 @@
# Integrations
**Container(s):** stellaops-integrations-web
**Slot:** 42 | **Port:** 8080 | **Consumer Group:** integrations
**Resource Tier:** light
## Purpose
The Integrations service provides a unified catalog and management API for external tool connections. It supports plugins for GitHub App, Harbor, Gitea, Jenkins, Nexus, Docker Registry, GitLab, Vault, Consul, and eBPF Agent integrations. It includes AI Code Guard capabilities and manages authentication references (AuthRef) for secure credential resolution.
## API Surface
- `integrations` (via Router) — integration catalog CRUD, connection testing, credential management (AuthRef/Vault), AI Code Guard endpoints, plugin discovery
## Storage
PostgreSQL (via `ConnectionStrings:IntegrationsDb`); EF Core with auto-migrations (`AddStartupMigrations`)
## Background Workers
None

src/JobEngine/README.md

@@ -0,0 +1,21 @@
# JobEngine
**Container(s):** stellaops-scheduler-web, stellaops-scheduler-worker, stellaops-taskrunner-web, stellaops-taskrunner-worker, stellaops-packsregistry-web, stellaops-packsregistry-worker
**Slot:** 19 (scheduler), 18 (taskrunner), 34 (packsregistry) | **Port:** 8080 | **Consumer Group:** scheduler, taskrunner, packsregistry
**Resource Tier:** medium (scheduler), light (taskrunner, packsregistry)
## Purpose
The JobEngine module provides scheduled scan orchestration, task execution, and pack registry management. The Scheduler manages scan schedules (CRON-based), graph jobs, policy simulation runs, vulnerability resolver jobs, and failure signatures. The TaskRunner executes task packs with air-gap-aware egress policies, simulation, and attestation. The PacksRegistry stores and serves versioned task pack bundles.
## API Surface
- `scheduler` (via Router) — schedule CRUD, run history, graph jobs, policy runs, policy simulations, failure signatures, event webhooks, scripts endpoint
- `taskrunner` (via Router) — task pack execution, simulation, planning, incident mode, artifact management
- `packsregistry` (via Router) — pack upload, download, version listing, approval workflow
## Storage
PostgreSQL schema `scheduler` (Scheduler); PostgreSQL for TaskRunner and PacksRegistry; Valkey queue for job dispatch; seed-fs object store for artifacts
## Background Workers
- Scheduler: `SchedulerWorkerHostedService` — picks up scheduled jobs from Valkey and dispatches scan runs
- TaskRunner: worker process for pack execution
- PacksRegistry: worker process for background pack processing

src/Notifier/README.md

@@ -0,0 +1,17 @@
# Notifier
**Container(s):** stellaops-notifier-web, stellaops-notifier-worker
**Slot:** 28 | **Port:** 8080 | **Consumer Group:** notifier
**Resource Tier:** medium (web), light (worker)
## Purpose
The Notifier service provides multi-channel notification dispatch with rule evaluation, correlation, storm breaking (dedup/throttling), escalation, dead-letter handling, and template rendering. The web service exposes management APIs and simulation endpoints; the worker processes the notification queue, delivers messages through configured channels, and manages retention.
## API Surface
- `notifier` (via Router) — notification submission, rule simulation, channel configuration, dead-letter management, escalation rules, template management, correlation queries, storm-breaker status
## Storage
PostgreSQL (via `notifier:storage:postgres:ConnectionString`); Valkey queue (via `notifier:queue:Redis:ConnectionString`)
## Background Workers
- Notifier worker — queue-driven notification dispatch, channel delivery, retention cleanup, escalation processing
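The storm-breaking behaviour (dedup/throttling) amounts to suppressing repeats of the same notification key inside a rolling window. `StormBreaker` here is a hypothetical simplification of the service's logic, with an injectable clock for testability:

```python
import time

class StormBreaker:
    """Suppress duplicate notifications seen within a rolling window (sketch)."""

    def __init__(self, window_seconds: float, now=time.monotonic):
        self.window = window_seconds
        self.now = now
        self._last_sent = {}  # dedup key -> timestamp of last delivery

    def allow(self, key: str) -> bool:
        t = self.now()
        last = self._last_sent.get(key)
        if last is not None and t - last < self.window:
            return False  # duplicate inside the window: throttle it
        self._last_sent[key] = t
        return True
```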

src/Notify/README.md

@@ -0,0 +1,17 @@
# Notify
**Container(s):** stellaops-notify-web
**Slot:** 29 | **Port:** 8080 | **Consumer Group:** notify
**Resource Tier:** medium
## Purpose
Notify is the notification routing and delivery service for the Stella Ops platform. It manages notification channels, templates, preferences, routing rules, and delivery pipelines with plugin-based channel support. It handles alert dispatching, escalation workflows, and notification lifecycle tracking with PostgreSQL-backed persistence.
## API Surface
- `notify` (via Router) — notification submission, channel management, preference CRUD, template management, delivery status, routing rules, audit trail
## Storage
PostgreSQL schema `notify` (via `Postgres:Notify:ConnectionString`); plugin-based channel storage
## Background Workers
- None (web-only service; delivery is synchronous or plugin-delegated)

src/Platform/README.md

@@ -0,0 +1,18 @@
# Platform
**Container(s):** stellaops-platform
**Slot:** 1 | **Port:** 8080 | **Consumer Group:** platform
**Resource Tier:** heavy
## Purpose
The Platform service is the central configuration and orchestration hub for the Stella Ops suite. It serves UI environment settings (`/platform/envsettings.json`), manages service discovery URLs for all 40+ microservices, runs auto-migrations (58 SQL files on startup), provides federated telemetry aggregation, analytics, unified score computation, and acts as the service registry for the Router mesh.
## API Surface
- `platform` (via Router) — environment settings, service health aggregation, analytics, context management, telemetry federation sync, auto-migration status
## Storage
PostgreSQL schema `platform` (auto-migrated via `AddStartupMigrations`); Valkey for messaging and cache
## Background Workers
- Startup migration hosted service — applies 58 SQL migration files on boot
- Federated telemetry sync service — aggregates telemetry from downstream services

src/Policy/README.md

@@ -0,0 +1,21 @@
# Policy
**Container(s):** stellaops-policy-engine
**Slot:** 14 | **Port:** 8080 | **Consumer Group:** policy-engine
**Resource Tier:** medium
## Purpose
The Policy Engine evaluates security policies against scan results, computes risk scores (CVSS v4, EPSS, EWS), manages exceptions with approval workflows, and produces go/no-go gate decisions for release promotions. It includes merged Policy Gateway functionality (delta computation, drift gates, unknowns gates, score-based gates, tool lattice access control).
## API Surface
- `policy-engine` (via Router) — policy compilation, evaluation, simulation, batch context, risk profiles, CVSS receipts, exception management, delta/snapshot endpoints, gate evaluation (drift, unknowns, score-based), overlay projection, trust weighting, advisory AI knobs, sealed-mode, air-gap bundle import/export, governance, tool lattice, verification policies, attestation reports, registry webhooks
## Storage
PostgreSQL schema `policy` (via `Postgres:Policy`); Valkey for cache
## Background Workers
- `ExceptionLifecycleWorker` — exception state machine transitions
- `ExceptionExpiryWorker` — auto-expire stale exceptions
- `IncidentModeExpirationWorker` — incident mode TTL enforcement
- `PolicyEngineBootstrapWorker` — startup initialization
- `GateEvaluationWorker` — async gate evaluation queue processing

src/ReachGraph/README.md

@@ -0,0 +1,17 @@
# ReachGraph
**Container(s):** stellaops-reachgraph-web
**Slot:** 22 | **Port:** 8080 | **Consumer Group:** reachgraph
**Resource Tier:** light
## Purpose
The ReachGraph service provides content-addressed storage and retrieval of reachability subgraphs. It stores canonical serialized reachability data with digest-based addressing, uses PostgreSQL for persistence and Valkey for caching, and supports rate-limited access for high-throughput graph queries.
## API Surface
- `reachgraph` (via Router) — reachability subgraph storage (PUT), retrieval by digest (GET), comparison, rate-limited queries
## Storage
PostgreSQL (via `ConnectionStrings:PostgreSQL`); Valkey cache (via `ConnectionStrings:Redis`)
## Background Workers
None


@@ -0,0 +1,17 @@
# ReleaseOrchestrator
**Container(s):** stellaops-release-orchestrator
**Slot:** 48 | **Port:** 8080 | **Consumer Group:** release-orchestrator
**Resource Tier:** medium
## Purpose
The Release Orchestrator manages environment promotions (Dev -> Stage -> Prod) with policy-gated approval workflows. It provides release lifecycle management, approval collection, deployment compatibility checks, evidence linkage, audit trails, and first-signal tracking. It delegates workflow execution to the Workflow Engine service.
## API Surface
- `release-orchestrator` (via Router) — release creation/promotion, approval endpoints, deployment management, release dashboard, release control v2, evidence queries, audit trail, first-signal tracking
## Storage
PostgreSQL (via JobEngine infrastructure: `ConnectionStrings:Default`); in-memory stores for promotion decisions and deployment compatibility
## Background Workers
None (delegates async work to Workflow Engine)

src/Router/README.md

@@ -0,0 +1,21 @@
# Router
**Container(s):** stellaops-router-gateway
**Slot:** 0 | **Port:** 443 (HTTPS), 8080 (HTTP) | **Consumer Group:** router-gateway
**Resource Tier:** heavy
## Purpose
The Router Gateway is the front-door reverse proxy and API gateway for all Stella Ops services. It handles HTTPS termination, OAuth2/OIDC token validation (with DPoP support), identity envelope signing (HMAC-SHA256) for downstream service authentication, Valkey-based messaging transport for service-to-service routing, rate limiting, and serves the Angular console static files.
## API Surface
- Gateway — proxies all `/api/*` requests to downstream microservices via Valkey messaging transport
- Static files — serves Angular console from `/` (mounted from `console-dist` volume)
- Auth — `/connect/*` proxied to Authority, identity envelope injection for all routed requests
## Storage
PostgreSQL (via `ConnectionStrings:Default` for effective claims); Valkey for messaging transport, queue-based request/response routing
## Background Workers
- Router transport plugin loader — loads messaging transport DLLs at startup
- Authority integration service — syncs effective claims from Authority
- Gateway readiness evaluator — tracks downstream service availability
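Identity envelope signing with HMAC-SHA256 follows the usual canonicalize-then-MAC pattern: serialize the claims deterministically, attach a tag, and let the downstream service recompute and compare in constant time. The envelope shape and field names below are assumptions for illustration, not the gateway's actual wire format:

```python
import hashlib
import hmac
import json

def sign_envelope(claims: dict, secret: bytes) -> dict:
    # Canonicalize claims (sorted keys, no whitespace) so both sides
    # MAC exactly the same bytes, then attach the HMAC-SHA256 tag.
    payload = json.dumps(claims, sort_keys=True, separators=(",", ":"))
    tag = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": tag}

def verify_envelope(envelope: dict, secret: bytes) -> bool:
    payload = json.dumps(envelope["claims"], sort_keys=True, separators=(",", ":"))
    expected = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, envelope["sig"])
```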

src/Scanner/README.md

@@ -0,0 +1,22 @@
# Scanner
**Container(s):** stellaops-scanner-web, stellaops-scanner-worker, stellaops-cartographer
**Slot:** 8 (web + worker), 21 (cartographer) | **Port:** 8444 (web) | **Consumer Group:** scanner (web), cartographer
**Resource Tier:** heavy (web + worker), light (cartographer)
## Purpose
The Scanner module performs SBOM generation, vulnerability analysis, reachability mapping, and supply-chain security scanning of container images. The web service exposes scan APIs (triage, SBOM queries, offline-kit management, replay commands), while the worker processes scan jobs from Valkey queues through a multi-stage pipeline (analyzers, EPSS enrichment, secrets detection, crypto analysis, build provenance, PoE generation, verdict push).
## API Surface
- `scanner` (via Router) — SBOM queries, scan submissions, triage, reachability slices, offline-kit import/export, smart-diff, policy gate evaluation
- `cartographer` (via Router) — dependency graph construction and mapping
## Storage
PostgreSQL schema `scanner` (via `ScannerStorage:Postgres`); RustFS object store for artifacts (`scanner-artifacts` bucket)
## Background Workers
- `ScannerWorkerHostedService` — processes scan jobs from Valkey queue
- `EpssIngestJob` — EPSS score ingestion
- `EpssEnrichmentJob` — live EPSS enrichment of scan results
- `EpssSignalJob` — EPSS signal emission
- `FnDriftMetricsExporter` — function drift metrics

src/Signals/README.md

@@ -0,0 +1,17 @@
# Signals
**Container(s):** stellaops-signals
**Slot:** 44 | **Port:** 8080 | **Consumer Group:** signals
**Resource Tier:** light
## Purpose
The Signals service ingests and routes events from SCM webhooks (Gitea, GitHub, GitLab), container registries, and other sources into the Stella Ops platform. It parses webhook payloads, maps them to internal signal models, and forwards them to downstream services (Scanner, Scheduler) for processing. It also provides SCM integration for repository and commit queries.
## API Surface
- `signals` (via Router) — webhook ingestion endpoints, SCM repository queries, commit history, signal routing, event replay
## Storage
PostgreSQL (via `ConnectionStrings:Default`); Valkey for event routing
## Background Workers
None (event processing is synchronous on webhook receipt)

src/SmRemote/README.md

@@ -0,0 +1,17 @@
# SmRemote
**Container(s):** stellaops-smremote (via docker-compose.crypto-provider.smremote.yml overlay)
**Slot:** 31 | **Port:** TBD | **Consumer Group:** TBD
**Resource Tier:** light
## Purpose
SmRemote provides Chinese national cryptography (SM2/SM3/SM4) signing and verification as a remote service. It wraps the SM-Soft crypto provider plugin and exposes sign/verify operations over HTTP for air-gapped or HSM-backed deployments where the crypto provider cannot run in-process.
## API Surface
- `smremote` (via Router) — SM2 sign, SM2 verify, key info queries
## Storage
None (stateless crypto service)
## Background Workers
None


@@ -21,21 +21,29 @@ Before working on this module, read:
### Directory Ownership
- **WebService (unified)**: `src/Timeline/StellaOps.Timeline.WebService/`
- **Core Library (HLC)**: `src/Timeline/__Libraries/StellaOps.Timeline.Core/`
- **Indexer Core Library**: `src/Timeline/__Libraries/StellaOps.TimelineIndexer.Core/`
- **Indexer Infrastructure Library**: `src/Timeline/__Libraries/StellaOps.TimelineIndexer.Infrastructure/`
- **Tests**: `src/Timeline/__Tests/`
Note: `StellaOps.TimelineIndexer.WebService/` and `StellaOps.TimelineIndexer.Worker/` are dormant (their logic has been merged into `StellaOps.Timeline.WebService`).
### Dependencies
- Depends on: `StellaOps.Eventing`, `StellaOps.HybridLogicalClock`, `StellaOps.Replay.Core`, `StellaOps.TimelineIndexer.Core`, `StellaOps.TimelineIndexer.Infrastructure`
- Consumed by: UI Console (Timeline view), CLI (replay commands), ExportCenter, Platform.Database
### API Conventions
1. Indexer endpoints under `/api/v1/timeline` (tenant-scoped event queries, evidence)
2. HLC endpoints under `/api/v1/timeline/hlc` (correlation queries, critical path, replay, export)
3. Unified audit endpoints under `/api/v1/audit`
4. HLC timestamps returned as sortable strings
5. Pagination via `limit`, `offset`, and `nextCursor`
6. Export produces NDJSON by default
### Testing Requirements

src/Timeline/README.md

@@ -0,0 +1,24 @@
# Timeline
**Container(s):** stellaops-timeline-web
**Slot:** 24 (timeline) | **Port:** 8080 | **Consumer Group:** timeline
**Resource Tier:** light
**Network aliases:** `timeline.stella-ops.local`, `timelineindexer.stella-ops.local` (backwards compat)
## Purpose
The Timeline module provides a unified, HLC-ordered event timeline across the entire platform. It aggregates audit events from Authority, JobEngine, Policy, Evidence Locker, and Notify via HTTP polling and direct ingestion. It also serves timeline indexer query and evidence linkage endpoints (previously in separate timeline-indexer-web and timeline-indexer-worker containers, now merged).
## API Surface
- `/api/v1/audit/*` — unified audit aggregation, anomaly detection, export
- `/api/v1/timeline/*` — timeline indexer event CRUD, indexed queries, evidence linkage
- `/api/v1/timeline/hlc/*` — HLC-ordered event queries, replay, export
- `/timeline/*` — bare-prefix indexer endpoints (direct access)
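The timeline endpoints share the platform's `limit`/`offset`/`nextCursor` pagination convention. A minimal sketch of how a client pages through them until the server stops returning a cursor; `fetch_page` below is a stand-in for the real HTTP call, and its response shape is an assumption, not the service's actual client API:

```python
# Sketch of cursor-based pagination over /api/v1/timeline responses.
# fetch_page is a stub standing in for the HTTP GET; the "events" /
# "nextCursor" field names are assumptions for illustration.

def fetch_page(cursor=None, limit=2):
    """Stub for GET /api/v1/timeline?limit=...&cursor=... responses."""
    pages = {
        None: {"events": ["e1", "e2"], "nextCursor": "c1"},
        "c1": {"events": ["e3"], "nextCursor": None},  # final page
    }
    return pages[cursor]

def iter_events():
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page["events"]
        cursor = page["nextCursor"]
        if cursor is None:  # absent cursor signals the last page
            break

print(list(iter_events()))  # -> ['e1', 'e2', 'e3']
```

The same loop applies to the `/api/v1/timeline/hlc/*` queries; because HLC timestamps are sortable strings, events concatenated across pages remain globally ordered.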
## Storage
PostgreSQL schema `timeline` (via `Postgres:Timeline:ConnectionString`); Valkey for eventing
## Background Workers
- `TimelineIngestionWorker` (hosted service) — background event ingestion from NATS/Redis (transports disabled by default)
## Merge History
- Timeline Indexer (Slot 23) was merged into Timeline (Slot 24). The `timelineindexer.stella-ops.local` network alias is preserved on the timeline-web container for backwards compatibility.

`src/Unknowns/README.md` (new file, 17 lines)
# Unknowns
**Container(s):** stellaops-unknowns-web
**Slot:** 46 | **Port:** 8080 | **Consumer Group:** unknowns
**Resource Tier:** light
## Purpose
The Unknowns service tracks unidentified or unresolved components discovered during scanning — packages without known provenance, binaries without debug symbols, or dependencies that cannot be matched to any advisory source. It provides provenance hints and investigation tools to help resolve unknown components.
## API Surface
- `unknowns` (via Router) — unknown component queries, provenance hint submission, investigation status, health checks
## Storage
PostgreSQL (via `ConnectionStrings:Default` / `ConnectionStrings:UnknownsDb`)
## Background Workers
None

`src/VexHub/README.md` (new file, 17 lines)
# VexHub
**Container(s):** stellaops-vexhub-web
**Slot:** 11 | **Port:** 8080 | **Consumer Group:** vexhub
**Resource Tier:** light
## Purpose
VexHub is the centralized VEX (Vulnerability Exploitability eXchange) document repository for the platform. It stores, indexes, and serves VEX statements from multiple sources with API-key-based authentication, supporting document search, filtering, and administrative operations.
## API Surface
- `vexhub` (via Router) — VEX document CRUD, search, filtering, provider management, admin operations
## Storage
PostgreSQL (via `Postgres:ConnectionString`, schema `vexhub`); Valkey for cache
## Background Workers
None

`src/VexLens/README.md` (new file, 17 lines)
# VexLens
**Container(s):** stellaops-vexlens-web
**Slot:** 12 | **Port:** 8080 | **Consumer Group:** vexlens
**Resource Tier:** light
## Purpose
VexLens provides VEX consensus computation and trust-weighted analysis across multiple VEX statement providers. It evaluates conflicting VEX statements using a trust weight engine, produces consensus projections, manages an issuer directory for trust scoring, and exposes verification endpoints for statement integrity checks.
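An illustrative sketch of trust-weighted consensus: each provider's statement carries an issuer trust score, scores are summed per status, and the heaviest status wins. The statuses and weights here are made up for illustration; the real trust weight engine and its scoring inputs are more involved:

```python
# Toy trust-weighted consensus over conflicting VEX statements.
# (status, trust) pairs and the winner-takes-all rule are illustrative
# assumptions, not VexLens's actual algorithm.
from collections import defaultdict

def consensus(statements):
    totals = defaultdict(float)
    for status, trust in statements:
        totals[status] += trust  # accumulate trust weight per status
    return max(totals.items(), key=lambda kv: kv[1])

statements = [
    ("not_affected", 0.9),  # e.g. vendor statement, high trust
    ("affected", 0.4),      # e.g. third-party feed
    ("not_affected", 0.3),  # e.g. community source
]
print(consensus(statements)[0])  # -> not_affected
```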
## API Surface
- `vexlens` (via Router) — consensus queries, trust weight configuration, issuer directory, statement verification, projection views
## Storage
PostgreSQL (via `ConnectionStrings:Default`); in-memory consensus projection store; Valkey for cache
## Background Workers
None

`src/Workflow/README.md` (new file, 19 lines)
# Workflow
**Container(s):** stellaops-workflow
**Slot:** (compose inline) | **Port:** 8080 | **Consumer Group:** workflow
**Resource Tier:** medium
## Purpose
The Workflow Engine provides a general-purpose workflow runtime for orchestrating multi-step release and operational processes. It supports workflow definition deployment, runtime execution, signal-driven task transitions, dead-letter handling, and authorization-gated task progression. It is the backbone for release promotion workflows.
## API Surface
- `workflow` (via Router) — workflow definition CRUD, instance lifecycle (start/signal/cancel), task queries, dead-letter management, projection queries (`/api/workflow` prefix)
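A toy sketch of the signal-driven transition model: an instance advances only when a signal matches a transition valid in its current state, and unmatched signals go to the dead letter list. State names, signals, and the dead-letter rule are illustrative assumptions, not the engine's actual data model:

```python
# Minimal signal-driven state machine illustrating the workflow pattern:
# signals drive task transitions; signals with no handler in the current
# state are dead-lettered instead of being silently dropped.

class Instance:
    def __init__(self, transitions):
        self.state = "start"
        self.transitions = transitions  # (state, signal) -> next state
        self.dead_letters = []

    def signal(self, name):
        key = (self.state, name)
        if key in self.transitions:
            self.state = self.transitions[key]
        else:
            self.dead_letters.append(name)  # no valid transition

promotion = Instance({
    ("start", "approve"): "promoting",
    ("promoting", "done"): "released",
})
promotion.signal("done")     # dead-lettered: not valid from 'start'
promotion.signal("approve")
promotion.signal("done")
print(promotion.state, promotion.dead_letters)  # -> released ['done']
```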
## Storage
PostgreSQL schema `workflow` (via `ConnectionStrings:WorkflowPostgres`, `WorkflowBackend:Postgres`)
## Background Workers
- Signal pump hosted service — processes workflow signals from the event bus
- Retention hosted service — cleans up completed workflow instances
- `WorkflowDefinitionBootstrap` — deploys built-in workflow definitions on startup

`src/Zastava/README.md` (new file, 20 lines)
# Zastava
**Container(s):** stellaops-zastava-webhook (compose slot 43); Agent and Observer are standalone daemons
**Slot:** 43 (webhook) | **Port:** 8080 (webhook) | **Consumer Group:** N/A (no Router integration)
**Resource Tier:** light (webhook)
## Purpose
Zastava provides Kubernetes-native admission control and runtime security for container workloads. The Webhook service acts as a validating/mutating admission webhook that intercepts pod creation and queries the Scanner for policy compliance before allowing deployment. The Agent runs as a DaemonSet collecting runtime telemetry. The Observer watches cluster events for drift detection.
## API Surface
- `zastava-webhook` — Kubernetes admission review endpoints (`/validate`, `/mutate`), health checks (`/healthz/ready`, `/healthz/live`)
- Agent — no HTTP surface (background daemon, logs to stdout)
- Observer — no HTTP surface (background daemon, watches Kubernetes events)
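The webhook's `/validate` endpoint speaks the standard Kubernetes `AdmissionReview` v1 protocol: it receives a review containing the pod object and must respond with the same `uid` plus an `allowed` verdict. The sketch below shows that round-trip shape; the deny rule (privileged containers) is a placeholder policy, since the real webhook delegates the decision to Scanner:

```python
# Sketch of an AdmissionReview v1 round-trip (admission.k8s.io/v1 is the
# real Kubernetes API; the privileged-container check is a stand-in for
# the Scanner-backed policy decision).

def review(admission_review: dict) -> dict:
    request = admission_review["request"]
    containers = request["object"]["spec"]["containers"]
    privileged = any(
        c.get("securityContext", {}).get("privileged", False) for c in containers
    )
    response = {"uid": request["uid"], "allowed": not privileged}  # uid must echo the request
    if privileged:
        response["status"] = {"message": "privileged containers are denied"}
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }

incoming = {
    "apiVersion": "admission.k8s.io/v1",
    "kind": "AdmissionReview",
    "request": {
        "uid": "705ab4f5",
        "object": {"spec": {"containers": [
            {"name": "app", "securityContext": {"privileged": True}},
        ]}},
    },
}
print(review(incoming)["response"]["allowed"])  # -> False
```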
## Storage
None (stateless; delegates policy checks to Scanner/Authority)
## Background Workers
- Agent: `ZastavaAgent` hosted service — runtime container telemetry collection
- Observer: `ZastavaObserver` hosted service — Kubernetes event watcher for drift detection